Scatter Search metaheuristic for Post-Enrolment Course Timetabling Problems: A Review <s> Introduction <s> A very primitive version of Gotlieb’s timetable problem is shown to be NP-complete, and therefore all the common timetable problems are NP-complete. A polynomial time algorithm, in case all teachers are binary, is shown. The theorem that a meeting function always exists if all teachers and classes have no time constraints is proved. The multicommodity integral flow problem is shown to be NP-complete even if the number of commodities is two. This is true both in the directed and undirected cases. <s> BIB001 </s> Scatter Search metaheuristic for Post-Enrolment Course Timetabling Problems: A Review <s> Introduction <s> Hybrid metaheuristics have received considerable interest in recent years in the field of combinatorial optimization. A wide variety of hybrid approaches have been proposed in the literature. In this paper, a taxonomy of hybrid metaheuristics is presented in an attempt to provide a common terminology and classification mechanisms. The taxonomy, while presented in terms of metaheuristics, is also applicable to most types of heuristics and exact optimization algorithms. As an illustration of the usefulness of the taxonomy, an annotated bibliography is given which classifies a large number of hybrid approaches according to the taxonomy. <s> BIB002 </s> Scatter Search metaheuristic for Post-Enrolment Course Timetabling Problems: A Review <s> Introduction <s> Combinations of population-based approaches with local search have provided very good results for a variety of scheduling problems. This paper describes the development of a population-based algorithm called the Electromagnetism-like mechanism with force decay rate great deluge algorithm for university course timetabling. This problem is concerned with the assignment of lectures to a specific number of timeslots and rooms. For a solution to be feasible, a number of hard constraints must be satisfied. A penalty value, which represents the degree to which various soft constraints are satisfied, is measured to reflect the quality of the solution. This approach is tested over established datasets and compared against state-of-the-art techniques from the literature. The results obtained confirm that the approach is able to produce solutions to the course timetabling problem which demonstrate some of the lowest penalty values in the literature on these benchmark problems. <s> BIB003 </s> Scatter Search metaheuristic for Post-Enrolment Course Timetabling Problems: A Review <s> Introduction <s> A unified view of metaheuristics: this book provides a complete background on metaheuristics and shows readers how to design and implement efficient algorithms to solve complex optimization problems across a diverse range of applications, from networking and bioinformatics to engineering design, routing, and scheduling. It presents the main design questions for all families of metaheuristics and clearly illustrates how to implement the algorithms under a software framework to reuse both the design and code. Throughout the book, the key search components of metaheuristics are considered as a toolbox for: designing efficient metaheuristics (e.g.
local search, tabu search, simulated annealing, evolutionary algorithms, particle swarm optimization, scatter search, ant colonies, bee colonies, artificial immune systems) for optimization problems; designing efficient metaheuristics for multi-objective optimization problems; designing hybrid, parallel, and distributed metaheuristics; and implementing metaheuristics on sequential and parallel machines. Using many case studies and treating design and implementation independently, this book gives readers the skills necessary to solve large-scale optimization problems quickly and efficiently. It is a valuable reference for practicing engineers and researchers from diverse areas dealing with optimization or machine learning, and graduate students in computer science, operations research, control, engineering, business and management, and applied mathematics. <s> BIB004 </s> Scatter Search metaheuristic for Post-Enrolment Course Timetabling Problems: A Review <s> Introduction <s> The post enrolment course timetabling problem (PECTP) is one type of university course timetabling problem, in which a set of events has to be scheduled in time slots and located in suitable rooms according to the student enrolment data. The PECTP is an NP-hard combinatorial optimisation problem and hence is very difficult to solve to optimality. This paper proposes a hybrid approach to solve the PECTP in two phases. In the first phase, a guided search genetic algorithm is applied to solve the PECTP. This guided search genetic algorithm integrates a guided search strategy and some local search techniques, where the guided search strategy uses a data structure that stores useful information extracted from previous good individuals to guide the generation of offspring into the population, and the local search techniques are used to improve the quality of individuals. In the second phase, a tabu search heuristic is further used on the best solution obtained by the first phase to improve the optimality of the solution if possible. The proposed hybrid approach is tested on a set of benchmark PECTPs taken from the international timetabling competition in comparison with a set of state-of-the-art methods from the literature. The experimental results show that the proposed hybrid approach is able to produce promising results for the test PECTPs. <s> BIB005 </s> Scatter Search metaheuristic for Post-Enrolment Course Timetabling Problems: A Review <s> Introduction <s> In this work we present a new approach to tackle the problem of Post Enrolment Course Timetabling as specified for the International Timetabling Competition 2007 (ITC2007), competition track 2. The heuristic procedure is based on Ant Colony Optimization (ACO) where artificial ants successively construct solutions based on pheromones (stigmergy) and local information. The key feature of our algorithm is the use of two distinct but simplified pheromone matrices in order to improve convergence but still provide enough flexibility for effectively guiding the solution construction process. We show that by parallelizing the algorithm we can improve the solution quality significantly. We applied our algorithm to the instances used for the ITC2007. The results document that our approach is among the leading algorithms for this problem; in all cases the optimal solution could be found. Furthermore we discuss the characteristics of the instances where the algorithm performs especially well.
<s> BIB006 </s> Scatter Search metaheuristic for Post-Enrolment Course Timetabling Problems: A Review <s> Introduction <s> In this work, we propose a variant of the honey-bee mating optimization algorithm for solving educational timetabling problems. The honey-bee algorithm is a nature inspired algorithm which simulates the process of real honey-bees mating. The performance of the proposed algorithm is tested over two benchmark problems: exam (Carter’s un-capacitated datasets) and course (Socha datasets) timetabling problems. We chose these two datasets as they have been widely studied in the literature and we would also like to evaluate our algorithm across two different, yet related, domains. Results demonstrate that the performance of the honey-bee mating optimization algorithm is comparable with the results of other approaches in the scientific literature. Indeed, the proposed approach obtains best results compared with other approaches on some instances, indicating that the honey-bee mating optimization algorithm is a promising approach in solving educational timetabling problems. <s> BIB007
University course timetabling problems (UCTP) can be defined as the process of assigning a set of courses to a limited number of timeslots and rooms. The task is to generate a feasible timetable (one that satisfies all hard constraints) while minimising violations of soft constraints . The UCTP is NP-hard BIB001 and is therefore difficult to solve to optimality. Various metaheuristics have been applied by many researchers to solve it . [4] classified metaheuristics into two classes: population-based and local search metaheuristics. Common population-based methods applied to the problem include ant colony optimisation BIB006 , honey-bee mating optimisation BIB007 , the electromagnetism-like mechanism BIB003 and guided search genetic algorithms BIB005 . Population-based metaheuristics are intensively investigated because of their capability for search-space exploration and because they can easily be combined with local search methods to enhance the exploitation of solutions BIB002 . Common local search methods applied to the problem include tabu search and iterated local search , which are utilised for their capability of solution-space exploitation. According to , the strength of population-based methods lies in their capability to recombine existing solutions to obtain new ones. In population-based algorithms such as Scatter Search (SS), a structured recombination of elite solutions is performed explicitly using one or more recombination operators, such as crossover and mutation. This involves moving or swapping assignments within a solution, which represents an exchange of information about elite solutions between generations and enables the search to perform structured solution combinations. The term explicit means that a solution is represented directly by its actual assignments (e.g. course1-timeslot44, course1-room2) and their fitness values. Recombination over such explicit representations therefore tends to explore the search space more effectively. Many studies have suggested hybridising population-based metaheuristics with local search metaheuristics BIB002 BIB004 . In addition, the use of an explicit memory (e.g. a reference set), the control of search diversity, and a dynamic manipulation of the population size are also recommended for better performance of hybrid metaheuristics BIB002 . Good performance is achieved by maintaining a balance between diversification and intensification of the search. The SS was therefore chosen for this study, owing to its capability to maintain such a balance. This study focuses on reviewing the performance of SS on the post-enrolment course timetabling problem. The work mainly aims to illustrate the impact of the strategies within SS in providing a balance between diversification and intensification of the search. Finally, the study summarises the performance, consistency, strengths and weaknesses of SS in solving post-enrolment course timetabling problems.
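To make this explicit representation concrete, the following is a minimal Python sketch; all names, the nine-slots-per-day assumption, and the example soft constraint are hypothetical illustrations, not taken from any of the reviewed papers. A timetable maps each event to a (timeslot, room) pair, and its fitness is the total soft-constraint penalty (lower is better); hard-constraint handling is assumed to happen elsewhere.

```python
from typing import Callable, Dict, List, Tuple

Assignment = Tuple[int, int]        # (timeslot, room)
Timetable = Dict[str, Assignment]   # event id -> (timeslot, room)

def fitness(timetable: Timetable,
            soft_constraints: List[Callable[[Timetable], int]]) -> int:
    """Total soft-constraint penalty of an explicitly represented timetable."""
    return sum(check(timetable) for check in soft_constraints)

# Hypothetical soft constraint: penalise events placed in the last
# timeslot of a day, assuming 9 timeslots per day (an illustrative choice).
def last_slot_penalty(timetable: Timetable) -> int:
    return sum(1 for slot, _room in timetable.values() if slot % 9 == 8)

solution: Timetable = {"course1": (44, 2),   # course1-timeslot44, course1-room2
                       "course2": (3, 0)}
print(fitness(solution, [last_slot_penalty]))  # prints 1: slot 44 ends a day
```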
Scatter Search metaheuristic for Post-Enrolment Course Timetabling Problems: A Review <s> Scatter search <s> In spite of the widespread importance of nonlinear and parametric optimization, many standard solution methods allow a large gap between local optimality and global optimality, inviting consideration of metaheuristics capable of reducing such gaps. We identify ways to apply the tabu search metaheuristic to nonlinear optimization problems from both continuous and discrete settings. The step beyond strictly combinatorial settings enlarges the domain of problems to which tabu search is typically applied. We show how tabu search can be coupled with directional search and scatter search approaches to solve such problems. In addition, we generalize standard weighted combinations (as employed in scatter search) to include structured weighted combinations capable of satisfying specified feasibility conditions (e.g., mapping weighted combinations of scheduling, partitioning and covering solutions into new solutions of the same type). The outcome suggests ways to exploit potential links between scatter search and genetic algorithms, and also provides a basis for integrating genetic algorithms with tabu search. <s> BIB001 </s> Scatter Search metaheuristic for Post-Enrolment Course Timetabling Problems: A Review <s> Scatter search <s> Scatter search (SS) is a population-based method that has recently been shown to yield promising outcomes for solving combinatorial and nonlinear optimization problems. Based on formulations originally proposed in the 1960s for combining decision rules and problem constraints, SS uses strategies for combining solution vectors that have proved effective in a variety of problem settings. Path relinking (PR) has been suggested as an approach to integrate intensification and diversification strategies in a search scheme. The approach may be viewed as an extreme (highly focused) instance of a strategy that seeks to incorporate attributes of high quality solutions, by creating inducements to favor these attributes in the moves selected. The goal of this paper is to examine SS and PR strategies that provide useful alternatives to more established search methods. We describe the features of SS and PR that set them apart from other evolutionary approaches, and that offer opportunities for creating increasingly more versatile and effective methods in the future. Specific applications are summarized to provide a clearer understanding of settings where the methods are being used. <s> BIB002 </s> Scatter Search metaheuristic for Post-Enrolment Course Timetabling Problems: A Review <s> Scatter search <s> Scatter Search and its generalized form Path Relinking, are evolutionary methods that have been successfully applied to hard optimization problems. Unlike genetic algorithms, they operate on a small set of solutions and employ diversification strategies of the form proposed in Tabu Search, which give precedence to strategic learning based on adaptive memory, with limited recourse to randomization. The fundamental concepts and principles were first proposed in the 1970s as an extension of formulations, dating back to the 1960s, for combining decision rules and problem constraints. (The constraint combination approaches, known as surrogate constraint methods, now independently provide an important class of relaxation strategies for global optimization.) 
The Scatter Search framework is flexible, allowing the development of alternative implementations with varying degrees of sophistication. Path Relinking, on the other hand, was first proposed in the context of the Tabu Search metaheuristic, but it has also been applied with a variety of other methods. This chapter's goal is to provide a grounding in the essential ideas of Scatter Search and Path Relinking, together with pseudo-codes of simple versions of these methods, that will enable readers to create successful applications of their own. <s> BIB003 </s> Scatter Search metaheuristic for Post-Enrolment Course Timetabling Problems: A Review <s> Scatter search <s> Scatter search is an evolutionary method that has been successfully applied to hard optimization problems. The fundamental concepts and principles of the method were first proposed in the 1970s, based on formulations dating back to the 1960s for combining decision rules and problem constraints. In contrast to other evolutionary methods like genetic algorithms, scatter search is founded on the premise that systematic designs and methods for creating new solutions afford significant benefits beyond those derived from recourse to randomization. It uses strategies for search diversification and intensification that have proved effective in a variety of optimization problems. This paper provides the main principles and ideas of scatter search and its generalized form path relinking. We first describe a basic design to give the reader the tools to create relatively simple implementations. More advanced designs derive from the fact that scatter search and path relinking are also intimately related to the tabu search (TS) metaheuristic, and gain additional advantage by making use of TS adaptive memory and associated memory-exploiting mechanisms capable of being tailored to particular contexts. These and other advanced processes described in the paper facilitate the creation of sophisticated implementations for hard problems that often arise in practical settings. Due to their flexibility and proven effectiveness, scatter search and path relinking can be successfully adapted to tackle optimization problems spanning a wide range of applications and a diverse collection of structures, as shown in the papers of this volume. <s> BIB004 </s> Scatter Search metaheuristic for Post-Enrolment Course Timetabling Problems: A Review <s> Scatter search <s> In this paper, we present a scatter search algorithm for the well-known nurse scheduling problem (NSP). This problem aims at the construction of roster schedules for nurses taking both hard and soft constraints into account. The objective is to minimize the total preference cost of the nurses and the total penalty cost from violations of the soft constraints. The problem is known to be NP-hard. The contribution of this paper is threefold. First, we are, to the best of our knowledge, the first to present a scatter search algorithm for the NSP. Second, we investigate two different types of solution combination methods in the scatter search framework, based on four different cost elements. Last, we present detailed computational experiments on a benchmark dataset presented recently, and solve these problem instances under different assumptions. We show that our procedure performs consistently well under many different circumstances, and hence, can be considered as robust against case-specific constraints.
<s> BIB005 </s> Scatter Search metaheuristic for Post-Enrolment Course Timetabling Problems: A Review <s> Scatter search <s> In this chapter we present a metaheuristic procedure constructed for the special case of the Vehicle Routing Problem in which the demands of clients can be split, i.e., any client can be serviced by more than one vehicle. The proposed algorithm, based on the scatter search methodology, produces a feasible solution using the minimum number of vehicles. The quality of the obtained results is comparable to the best results known up to date on a set of instances previously published in the literature. <s> BIB006 </s> Scatter Search metaheuristic for Post-Enrolment Course Timetabling Problems: A Review <s> Scatter search <s> A hyper-heuristic can be defined as a “heuristic to choose heuristics” that intends to increase the level of generality at which optimization methodologies can operate. In this work, we propose a scatter search based hyper-heuristic (SS-HH) approach for solving examination timetabling problems. The scatter search operates at a high level of abstraction which intelligently evolves a sequence of low level heuristics to use for a given problem. Each low level heuristic represents a single neighborhood structure. We test our proposed approach on the un-capacitated Carter benchmark datasets. Experimental results show that the proposed SS-HH is capable of producing good quality solutions which are comparable to those of other hyper-heuristic approaches (with regard to the Carter benchmark datasets). <s> BIB007 </s> Scatter Search metaheuristic for Post-Enrolment Course Timetabling Problems: A Review <s> Scatter search <s> In this chapter, a scatter search (SS) method is proposed to solve the multiobjective permutation fuzzy flow shop scheduling problem. The objectives are minimizing the average tardiness and the number of tardy jobs. The developed scatter search method is tested on real-world data collected at an engine piston manufacturing company. Using the proposed SS algorithm, the best set of parameters is used to obtain the optimal or near optimal solutions of the multiobjective fuzzy flow shop scheduling problem in the shortest time. These parameters are determined by a full factorial design of experiments (DOE). The feasibility and effectiveness of the proposed scatter search method are demonstrated by comparing it with the hybrid genetic algorithm (HGA). <s> BIB008 </s> Scatter Search metaheuristic for Post-Enrolment Course Timetabling Problems: A Review <s> Scatter search <s> Scatter search is an evolutionary metaheuristic that explores solution spaces by evolving a set of reference points, operating on a small set of solutions while making only limited use of randomization. We give a comprehensive description of the elements and methods that make up its template, including the most recent elements incorporated in successful applications in both global and combinatorial optimization. Path-relinking is an intensification strategy to explore trajectories connecting elite solutions obtained by heuristic methods such as scatter search, tabu search, and GRASP. We describe its mechanics, implementation issues, randomization, the use of pools of high-quality solutions to hybridize path-relinking with other heuristic methods, and evolutionary path-relinking. We also describe the hybridization of path-relinking with genetic algorithms to implement a progressive crossover operator. Some successful applications of scatter search and of path-relinking are also reported. <s> BIB009
SS is a population-based metaheuristic proposed by . It constructs solutions by combining elite solutions so as to exploit problem-specific knowledge (e.g. good components of an elite solution), and it has become one of the popular methods for solving hard combinatorial optimisation problems. SS operates on a population of solutions and employs procedures that combine elite solutions to create new ones. There are two main differences between SS and other classical population-based metaheuristics (such as Genetic Algorithms (GA)) BIB002 [18]: i. The size of the evolving (reference) set of elite solutions: the evolving set (RefSet) in SS has a relatively small or moderate size (typical sizes are 10, 15, or between 20 and 40, according to ). The RefSet contains a diverse collection of elite solutions that are systematically selected. In other population-based approaches (such as GA and memetic algorithms), the population size is typically larger (e.g. ≥ 100), and the solution pool contains a diverse collection of randomly selected solutions; a GA, for example, typically has 100 solutions . ii. The way the method combines existing solutions to provide new ones: SS combines good solutions to construct better-quality solutions by exploiting the search experience (e.g. good-quality and diverse solutions stored in a memory). In GA, by contrast, the population is evolved by randomly or probabilistically selecting parents from the solution pool (which does not contain only elite solutions). Furthermore, in GA the updating process of the search relies on randomised selections that choose solutions according to their relative fitness value. In SS, the updating process relies on the use of memory to maintain a good balance between diversification and intensification of the search. This stems from the exploitation of an adaptive memory, built on the foundations that link SS with tabu search (TS) BIB009 . The basic pseudo-code of SS is illustrated in Algorithm 1. SS consists of five component processes, as described by BIB002 [19]: 1) Diversification Generation Method. It is used to generate a collection of diverse initial solutions as an input. 2) Improvement Method. It is used to enhance the quality of a trial solution using a local search that explores the neighbours of the solution. 3) Reference Set Update Method. It is used to build and maintain a reference set consisting of elite solutions (both good-quality and diverse), organised to provide structured solution combinations. The size of the set is usually not more than 20 solutions (e.g. 10 good-quality and 10 diverse solutions); the reference set thus maintains the diversity of the search. 4) Subset Generation Method. It is used to select solutions from the reference set and produce subsets of its solutions as a basis for creating combined solutions. The most common subset generation method is to generate all pairs of reference solutions, namely Type-I selection (i.e. all subsets of size 2). 5) Solution Combination Method. It is used to generate one or more solutions by combining good parts of a given subset of solutions produced by the Subset Generation Method. The combination method is analogous to the crossover operator in GA, but it must be capable of combining two or more elite solutions (good-quality and diverse). Step 1: Start with P = ∅. Use the Diversification Generation Method to construct a solution x. Step 2: Apply the Improvement Method to x, and let x be the resulting improved solution.
If x P then add x to P (i.e., P = P  x), otherwise, discard x. Repeat this step until |P| = PSize. Step3: Use the Reference Set Update Method to build RefSet (divided into two sets) with the "best quality" b1 = {x 1 , …, x b } solutions and; the "most diverse" BIB004 Interestingly, SS is designed to operate on a set of solutions, called reference solutions (RefSet), which constitute good solutions obtained from previous solution efforts BIB001 . The approach systematically generates combinations of the reference solutions to create new ones. Each of which is mapped into an associated feasible solution. An adaptive memory is exploited to avoid the search from reinvestigating solutions that have already been evaluated. This is achieved by preventing the duplication of reference solutions in the memory, which contains a diverse collection of elite solutions. argued that the use of memory in SS encourages search diversification and intensification. These memory components may help the search to escape from the local optima by generating new solutions from those in the memory. In some cases, the search finds a globally optimal solution by systematically combining elite solutions. Both SS and GA incorporate the same idea on how to generate new solution from some form of combination of existing solutions. On the other hand, several contrasts between both methods can be noted. The early GA approaches choose parents randomly to produce offspring, and then introducing randomization to determine which components of the parents should be combined. In contrast, the SS approach performs a systematic selection of parents and the way of combining them which does not rely too much on randomization. In addition, the SS is also designed to incorporate strategic probabilistic biases by taking into account the evaluation and history of the solutions. This is done by exploring the possible pairs of combining solutions from the memory. SS focuses on maintaining a balance between diversification and intensification of the search. For example, the approach includes the generation of new solutions that are not infeasible combinations of the original solutions. The new solutions may then contain information that is not contained in the original reference solutions. SS is an information-driven approach which exploits knowledge derived from the search space . According to BIB002 , the basic design (shown in figure 1 ) can be expanded and improved in different ways. The SS methodology is very flexible, since each of its elements can be implemented in a variety of ways and degrees of sophistication. The following criteria summarise the strength of the SS metaheuristic BIB003 : i. Useful information regarding the structure or location of optimal solutions is usually contained in a diverse collection of elite solutions. ii. Mechanisms that are capable of constructing solution combinations must be provided when solutions are combined as a strategy for exploiting information. Similarly, heuristic processes mapping that combined solutions into new solutions must also be incorporated. These combination mechanisms are used to incorporate diversity and quality. iii. Multiple solutions must be considered in creating combinations and enhancing opportunity in order to exploit information contained in the union of elite solutions. 
At present, SS has been applied successfully to a variety of combinatorial optimisation problems, such as nurse rostering BIB005 , vehicle routing BIB006 , examination timetabling BIB007 , course timetabling , and flow shop scheduling BIB008 problems. This success might be due to the following factors stated by BIB002 : i. It provides a deterministic selection of a reference set of elite solutions in terms of quality and diversity, which supports a systematic neighbourhood search in Euclidean or Hamming spaces. ii. It performs structured solution combinations using diversification strategies that do not merely rely on randomisation. iii. The search evolves through strategic updating, exploiting an adaptive memory to preserve quality and diversity. iv. It provides useful information about the structure or location of the global solution, contained in a sufficiently diverse collection of elite solutions. The strategies and mechanisms of SS have been comprehensively investigated, and its advances and applications have been reviewed in many studies, such as BIB002 [25] BIB005 .
A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Single and Multi-objective optimization <s> NP-complete problems form an extensive equivalence class of combinatorial problems for which no nonenumerative algorithms are known. Our first result shows that determining a shortest-length schedule in an m-machine flowshop is NP-complete for m ≥ 3. For m = 2, there is an efficient algorithm for finding such schedules. The second result shows that determining a minimum mean-flow-time schedule in an m-machine flowshop is NP-complete for every m ≥ 2. Finally we show that the shortest-length schedule problem for an m-machine jobshop is NP-complete for every m ≥ 2. Our results are strong in that they hold whether the problem size is measured by number of tasks, number of bits required to express the task lengths, or by the sum of the task lengths. <s> BIB001 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Single and Multi-objective optimization <s> We show that finding minimum finish time preemptive and non-preemptive schedules for flow shops and job shops is NP-complete. Bounds on the performance of various heuristics to generate reasonably good schedules are also obtained. <s> BIB002 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Single and Multi-objective optimization <s> The theory of deterministic sequencing and scheduling has expanded rapidly during the past years. In this paper we survey the state of the art with respect to optimization and approximation algorithms and interpret these in terms of computational complexity theory. Special cases considered are single machine scheduling, identical, uniform and unrelated parallel machine scheduling, and open shop, flow shop and job shop scheduling. We indicate some problems for future research and include a selective bibliography. <s> BIB003 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Single and Multi-objective optimization <s> The problem of minimizing the total tardiness for a set of independent jobs on one machine is considered. Lawler has given a pseudo-polynomial-time algorithm to solve this problem. In spite of extensive research efforts for more than a decade, the question of whether it can be solved in polynomial time or it is NP-hard in the ordinary sense remained open. In this paper the problem is shown to be NP-hard in the ordinary sense. <s> BIB004 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Single and Multi-objective optimization <s> This paper is a complete survey of flowshop-scheduling problems and contributions from early works of Johnson of 1954 to recent approaches of metaheuristics of 2004. It mainly considers a flowshop problem with a makespan criterion and it surveys some exact methods (for small size problems), constructive heuristics and developed improving metaheuristic and evolutionary approaches as well as some well-known properties and rules for this problem. Each part has a brief literature review of the contributions and a glimpse of that approach before discussing the implementation for a flowshop problem. Moreover, in the first section, a complete literature review of flowshop-related scheduling problems with different assumptions as well as contributions in solving these other aspects is considered. 
This paper can be seen as a reference to past contributions (particularly in $n/m/P/C_{\max}$ or, equivalently, $F/prmu/C_{\max}$) for future research needs of improving and developing better approaches to flowshop-related schedulin... <s> BIB005 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Single and Multi-objective optimization <s> Makespan minimization in permutation flow-shop scheduling is an operations research topic that has been intensively addressed during the last 40 years. Since the problem is known to be NP-hard for more than two machines, most of the research effort has been devoted to the development of heuristic procedures in order to provide good approximate solutions to the problem. However, little attention has been devoted to establishing a common framework for these heuristics so that they can be effectively combined or extended. In this paper, we review and classify the main contributions regarding this topic and discuss future research issues. <s> BIB006 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Single and Multi-objective optimization <s> In this work we present a review and comparative evaluation of heuristics and metaheuristics for the well-known permutation flowshop problem with the makespan criterion. A number of reviews and evaluations have already been proposed. However, the evaluations do not include the latest heuristics available and there is still no comparison of metaheuristics. Furthermore, since no common benchmarks and computing platforms are used, the results cannot be generalised. We propose a comparison of 25 methods, ranging from the classical Johnson's algorithm or dispatching rules to the most recent metaheuristics, including tabu search, simulated annealing, genetic algorithms, iterated local search and hybrid techniques. For the evaluation we use the standard test of Taillard [Eur. J. Operation. Res. 64 (1993) 278] composed of 120 instances of different sizes. In the evaluations we use the experimental design approach to obtain valid conclusions on the effectiveness and efficiency of the different methods tested. <s> BIB007 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Single and Multi-objective optimization <s> This paper considers the n-job, m-machine permutation flowshop with the objective of minimizing the mean flowtime. Initial sequences that are structured to enhance the performance of local search techniques are constructed from job rankings delivered by a trained neural network. The network's training is done by using data collected from optimal sequences obtained from solved examples of flowshop problems. Once trained, the neural network provides rankable measures that can be used to construct a sequence in which jobs are located as close as possible to the positions they would occupy in an optimal sequence. The contribution of these 'neural' sequences in improving the performance of some common local search techniques, such as adjacent pairwise interchange and tabu search, is examined. Tests using initial sequences generated by different heuristics show that the sequences suggested by the neural networks are more effective in directing neighborhood search methods to lower local optima.
<s> BIB008 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Single and Multi-objective optimization <s> The problem of scheduling in flowshops with the objective of minimizing total flowtime is studied. For solving the problem two ant-colony algorithms are proposed and analyzed. The first algorithm refers to some extent to ideas by Stuetzle [Stuetzle, T. (1998). An ant approach for the flow shop problem. In: Proceedings of the sixth European Congress on intelligent techniques and soft computing (EUFIT '98) (Vol. 3) (pp. 1560-1564). Aachen: Verlag Mainz] and Merkle and Middendorf [Merkle, D., & Middendorf, M. (2000). An ant algorithm with a new pheromone evaluation rule for total tardiness problems. In: Proceedings of the EvoWorkshops 2000, lecture notes in computer science 1803 (pp. 287-296). Berlin: Springer]. The second algorithm is newly developed. The proposed ant-colony algorithms have been applied to 90 benchmark problems taken from Taillard [Taillard, E. (1993). Benchmarks for basic scheduling problems. European Journal of Operational Research, 64, 278-285]. A comparison of the solutions yielded by the ant-colony algorithms with the best heuristic solutions known for the benchmark problems up to now, as published in extensive studies by Liu and Reeves [Liu, J., & Reeves, C.R. (2001). Constructive and composite heuristic solutions to the $P//\sum C_i$ scheduling problem. European Journal of Operational Research, 132, 439-452] and Rajendran and Ziegler [Rajendran, C., & Ziegler, H. (2004). Ant-colony algorithms for permutation flowshop scheduling to minimize makespan/total flowtime of jobs. European Journal of Operational Research, 155, 426-438], shows that the presented ant-colony algorithms are better, on average, than the heuristics analyzed by Liu and Reeves and Rajendran and Ziegler. <s> BIB009 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Single and Multi-objective optimization <s> In this work, a review and comprehensive evaluation of heuristics and metaheuristics for the m-machine flowshop scheduling problem with the objective of minimising total tardiness is presented. Published reviews about this objective usually deal with a single machine or parallel machines and no recent methods are compared. Moreover, the existing reviews do not use the same benchmark of instances and the results are difficult to reproduce and generalise. We have implemented a total of 40 different heuristics and metaheuristics and we have analysed their performance under the same benchmark of instances in order to make a global and fair comparison. In this comparison, we study from the classical priority rules to the most recent tabu search, simulated annealing and genetic algorithms. In the evaluations we use the experimental design approach and careful statistical analyses to validate the effectiveness of the different methods tested. The results allow us to clearly identify the state-of-the-art methods. <s> BIB010
Single optimization criteria for the PFSP are mainly based on the completion times of the jobs at the different machines, denoted by $C_{ij}$, $i \in M$, $j \in N$. Given a permutation $\pi$ of the $n$ jobs, where $\pi(j)$ denotes the job in the $j$-th position of the sequence, the completion times are calculated with the following expression: $C_{i,\pi(j)} = \max\{C_{i-1,\pi(j)}, C_{i,\pi(j-1)}\} + p_{i,\pi(j)}$ (1), where $C_{0,\pi(j)} = 0$ and $C_{i,\pi(0)} = 0$, $\forall i \in M$, $\forall j \in N$. Additionally, the completion time of job $j$ equals $C_{mj}$ and is commonly denoted as $C_j$ for short. By far the most thoroughly studied single criterion is the minimization of the maximum completion time or makespan, denoted as $C_{\max} = C_{m,\pi(n)}$. Under this objective, the PFSP is referred to as $F/prmu/C_{\max}$ according to BIB003 and was shown by BIB001 to be NP-hard in the strong sense for more than two machines ($m > 2$). Recent reviews and comparative evaluations of heuristics and metaheuristics for this problem are given in BIB006 ; BIB007 and in BIB005 . The second most studied objective is the total completion time, $TCT = \sum_{j=1}^{n} C_j$. The PFSP with this objective ($F/prmu/\sum C_j$) is already NP-hard for $m \ge 2$ according to BIB002 . Some recent results for this problem can be found in BIB008 or BIB009 . If there are no release times for the jobs, i.e., $r_j = 0$, $\forall j \in N$, then the total or average completion time equals the total or average flowtime, denoted as $F$ in the literature. Probably the third most studied criterion is total tardiness minimization. Given a due date $d_j$ for job $j$, we denote by $T_j$ the tardiness of job $j$, defined as $T_j = \max\{C_j - d_j, 0\}$. As with the other objectives, total tardiness minimization results in an NP-hard problem in the strong sense for $m \ge 2$, as shown in BIB004 . A recent review of the total tardiness version of the PFSP (the $F/prmu/\sum T_j$ problem) can be found in BIB010 . Single and multi-objective scheduling problems have been studied extensively. However, in the multi-objective case, the majority of studies use the simpler "a priori" approach, where the multiple objectives are weighted into a single one. As mentioned, the main problem with this method is that the weights for each objective must be given. The "a posteriori" multi-objective approach is more complex since there is no single optimum solution, but rather a set of "optimum" trade-off solutions. For example, consider two solutions $x^1$ and $x^2$ for a given problem with two minimization objectives $f_1$ and $f_2$, where $f_1(\cdot)$ and $f_2(\cdot)$ denote the objective values of a given solution. Is $x^1$ better than $x^2$ if $f_1(x^1) < f_1(x^2)$ but at the same time $f_2(x^1) > f_2(x^2)$? It is clear that in a multi-objective scenario neither solution is better than the other. However, given a third solution $x^3$, we can say that $x^3$ is worse than $x^1$ if $f_1(x^1) < f_1(x^3)$ and $f_2(x^1) < f_2(x^3)$. In order to properly compare two solutions in a Multi-Objective Optimization Problem (MOOP), some definitions are needed. Without loss of generality, let us suppose that there are $M$ minimization objectives in a MOOP. We use the operator $\lhd$ as "better than", so that the relation $x^1 \lhd x^2$ implies that $x^1$ is better than $x^2$ for any minimization objective. Zitzler et al. (2003) present a much more extensive notation which is later extended in Paquete (2005) and more recently in .
For the sake of completeness, some of this notation is also introduced here: Strong (or strict) domination: a solution $x^1$ is said to strongly dominate a solution $x^2$ ($x^1 \prec\prec x^2$) if $f_m(x^1) < f_m(x^2)$, $\forall m = 1, \dots, M$; i.e., $x^1$ is better than $x^2$ in all the objective values.
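To make the recursion in Eq. (1) and the dominance definition above concrete, here is a small self-contained Python sketch; the function names and the toy instance are illustrative assumptions, not material from the paper.

```python
from typing import List, Sequence

def completion_times(p: List[List[int]], perm: Sequence[int]) -> List[List[int]]:
    """Eq. (1): C[i][j] = max(C[i-1][j], C[i][j-1]) + p[i][perm[j]],
    with border values taken as zero; p is an m x n processing-time matrix."""
    m, n = len(p), len(perm)
    C = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            up = C[i - 1][j] if i > 0 else 0
            left = C[i][j - 1] if j > 0 else 0
            C[i][j] = max(up, left) + p[i][perm[j]]
    return C

def makespan(p: List[List[int]], perm: Sequence[int]) -> int:
    return completion_times(p, perm)[-1][-1]   # C_max = C_{m,pi(n)}

def strongly_dominates(f1: Sequence[float], f2: Sequence[float]) -> bool:
    """x1 strongly dominates x2: strictly better on every minimization objective."""
    return all(a < b for a, b in zip(f1, f2))

p = [[3, 1, 2],            # machine 1 processing times for jobs 0..2
     [2, 4, 1]]            # machine 2 processing times
print(makespan(p, [1, 0, 2]))                 # completion time of the last job
print(strongly_dominates((10, 5), (12, 7)))   # True: better on both objectives
```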
A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Literature review on multi-objective optimization <s> This paper gives an overview of meta-heuristic methods utilized within the paradigm of multi-objective programming. This is an area of research that has undergone substantial expansion and development in the past decade. A literature review for this period is presented and analyzed. Analysis of the types of multi-objective techniques and meta-heuristics is undertaken and reasons for their use hypothesized. The strengths and weaknesses of meta-heuristic methods as applied to multi-objective programmes are discussed. Finally, a summary is given together with suggestions for future research. <s> BIB001 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Literature review on multi-objective optimization <s> Scheduling and multicriteria optimisation theory have been subject, separately, to numerous studies. Over the last twenty years, multicriteria scheduling problems have been the subject of growing interest. However, a gap between multicriteria scheduling approaches and the multicriteria optimisation field exists. This book is an attempt to collect the elements of multicriteria optimisation theory and the basic models and algorithms of multicriteria scheduling. It is composed of numerous illustrations, algorithms and examples which may help the reader in understanding the presented concepts. This book covers general concepts such as Pareto optimality, complexity theory, and general methods for multicriteria optimisation, as well as dedicated scheduling problems and algorithms: just-in-time scheduling, flexibility and robustness, single machine problems, parallel machine problems, shop problems, etc. The second edition contains revisions and new material. <s> BIB002
The literature on multi-objective optimization is plentiful. However, work on the multi-objective PFSP is relatively scarce, especially when compared with the number of papers published on the single-objective version of the problem. The few proposed multi-objective methods for the PFSP are mainly based on evolutionary optimization, and some on local search methods such as simulated annealing or tabu search. It could be argued that many reviews have been published about multi-objective scheduling. However, we find that little attention has been paid to the flowshop scheduling problem. For example, the review by is mostly centered around single machine problems. As a matter of fact, there are only four survey papers related to flowshop. In another review by T'kindt and Billaut (2001) we find about 15 flowshop papers reviewed, most of which address the specific two-machine case. Another review is given by BIB001 ; however, this is more a quantification of papers in multi-objective optimization. Finally, the more recent review of contains mainly results for single-machine and parallel-machine scheduling problems; the papers it reviews about flowshop scheduling are all restricted to the two-machine case. For all these reasons, in this paper we provide a complete and comprehensive review of the multi-objective flowshop. Note, however, that we restrict ourselves to the pure flowshop setting, i.e., with no additional constraints. In the following, we will use the notation of BIB002 to specify the technique and objectives studied by each reviewed paper. For example, a weighted makespan and total tardiness bi-criteria flowshop problem is denoted as $F//F_l(C_{\max}, T)$. For more details, the reader is referred to or BIB002 .
A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Lexicographical and ε-constraint approaches <s> Each of a collection of items is to be produced on two machines (or stages). Each machine can handle only one item at a time and each item must be processed through machine one and then through machine two. The setup time plus work time for each item for each machine is known. A simple decision rule is obtained in this paper for the optimal scheduling of the production so that the total elapsed time is a minimum. A three-machine problem is also discussed and solved for a restricted case. <s> BIB001 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Lexicographical and ε-constraint approaches <s> Previous research on the scheduling of multimachine systems has generally focused on the optimization of individual performance measures. This article considers the sequencing of jobs through a multimachine flow shop, where the quality of the resulting schedule is evaluated according to the associated levels of two scheduling criteria, schedule makespan ($C_{\max}$) and maximum job tardiness ($T_{\max}$). We present constructive procedures that quantify the trade-off between $C_{\max}$ and $T_{\max}$. The significance of this trade-off is that the optimal solution for any preference function involving only $C_{\max}$ and $T_{\max}$ must be contained among the set of efficient schedules that comprise the trade-off curve. For the special case of two-machine flow shops, we present an algorithm that identifies the exact set of efficient schedules. Heuristic procedures for approximating the efficient set are also provided for problems involving many jobs or larger flow shops. Computational results are reported for the procedures which indicate that both the number of efficient schedules and the error incurred by heuristically approximating the efficient set are quite small. <s> BIB002 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Lexicographical and ε-constraint approaches <s> The two-stage flowshop scheduling problem with the objective of minimizing total flowtime subject to obtaining the optimal makespan is discussed. A branch-and-bound algorithm and two heuristic algorithms have been developed. The results of the experimental investigation of the effectiveness of the algorithms are also presented. <s> BIB003 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Lexicographical and ε-constraint approaches <s> This paper considers the two-stage bicriteria flow shop scheduling problem with the objective of minimizing the total flow time subject to obtaining the optimal makespan. In view of the NP-hard nature of the problem, two Genetic Algorithm (GA) based approaches are proposed to solve the problem. The effectiveness of the proposed GA based approaches is demonstrated by comparing their performance with the only known heuristic for the problem. The computational experiments show that the proposed GA based approaches are effective in solving the problem and recommend that the proposed GA based approaches are useful for solving the multi-machine, multi-criteria scheduling problems.
<s> BIB004 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Lexicographical and ε-constraint approaches <s> This article deals with the development of a heuristic for scheduling in a flowshop with the objective of minimizing the makespan and maximum tardiness of a job. The heuristic makes use of the simulated annealing technique. The proposed heuristic is relatively evaluated against the existing heuristic for scheduling to minimize the weighted sum of the makespan and maximum tardiness of a job. The results of the computational evaluation reveal that the proposed heuristic performs better than the existing one. <s> BIB005 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Lexicographical and ε-constraint approaches <s> This paper discusses the process of designing a tabu search-based heuristic for the two-stage flow shop problem with makespan minimization as the primary criterion and the minimization of total flow time as the secondary criterion. A factorial experiment is designed to analyse thoroughly the effects of four different factors, i.e. the initial solution, type of move, size of neighbourhood and the list size, on the performance of the tabu search-based heuristic. Using the techniques of evolution curves, and response tables and response graphs, coupled with the Taguchi method, the best combination of the factors for the tabu search-based heuristic is identified, and the effectiveness of the heuristic algorithm in finding an optimal solution is evaluated by comparing its performance with the best known heuristic to solve this problem. <s> BIB006 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Lexicographical and ε-constraint approaches <s> This paper develops and compares different local search heuristics for the two-stage flow shop problem with makespan minimization as the primary criterion and the minimization of either the total flow time, total weighted flow time, or total weighted tardiness as the secondary criterion. We investigate several variants of simulated annealing, threshold accepting, tabu search, and multi-level search algorithms. The influence of the parameters of these heuristics and the starting solution are empirically analyzed. The proposed heuristic algorithms are empirically evaluated and found to be relatively more effective in finding better quality solutions than the existing algorithms. <s> BIB007 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Lexicographical and ε-constraint approaches <s> In this paper, we tackle the problem of makespan minimisation in a permutation flowshop where the maximum tardiness is limited by a given upper bound. Since this problem is known to be NP-hard, we focus our attention on approximate approaches that allow obtaining good heuristic solutions to the problem. We first review the related literature and then propose a new algorithm. The algorithm is found to be competitive with the existing algorithms in terms of quality of the solution as well as in terms of the number of feasible solutions found.
Lexicographical approaches have also been explored in the literature. BIB002 proposed a constructive heuristic for the m-machine flowshop where the makespan is minimized subject to a maximum tardiness threshold, a problem denoted by $F/prmu/\varepsilon(C_{\max}/T_{\max})$. This heuristic, along with the one of BIB005 (see next section), is compared with a method recently proposed in BIB008 . In this latter paper, the newly proposed heuristic is shown to outperform the methods of BIB002 and BIB005 both in quality and in the number of feasible solutions found. A different set of objectives is considered in BIB003 , where the authors minimize total flowtime subject to the optimal makespan value in a two-machine flowshop. Such an approach is valid for the PFSP since the optimal makespan can be obtained by applying the well-known algorithm of BIB001 . Rajendran proposes a branch-and-bound (B&B) method together with some heuristics for the problem. However, the proposed methods are shown to solve instances of at most 24 jobs. In BIB004 , two genetic algorithms were proposed for solving the two-machine bicriteria flowshop problem, also in a lexicographical way as in BIB003 . The first algorithm is based on the VEGA (Vector Evaluated Genetic Algorithm) of . In this algorithm, two subpopulations are maintained (one for each objective) and are combined by the selection operator to obtain new solutions. In the second GA, referred to as the weighted criteria approach, a linear combination of the two criteria is considered, and this weighted sum of objectives is used as the fitness value. The same problem is studied by BIB006 , where a tabu search is employed. This algorithm is finely tuned by means of statistical experiments and shown to outperform some of the earlier existing methods. BIB007
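As a small illustration of the ε-constraint idea behind these approaches (minimise $C_{\max}$ subject to an upper bound ε on the maximum tardiness $T_{\max}$), the following self-contained Python sketch enumerates permutations exhaustively; this brute force merely stands in for the constructive heuristics and B&B methods of the papers above, is only viable for toy sizes, and all names and the toy instance are my own assumptions.

```python
from itertools import permutations
from typing import List, Optional, Sequence, Tuple

def schedule_objectives(p: List[List[int]], d: List[int],
                        perm: Sequence[int]) -> Tuple[int, int]:
    """Return (C_max, T_max) for a permutation, via the recursion of Eq. (1)."""
    m, n = len(p), len(perm)
    C = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            C[i][j] = max(C[i - 1][j] if i else 0,
                          C[i][j - 1] if j else 0) + p[i][perm[j]]
    tmax = max(max(C[-1][j] - d[perm[j]], 0) for j in range(n))
    return C[-1][-1], tmax

def epsilon_constraint_best(p: List[List[int]], d: List[int],
                            eps: int) -> Optional[Tuple[int, ...]]:
    """Brute-force epsilon-constraint search: min C_max s.t. T_max <= eps."""
    feasible = []
    for perm in permutations(range(len(p[0]))):
        cmax, tmax = schedule_objectives(p, d, perm)
        if tmax <= eps:
            feasible.append((cmax, perm))
    return min(feasible)[1] if feasible else None

p = [[3, 1, 2],   # processing times on machine 1
     [2, 4, 1]]   # processing times on machine 2
d = [5, 6, 7]     # due dates per job
print(epsilon_constraint_best(p, d, eps=2))  # a makespan-minimal feasible order
```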
A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Weighted objectives <s> Previous research on the scheduling of multimachine systems has generally focused on the optimization of individual performance measures. This article considers the sequencing of jobs through a multimachine flow shop, where the quality of the resulting schedule is evaluated according to the associated levels of two scheduling criteria, schedule makespan ($C_{\max}$) and maximum job tardiness ($T_{\max}$). We present constructive procedures that quantify the trade-off between $C_{\max}$ and $T_{\max}$. The significance of this trade-off is that the optimal solution for any preference function involving only $C_{\max}$ and $T_{\max}$ must be contained among the set of efficient schedules that comprise the trade-off curve. For the special case of two-machine flow shops, we present an algorithm that identifies the exact set of efficient schedules. Heuristic procedures for approximating the efficient set are also provided for problems involving many jobs or larger flow shops. Computational results are reported for the procedures which indicate that both the number of efficient schedules and the error incurred by heuristically approximating the efficient set are quite small. <s> BIB001 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Weighted objectives <s> In this paper, we present a branch-and-bound approach for solving a two-machine flow shop scheduling problem, in which the objective is to minimize a weighted combination of job flowtime and schedule makespan. Experimental results show that the algorithm works very well for certain special cases and moderately well for others. In fact, it is able to produce optimal schedules for 500-job problems in which the second machine dominates the first machine. It is also shown that the algorithm developed to provide an upper bound for the branch-and-bound is optimal when processing times for jobs are the same on both machines. The primary reason for developing the branch-and-bound approach is that its results can be used to guide other heuristic techniques, such as simulated annealing, tabu search and genetic algorithms, in their search for optimal solutions for larger problems. <s> BIB002 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Weighted objectives <s> In this paper, we study the application of a meta-heuristic to a two-machine flowshop scheduling problem. The meta-heuristic uses a branch-and-bound procedure to generate some information, which in turn is used to guide a genetic algorithm's search for optimal and near-optimal solutions. The criteria considered are makespan and average job flowtime. The problem has applications in flowshop environments where management is interested in reducing turn-around and job idle times simultaneously. We develop the combined branch-and-bound and genetic algorithm based procedure and two modified versions of it. Their performance is compared with that of three algorithms: pure branch-and-bound, pure genetic algorithm, and a heuristic. The results indicate that the combined approach and its modified versions are better than either of the pure strategies as well as the heuristic algorithm.
<s> BIB003 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Weighted objectives <s> Abstract The problem of scheduling in flowshop and flowline-based cellular manufacturing systems (CMS) is considered with the objectives of minimizing makespan, total flowtime and machine idletime. We first discuss the formulation of time-tabling in a flowline-based CMS. A genetic algorithm is then presented for scheduling in a flowshop. The proposed genetic algorithm is compared with the existing multi-criterion heuristic, and results of the computational evaluation are presented. We introduce some modifications in the heuristic seed sequences, while using them to initialize subpopulations in the algorithm for scheduling in a flowline-based CMS. The proposed algorithm is also found to perform well for scheduling in a flowline-based CMS. <s> BIB004 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Weighted objectives <s> Abstract In this paper, we propose a multi-objective genetic algorithm and apply it to flowshop scheduling. The characteristic features of our algorithm are its selection procedure and elite preserve strategy. The selection procedure in our multi-objective genetic algorithm selects individuals for a crossover operation based on a weighted sum of multiple objective functions with variable weights. The elite preserve strategy in our algorithm uses multiple elite solutions instead of a single elite solution. That is, a certain number of individuals are selected from a tentative set of Pareto optimal solutions and inherited to the next generation as elite individuals. In order to show that our approach can handle multi-objective optimization problems with concave Pareto fronts, we apply the proposed genetic algorithm to a two-objective function optimization problem with a concave Pareto front. Last, the performance of our multi-objective genetic algorithm is examined by applying it to the flowshop scheduling problem with two objectives: to minimize the makespan and to minimize the total tardiness. We also apply our algorithm to the flowshop scheduling problem with three objectives: to minimize the makespan, to minimize the total tardiness, and to minimize the total flowtime. <s> BIB005 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Weighted objectives <s> We propose a hybrid algorithm for finding a set of nondominated solutions of a multi objective optimization problem. In the proposed algorithm, a local search procedure is applied to each solution (i.e., each individual) generated by genetic operations. Our algorithm uses a weighted sum of multiple objectives as a fitness function. The fitness function is utilized when a pair of parent solutions are selected for generating a new solution by crossover and mutation operations. A local search procedure is applied to the new solution to maximize its fitness value. One characteristic feature of our algorithm is to randomly specify weight values whenever a pair of parent solutions are selected. That is, each selection (i.e., the selection of two parent solutions) is performed by a different weight vector. Another characteristic feature of our algorithm is not to examine all neighborhood solutions of a current solution in the local search procedure. 
Only a small number of neighborhood solutions are examined to prevent the local search procedure from spending almost all available computation time in our algorithm. High performance of our algorithm is demonstrated by applying it to multi objective flowshop scheduling problems. <s> BIB006 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Weighted objectives <s> Abstract This paper attempts to solve a two-machine flowshop bicriteria scheduling problem with release dates for the jobs, in which the objective function is to minimize a weighted sum of total flow time and makespan. To tackle this scheduling problem, an integer programming model with $N^2+3N$ variables and $5N$ constraints, where $N$ is the number of jobs, is formulated. Because of the lengthy computing time and high computing complexity of the integer programming model, a heuristic scheduling algorithm is presented. Experimental results show that the proposed heuristic algorithm can solve this problem rapidly and accurately. The average solution quality of the heuristic algorithm is above 99% and is much better than that of the SPT rule as a benchmark. A 15-job case requires only 0.018 s, on average, to obtain an ultimate or even optimal solution. The heuristic scheduling algorithm is a more practical approach to real world applications than the integer programming model. <s> BIB007 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Weighted objectives <s> This article deals with the development of a heuristic for scheduling in a flowshop with the objective of minimizing the makespan and maximum tardiness of a job. The heuristic makes use of the simulated annealing technique. The proposed heuristic is relatively evaluated against the existing heuristic for scheduling to minimize the weighted sum of the makespan and maximum tardiness of a job. The results of the computational evaluation reveal that the proposed heuristic performs better than the existing one. <s> BIB008 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Weighted objectives <s> Abstract In this research, we propose the gradual-priority weighting (GPW) approach to search the Pareto optimal solutions for the multi-objective flowshop scheduling problem. The proposed approach will search feasible solution space from the first objective at the beginning and towards the other objectives step by step. To verify the effectiveness and efficiency of our proposed approach, GPW will be compared with the variable weight approach. Through the extensive experimental tests, the proposed method performs quite effectively and efficiently. <s> BIB009 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Weighted objectives <s> In this paper we analyse the performance of flowshop sequencing heuristics with respect to the objectives of makespan and flowtime minimisation. For flowtime minimisation, we propose the strategy employed by the NEH heuristic to construct partial solutions. Results show that this approach outperforms the common fast heuristics for flowtime minimisation while performing similarly or slightly worse than others which, on reward, prove to be much more CPU time-consuming. Additionally, the suggested approach is well balanced with respect to makespan and flowtime minimisation.
Based on the previous results, two algorithms are proposed for the sequencing problem with multiple objectives – makespan and flowtime minimisation. These algorithms provide the decision maker with a set of heuristically efficient solutions such that he/she may choose the most suitable sequence for a given ratio between costs associated with makespan and those assigned to flowtime. Computational experience shows both algorithms to perform better than the current heuristics designed for the two-criteria problem. <s> BIB010 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Weighted objectives <s> Abstract This paper considers the flowshop scheduling problem with the objective of minimizing the weighted sum of makespan and mean flowtime. Even though the two-machine problem has been addressed and several heuristics have been established in the literature, these heuristics have not been compared. In this paper, a comparison of these available heuristics in the literature is conducted. Moreover, three new heuristic algorithms are proposed, which can be utilized for both the two-machine and m-machine problems. Computational experiments indicate that the proposed heuristics are superior to all the existing heuristics in the literature including a genetic algorithm. Two dominance relations are also developed; one for the two-machine and the other for the three-machine problem. Experimental results show that both relations are efficient. <s> BIB011 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Weighted objectives <s> This paper addresses the m-machine flowshop problem with the objective of minimizing a weighted sum of makespan and maximum tardiness. Two types of the problem are addressed. The first type is to minimize the objective function subject to the constraint that the maximum tardiness should be less than a given value. The second type is to minimize the objective without the constraint. A new heuristic is proposed and compared to two existing heuristics. Computational experiments indicate that the proposed heuristic is much better than the existing ones. Moreover, a dominance relation and a lower bound are developed for a three-machine problem. The dominance relation is shown to be quite efficient. <s> BIB012 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Weighted objectives <s> In this paper we consider a production scheduling problem in a two-machine flowshop. The bicriteria objective is a linear combination or weighted sum of the makespan and total completion time. This problem is computationally hard because the special case concerning the minimization of the total completion time is already known to be strongly NP-hard. To find an optimal schedule, we deploy the Johnson algorithm and a lower bound scheme that was previously developed for total completion time scheduling. Computational experiments are presented to study the relative performance of different lower bounds. While the best known bound for the bicriteria problem can successfully solve test cases of 10 jobs within a time limit of 30 min, under the same setting our branch-and-bound algorithm solely equipped with the new scheme can produce optimal schedules for most instances with 30 or less jobs. The results demonstrate the convincing capability of the lower bound scheme in curtailing unnecessary branching during pr... 
<s> BIB013 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Weighted objectives <s> In this paper, we propose a parallel exact method to solve bi-objective combinatorial optimization problems. This method has been inspired by the two-phase method which is a very general scheme to optimally solve bi-objective combinatorial optimization problems. Here, we first show that applying such a method to a particular problem allows improvements. Secondly, we propose a parallel model to speed up the search. Experiments have been carried out on a bi-objective permutation flowshop problem for which we also propose a new lower bound. <s> BIB014
As mentioned, most studies make use of the "a priori" approach. This means that the objectives are weighted (mostly linearly) into a single combined criterion. After this conversion, most single objective algorithms can be applied. BIB002 proposed a B&B procedure for solving a two machine flowshop problem with a weighted combination of flowtime and makespan as the objective. The algorithm initializes the branch and bound tree with an initial feasible solution and an upper bound, both obtained from a greedy heuristic. This algorithm was able to find the optimal solutions of problems with two machines and up to 500 jobs, but only under some strong assumptions and data distributions. The same authors use this branch and bound in BIB003 as a tool for providing the initial population in a genetic algorithm. The hybrid B&B+GA approach is tested on the same two machine bi-criteria flowshop and is shown to outperform the pure B&B and GA algorithms. Another genetic algorithm is presented in BIB004 for makespan and flowtime, including also idle time as a third criterion. The algorithm uses effective heuristics for initialization. Cavalieri and Gaiardelli (1998) study a realistic production problem that they model as a flowshop problem with makespan and tardiness criteria. Two genetic algorithms are proposed in which many of the parameters are adaptive. Yeh (1999) proposes another B&B method that compares favorably against that of BIB002 . For unstructured problems, Yeh's B&B is able to solve up to 14-job instances in less time than the B&B of BIB002 . The same author improved this B&B in Yeh (2001) and finally proposed a hybrid GA in Yeh (2002), showing the best results among all previous work. Note that all these papers of Yeh deal with the specific two machine case only. Heuristic methods and a mixed integer programming model have also been proposed for the m machine problem combining makespan and flowtime objectives; that study shows that the integer programming approach is only valid for very small instances. A very similar work with similar results was given in a paper by the same authors (see BIB007 ). Sivrikaya-Şerifoğlu and Ulusoy (1998) presented three B&B algorithms and two heuristics for the two machine flowshop with makespan and flowtime objectives. All these methods are compared in a series of experiments. The largest instances solved by the methods contain 18 jobs. A linear combination of makespan and tardiness is studied in BIB008 , but in this case a Simulated Annealing (SA) algorithm is proposed. BIB009 study the gradual-priority weighting approach in place of the variable weight approach for genetic and genetic local search methods. These two methods are related to those of BIB005 and BIB006 , respectively. In numerical experiments, the gradual-priority weighting approach is shown to be superior. BIB010 proposed several heuristics, along with a comprehensive computational evaluation, for the m machine flowshop problem under makespan and flowtime. BIB011 also studies the same objectives. A total of 10 heuristics are comprehensively examined in a computational experiment; among them, three heuristics proposed by the author outperform the others. Several dominance relations for special cases are proposed as well. A different set of objectives, namely makespan and maximum tardiness, is studied by BIB012 . Two variants are tested: in the first one, a weighted combination of the two objectives subject to a maximum tardiness value is studied.
In the second, the unconstrained weighted combination of the criteria is examined. The author proposes a heuristic and compares it against the results of BIB001 and BIB008 . The proposed method is shown to outperform these two according to the reported results. A genetic algorithm that borrows some ideas from the Traveling Salesman Problem (TSP) has also been proposed. The implemented GA is a straightforward one that simply uses a weighted combination of the criteria as the fitness of each individual in the population. The algorithm is not compared against any other method from the literature, and only some results on small flowshop instances are reported. BIB013 focuses on the two machine case with a weighted combination of makespan and flowtime. The authors present a B&B method that is tested on a set of small instances and is able to find optimum solutions to instances of up to 15 jobs in all cases. BIB014 have studied the m machine problem with makespan and total tardiness criteria. A special methodology based on a B&B implementation, called the two-phase method, is employed. For performance reasons, the method is parallelized. As a result, some instances of up to 20 jobs and 20 machines are solved to optimality. However, the reported solving times for these cases are of seven days on a cluster of four parallel computers.
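Common to most of the weighted-sum methods reviewed above is the evaluation of a permutation through the standard completion time recursion, after which the objectives are linearly combined. The following Python sketch illustrates this; the weights, data and function names are illustrative assumptions, not taken from any reviewed paper.

```python
# Minimal sketch of the "a priori" weighted-sum evaluation: a permutation
# is scored by w1 * makespan + w2 * total flowtime. Data is invented.

def completion_times(seq, p):
    """p[i][j] is the processing time of job j on machine i.
    Returns the completion time of each job of seq on the last machine."""
    m = len(p)
    c = [0] * m    # c[i]: completion time of the previous job on machine i
    last_machine = []
    for j in seq:
        c[0] += p[0][j]
        for i in range(1, m):
            # A job starts on machine i once the machine is free and the
            # job has finished on machine i - 1.
            c[i] = max(c[i], c[i - 1]) + p[i][j]
        last_machine.append(c[m - 1])
    return last_machine

def weighted_objective(seq, p, w_makespan=0.5, w_flowtime=0.5):
    completions = completion_times(seq, p)
    makespan, flowtime = completions[-1], sum(completions)
    return w_makespan * makespan + w_flowtime * flowtime

p = [[3, 5, 1],   # machine 1
     [2, 4, 7],   # machine 2
     [6, 1, 2]]   # machine 3
print(weighted_objective([0, 1, 2], p))
```

Any single objective algorithm (B&B, GA, SA, tabu search) can then be applied on top of this scalarized evaluation, which is precisely the conversion described at the beginning of this section.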
A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches <s> Each of a collection of items are to be produced on two machines (or stages). Each machine can handle only one item at a time and each item must be processed through machine one and then through machine two. The setup time plus work time for each item for each machine is known. A simple decision rule is obtained in this paper for the optimal scheduling of the production so that the total elapsed time is a minimum. A three-machine problem is also discussed and solved for a restricted case. <s> BIB001 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches <s> Previous research on the scheduling of multimachine systems has generally focused on the optimization of individual performance measures. This article considers the sequencing of jobs through a multimachine flow shop, where the quality of the resulting schedule is evaluated according to the associated levels of two scheduling criteria, schedule makespan (Cmax) and maximum job tardiness (Tmax). We present constructive procedures that quantify the trade-off between Cmax and Tmax. The significance of this trade-off is that the optimal solution for any preference function involving only Cmax and Tmax must be contained among the set of efficient schedules that comprise the trade-off curve. For the special case of two-machine flow shops, we present an algorithm that identifies the exact set of efficient schedules. Heuristic procedures for approximating the efficient set are also provided for problems involving many jobs or larger flow shops. Computational results are reported for the procedures which indicate that both the number of efficient schedules and the error incurred by heuristically approximating the efficient set are quite small. <s> BIB002 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches <s> The two-stage flowshop scheduling problem with the objective of minimizing total flowtime subject to obtaining the optimal makespan is discussed. A branch-and-bound algorithm and two heuristic algorithms have been developed. The results of the experimental investigation of the effectiveness of the algorithms are also presented. <s> BIB003 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches <s> In trying to solve multiobjective optimization problems, many traditional methods scalarize the objective vector into a single objective. In those cases, the obtained solution is highly sensitive to the weight vector used in the scalarization process and demands that the user have knowledge about the underlying problem. Moreover, in solving multiobjective problems, designers may be interested in a set of Pareto-optimal points, instead of a single point. Since genetic algorithms (GAs) work with a population of points, it seems natural to use GAs in multiobjective optimization problems to capture a number of solutions simultaneously. Although a vector evaluated GA (VEGA) has been implemented by Schaffer and has been tried to solve a number of multiobjective problems, the algorithm seems to have bias toward some regions.
The proof-of-principle results obtained on three problems used by Schaffer and others suggest that the proposed method can be extended to higher dimensional and more difficult multiobjective problems. A number of suggestions for extension and application of the algorithm are also discussed. <s> BIB004 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches <s> Abstract In this paper, we propose a multi-objective genetic algorithm and apply it to flowshop scheduling. The characteristic features of our algorithm are its selection procedure and elite preserve strategy. The selection procedure in our multi-objective genetic algorithm selects individuals for a crossover operation based on a weighted sum of multiple objective functions with variable weights. The elite preserve strategy in our algorithm uses multiple elite solutions instead of a single elite solution. That is, a certain number of individuals are selected from a tentative set of Pareto optimal solutions and inherited to the next generation as elite individuals. In order to show that our approach can handle multi-objective optimization problems with concave Pareto fronts, we apply the proposed genetic algorithm to a two-objective function optimization problem with a concave Pareto front. Last, the performance of our multi-objective genetic algorithm is examined by applying it to the flowshop scheduling problem with two objectives: to minimize the makespan and to minimize the total tardiness. We also apply our algorithm to the flowshop scheduling problem with three objectives: to minimize the makespan, to minimize the total tardiness, and to minimize the total flowtime. <s> BIB005 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches <s> Bicriterion scheduling problems have attracted the attention of many researchers, especially in the past decade. Although more than fifty papers have been published on this topic, most studies done so far focus only on a single machine. In this paper, we extend the development to the two-machine case and present algorithms for the bicriterion of minimising makespan and number of tardy jobs and of makespan and total tardiness. Computational results are also presented. <s> BIB006 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches <s> We propose a hybrid algorithm for finding a set of nondominated solutions of a multi objective optimization problem. In the proposed algorithm, a local search procedure is applied to each solution (i.e., each individual) generated by genetic operations. Our algorithm uses a weighted sum of multiple objectives as a fitness function. The fitness function is utilized when a pair of parent solutions are selected for generating a new solution by crossover and mutation operations. A local search procedure is applied to the new solution to maximize its fitness value. One characteristic feature of our algorithm is to randomly specify weight values whenever a pair of parent solutions are selected. That is, each selection (i.e., the selection of two parent solutions) is performed by a different weight vector. Another characteristic feature of our algorithm is not to examine all neighborhood solutions of a current solution in the local search procedure. 
Only a small number of neighborhood solutions are examined to prevent the local search procedure from spending almost all available computation time in our algorithm. High performance of our algorithm is demonstrated by applying it to multi objective flowshop scheduling problems. <s> BIB007 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches <s> This paper adapts metaheuristic methods to develop Pareto optimal solutions to multi-criteria production scheduling problems. Approach is inspired by enhanced versions of genetic algorithms. Method first extends the Nondominated Sorting Genetic Algorithm (NSGA), a method recently proposed to produce Pareto-optimal solutions to numerical multi-objective problems. Multi-criteria flowshop scheduling is addressed next. Multi-criteria job shop scheduling is subsequently examined. Lastly the multi-criteria open shop problem is solved. Final solutions to each are Pareto optimal. The paper concludes with a statistical comparison of the performance of the basic NSGA to NSGA augmented by elitist selection. <s> BIB008 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches <s> This paper shows how the performance of evolutionary multiobjective optimization (EMO) algorithms can be improved by hybridization with local search. The main positive effect of the hybridization is the improvement in the convergence speed to the Pareto front. On the other hand, the main negative effect is the increase in the computation time per generation. Thus, the number of generations is decreased when the available computation time is limited. As a result, the global search ability of EMO algorithms is not fully utilized. These positive and negative effects are examined by computational experiments on multiobjective permutation flowshop scheduling problems. Results of our computational experiments clearly show the importance of striking a balance between genetic search and local search. In this paper, we first modify our former multiobjective genetic local search (MOGLS) algorithm by choosing only good individuals as initial solutions for local search and assigning an appropriate local search direction to each initial solution. Next, we demonstrate the importance of striking a balance between genetic search and local search through computational experiments. Then we compare the modified MOGLS with recently developed EMO algorithms: the strength Pareto evolutionary algorithm and revised nondominated sorting genetic algorithm. Finally, we demonstrate that a local search can be easily combined with those EMO algorithms for designing multiobjective memetic algorithms. <s> BIB009 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches <s> In this paper we analyse the performance of flowshop sequencing heuristics with respect to the objectives of makespan and flowtime minimisation. For flowtime minimisation, we propose the strategy employed by the NEH heuristic to construct partial solutions. Results show that this approach outperforms the common fast heuristics for flowtime minimisation while performing similarly or slightly worse than others which, on reward, prove to be much more CPU time-consuming. Additionally, the suggested approach is well balanced with respect to makespan and flowtime minimisation. 
Based on the previous results, two algorithms are proposed for the sequencing problem with multiple objectives – makespan and flowtime minimisation. These algorithms provide the decision maker with a set of heuristically efficient solutions such that he/she may choose the most suitable sequence for a given ratio between costs associated with makespan and those assigned to flowtime. Computational experience shows both algorithms to perform better than the current heuristics designed for the two-criteria problem. <s> BIB010 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches <s> Abstract In this research, we propose the gradual-priority weighting (GPW) approach to search the Pareto optimal solutions for the multi-objective flowshop scheduling problem. The proposed approach will search feasible solution space from the first objective at the beginning and towards the other objectives step by step. To verify the effectiveness and efficiency of our proposed approach, GPW will be compared with the variable weight approach. Through the extensive experimental tests, the proposed method performs quite effectively and efficiently. <s> BIB011 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches <s> Abstract In this study we address the problem of minimizing makespan and maximum earliness simultaneously in a two-machine flow shop environment. We develop a branch-and-bound procedure that generates all efficient solutions with respect to two criteria. We propose several lower and upper bounding schemes to enhance the efficiency of the algorithm. We also propose a heuristic procedure that generates approximate efficient solutions. Our computational results reveal that the branch-and-bound procedure is capable of solving problems with up to 25 jobs and the heuristic procedure produces approximate efficient solutions that are very close to exact efficient solutions in very small computation times. <s> BIB012 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches <s> In this paper, a metaheuristic procedure based on simulated annealing (SA) is proposed to find Pareto-optimal or non-dominated solution set for the permutation flow shop scheduling problems (FSPs) with the consideration of regular performance measures of minimizing the makespan and the total flow time of jobs. A new perturbation mechanism called "segment-random insertion (SRI)" scheme is used to generate the neighbourhood of a given sequence. The performance of the proposed algorithm is evaluated by solving benchmark FSP instances provided by (B. Taillard, 1993). The results obtained are evaluated in terms of the number of non-dominated schedules generated by the algorithm and the proximity of the obtained non-dominated front to the Pareto front. The results and simple quality measures suggested in this paper can be used to evaluate the quality of the non-dominated fronts obtained by different algorithms <s> BIB013 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches <s> This paper addresses the flowshop scheduling problem with multiple performance objectives in such a way as to provide the decision maker with approximate Pareto optimal solutions. 
It is well known that the partial enumeration constructive heuristic NEH and its adaptations perform well for single objectives such as makespan, total tardiness and flowtime. In this paper, we develop a similar heuristic using the concept of Pareto dominance when comparing partial and complete schedules. The heuristic is tested on problems involving combinations of the above criteria. For the two-machine case, and the pairs of objectives: (i) makespan and maximum tardiness, (ii) makespan and total tardiness, the heuristic is compared with branch-and-bound algorithms proposed in the literature. For two and more than two machines, and the criteria combinations considered in this article, the heuristic performance is tested against constructive heuristics reported in the literature. By means of an illustrative example, it is shown that a genetic algorithm from the literature performs better when starting from heuristic solutions rather than random solutions. <s> BIB014 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches <s> Abstract Most of research in production scheduling is concerned with the optimization of a single criterion. However the analysis of the performance of a schedule often involves more than one aspect and therefore requires a multi-objective treatment. In this paper we first present ( Section 1 ) the general context of multi-objective production scheduling, analyze briefly the different possible approaches and define the aim of this study i.e. to design a general method able to approximate the set of all the efficient schedules for a large set of scheduling models. Then we introduce ( Section 2 ) the models we want to treat––one machine, parallel machines and permutation flow shops––and the corresponding notations. The method used––called multi-objective simulated annealing––is described in Section 3 . Section 4 is devoted to extensive numerical experiments and their analysis. Conclusions and further directions of research are discussed in the last section. <s> BIB015 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches <s> This paper addresses flowshop scheduling problems with multiple performance criteria in such a way as to provide the decision maker with approximate Pareto optimal solutions. Genetic algorithms have attracted the attention of researchers in the nineties as a promising technique for solving multi-objective combinatorial optimization problems. We propose a genetic local search algorithm with features such as preservation of dispersion in the population, elitism, and use of a parallel multi-objective local search so as intensify the search in distinct regions. The concept of Pareto dominance is used to assign fitness to the solutions and in the local search procedure. The algorithm is applied to the flowshop scheduling problem for the following two pairs of objectives: (i) makespan and maximum tardiness; (ii) makespan and total tardiness. For instances involving two machines, the algorithm is compared with Branch-and-Bound algorithms proposed in the literature. For such instances and larger ones, involving up to 80 jobs and 20 machines, the performance of the algorithm is compared with two multi-objective genetic local search algorithms proposed in the literature. Computational results show that the proposed algorithm yields a reasonable approximation of the Pareto optimal set. 
<s> BIB016 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches <s> In this paper the problem of permutation flow shop scheduling with the objectives of minimizing the makespan and total flow time of jobs is considered. A Pareto-ranking based multi-objective genetic algorithm, called a Pareto genetic algorithm (GA) with an archive of non-dominated solutions subjected to a local search (PGA-ALS) is proposed. The proposed algorithm makes use of the principle of non-dominated sorting, coupled with the use of a metric for crowding distance being used as a secondary criterion. This approach is intended to alleviate the problem of genetic drift in GA methodology. In addition, the proposed genetic algorithm maintains an archive of non-dominated solutions that are being updated and improved through the implementation of local search techniques at the end of every generation. A relative evaluation of the proposed genetic algorithm and the existing best multi-objective algorithms for flow shop scheduling is carried by considering the benchmark flow shop scheduling problems. The non-dominated sets obtained from each of the existing algorithms and the proposed PGA-ALS algorithm are compared, and subsequently combined to obtain a net non-dominated front. It is found that most of the solutions in the net non-dominated front are yielded by the proposed PGA-ALS. <s> BIB017 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches <s> In this paper, we contribute with the first results on parallel cooperative multi-objective meta-heuristics on computational grids. We particularly focus on the island model and the multi-start model and their cooperation. We propose a checkpointing-based approach to deal with the fault tolerance issue of the island model. Nowadays, existing Dispatcher-Worker grid middlewares are inadequate for the deployment of parallel cooperative applications. Indeed, these need to be extended with a software layer to support the cooperation. Therefore, we propose a Linda-like cooperation model and its implementation on top of Xtrem Web. This middleware is then used to develop a parallel meta-heuristic applied to a bi-objective Flow-Shop problem using the two models. The work has been experimented on a multidomain education network of 321 heterogeneous Linux PCs. The preliminary results, obtained after more than 10 days, demonstrate that the use of grid computing allows to fully exploit effectively different parallel models and their combination for solving large-size problem instances. An improvement of the effectiveness by over 60% is realized compared to serial meta-heuristic. <s> BIB018 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches <s> Flow shop problems as a typical manufacturing challenge have gained wide attention in academic fields. In this paper, we consider a bi-criteria permutation flow shop scheduling problem, where weighted mean completion time and weighted mean tardiness are to be minimized simultaneously. Since a flow shop scheduling problem has been proved to be NP-hard in strong sense, an effective multi-objective particle swarm (MOPS), exploiting a new concept of the Ideal Point and a new approach to specify the superior particle's position vector in the swarm, is designed and used for finding locally Pareto-optimal frontier of the problem. 
To prove the efficiency of the proposed algorithm, various test problems are solved and the reliability of the proposed algorithm, based on some comparison metrics, is compared with a distinguished multi-objective genetic algorithm, i.e. SPEA-II. The computational results show that the proposed MOPS performs better than the genetic algorithm, especially for the large-sized problems. <s> BIB019
When focusing on the "a posteriori" approach, the number of existing studies drops significantly. In the previously commented work of BIB002 , the authors also propose a B&B procedure for the $C_{max}$ and $T_{max}$ objectives that computes the global Pareto front for the case of two machines. A genetic algorithm was proposed by BIB005 . Later, in BIB007 , the algorithm is extended with a local search step that is applied to every new solution after the crossover and mutation procedures. Sayın and Karabatı (1999) studied a B&B algorithm that generates the optimum Pareto front for a two machine flowshop with makespan and flowtime objectives. The experimental evaluation compares only against heuristics like those of BIB001 and BIB003 . Some instances of up to 24 jobs are solved to optimality. BIB006 proposed a B&B algorithm for the two machine bi-criteria optimization problem, with the objectives of minimizing makespan and number of tardy jobs and also with the objectives of makespan and total tardiness. A Pareto-based genetic algorithm for multi-criteria scheduling is also developed in BIB008 . This method is based on the NSGA method of BIB004 . Some brief experiments are given for a single flowshop instance with flowtime and makespan objectives. The earlier MOGA algorithm of BIB005 was later improved by a new method, called CMOGA, which refines the weight assignment. A few experiments with makespan and total tardiness criteria are conducted, and CMOGA outperforms MOGA in the experiments carried out. Ishibuchi et al. (2003) present a comprehensive study of the effect of adding local search to their previous algorithm BIB007 . The local search is only applied to good individuals and along specified search directions. This form of local search was shown to give better solutions for many different multi-objective genetic algorithms. In Loukil et al. (2000), many different scheduling problems are solved with different combinations of objectives. The main technique used is a multi-objective tabu search (MOTS). The paper contains a general study involving single and parallel machine problems as well. Later, in BIB015 , a similar study is carried out, but in this case the multi-objective approach employed is a simulated annealing algorithm (MOSA). A B&B approach is also shown by BIB012 for the two machine case under makespan and maximum earliness criteria. To the best of our knowledge, such a combination of objectives had not been studied in the literature before. The procedure is able to solve problems of up to 25 jobs. The authors also propose a heuristic method. BIB013 propose a Pareto-based simulated annealing algorithm for makespan and total flowtime criteria. The proposed method is compared against that of BIB009 and against an early version of a SA algorithm that was published later. The results, shown only for small problems of up to 20 jobs, show the proposed algorithm to be better on some specific performance metrics. BIB014 studied heuristics for several two and three objective combinations among makespan, flowtime and maximum tardiness. For two machines, the authors compare the proposed heuristics against the existing B&B methods of BIB002 and BIB006 . For the general m machine case, the authors compare the results against those of BIB010 . The results favor the proposed method, which is also shown to improve the results of the GA of BIB005 when used as a seed sequence. The same authors developed a tabu search for the makespan and maximum tardiness objectives. The algorithm includes several advanced features like diversification and local search in several neighborhoods.
For the two machine case, the proposed method is again compared against BIB002 , and for more than two machines against BIB007 . The proposed method is shown to be competitive in numerical experiments. In a more recent paper, BIB016 carry out a similar study, but in this case using genetic algorithms as solution tools. Although the GA is shown to be better than other approaches, the authors do not compare it with their previous methods. Makespan and total flowtime have also been studied with the help of simulated annealing methods. These algorithms start from heuristic solutions that are further enhanced by improvement schemes. Two versions of these SA methods (MOSA and MOSA-II) are shown to outperform the GA of BIB007 . BIB017 have proposed a Pareto-archived genetic algorithm with local search and have tested it with the makespan and flowtime objectives. The authors test this approach against BIB007 and BIB011 . Apparently, the newly proposed GA performs better under some limited tests. BIB018 propose a grid-based parallel genetic algorithm aimed at obtaining an accurate Pareto front for makespan and total tardiness criteria. While the authors do not test their approach against other existing algorithms, the results appear promising. However, the reported running times are of 10 days on a set of computers operating as a grid. More recently, BIB019 have proposed a complex hybrid multi-objective particle swarm optimization (MOPS) method. The considered criteria are flowtime and total tardiness. In this method, an elite tabu search algorithm is used to initialize the swarm. A parallel local search procedure is employed as well to enhance the solution represented by each particle. This complex algorithm is compared against the SPEA-II multi-objective genetic algorithm and is reported to perform better, especially on the large-sized instances.
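All the "a posteriori" methods above revolve around Pareto dominance and the maintenance of a set of non-dominated solutions. A minimal Python sketch of these two building blocks (with invented objective vectors; minimization of all objectives is assumed) is:

```python
# Minimal sketch of Pareto dominance and non-dominated filtering
# (minimization of all objectives is assumed; the data is invented).

def dominates(a, b):
    """True if vector a dominates b: no worse in every objective and
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# e.g. (makespan, total tardiness) values of candidate schedules
candidates = [(100, 40), (95, 55), (110, 30), (95, 60), (120, 25)]
print(pareto_front(candidates))  # (95, 60) is dominated by (95, 55)
```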
A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Goal Programming and other approaches <s> Until recently, the majority of models used to find an optimal sequence for the standard flow-shop problem were based on a single objective, typically makespan. In many applications, the practitioner may also want to consider other criteria simultaneously, such as mean flow-time or throughput time. As makespan and flow-time are equivalent criteria for optimizing machine idle-time and job idle-time, respectively, these additional criteria could be inherently considered as well. The effect of job idle-time, measuring in-process inventory, could be of particular importance. <s> BIB001 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Goal Programming and other approaches <s> Abstract We consider the flowshop scheduling problem with the twin-objective of minimizing makespan and total flowtime. We apply the Simulated Annealing (SA) technique to develop the proposed heuristic. Two new heuristics are proposed to provide the seed sequences for the SA heuristic. When compared with the existing heuristics, the proposed SA heuristic is found to perform much better. <s> BIB002 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Goal Programming and other approaches <s> Abstract The problem of scheduling in flowshop and flowline-based manufacturing cell is considered with the bicriteria of minimizing makespan and total flowtime of jobs. The formulation of the scheduling problems for both the flowshop and the flowline-based manufacturing cell is first discussed. We then present the development of the proposed heuristic for flowshop scheduling. A heuristic preference relation is developed as the basis for the heuristic so that only the potential job interchanges are checked for possible improvement with respect to bicriteria. The proposed heuristic algorithm as well as the existing heuristic are evaluated in a large number of randomly generated large-sized flowshop problems. We also investigate the effectiveness of these heuristics with respect to the objective of minimizing total machine idletime. We then modify the proposed heuristic for scheduling in a cell, and evaluate its performance. <s> BIB003 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Goal Programming and other approaches <s> Abstract Most of the heuristics for flowshop scheduling aim at minimizing makespan. However, scheduling with multiple objectives, such as that of minimizing makespan, total flowtime and machine idletime, is more effective in reducing the total scheduling cost. In this article, we first address the problem of scheduling to minimize makespan and total flowtime, and propose a new heuristic algorithm. A heuristic preference relation is developed as the basis for the heuristic so that only the potential job interchanges are checked for possible improvement with respect to these two objectives. The proposed as well as the existing heuristics are evaluated in a large number of randomly generated large-sized problems. The proposed heuristic algorithm is then extended to cover the problem of scheduling to minimize makespan, total flowtime and machine idletime. The results of the experimental investigation of the evaluation of this heuristic algorithm and the existing heuristic in meeting these objectives are also reported.
It is found that both the proposed heuristics are more effective than the existing one for scheduling with multiple objectives. <s> BIB004 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Goal Programming and other approaches <s> This paper describes a new algorithm for solving the flow-shop scheduling problem of $n/m/F/F_{max}$ to minimize maximum flow time. A set of 2000 randomly produced problems of different sizes was solved and compared with three well-known heuristics. The result reveals that the new algorithm improves the best heuristics in performance measures by a significant percentage. Also, because the number of necessary operations for solving problems in this algorithm is independent of the number of machines involved, it is apt for systems with large numbers of machines. This work helps in solving flow-shop scheduling problems for manufacturing systems with large numbers of machines. <s> BIB005
There are some cases of other multi-objective methodologies like goal programming. For example, BIB001 proposed a mixed-integer goal programming formulation for a bi-objective PFSP dealing with makespan and flowtime criteria. As with every goal programming method, a minimum desired value for each objective has to be introduced (a generic formulation of this idea is sketched at the end of this section). Later, Wilson (1989) proposed a different model with fewer variables but a larger number of constraints. However, both models have the same number of binary variables. The comparison between the two models shows the one of BIB001 to be better for problems with $n \geq 15$. Many algorithms have been proposed in the literature that do not explicitly consider multiple objectives as in the previous sections. For example, BIB005 propose a heuristic that is specifically devised for minimizing machine idle time in an m machine flowshop. Although the heuristic does not allow for setting weights or threshold values and does not follow the Pareto approach either, the authors evaluate it under a number of objectives. A similar approach is followed by BIB002 , where a simulated annealing method is proposed for the m machine problem and evaluated under makespan and flowtime criteria. Along with the SA method, two heuristics are also studied. BIB004 proposes a heuristic for the same problem dealt with in BIB005 . After a comprehensive numerical experimentation, the newly proposed heuristic is shown to be superior to that of Ho and Chang. A very similar study is also presented by the same author in BIB003 . Ravindran et al. (2005) present three heuristics aimed at minimizing makespan and flowtime. The authors test the three proposed methods against the heuristic of Rajendran (1995), but using only very small instances of at most 20 jobs and 20 machines. The three heuristics appear to outperform Rajendran's, albeit slightly. It is difficult to draw a line with this type of papers, since many authors test a given proposed heuristic under different objectives. However, the heuristics commented on above were designed with several objectives in mind and therefore we have included them in the review. To sum up, Table 1 contains, in chronological order, the reviewed papers along with the number of machines (2 or m), the multi-objective approach and criteria considered, as well as the type of method used. [Insert Table 1 about here] In total, 54 papers have been reviewed. Among them, 21 deal with the specific two machine case. Of the remaining 33 that study the more general m machine case, a total of 16 use the "a posteriori" or Pareto-based approach. The results of these methods are not comparable for several reasons. First, the authors do not always deal with the same combination of criteria. Second, comparisons are often carried out with different benchmarks and against heuristics or older methods. Last and most importantly, the quality measures employed are not appropriate, as recent studies have shown. The next section deals with these measures.
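As referenced earlier in this section, the goal programming idea can be sketched for the bi-objective case as follows. The goals $G_1$ and $G_2$ and the deviation variables are generic placeholders; this is a sketch of the general technique, not the exact model of BIB001 or Wilson (1989):

```latex
\begin{align*}
\min \quad & d_1^+ + d_2^+ \\
\text{s.t.} \quad & C_{max}(\pi) - d_1^+ + d_1^- = G_1\\
& \textstyle\sum_j C_j(\pi) - d_2^+ + d_2^- = G_2\\
& d_1^+,\, d_1^-,\, d_2^+,\, d_2^- \geq 0
\end{align*}
```

Here $d_k^+$ and $d_k^-$ measure the over- and under-achievement of each goal, so minimizing the overshoots drives makespan and flowtime towards their desired values $G_1$ and $G_2$.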
A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Multi-objective quality measures <s> Flow shop problems as a typical manufacturing challenge have gained wide attention in academic fields. In this paper, we consider a bi-criteria permutation flow shop scheduling problem, where weighted mean completion time and weighted mean tardiness are to be minimized simultaneously. Since a flow shop scheduling problem has been proved to be NP-hard in strong sense, an effective multi-objective particle swarm (MOPS), exploiting a new concept of the Ideal Point and a new approach to specify the superior particle's position vector in the swarm, is designed and used for finding locally Pareto-optimal frontier of the problem. To prove the efficiency of the proposed algorithm, various test problems are solved and the reliability of the proposed algorithm, based on some comparison metrics, is compared with a distinguished multi-objective genetic algorithm, i.e. SPEA-II. The computational results show that the proposed MOPS performs better than the genetic algorithm, especially for the large-sized problems. <s> BIB001 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Multi-objective quality measures <s> Abstract Multi-objective optimization using evolutionary algorithms identifies Pareto-optimal alternatives or their close approximation by means of a sequence of successive local improvement moves. While several successful applications to combinatorial optimization problems are known, studies of underlying problem structures are still scarce. The paper presents a study of the problem structure of multi-objective permutation flow shop scheduling problems and investigates the effectiveness of local search neighborhoods within an evolutionary search framework. First, small problem instances with up to six objective functions for which the optimal alternatives are known are studied. Second, benchmark instances taken from literature are investigated. It turns out for the investigated data sets that the Pareto-optimal alternatives are found relatively concentrated in alternative space. Also, it can be shown that no single neighborhood operator is able to equally identify all Pareto-optimal alternatives. Taking this into consideration, significant improvements have been obtained by combining different neighborhood structures into a multi-operator search framework. <s> BIB002
As commented in previous sections, comparing the solutions of two different Pareto approximations coming from two algorithms is not straightforward. Two approximation sets A and B can even be incomparable. Recent studies, like that of Zitzler et al. (2003) and later works, are an example of the enormous effort being carried out in order to provide the necessary tools for a better evaluation and comparison of multi-objective algorithms. However, the multi-objective literature for the PFSP frequently uses quality measures that have been shown to be misleading. For example, in the two most recent papers reviewed, BIB001 BIB002 , metrics like the generational distance or the maximum deviation from the best Pareto front are used. These metrics, among others, have been shown to be non Pareto-compliant, meaning that they can give a better value for a given Pareto approximation front B and a worse one for another front A even in a case where A ≺ B. What is worse, comprehensive empirical evaluations of quality measures have shown that the most frequently used measures are non Pareto-compliant and give wrong and misleading results more often than not. Therefore, special attention must be given to the choice of quality measures to ensure sound and generalizable results. Three main approaches are usually distinguished for performance assessment: dominance ranking, quality indicators and empirical attainment functions. These three approaches range from the straightforward and easy to compute, in the case of dominance ranking, to the not so easy and computationally intensive attainment functions. In this paper, we choose the hypervolume ($I_H$) and the unary multiplicative epsilon ($I_\varepsilon^1$) indicators. The choice is first motivated by the fact that dominance ranking is best observed when comparing one algorithm against another. By doing so, the number of times the solutions given by the first algorithm strongly, regularly or weakly dominate those given by the second gives a direct picture of the performance assessment between the two. The problem is that with the 23 algorithms compared in this paper (see next section) the number of possible algorithm pairs is 253, and therefore this type of analysis becomes impractical. The same conclusion can be reached for the empirical attainment functions, because these also have to be compared in pairs. Furthermore, the computation of attainment functions is costly and the outcome has to be examined graphically pair by pair. As a result, this type of analysis is not useful in our case. $I_H$ and $I_\varepsilon^1$, on the contrary, are Pareto-compliant and represent the state-of-the-art as far as quality indicators are concerned. Additionally, combining the analysis of these two indicators is a powerful approach, since if the two indicators provide contradictory conclusions for two algorithms, it means that they are incomparable. In the following we give some additional details on how these two indicators are calculated. The hypervolume indicator $I_H$, first introduced by Zitzler and Thiele (1999), measures the area (in the case of two objectives) covered by the approximated Pareto front given by one algorithm. A reference point is used for the two objectives in order to bound this area. A greater value of $I_H$ indicates both better convergence to and better coverage of the optimal Pareto front. Calculating the hypervolume can be costly; the algorithm we use already calculates a normalized and scaled value. The binary epsilon indicator $I_\varepsilon$, proposed initially by Zitzler et al.
(2003), is calculated as follows. Given two approximation sets A and B produced by two algorithms, the binary multiplicative epsilon indicator is defined as $I_\varepsilon(A,B) = \max_{x^B \in B} \min_{x^A \in A} \max_{j} \frac{f_j(x^A)}{f_j(x^B)}$, where $x^A$ and $x^B$ are the solutions given by algorithms A and B, respectively, and $j$ indexes the objectives. Notice that such a binary indicator would require calculating all possible pairs of algorithms. However, a unary version $I_\varepsilon^1$ has been proposed in which the approximation set B is substituted by the best known Pareto front. This is an interesting indicator since it tells us by how much ($\varepsilon$) an approximation set is worse than the best known Pareto front in the best case. Therefore, $\varepsilon$ gives us a direct performance measure. Note, however, that in our case some objectives might take a value of zero (for example, tardiness). Also, the objectives must be normalized. Therefore, for the calculation of the $I_\varepsilon^1$ indicator, we first normalize and translate each objective, i.e., in the previous calculation, $f_j(x^A)$ and $f_j(x^B)$ are replaced by $\frac{f_j(x^A) - f_j^-}{f_j^+ - f_j^-} + 1$ and $\frac{f_j(x^B) - f_j^-}{f_j^+ - f_j^-} + 1$, respectively, where $f_j^+$ and $f_j^-$ are the maximum and minimum known values for a given objective $j$, respectively. As a result, our normalized $I_\varepsilon^1$ indicator will take values between 1 and 2. A value of one for a given algorithm means that its approximation set is not dominated by the best known one.
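As a rough illustration of the two chosen indicators, the following Python sketch (with invented fronts and bounds; this is not the actual implementation used in this paper) computes the normalized unary epsilon indicator exactly as described above, together with a simple bi-objective hypervolume:

```python
# Minimal sketch of the two quality indicators described above, for the
# bi-objective minimization case. Fronts and bounds are invented.

def normalize(v, f_min, f_max):
    # Map each objective to [1, 2] so that ratios are well defined even
    # when an objective (e.g. tardiness) can take the value zero.
    return [(x - lo) / (hi - lo) + 1 for x, lo, hi in zip(v, f_min, f_max)]

def unary_epsilon(front, best_front, f_min, f_max):
    """Normalized I_eps^1 of 'front' w.r.t. the best known front."""
    A = [normalize(a, f_min, f_max) for a in front]
    B = [normalize(b, f_min, f_max) for b in best_front]
    return max(min(max(ai / bi for ai, bi in zip(a, b)) for a in A)
               for b in B)

def hypervolume_2d(front, ref):
    """Area dominated by a bi-objective front, bounded by point 'ref'."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(set(front)):      # increasing first objective
        if f2 < prev_f2:                   # non-dominated points only
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

best = [(90, 60), (100, 40), (120, 25)]    # best known Pareto front
front = [(95, 65), (105, 45)]              # front given by an algorithm
f_min, f_max = (90, 25), (130, 70)
print(unary_epsilon(front, best, f_min, f_max))  # > 1: dominated front
print(hypervolume_2d(front, ref=(130, 70)))
```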
A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches for the flowshop problem <s> In a general flow-shop situation, where all the jobs must pass through all the machines in the same order, certain heuristic algorithms propose that the jobs with higher total process time should be given higher priority than the jobs with less total process time. Based on this premise, a simple algorithm is presented in this paper, which produces very good sequences in comparison with existing heuristics. The results of the proposed algorithm have been compared with the results from 15 other algorithms in an independent study by Park [13], who shows that the proposed algorithm performs especially well on large flow-shop problems in both the static and dynamic sequencing environments. <s> BIB001 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches for the flowshop problem <s> Abstract In this paper, we propose a multi-objective genetic algorithm and apply it to flowshop scheduling. The characteristic features of our algorithm are its selection procedure and elite preserve strategy. The selection procedure in our multi-objective genetic algorithm selects individuals for a crossover operation based on a weighted sum of multiple objective functions with variable weights. The elite preserve strategy in our algorithm uses multiple elite solutions instead of a single elite solution. That is, a certain number of individuals are selected from a tentative set of Pareto optimal solutions and inherited to the next generation as elite individuals. In order to show that our approach can handle multi-objective optimization problems with concave Pareto fronts, we apply the proposed genetic algorithm to a two-objective function optimization problem with a concave Pareto front. Last, the performance of our multi-objective genetic algorithm is examined by applying it to the flowshop scheduling problem with two objectives: to minimize the makespan and to minimize the total tardiness. We also apply our algorithm to the flowshop scheduling problem with three objectives: to minimize the makespan, to minimize the total tardiness, and to minimize the total flowtime. <s> BIB002 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches for the flowshop problem <s> This article deals with the development of a heuristic for scheduling in a flowshop with the objective of minimizing the makespan and maximum tardiness of a job. The heuristic makes use of the simulated annealing technique. The proposed heuristic is relatively evaluated against the existing heuristic for scheduling to minimize the weighted sum of the makespan and maximum tardiness of a job. The results of the computational evaluation reveal that the proposed heuristic performs better than the existing one. <s> BIB003 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Pareto approaches for the flowshop problem <s> This paper adapts metaheuristic methods to develop Pareto optimal solutions to multi-criteria production scheduling problems. Approach is inspired by enhanced versions of genetic algorithms. Method first extends the Nondominated Sorting Genetic Algorithm (NSGA), a method recently proposed to produce Pareto-optimal solutions to numerical multi-objective problems. Multi-criteria flowshop scheduling is addressed next. 
Multi-criteria job shop scheduling is subsequently examined. Lastly the multi-criteria open shop problem is solved. Final solutions to each are Pareto optimal. The paper concludes with a statistical comparison of the performance of the basic NSGA to NSGA augmented by elitist selection. <s> BIB004
We now detail the algorithms that have been re-implemented and tested among those proposed specifically for the flowshop scheduling problem. These methods have already been reviewed in Section 3; here we extend some details about them and about their re-implementation. The MOGA algorithm of BIB002 was designed to tackle the multi-objective flowshop problem. It is a simple genetic algorithm with a modified selection operator. During selection, a set of weights for the objectives is generated; in this way the algorithm tends to distribute the search toward different directions. The authors also incorporate an elite preservation mechanism which copies several solutions from the current Pareto front to the next generation. We will refer to our MOGA implementation as MOGA_Murata. BIB003 presented a simple simulated annealing algorithm which tries to minimize the weighted sum of two objectives. The best solution among those generated by the Earliest Due Date (EDD), Least Static Slack (LSS) and NEH (from the heuristic of BIB001) methods is selected as the initial solution. The adjacent interchange scheme (AIS) is used to generate a neighborhood of the current solution. Notice that this algorithm, referred to as SA_Chakravarty, is not a real Pareto approach since the objectives are weighted. However, we have included it in the comparison in order to get an idea of how such methods perform in practice. We "simulate" a Pareto approach by running SA_Chakravarty 100 times with different weight combinations of the objectives; all 100 resulting solutions are analyzed and the best non-dominated subset is given as the result (a sketch of this procedure is given at the end of this subsection). BIB004 proposed a modification of the well known NSGA procedure (see next section) and adapted it to the flowshop problem. This algorithm, referred to as ENGA, differs from NSGA in that it incorporates elitism: the parent and offspring populations are combined into a single set, a non-dominated sorting is applied, and 50% of the non-dominated solutions are copied to the parent population of the following generation. The original MOGA of BIB002 was later enhanced with a different way of distributing the weights during the run of the algorithm. The proposed weight specification method makes use of a cellular structure which permits a better selection of weights in order to find a finer approximation of the optimal Pareto front. We refer to this later algorithm as CMOGA. Suresh and Mohanasundaram (2004) proposed a Pareto archived simulated annealing (PASA) method. A new perturbation mechanism called the "segment-random insertion (SRI)" scheme is used to generate the neighborhood of a given sequence, and an archive contains the non-dominated solution set. A randomly generated sequence is used as the initial solution. The SRI scheme generates a neighborhood set of candidate solutions, and each one is used to update the archive. A fitness function that is a scaled weighted sum of the objective functions is used to select a new current solution. A restart strategy and a re-annealing method are also implemented. We refer to this method as MOSA_Suresh. A multi-objective tabu search method called MOTS was also developed. The algorithm works with several paths of solutions in parallel, each with its own tabu list. A set of initial solutions is generated using a heuristic, and a local search is applied to the set of current solutions to generate several new solutions. A clustering procedure ensures that the size of the current solution set remains constant.
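The following minimal Python sketch illustrates how a weighted-sum method is turned into a "simulated" Pareto approach, as just described for SA_Chakravarty; weighted_sum_sa is a hypothetical placeholder for the actual simulated annealing routine, and the bi-objective weight grid mirrors the 100 runs mentioned above.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def nondominated(points):
    """Best non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]


def simulated_pareto(instance, weighted_sum_sa, runs=100):
    """Run the weighted-sum solver once per weight combination and return
    the non-dominated subset of all the solutions obtained."""
    results = []
    for i in range(runs):
        w = i / (runs - 1)                 # weight of the first objective
        results.append(weighted_sum_sa(instance, weights=(w, 1.0 - w)))
    return nondominated(results)
```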
A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Suresh and Mohanasundaram <s> We propose a hybrid algorithm for finding a set of nondominated solutions of a multi objective optimization problem. In the proposed algorithm, a local search procedure is applied to each solution (i.e., each individual) generated by genetic operations. Our algorithm uses a weighted sum of multiple objectives as a fitness function. The fitness function is utilized when a pair of parent solutions are selected for generating a new solution by crossover and mutation operations. A local search procedure is applied to the new solution to maximize its fitness value. One characteristic feature of our algorithm is to randomly specify weight values whenever a pair of parent solutions are selected. That is, each selection (i.e., the selection of two parent solutions) is performed by a different weight vector. Another characteristic feature of our algorithm is not to examine all neighborhood solutions of a current solution in the local search procedure. Only a small number of neighborhood solutions are examined to prevent the local search procedure from spending almost all available computation time in our algorithm. High performance of our algorithm is demonstrated by applying it to multi objective flowshop scheduling problems. <s> BIB001 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Suresh and Mohanasundaram <s> This paper shows how the performance of evolutionary multiobjective optimization (EMO) algorithms can be improved by hybridization with local search. The main positive effect of the hybridization is the improvement in the convergence speed to the Pareto front. On the other hand, the main negative effect is the increase in the computation time per generation. Thus, the number of generations is decreased when the available computation time is limited. As a result, the global search ability of EMO algorithms is not fully utilized. These positive and negative effects are examined by computational experiments on multiobjective permutation flowshop scheduling problems. Results of our computational experiments clearly show the importance of striking a balance between genetic search and local search. In this paper, we first modify our former multiobjective genetic local search (MOGLS) algorithm by choosing only good individuals as initial solutions for local search and assigning an appropriate local search direction to each initial solution. Next, we demonstrate the importance of striking a balance between genetic search and local search through computational experiments. Then we compare the modified MOGLS with recently developed EMO algorithms: the strength Pareto evolutionary algorithm and revised nondominated sorting genetic algorithm. Finally, we demonstrate that a local search can be easily combined with those EMO algorithms for designing multiobjective memetic algorithms. <s> BIB002 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Suresh and Mohanasundaram <s> This paper addresses the flowshop scheduling problem with multiple performance objectives in such a way as to provide the decision maker with approximate Pareto optimal solutions. It is well known that the partial enumeration constructive heuristic NEH and its adaptations perform well for single objectives such as makespan, total tardiness and flowtime. 
In this paper, we develop a similar heuristic using the concept of Pareto dominance when comparing partial and complete schedules. The heuristic is tested on problems involving combinations of the above criteria. For the two-machine case, and the pairs of objectives: (i) makespan and maximum tardiness, (ii) makespan and total tardiness, the heuristic is compared with branch-and-bound algorithms proposed in the literature. For two and more than two machines, and the criteria combinations considered in this article, the heuristic performance is tested against constructive heuristics reported in the literature. By means of an illustrative example, it is shown that a genetic algorithm from the literature performs better when starting from heuristic solutions rather than random solutions. <s> BIB003 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Suresh and Mohanasundaram <s> This paper addresses flowshop scheduling problems with multiple performance criteria in such a way as to provide the decision maker with approximate Pareto optimal solutions. Genetic algorithms have attracted the attention of researchers in the nineties as a promising technique for solving multi-objective combinatorial optimization problems. We propose a genetic local search algorithm with features such as preservation of dispersion in the population, elitism, and use of a parallel multi-objective local search so as intensify the search in distinct regions. The concept of Pareto dominance is used to assign fitness to the solutions and in the local search procedure. The algorithm is applied to the flowshop scheduling problem for the following two pairs of objectives: (i) makespan and maximum tardiness; (ii) makespan and total tardiness. For instances involving two machines, the algorithm is compared with Branch-and-Bound algorithms proposed in the literature. For such instances and larger ones, involving up to 80 jobs and 20 machines, the performance of the algorithm is compared with two multi-objective genetic local search algorithms proposed in the literature. Computational results show that the proposed algorithm yields a reasonable approximation of the Pareto optimal set. <s> BIB004 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Suresh and Mohanasundaram <s> Abstract Most of research in production scheduling is concerned with the optimization of a single criterion. However the analysis of the performance of a schedule often involves more than one aspect and therefore requires a multi-objective treatment. In this paper we first present ( Section 1 ) the general context of multi-objective production scheduling, analyze briefly the different possible approaches and define the aim of this study i.e. to design a general method able to approximate the set of all the efficient schedules for a large set of scheduling models. Then we introduce ( Section 2 ) the models we want to treat––one machine, parallel machines and permutation flow shops––and the corresponding notations. The method used––called multi-objective simulated annealing––is described in Section 3 . Section 4 is devoted to extensive numerical experiments and their analysis. Conclusions and further directions of research are discussed in the last section. 
<s> BIB005 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Suresh and Mohanasundaram <s> In this paper the problem of permutation flow shop scheduling with the objectives of minimizing the makespan and total flow time of jobs is considered. A Pareto-ranking based multi-objective genetic algorithm, called a Pareto genetic algorithm (GA) with an archive of non-dominated solutions subjected to a local search (PGA-ALS) is proposed. The proposed algorithm makes use of the principle of non-dominated sorting, coupled with the use of a metric for crowding distance being used as a secondary criterion. This approach is intended to alleviate the problem of genetic drift in GA methodology. In addition, the proposed genetic algorithm maintains an archive of non-dominated solutions that are being updated and improved through the implementation of local search techniques at the end of every generation. A relative evaluation of the proposed genetic algorithm and the existing best multi-objective algorithms for flow shop scheduling is carried by considering the benchmark flow shop scheduling problems. The non-dominated sets obtained from each of the existing algorithms and the proposed PGA-ALS algorithm are compared, and subsequently combined to obtain a net non-dominated front. It is found that most of the solutions in the net non-dominated front are yielded by the proposed PGA-ALS. <s> BIB006 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Suresh and Mohanasundaram <s> Flow shop problems as a typical manufacturing challenge have gained wide attention in academic fields. In this paper, we consider a bi-criteria permutation flow shop scheduling problem, where weighted mean completion time and weighted mean tardiness are to be minimized simultaneously. Since a flow shop scheduling problem has been proved to be NP-hard in strong sense, an effective multi-objective particle swarm (MOPS), exploiting a new concept of the Ideal Point and a new approach to specify the superior particle's position vector in the swarm, is designed and used for finding locally Pareto-optimal frontier of the problem. To prove the efficiency of the proposed algorithm, various test problems are solved and the reliability of the proposed algorithm, based on some comparison metrics, is compared with a distinguished multi-objective genetic algorithm, i.e. SPEA-II. The computational results show that the proposed MOPS performs better than the genetic algorithm, especially for the large-sized problems. <s> BIB007 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Suresh and Mohanasundaram <s> Abstract Multi-objective optimization using evolutionary algorithms identifies Pareto-optimal alternatives or their close approximation by means of a sequence of successive local improvement moves. While several successful applications to combinatorial optimization problems are known, studies of underlying problem structures are still scarce. The paper presents a study of the problem structure of multi-objective permutation flow shop scheduling problems and investigates the effectiveness of local search neighborhoods within an evolutionary search framework. First, small problem instances with up to six objective functions for which the optimal alternatives are known are studied. Second, benchmark instances taken from literature are investigated. 
It turns out for the investigated data sets that the Pareto-optimal alternatives are found relatively concentrated in alternative space. Also, it can be shown that no single neighborhood operator is able to equally identify all Pareto-optimal alternatives. Taking this into consideration, significant improvements have been obtained by combining different neighborhood structures into a multi-operator search framework. <s> BIB008
The algorithm also makes use of an external archive for storing all the non-dominated solutions found during the execution. After some initial experiments we found that, under the considered stopping criterion (to be detailed later), fewer than 12 iterations were carried out. This, together with the fact that the diversification method is not sufficiently clear in the original text, has resulted in our implementation not including this procedure. The initialization procedure of MOTS takes most of the allotted CPU time for large values of n. Considering the large neighborhood employed, this results in extremely lengthy computations for larger n values. BIB004 proposed a genetic local search algorithm with the following features: preservation of the population's diversity, elitism (a subset of the current Pareto front is directly copied to the next generation) and the use of a multi-objective local search. The concept of Pareto dominance is used to assign fitness (using the non-dominated sorting procedure and the crowding measure, both proposed for NSGAII) to the solutions and in the local search procedure. We refer to this method as MOGALS_Arroyo. A multi-objective simulated annealing (MOSA) algorithm was also presented in the literature. The algorithm starts with an initialization procedure which generates two initial solutions using simple and fast heuristics. These sequences are enhanced by three improvement schemes and are later used, alternately, as the current solution of the simulated annealing method. MOSA tries to obtain non-dominated solutions through the implementation of a simple probability function that attempts to generate solutions on the Pareto optimal front. The probability function is varied in such a way that the entire objective space is covered uniformly, obtaining as many non-dominated and well dispersed solutions as possible. We refer to this algorithm as MOSA_Varadharajan. BIB006 proposed a genetic algorithm which we refer to as PGA_ALS. This algorithm uses an initialization procedure which generates four good initial solutions that are introduced into a random population. PGA_ALS handles a working population and an external one. The internal one evolves using a Pareto-ranking based procedure similar to that used in NSGAII. A crowding procedure is also proposed and used as a secondary selection criterion. The non-dominated solutions are stored in the external archive, and two different local searches are then applied to half of the archive's solutions to improve the quality of the returned Pareto front. Finally, we have also re-implemented PILS from BIB008. This algorithm is based on iterated local search, which in turn relies on two main principles: intensification, using a variable neighborhood local search, and diversification, using a perturbation procedure. The Pareto dominance relationship is used to store the non-dominated solutions. This scheme is repeated through successive iterations to reach favorable regions of the search space. Notice that at the time of writing, this last algorithm had not yet been published. Notice also that among the 16 multi-objective PFSP-specific papers reviewed in Section 3, we re-implement a total of 10. We have chosen not to re-implement the GAs of BIB001 and BIB002 since they were shown to be inferior to a multi-objective tabu search and to some other methods. Some rather general methods, such as that of BIB005, have been applied to many scheduling problems; this generality and the lack of details have deterred us from attempting a re-implementation.
BIB003 proposed only constructive heuristics. Finally, the hybrid Particle Swarm Optimization (PSO) method of BIB007 is incredibly complex, making use of parallel programming techniques, and therefore we have chosen not to re-implement it.
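Several of the methods described above (MOSA_Suresh, MOSA_Varadharajan, PGA_ALS, MOTS and PILS) maintain an external archive of non-dominated solutions. The sketch below shows, under the assumption that solutions are represented by their objective vectors, the generic archive update step they share, including the duplicate deletion mentioned later in the experimental details; it is an illustration, not any single author's implementation.

```python
def dominates(a, b):
    """Pareto dominance between objective vectors (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def update_archive(archive, candidate):
    """Insert `candidate` into the list `archive` if it is not a duplicate
    and is not dominated; prune any archived vectors it dominates.
    Returns True if the candidate was accepted."""
    if candidate in archive:                            # duplicate deletion
        return False
    if any(dominates(a, candidate) for a in archive):   # dominated: reject
        return False
    archive[:] = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)
    return True
```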
A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Other general Pareto algorithms <s> In trying to solve multiobjective optimization problems, many traditional methods scalarize the objective vector into a single objective. In those cases, the obtained solution is highly sensitive to the weight vector used in the scalarization process and demands that the user have knowledge about the underlying problem. Moreover, in solving multiobjective problems, designers may be interested in a set of Pareto-optimal points, instead of a single point. Since genetic algorithms (GAs) work with a population of points, it seems natural to use GAs in multiobjective optimization problems to capture a number of solutions simultaneously. Although a vector evaluated GA (VEGA) has been implemented by Schaffer and has been tried to solve a number of multiobjective problems, the algorithm seems to have bias toward some regions. In this paper, we investigate Goldberg's notion of nondominated sorting in GAs along with a niche and speciation method to find multiple Pareto-optimal points simultaneously. The proof-of-principle results obtained on three problems used by Schaffer and others suggest that the proposed method can be extended to higher dimensional and more difficult multiobjective problems. A number of suggestions for extension and application of the algorithm are also discussed. <s> BIB001 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Other general Pareto algorithms <s> We introduce a simple evolution scheme for multiobjective optimization problems, called the Pareto Archived Evolution Strategy (PAES). We argue that PAES may represent the simplest possible nontrivial algorithm capable of generating diverse solutions in the Pareto optimal set. The algorithm, in its simplest form, is a (1 + 1) evolution strategy employing local search but using a reference archive of previously found solutions in order to identify the approximate dominance ranking of the current and candidate solution vectors. (1 + 1)-PAES is intended to be a baseline approach against which more involved methods may be compared. It may also serve well in some real-world applications when local search seems superior to or competitive with population-based methods. We introduce (1 + lambda) and (mu + lambda) variants of PAES as extensions to the basic algorithm. Six variants of PAES are compared to variants of the Niched Pareto Genetic Algorithm and the Nondominated Sorting Genetic Algorithm over a diverse suite of six test functions. Results are analyzed and presented using techniques that reduce the attainment surfaces generated from several optimization runs into a set of univariate distributions. This allows standard statistical analysis to be carried out for comparative purposes. Our results provide strong evidence that PAES performs consistently well on a range of multiobjective optimization tasks. <s> BIB002 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Other general Pareto algorithms <s> We introduce a new multiobjective evolutionary algorithm called PESA (the Pareto Envelope-based Selection Algorithm), in which selection and diversity maintenance are controlled via a simple hyper-grid based scheme. 
PESA's selection method is relatively unusual in comparison with current well known multiobjective evolutionary algorithms, which tend to use counts based on the degree to which solutions dominate others in the population. The diversity maintenance method is similar to that used by certain other methods. The main attraction of PESA is the integration of selection and diversity maintenance, whereby essentially the same technique is used for both tasks. The resulting algorithm is simple to describe, with full pseudocode provided here and real code available from the authors. We compare PESA with two recent strong-performing MOEAs on some multiobjective test problems recently proposed by Deb. We find that PESA emerges as the best method overall on these problems. <s> BIB003 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Other general Pareto algorithms <s> We describe a new selection technique for evolutionary multiobjective optimization algorithms in which the unit of selection is a hyperbox in objective space. In this technique, instead of assigning a selective fitness to an individual, selective fitness is assigned to the hyperboxes in objective space which are currently occupied by at least one individual in the current approximation to the Pareto frontier. A hyperbox is thereby selected, and the resulting selected individual is randomly chosen from this hyperbox. This method of selection is shown to be more sensitive to ensuring a good spread of development along the Pareto frontier than individual-based selection. The method is implemented in a modern multiobjective evolutionary algorithm, and performance is tested by using Deb's test suite of `T' functions with varying properties. The new selection technique is found to give significantly superior results to the other methods compared, namely PAES, PESA, and SPEA; each is a modern multi-objective optimization algorithm previously found to outperform earlier approaches on various problems. <s> BIB004 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Other general Pareto algorithms <s> This paper demonstrates how adaptive population-sizing and epsilon-dominance archiving can be combined with the Nondominated Sorted Genetic Algorithm-II (NSGAII) to enhance the algorithm's efficiency, reliability, and ease-of-use. Four versions of the enhanced Epsilon Dominance NSGA-II (e-NSGAII) are tested on a standard suite of evolutionary multiobjective optimization test problems. Comparative results for the four variants of the (e-NSGAII demonstrate that adapting population size based on online changes in the epsilon dominance archive size can enhance performance. The best performing version of the (e-NSGAII is also compared to the original NSGAII and the (eMOEA on the same suite of test problems. The performance of each algorithm is measured using three running performance metrics, two of which have been previously published, and one new metric proposed by the authors. Results of the study indicate that the new version of the NSGAII proposed in this paper demonstrates improved performance on the majority of two-objective test problems studied. <s> BIB005
The multi-objective literature is replete with interesting proposals, mainly in the form of evolutionary algorithms, that have not been applied to the PFSP before. Therefore, in this section we review some of these methods that have been re-implemented and adapted to the PFSP. BIB001 proposed the well known non-dominated sorting genetic algorithm, referred to as NSGA. This method differs from a simple genetic algorithm only in the way selection is performed. The non-dominated sorting procedure (NDS) iteratively divides the entire population into different Pareto fronts. The individuals are assigned a fitness value that depends on the Pareto front they belong to. Furthermore, this fitness value is modified by a factor that is calculated according to the number of individuals crowding a portion of the objective space; a sharing parameter σ_share is used in this case. All other features are those of a standard genetic algorithm. Zitzler and Thiele (1999) presented another genetic algorithm referred to as SPEA. The most important characteristic of this method is that all non-dominated solutions are stored in an external population. The fitness evaluation of an individual depends on the number of solutions from the external population it dominates. The algorithm also incorporates a clustering procedure to reduce the size of the non-dominated set without destroying its characteristics. Finally, the population's diversity is maintained by using the Pareto dominance relationship. Later, Zitzler et al. (2001) proposed an improved SPEAII version that incorporates a different, fine-grained fitness strategy to avoid some drawbacks of the SPEA procedure. Other improvements include a density estimation technique that is an adaptation of the k-th nearest neighbor method, and a new, more complex archive truncation procedure. BIB002 presented another algorithm called PAES. This method employs local search and a population archive. The algorithm is composed of three parts: the first is the candidate solution generator, which keeps an archive of only one solution and generates a new one by random mutation; the second is the candidate solution acceptance function, which has the task of accepting or discarding the new solution; the last is the non-dominated archive, which contains all the non-dominated solutions found so far. According to the authors, this algorithm represents the simplest nontrivial approach to a multi-objective local search procedure. In the same paper, the authors present an enhancement of PAES referred to as (µ + λ)-PAES. Here a population of µ candidate solutions is kept. By using a binary tournament, a single solution is selected and λ mutant solutions are created using random mutation. Hence, a µ + λ population is created and a dominance score is calculated for each individual. µ individuals are selected to update the candidate population while an external archive of non-dominated solutions is maintained. Another genetic algorithm is proposed by BIB003. This method, called PESA, uses an external population EP and an internal one IP in pursuit of a well-spread Pareto front. A selection and replacement procedure based on the degree of crowding is implemented. A simple genetic scheme is used for the evolution of IP while EP contains the non-dominated solutions found. The size of EP is upper bounded, and a hyper-grid based operator eliminates the individuals in the more crowded zones. Later, in BIB004, an enhanced PESAII method is provided.
This algorithm differs from the preceding one only in the selection technique, in which the fitness value is assigned according to a hyperbox calculation in the objective space. In this technique, instead of assigning a selective fitness to an individual, fitness is assigned to the hyperboxes in the objective space which are occupied by at least one element. During the selection process, the hyperbox with the best fitness is selected and an individual is chosen at random among all those inside the selected hyperbox. An evolution of the NSGA was presented later. This algorithm, called NSGAII, uses a new fast non-dominated sorting procedure (FNDS). Unlike in NSGA, here a rank value is assigned to each individual of the population and no parameter is needed to achieve fitness sharing. Also, a crowding value is calculated with a fast procedure and assigned to each element of the population. The selection operator uses the rank and the crowding values to select the better individuals for the mating pool. An efficient elitism procedure is implemented by comparing two successive generations and preserving the best individuals. This NSGAII method is extensively used in the multi-objective literature across the most varied problem domains. Later, yet another GA, called CNSGAII, was introduced. Basically, in this algorithm the crowding procedure is replaced by a clustering approach. The rationale is that once a generation is completed, the previous generation (parent set) has a size of P_size and the current one (offspring set) is of the same size. Combining both populations yields a set of size 2·P_size, but only half of these solutions are needed for the next generation. To select them, the non-dominated sorting procedure is applied first and the clustering procedure second. Another different genetic algorithm, called ε-MOEA, was also studied. It uses two co-evolving populations: the regular one, called P, and an archive A. At each step, two parent solutions are selected, the first from P and the second from A. An offspring is generated and compared with each element of the population P. If the offspring dominates at least a single individual in P then it replaces this individual; the offspring is discarded if it is dominated by P. The offspring is also checked against the individuals in A. In the archive population, ε-dominance is used in the same way; for example, and using the previous notation, a solution x_1 strongly ε-dominates another solution x_2 when it dominates it within a tolerance ε, i.e., in the multiplicative sense for minimization, f_j(x_1) < (1 + ε)·f_j(x_2) for every objective j. Zitzler and Künzli (2004) proposed another method called B-IBEA. The main idea of this method is to define the optimization goal in terms of a binary quality measure and to use it directly in the selection process. B-IBEA performs binary tournaments for mating selection and implements environmental selection by iteratively removing the worst individual from the population and updating the fitness values of the remaining individuals. An ε-indicator is used. In the same work, an adaptive variation called A-IBEA is also presented. An adapted scaling procedure is proposed with the goal of making the algorithm's behavior independent of the tuning of the parameter k used in the basic B-IBEA version. Finally, BIB005 also proposed an NSGAII variation, referred to as ε-NSGAII, which adds ε-dominance archiving and adaptive population sizing. The ε parameter establishes the size of a grid in the objective space; inside each cell of the grid no more than one solution is allowed. Furthermore, the algorithm works by alternating two phases.
It starts using a very small population of 10 individuals, and several runs of NSGAII are executed. During these runs all the non-dominated solutions are copied to an external set. When there are no further improvements in the current Pareto front, the second phase starts, in which the ε-dominance procedure is applied to the external archive. The 23 re-implemented algorithms, either specific to the PFSP or general multi-objective proposals, are summarized in Table 2. [Insert Table 2 about here]
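Because so many of the methods in this section rely on NSGAII components, the following Python sketch shows the two procedures referred to repeatedly above: the fast non-dominated sorting (FNDS) that assigns each individual to a ranked front, and the crowding distance used as a secondary selection criterion. It follows the standard formulation and is meant only as a compact illustration, not the authors' Delphi code.

```python
def fnds(objs):
    """Fast non-dominated sorting: return a list of fronts (lists of
    indices into `objs`), best front first. Minimization is assumed."""
    n = len(objs)

    def dom(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    dominated_by = [[] for _ in range(n)]   # i -> indices that i dominates
    dom_count = [0] * n                     # number of solutions dominating i
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dom(objs[i], objs[j]):
                dominated_by[i].append(j)
            elif dom(objs[j], objs[i]):
                dom_count[i] += 1
    fronts, current = [], [i for i in range(n) if dom_count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts


def crowding_distance(objs, front):
    """Crowding distance of each index in `front`; boundary solutions get
    an infinite distance so that they are always preferred."""
    dist = {i: 0.0 for i in front}
    for k in range(len(objs[front[0]])):
        order = sorted(front, key=lambda i: objs[i][k])
        lo, hi = objs[order[0]][k], objs[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi > lo:
            for prev, cur, nxt in zip(order, order[1:], order[2:]):
                dist[cur] += (objs[nxt][k] - objs[prev][k]) / (hi - lo)
    return dist
```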
A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Benchmark and computational evaluation details <s> Abstract In this paper, we propose 260 randomly generated scheduling problems whose size is greater than that of the rare examples published. Such sizes correspond to real dimensions of industrial problems. The types of problems that we propose are: the permutation flow shop, the job shop and the open shop scheduling problems. We restrict ourselves to basic problems: the processing times are fixed, there are neither set-up times nor due dates nor release dates, etc. Then, the objective is the minimization of the makespan. <s> BIB001 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Benchmark and computational evaluation details <s> The problem of scheduling in static flowshops is considered with the objective of minimizing mean or total tardiness of jobs. A heuristic algorithm based on the simulated annealing (SA) technique is developed. The salient features of the proposed SA algorithm are the development of two new perturbation schemes for use in the proposed SA algorithm and a new improvement scheme to improve the quality of the solutions. The proposed algorithm is evaluated by using the benchmark problems available in the literature. The performance of the proposed SA algorithm is found to be very good, and the proposed heuristic performs better than the existing heuristics. <s> BIB002
Each one of the 23 considered algorithms is tested against a new benchmark set. There are no known comprehensive benchmarks in the literature for the multi-objective PFSP. The only reference we know of is the work of Basseur (2005), where a small set of 14 instances is proposed. In order to carry out a comprehensive and sound analysis, a much larger set is needed, so we augment the well known instances of BIB001. This benchmark is organized in 12 groups with 10 instances each. The groups contain different combinations of the number of jobs n and the number of machines m. The n × m combinations are: {20, 50, 100} × {5, 10, 20}, 200 × {10, 20} and 500 × 20. The processing times p_ij in Taillard's instances are generated from a uniform distribution in the range [1, 99]. We take the first 110 instances and drop the last 10 instances, in the 500 × 20 group, since this size is deemed too large for the experiments. As regards the due dates for the tardiness criterion, we use the same approach as BIB002. In this work, a tight due date d_j is assigned to each job j ∈ N following the expression d_j = P_j · (1 + 3 · random), where P_j = Σ_{i=1}^{m} p_ij is the sum of the processing times over all machines for job j and random is a random number uniformly distributed in [0, 1]. This method of generating due dates results in very tight to relatively tight due dates depending on the actual value of random for each job, i.e., if random is close to 0, then the due date of the job is going to be very tight, as it would be more or less the sum of its processing times. As a result, the job will have to be sequenced very early to avoid any tardiness. These 110 augmented instances can be downloaded from http://www.upv.es/gio/rruiz. Each algorithm has been carefully re-implemented following all the explanations given by the authors in the original papers. We have re-implemented all the algorithms in Delphi 2006. It should be noted that all methods share most structures and functions and that the same level of coding has been used, i.e., all of them contain the most common optimizations and speed-ups. Fast non-dominated sorting (FNDS) is frequently used by most methods. Unless indicated otherwise by the authors in the original papers, the crossover and mutation operators used for the genetic methods are the two point order crossover and the insertion mutation, respectively. Unless explicitly stated, all algorithms incorporate a duplicate-deletion procedure in the populations as well as in the non-dominated archives. The stopping criterion for most algorithms is given by a time limit depending on the size of the instance: the algorithms are stopped after a CPU running time of n · m/2 · t milliseconds, where t is an input parameter. Giving more time to larger instances is a natural way of separating the results from the lurking "total CPU time" variable; otherwise, if worse results were obtained for large instances, it would not be possible to tell whether this is due to the limited CPU time or to the instance size. Every algorithm is run 10 independent times (replicates) on each instance with three different stopping criteria: t = 100, 150 and 200 milliseconds. This means that for the largest instances of 200 × 20 a maximum of 400 seconds of real CPU time (not wall time) is allowed. For every instance, stopping time and replicate we use the same random seed as a common variance reduction technique. We run every algorithm on a cluster of 12 identical computers with Intel Core 2 Duo E6600 processors running at 2.4 GHz and with 1 GByte of RAM.
For the tests, each algorithm and instance replicate is randomly assigned to a single computer and the results are collected at the end. According to Section 2, the three most common criteria for the PFSP are makespan, total completion time and total tardiness, all of the minimization type. Therefore, all experiments are conducted for the three following criteria combinations: 1) makespan and total tardiness, 2) total completion time and total tardiness, and 3) makespan and total completion time. A total of 75,900 data points are collected per criteria combination if we consider the 23 algorithms, 110 instances, 10 replicates per instance and three different stopping time criteria. In reality, each data point is an approximated Pareto front containing a set of vectors with the objective values. In total there are 75,900 · 3 = 227,700 Pareto fronts, taking into account the three criteria combinations. The experiments have required approximately 5,100 CPU hours. From the 23 · 10 · 3 = 690 available Pareto front approximations for each instance and criteria combination, an FNDS is carried out and the best non-dominated Pareto front is stored. These "best" 110 Pareto fronts for each criteria combination are available for future use by the research community and are also downloadable from http://www.upv.es/gio/rruiz. Additionally, a set of best Pareto fronts is available for each of the three different stopping time criteria. These last Pareto fronts are used for obtaining the reference points for the hypervolume (I_H) indicator, which are fixed to 1.2 times the worst known value for each objective, and they also serve as the reference set in the multiplicative epsilon indicator (I_ε^1).
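The following Python sketch summarizes, under stated assumptions, the benchmark and evaluation bookkeeping of this section: the due date generation (the expression d_j = P_j · (1 + 3 · random) is our reconstruction of the formula referenced above), the best known front of an instance as the non-dominated subset of all collected fronts, and the hypervolume reference point fixed at 1.2 times the worst known value of each objective. Function names are illustrative.

```python
import random


def generate_due_dates(p, rng=random):
    """p[i][j] is the processing time of job j on machine i, as in
    Taillard's instances (uniform integers in [1, 99]). Returns one tight
    due date per job: d_j = P_j * (1 + 3 * random), random ~ U[0, 1]."""
    m, n = len(p), len(p[0])
    return [sum(p[i][j] for i in range(m)) * (1.0 + 3.0 * rng.random())
            for j in range(n)]


def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def best_known_front(all_fronts):
    """Non-dominated subset of all fronts collected for one instance."""
    pool = list({tuple(p) for front in all_fronts for p in front})
    return [p for p in pool if not any(dominates(q, p) for q in pool)]


def hypervolume_reference(all_fronts, factor=1.2):
    """Reference point at `factor` times the worst known objective values."""
    pool = [p for front in all_fronts for p in front]
    return [factor * max(p[j] for p in pool) for j in range(len(pool[0]))]
```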
A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Conclusions and future research <s> Abstract In this paper, we propose 260 randomly generated scheduling problems whose size is greater than that of the rare examples published. Such sizes correspond to real dimensions of industrial problems. The types of problems that we propose are: the permutation flow shop, the job shop and the open shop scheduling problems. We restrict ourselves to basic problems: the processing times are fixed, there are neither set-up times nor due dates nor release dates, etc. Then, the objective is the minimization of the makespan. <s> BIB001 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Conclusions and future research <s> We introduce a new multiobjective evolutionary algorithm called PESA (the Pareto Envelope-based Selection Algorithm), in which selection and diversity maintenance are controlled via a simple hyper-grid based scheme. PESA's selection method is relatively unusual in comparison with current well known multiobjective evolutionary algorithms, which tend to use counts based on the degree to which solutions dominate others in the population. The diversity maintenance method is similar to that used by certain other methods. The main attraction of PESA is the integration of selection and diversity maintenance, whereby essentially the same technique is used for both tasks. The resulting algorithm is simple to describe, with full pseudocode provided here and real code available from the authors. We compare PESA with two recent strong-performing MOEAs on some multiobjective test problems recently proposed by Deb. We find that PESA emerges as the best method overall on these problems. <s> BIB002 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Conclusions and future research <s> We describe a new selection technique for evolutionary multiobjective optimization algorithms in which the unit of selection is a hyperbox in objective space. In this technique, instead of assigning a selective fitness to an individual, selective fitness is assigned to the hyperboxes in objective space which are currently occupied by at least one individual in the current approximation to the Pareto frontier. A hyperbox is thereby selected, and the resulting selected individual is randomly chosen from this hyperbox. This method of selection is shown to be more sensitive to ensuring a good spread of development along the Pareto frontier than individual-based selection. The method is implemented in a modern multiobjective evolutionary algorithm, and performance is tested by using Deb's test suite of `T' functions with varying properties. The new selection technique is found to give significantly superior results to the other methods compared, namely PAES, PESA, and SPEA; each is a modern multi-objective optimization algorithm previously found to outperform earlier approaches on various problems. <s> BIB003 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Conclusions and future research <s> This paper addresses flowshop scheduling problems with multiple performance criteria in such a way as to provide the decision maker with approximate Pareto optimal solutions. Genetic algorithms have attracted the attention of researchers in the nineties as a promising technique for solving multi-objective combinatorial optimization problems. 
We propose a genetic local search algorithm with features such as preservation of dispersion in the population, elitism, and use of a parallel multi-objective local search so as intensify the search in distinct regions. The concept of Pareto dominance is used to assign fitness to the solutions and in the local search procedure. The algorithm is applied to the flowshop scheduling problem for the following two pairs of objectives: (i) makespan and maximum tardiness; (ii) makespan and total tardiness. For instances involving two machines, the algorithm is compared with Branch-and-Bound algorithms proposed in the literature. For such instances and larger ones, involving up to 80 jobs and 20 machines, the performance of the algorithm is compared with two multi-objective genetic local search algorithms proposed in the literature. Computational results show that the proposed algorithm yields a reasonable approximation of the Pareto optimal set. <s> BIB004 </s> A Review and Evaluation of Multiobjective Algorithms for the Flowshop Scheduling Problem <s> Conclusions and future research <s> Abstract Multi-objective optimization using evolutionary algorithms identifies Pareto-optimal alternatives or their close approximation by means of a sequence of successive local improvement moves. While several successful applications to combinatorial optimization problems are known, studies of underlying problem structures are still scarce. The paper presents a study of the problem structure of multi-objective permutation flow shop scheduling problems and investigates the effectiveness of local search neighborhoods within an evolutionary search framework. First, small problem instances with up to six objective functions for which the optimal alternatives are known are studied. Second, benchmark instances taken from literature are investigated. It turns out for the investigated data sets that the Pareto-optimal alternatives are found relatively concentrated in alternative space. Also, it can be shown that no single neighborhood operator is able to equally identify all Pareto-optimal alternatives. Taking this into consideration, significant improvements have been obtained by combining different neighborhood structures into a multi-operator search framework. <s> BIB005
In this paper we have conducted a comprehensive survey of the multi-objective literature for the flowshop, one of the most common and thoroughly studied problems in the scheduling field. This survey complements others that did not deal with multi-machine flowshops. The papers surveyed include exact as well as heuristic techniques for many different multi-objective approaches. Another important contribution of this paper is a complete and comprehensive computational evaluation of 23 different metaheuristics proposed for the Pareto or "a posteriori" multi-objective approach. Recent, state-of-the-art quality measures have been employed in an extensive experiment where the makespan, total completion time and total tardiness criteria have been studied in three different two-criteria combinations. The comparative evaluation not only includes flowshop-specific algorithms but also adaptations of other general methods proposed in the multi-objective optimization literature. A new set of benchmark instances, based on the well known benchmark of BIB001, has been proposed and is currently available online along with the best known Pareto fronts for the tested objectives. A comprehensive statistical analysis of the results has been conducted with both parametric and non-parametric techniques. We have shown the preferable properties of the parametric tests over their non-parametric counterparts, contrary to what is mainstream in the literature. The array of statistical analyses soundly supports the observed performance of the evaluated algorithms. As a result, we have identified the best algorithms from the literature, which, along with the survey, constitute an important study and reference work for further research. Overall, the multi-objective simulated annealing algorithm MOSA_Varadharajan can be regarded as the best performer under our experimental settings. Another consistent performer is the genetic local search method MOGALS_Arroyo of BIB004. Our adapted versions of PESA and PESAII, from BIB002 and BIB003 respectively, have shown a very good performance against many other flowshop-specific algorithms. The recent PILS method from BIB005 has shown promising performance for small instances. In our study we have also shown that different stopping criteria as well as different criteria combinations result in little change, i.e., the algorithms that give the best results do so in a wide array of circumstances.

Table 4: Results for the total completion time and total tardiness criteria. Average quality indicator values for the 23 algorithms tested under the three different termination criteria. Each value is averaged across 110 instances and 10 replicates per instance (1,100 values). For each termination criteria level, the methods are sorted according to I_H.
A Survey on Photo Forgery Detection Methods <s> Introduction <s> The past decade has seen considerable advances in the application of principles from projective geometry to problems in image analysis and computer vision. In this paper, we review a subset of this work, and leverage these results for the purpose of forensic analysis. Specifically, we review three techniques for making metric measurements on planar surfaces from a single image. The resulting techniques should prove useful in forensic settings where real-world measurements are required. <s> BIB001 </s> A Survey on Photo Forgery Detection Methods <s> Introduction <s> The performance of a fragile watermarking method based on discrete cosine transform (DCT) has been improved in this paper by using intelligent optimization algorithms (IOA), namely genetic algorithm, differential evolution algorithm, clonal selection algorithm and particle swarm optimization algorithm. In DCT based fragile watermarking techniques, watermark embedding can usually be achieved by modifying the least significant bits of the transformation coefficients. After the embedding process is completed, transforming the modified coefficients from the frequency domain to the spatial domain produces some rounding errors due to the conversion of real numbers to integers. The rounding errors caused by this transformation process were corrected by the use of intelligent optimization algorithms mentioned above. This paper gives experimental results which show the feasibility of using these optimization algorithms for the fragile watermarking and demonstrate the accuracy of these methods. The performance comparison of the algorithms was also realized. <s> BIB002 </s> A Survey on Photo Forgery Detection Methods <s> Introduction <s> Copy-move forgery is one of the most common type of tampering in digital images. Copy-moves are parts of the image that are copied and pasted onto another part of the same image. Detection methods in general use block-matching methods, which first divide the image into overlapping blocks and then extract features from each block, assuming similar blocks will yield similar features. In this paper we present a block-based approach which exploits texture as feature to be extracted from blocks. Our goal is to study if texture is well suited for the specific application, and to compare performance of several texture descriptors. Tests have been made on both uncompressed and JPEG compressed images. <s> BIB003 </s> A Survey on Photo Forgery Detection Methods <s> Introduction <s> Although the detection of duplicated regions plays an important role in image forensics, most of the existing methods aimed at detecting duplicates are too sensitive to geometric changes in the replicated areas. As a result, a slight rotation can be used not only for the copied region to fit better the scene in the image, but also to hinder the detection of the tampering. In this paper, a novel forensic method is presented to detect duplicated regions, even when the copied portions have undergone reflection, rotation and/or scaling. To achieve this, overlapping blocks of pixels are mapped to log-polar coordinates, and then summed along the angle axis, to produce a one-dimensional (1-D) descriptor invariant to reflection and rotation. Besides, scaling in rectangular coordinates results in a simple translation of the descriptor. The dimension-reduced representation of each block has a favourable impact in the computational cost of the search of similar regions. 
Extensive experimental results, including a comparative evaluation with two existing methods, are presented to demonstrate the effectiveness of the proposed method. <s> BIB004 </s> A Survey on Photo Forgery Detection Methods <s> Introduction <s> One of the principal problems in image forensics is determining if a particular image is authentic or not. This can be a crucial task when images are used as basic evidence to influence judgment like, for example, in a court of law. To carry out such forensic analysis, various technological instruments have been developed in the literature. In this paper, the problem of detecting if an image has been forged is investigated; in particular, attention has been paid to the case in which an area of an image is copied and then pasted onto another zone to create a duplication or to cancel something that was awkward. Generally, to adapt the image patch to the new context a geometric transformation is needed. To detect such modifications, a novel methodology based on scale invariant features transform (SIFT) is proposed. Such a method allows us to both understand if a copy-move attack has occurred and, furthermore, to recover the geometric transformation used to perform cloning. Extensive experimental results are presented to confirm that the technique is able to precisely individuate the altered area and, in addition, to estimate the geometric transformation parameters with high reliability. The method also deals with multiple cloning. <s> BIB005 </s> A Survey on Photo Forgery Detection Methods <s> Introduction <s> With the advancement of the digital image processing software and editing tools, a digital image can be easily manipulated. The detection of image manipulation is very important because an image can be used as legal evidence, in forensics investigations, and in many other fields. The pixel-based image forgery detection aims to verify the authenticity of digital images without any prior knowledge of the original image. There are many ways for tampering an image such as splicing or copy-move, resampling an image (resize, rotate, stretch), addition and removal of any object from the image. In this paper we have discussed various pixel-based techniques for image forgery detection, mainly copy-move and splicing techniques. <s> BIB006 </s> A Survey on Photo Forgery Detection Methods <s> Introduction <s> These days digital image forgery has turned out to be unsophisticated because of capable PCs, propelled image editing softwares and high resolution capturing gadgets. Checking the respectability of pictures and identifying hints of altering without requiring additional pre-embedded information of the picture or pre-installed watermarks are essential examine field. An endeavor is prepared to review the current improvements in the research area of advanced picture fraud detection and comprehensive reference index has been exhibited on passive methods for forgery identification. Passive techniques donot require pre-embedded information in the image. Several image forgery detection techniques are arranged first and after that their summed up organization is produced. Author will review the various image forgery detection techniques along with their results and also compare the various different techniques based on their accuracy. <s> BIB007
The widespread availability of digital cameras and of image editing tools like Adobe Photoshop and Microsoft Paint makes it easy for people to produce doctored images for malicious purposes. Growing interest in image forensics has boosted the development of specialized techniques for detecting image tampering. Image forgery detection is a young research area whose goal is to confirm the authenticity of an image from information collected about its features. Diverse methods have been developed to handle image tampering and forgery and to verify the origin of an image. We evaluate the different groups of algorithms; unfortunately, we have not been able to review every important study here. Digital image forgery detection methods can be classified into two main categories: active methods and passive (blind) methods BIB006 . Active methods require pre-embedded information, such as digital watermarking and steganography. Digital watermarking is a method of embedding secret information in the data and can be divided into two categories, visible and invisible BIB002 . Digital signatures are an example related to steganography. Passive methods can be divided into five groups.
• Pixel Based (copy-move, resampling, splicing, statistical)
• Format Based (JPEG quantization, double JPEG, JPEG blocking) BIB006
• Camera Based (chromatic aberration, color filter array, camera response, sensor noise)
• Physics Based (light direction, light environment)
• Geometric Based (principal point, metric measurements) BIB001
We evaluate the studies on copy-move (or cloning) forgery in this survey. Copy-move forgery detection methods fall into the following three groups.
• Brute Force Techniques
• Block-Based Techniques
• Keypoint-Based Techniques
Brute force techniques are based on exhaustive search and autocorrelation methods BIB007 . Block-based techniques use algorithms such as DCT (Discrete Cosine Transform) BIB003 , PCA (Principal Component Analysis) BIB004 , SVD (Singular Value Decomposition) BIB004 , and DWT (Discrete Wavelet Transform) BIB005 . Keypoint-based techniques use algorithms such as SIFT (Scale Invariant Feature Transform) and SURF (Speeded-Up Robust Features). A digital forgery example from a newspaper report on Saddam and Bill Clinton photographs is shown in Figure 1 .
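As an illustration of the keypoint-based family mentioned above, the following sketch matches SIFT descriptors of an image against themselves to expose duplicated regions. It is a minimal, hypothetical example (the function name and the ratio and offset thresholds are our own choices), assuming an OpenCV build that includes SIFT; it is not the method of any specific cited work.

# Hypothetical keypoint-based copy-move check via SIFT self-matching.
import cv2
import numpy as np

def detect_copy_move_sift(image_path, ratio=0.6, min_offset=20.0):
    """Return keypoint pairs whose descriptors also occur elsewhere in the image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    if descriptors is None or len(keypoints) < 3:
        return []

    # Match each descriptor against all descriptors of the same image.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(descriptors, descriptors, k=3)

    suspicious = []
    for m in matches:
        if len(m) < 3:
            continue
        # m[0] is the trivial self-match; apply Lowe's ratio test to the
        # next two candidates to keep only distinctive matches.
        best, second = m[1], m[2]
        if best.distance < ratio * second.distance:
            p1 = np.array(keypoints[best.queryIdx].pt)
            p2 = np.array(keypoints[best.trainIdx].pt)
            # Ignore matches that are spatially too close: a genuine
            # copy-move pastes the region at some distance away.
            if np.linalg.norm(p1 - p2) > min_offset:
                suspicious.append((tuple(p1), tuple(p2)))
    return suspicious

A cloned region typically produces many such pairs sharing a consistent displacement (or a common affine transform, as recovered in BIB005 ), which is what post-processing stages exploit.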
A Survey on Photo Forgery Detection Methods <s> Basic Steps of the Methods <s> As forgeries have become popular, the importance of forgery detection is much increased. Copy-move forgery, one of the most commonly used methods, copies a part of the image and pastes it into another part of the image. In this paper, we propose a detection method of copy-move forgery that localizes duplicated regions using Zernike moments. Since the magnitude of Zernike moments is algebraically invariant against rotation, the proposed method can detect a forged region even though it is rotated. Our scheme is also resilient to the intentional distortions such as additive white Gaussian noise, JPEG compression, and blurring. Experimental results demonstrate that the proposed scheme is appropriate to identify the forged region by copy-rotate-move forgery. <s> BIB001 </s> A Survey on Photo Forgery Detection Methods <s> Basic Steps of the Methods <s> One of the principal problems in image forensics is determining if a particular image is authentic or not. This can be a crucial task when images are used as basic evidence to influence judgment like, for example, in a court of law. To carry out such forensic analysis, various technological instruments have been developed in the literature. In this paper, the problem of detecting if an image has been forged is investigated; in particular, attention has been paid to the case in which an area of an image is copied and then pasted onto another zone to create a duplication or to cancel something that was awkward. Generally, to adapt the image patch to the new context a geometric transformation is needed. To detect such modifications, a novel methodology based on scale invariant features transform (SIFT) is proposed. Such a method allows us to both understand if a copy-move attack has occurred and, furthermore, to recover the geometric transformation used to perform cloning. Extensive experimental results are presented to confirm that the technique is able to precisely individuate the altered area and, in addition, to estimate the geometric transformation parameters with high reliability. The method also deals with multiple cloning. <s> BIB002 </s> A Survey on Photo Forgery Detection Methods <s> Basic Steps of the Methods <s> A commonly considered image manipulation is to conceal undesirable objects or people in the scene with a region of pixels copied from the same image. Forensic mechanisms aimed at detecting this type of forgeries must also consider other potential types of post-processing, including geometric distortions. In this paper, a new method is proposed to detect duplicated regions, even when the cloned region has undergone reflection, rotation and scaling. The algorithm uses colour-dependent feature vectors to reduce the number of comparisons in the search stage, and one-dimensional (1-D) descriptors, invariant to reflection and rotation, to perform an efficient search in terms of memory usage. Comparison results are presented to evaluate the effectiveness of the proposed method and two existing schemes. <s> BIB003 </s> A Survey on Photo Forgery Detection Methods <s> Basic Steps of the Methods <s> Mammograms are the soft X-ray kind of imaging technique used for the detection of any lesions or cysts in breasts. Digital mammograms have many kinds of artifacts that affect the accuracy of the detection of tumor tissues in the automated Computer Aided Detection (CAD) system for mammograms. Preprocessing, which helps to remove such artifacts, is an important step.
Image preprocessing is used to maintain image quality; in mammogram images there are many artifacts that need to be removed, like labels, patient name, muscle parts, etc., and the region of interest must be enhanced, which helps efficient segmentation and detection of the tumor. The basic objective of this study is to evaluate and discuss the different techniques and approaches proposed to enhance breast cancer images, and to identify an efficient preprocessing technique for mammography. It aims to find the existing preprocessing techniques for mammography images and to discuss the techniques used and their advantages. General Terms: Digital Image Processing, Preprocessing of Mammography and Image Enhancement. <s> BIB004
Although a large number of copy-move forgery detection methods have been proposed, all of them follow the general flow chart given in Figure 2 . As seen in the flow chart, there are two alternative techniques after pre-processing: block-based methods and keypoint-based methods. In both, pre-processing steps are applied; for example, many methods work on gray-level images, so the color channels must be combined first.
Step 1. Image Pre-Processing: the first step. Methods change some specific details of the image, such as image filtering, DCT coefficients, or RGB-to-gray conversion, before the feature extraction step BIB004 .
Step 2. Feature Extraction: in block-based methods the image is divided into rectangular sub-blocks and a feature vector is computed for each sub-block; similar feature vectors are then matched. In keypoint-based methods, on the other hand, only image regions with high entropy are selected, without any image subdivision. These regions are called "keypoints" and their feature vectors are extracted BIB002 .
Step 3. Feature Matching: high similarity between two feature vectors is taken as a mark of a duplicated region. Some methods use lexicographic sorting and others best-bin-first search (the kd-tree algorithm) to determine similar feature vectors.
Step 4. Filtering: applied to decrease the probability of false positive matches. Several distance measures over nearby intensities have been used; some works propose the Euclidean distance BIB001 , others the correlation coefficient BIB003 .
Step 5. Post-Processing: the last, optional step of all the methods. Its goal is to retain only those blocks (or keypoints) that share a common feature: when considering a set of mappings for a region, the source and target blocks (or keypoints) of these mappings are expected to be close to each other.
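To make the five steps concrete, below is a minimal block-based sketch in the spirit of the DCT-and-lexicographic-sorting approaches discussed above. All parameter values (block size, number of coefficients, quantization step, shift-vector threshold) are illustrative assumptions rather than values from any cited paper, and the input is assumed to be a 2-D numpy array of gray levels.

# Minimal block-based copy-move sketch: gray-level pre-processing assumed,
# 8x8 DCT block features (Step 2), lexicographic sorting for matching
# (Step 3), and shift-vector filtering (Steps 4-5).
import numpy as np
from scipy.fft import dctn
from collections import Counter

def detect_copy_move_blocks(gray, block=8, keep=9, q=16, min_hits=50):
    h, w = gray.shape
    feats = []
    # Step 2: feature extraction -- low-frequency DCT coefficients per block.
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            coeffs = dctn(gray[y:y+block, x:x+block].astype(float), norm='ortho')
            low = coeffs.flatten()[:keep]   # crude low-frequency pick, not a true zigzag
            feats.append((tuple(np.round(low / q).astype(int)), (y, x)))

    # Step 3: matching -- lexicographic sort puts similar features side by side.
    feats.sort(key=lambda f: f[0])

    # Step 4: filtering -- record the shift vector between adjacent similar blocks.
    shifts = Counter()
    pairs = []
    for (f1, p1), (f2, p2) in zip(feats, feats[1:]):
        if f1 == f2:
            dy, dx = p2[0] - p1[0], p2[1] - p1[1]
            if abs(dy) + abs(dx) > block:   # skip overlapping neighbor blocks
                shifts[(dy, dx)] += 1
                pairs.append(((dy, dx), p1, p2))

    # Step 5: post-processing -- keep only pairs sharing a frequent shift,
    # since a genuine copy-move displaces many blocks by the same vector.
    common = {s for s, n in shifts.items() if n >= min_hits}
    return [(p1, p2) for s, p1, p2 in pairs if s in common]

The quantization step q plays the role of a robustness knob: a larger value tolerates more JPEG compression or noise at the cost of more false matches in the sorting stage.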
A Survey on Photo Forgery Detection Methods <s> Comparison of the Related Work <s> A novel copy–move forgery detection scheme using adaptive oversegmentation and feature point matching is proposed in this paper. The proposed scheme integrates both block-based and keypoint-based forgery detection methods. First, the proposed adaptive oversegmentation algorithm segments the host image into nonoverlapping and irregular blocks adaptively. Then, the feature points are extracted from each block as block features, and the block features are matched with one another to locate the labeled feature points; this procedure can approximately indicate the suspected forgery regions. To detect the forgery regions more accurately, we propose the forgery region extraction algorithm, which replaces the feature points with small superpixels as feature blocks and then merges the neighboring blocks that have similar local color features into the feature blocks to generate the merged regions. Finally, it applies the morphological operation to the merged regions to generate the detected forgery regions. The experimental results indicate that the proposed copy–move forgery detection scheme can achieve much better detection results even under various challenging conditions compared with the existing state-of-the-art copy–move forgery detection methods. <s> BIB001 </s> A Survey on Photo Forgery Detection Methods <s> Comparison of the Related Work <s> In this paper, we propose a scheme to detect the copy-move forgery in an image, mainly by extracting the keypoints for comparison. The main difference to the traditional methods is that the proposed scheme first segments the test image into semantically independent patches prior to keypoint extraction. As a result, the copy-move regions can be detected by matching between these patches. The matching process consists of two stages. In the first stage, we find the suspicious pairs of patches that may contain copy-move forgery regions, and we roughly estimate an affine transform matrix. In the second stage, an Expectation-Maximization-based algorithm is designed to refine the estimated matrix and to confirm the existence of copy-move forgery. Experimental results prove the good performance of the proposed scheme via comparing it with the state-of-the-art schemes on the public databases. <s> BIB002
In this part of the manuscript, we summarize the reviewed methods and compare their techniques, benefits, and drawbacks.
Author(s) | Techniques | Benefits | Drawbacks
BIB001 | Coefficient map and threshold; DWT and segmentation | Good results under diverse conditions, including copy-move and transformations | Execution time
BIB002 | Transform matrix; segmentation | Gives preferable results using SIFT | (not stated)
A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> I. INTRODUCTION <s> This document specifies Mobile IPv6, a protocol which allows nodes to remain reachable while moving around in the IPv6 Internet. Each mobile node is always identified by its home address, regardless of its current point of attachment to the Internet. While situated away from its home, a mobile node is also associated with a care-of address, which provides information about the mobile node's current location. IPv6 packets addressed to a mobile node's home address are transparently routed to its care-of address. The protocol enables IPv6 nodes to cache the binding of a mobile node's home address with its care-of address, and to then send any packets destined for the mobile node directly to it at this care-of address. To support this operation, Mobile IPv6 defines a new IPv6 protocol and a new destination option. All IPv6 nodes, whether mobile or stationary, can communicate with mobile nodes. This document obsoletes RFC 3775. <s> BIB001 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> I. INTRODUCTION <s> This document introduces extensions to Mobile IPv6 and IPv6 Neighbour Discovery to allow for local mobility handling. Hierarchical mobility management for Mobile IPv6 is designed to reduce the amount of signalling between the Mobile Node, its Correspondent Nodes, and its Home Agent. The Mobility Anchor Point (MAP) described in this document can also be used to improve the performance of Mobile IPv6 in terms of handover speed. This memo defines an Experimental Protocol for the Internet community. <s> BIB002 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> I. INTRODUCTION <s> Mobile IPv6 enables a Mobile Node to maintain its connectivity to the Internet when moving from one Access Router to another, a process referred to as handover. During handover, there is a period during which the Mobile Node is unable to send or receive packets because of link switching delay and IP protocol operations. This "handover latency" resulting from standard Mobile IPv6 procedures, namely movement detection, new Care of Address configuration, and Binding Update, is often unacceptable to real-time traffic such as Voice over IP. Reducing the handover latency could be beneficial to non-real-time, throughput-sensitive applications as well. This document specifies a protocol to improve handover latency due to Mobile IPv6 procedures. This document does not address improving the link switching latency. This memo defines an Experimental Protocol for the Internet community. <s> BIB003 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> I. INTRODUCTION <s> Data transmission from sources to sink is the most common service in wireless sensor networks (WSNs) but sink mobility brings new challenges to it. How to keep the sensors informed of the current state of the mobile sink is the primary issue for the mobile sink management. In this paper, we propose a distributed mobility management scheme that uses a set of access points (APs) to support the data transmission from sensors to mobile sink.
Compared with existing sink mobility support algorithms like the broadcast based method, TTDD and LURP, our approach eliminates network wide broadcast while balancing the communication overhead over all APs. A theoretical analysis shows that while the number of source nodes is below a certain threshold, our approach outperforms the network wide broadcast and local broadcast approaches in view of communication overhead for ranges of network parameters such as network size and mobile speed. The simulation results match the analysis very well and prove the advantage of the proposed algorithm. <s> BIB004 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> I. INTRODUCTION <s> Network-based mobility management enables IP mobility for a host without requiring its participation in any mobility-related signaling. The network is responsible for managing IP mobility on behalf of the host. The mobility entities in the network are responsible for tracking the movements of the host and initiating the required mobility signaling on its behalf. This specification describes a network-based mobility management protocol and is referred to as Proxy Mobile IPv6. [STANDARDS-TRACK] <s> BIB005 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> I. INTRODUCTION <s> IP mobility support has been a hot topic over the last years, recently fostered by the role of IP in the evolution of the 3G mobile communication networks. Standardization bodies, namely IETF, IEEE and 3GPP are working on different aspects of the mobility aiming at improving the mobility experience perceived by users. Traditional IP mobility support mechanisms, Mobile IPv4 or Mobile IPv6, are based on the operation of the terminal to keep ongoing sessions despite the movement. The current trend is towards network-based solutions where mobility support is based on network operation. Proxy Mobile IPv6 is a promising specification that allows network operators to provide localized mobility support without relying on mobility functionality or configuration present in the mobile nodes, which greatly eases the deployment of the solution. This paper presents Proxy Mobile IPv6 and the different extensions that are being considered by the standardization bodies to enhance the basic protocol with interesting features needed to offer a richer mobility experience, namely, flow mobility, multicast and network mobility support. <s> BIB006 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> I. INTRODUCTION <s> IP based Wireless Sensor Networks (IP-WSNs) are gaining importance for their broad range of applications in health-care, home automation, environmental monitoring, industrial control, vehicle telematics and agricultural monitoring. In all these applications, mobility in the sensor network with special attention to energy efficiency is a major issue to be addressed. Host-based mobility management protocols are not suitable for IP-WSNs because of their energy inefficiency, so network based mobility management protocols can be an alternative for the mobility supported IP-WSNs. In this paper we propose a network based mobility supported IP-WSN protocol called Sensor Proxy Mobile IPv6 (SPMIPv6). We present its architecture, message formats and also evaluate its performance considering signaling cost, mobility cost and energy consumption.
Our analysis shows that with respect to the number of IP-WSN nodes, the proposed scheme reduces the signaling cost by 60% and 56%, as well as the mobility cost by 62% and 57%, compared to MIPv6 and PMIPv6, respectively. The simulation results also show that in terms of the number of hops, SPMIPv6 decreases the signaling cost by 56% and 53% as well as mobility cost by 60% and 67% as compared to MIPv6 and PMIPv6 respectively. It also indicates that the proposed scheme reduces the level of energy consumption significantly. <s> BIB007 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> I. INTRODUCTION <s> Proxy Mobile IPv6 (PMIPv6) is being considered as a promising mobility support protocol in the next-generation mobile network, thanks to its simplicity. However, since the basic specification of PMIPv6 was developed, several extensions to PMIPv6 are still being developed in the IETF. In this paper, we present a survey on route optimization (RO) schemes proposed to improve the performance of packet transmission during the PMIPv6 RO development at the IETF. Qualitative analysis of the RO schemes is provided. Then, we also describe remaining challenges and issues. <s> BIB008
THE rapid development of wireless and communication technology has led to an increase in mobile Internet users. Concurrent with these advances are the challenges of designing mobility management protocols that meet the demand of mobile users for maintaining continuous communication sessions without disruption while they are moving. Mobility management protocols are essential in wireless communications, as static attachment of nodes is no longer the dominant case in the current environment. Mobility management aims to track and locate the Mobile Nodes (MNs) efficiently, to provide users with full access to information irrespective of their locations. It involves two main functions: location management and handoff management . Location management refers to the procedures needed for tracking the MN's location, and it involves location registration and update. Handoff management, on the other hand, refers to the procedures needed to allow the MN to keep its connection while it moves between access points - . Mobility management protocols can be classified, based on the entity responsible for the mobility management process, into host-based and network-based mobility management protocols - . The latter is more suitable for low-power devices and eases protocol deployment because it relieves MNs from participating in the mobility process BIB007 . Host-based mobility management protocols, including MIPv6 BIB001 , HMIPv6 BIB002 , and FMIPv6 BIB003 , involve the MNs in the mobility process and generally introduce a significant network overhead in terms of handoff latency, packet loss, and signaling cost when MNs change their point of attachment very frequently. In addition, when an MN has no capability to transmit the mobility-related signaling, host-based mobility management protocols are no longer functional BIB004 . Therefore, methods for relieving the MN from participating in the mobility process and for reducing the handoff delay, packet loss, and communication path are essential for providing users with continuous communication sessions without disruption. Proxy Mobile IPv6 (PMIPv6) BIB005 was standardized by the IETF NETLMM working group to solve these problems associated with the host-based mobility protocols. PMIPv6 added two functional entities: the Mobile Access Gateway (MAG) and the Local Mobility Anchor (LMA). The LMA is responsible for maintaining reachability to the MN's address while it moves in the local PMIPv6 domain. The MAG is responsible for detecting MN attachments and initiating the required authentication and registration messages to register MNs with the LMA. The network-based nature of PMIPv6 relieves the MN from participating in the handoff process: the network detects node mobility and initiates the required mobility signaling. This feature removes the need for installing a complex mobility stack in the MN, which in turn eases and expedites PMIPv6 deployment. Since its emergence, PMIPv6 has attracted researchers to enhance its performance in several directions, including LMA load reduction, fast handoff, route optimization, network mobility support, and load balancing. A number of survey papers have covered the PMIPv6 enhancements; for example, the route optimization schemes for the PMIPv6 protocol were covered by Guan et al. BIB008 , Bernardos et al. BIB006 presented a survey paper on the current trends in standardization of the PMIPv6 protocol, and handoff schemes were deliberated by Modares et al. in .
However, each of these surveys focused on a single PMIPv6 enhancement, and no performance evaluation was presented. Thus, in this paper, we survey and analyze the research works that have been conducted to enhance the PMIPv6 protocol with the more advanced features required to offer a rich mobility experience. This includes the schemes proposed for LMA load reduction, fast handoff, route optimization, network mobility support, and load balancing. These extensions can be integrated together to overcome multiple aspects in a single integrated scheme, such as providing low handoff latency, a short communication path, and low network overhead, which gives users a better experience in terms of service disruption. For example, load balancing, fast handoff, and route optimization schemes can be built on top of a clustered PMIPv6 architecture, such that when an MN moves to a new MAG, which should be the lowest-loaded one, the fast handoff scheme takes into consideration not only packet buffering but also recovering the optimum route between the communicating MNs, and performs the handoff signaling locally if the MN moves inside the same cluster. The main contributions of this paper are: 1) providing a comprehensive survey that covers more than one PMIPv6 extension, which gives the reader a general vision of the current work related to enhancing PMIPv6; 2) analyzing these extensions to specify the main aspects that should be taken into account during scheme design; 3) presenting a signaling cost analysis and evaluation for the reviewed schemes; 4) presenting a proposal to integrate multiple extensions into one scheme to settle more than one PMIPv6 aspect in a single solution. This paper is organized as follows: the basic PMIPv6 protocol architecture, signaling, and limitations are described in Section II. Section III reviews the schemes proposed to reduce the LMA load by dividing the PMIPv6 domain into sub-domains. To reduce handoff latency and packet loss, research has been devoted to applying fast handoff principles to PMIPv6 architectures; these fast handoff works, with their analysis and performance evaluation, are presented in Section IV. Schemes proposed to shorten the communication path are reviewed and analyzed, along with their performance evaluation, in Section V. Network mobility support mechanisms in the PMIPv6 protocol are reviewed and analyzed in Section VI. Section VII deliberates the research work done on load sharing among MAGs in a PMIPv6 domain. Section VIII presents the proposed integrated solution. Section IX summarizes the paper, and future trends to improve PMIPv6 are also given. This will offer the reader a good understanding of the current status of the research and standardization work regarding network-based localized mobility support.
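As background for the extensions reviewed in the following sections, the sketch below models the basic network-based registration of PMIPv6: the MAG signals on behalf of the MN, and the LMA keeps a binding cache mapping each MN to its serving MAG and home network prefix (HNP). The class, field, and message names are simplifications chosen for illustration, not an implementation of the RFC.

# Illustrative model of the PMIPv6 registration flow: MN attach at the
# MAG triggers a Proxy Binding Update (PBU); the LMA stores the binding
# and answers with a Proxy Binding Acknowledgement (PBA) carrying the HNP.
from dataclasses import dataclass

@dataclass
class Binding:
    mn_id: str      # mobile node identifier
    mag_addr: str   # proxy care-of address (the serving MAG)
    hnp: str        # home network prefix assigned to the MN

class LMA:
    def __init__(self, prefix_pool):
        self.cache = {}            # binding cache: MN-ID -> Binding
        self.pool = list(prefix_pool)

    def on_pbu(self, mn_id, mag_addr):
        # On handoff, reuse the existing HNP so the MN keeps its address;
        # only the tunnel endpoint (the serving MAG) changes.
        if mn_id in self.cache:
            self.cache[mn_id].mag_addr = mag_addr
        else:
            self.cache[mn_id] = Binding(mn_id, mag_addr, self.pool.pop(0))
        return self.cache[mn_id].hnp   # returned inside the PBA

class MAG:
    def __init__(self, addr, lma):
        self.addr, self.lma = addr, lma

    def on_mn_attach(self, mn_id):
        # Network-based mobility: the MAG, not the MN, signals the LMA.
        hnp = self.lma.on_pbu(mn_id, self.addr)
        print(f"MAG {self.addr}: advertise HNP {hnp} to {mn_id}")

lma = LMA(["2001:db8:1::/64", "2001:db8:2::/64"])
mag1, mag2 = MAG("mag1", lma), MAG("mag2", lma)
mag1.on_mn_attach("mn-1")   # initial registration
mag2.on_mn_attach("mn-1")   # handoff: same HNP, new tunnel endpoint

Because every PBU in this basic flow terminates at the LMA, the LMA becomes both a load and a latency bottleneck, which is precisely what the clustering and fast handoff extensions below try to relieve.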
A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> III. CLUSTERING <s> This document introduces extensions to Mobile IPv6 and IPv6 Neighbour Discovery to allow for local mobility handling. Hierarchical mobility management for Mobile IPv6 is designed to reduce the amount of signalling between the Mobile Node, its Correspondent Nodes, and its Home Agent. The Mobility Anchor Point (MAP) described in this document can also be used to improve the performance of Mobile IPv6 in terms of handover speed. This memo defines an Experimental Protocol for the Internet community. <s> BIB001 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> III. CLUSTERING <s> This work extends proxy mobile IPv6 to support mobility to Mobile Nodes having standard IPv6 stack in Cluster Based Heterogeneous Wireless Mesh Architecture. We also propose an enhanced network-based IP-layer movement detection mechanism which allows the network to detect the attachment and the movement of each Mobile Node independently from the access technologies without any special support from Mobile Nodes. We implemented and evaluated the extensions in a virtual IPv6 wireless testbed using User-mode Linux and Ns-2 Emulation. Some qualitative results are also provided to prove the correctness and the advantages of the proposals. <s> BIB002 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> III. CLUSTERING <s> Targeting an increasing number of potential application domains, wireless sensor networks (WSN) have been the subject of intense research, in an attempt to optimize their performance while guaranteeing reliability in highly demanding scenarios. However, hardware constraints have limited their application, and real deployments have demonstrated that WSNs have difficulties in coping with complex communication tasks – such as mobility – in addition to application-related tasks. Mobility support in WSNs is crucial for a very high percentage of application scenarios and, most notably, for the Internet of Things. It is, thus, important to know the existing solutions for mobility in WSNs, identifying their main characteristics and limitations. With this in mind, we firstly present a survey of models for mobility support in WSNs. We then present the Network of Proxies (NoP) assisted mobility proposal, which relieves resource-constrained WSN nodes from the heavy procedures inherent to mobility management. The presented proposal was implemented and evaluated in a real platform, demonstrating not only its advantages over conventional solutions, but also its very good performance in the simultaneous handling of several mobile nodes, leading to high handoff success rate and low handoff time. <s> BIB003
Managing micro- and macro-mobility separately was first introduced by HMIPv6 BIB001 , which was designed to reduce the signaling overhead and handoff latency of the MIPv6 protocol by using a hierarchical network architecture. The Mobility Anchor Point (MAP) was introduced to control micro-mobility and reduce the amount of signaling required for MN registration. However, the HMIPv6 protocol still incurs long handoff latency and packet loss problems. In addition, it involves the MN in the mobility process, which requires installing a complex mobility stack in the MN. Using a similar hierarchical idea, several research works have been proposed to reduce the LMA load in the PMIPv6 domain. Nguyen et al. in BIB002 developed a cluster-based PMIPv6 for wireless mesh networks, wherein the LMA serves as the cluster head and MAGs represent the cluster members. However, they proposed a multi-LMA environment in which LMAs are involved in all binding and communication processes. Hwang et al. in proposed localized management support for PMIPv6 to solve the bottleneck problem by providing localized handoff and route optimization using a reactive fast handover and a hierarchical architecture. Their proposed architecture performs handoffs without the participation of the LMA and with a short handoff latency. However, the MAGs are overloaded by managing the communications and handoffs of their attached MAGs and MNs; these additional functions at the MAG may lead to a long end-to-end delay. Moreover, their method requires multiple updates as the nesting level becomes larger, especially during the initial MN registration. Jabir et al. in proposed the Cluster-based PMIPv6 (CPMIPv6) to enhance the PMIPv6 architecture, where the proxy domain is divided into sub-domains that form clusters. Each cluster encompasses a number of MAGs, with one MAG elected as a cluster head (HMAG). Dividing the proxy domain into clusters reduces the load on the LMA, allows HMAGs to perform intra-cluster handoffs locally, optimizes the communication path, and eventually reduces the packet loss ratio BIB003 . However, the problems of packet loss during handoff and route optimization were not considered explicitly. According to the above literature, the main issues that should be considered in clustering design are: clusters should be confined within a single LMA domain, involvement of the LMA in local mobility and packet transmission should be minimized, the network entities should not be overloaded by exchanging extra messages, and the design should not rely on host-based principles.
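The clustering idea reviewed above can be summarized in a small sketch: when both the previous and the new MAG belong to the same cluster, the head MAG (HMAG) can handle the binding update locally, and only inter-cluster moves reach the LMA. The topology and names below are hypothetical, for illustration only.

# Hypothetical clustered PMIPv6 domain: intra-cluster handoffs stay local.
CLUSTERS = {
    "cluster-A": {"head": "hmag-A", "members": {"mag1", "mag2"}},
    "cluster-B": {"head": "hmag-B", "members": {"mag3", "mag4"}},
}

def cluster_of(mag):
    # Find the cluster a MAG belongs to.
    return next(c for c, info in CLUSTERS.items() if mag in info["members"])

def handle_handoff(prev_mag, new_mag):
    prev_c, new_c = cluster_of(prev_mag), cluster_of(new_mag)
    if prev_c == new_c:
        # Local update only: the HMAG re-points the intra-cluster tunnel,
        # so the LMA binding cache is untouched.
        return f"intra-cluster: {CLUSTERS[new_c]['head']} updates the binding locally"
    # An inter-cluster move still needs a PBU towards the LMA.
    return "inter-cluster: new HMAG sends a PBU to the LMA"

print(handle_handoff("mag1", "mag2"))  # intra-cluster case
print(handle_handoff("mag2", "mag3"))  # inter-cluster case

The benefit grows with the fraction of movements that stay inside one cluster, which is why cluster boundaries should follow the actual mobility patterns of the MNs.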
A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> IV. FAST HANDOFF <s> Current IP-level mobility protocols have difficulties meeting the stringent handover delay requirements of future wireless networks. At the same time they do not give sufficient control to the network to control the handover process. This paper presents an extension to Proxy Mobile IP, which is the favorite IP level mobility protocol for the 3GPP System Architecture Evolution / Long Term Evolution (SAE/LTE). The extension, Fast Proxy Mobile IPv6 (FPMIPv6), aims at solving or reducing the control and handover problem. An elementary analysis shows that FPMIPv6 can significantly reduce the handover latency and the loss of packets during handover, especially if handovers can be predicted a few tens of milliseconds before they occur. <s> BIB001 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> IV. FAST HANDOFF <s> In IP-based wireless networks, minimizing handover latency with little packet loss is one of the most important issues. To achieve this goal, host-based and network-based fast or localized mobility management solutions have been proposed. Proxy Mobile IPv6 (PMIPv6) avoids tunneling overhead over the air and supports mobility for hosts without host involvement. However, the basic performance of PMIPv6 for handover latency and packet loss is not different from that of Mobile IPv6. In this paper, we propose an enhancement for PMIPv6 to reduce the packet reception latency and to minimize packet loss for both intra-local mobility anchor (LMA) and inter-LMA handover by pre-establishing bidirectional tunnels between MAGs within an administrative domain. As a result, we found that the proposed scheme, though it requires additional signaling messages to establish the bidirectional tunnels, guarantees lower packet reception latency and less packet loss than other recent approaches without erroneous movement prediction. <s> BIB002 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> IV. FAST HANDOFF <s> In a network-based approach such as proxy mobile IPv6 (PMIPv6), the serving network controls mobility management on behalf of the mobile node (MN). Thus, the MN is not required to participate in any mobility-related signaling. PMIPv6 is being standardized in the IETF NetLMM WG. However, the PMIPv6 still suffers from a lengthy handover latency and packet loss during a handover. In this paper, we propose a seamless handover scheme for PMIPv6. The proposed handover scheme uses the Neighbor Discovery message of IPv6 to reduce the handover latency and packet buffering at the Mobile Access Gateway (MAG) to avoid the on-the-fly packet loss during a handover. In addition, it uses an additional packet buffering at the Local Mobility Anchor (LMA) to solve the packet ordering problem. Simulation results demonstrate that the proposed scheme could avoid the on-the-fly packet loss and ensure the packet sequence. <s> BIB003 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> IV. FAST HANDOFF <s> This paper proposes an extension of Proxy Mobile IPv6 (PMIPv6) with bicasting for soft handover, named B-PMIPv6. The proposed scheme is compared with the existing PMIPv6 handover scheme by the theoretical analysis and the network simulator.
From the experimental results, we can see that the proposed B-PMIPv6 scheme can provide smaller handover latency and less packet loss than the existing schemes. <s> BIB004 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> IV. FAST HANDOFF <s> To reduce the cost and complexity of mobile subscriber devices, it is desirable that mobility be handled by network systems only. PMIPv6 relies on MIPv6 signaling and the reuse of the home agent functionality through a proxy mobility management agent in the network to transparently provide mobility services to mobile subscriber devices. Handover latency resulting from standard MIPv6 procedures remains unacceptable to real time traffic and its reduction can also be beneficial to non real-time throughput-sensitive applications as well. In this paper, therefore, we present a proactive QoS-Aware PMIPv6 that relies on a rich set of informational resources including on-the-go QoS requirements and service level agreements of mobile subscriber devices to make efficient proactive handover decisions. This scheme significantly reduces handover delays; and helps to ensure that mobile subscribers can continue their QoS sensitive and/or SLA-based sessions as they roam within a PMIP domain. <s> BIB005 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> IV. FAST HANDOFF <s> Mobile IPv6 (MIPv6) is a representative protocol which supports global IP mobility. MIPv6 causes a long handover latency during which a mobile node (MN) cannot send or receive packets. This latency can be reduced by using Proxy Mobile IPv6 (PMIPv6). PMIPv6 is a protocol which supports IP mobility without participation of the MN, and is studied in the Network-based Localized Mobility Management (NETLMM) working group of IETF. There is much packet loss during handover in PMIPv6, although PMIPv6 reduces handover latency. In this paper, to reduce packet loss in PMIPv6 we propose Packet Lossless PMIPv6 (PL-PMIPv6) with authentication. In PL-PMIPv6 a previous mobile access gateway (pMAG) registers to a Local Mobility Anchor (LMA) on behalf of a new MAG (nMAG) during layer 2 handoff. Then, the nMAG buffers packets during handover after registration. Therefore, PL-PMIPv6 can reduce packet loss in MIPv6 and PMIPv6. Also, we use Authentication, Authorization and Accounting (AAA) infrastructure to authenticate the MN and to receive MN's profiles securely. For the comparison with MIPv6 and PMIPv6, detailed performance evaluation is performed. From the evaluation results, we show that PL-PMIPv6 can achieve low handover latency and low total cost. <s> BIB006 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> IV. FAST HANDOFF <s> The host-based protocols, Mobile IPv6 protocol (MIPv6), Hierarchical Mobile IPv6 (HMIPv6), and Fast Mobile IPv6 (FMIPv6), require that the mobile terminals include the mobility functions. To address this issue, Proxy MIPv6 (PMIPv6), a network-based protocol, has recently emerged. Despite its advantage of easier practical application, PMIPv6 still has some weak points, disconnection and transmission delay, during handover. This paper, therefore, proposes a scheme that reduces handover latency by simplifying the user authentication procedure required when a mobile node (MN) enters a new wireless network domain.
And it also saves transmission cost by storing packets in the optical buffering module of the local mobility anchor (LMA) and retransmitting those packets after the handover phase. Performance evaluation conducted using an analysis model indicates that the proposed scheme shows a performance improvement of 33% over standard PMIPv6 in terms of handover latency, and 67% over the Seamless handover scheme in terms of the transmission cost due to the retransmission of buffered packets. <s> BIB007 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> IV. FAST HANDOFF <s> This document specifies the usage of Fast Mobile IPv6 (FMIPv6) when Proxy Mobile IPv6 is used as the mobility management protocol. Necessary extensions are specified for FMIPv6 to support the scenario when the mobile node does not have IP mobility functionality and hence is not involved with either MIPv6 or FMIPv6 operations. <s> BIB008 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> IV. FAST HANDOFF <s> In Proxy Mobile IPv6 (PMIPv6), any involvement by the Mobile Node (MN) is not required so that any tunneling overhead could be removed from over-the-air. However, during the PMIPv6 handover process, there still exists a service interruption period during which the MN is unable to send or receive data packets because of PMIPv6 protocol operations. To reduce the handover latency, Fast Handover for PMIPv6 (PFMIPv6) is being standardized in the IETF MIPSHOP working group. In PFMIPv6, however, handover initiation can be false, resulting in the PFMIPv6 handover process done so far becoming unnecessary. Therefore, in this paper, we provide a thorough analysis of the handover latency in PFMIPv6, considering the false handover initiation case. The analysis is very meaningful to obtain important insights on how PFMIPv6 improves the handover latency. Further, our analytical study is verified by simulation results. <s> BIB009 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> IV. FAST HANDOFF <s> IETF (Internet Engineering Task Force) has proposed PMIPv6 (Proxy Mobile IPv6) to make up for the weak points caused by handoff delay and signaling overhead. The handoff delay and signaling overhead occur due to frequent binding updates during handoff in MIPv6 (Mobile IPv6). Although the handoff in PMIPv6 can be faster than that in MIPv6, there still exist handoff delay and packet loss during the handoff. For these reasons, research efforts to reduce the handoff delay and retransmit missing packets have been investigated [1]. We propose a scheme in order to set up the multicast group. As a result, the scheme reduces the handoff delay by using the cache and prevents the packet loss in PMIPv6. We have analyzed general handoff performance in PMIPv6 and the packet loss occurring interval from a practical point of view to verify the proposed scheme. <s> BIB010 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> IV. FAST HANDOFF <s> This paper proposes an enhanced handover scheme of the Proxy Mobile IPv6 (PMIPv6) with partial bicasting in wireless networks.
In the proposed PMIPv6 handover scheme, when a mobile node (MN) moves into a new network and thus its Mobile Access Gateway (MAG) performs the binding update to the Local Mobility Anchor (LMA), the LMA begins the ‘partial’ bicasting of data packets to the new MAG as well as the previous MAG. Then, the data packets will be buffered at the new MAG during handover and then forwarded to the MN after the handover operations are completed. The proposed scheme is compared with the existing schemes of PMIPv6 and PMIPv6 with bicasting by ns-2 simulations. From the performance analysis, we can see that the proposed scheme can reduce handover delays and packet losses, and can also use the network resource of wireless links more effectively, compared to the existing schemes. <s> BIB011 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> IV. FAST HANDOFF <s> Proxy Mobile IPv6 (PMIPv6) is proposed as a new network-based mobility protocol and it does not require the MN's involvement in mobility management. The MN can handover relatively faster in PMIPv6 than in Mobile IPv6 (MIPv6) because it actively uses link-layer attachment information and reduces the movement detection time, and eliminates the duplicate address detection procedure. However, the current PMIPv6 cannot prevent packet loss during the handover period. We propose the Smart Buffering scheme for seamlessness in PMIPv6. The Smart Buffering scheme prevents packet loss by proactively buffering packets that will be lost in a current serving mobile access gateway (MAG) by harnessing network-side information only. It also performs redundant packet elimination and packet reordering to minimize duplicate packet delivery and disruption of connection-oriented flows. To fetch buffered packets from a previous MAG, a new MAG discovers the previous MAG by using a discovery mechanism without any involvement of an MN. We verified the effectiveness of Smart Buffering via simulation with various parameters. <s> BIB012 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> IV. FAST HANDOFF <s> To improve handoff performance of the network-based Proxy Mobile IPv6 (PMIPv6), two types of enhanced handoff schemes have been presented; fast handover for PMIPv6 (F-PMIPv6) and route optimization handover (ROH). The F-PMIPv6 is committed to reduce handoff latency and is efficient to perform handoff signaling. However, it causes high packet delivery cost from additional tunneling at the LMA. The ROH minimizes packet delivery cost by allowing a direct route of MAG-to-MAG but introduces high handoff signaling cost. Due to the tradeoff between handoff signaling and packet delivery cost, both schemes cannot guarantee a better performance for a diverse mobile environment where a mobile node (MN) has a different session arrival and mobility rate. Thus, we propose an adaptive PMIPv6 handoff (APHO) scheme to reduce the overall data overhead and improve the throughput over a wide range of session-to-mobility ratio (SMR). The APHO consists of mobility-aware APHO (M-APHO) and session-aware S-APHO (S-APHO), which are determined by comparing the SMR and a pre-defined threshold. By analyzing the performance of FHO, ROH, and APHO, we confirm that the APHO has a better performance than the other schemes over a wide range of SMR. <s> BIB013 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> IV.
FAST HANDOFF <s> Mobile IPv6 (MIPv6) is a host-based mobility management protocol that provides continuous service for mobile nodes (MNs) when they change their attachment points. However, MIPv6 is not suitable for real-time services because it causes long disruptions during handoff. Recently, the IETF NETLMM working group proposed a network-based localized mobility management protocol, called Proxy Mobile IPv6 (PMIPv6) to support mobility management without the participation of MNs in any mobility-related signaling. Unfortunately, PMIPv6 still suffers from the packet loss problem and long authentication latency during handoff. Therefore, we propose a fast handoff scheme in PMIPv6 networks called FH-PMIPv6, to provide low handoff latency, resolve the packet loss problem, and reduce signaling cost. Moreover, we evaluate FH-PMIPv6 via an analytical model, and the results show that FH-PMIPv6 provides a better solution than existing schemes. <s> BIB014 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> IV. FAST HANDOFF <s> Proxy Mobile IPv6 (PMIPv6) is a network-based mobility management protocol that supports network mobility, regardless of whether or not a Mobile Node (MN) supports a mobility protocol, such as Mobile IPv6 (MIPv6) and Hierarchical MIPv6 (HMIPv6). However, the PMIPv6 protocol does not consider the packet loss during the MN's handover. We propose a fast handover scheme using the Head Mobile Access Gateway (MAG) to provide a fast and seamless handover service in PMIPv6. The proposed scheme does not violate the principle of PMIPv6 by maintaining mobility information of the MN in the Head MAG. The Head MAG is granted to a MAG deployed at an optimal location in a Local Mobility Anchor (LMA) domain. The Head MAG reduces packet loss and enables fast handover by providing mobility information of the MN to the MAGs in the same LMA domain. In this paper, we present an analytic performance evaluation in terms of signaling cost and size of packet loss. In the analytic performance evaluation, the proposed scheme is compared with the basic PMIPv6 and the previously proposed scheme using the Neighbor Detection (ND) message. The proposed scheme reduces signaling cost by over 30% compared to the previously proposed scheme using the ND message, and packet loss by over 78% compared to the basic PMIPv6 scheme. <s> BIB015 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> IV. FAST HANDOFF <s> In this paper, we study bicasting schemes for PMIPv6 whose purpose is to achieve seamless handovers by minimizing packet loss and handover delay during a handover in a PMIPv6 domain. Bicasting schemes are able to alleviate packet loss during a handover at the expense of utilization of a significant amount of backhaul bandwidth since packets are duplicated to the current and candidate point of attachment during the bicasting period. We therefore propose an enhanced bicasting scheme for PMIPv6 that will not only lower handover delay and packet loss but also promote an efficient utilization of the backhaul bandwidth and network elements' buffer space required as a mobile node changes its point of attachment. The proposed solution uses the signal strength behavior to make decisions on when to start and stop bicasting such that the bicasting operation is executed in a timely and accurate manner to achieve the better network resources utilization.
The results which were obtained from the evaluation carried out using the Network Simulator 2 (ns-2) indeed show that the proposed solution surpasses the currently existing bicasting solutions for PMIPv6. <s> BIB016 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> IV. FAST HANDOFF <s> Typical PMIPv6 supports mobility management for the Mobile Host (MH) in localized domains over various Wireless Local Area Network technologies. Typical PMIPv6 operates in a reactive mode, in which a break-before-make technique is used; this results in long disruption latency and inevitable data traffic loss that negatively affects the MH's communication performance. This article proposes a proactive low-latency handover mechanism, which corresponds to a make-before-break technique, in order to support the MH's seamless and fast roaming in a PMIPv6 network. The proposed mechanism proactively performs the pre-registration and pre-access authentication processes tightly together for the MH, in advance of a handover situation involved in typical PMIPv6, thereby enabling the MH to re-configure its interface more quickly after a handover. Consequently, the associated mobility-related signallings along with their latencies are reduced significantly and the continuity of the MH communication session is granted. Furthermore, an efficient buffering technique with optimized functions is introduced at the MH's anchor mobility entity to prevent data traffic loss and save their transmission cost. Through various simulation evaluations via ns-2, we study and analyse different mobility aspects, such as handover latency, data traffic loss, throughput, end-to-end traffic delay, traffic transmission cost and signalling cost, with respect to different traffic sources like CBR-UDP and FTP-TCP. Several experiments were conducted, revealing numerous results that verify the proposed mechanism's superior performance over the existing scheme. <s> BIB017 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> IV. FAST HANDOFF <s> In a reactive handover, a Mobile Node (MN) handover procedure is not prepared before the event. Since such handovers are performed for roaming MNs in Proxy Mobile IPv6 (PMIPv6), there are limitations in reducing MN handover latency and preventing packet loss. If a handover is predictable and a handover can be prepared beforehand, the handover latency and packet loss can be minimized. To prevent packet loss and reduce handover latency during an MN handover, proactive handover schemes have been proposed as well as buffering schemes. However, these schemes do not determine the exact starting point of the MN handover. In this work, a scheme is proposed to support proactive MN handover to prevent packet loss, and to minimize handover latency and buffering cost, based on the MN's location information in PMIPv6. Our proposed scheme decides the exact moment of the MN handover and the next network to which the MN roams based on the MN's location information. It is compared to existing schemes through mathematical analysis and simulation. The results of performance evaluation show that our proposed scheme reduces handover latency, signaling cost, and buffering cost effectively compared to the existing schemes. <s> BIB018 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> IV.
FAST HANDOFF <s> Proxy Mobile IPv6 (PMIPv6) is one of the most promising technologies for next-generation IP networks. Fast Handover for PMIPv6 (FPMIPv6) was standardized to improve the handover performance of PMIPv6. However, the long interruption of the Layer 2/Layer 3 handover in PMIPv6 and FPMIPv6 may be unacceptable for delay-sensitive real-time multimedia applications (e.g. VoIP). To decrease the handover latency, we propose a seamless handover scheme for a multi-interface Mobile Node (MN). In this scheme, the MN and the network units are enhanced to send/forward the packets destined to the handover interface to other interfaces belonging to the MN, which lets the MN continuously send/receive packets during the handover process. Both theoretical analysis and a Linux-based testbed evaluation have proven that our seamless handover scheme can greatly decrease handover latency and improve the performance of PMIPv6. <s> BIB019 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> IV. FAST HANDOFF <s> Proxy Mobile IPv6 (PMIPv6) was standardized to reduce the long handoff latency, packet loss and signaling overhead of the MIPv6 protocol and to exempt the mobile node from any involvement in the handoff process. However, the basic PMIPv6 does not provide any buffering scheme for packets during MN handoff. In addition, all the binding update messages are processed by a Local Mobility Anchor (LMA), which increases the handoff latency. Previous works enhanced PMIPv6 performance by applying fast handoff mechanisms to reduce the packet loss during handoffs; however, the LMA is still involved during the location update operations. In this paper, we present a new fast handoff scheme based on a cluster-based architecture for PMIPv6 named Fast handoff Clustered PMIPv6 (CFPMIPv6); it reduces both the handoff signaling and the packet loss ratio. In the proposed scheme, the Mobility Access Gateways (MAGs) are grouped into clusters with one distinguished Head MAG (HMAG) for each cluster. The main role of the HMAG is to carry out the intra-cluster handoff operations and provide fast and seamless handoff services. The proposed CFPMIPv6 is evaluated analytically and compared with the previous work including the basic PMIPv6, Fast PMIPv6 based on Multicast MAGs group (MFPMIPv6), and the Fast Handoff using Head MAG schemes (HFPMIPv6). The obtained numerical results show that the proposed CFPMIPv6 outperforms all the basic PMIPv6, MFPMIP6, and HFPMIPv6 schemes in terms of the handoff signaling cost. <s> BIB020
The handoff latency of PMIPv6 is expressed as the total time needed to perform the access authentication, location update, and address configuration operations. As shown in Fig. 3 , all packets that are sent during this period are definitely lost BIB001 . This long handoff latency and packet loss problem causes a service disruption during the MN's handoff, which makes PMIPv6 insufficient for real-time applications. There have been several attempts to reduce the handoff latency of the basic PMIPv6 and to provide an efficient buffering scheme for the incoming packets. The key issue in designing fast handoff schemes is how to determine the new target MAG for the moving MN and when to start the packet buffering and forwarding processes. Fast handoff schemes can be categorized as MN-assisted and network-assisted handoff schemes. In MN-assisted schemes, the MN which intends to handoff sends a report to its previous MAG informing it about its new target MAG. The previous MAG then transfers the MN's information to the target MAG, including the MN-ID, LMA address, and HNP, which helps the target MAG to advertise the MN's prefix once the MN attaches to it.
Fig. 4. Typical MN-Assisted fast handoff for PMIPv6 .
In the literature, several MN-assisted schemes were presented; for example, Xia and Sarikaya proposed a scheme to improve the PMIPv6 performance by reducing packet loss and handoff latency. Their proposal borrowed the FMIPv6 principles, such that when the MN intends to perform an L2 handoff, it scans the available target access points. As shown in Fig. 4 , once it decides on the target MAG, the MN informs its previous MAG to initiate the fast handoff signaling with the target MAG. The previous MAG buffers all the incoming packets destined to the MN and exchanges the required messages with the target MAG to prepare for the MN's handoff. Their proposed method prevents packet loss by providing a buffering scheme in both the previous and target MAGs, and reduces the handoff latency by providing the MN's information to the target MAG in advance. However, packet out-of-order delivery may arise, as the packets are buffered in different places. Park in , proposed a mobility management scheme called Fast and Local PMIPv6 (FLPMIPv6) to reduce the handoff latency and packet loss ratio by utilizing both FMIPv6 and the IEEE 802.21 technology. Since this scheme is based on FMIPv6, the MN uses the Media Independent Handover (MIH) messages to provide its previous MAG with information on the candidate MAGs. Then, the previous MAG initiates the handoff signaling with the target MAG following the same steps as in . However, the MN is involved in the mobility process, which requires installing a sophisticated protocol stack in the MN, and the network access devices need an intelligent link layer. Kim et al. in BIB004 presented a soft handoff scheme based on bi-casting the data packets to both the previous and target MAGs. In their proposal, when the MN intends to handoff, it uses the MIH functions to inform its previous MAG about the target MAG. The previous MAG then triggers the handoff initiation with the target MAG; the latter pre-registers the MN with the LMA in advance, before the MN attaches to it. The LMA then starts bi-casting the incoming data packets to both the previous and target MAGs. This scheme reduces the packet loss and handoff latency; however, it incurs high network traffic overhead due to the packet bi-casting. In order to use the wireless resources more effectively, Kim and Koh in BIB011 proposed a partial bi-casting handoff scheme.
In order to use the wireless resources more effectively, Kim and Koh in BIB011 proposed a partial bi-casting handoff scheme, in which the incoming packets are buffered at the target MAG during the handoff and forwarded to the MN after the handoff. However, the packets are still duplicated, which adds extra network overhead. Mphatsi and Falowo in BIB016 proposed a handoff scheme to improve network resource utilization in bi-casting PMIPv6. They identified the problems of resource wasting in bi-casting and of target-MAG overload in partial bi-casting. In their proposal, bi-casting is scheduled according to the Received Signal Strength (RSS): bi-casting starts very close to the link-down event and stops just before the RSS drops below a threshold. The incoming packets are forwarded to the target MAG only after the bi-casting start trigger. Because bi-casting starts very close to the link-down event, the buffer space required for the incoming packets is reduced. Their proposal provides better resource utilization than the bi-casting and partial bi-casting schemes. However, the problem of identifying the target MAG remains, and the scheme requires a very accurate RSS estimation because it fails if the link-down event is detected too late. The Fast Proxy MIPv6 (PFMIPv6) BIB008 protocol was standardized by the IETF to reduce the handoff latency. However, it introduced the problem of false handoff initiation, because the serving network must predict which network the MN will move to BIB009 . Shih et al. proposed the Proxy-based Fast Handover for Hierarchical Mobile IPv6 (PFHMIPv6), a combination of the Fast Hierarchical MIPv6 (FHMIPv6) and FPMIPv6 protocols. They utilized the benefits of both protocols to reduce both the handoff signaling cost and the packet loss ratio. However, PFHMIPv6 inherited the false handoff prediction problem of the FPMIPv6 protocol. Heijenk et al. in BIB001 presented an extension of Fast PMIPv6 (FPMIPv6) in which the previous MAG plays the key role in coordinating the handoff process. The previous MAG collects information about the MN, the APs, and the candidate MAGs. Based on the information provided by the MN, the previous MAG instructs the target MAG to register the MN by sending a PBU to the LMA and, once the registration is done, instructs the MN to hand off to the target MAG. In addition, the incoming packets are stored in the previous and target MAGs to reduce packet loss. However, the downside of this method is that a wrong prediction of the target MAG may arise and lead to the loss of the incoming packets; furthermore, buffering in more than one MAG increases the processing and transmission overhead. The common downside of the MN-assisted handoff schemes is the involvement of the MN in the handoff signaling, which complicates both the MN design and the protocol deployment. Such involvement also contradicts the network-based principle of PMIPv6. Network-assisted handoff schemes, on the other hand, utilize the network entities, such as the MAG and LMA, to accomplish the handoff process. Since no information about the target MAG comes from the MN, network-assisted schemes usually multicast the MN's information among the neighboring MAGs. Several network-assisted handoff schemes have been presented. For example, Kim in BIB002 proposed a seamless handoff scheme that reduces the handoff latency and data packet loss by pre-establishing a bidirectional tunnel between the previous and new MAGs before the MN's handoff. As shown in Fig. 5,
the previous MAG sends the MN's information to its adjacent MAGs before the MN's handoff and starts buffering the incoming packets destined to the MN until the end of the L3 handoff. By multicasting the MN's information, the MN can receive its HNP as soon as it attaches to the new MAG. The author also presented a handoff optimization scheme to reduce packet loss by keeping the binding between the LMA and the previous MAG until the new MAG receives the PBA message from the LMA. However, how to identify the candidate MAGs was not specified in this work. Also, since the data packets are stored in the previous MAG, the packet out-of-order problem may arise. Moreover, the scheme depends on predicting the MN's next location; if the MN moves to another MAG, the scheme fails and the packets are lost. In addition, multicasting the MN's information to the adjacent MAGs overloads the network with unnecessary signaling. Following the same idea as BIB002 , Kang et al. BIB003 proposed a seamless handoff scheme that utilizes the Neighbor Discovery (ND) message to send the MN's information to the neighboring MAGs in advance, before the MN hands off. When the previous MAG receives a Link Going Down (LGD) trigger, it sends the MN profile to the neighboring MAGs through the ND message of IPv6. To reduce packet loss and prevent the out-of-order problem, they proposed two buffering schemes, in the previous MAG and in the LMA. However, this scheme incurs additional network traffic by multicasting the MN's profile to the MAG's neighbor set, and buffering in both the MAG and the LMA burdens the network entities with extra functions, which may degrade the overall system performance. Hwang et al. in BIB010 proposed a fast handoff scheme using multicast MAG groups (MFPMIP). The MN's mobility information is transferred in advance to all MAGs in the multicast group to reduce the handoff latency. However, the LMA is still involved in the handoff process, and the handoff incurs a high signaling overhead due to the exchange of mobility messages among the MAGs in the multicast groups. To specify the correct set of a MAG's neighbors, Obele et al. BIB005 presented a handoff scheme based on a Proxy Information Server (PIS), which is assumed to hold a set of informational resources. In their scheme, the previous MAG informs the LMA, using MIH functions, when the MN intends to hand off. Once the LMA receives these messages from the previous MAG, it queries the PIS server for the neighbor MAG set of the previous MAG. The LMA then sends a PBU message to all MAGs in the neighbor list, informing them that the MN is moving and may attach to one of them. This method reduces the handoff latency by providing the MN's profile to all the candidate MAGs in advance. However, it incurs additional network traffic overhead due to the query exchange between the LMA and the PIS, as well as the multicast of unnecessary PBU messages; no traffic buffering scheme is considered; and extra load is placed on the LMA to obtain the candidate MAGs. To reduce the multicasting overhead, Ryu et al. in BIB006 proposed a packet-lossless PMIPv6 (PL-PMIPv6). In their scheme, to reduce the handoff latency, the previous MAG registers the MN with the LMA during the L2 handoff, in advance and on behalf of the target MAG. After registration, the target MAG starts buffering the incoming data packets to reduce packet loss. However, this scheme depends on the previous MAG to predict the address of the target MAG, and the way of identifying the target MAG was not presented. As a result, the scheme fails if the MN moves to a MAG other than the predicted one.
To manage the buffer efficiently, Oh and Choo in BIB007 proposed a handoff scheme that reduces the handoff latency by simplifying the authentication procedure and reduces the transmission cost by providing an optical buffering model. In their scheme, when the LMA receives a PBU from the previous MAG to de-register the MN, it starts buffering the incoming packets and sends the MN's profile to all MAGs in the previous MAG's neighbor list using an Immediate Handoff Request (IHR) message. The IHR message simplifies the authentication and registration process to reduce the handoff latency. This method has some drawbacks: it does not show how the LMA can discover a MAG's neighbors, and introducing the optical buffer adds extra cost in terms of infrastructure and access time. Choi et al. in BIB012 introduced a smart buffering scheme in the MAGs to provide a seamless handoff. Their scheme comprises packet buffering at the previous MAG when it detects that the MN intends to move, discovery of the previous MAG by the target MAG through an ND message, and packet buffering and forwarding between the previous and target MAGs. This method performs a seamless handoff without involving the MN in the process. However, the target MAG incurs the overhead of discovering the previous MAG by multicasting discovery messages to all neighbors, which increases the network traffic overhead. In addition, the previous MAG duplicates the incoming packets to the MN and to the buffer, which increases the load on the previous MAG and on the network, and both the previous and target MAGs incur extra load in forwarding, reordering, and removing redundant packets. Jeon et al. in BIB013 proposed an adaptive PMIPv6 handoff (APHO) management scheme to improve the PMIPv6 performance for different MN statuses. The appropriate handoff technique is selected based on the session-to-mobility ratio (SMR). If the session activity is greater than the node mobility, a session-aware APHO (S-APHO) scheme is performed to reduce the packet delivery cost by establishing a direct tunnel between the previous and target MAGs. On the contrary, when the mobility rate is greater than the session activity, a mobility-aware APHO (M-APHO) is activated to reduce the handoff latency without establishing the tunnel between the MAGs. The proposed APHO therefore maximizes the throughput while minimizing the total traffic overhead. However, this approach incurs a high processing cost to determine the SMR for each MN, which complicates the LMA design, and packets may be lost during the interval between the MN's de-registration and its registration with the new MAG. Chuang and Lee in BIB014 proposed a fast handoff scheme for PMIPv6 (FH-PMIPv6) to reduce both the handoff latency and the packet loss ratio while maintaining the correct packet order. In their work, the authentication and registration processes are performed simultaneously to reduce the handoff latency, while packet loss is decreased and the packet disordering problem is solved by a double buffering scheme. Once the previous MAG detects that the MN's handoff is imminent, it multicasts the MN's profile to its neighboring MAGs in advance. Once the new MAG detects the MN's attachment, it sends a PBU message to register the MN, along with a de-registration message on behalf of the previous MAG, to reduce the handoff latency. The double buffers hold the forwarded and the new packets to avoid out-of-sequence delivery. However, the main downside of this scheme is specifying the target MAG, and using two buffers increases the design complexity of the MAGs.
Al-Surmi et al. in BIB017 proposed a proactive, low-latency handoff scheme to support seamless and fast roaming of the MN in a PMIPv6 domain. To reduce both the handoff latency and the service disruption, the scheme performs the MN's pre-registration and pre-authentication simultaneously, in advance of the handoff, enabling the MN to reconfigure its interface more quickly. In addition, to prevent packet loss, an efficient buffering scheme is introduced in the LMA. In their work, when the LMA is alerted by the previous MAG about the LGD trigger, it starts multicasting the MN's profile to all the neighboring MAGs in order to reduce the handoff latency. However, the problem of identifying the neighboring MAGs still remains, and the LMA must multicast the MN's profile to the neighboring MAGs, which adds extra traffic overhead to the network. To reduce the accesses to the distant LMA, Kwon et al. in BIB015 proposed a fast handoff scheme that introduces the principle of a head MAG into the PMIPv6 architecture (HFPMIP). The head MAG provides the MAGs with the required MN mobility information; its use reduces the packet loss ratio and enables a fast and seamless handoff. However, the LMA is still accessed extensively for the location update operations, and when the network becomes large, the distance between the head MAG and the other MAGs becomes long, which degrades the scheme's performance. To specify the exact handoff starting time, Park et al. BIB018 proposed a proactive handoff scheme based on the MN's location. The next MAG to which the MN will attach is determined from the MN's current location to reduce false predictions. However, the network is overloaded by periodically exchanging the MNs' coordinates with the MAGs, and the MNs must be equipped with hardware, such as GPS, to provide the location information. Xu et al. BIB019 proposed a seamless handoff scheme for multi-interface MNs, such that when the MN hands off on one interface, it can continue to send and receive data packets over a second interface. To reduce both the handoff signaling cost and the end-to-end delay of the buffered packets, Jabir et al. in BIB020 proposed a low-cost fast handoff scheme based on the cluster-based PMIPv6 architecture, named CFPMIPv6, which guarantees a low packet loss ratio at a low handoff signaling cost. In CFPMIPv6, the intra-cluster handoff is carried out by the HMAG to reduce the involvement of the LMA in the handoff process. Table I summarizes the fast handoff research in terms of MN/network assistance, target prediction, handoff coordinator, network overhead, and buffering overhead. According to the literature analysis in this section, most fast handoff works guarantee a low packet loss ratio. However, the following points can be noted. Some schemes are inefficient because they rely on host-mobility principles (MN-assisted handoff) that involve the MNs in the mobility process; this requires the MN to install a complicated protocol stack and contradicts the PMIPv6 principle of relieving the MN from any participation in the mobility process. Fast handoff schemes also incur a high handoff signaling cost due to the involvement of the LMA in the handoff process. The resulting long handoff leads to buffering overload, since the incoming packets must be buffered until the handoff completes, and it also increases the end-to-end delay of the buffered packets, because they cannot be forwarded before the handoff ends.
In addition, fast handoff schemes may overload buffers by storing the incoming packets in more than one entity, and may overload the network by multicasting the incoming packets to both the previous and new MAGs. Schemes that rely on predicting the target MAG may fail when the prediction is wrong. These issues should therefore be considered when designing a fast handoff mechanism: it should relieve the MNs from handoff participation and provide low handoff latency, low network load, low buffering overhead, and low end-to-end delay.
A. Performance Evaluation
In this section, the handoff signaling cost is derived for all handoff schemes under consideration in order to evaluate and compare their performance. To ensure a level comparative platform, the fluid-flow model and the assumptions in BIB004 , BIB007 , BIB001 are used. We focus on intra-domain mobility only and, for simplicity, do not consider the message size or the processing cost of the entities in the computations. To assess the RO schemes in terms of their efficiency in recovering the prior RO after handoff, the signaling cost is also derived for all RO schemes under consideration, based on the fluid-flow mobility model and the assumptions presented above. The signaling cost required to recover the RO status in each RO scheme can be derived accordingly. Fig. 11 shows the handoff signaling cost of the RO schemes presented in this paper as a function of the number of hops between the MAGs and the LMA. It can be seen that all schemes incur a higher signaling cost as the LMA-MAG distance increases. However, the CBRO scheme shows the best signaling cost, because it is only slightly affected by the LMA-MAG distance owing to its reduced dependence on the LMA. In addition, although LIRO BIB002 shows a high signaling cost, it ensures a seamless handoff, which is not provided by other schemes such as BIB006 , BIB003 . Moreover, ABRO BIB005 shows a better signaling cost than LIRO, but it lengthens the communication path due to its dependence on the IAs.
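Although the exact expressions are specific to each scheme, the analyses in this section share a hop-count-proportional cost model. As a hedged sketch (the symbols below are illustrative, not taken from the cited papers), the cost of one control message between entities X and Y, and the signaling cost of a scheme, can be written as

C_{X,Y} = τ · h_{X,Y},    SC_scheme = E[N] · Σ_{m ∈ M_scheme} C_{Xm,Ym},

where τ is the unit transmission cost per hop, h_{X,Y} is the number of hops between X and Y, M_scheme is the set of control messages the scheme exchanges per handoff, and E[N] is the expected number of movements obtained from the mobility model described next.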
1) Network and Mobility Model:
The hexagonal network model is used for the performance evaluation. As shown in Fig. 6, in this network model each cell represents a MAG, a group of MAGs constitutes a cluster, and clusters are grouped together to constitute the LMA domain. Mobility models describe the movement pattern of mobile users and how their location, velocity, and acceleration change over time. The most commonly used mobility models are the fluid-flow model and the random-walk model. The fluid-flow model is more suitable for users with high mobility and infrequent speed and direction changes. On the other hand, when mobility is confined to a limited geographical area, such as a residential or business building, the random-walk model is more appropriate BIB001 . The fluid-flow mobility model is used here to analyze mobility in terms of the average number of nodes crossing the boundary of a given area and the average location update rate; it considers both the MN's movement direction and velocity. The movement direction of an MN within a PMIPv6 domain is distributed uniformly in the range (0, 2π). Let K and L be the number of rings in the domain and in the cluster, respectively. Then the total number of MAGs in the LMA domain is N = 3K(K − 1) + 1, while the total number of MAGs in a cluster is M = 3L(L − 1) + 1 BIB003 . Let v be the average speed of an MN (m/s); R the cell radius (m); μ_c, μ_s, and μ_d the cell, intra-domain, and inter-domain crossing rates BIB003 ; and μ_ac and μ_ic the intra- and inter-cluster crossing rates, respectively. Assuming that all MAGs have the same circular coverage area of size S = πR^2, the cell border crossing rate for a moving MN is

μ_c = 2v/(πR).    (1)

When an MN crosses an LMA domain border, it also crosses a MAG border. Then, if the LMA domain contains N MAGs, the domain border crossing rate is given by

μ_d = μ_c/√N.    (2)

The rate μ_s for MNs which cross MAG borders but stay in the same LMA domain is obtained by subtracting the LMA border crossing rate from the cell crossing rate:

μ_s = μ_c − μ_d.    (3)

Accordingly, the inter-cluster (μ_ic) and intra-cluster (μ_ac) crossing rates for clusters containing M MAGs are obtained as

μ_ic = μ_c/√M,    (4)
μ_ac = μ_c − μ_ic.    (5)

Using the fluid-flow mobility model, the average number of movements E[N_c] and the average numbers of intra- and inter-domain movements E[N_s] and E[N_d] can be calculated as shown in Eqs. (6)–(8) BIB002 , where λ_S is the session arrival rate BIB003 :

E[N_c] = μ_c/λ_S,    (6)
E[N_s] = μ_s/λ_S,    (7)
E[N_d] = μ_d/λ_S.    (8)

Similarly, the average numbers of intra- and inter-cluster movements E[N_ac] and E[N_ic] are calculated as

E[N_ac] = μ_ac/λ_S,  E[N_ic] = μ_ic/λ_S.    (9)
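For concreteness, the following short Python sketch evaluates the quantities of Eqs. (1)–(9); the function name and the returned structure are illustrative choices, not part of the cited analyses:

    import math

    def fluid_flow_quantities(v, R, K, L, lambda_s):
        """Fluid-flow crossing rates and expected movement counts for an MN
        of speed v (m/s) in cells of radius R (m), with K rings per LMA
        domain, L rings per cluster, and session arrival rate lambda_s."""
        N = 3 * K * (K - 1) + 1          # MAGs per LMA domain
        M = 3 * L * (L - 1) + 1          # MAGs per cluster

        mu_c = 2 * v / (math.pi * R)     # cell crossing rate, Eq. (1)
        mu_d = mu_c / math.sqrt(N)       # inter-domain rate, Eq. (2)
        mu_s = mu_c - mu_d               # intra-domain rate, Eq. (3)
        mu_ic = mu_c / math.sqrt(M)      # inter-cluster rate, Eq. (4)
        mu_ac = mu_c - mu_ic             # intra-cluster rate, Eq. (5)

        rates = {"c": mu_c, "s": mu_s, "d": mu_d, "ac": mu_ac, "ic": mu_ic}
        expected = {k: r / lambda_s for k, r in rates.items()}  # Eqs. (6)-(9)
        return rates, expected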
TABLE II. PARAMETERS VALUES
2) Signaling Cost Analysis: The signaling cost required for transmitting one control packet between two nodes is taken to be proportional to the number of hops between them. On this basis, the signaling cost SC of each of the MN-assisted schemes BIB001 , BIB006 , BIB003 , BIB008 , BIB010 can be derived by summing the hop-proportional costs of the scheme's handoff messages, weighted by the expected number of movements. Table II shows the parameter values used for the signaling cost calculations. Fig. 7 shows the signaling cost of the MN-assisted handoff schemes as a function of the total number of hops between the MAGs and the LMA. It can be seen that the signaling cost of all schemes increases with the number of hops. However, Shih shows the lowest signaling cost due to delegating the mobility process to the MAP rather than the LMA. The handoff signaling cost of the network-assisted schemes BIB004 , BIB005 , BIB009 , BIB011 , BIB002 , BIB007 can be derived in the same way, where E[N_ac] and E[N_ic] are the average numbers of intra- and inter-cluster movements, respectively. Fig. 8 shows the signaling cost of the network-assisted handoff schemes as a function of the total number of hops between the MAGs and the LMA. Again, the signaling cost of all schemes increases with the LMA-MAG distance; however, the schemes that do not rely entirely on the distant LMA show the lowest cost. CFPMIPv6 BIB012 shows the best performance due to delegating the intra-cluster handoff to the HMAG rather than the LMA.
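To illustrate how such hop-count costs are evaluated, the following minimal Python sketch computes the expected signaling cost of a hypothetical scheme; the parameter values and the per-handoff message list are assumptions for illustration and are not the values of Table II:

    import math

    # Illustrative parameters (assumed; not the actual Table II values).
    V = 5.0           # average MN speed (m/s)
    R = 100.0         # cell radius (m)
    K = 3             # rings per LMA domain
    LAMBDA_S = 0.01   # session arrival rate (sessions/s)
    TAU = 1.0         # unit transmission cost per hop

    N = 3 * K * (K - 1) + 1             # MAGs per LMA domain
    mu_c = 2 * V / (math.pi * R)        # cell crossing rate, Eq. (1)
    mu_d = mu_c / math.sqrt(N)          # inter-domain crossing rate, Eq. (2)
    mu_s = mu_c - mu_d                  # intra-domain crossing rate, Eq. (3)
    E_ns = mu_s / LAMBDA_S              # expected intra-domain handoffs, Eq. (7)

    # Hypothetical scheme: one PBU/PBA pair over h hops to the LMA plus two
    # one-hop context-transfer messages between neighbouring MAGs.
    for h in range(1, 11):              # LMA-MAG distance in hops
        per_handoff = TAU * (h + h + 1 + 1)
        print(f"h={h:2d}  SC={E_ns * per_handoff:8.2f}")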
V. ROUTE OPTIMIZATION
Despite its advantages in reducing the handoff latency and relieving the MNs from participating in the handoff process, PMIPv6 still has some demerits due to its dependence on a single, central LMA BIB008 . In PMIPv6, all data packets must pass through the LMA even when the communicating entities are located close to each other, as shown in Fig. 9 [10] . The data packets thus traverse a non-optimized path, which increases the end-to-end packet delay. In addition, involving the LMA in all handoff operations increases the time required to recover the optimal route after a handoff. The basic PMIPv6 protocol BIB001 addressed localized routing between two MNs connected to the same MAG. However, it did not specify localized routing between two MNs connected to different MAGs attached to the same or different LMAs. The main objective of the localized routing solutions is therefore to provide a scheme that allows data packets to be routed directly between the communicating MAGs without traversing the LMA BIB004 . Establishing a direct path between the communicating MAGs enhances the network performance BIB008 because: i) it reduces the data traffic between the MAGs and the LMA, which in turn reduces the traffic overhead and congestion on the core network; ii) the direct path improves the packet delivery cost, especially when the communicating MNs reside near each other; and iii) offloading traffic from the LMA alleviates the bottleneck at the LMA, which in turn improves the network scalability. The return routability procedure of MIPv6, which is used to solve the triangle routing problem, is not suitable for PMIPv6, since the MN is not able to perform the correspondent binding update. In PMIPv6, the MNs do not participate in the binding process and are kept completely agnostic of their topological location. In addition, unlike MIPv6, the mobility management signaling is performed by the MAGs and the LMA, among which security may be established by bootstrapping. Thus, the MIPv6 return routability schemes are not considered in this paper, because we assume that security associations among the network entities are already established. The key aspects in designing route optimization (RO) schemes are: i) how to determine the address of the second party of the communication session (the target MAG) in order to maintain a direct tunnel between the communicating MAGs; and ii) how to recover the optimal route after the MNs hand off to new target MAGs. There have been several attempts to improve the packet delivery cost in PMIPv6. These attempts share the common goal of shortening the communication path, and they differ in the number of control messages, the RO initiation entity, and the way of recovering the RO status after a handoff. Fig. 10 shows different RO schemes, where two MNs are attached to different MAGs and registered to the same LMA. The RO trigger event (ROT) occurs when the LMA receives the first packet from the MN to the CN, or when it receives a PBU message registering an MN that already has RO state with the CN. To maintain an optimal route in PMIPv6, Liebsch et al. proposed an RO scheme that allows a pair of MAGs to communicate directly without the involvement of the LMA. In their proposed proxy-RO mode, the LMA initiates the RO process by sending the required RO messages to the pair of MAGs when it receives the ROT, as shown in Fig. 10 . The LMA sends an ROinit message to inform MAG2 that the LMA is the RO controller and that the optimal route should be maintained between the MN and the CN.
Then the LMA sends an ROsetup message to MAGn informing it of the destination MAG address (MAG2) in order to create a direct bi-directional tunnel between the MAGs. After that, the LMA sends an ROsetup message to MAG2 informing it that MAGn is ready to create an RO with it. This scheme is considered a heavyweight RO due to the high signaling overhead required to accomplish the RO procedure. To reduce the signaling required for maintaining RO, Dutta et al. in BIB002 proposed a lightweight RO scheme which reduces the number of required RO messages. In their proposed RO, shown in Fig. 10 , the LMA initiates the RO procedure by exchanging Correspondent Binding Update (CBU) and Correspondent Binding Acknowledgement (CBA) messages with the source MAG to notify it of the destination MAG address. This scheme is considered a lightweight procedure due to its low signaling cost. However, it provides a single-direction RO only, from MN to CN, and the whole procedure must be repeated for the inverse direction BIB005 . Loureiro et al. in BIB006 enhanced the previous scheme to ensure stable maintenance of routing states during handoff, independent of whether the communicating MNs belong to a single LMA or to multiple LMAs. To accomplish this, they introduced a rendezvous control point and three pairs of messages: the RO Trigger/Ack, RO Init/Ack and RO Setup/Ack messages. As shown in Fig. 10 , the LMA sends an ROinit message to MAGn to initialize the RO for traffic from MN to CN. Consequently, MAGn sends an ROsetup message to MAG2 to activate the RO for the inverse path of traffic from CN to MN. After that, MAGn sends an ROinitAck message back to the LMA to report the RO completion. Wu et al. BIB004 proposed a localized routing scheme supporting IPv4 transport networks and introduced the Local Routing Optimization Request (LROREQ) and Local Routing Optimization Response (LRORSP) messages to establish the local routing path. Krishnan et al. BIB009 proposed another localized routing scheme to exchange data traffic directly between the communicating MAGs and introduced the Localized Routing Initiation (LRI) and Localized Routing Acknowledgment (LRA) messages to set up the optimal route. As shown in Fig. 10 , the LMA initiates the RO when it receives an ROT by exchanging the LRI and LRA messages with the communicating MAGs. In addition to the PMIPv6 RO schemes proposed at the IETF, several works have been carried out to enhance localized routing. To guarantee a fast and smooth recovery of the optimal route after handoff and to solve the packet out-of-order problem, Choi et al. in BIB003 proposed an LMA-Initiated RO (LIRO) protocol which provides a bi-directional RO between the communicating MAGs. The LMA is responsible for multicasting any update of the RO status to the MAGs involved in the communication process (MAG1, MAG2, and MAGn). LIRO provides a bi-directional RO while smoothly recovering the RO after handoff; however, the handoff latency is still long, since all the RO signaling must be performed by the LMA, which may reside far from the communicating MAGs. Boc et al. in BIB007 proposed an Anchor-Based RO (ABRO) scheme to shorten the route path and to provide more control and flexibility over the RO procedure. The main idea of ABRO is to separate the control and data paths by introducing Intermediate Anchor (IA) entities, such that the main RO procedure is controlled by the LMA, while the communication process is handled by the IAs.
ABRO provides a short route path by allowing the LMA to select the optimal IA close to the communicating MAGs. In addition, the LMA is able to offload data traffic to other networks. However, introducing the IAs is not without penalties. Packets must traverse the IAs, which adds extra delay to the packet delivery cost. Furthermore, the LMA performs extra functions for selecting the optimal IAs and exchanges messages with both the IAs and the communicating MAGs, which increases the signaling cost. Rasem et al. in BIB010 presented the Optimized PMIPv6 (O-PMIPv6) scheme to enhance PMIPv6 performance by combining F-PMIPv6 with localized routing for PMIPv6. The main objective of O-PMIPv6 is to perform the fast handoff and localized routing operations in parallel. To reduce service disruption, the localized routing LRI/LRA messages are encapsulated in the fast handoff HI/HAck messages. In this way, an MN is able to maintain its LR while moving, and there is no need to establish a new LR session. The main problem is the involvement of the MN in the mobility process, which conflicts with the basic PMIPv6 principle. In addition, the scheme fails if the handoff prediction is incorrect, which overloads the network with unnecessary signaling. The works above have shortened the routing path; however, most of them have either added extra signaling cost or overloaded the network entities with extra functions. In addition, most of the proposed works involve the LMA in the handoff process, which results in a long handoff latency. Therefore, Jabir et al. proposed Cluster-Based RO (CBRO) as a new RO scheme based on the cluster-based PMIPv6 architecture to shorten the communication path while reducing the handoff signaling cost. The authors discussed different communication and mobility scenarios, and their proposed CBRO has shown a low signaling cost for recovering the prior RO after handoff. This is attributed to the exclusion of the LMA from participating in the intra-cluster handoff. Liu et al. BIB011 proposed a route optimization scheme for multi-interface communicating MNs. These MNs may have interfaces attached to the same MAG. Thus, to reduce the communication path, the LMA should select the shortest communication path through this shared MAG rather than a direct tunnel between the MAGs of the communicating MNs. Table III summarizes the route optimization schemes in terms of the RO trigger entity, whether the RO is maintained for both directions or a single direction, and the network overhead represented by the signaling cost. More comparisons and classifications of RO schemes can be found in BIB008 . According to the qualitative analysis shown in the table, the main issues that should be considered when designing RO schemes are the following. The first is to determine the proper RO trigger entity: the LMA provides a more centralized and secure entity, but it increases both the possibility of a bottleneck at the LMA and the RO delay. The second issue is whether the scheme provides RO for a single direction or for both directions of the communication; single direction means that the RO is maintained from the source to the destination node only, so the RO procedure must be repeated to maintain the RO for the opposite direction. Finally, the RO scheme should not overload the network by exchanging a large number of messages to create the tunnel required for the optimal path.
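To make the LRI/LRA signaling concrete, the following Python sketch simulates an LMA-initiated localized routing setup in the spirit of the schemes above. The binding-cache layout, class names, and message strings are illustrative assumptions rather than an implementation of any specific draft or RFC; the sketch only shows the control flow: on an ROT, the LMA resolves both serving MAGs and triggers the tunnel setup at each side.

```python
from dataclasses import dataclass, field

@dataclass
class MAG:
    """Mobile Access Gateway: installs a direct tunnel on LRI."""
    name: str
    tunnels: dict = field(default_factory=dict)  # peer MAG name -> tunnel state

    def on_lri(self, peer_mag: str, mn_id: str) -> str:
        # Install forwarding state for mn_id towards the peer MAG
        # and acknowledge with an LRA message.
        self.tunnels[peer_mag] = f"direct tunnel for {mn_id}"
        return f"LRA({self.name} -> LMA, peer={peer_mag})"

class LMA:
    """Local Mobility Anchor acting as the RO controller."""
    def __init__(self, binding_cache: dict):
        self.bc = binding_cache  # MN identifier -> serving MAG

    def on_ro_trigger(self, mn_id: str, cn_id: str) -> list:
        """ROT: first MN->CN packet observed at the LMA."""
        mag_mn, mag_cn = self.bc[mn_id], self.bc[cn_id]
        if mag_mn is mag_cn:
            return ["local forwarding at the shared MAG"]  # same-MAG case
        # LRI/LRA exchange with both MAGs sets up a bi-directional tunnel.
        return [mag_mn.on_lri(mag_cn.name, mn_id),
                mag_cn.on_lri(mag_mn.name, cn_id)]

mag1, mag2 = MAG("MAG1"), MAG("MAG2")
lma = LMA({"MN": mag1, "CN": mag2})
print(lma.on_ro_trigger("MN", "CN"))
```

Note that in this LMA-initiated variant the anchor remains the single point of control, which mirrors the trade-off recorded in Table III: centralized triggering is simple and secure, but every RO setup and recovery still costs a round trip to the LMA.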
A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> VI. NETWORK MOBILITY <s> NEMO (Network Mobility) is proposed to support node mobility collectively. NEMO BSP is the most popular protocols to support NEMO based on MIPv6. However it does not satisfy requirements of realtime and interactive application due to problems, such as long signaling delay and movement detection time. Also MN should have mobility function for its handover. Proxy MIPv6 (PMIPv6) is proposed to overcome defects of MIPv6 based protocols. In this paper, we propose a Network Mobility supporting scheme, which supports MNs’ mobility between PMIPv6 network and mobile network as well as the basic network mobility. <s> BIB001 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> VI. NETWORK MOBILITY <s> In this paper, we propose a network mobility supporting scheme (N-NEMO) in Proxy Mobile IPv6 (PMIPv6) network, which is an issue still up in the air for the PMIPv6. In the N-NEMO, a tunnel splitting scheme is used to differentiate the inter-Mobility Access Gateway (MAG) and intra-MAG mobility. The performance analysis and comparison between other related schemes show that N-NEMO reduces the signaling cost significantly. Besides, it enhances the efficiency and scalability to provide the comprehensive network mobility in the PMIPv6 context. <s> BIB002 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> VI. NETWORK MOBILITY <s> In this paper, we propose a Network-based NEtwork MObility supporting scheme (N-NEMO) in Proxy Mobile IPv6 (PMIPv6) network, which is an issue still up in the air for the basic PMIPv6 protocol. The N-NEMO, like PMIPv6, bases mobility support on network functionality, thus enabling conventional (i.e., not mobility-enabled) IP devices to change their point of attachment without disrupting ongoing communications. As a result, N-NEMO enables off-the-shelf IP devices to roam within the fixed infrastructure, attach to a mobile network and move with it, and also roam between fixed and mobile points of attachment while using the same IP address. Besides, a tunnel splitting scheme is used in N-NEMO to differentiate the inter-Mobility Access Gateway (MAG) mobility and intra-MAG mobility. The analyzing results show that N-NEMO reduces the signaling cost significantly and enhances the efficiency and scalability of network mobility in the PMIPv6 context. <s> BIB003 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> VI. NETWORK MOBILITY <s> Network mobility has attracted large attention to provide vehicles such as trains with Internet connectivity. NEMO Basic Support Protocol (NEMO-BS) supports network mobility. However, through our experiment using a train in service, NEMO-BS shows that the handover latency becomes very large if the signaling messages are lost due to instability of the wireless link during handover. This paper proposes PNEMO, a network-based localized mobility management protocol for mobile networks. In PNEMO, mobility management is basically handled in the wired network so that the signaling messages are not transmitted on the wireless link when handover occurs. This makes handover stable even if the wireless link is unstable during handover. PNEMO is implemented in Linux. 
The measured performance shows that the handover latency is almost constant even if the wireless link is unstable when handover occurs, and that the overhead of PNEMO is negligible in comparison with NEMO-BS. <s> BIB004 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> VI. NETWORK MOBILITY <s> NEtwork MObility (NEMO) provides that a moving network involving mobile network nodes (MNNs) can move around the Internet without loss of connection. NEMO Basic Support (NBS) has been developed as an extension of Mobile IPv6 (MIPv6) so that it succeeds to drawbacks of host-based mobility management protocol. In NBS, the moving network keeps its connectivity with its home agent (HA) through its registration procedure. In other words, the moving network is required to obtain its new address and to send its own mobility signaling to the HA for every movements. In this paper, a simple and lightweight mechanism for NEMO within Proxy Mobile IPv6 (PMIPv6), which is a network-based mobility management protocol, is introduced. The proposed mechanism enables a moving network to change its point of attachment at a given PMIPv6 domain without acquiring a new address and sending its own mobility signaling. Mobility service provisioning entities residing at the PMIPv6 domain are extended to support NEMO. The analytical performance analysis is conducted to demonstrate that the moving network in the proposed mechanism achieves the reduced traffic cost and handover latency compared to NBS. <s> BIB005 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> VI. NETWORK MOBILITY <s> This paper proposes an efficient network mobility support scheme with direct home network prefix (HNP) assignment to reduce the location update cost and packet tunneling cost. Since the HNP of a Mobile Network Node (MNN) is directly assigned by the moving mobile access gateway (mMAG), instead of the local mobility anchor (LMA), using a sub-prefix of the mMAG's HNP, the proposed scheme minimizes both the location update and the packet tunneling cost without any additional extension of the PMIPv6 protocol. Numerical results show that the proposed scheme outperforms the existing schemes. <s> BIB006
Hosts may move together as a group, as in medical care, where a number of sensor nodes are fixed on a patient's body, or in moving vehicles, where many passengers are attached to a movable network. It is not efficient for each MN to perform its own handover procedure separately when the mobile network moves. In addition, not all MNs are able to run mobility protocols like MIPv6 BIB001 , . Therefore, based on MIPv6, the IETF Network Mobility working group has standardized the NEtwork MObility (NEMO) protocol. A mobile network consists of a number of Mobile Network Nodes (MNN) connected to a Mobile Router (MR). NEMO introduced the MR to perform the required mobility signaling and attach its MNN members to an Access Router (AR). MNNs use the Mobile Network Prefix (MNP) advertised by the MR to configure their IP addresses. Even when the MR moves from one access network to another, its MNP does not change, which makes mobility transparent to the MNNs. The MNNs are not aware of the handover, and all packets flow through a bidirectional tunnel between the MR and its HA. NEMO thus allows all MNs in the mobile network to keep their ongoing sessions during handover, irrespective of their capabilities . Three types of MNNs can be recognized: Local Mobile Nodes (LMN), which can move within the mobile network or to other networks; Local Fixed Nodes (LFN), which are fixed nodes; and Visiting Mobile Nodes (VMN), which come from other networks and attach to the mobile network . The MR's handoff procedure is similar to that of MIPv6: acquiring a new CoA, sending a Binding Update to the HA, and establishing a bidirectional tunnel between the MR and the HA. Packets sent from a CN to a host are routed to the HA of the MR (HA-MR) and forwarded to the MR through the established bidirectional tunnel. The MR receives each packet, decapsulates it, and then forwards it to the destination host. The NEMO protocol inherits MIPv6 drawbacks such as long signaling delay and movement detection time. In addition, all MNNs are affected by the handoff delay of the MR. Thus, supporting NEMO in PMIPv6 would reduce the signaling overhead required for MR registration. Fig. 12 shows the scenario where NEMO is supported by PMIPv6, along with the required main entities, tables, and prefixes. Several research works have been presented to support NEMO in PMIPv6. J. H. Lee [61] presented the possible scenarios for integrating NEMO and PMIPv6; however, these scenarios assumed that the MR has the MIPv6 protocol installed, and they did not consider MNN mobility between MRs and MAGs BIB002 . Bernardos et al. described the problem of supporting network mobility in a PMIPv6 domain. Their analysis of the current technologies (NEMO and PMIPv6) showed that these standards are not able to provide full support of NEMO in a PMIPv6 network. The main problem in combining NEMO and PMIPv6 is that the addresses used by the mobile network belong to the MNP, while PMIPv6 uses different HNP addresses. Thus, when MNs move from an MR to a MAG, they must change their addresses. Soto et al. proposed NEMO-enabled PMIPv6 (NPMIPv6) to fully integrate NEMO in a PMIPv6 domain. NPMIPv6 provides Internet connectivity for users from fixed MAGs or from mobile MAGs. Users can move between an MR and fixed MAGs while keeping their ongoing sessions. NPMIPv6 introduced the moving MAG (mMAG), which is responsible for registering its MNNs and itself with the LMA. As shown in Fig.
13 , the LMA cache table is extended with a new field, the M flag, to indicate that an MNN is connected to a mobile network. A data packet destined to an MNN is intercepted by the LMA, which recursively searches its BCE to find the mMAG to which the MNN is attached. The data packet is then encapsulated twice by the LMA: the inner tunnel is for the mMAG and the outer for the fixed MAG. Although this mechanism provides full integration between NEMO and PMIPv6, it incurs a large tunneling overhead due to the use of multiple encapsulations, even for local communications. In addition, the case in which MRs move in from outside the PMIPv6 domain was not considered. To reduce the tunneling overhead of NPMIPv6, Yan et al. BIB003 proposed Network Mobility Support in PMIPv6 Networks (N-NEMO), which splits the tunnel to the MNN into two parts: an LMA-MAG tunnel and a MAG-mMAG tunnel. To locally register an MNN at the MAG, N-NEMO introduced two messages: the Localized Proxy Binding Update (LPBU) and the Localized Proxy Binding Acknowledgement (LPBA). When a data packet reaches the LMA, the LMA encapsulates it towards the destination MAG, which in turn decapsulates it and searches for the MR in its binding list. The destination MAG then encapsulates the packet again and sends it to the MR to be forwarded to the MNN. Although N-NEMO reduced the multiple-encapsulation overhead, it still incurs tunneling overhead for local communications and when MRs become nested BIB004 . Teraoka et al. BIB004 proposed PNEMO, a network-based localized mobility management protocol for mobile networks, to reduce the tunneling overhead by configuring the routing information of the MNN in the MR. H-B. Lee et al. BIB001 proposed a scheme to support node mobility between MRs and MAGs, assuming MNs have no mobility support capabilities. The MR acts as a MAG, exchanging the messages required for MN registration with the LMA and emulating the MN's home network. They assumed that another PMIPv6 network (consisting of the MR and an LMA for the MR) runs over the underlying PMIPv6 network. When an MN attaches to the mobile network, the MR exchanges the binding update message (PBU) required for MN registration with the LMA. The PBU is treated as a normal IP packet in the underlying PMIPv6 network. When the MN moves out of the MR, the target MAG performs the MN registration using standard PMIPv6 operations. However, this scheme assumed that the MRs and their MNs are initially registered in the PMIPv6 domain. J-H Lee et al. BIB005 proposed a simple and lightweight mechanism for supporting NEMO within Proxy Mobile IPv6 (PMIPv6). The PMIPv6 entities (LMA and MAGs) were extended to enable an MR to change its point of attachment within a given PMIPv6 domain without acquiring a new address. J-H Lee et al. also presented PMIPv6-based NEMO (P-NEMO), which extended the PMIPv6 mobility service provisioning entities to provide Internet connectivity to vehicles while moving. The vehicle is relieved from participating in the mobility process, which reduces the signaling overhead and provides mobility services for vehicles without installed mobility protocols. In addition, they proposed the Fast PMIPv6-based NEMO (FP-NEMO) scheme to improve the handoff latency by anticipating the vehicle's new point of attachment. However, the proposed mechanisms did not consider MNN mobility, and they assumed that the MR is initially registered in the PMIPv6 domain. Petrescu et al.
presented a draft to manage network mobility in a PMIPv6 domain without changing the PMIPv6 specifications and to maintain bidirectional communication between an LFN and any corresponding node in the Internet. To avoid changes to PMIPv6, they presented a "prefix division" mechanism, where the HNP, which is typically assigned by PMIPv6 to a mobile host, is used by the MR to form Mobile Network sub-prefix(es). These sub-prefixes are used by the LFNs within the moving network to create their IPv6 addresses. S. Jeon et al. presented a draft to support network mobility over the PMIPv6 protocol. They followed the same idea, introducing a new functional entity called the mMAG, which is responsible for detecting MN movements and registering new MNs with the LMA. The MN's IP session continuity while moving between the MR and a MAG is supported in their draft. The mMAG is seen as a normal MN by the LMA and as a fixed MAG by its attached MNNs. Choi et al. BIB006 proposed a network mobility support scheme in PMIPv6 that provides a low binding update cost for the MNN by letting the mMAG, instead of the LMA, assign the HNP to MNNs directly. The mMAG is treated as a normal mobile node for registration and de-registration with the LMA. When an MNN attaches to the mMAG, the mMAG assigns the required HNP to the MNN from its own HNP sub-prefix, avoiding the signaling overhead of exchanging PBU/PBA with the LMA. To reduce the packet tunneling cost, the incoming data is sent by the LMA to the mMAG using its HNP, and the mMAG in turn forwards the packets to the destination MNN. Table IV summarizes the research work devoted to NEMO support in the PMIPv6 protocol in terms of tunneling overhead, MNN mobility, new messages, entity modifications, and support for visiting MRs.
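The recursive binding-cache lookup and double encapsulation used by NPMIPv6 can be illustrated with a short Python sketch. The cache layout and names below are illustrative assumptions; the sketch only shows how an LMA could resolve a nested attachment chain (MNN behind an mMAG behind a fixed MAG) and wrap the packet once per tunnel level, matching the inner mMAG / outer fixed-MAG encapsulation described above.

```python
# Minimal sketch of NPMIPv6-style downstream forwarding at the LMA.
# BCE maps a node to its attachment point plus an M flag;
# the entries are illustrative, not taken from any specification.
BCE = {
    "MNN1": ("mMAG1", True),   # M flag set: node sits behind a moving MAG
    "mMAG1": ("MAG2", False),  # the mMAG itself is anchored at a fixed MAG
}

def resolve_path(dest: str) -> list:
    """Recursively follow the BCE until a fixed MAG is reached."""
    path, node = [], dest
    while node in BCE:
        anchor, _m_flag = BCE[node]
        path.append(anchor)
        node = anchor
    return path  # e.g. ["mMAG1", "MAG2"]

def encapsulate(packet: str, dest: str) -> str:
    """Wrap the packet once per tunnel level, innermost first."""
    for anchor in resolve_path(dest):
        packet = f"IPv6[to={anchor}]({packet})"
    return packet

print(encapsulate("payload->MNN1", "MNN1"))
# IPv6[to=MAG2](IPv6[to=mMAG1](payload->MNN1)) -- double encapsulation
```

The N-NEMO tunnel split discussed above amounts to terminating the outer wrap at the fixed MAG and letting the MAG apply the inner wrap locally, which is why it removes the LMA's multiple-encapsulation burden but not the tunneling on the last MAG-mMAG segment.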
A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> VII. LOAD BALANCING <s> In Proxy Mobile IPv6 (PMIPv6), a new entity called Mobile Access Gateway (MAG) performs the mobility-related signaling with Local Mobility Anchor (LMA) on behalf of MNs (Mobile Nodes). However, a number of MNs must be associated with a MAG and hence, the MAG can be easily overloaded. Thus, in this paper, we propose a load balancing mechanism for the PMIPv6 network. We also discuss about using IEEE 802.21 Media Independent Handover (MIH) protocol in the load balancing to learn the load status at the candidate Point of Attachments (PoAs), in addition to the load status at the candidate MAGs. It is shown via numerical and simulation results that the proposed load balancing mechanism can realize lower queuing delay at the MAG and higher data transmission rate at the PoA in the PMIPv6 network. <s> BIB001 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> VII. LOAD BALANCING <s> Abstract In proxy mobile IPv6 (PMIPv6), a new entity called mobile access gateway (MAG) performs the mobility-related signaling with local mobility anchor (LMA) on behalf of mobile nodes (MNs) . However, a number of MNs must be associated with an MAG and hence, the MAG can be easily overloaded. Thus, in this paper, we propose a load balancing mechanism for the PMIPv6 network. It is shown via numerical and simulation results that the proposed mechanism reduces the queuing delay in the PMIPv6 network. <s> BIB002 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> VII. LOAD BALANCING <s> In Proxy Mobile IPv6, network entities such as mobile access gateway and local mobility anchor perform mobility on behalf of the mobile nodes. Thus, PMIPv6 decreases the handover latency and the network enables mobile nodes to receive mobility support although they do not have mobility protocol stack within them. However, when a large number of mobile nodes are attached to the PMIPv6 domain, or they attach to a specific MAG, an MAG easily suffers from heavy load. As the load over an MAG increases, the end-to-end transmission delay and the number of packet loss increase. Then the load leads to mobility failure. Yet, the current specification of PMIPv6 does not provide any solution for this problem. Thus, in this paper, we propose a load balancing scheme for the PMIPv6 network to balance the load over MAGs in the domain. It is shown in the simulation that the proposed scheme distributes loads over MAGs in a PMIPv6 domain and also reduces packet losses. <s> BIB003 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> VII. LOAD BALANCING <s> Mobile Access Gateway (MAG) is a component of Proxy Mobile IPv6 (PMIPv6) which provides network-layer transparent mobility to mobile nodes (MN). MAG serves a local geographical area and mobile nodes in its vicinity may attach to it to get the mobility services from its controlling PMIPV6 domain. Since MAG is the point of attachment of mobile nodes, negotiated and guaranteed quality of service (QoS) is affected in case of service disruptions and overload of the MAG. To avoid/minimize the degradation of quality of service, we propose effective mechanisms to share the load of affected MAG with the MAG(s) that are working under normal conditions. 
We propose to handover certain mobile nodes to other MAGs depending upon their geographical serving area and current capacity. Furthermore, location of mobile node, its quality of service profile, direction of motion and its multi-interface capability are major factors in selecting the mobile nodes for handover. <s> BIB004 </s> A Comprehensive Survey of the Current Trends and Extensions for the Proxy Mobile IPv6 Protocol <s> VII. LOAD BALANCING <s> Researchers have been emphasizing on employing mobile agents to assist vertical handover decisions in 4G mainly because it reduce consumption of network bandwidth, delay in network latency and the time taken to complete a particular task in PMIPv6 environment is minimized as well. However, when more than the desired number of mobile nodes is attached to the PMIPv6 domain including a particular MAG, the MAG gets overload. In fact, with this increasing load, the end-to-end transmission delay and the number of packet loss also increases. Since the number of mobile users and the associated applications are increasing exponentially, the problem stated above is obvious. Yet, the current specification of PMIPv6 does not provide any solution for this problem. Thus, this work extends the previous works wherein the employment of mobile agents in PMIPv6 has been justified but no efforts have been put towards reducing and hence balancing the load of overloaded MAG. The paper proposes a skilful load balancing scheme for the PMIPv6 network to balance the load over MAGs in the domain. <s> BIB005
In PMIPv6, the mobility-related signaling is performed by the MAGs on behalf of the MNs attached to their access links. However, when a large number of MNs attach to a specific MAG, the MAG easily suffers from heavy load, which increases both the end-to-end transmission delay and the number of lost packets. This problem has not yet been considered in the current specification of PMIPv6, so applying a load-sharing mechanism among MAGs can improve the overall performance of the PMIPv6 network. Kim and Lee BIB002 proposed a load balancing mechanism for the PMIPv6 network to distribute the load among overlapping MAGs, which improves the delay performance for MNs in the PMIPv6 domain. In their mechanism, the heartbeat message is utilized by the MAGs to periodically send their load information to the LMA. The LMA stores the load information in its policy database, to be used as a reference for future load balancing actions and to compute the load status of the overall PMIPv6 domain. As shown in Fig. 14 , when the overall load exceeds a certain threshold, the LMA sends a Heartbeat message to the most overloaded MAG to perform the required load balancing procedure. The selected MAG reacts by selecting MN(s) to change their current attachment to another target MAG. The target MAG is selected by the current MAG according to its load and the signal strength received by the MNs. Basically, the MN requesting the highest data rate is selected, unless that MN has a real-time service session. This balancing mechanism was evaluated both numerically and by simulation, and the results showed that it reduced both the average queuing delay and the probability of MAGs becoming overloaded. The use of IEEE 802.21 in load balancing has also been discussed by Kim and Lee BIB001 to determine the load status of the candidate Points of Attachment (PoA). Knowing the load status of the PoAs is important because, even if the load at the target MAG is low, the target PoA may be overloaded if most of the target MAG's load is concentrated on that PoA. Their proposed mechanism reduced the queuing delay at the MAGs and provided a high data transmission rate at the PoAs. Kong et al. BIB003 proposed a new load balancing mechanism to distribute the load over MAGs with low signaling overhead for scanning the candidate target MAGs. In their mechanism, MAG load information is exchanged among the MAGs in the domain, so that each MAG is able to build a list of candidate MAGs from which to select the target MAG. A lightly loaded MAG is selected during the initial MN attachment, and MNs are instructed to hand over before the current MAG becomes overloaded. By preventing MAGs from becoming overloaded, packet loss is reduced and the load is distributed among the MAGs with the least signaling overhead. The IETF considered the load balancing problem, and an RFC was standardized by Jiang , in which the MAGs' load information is sent periodically to the LMA, which in turn constructs a candidate MAG list to be used during overload. The factors to be taken into account when selecting the target MAG are specified in BIB004 . Dimple and Kailash BIB005 proposed an agent-based scheme to balance the load among MAGs in the PMIPv6 domain. The mobile agent can move from one place to another to reduce the load on the MAGs. It visits one mobile node to collect data and then moves on to all MNs attached to the MAG, collecting and transmitting only the relevant data to reduce the communication overhead.
The load balancing scheme is based on several criteria for selecting the MNs that should be handed off from one MAG to another; for example, the MNs with the highest session-to-mobility ratio are selected for handoff, while MNs with real-time services must not be selected. According to the above literature, the main issues that should be considered when designing a load-sharing mechanism are the following: building the candidate MAG list should not overload the network with a large number of messages; the selection of candidate MN(s) should consider the traffic type used by the MN(s); a ping-pong problem may arise when the target MAG becomes overloaded and triggers the MN to return to its previous MAG; incoming traffic should be buffered and forwarded to the target MN; and, finally, building the candidate list is not an easy task, due to the dynamic nature of network systems, which makes the MAG loads vary at every moment.
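As a rough illustration of the threshold-based logic discussed in this section, the Python sketch below selects the most overloaded MAG, chooses candidate MNs while skipping real-time sessions and preferring high data-rate requests, and reassigns them to the least-loaded MAG without pushing it over the threshold (to avoid the ping-pong effect). All thresholds, field names, and the load model are illustrative assumptions.

```python
OVERLOAD_THRESHOLD = 0.8  # illustrative utilization threshold

def rebalance(mags: dict) -> list:
    """mags: name -> {"load": float, "mns": [{id, rate, realtime}]}.
    Returns (mn_id, source_mag, target_mag) handover decisions."""
    decisions = []
    if len(mags) < 2:
        return decisions
    # LMA side: find the most overloaded MAG above the threshold.
    overloaded = max(mags, key=lambda m: mags[m]["load"])
    if mags[overloaded]["load"] < OVERLOAD_THRESHOLD:
        return decisions  # nothing to do
    # MAG side: candidates sorted by requested data rate,
    # excluding MNs with ongoing real-time sessions.
    candidates = sorted(
        (mn for mn in mags[overloaded]["mns"] if not mn["realtime"]),
        key=lambda mn: mn["rate"], reverse=True)
    for mn in candidates:
        target = min((m for m in mags if m != overloaded),
                     key=lambda m: mags[m]["load"])
        if mags[target]["load"] >= OVERLOAD_THRESHOLD:
            break  # avoid ping-pong: never push a MAG over the threshold
        decisions.append((mn["id"], overloaded, target))
        mags[target]["load"] += mn["rate"]       # rough load transfer
        mags[overloaded]["load"] -= mn["rate"]
        if mags[overloaded]["load"] < OVERLOAD_THRESHOLD:
            break
    return decisions

mags = {"MAG1": {"load": 0.9,
                 "mns": [{"id": "MN1", "rate": 0.2, "realtime": False},
                         {"id": "MN2", "rate": 0.1, "realtime": True}]},
        "MAG2": {"load": 0.3, "mns": []}}
print(rebalance(mags))  # [('MN1', 'MAG1', 'MAG2')]
```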
A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Introduction <s> Perceptrons: An Introduction to Computational Geometry. <s> BIB001 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Introduction <s> Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. This article aims to provide a comprehensive tutorial and survey about the recent advances toward the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic codesigns, being proposed in academia and industry. The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the tradeoffs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities. <s> BIB002
"A neuron is nothing more than a switch with information input and output. The switch will be activated if there are enough stimuli of other neurons hitting the information input. Then, at the information output, a pulse is sent to, for example, other neurons " . Brain-inspired machine learning imitates in a simplified manner the hierarchical operating mode of biological neurons BIB002 . The concept of artificial neural networks (ANN) achieved a huge progress from its first theoretical proposal in the 1950s until the recent considerable outcomes of deep learning. In computer vision and more specifically in classification tasks, CNN, which we will examine in this review, are among the most popular deep learning techniques since they are outperforming humans in some vision complex tasks [3] . The origin of CNN that were initially established by goes back to the 1950s with the advent of "perceptron", the first neural network prototyped by Frank Rosenblatt. However, neural network models were not extensively used until recently, after researchers overcame certain limits. Among these advances we can mention the generalization of perceptrons to many layers BIB001 , the emergence of backpropagation algorithm as an appropriate training method for such architectures and, mainly, the availability of large training datasets and computational resources to learn millions of parameters. CNN differ from classical neural networks in the fact that the connectivity of a hidden layer neuron is limited to a subset of neurons in the previous layer. This selective connection endow the network with the ability to operate, implicitly, hierarchical features extraction. For an image classification case, the first hidden layer can visualize edges, the second a specific shape and so on until the final layer that will identify the object. CNN architecture consists of several types of layers including convolution, pooling, and fully connected. The network expert has to make multiple choices while designing a CNN such as the number and ordering of layers, the hyperparameters for each type of layer (receptive field size, stride, etc.). Thus, selecting the appropriate architecture and related hyperparameters requires a trial and error manual search process mainly directed by intuition and experience. Additionally, the number of available choices makes the selection space of CNN architectures extremely wide and impossible for an exhaustive manual exploration. Many research effort in meta-modeling tries to minimize human intervention in designing neural network architectures. In this paper, we first give a general overview and define the field of deep learning. We then briefly survey the history of CNN architectures. In the following section we review several methods for automating CNN design according to three dimensions: search optimization, architecture design methods (plain or modular) and search acceleration techniques. Finally, we conclude the article with a discussion of future works.
A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> VGGNet <s> Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark Krizhevsky et al. [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform Krizhevsky et al on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets. <s> BIB001 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> VGGNet <s> In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision. <s> BIB002
Submitted to the ILSVRC 2014, VGGNet BIB002 won second place and demonstrated that deeper architectures achieve better results. Indeed, with its 19 hidden layers, it was much deeper than previous convolutional networks. In order to allow an increase in depth without an exponential growth in the number of parameters, small convolution filters (3×3) were used in all layers (much smaller than the 11×11 filters adopted in AlexNet). An additional advantage of using smaller filters is the reduction of overlapping scanned pixels, which results in feature maps with more local details BIB001 .
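The parameter economy of small filters is easy to verify. The following sketch, written in PyTorch as an illustrative assumption (the review prescribes no framework), compares a stack of two 3×3 convolutions, whose effective receptive field matches a single 5×5 convolution, against the larger filter itself.

```python
import torch.nn as nn

def num_params(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())

channels = 64
# Two stacked 3x3 convolutions: 5x5 effective receptive field,
# with an extra non-linearity between them (VGG-style design).
stacked = nn.Sequential(
    nn.Conv2d(channels, channels, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(channels, channels, kernel_size=3, padding=1),
)
# One 5x5 convolution covering the same receptive field.
single = nn.Conv2d(channels, channels, kernel_size=5, padding=2)

print(num_params(stacked))  # 2 * (3*3*64*64 + 64) = 73,856
print(num_params(single))   # 5*5*64*64 + 64       = 102,464
```

For 64 input and output channels, the stacked design needs roughly 28% fewer parameters while inserting an additional non-linearity between the two convolutions.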
A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> More Networks <s> Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. <s> BIB001 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> More Networks <s> Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there are any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge <s> BIB002 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> More Networks <s> Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. 
Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at this https URL . <s> BIB003
After the success of ResNet BIB001 , which exceeded human-level accuracy in ILSVRC 2015, the so-called modern hand-crafted CNN are still being designed on the basis of previous models, looking for more efficiency and lower training time. Inception-v4 BIB002 is a new release of GoogLeNet that involves many more layers than the initial version. Inception-ResNet BIB002 is built as a combination of an Inception network and a ResNet, joining inception blocks and residual connections. The last example of this section is DenseNet (Dense Convolutional Networks) BIB003 , where each layer of a dense block is connected via skip connections to all subsequent ones, which encourages feature reuse and allows the learning of new features.
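The residual connection at the heart of ResNet (and reused in Inception-ResNet) can be summarized in a few lines. The sketch below, again in PyTorch as an illustrative assumption, shows a minimal identity-shortcut block computing F(x) + x; the exact layer composition inside the block is a simplification of the published architectures.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal identity-shortcut residual block: y = ReLU(F(x) + x)."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The shortcut lets gradients flow around the body,
        # easing the training of very deep stacks.
        return self.relu(self.body(x) + x)

block = ResidualBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```

A DenseNet block differs mainly in that, instead of summing, each layer concatenates the feature maps of all preceding layers as its input.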
A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Meta-controllers <s> Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International conference, aaa, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression. <s> BIB001 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Meta-controllers <s> Research in neuroevolution---that is, evolving artificial neural networks (ANNs) through evolutionary algorithms---is inspired by the evolution of biological brains, which can contain trillions of connections. Yet while neuroevolution has produced successful results, the scale of natural brains remains far beyond reach. This article presents a method called hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) that aims to narrow this gap. HyperNEAT employs an indirect encoding called connective compositional pattern-producing networks (CPPNs) that can produce connectivity patterns with symmetries and repeating motifs by interpreting spatial patterns generated within a hypercube as connectivity patterns in a lower-dimensional space. This approach can exploit the geometry of the task by mapping its regularities onto the topology of the network, thereby shifting problem difficulty away from dimensionality to the underlying problem structure. Furthermore, connective CPPNs can represent the same connectivity pattern at any resolution, allowing ANNs to scale to new numbers of inputs and outputs without further evolution. HyperNEAT is demonstrated through visual discrimination and food-gathering tasks, including successful visual discrimination networks containing over eight million connections. The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution. <s> BIB002 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Meta-controllers <s> The computer graphics and animation fields are filled with applications that require the setting of tricky parameters. In many cases, the models are complex and the parameters unintuitive for non-experts. In this paper, we present an optimization method for setting parameters of a procedural fluid animation system by showing the user examples of different parametrized animations and asking for feedback. Our method employs the Bayesian technique of bringing in "prior" belief based on previous runs of the system and/or expert knowledge, to assist users in finding good parameter settings in as few steps as possible. 
To do this, we introduce novel extensions to Bayesian optimization, which permit effective learning for parameter-based procedural animation applications. We show that even when users are trying to find a variety of different target animations, the system can learn and improve. We demonstrate the effectiveness of our method compared to related active learning methods. We also present a working application for assisting animators in the challenging task of designing curl-based velocity fields, even with minimal domain knowledge other than identifying when a simulation "looks right". <s> BIB003 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Meta-controllers <s> Deep neural networks (DNNs) show very strong performance on many machine learning problems, but they are very sensitive to the setting of their hyperparameters. Automated hyperparameter optimization methods have recently been shown to yield settings competitive with those found by human experts, but their widespread adoption is hampered by the fact that they require more computational resources than human experts. Humans have one advantage: when they evaluate a poor hyperparameter setting they can quickly detect (after a few steps of stochastic gradient descent) that the resulting network performs poorly and terminate the corresponding evaluation to save time. In this paper, we mimic the early termination of bad runs using a probabilistic model that extrapolates the performance from the first part of a learning curve. Experiments with a broad range of neural network architectures on various prominent object recognition benchmarks show that our resulting approach speeds up state-of-the-art hyperparameter optimization methods for DNNs roughly twofold, enabling them to find DNN settings that yield better performance than those chosen by human experts. <s> BIB004 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Meta-controllers <s> Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214. <s> BIB005 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Meta-controllers <s> Bayesian optimization has become a successful tool for hyperparameter optimization of machine learning algorithms, such as support vector machines or deep neural networks. 
Despite its success, for large datasets, training and validating a single configuration often takes hours, days, or even weeks, which limits the achievable performance. To accelerate hyperparameter optimization, we propose a generative model for the validation error as a function of training set size, which is learned during the optimization process and allows exploration of preliminary configurations on small subsets, by extrapolating to the full dataset. We construct a Bayesian optimization procedure, dubbed Fabolas, which models loss and training time as a function of dataset size and automatically trades off high information gain about the global optimum against computational cost. Experiments optimizing support vector machines and deep neural networks show that Fabolas often finds high-quality solutions 10 to 100 times faster than other state-of-the-art Bayesian optimization methods or the recently proposed bandit strategy Hyperband. <s> BIB006 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Meta-controllers <s> At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using $Q$-learning with an $\epsilon$-greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks. <s> BIB007 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Meta-controllers <s> Deep neural networks continue to show improved performance with increasing depth, an encouraging trend that implies an explosion in the possible permutations of network architectures and hyperparameters for which there is little intuitive guidance. To address this increasing complexity, we propose Evolutionary DEep Networks (EDEN), a computationally efficient neuro-evolutionary algorithm which interfaces to any deep neural network platform, such as TensorFlow. We show that EDEN evolves simple yet successful architectures built from embedding, 1D and 2D convolutional, max pooling and fully connected layers along with their hyperparameters. Evaluation of EDEN across seven image and sentiment classification datasets shows that it reliably finds good networks -- and in three cases achieves state-of-the-art results -- even on a single GPU, in just 6-24 hours. Our study provides a first attempt at applying neuro-evolution to the creation of 1D convolutional networks for sentiment analysis including the optimisation of the embedding layer. 
<s> BIB008 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Meta-controllers <s> Deep reinforcement learning (DRL) is poised to revolutionize the field of artificial intelligence (AI) and represents a step toward building autonomous systems with a higherlevel understanding of the visual world. Currently, deep learning is enabling reinforcement learning (RL) to scale to problems that were previously intractable, such as learning to play video games directly from pixels. DRL algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of RL, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep RL, including the deep Q-network (DQN), trust region policy optimization (TRPO), and asynchronous advantage actor critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via RL. To conclude, we describe several current areas of research within the field. <s> BIB009 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Meta-controllers <s> As the two hottest branches of machine learning, deep learning and reinforcement learning both play a vital role in the field of artificial intelligence. Combining deep learning with reinforcement learning, deep reinforcement learning is a method of artificial intelligence that is much closer to human learning. As one of the most basic algorithms for reinforcement learning, Q-learning is a discrete strategic learning algorithm that uses a reasonable strategy to generate an action. According to the rewards and the next state generated by the interaction of the action and the environment, optimal Q-function can be obtained. Furthermore, based on Q-learning and convolutional neural networks, the deep Q-learning with experience replay is developed in this paper. To ensure the convergence of value function, a discount factor is involved in the value function. The temporal difference method is introduced to training the Q-function or value function. At last, a detailed procedure is proposed to implement deep reinforcement learning. <s> BIB010 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Meta-controllers <s> Convolutional neural networks have gained a remarkable success in computer vision. However, most usable network architectures are hand-crafted and usually require expertise and elaborate design. In this paper, we provide a block-wise network generation pipeline called BlockQNN which automatically builds high-performance networks using the Q-Learning paradigm with epsilon-greedy exploration strategy. The optimal network block is constructed by the learning agent which is trained sequentially to choose component layers. We stack the block to construct the whole auto-generated network. To accelerate the generation process, we also propose a distributed asynchronous framework and an early stop strategy. The block-wise generation brings unique advantages: (1) it performs competitive results in comparison to the hand-crafted state-of-the-art networks on image classification, additionally, the best network generated by BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing auto-generate networks. 
(2) in the meanwhile, it offers tremendous reduction of the search space in designing networks which only spends 3 days with 32 GPUs, and (3) moreover, it has strong generalizability that the network built on CIFAR also performs well on a larger-scale ImageNet dataset. <s> BIB011
Meta-modeling approaches iteratively sample from the hyperparameter space and build the associated architectures, which are then trained and evaluated. The recorded accuracies are fed back to meta-modeling controllers (meta-controllers) to guide the sampling of the next architectures. Meta-controllers for CNN design are mainly based on Bayesian optimization ( , BIB004 ), evolutionary algorithms ( BIB002 , ) or, more recently, reinforcement learning ( BIB005 , ). Bayesian optimization is an efficient way to optimize black-box objective functions f : X → R that are slow to evaluate BIB003 . It aims at finding an input x* = arg min_{x∈X} f(x) that globally minimizes f, where, in the context of a machine learning algorithm, x refers to the set of hyperparameters to optimize. The difficulty with this kind of optimization is that evaluating the objective function is very costly, owing to the large number of hyperparameters and the complex nature of models like deep neural networks. To overcome this problem, Bayesian approaches build a probabilistic surrogate reconstruction of the objective function p(f|D), where D is a set of past observations. Evaluating the surrogate is much cheaper than evaluating the true objective function BIB006 . Among the most widely used probabilistic surrogate (regression) models are Gaussian processes, random forests BIB001 and the tree-structured Parzen estimator. Briefly, Bayesian optimization proceeds by building an empirical (probabilistic) model of the objective function. Then, iteratively, the model proposes a set of promising hyperparameters, for which the objective function returns corresponding results (e.g. loss values). Each feedback updates the surrogate model and guides the next hyperparameter proposals until the process reaches a termination condition. Evolutionary algorithms are another hyperparameter optimization strategy; they modify a set of candidate solutions (a population) on the basis of a number of rules (operators). Following an iterative procedure of mutation, crossover and selection, an evolutionary algorithm first initializes a set of N random networks to create a primary population. The second step introduces a fitness function that scores each network by its classification accuracy and keeps the top-ranked networks to construct the next generation. The evolutionary process continues until a termination criterion is met, generally defined as the maximum number of allowed generations. One advantage of evolutionary algorithms is that they adapt to complex combinations of discrete (layer type) and continuous (learning rate) hyperparameters, which suits neural network optimization models BIB008 . An important approach for goal-oriented optimization is reinforcement learning (RL), inspired by behaviorist psychology. The frame of RL is an agent learning through interaction with its environment (figure 6). The agent adapts its behavior (transitioning to a state s_{t+1}) on the basis of the observed consequences (rewards) of an action a_t taken in state s_t. The agent's purpose is to learn a policy π able to identify the optimal sequence of actions maximizing the expected cumulative reward. The environment's return reinforces the agent to select new actions that improve the learning process, hence the name reinforcement learning.
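Before turning to the reinforcement learning formulation in detail, the surrogate-model loop just described can be sketched in a few lines of Python. The toy 1-D objective (a stand-in for "train a network and return its validation loss"), the Gaussian-process surrogate and the expected-improvement acquisition are illustrative assumptions, one common combination among the options cited above, not the exact setup of any particular paper.

```python
# Minimal sketch of the Bayesian optimization loop described above.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):
    # hypothetical expensive black-box function f
    return np.sin(3.0 * x) + 0.1 * x ** 2

def expected_improvement(mu, sigma, best_y):
    # expected improvement over the best observation (minimization)
    sigma = np.maximum(sigma, 1e-9)
    z = (best_y - mu) / sigma
    return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(3, 1))          # initial observations D
y = objective(X).ravel()

for _ in range(20):                               # optimization budget
    surrogate = GaussianProcessRegressor().fit(X, y)   # p(f | D)
    cand = np.linspace(-2.0, 2.0, 500).reshape(-1, 1)
    mu, sigma = surrogate.predict(cand, return_std=True)
    x_next = cand[[np.argmax(expected_improvement(mu, sigma, y.min()))]]
    X = np.vstack([X, x_next])                    # evaluate true objective
    y = np.append(y, objective(x_next).ravel())

print("best x:", X[np.argmin(y)].item(), "best f(x):", y.min())
```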
The methods developed to solve reinforcement learning tasks are based on value functions, on policy search, or on a combination of both strategies (actor-critic methods) BIB009 . Value function methods estimate the expected reward value $R$ obtained when reaching a given state $s$ and following a policy $\pi$:

$$V^{\pi}(s) = \mathbb{E}[R \mid s, \pi]$$

A recursive form of this function is used in particular in recent Q-learning [38] models applied to CNN architecture design ( BIB007 , BIB011 ):

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a} Q(s_{t+1}, a) - Q(s_t, a_t) \right]$$

where $s_t$ is the current state, $a_t$ the current action, $\alpha$ the learning rate, $r_{t+1}$ the reward earned when transitioning from time $t$ to the next, and $\gamma$ the discount rate. In contrast to value function methods, policy search methods do not maintain a value function; instead, they apply a gradient-based procedure to directly identify an optimal policy $\pi^*$. In this context, deep reinforcement learning is achieved when deep neural networks are used to approximate one of the reinforcement learning components: the value function, the policy or the reward function BIB010 . Among the active fields applying deep reinforcement learning to CNN architecture design, recurrent neural networks (RNN) stand out as a valuable model handling tasks such as hyperparameter prediction ( BIB005 , ). Indeed, an RNN operates sequentially, using hidden units to store its processing history, which lets the reinforcement learning procedure profit from past observations. Long short-term memory networks (LSTM), a variant of RNN, offer an even more efficient way of evolving conditionally on previous elements.
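The Q-learning update above translates directly into code. The tabular sketch below uses hypothetical layer-choice actions as stand-ins for the "current layer" / "next layer" decisions made by the architecture-design agents discussed in the following sections; the state and action names are ours, not any cited paper's.

```python
# Tabular sketch of the Q-learning update written above.
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.9, 0.2     # learning rate, discount, exploration
actions = ["conv3x3", "conv5x5", "max_pool", "terminate"]
Q = defaultdict(float)                    # Q[(state, action)], zero-initialized

def choose_action(state, rng=random):
    # epsilon-greedy: explore with probability epsilon, else exploit
    if rng.random() < epsilon:
        return rng.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(s, a, r, s_next):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
```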
A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> EAS <s> In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported. <s> BIB001 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> EAS <s> We introduce techniques for rapidly transferring the information stored in one neural net into another neural net. The main purpose is to accelerate the training of a significantly larger neural net. During real-world workflows, one often trains very many different neural networks during the experimentation and design process. This is a wasteful process in which each new model is trained from scratch. Our Net2Net technique accelerates the experimentation process by instantaneously transferring the knowledge from a previous network to each new deeper or wider network. Our techniques are based on the concept of function-preserving transformations between neural network specifications. This differs from previous approaches to pre-training that altered the function represented by a neural net when adding layers to it. Using our knowledge transfer mechanism to add depth to Inception modules, we demonstrate a new state of the art accuracy rating on the ImageNet dataset. <s> BIB002 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> EAS <s> Techniques for automatically designing deep neural network architectures such as reinforcement learning based approaches have recently shown promising results. However, their success is based on vast computational resources (e.g. hundreds of GPUs), making them difficult to be widely used. A noticeable limitation is that they still design and train each network from scratch during the exploration of the architecture space, which is highly inefficient. In this paper, we propose a new framework toward efficient architecture search by exploring the architecture space based on the current network and reusing its weights. We employ a reinforcement learning agent as the meta-controller, whose action is to grow the network depth or layer width with function-preserving transformations. As such, the previously validated networks can be reused for further exploration, thus saves a large amount of computational cost. We apply our method to explore the architecture space of the plain convolutional neural networks (no skip-connections, branching etc.) on image benchmark datasets (CIFAR-10, SVHN) with restricted computational resources (5 GPUs). Our method can design highly competitive networks that outperform existing networks using the same design scheme. 
On CIFAR-10, our model without skip-connections achieves 4.23\% test error rate, exceeding a vast majority of modern architectures and approaching DenseNet. Furthermore, by applying our method to explore the DenseNet architecture space, we are able to achieve more accurate networks with fewer parameters. <s> BIB003
In their very recent work, Efficient Architecture Search (EAS), BIB003 implement network transformation techniques that reuse pre-existing models and efficiently explore the search space for automatic architecture design. This approach differs from the previous ones in its definition of reinforcement learning states and actions: the state is the current network architecture, while an action is a network transformation operation such as adding, enlarging or deleting a layer. The starting architectures used in the experiments are plain CNNs consisting only of convolutional, fully-connected and pooling layers. EAS is inspired by the Net2Net technique introduced in BIB002 , which builds a deeper student network that reproduces the processing of an associated teacher network. As shown in figure 10, an encoder network implemented with a bidirectional recurrent neural network BIB001 feeds the actor networks with a given architecture. The selected actor networks perform two types of transformation: widening layers (in terms of units and filters) and inserting new layers. EAS outperforms similar state-of-the-art models, designed either manually or automatically, with the attractive advantage of using comparatively modest computational resources.
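The widening action EAS borrows from Net2Net can be illustrated with a small function-preserving example. The NumPy fully-connected version below is a simplification (EAS applies the analogous operation to convolutional filters), and all names in it are ours: a minimal sketch, not the paper's implementation.

```python
# A minimal NumPy sketch of the Net2WiderNet transformation that EAS
# reuses: widen a fully-connected hidden layer while preserving the
# network function.
import numpy as np

def net2wider(W1, b1, W2, new_width, rng):
    n_hidden = W1.shape[1]
    # each new unit copies an existing one (first n_hidden map to themselves)
    mapping = np.concatenate([np.arange(n_hidden),
                              rng.integers(0, n_hidden, new_width - n_hidden)])
    counts = np.bincount(mapping, minlength=n_hidden)   # replication factors
    W1_new = W1[:, mapping]                             # copy incoming weights
    b1_new = b1[mapping]
    W2_new = W2[mapping, :] / counts[mapping][:, None]  # rescale outgoing weights
    return W1_new, b1_new, W2_new

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 4))
W1, b1, W2 = rng.normal(size=(4, 3)), rng.normal(size=3), rng.normal(size=(3, 2))
relu = lambda z: np.maximum(z, 0.0)

y_before = relu(x @ W1 + b1) @ W2
W1n, b1n, W2n = net2wider(W1, b1, W2, new_width=6, rng=rng)
y_after = relu(x @ W1n + b1n) @ W2n
assert np.allclose(y_before, y_after)   # the function is preserved
```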
A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> BlockQNN <s> The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks. <s> BIB001 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> BlockQNN <s> Convolutional neural networks have gained a remarkable success in computer vision. However, most usable network architectures are hand-crafted and usually require expertise and elaborate design. In this paper, we provide a block-wise network generation pipeline called BlockQNN which automatically builds high-performance networks using the Q-Learning paradigm with epsilon-greedy exploration strategy. The optimal network block is constructed by the learning agent which is trained sequentially to choose component layers. We stack the block to construct the whole auto-generated network. To accelerate the generation process, we also propose a distributed asynchronous framework and an early stop strategy. The block-wise generation brings unique advantages: (1) it performs competitive results in comparison to the hand-crafted state-of-the-art networks on image classification, additionally, the best network generated by BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of the search space in designing networks which only spends 3 days with 32 GPUs, and (3) moreover, it has strong generalizability that the network built on CIFAR also performs well on a larger-scale ImageNet dataset. <s> BIB002
One of the first approaches implementing block-wise architecture search, BlockQNN BIB002 automatically builds convolutional networks using the Q-learning reinforcement technique with epsilon-greedy as exploration strategy BIB001 . The block structure is similar to that of modern networks such as ResNet and Inception (GoogLeNet), since it contains shortcut connections and multi-branch layer combinations. The search space of the approach is reduced, given that the focus switches to exploring network blocks rather than designing the entire network. The block search space, detailed in figure 11, consists of 5 parameters: a layer index (its position in the block), an operation type (selected among 7 commonly used types), a kernel size and the indexes of 2 predecessor layers. Figure 12 depicts 2 different sample blocks, one with a multi-branch structure and the other showing a skip connection. As described in previous sections, the Q-learning model includes an agent, states and actions, where a state represents the agent's current layer and an action the transition to the next layer. On the basis of the defined blocks, the complete network is constructed by stacking them sequentially N times.
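To make the 5-parameter layer code concrete, the sketch below represents a block as a list of such tuples and samples one uniformly at random. The operation names and the random sampler are illustrative stand-ins; the actual BlockQNN agent chooses each tuple with epsilon-greedy Q-learning as described above.

```python
# A sketch of BlockQNN's 5-parameter layer code: each layer in a block is
# a tuple (index, operation type, kernel size, predecessor 1, predecessor 2).
import random

OPS = ["conv", "max_pool", "avg_pool", "identity", "add", "concat", "terminal"]
KERNELS = {"conv": [1, 3, 5], "max_pool": [1, 3], "avg_pool": [1, 3]}

def sample_block(max_layers=5, seed=0):
    rng = random.Random(seed)
    block = []
    for idx in range(1, max_layers + 1):
        op = rng.choice(OPS)
        kernel = rng.choice(KERNELS.get(op, [0]))     # 0 = no kernel needed
        pred1 = rng.randrange(0, idx)                 # 0 refers to the block input
        # only merge operations use a second predecessor
        pred2 = rng.randrange(0, idx) if op in ("add", "concat") else 0
        block.append((idx, op, kernel, pred1, pred2))
        if op == "terminal":                          # ends the block definition
            break
    return block

print(sample_block())   # e.g. [(1, 'conv', 3, 0, 0), (2, 'add', 0, 1, 0), ...]
```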
A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> PNAS <s> Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214. <s> BIB001 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> PNAS <s> We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. Direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of Zoph et al. (2018) in terms of number of models evaluated, and 8 times faster in terms of total compute. The structures we discover in this way achieve state of the art classification accuracies on CIFAR-10 and ImageNet. <s> BIB002
Progressive neural architecture search BIB002 explores the space of modular structures starting from simple models and evolving toward more complex ones, discarding underperforming structures as learning progresses. The modular structure in this approach is called a cell and consists of a fixed number of blocks. Each block is a combination of 2 operators chosen among 8 selected ones, such as identity, pooling and convolution. A cell structure is learned first and then stacked N times to build the resulting CNN. The main contribution of PNAS lies in optimizing the search process by avoiding direct search in the entire space of cells, made possible by a sequential model-based optimization (SMBO) strategy. The initial step builds, trains and evaluates all possible 1-block cells. Each cell is then expanded to 2 blocks, which explodes the number of possible combinations. The innovation brought by PNAS is to predict the performance of these second-level cells by training an RNN (the predictor) on the performance of the previous-level ones. Only the K best (i.e. most promising) cells are transferred to the next step of cell-size expansion. This process is repeated until the maximum allowed number of blocks is reached. With an accuracy comparable to the NAS BIB001 approach, PNAS is up to 5 times faster using a maximum cell size of 5 blocks and K equal to 256. This gain comes from the fact that performance prediction takes much less time than fully training the designed cells. The best cell architecture is shown in figure 13.
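The progressive loop can be written schematically as below. The helpers `expand` (all one-block extensions of a cell), `train_eval` (true validation accuracy after training) and `encode` (a numeric encoding of a cell) are hypothetical names of ours, and `predictor` is any fit/predict regressor standing in for the RNN surrogate; this is a sketch of the control flow, not PNAS's implementation.

```python
# A schematic of PNAS's progressive (SMBO) search loop.
def progressive_search(one_block_cells, predictor, encode, train_eval,
                       expand, max_blocks=5, top_k=256):
    cells = list(one_block_cells)
    scores = [train_eval(c) for c in cells]          # level 1: train everything
    predictor.fit([encode(c) for c in cells], scores)
    for _ in range(max_blocks - 1):
        candidates = [c2 for c in cells for c2 in expand(c)]
        # cheap surrogate ranking instead of training every candidate
        predicted = predictor.predict([encode(c) for c in candidates])
        ranked = sorted(zip(predicted, candidates),
                        key=lambda t: t[0], reverse=True)
        cells = [c for _, c in ranked[:top_k]]       # keep the K most promising
        scores = [train_eval(c) for c in cells]      # train only those
        predictor.fit([encode(c) for c in cells], scores)  # refine surrogate
    best = max(zip(scores, cells), key=lambda t: t[0])
    return best[1], best[0]                          # best cell and its accuracy
```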
A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> ENAS <s> Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214. <s> BIB001 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> ENAS <s> We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. Direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of Zoph et al. (2018) in terms of number of models evaluated, and 8 times faster in terms of total compute. The structures we discover in this way achieve state of the art classification accuracies on CIFAR-10 and ImageNet. <s> BIB002 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> ENAS <s> We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters. <s> BIB003
Efficient neural architecture search continues the line of the previous works NAS BIB001 and PNAS BIB002 . It explores a cell-based search space through a controller RNN trained with reinforcement learning. The cell structure is similar to the PNAS model, with the block concept replaced by a node consisting of 2 operations and 2 skip connections. The controller RNN thus manages 2 types of decision at each node. First, it identifies 2 previous nodes to connect to, allowing the cell to set skip connections. Second, the controller selects 2 operations to apply among a set of: identity, depthwise-separable convolutions with 3×3 and 5×5 filters BIB003 , and max pooling and average pooling, both of size 3×3. Within each node, the operation results are added to constitute an input for the next node. Figure 14 illustrates the design of a 4-node cell. At the end, the entire CNN is built by stacking the convolutional cells N times. Another contribution of ENAS consists in sampling mini-batches from the validation dataset to train the designed models; the models with the best accuracy are then trained on the entire validation dataset. Additionally, the approach's efficiency is greatly improved by a weight-sharing strategy: each node has its own parameters (used when the involved operations are activated), which are shared through inheritance by the generated child models. The latter are hence not trained from scratch, saving considerable processing time. ENAS provides competitive results on the CIFAR-10 and Penn Treebank datasets. It notably takes much less time to build the convolutional cells than previous approaches adopting the same strategy of designing modular structures and then stacking them to obtain the final CNN.
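The per-node decisions the controller emits can be decoded into a cell as sketched below. The NumPy lambdas are placeholders for the real identity / separable-convolution / pooling primitives listed above; the decoding logic (pick two previous outputs, apply two operations, add the results) is the part the sketch illustrates.

```python
# A sketch of how an ENAS-style cell is evaluated from the controller's
# per-node decisions.
import numpy as np

OPS = {
    "identity":     lambda x: x,
    "sep_conv_3x3": lambda x: np.tanh(x),        # placeholder for the real op
    "sep_conv_5x5": lambda x: np.maximum(x, 0.0),
    "max_pool_3x3": lambda x: 0.5 * x,
    "avg_pool_3x3": lambda x: x - x.mean(),
}

def run_cell(cell_input, decisions):
    # decisions: one (prev_a, op_a, prev_b, op_b) tuple per node;
    # index 0 is the cell input, index i (i >= 1) is node i's output
    outputs = [cell_input]
    for prev_a, op_a, prev_b, op_b in decisions:
        outputs.append(OPS[op_a](outputs[prev_a]) + OPS[op_b](outputs[prev_b]))
    return outputs[-1]

x = np.ones((2, 3))
decisions = [(0, "identity", 0, "sep_conv_3x3"),
             (1, "max_pool_3x3", 0, "avg_pool_3x3")]
print(run_cell(x, decisions).shape)   # (2, 3)
```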
A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> EAS With Path Level Transformation <s> Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank). <s> BIB001 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> EAS With Path Level Transformation <s> Techniques for automatically designing deep neural network architectures such as reinforcement learning based approaches have recently shown promising results. However, their success is based on vast computational resources (e.g. hundreds of GPUs), making them difficult to be widely used. A noticeable limitation is that they still design and train each network from scratch during the exploration of the architecture space, which is highly inefficient. In this paper, we propose a new framework toward efficient architecture search by exploring the architecture space based on the current network and reusing its weights. We employ a reinforcement learning agent as the meta-controller, whose action is to grow the network depth or layer width with function-preserving transformations. As such, the previously validated networks can be reused for further exploration, thus saves a large amount of computational cost. We apply our method to explore the architecture space of the plain convolutional neural networks (no skip-connections, branching etc.) on image benchmark datasets (CIFAR-10, SVHN) with restricted computational resources (5 GPUs). Our method can design highly competitive networks that outperform existing networks using the same design scheme. On CIFAR-10, our model without skip-connections achieves 4.23\% test error rate, exceeding a vast majority of modern architectures and approaching DenseNet. Furthermore, by applying our method to explore the DenseNet architecture space, we are able to achieve more accurate networks with fewer parameters. <s> BIB002 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> EAS With Path Level Transformation <s> We introduce a new function-preserving transformation for efficient neural architecture search. This network transformation allows reusing previously trained networks and existing successful architectures that improves sample efficiency. We aim to address the limitation of current network transformation operations that can only perform layer-level architecture modifications, such as adding (pruning) filters or inserting (removing) a layer, which fails to change the topology of connection paths. Our proposed path-level transformation operations enable the meta-controller to modify the path topology of the given network while keeping the merits of reusing weights, and thus allow efficiently designing effective structures with complex path topologies like Inception models. 
We further propose a bidirectional tree-structured reinforcement learning meta-controller to explore a simple yet highly expressive tree-structured architecture space that can be viewed as a generalization of multi-branch architectures. We experimented on the image classification datasets with limited computational resources (about 200 GPU-hours), where we observed improved parameter efficiency and better test results (97.70% test accuracy on CIFAR-10 with 14.3M parameters and 74.6% top-1 accuracy on ImageNet in the mobile setting), demonstrating the effectiveness and transferability of our designed architectures. <s> BIB003
A developed version of EAS BIB002 , adopting network transformation for efficient CNN architecture search, is presented in BIB003 . The new approach tackles the limitation of only performing plain, layer-level architecture modifications (e.g. adding or removing units, filters and layers) by using path-level transformation operations. The proposed model is similar to ([42]), where the reinforcement learning meta-controller samples network transformation actions to build new architectures; these are then trained, and the resulting accuracies serve as reward to update the meta-controller. However, certain changes adapt the search method to the tree-structured architecture space: a tree-structured LSTM ( BIB001 ) is used as meta-controller, and a new action space is defined, consisting of feature-map allocation schemes (replication, skip), merge schemes (add, concatenation, none) and primitive operations (convolution, identity, depthwise-separable convolution, etc.). Figure 15 presents an example of the transformation decisions operated by the meta-controller. Experimenting with ResNet and DenseNet architectures as base inputs, the path-level transformation approach achieves performance competitive with state-of-the-art models while maintaining low computational resources, comparable to those of the EAS approach.
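One path-level, function-preserving move from the action space above can be sketched as follows: replace a single layer with a two-branch module using the replication allocation scheme and the "add" merge. With both branches initialized as copies of the original layer, the overall function is unchanged and the branches can then evolve separately. The NumPy stand-ins and names are ours; real operations would be convolutional layers.

```python
# A minimal sketch of one path-level, function-preserving transformation.
import numpy as np

def replicate_and_add(layer_fn, n_branches=2):
    branches = [layer_fn] * n_branches
    def module(x):
        # replication feeds the same input to every branch; "add" merges
        # them with weights summing to one, preserving the function
        return sum(b(x) for b in branches) / n_branches
    return module, branches   # branches can now be mutated independently

layer = lambda x: x @ np.array([[1.0, 2.0], [3.0, 4.0]])
module, branches = replicate_and_add(layer)
x = np.ones((1, 2))
assert np.allclose(layer(x), module(x))   # function preserved at initialization
```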
A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Architecture search accelerators <s> Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize computing clusters with thousands of machines to train large models. Within this framework, we have developed two algorithms for large-scale distributed training: (i) Downpour SGD, an asynchronous stochastic gradient descent procedure supporting a large number of model replicas, and (ii) Sandblaster, a framework that supports a variety of distributed batch optimization procedures, including a distributed implementation of L-BFGS. Downpour SGD and Sandblaster L-BFGS both increase the scale and speed of deep network training. We have successfully used our system to train a deep network 30x larger than previously reported in the literature, and achieves state-of-the-art performance on ImageNet, a visual object recognition task with 16 million images and 21k categories. We show that these same techniques dramatically accelerate the training of a more modestly- sized deep network for a commercial speech recognition service. Although we focus on and report performance of these methods as applied to training large neural networks, the underlying algorithms are applicable to any gradient-based machine learning algorithm. <s> BIB001 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Architecture search accelerators <s> Deep neural networks (DNNs) show very strong performance on many machine learning problems, but they are very sensitive to the setting of their hyperparameters. Automated hyperparameter optimization methods have recently been shown to yield settings competitive with those found by human experts, but their widespread adoption is hampered by the fact that they require more computational resources than human experts. Humans have one advantage: when they evaluate a poor hyperparameter setting they can quickly detect (after a few steps of stochastic gradient descent) that the resulting network performs poorly and terminate the corresponding evaluation to save time. In this paper, we mimic the early termination of bad runs using a probabilistic model that extrapolates the performance from the first part of a learning curve. Experiments with a broad range of neural network architectures on various prominent object recognition benchmarks show that our resulting approach speeds up state-of-the-art hyperparameter optimization methods for DNNs roughly twofold, enabling them to find DNN settings that yield better performance than those chosen by human experts. <s> BIB002 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Architecture search accelerators <s> Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. 
On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214. <s> BIB003 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Architecture search accelerators <s> We explore efficient neural architecture search methods and show that a simple yet powerful evolutionary algorithm can discover new architectures with excellent performance. Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human experts, and an expressive search space that supports complex topologies. Our algorithm efficiently discovers architectures that outperform a large number of manually designed models for image classification, obtaining top-1 error of 3.6% on CIFAR-10 and 20.3% when transferred to ImageNet, which is competitive with the best existing neural architecture search approaches. We also present results using random search, achieving 0.3% less top-1 accuracy on CIFAR-10 and 0.1% less on ImageNet whilst reducing the search time from 36 hours down to 1 hour. <s> BIB004 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Architecture search accelerators <s> Neural networks have recently had a lot of success for many tasks. However, neural network architectures that perform well are still typically designed manually by experts in a cumbersome trial-and-error process. We propose a new method to automatically search for well-performing CNN architectures based on a simple hill climbing procedure whose operators apply network morphisms, followed by short optimization runs by cosine annealing. Surprisingly, this simple method yields competitive results, despite only requiring resources in the same order of magnitude as training a single network. E.g., on CIFAR-10, our method designs and trains networks with an error rate below 6% in only 12 hours on a single GPU; training for one day reduces this error further, to almost 5%. <s> BIB005 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Architecture search accelerators <s> In this work we study the problem of network morphism, an effective learning scheme to morph a well-trained neural network to a new one with the network function completely preserved. Different from existing work where basic morphing types on the layer level were addressed, we target at the central problem of network morphism at a higher level, i.e., how a convolutional layer can be morphed into an arbitrary module of a neural network. To simplify the representation of a network, we abstract a module as a graph with blobs as vertices and convolutional layers as edges, based on which the morphing process is able to be formulated as a graph transformation problem. 
Two atomic morphing operations are introduced to compose the graphs, based on which modules are classified into two families, i.e., simple morphable modules and complex modules. We present practical morphing solutions for both of these two families, and prove that any reasonable module can be morphed from a single convolutional layer. Extensive experiments have been conducted based on the state-of-the-art ResNet on benchmark datasets, and the effectiveness of the proposed solution has been verified. <s> BIB006 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Architecture search accelerators <s> Convolutional neural networks have gained a remarkable success in computer vision. However, most usable network architectures are hand-crafted and usually require expertise and elaborate design. In this paper, we provide a block-wise network generation pipeline called BlockQNN which automatically builds high-performance networks using the Q-Learning paradigm with epsilon-greedy exploration strategy. The optimal network block is constructed by the learning agent which is trained sequentially to choose component layers. We stack the block to construct the whole auto-generated network. To accelerate the generation process, we also propose a distributed asynchronous framework and an early stop strategy. The block-wise generation brings unique advantages: (1) it performs competitive results in comparison to the hand-crafted state-of-the-art networks on image classification, additionally, the best network generated by BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of the search space in designing networks which only spends 3 days with 32 GPUs, and (3) moreover, it has strong generalizability that the network built on CIFAR also performs well on a larger-scale ImageNet dataset. <s> BIB007 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Architecture search accelerators <s> Most learning algorithms require the practitioner to manually set the values of many hyperparameters before the learning process can begin. However, with modern algorithms, the evaluation of a given hyperparameter setting can take a considerable amount of time and the search space is often very high-dimensional. We suggest using a lower-dimensional representation of the original data to quickly identify promising areas in the hyperparameter space. This information can then be used to initialize the optimization algorithm for the original, higher-dimensional data. We compare this approach with the standard procedure of optimizing the hyperparameters only on the original input. We perform experiments with various state-of-the-art hyperparameter optimization algorithms such as random search, the tree of parzen estimators (TPEs), sequential model-based algorithm configuration (SMAC), and a genetic algorithm (GA). Our experiments indicate that it is possible to speed up the optimization process by using lower-dimensional data representations at the beginning, while increasing the dimensionality of the input later in the optimization process. This is independent of the underlying optimization procedure, making the approach promising for many existing hyperparameter optimization algorithms. 
<s> BIB008 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Architecture search accelerators <s> We introduce a new function-preserving transformation for efficient neural architecture search. This network transformation allows reusing previously trained networks and existing successful architectures that improves sample efficiency. We aim to address the limitation of current network transformation operations that can only perform layer-level architecture modifications, such as adding (pruning) filters or inserting (removing) a layer, which fails to change the topology of connection paths. Our proposed path-level transformation operations enable the meta-controller to modify the path topology of the given network while keeping the merits of reusing weights, and thus allow efficiently designing effective structures with complex path topologies like Inception models. We further propose a bidirectional tree-structured reinforcement learning meta-controller to explore a simple yet highly expressive tree-structured architecture space that can be viewed as a generalization of multi-branch architectures. We experimented on the image classification datasets with limited computational resources (about 200 GPU-hours), where we observed improved parameter efficiency and better test results (97.70% test accuracy on CIFAR-10 with 14.3M parameters and 74.6% top-1 accuracy on ImageNet in the mobile setting), demonstrating the effectiveness and transferability of our designed architectures. <s> BIB009
Reinforcement learning methods have been applied successfully to design neural networks. Although multi-branch structures and skip connections improve the efficiency of automatic architecture search, the latter remains computationally expensive (hundreds of GPU hours) and time consuming, and requires further acceleration of the learning process. Thus, in addition to the methods devoted to search-space optimization and complex component building, several techniques have been developed to speed up learning; they are depicted in the current section. The early stopping strategy proposed in BIB007 enables fast convergence of the learning agent while maintaining an acceptable level of efficiency. This is possible by taking into account intermediate rewards ignored in previous works (where they were set to zero, delaying reinforcement learning convergence). In this setting, the agent stops searching in an early training phase, as the accuracy rewards reach higher levels in fewer iterations. The reward function is redefined to include the designed block's complexity and density, avoiding the poor accuracy that early stopping of training could otherwise cause (a schematic sketch of such a shaped reward is given after this paragraph). A second technique presented in BIB007 is a distributed asynchronous framework assembling 3 kinds of node with different functions. The master node is where block structures are sampled by the agent. In the controller node, the entire network is built from the generated blocks and transmitted to multiple compute nodes for training. The framework is a kind of simplified parameter server BIB001 and allows the parallel training of designed networks on the compute nodes. Hence, the whole design and learning processing operates across multiple machines and GPUs. BIB003 uses the same parameter-server scheme, with replicated controllers, to train various architectures in parallel. As seen previously, reinforcement learning policies use the performance of explored architectures as a guiding reward for controller updates. Training and evaluating every sampled architecture (among hundreds) on validation data is responsible for most of the computational load. Consequently, several attempts have been made to estimate architecture performance instead of measuring it exactly. A number of approaches focus on performance prediction on the basis of past observations. Most such techniques rely on learning curve extrapolation BIB002 or on surrogate models using an RNN predictor BIB004 , which aim at predicting and eliminating poor architectures before full training. Another idea for estimating performance and ranking designed architectures is to use simplified (proxy) training metrics, such as data subsets (mini-batches) and down-sampled data (e.g. images with lower resolution) BIB008 . Network transformation is one of the more recent techniques for accelerating neural architecture search ( BIB009 , BIB005 ). It consists in training explored architectures by reusing previously trained or existing networks. This feature addresses a limitation of reinforcement learning approaches in which training is performed from a random initialization of weights. Thus, extending network morphisms BIB006 to initiate architecture search through the transfer of experience and knowledge, reflected in the reused weights, enables the framework to scrutinize the search space efficiently. Although the techniques presented above save substantial computational resources for neural architecture search, more effort is still needed to examine the extent of the bias such techniques introduce into the search process.
Indeed, it is crucial to ensure that the modifications introduced through re-sampled data, discarded candidates and early convergence do not alter the models' original predictions. Further studies are thus required to verify that learning accelerators do not have an amplified effect on the approaches' predictions and validation accuracies.
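As announced above, the shaped reward used with early stopping can be sketched as follows. The logarithmic form and the weights mu and rho are illustrative approximations of ours; the exact correction terms and their weighting are design choices of the original BlockQNN paper.

```python
# Schematic of a shaped reward for early-stopped architecture search:
# the raw early-stop accuracy is corrected by block complexity (e.g.
# FLOPs) and block-graph density terms so that overly simple blocks are
# not unduly favored by the truncated training signal.
import math

def shaped_reward(early_stop_accuracy, flops, edges, nodes, mu=1.0, rho=1.0):
    density = max(2.0 * edges / max(nodes, 1), 1e-9)  # block graph density
    return (early_stop_accuracy
            - mu * math.log(max(flops, 1))
            - rho * math.log(density))
```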
A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Conclusion <s> Research in neuroevolution---that is, evolving artificial neural networks (ANNs) through evolutionary algorithms---is inspired by the evolution of biological brains, which can contain trillions of connections. Yet while neuroevolution has produced successful results, the scale of natural brains remains far beyond reach. This article presents a method called hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) that aims to narrow this gap. HyperNEAT employs an indirect encoding called connective compositional pattern-producing networks (CPPNs) that can produce connectivity patterns with symmetries and repeating motifs by interpreting spatial patterns generated within a hypercube as connectivity patterns in a lower-dimensional space. This approach can exploit the geometry of the task by mapping its regularities onto the topology of the network, thereby shifting problem difficulty away from dimensionality to the underlying problem structure. Furthermore, connective CPPNs can represent the same connectivity pattern at any resolution, allowing ANNs to scale to new numbers of inputs and outputs without further evolution. HyperNEAT is demonstrated through visual discrimination and food-gathering tasks, including successful visual discrimination networks containing over eight million connections. The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution. <s> BIB001 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Conclusion <s> Deep neural networks (DNNs) show very strong performance on many machine learning problems, but they are very sensitive to the setting of their hyperparameters. Automated hyperparameter optimization methods have recently been shown to yield settings competitive with those found by human experts, but their widespread adoption is hampered by the fact that they require more computational resources than human experts. Humans have one advantage: when they evaluate a poor hyperparameter setting they can quickly detect (after a few steps of stochastic gradient descent) that the resulting network performs poorly and terminate the corresponding evaluation to save time. In this paper, we mimic the early termination of bad runs using a probabilistic model that extrapolates the performance from the first part of a learning curve. Experiments with a broad range of neural network architectures on various prominent object recognition benchmarks show that our resulting approach speeds up state-of-the-art hyperparameter optimization methods for DNNs roughly twofold, enabling them to find DNN settings that yield better performance than those chosen by human experts. <s> BIB002 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Conclusion <s> Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. 
On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214. <s> BIB003 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Conclusion <s> This paper addresses the difficult problem of finding an optimal neural architecture design for a given image classification task. We propose a method that aggregates two main results of the previous state-of-the-art in neural architecture search. These are, appealing to the strong sampling efficiency of a search scheme based on sequential model-based optimization (SMBO), and increasing training efficiency by sharing weights among sampled architectures. Sequential search has previously demonstrated its capabilities to find state-of-the-art neural architectures for image classification. However, its computational cost remains high, even unreachable under modest computational settings. Affording SMBO with weight-sharing alleviates this problem. On the other hand, progressive search with SMBO is inherently greedy, as it leverages a learned surrogate function to predict the validation error of neural architectures. This prediction is directly used to rank the sampled neural architectures. We propose to attenuate the greediness of the original SMBO method by relaxing the role of the surrogate function so it predicts architecture sampling probability instead. We demonstrate with experiments on the CIFAR-10 dataset that our method, denominated Efficient progressive neural architecture search (EPNAS), leads to increased search efficiency, while retaining competitiveness of found architectures. <s> BIB004 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Conclusion <s> Techniques for automatically designing deep neural network architectures such as reinforcement learning based approaches have recently shown promising results. However, their success is based on vast computational resources (e.g. hundreds of GPUs), making them difficult to be widely used. A noticeable limitation is that they still design and train each network from scratch during the exploration of the architecture space, which is highly inefficient. In this paper, we propose a new framework toward efficient architecture search by exploring the architecture space based on the current network and reusing its weights. We employ a reinforcement learning agent as the meta-controller, whose action is to grow the network depth or layer width with function-preserving transformations. As such, the previously validated networks can be reused for further exploration, thus saves a large amount of computational cost. We apply our method to explore the architecture space of the plain convolutional neural networks (no skip-connections, branching etc.) 
on image benchmark datasets (CIFAR-10, SVHN) with restricted computational resources (5 GPUs). Our method can design highly competitive networks that outperform existing networks using the same design scheme. On CIFAR-10, our model without skip-connections achieves 4.23\% test error rate, exceeding a vast majority of modern architectures and approaching DenseNet. Furthermore, by applying our method to explore the DenseNet architecture space, we are able to achieve more accurate networks with fewer parameters. <s> BIB005 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Conclusion <s> We introduce a new function-preserving transformation for efficient neural architecture search. This network transformation allows reusing previously trained networks and existing successful architectures that improves sample efficiency. We aim to address the limitation of current network transformation operations that can only perform layer-level architecture modifications, such as adding (pruning) filters or inserting (removing) a layer, which fails to change the topology of connection paths. Our proposed path-level transformation operations enable the meta-controller to modify the path topology of the given network while keeping the merits of reusing weights, and thus allow efficiently designing effective structures with complex path topologies like Inception models. We further propose a bidirectional tree-structured reinforcement learning meta-controller to explore a simple yet highly expressive tree-structured architecture space that can be viewed as a generalization of multi-branch architectures. We experimented on the image classification datasets with limited computational resources (about 200 GPU-hours), where we observed improved parameter efficiency and better test results (97.70% test accuracy on CIFAR-10 with 14.3M parameters and 74.6% top-1 accuracy on ImageNet in the mobile setting), demonstrating the effectiveness and transferability of our designed architectures. <s> BIB006 </s> A Review of Meta-Reinforcement Learning for Deep Neural Networks Architecture Search <s> Conclusion <s> Multitask learning, i.e. learning several tasks at once with the same neural network, can improve performance in each of the tasks. Designing deep neural network architectures for multitask learning is a challenge: There are many ways to tie the tasks together, and the design choices matter. The size and complexity of this problem exceeds human design ability, making it a compelling domain for evolutionary optimization. Using the existing state of the art soft ordering architecture as the starting point, methods for evolving the modules of this architecture and for evolving the overall topology or routing between modules are evaluated in this paper. A synergetic approach of evolving custom routings with evolved, shared modules for each task is found to be very powerful, significantly improving the state of the art in the Omniglot multitask, multialphabet character recognition domain. This result demonstrates how evolution can be instrumental in advancing deep neural network and complex system design in general. <s> BIB007
This review of recent work on the automatic design of CNN architectures highlights some methodological options adopted by the majority of the proposed approaches. Despite some attempts to base design meta-controllers on evolutionary algorithms ( BIB001 , ) and Bayesian optimization ( BIB002 , ), reinforcement learning has shown promising empirical results and stands as the preferred strategy for training design controllers BIB004 . Another common design option is the introduction of multi-branch (modular) structures as the elementary component of the entire network, which restricts the search space to the block/cell level. Plain network design is generally kept as a first step when applying a proposed approach ( BIB003 , BIB005 ), since it leads to simple networks and allows focusing on the method itself before switching to more complex structures with modular design ( , BIB006 ). A third option, used in design approaches at a smaller scale, is predicting the rewards of explored architectures and fully training only the most promising ones ( BIB002 , ). This training acceleration technique is implemented for performance improvement and requires further attention to control its possible bias on model behavior. The success of current reinforcement-learning-based approaches to designing CNN architectures is widely proven, especially for image classification tasks. However, it is achieved at the cost of high computational resources, despite the acceleration attempts of most recent models. This fact prevents individual researchers and small research entities (companies and laboratories) from having full access to this innovative technology BIB005 . Hence, deeper and more revolutionary optimization methods are required to make automatic CNN design practically operational. Transformation approaches based on extended network morphisms BIB006 are among the first attempts in this direction, achieving a drastic decrease in computational cost and demonstrating generalization capacity. Additional future directions for controlling automatic design complexity are to develop methods for multi-task problems BIB007 and weight sharing, in order to benefit from the contributions of knowledge transfer.
Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise due to mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error or simply through natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences. It can identify errors and remove their contaminating effect on the data set and as such to purify the data for processing. The original outlier detection methods were arbitrary but now, principled and systematic techniques are used, drawn from the full gamut of Computer Science and Statistics. In this paper, we introduce a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> The outlier detection problem has important applications in the field of fraud detection, network robustness analysis, and intrusion detection. Most such applications are most important for high-dimensional domains in which the data can contain hundreds of dimensions. Many recent algorithms have been proposed for outlier detection that use several concepts of proximity in order to find the outliers based on their relationship to the other points in the data. However, in high-dimensional space, the data are sparse and concepts using the notion of proximity fail to retain their effectiveness. In fact, the sparsity of high-dimensional data can be understood in a different way so as to imply that every point is an equally good outlier from the perspective of distance-based definitions. Consequently, for high-dimensional data, the notion of finding meaningful outliers becomes substantially more complex and nonobvious. In this paper, we discuss new techniques for outlier detection that find the outliers by studying the behavior of projections from the data set. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> Outlier detection has recently become an important problem in many industrial and financial applications. In this paper, a novel feature bagging approach for detecting outliers in very large, high dimensional and noisy databases is proposed. It combines results from multiple outlier detection algorithms that are applied using different set of features. Every outlier detection algorithm uses a small subset of features that are randomly selected from the original feature set. As a result, each outlier detector identifies different outliers, and thus assigns to all data records outlier scores that correspond to their probability of being outliers. The outlier scores computed by the individual outlier detection algorithms are then combined in order to find the better quality outliers. Experiments performed on several synthetic and real life data sets show that the proposed methods for combining outputs from multiple outlier detection algorithms provide non-trivial improvements over the base algorithm. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> Clustering is an important task in mining evolving data streams. 
Beside the limited memory and one-pass constraints, the nature of evolving data streams implies the following requirements for stream clustering: no assumption on the number of clusters, discovery of clusters with arbitrary shape and ability to handle outliers. While a lot of clustering algorithms for data streams have been proposed, they offer no solution to the combination of these requirements. In this paper, we present DenStream, a new approach for discovering clusters in an evolving data stream. The “dense” micro-cluster (named core-micro-cluster) is introduced to summarize the clusters with arbitrary shape, while the potential core-micro-cluster and outlier micro-cluster structures are proposed to maintain and distinguish the potential clusters and outliers. A novel pruning strategy is designed based on these concepts, which guarantees the precision of the weights of the micro-clusters with limited memory. Our performance study over a number of real and synthetic data sets demonstrates the effectiveness and efficiency of our method. <s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> As advances in networking technology help to connect the distant corners of the globe and as the Internet continues to expand its influence as a medium for communications and commerce, the threat from spammers, attackers and criminal enterprises has also grown accordingly. It is the prevalence of such threats that has made intrusion detection systems-the cyberspace's equivalent to the burglar alarm-join ranks with firewalls as one of the fundamental technologies for network security. However, today's commercially available intrusion detection systems are predominantly signature-based intrusion detection systems that are designed to detect known attacks by utilizing the signatures of those attacks. Such systems require frequent rule-base updates and signature updates, and are not capable of detecting unknown attacks. In contrast, anomaly detection systems, a subset of intrusion detection systems, model the normal system/network behavior which enables them to be extremely effective in finding and foiling both known as well as unknown or ''zero day'' attacks. While anomaly detection systems are attractive conceptually, a host of technological problems need to be overcome before they can be widely adopted. These problems include: high false alarm rate, failure to scale to gigabit speeds, etc. In this paper, we provide a comprehensive survey of anomaly detection systems and hybrid intrusion detection systems of the recent past and present. We also discuss recent technological trends in anomaly detection and identify open problems and challenges in this area. <s> BIB005 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> We propose an original outlier detection schema that detects outliers in varying subspaces of a high dimensional feature space. In particular, for each object in the data set, we explore the axis-parallel subspace spanned by its neighbors and determine how much the object deviates from the neighbors in this subspace. In our experiments, we show that our novel subspace outlier detection is superior to existing full-dimensional approaches and scales well to high dimensional databases. <s> BIB006 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> Anomaly detection is an important problem that has been researched within diverse research areas and application domains. 
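As an aside, the fading summary structure at the heart of the DenStream approach described above can be sketched in a few lines of Python; the decay rate lam, the promotion threshold beta_mu, and the method names are illustrative assumptions rather than the paper's exact formulation.

class MicroCluster:
    """Minimal sketch of a fading micro-cluster summary in the spirit of
    DenStream: weight, linear sum and squared sum decay by 2**(-lam*dt)."""

    def __init__(self, point, t, lam=0.25):
        self.lam = lam
        self.t = t                        # time of last update
        self.w = 1.0                      # decayed weight
        self.ls = list(point)             # decayed linear sum per dimension
        self.ss = [v * v for v in point]  # decayed squared sum per dimension

    def _fade(self, t):
        f = 2.0 ** (-self.lam * (t - self.t))
        self.w *= f
        self.ls = [v * f for v in self.ls]
        self.ss = [v * f for v in self.ss]
        self.t = t

    def insert(self, point, t):
        self._fade(t)
        self.w += 1.0
        for i, v in enumerate(point):
            self.ls[i] += v
            self.ss[i] += v * v

    def center(self):
        return [v / self.w for v in self.ls]

    def is_potential(self, t, beta_mu=3.0):
        """Outlier micro-clusters stay below beta_mu; once the decayed
        weight crosses it, the cluster is promoted to a potential core."""
        self._fade(t)
        return self.w >= beta_mu

# toy usage: a cluster fed regularly keeps its weight, a stale one fades away
mc = MicroCluster([1.0, 2.0], t=0)
for t in range(1, 5):
    mc.insert([1.0 + 0.01 * t, 2.0], t)
print(round(mc.w, 3), [round(c, 3) for c in mc.center()])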
Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain. For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with. <s> BIB007 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> The detection of outliers has gained considerable interest in data mining with the realization that outliers can be the key discovery to be made from very large databases. Outliers arise due to various reasons such as mechanical faults, changes in system behavior, fraudulent behavior, human error and instrument error. Indeed, for many applications the discovery of outliers leads to more interesting and useful results than the discovery of inliers. Detection of outliers can lead to identification of system faults so that administrators can take preventive measures before they escalate. It is possible that anomaly detection may enable detection of new attacks. Outlier detection is an important anomaly detection approach. In this paper, we present a comprehensive survey of well-known distance-based, density-based and other techniques for outlier detection and compare them. We provide definitions of outliers and discuss their detection based on supervised and unsupervised learning in the context of network anomaly detection. <s> BIB008 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> High-dimensional data in Euclidean space pose special challenges to data mining algorithms. These challenges are often indiscriminately subsumed under the term ‘curse of dimensionality’, more concrete aspects being the so-called ‘distance concentration effect’, the presence of irrelevant attributes concealing relevant information, or simply efficiency issues. In about just the last few years, the task of unsupervised outlier detection has found new specialized solutions for tackling high-dimensional data in Euclidean space. These approaches fall under mainly two categories, namely considering or not considering subspaces (subsets of attributes) for the definition of outliers. 
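For the distance-based family surveyed above, a hedged baseline is the classic k-NN outlier score, i.e. the mean distance to the k nearest neighbors; k and the flagging quantile below are illustrative choices, not prescriptions from the cited works.

import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(500, 2)),    # inliers
               rng.uniform(-6, 6, size=(5, 2))])   # a few injected outliers

k = 10
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)    # +1: each point is its own 0-NN
dist, _ = nn.kneighbors(X)
scores = dist[:, 1:].mean(axis=1)                  # drop the self-distance column

threshold = np.quantile(scores, 0.99)              # flag the top 1% as outliers
outliers = np.where(scores > threshold)[0]
print(outliers)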
The former are specifically addressing the presence of irrelevant attributes, the latter do consider the presence of irrelevant attributes implicitly at best but are more concerned with general issues of efficiency and effectiveness. Nevertheless, both types of specialized outlier detection algorithms tackle challenges specific to high-dimensional data. In this survey article, we discuss some important aspects of the ‘curse of dimensionality’ in detail and survey specialized algorithms for outlier detection from both categories. <s> BIB009 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> Detecting anomalies in data is a vital task, with numerous high-impact applications in areas such as security, finance, health care, and law enforcement. While numerous techniques have been developed in past years for spotting outliers and anomalies in unstructured collections of multi-dimensional points, with graph data becoming ubiquitous, techniques for structured graph data have been of focus recently. As objects in graphs have long-range correlations, a suite of novel technology has been developed for anomaly detection in graph data. This survey aims to provide a general, comprehensive, and structured overview of the state-of-the-art methods for anomaly detection in data represented as graphs. As a key contribution, we give a general framework for the algorithms categorized under various settings: unsupervised versus (semi-)supervised approaches, for static versus dynamic graphs, for attributed versus plain graphs. We highlight the effectiveness, scalability, generality, and robustness aspects of the methods. What is more, we stress the importance of anomaly attribution and highlight the major techniques that facilitate digging out the root cause, or the `why', of the detected anomalies for further analysis and sense-making. Finally, we present several real-world applications of graph-based anomaly detection in diverse domains, including financial, auction, computer traffic, and social networks. We conclude our survey with a discussion on open theoretical and practical challenges in the field. <s> BIB010 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> The distance-based outlier is a widely used definition of outlier. A point is distinguished as an outlier on the basis of the distances to its nearest neighbors. In this paper, to solve the problem of outlier computing in distributed environments, DBOZ, a distributed algorithm for distance-based outlier detection using Z-curve hierarchical tree (ZH-tree) is proposed. First, we propose a new index, ZH-tree, to effectively manage the data in a distributed environment. ZH-tree has two desirable advantages, including clustering property to help search the neighbors of a point, and hierarchical structure to support space pruning. We also design a bottom-up approach to build ZH-tree in parallel, whose time complexity is linear to the number of dimensions and the size of dataset. Second, DBOZ is proposed to compute outliers in distributed environments. It consists of two stages. 1) To avoid calculating the exact nearest neighbors of all the points, we design a greedy method and a new ZH-tree based k-nearest neighbor searching algorithm (ZHkNN for short) to obtain a threshold LW.
2) We propose a filter-and-refine approach, which first filters out the unpromising points using LW, and then outputs the final outliers through refining the remaining points. At last, the efficiency and the effectiveness of ZH-tree and DBOZ are testified through a series of experiments. <s> BIB011 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> Detecting anomalies in surveillance videos, that is, finding events or objects with low probability of occurrence, is a practical and challenging research topic in computer vision community. In this paper, we put forward a novel unsupervised learning framework for anomaly detection. At feature level, we propose a Sparse Semi-nonnegative Matrix Factorization (SSMF) to learn local patterns at each pixel, and a Histogram of Nonnegative Coefficients (HNC) can be constructed as local feature which is more expressive than previously used features like Histogram of Oriented Gradients (HOG). At model level, we learn a probability model which takes the spatial and temporal contextual information into consideration. Our framework is totally unsupervised requiring no human-labeled training data. With more expressive features and more complicated model, our framework can accurately detect and localize anomalies in surveillance video. We carried out extensive experiments on several benchmark video datasets for anomaly detection, and the results demonstrate the superiority of our framework to state-of-the-art approaches, validating the effectiveness of our framework. <s> BIB012 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> Anomaly detection is an important problem with multiple applications, and thus has been studied for decades in various research domains. In the past decade there has been a growing interest in anomaly detection in data represented as networks, or graphs, largely because of their robust expressiveness and their natural ability to represent complex relationships. Originally, techniques focused on anomaly detection in static graphs, which do not change and are capable of representing only a single snapshot of data. As real-world networks are constantly changing, there has been a shift in focus to dynamic graphs, which evolve over time. <s> BIB013 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> Normally exports of goods and products are transactions encouraged by the governments of countries. Typically these incentives are promoted by tax exemptions or lower tax collections. However, exports fraud may occur with objectives not related to tax evasion, for example money laundering. This article presents the results obtained in implementing the unsupervised Deep Learning model to classify Brazilian exporters regarding the possibility of committing fraud in exports. Assuming that the vast majority of exporters have explanatory features of their export volume which interrelate in a standard way, we used the AutoEncoder to detect anomalous situations with regards to the data pattern. The databases used in this work come from exports of goods and products that occurred in Brazil in 2014, provided by the Secretariat of Federal Revenue of Brazil. From attributes that characterize export companies, the model was able to detect anomalies in at least twenty exporters. <s> BIB014 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> In the past twenty years, progress in intrusion detection has been steady but slow. 
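Returning to the two-stage DBOZ scheme summarized above, the generic filter-and-refine control flow can be sketched on a single machine as follows; deriving the threshold LW and the distance upper bounds from a random seed sample is an illustrative assumption standing in for the ZH-tree machinery of the original algorithm.

import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, size=(2000, 3)),
               rng.uniform(-8, 8, size=(10, 3))])
k, n_out = 5, 10
full = NearestNeighbors(n_neighbors=k + 1).fit(X)

# Phase 1: exact k-NN scores of a small seed set give a provisional
# threshold LW (we already hold n_out points scoring at least LW).
seed = rng.choice(len(X), 200, replace=False)
seed_scores = full.kneighbors(X[seed])[0][:, -1]
lw = np.sort(seed_scores)[-n_out]

# Phase 2 (filter): the k-NN distance within the seed subset upper-bounds
# the true k-NN distance, so any point whose bound falls below LW cannot
# be among the top n_out outliers and is pruned.
bound = NearestNeighbors(n_neighbors=k + 1).fit(X[seed]).kneighbors(X)[0][:, -1]
candidates = np.where(bound >= lw)[0]

# Phase 2 (refine): exact scores are computed only for the survivors.
exact = full.kneighbors(X[candidates])[0][:, -1]
top = candidates[np.argsort(exact)[-n_out:]]
print(len(candidates), "of", len(X), "points refined; top outliers:", sorted(top.tolist()))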
The biggest challenge is to detect new attacks in real time. In this work, a deep learning approach for anomaly detection using a Restricted Boltzmann Machine (RBM) and a deep belief network are implemented. Our method uses a one-hidden layer RBM to perform unsupervised feature reduction. The resultant weights from this RBM are passed to another RBM producing a deep belief network. The pre-trained weights are passed into a fine tuning layer consisting of a Logistic Regression (LR) classifier with multi-class soft-max. We have implemented the deep learning architecture in C++ in Microsoft Visual Studio 2013 and we use the DARPA KDDCUP'99 dataset to evaluate its performance. Our architecture outperforms previous deep learning methods implemented by Li and Salama in both detection speed and accuracy. We achieve a detection rate of 97.9% on the total 10% KDDCUP'99 test dataset. By improving the training process of the simulation, we are also able to produce a low false negative rate of 2.47%. Although the deficiencies in the KDDCUP'99 dataset are well understood, it still presents machine learning approaches for predicting attacks with a reasonable challenge. Our future work will include applying our machine learning strategy to larger and more challenging datasets, which include larger classes of attacks. <s> BIB015 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> – Among the growing number of data mining (DM) techniques, outlier detection has gained importance in many applications and also attracted much attention in recent times. In the past, outlier detection researched papers appeared in a safety care that can view as searching for the needles in the haystack. However, outliers are not always erroneous. Therefore, the purpose of this paper is to investigate the role of outliers in healthcare services in general and patient safety care, in particular. , – It is a combined DM (clustering and the nearest neighbor) technique for outliers’ detection, which provides a clear understanding and meaningful insights to visualize the data behaviors for healthcare safety. The outcomes or the knowledge implicit is vitally essential to a proper clinical decision-making process. The method is important to the semantic, and the novel tactic of patients’ events and situations prove that play a significant role in the process of patient care safety and medications. , – The outcomes of the paper is discussing a novel and integrated methodology, which can be inferring for different biological data analysis. It is discussed as integrated DM techniques to optimize its performance in the field of health and medical science. It is an integrated method of outliers detection that can be extending for searching valuable information and knowledge implicit based on selected patient factors. Based on these facts, outliers are detected as clusters and point events, and novel ideas proposed to empower clinical services in consideration of customers’ satisfactions. It is also essential to be a baseline for further healthcare strategic development and research works. , – This paper mainly focussed on outliers detections. Outlier isolation that are essential to investigate the reason how it happened and communications how to mitigate it did not touch. Therefore, the research can be extended more about the hierarchy of patient problems. , – DM is a dynamic and successful gateway for discovering useful knowledge for enhancing healthcare performances and patient safety. 
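The RBM-based intrusion detection pipeline described above (unsupervised feature reduction followed by a fine-tuning classifier) can be approximated with scikit-learn as a hedged, single-RBM sketch; the synthetic data, layer width, and hyperparameters below are placeholders, not the authors' C++ deep belief network or the KDDCUP'99 setup.

import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Toy stand-in for KDDCUP'99-style records; real use would load and
# encode the actual connection records and attack labels.
rng = np.random.default_rng(0)
X = rng.random(size=(1000, 20))
y = (X[:, 0] + X[:, 3] > 1.1).astype(int)    # synthetic "attack" label

model = Pipeline([
    ("scale", MinMaxScaler()),               # RBMs expect inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=32, learning_rate=0.05,
                         n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X[:800], y[:800])
print("held-out accuracy:", model.score(X[800:], y[800:]))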
Clinical data based outlier detection is a basic task to achieve healthcare strategy. Therefore, in this paper, the authors focussed on combined DM techniques for a deep analysis of clinical data, which provide an optimal level of clinical decision-making processes. Proper clinical decisions can obtain in terms of attributes selections that important to know the influential factors or parameters of healthcare services. Therefore, using integrated clustering and nearest neighbors techniques give more acceptable searched such complex data outliers, which could be fundamental to further analysis of healthcare and patient safety situational analysis. <s> BIB016 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> Multimedia networks hold the promise of facilitating large-scale, real-time data processing in complex environments. Their foreseeable applications will help protect and monitor military, environmental, safety-critical, or domestic infrastructures and resources. Cloud infrastructures promise to provide high performance and cost effective solutions to large scale data processing problems. This paper focused on the outlier detection over distributed data stream in real time, proposed kernel density estimation (KDE) based outlier detection algorithm KDEDisStrOut in Storm, firstly formalized the problem of outlier detection using the kernel density estimation technique and update the transported data incrementally between the child node and the coordinator node which reduces the communication cost. Then the paper adopted the exponential decay policy to keep pace with the transient and evolving natures of stream data and changed the weight of different data in the sliding window adaptively made the data analysis more reasonable. Theoretical analysis and experiments on Storm with synthetic and real data show that the KDEDisStrOut algorithm is efficient and effective compared with existing outlier detection algorithms, and more suitable for data streams. <s> BIB017 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> Nowadays, Radio frequency identification (RFID) has been extensively deployed to retailing, supply chain management, object recognition, object monitoring and tracking and many other fields. Detecting outliers in RFID data streams can help us find abnormal activities and thus avoid disasters. In order to detect outliers in RFID data streams efficiently and effectively, we proposed a fractal based outlier detection algorithm. Firstly, we built a monotone searching space based on the self-similarity of fractal. Then, we proposed two piecewise fractal models for RFID data streams, and presented an outlier detection algorithm based on the piecewise fractal model. Finally, we validated the efficiency and effectiveness of the proposed algorithm by massive experiments. <s> BIB018 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> This paper proposes a fast outlier detection algorithm for big datasets, which is a combination of a Cell-based method and a rank-difference outlier detection method associated with a new weighted distance definition. Firstly, a Cell-based method is used to transform a dataset having a very large number of objects into a significant small set of weighted cells based on predefined lower bound and upper bound sizes. A weighted distance function is defined to measure distances between two cells based on their coordinates and weights. 
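A minimal, non-distributed sketch of the KDE-with-decay idea behind KDEDisStrOut is shown below; the window size, decay rate, bandwidth, and flagging quantile are all illustrative assumptions.

import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
window = rng.normal(0, 1, size=(500, 2))           # recent stream items
ages = np.arange(len(window))[::-1]                # 0 = newest item
weights = 0.995 ** ages                            # exponential decay policy

kde = KernelDensity(kernel="gaussian", bandwidth=0.4)
kde.fit(window, sample_weight=weights)             # older items count less

new_points = np.array([[0.1, -0.2], [5.0, 5.0]])   # one inlier, one outlier
log_dens = kde.score_samples(new_points)
threshold = np.quantile(kde.score_samples(window), 0.01)
print(log_dens < threshold)                        # -> [False  True]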
Then, a rank-based outlier detection method with different depths is used to calculate outlier scores of cells. Finally, cells are ranked based on scores, outlier objects are identified from ranked cells and eliminated from the provided dataset. Based on experiment results, this proposed method is appropriate for datasets that have a very large number of objects. <s> BIB019 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> Enterprise's archives are inevitably affected by the presence of data quality problems (also called glitches). This article proposes the application of a new method to analyze the quality of datasets stored in the tables of a database, with no knowledge of the semantics of the data and without the need to define repositories of rules. The proposed method is based on proper revisions of different approaches for outlier detection that are combined to boost overall performance and accuracy. A novel transformation algorithm is conceived that treats the items in database tables as data points in real coordinate space of n dimensions, so that fields containing dates and fields containing text are processed to calculate distances between those data points. The implementation of an iterative approach ensures that global and local outliers are discovered even if they are subject, primarily in datasets with multiple outliers or clusters of outliers, to masking and swamping effects. The application of the method to a set of archives, some of which have been studied extensively in the literature, provides very promising experimental results and outperforms the application of a single other technique. Finally, a list of future research directions is highlighted. <s> BIB020 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> Outlier detection acts as one of the most important analysis tasks for trajectory stream. In stream scenarios, such properties as unlimitedness, time-varying evolutionary, sparsity, and skewness distribution of trajectories pose new challenges to outlier detection technique. Trajectory outlier detection techniques mainly focus on finding trajectory that is dissimilar to the majority of the others, which is based on the hypothesis that they are probably generated by a different mechanism. Most distance-based methods tend to utilize a function (e.g., weighted linear sum) to measure the similarity of two arbitrary objects provided that representative features have been extracted in advance. However, this kind of method is not tailored to identify the outlier which is close to its neighbors according to some features, but behaves significantly different from its neighbors in terms of the other features. To address this issue, we propose a feature grouping-based mechanism that divides all the features into two groups, where the first group ( Similarity Feature ) is used to find close neighbors and the second group ( Difference Feature ) is used to find outliers within the similar neighborhood. According to the feature differences among local adjacent objects in one or more time intervals, we present two outlier definitions, including local anomaly trajectory fragment ( TF-outlier ) and evolutionary anomaly moving object ( MO-outlier ). We devise a basic solution and then an optimized algorithm to detect both types of outliers. Experimental results show that our proposal is both effective and efficient to detect outliers upon trajectory data streams. 
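The feature-grouping mechanism just described can be sketched as follows: neighbors are located using the Similarity Features, and a point is scored by how far its Difference Features deviate within that neighborhood. The feature split, k, and the z-score-style deviation measure are illustrative assumptions, not the paper's exact TF-outlier definition.

import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
F = rng.normal(size=(300, 4))          # per-fragment feature vectors
F[7, 2:] += 6.0                        # close in sim-features, odd in diff-features

sim, diff = F[:, :2], F[:, 2:]         # assumed feature grouping
nn = NearestNeighbors(n_neighbors=11).fit(sim)
_, idx = nn.kneighbors(sim)

scores = np.empty(len(F))
for i, neigh in enumerate(idx[:, 1:]):             # drop self from neighbors
    mu = diff[neigh].mean(axis=0)
    sd = diff[neigh].std(axis=0) + 1e-9
    scores[i] = np.abs((diff[i] - mu) / sd).max()  # deviation within neighborhood

print(scores.argmax())                 # -> 7, the planted anomalous fragment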
<s> BIB021 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> The detection of abnormal moving objects over high-volume trajectory streams is critical for real-time applications ranging from military surveillance to transportation management. Yet this outlier detection problem, especially along both the spatial and temporal dimensions, remains largely unexplored. In this work, we propose a rich taxonomy of novel classes of neighbor-based trajectory outlier definitions that model the anomalous behavior of moving objects for a large range of real-time applications. Our theoretical analysis and empirical study on two real-world datasets—the Beijing Taxi trajectory data and the Ground Moving Target Indicator data stream—and one generated Moving Objects dataset demonstrate the effectiveness of our taxonomy in effectively capturing different types of abnormal moving objects. Furthermore, we propose a general strategy for efficiently detecting these new outlier classes called the minimal examination (MEX) framework. The MEX framework features three core optimization principles, which leverage spatiotemporal as well as the predictability properties of the neighbor evidence to minimize the detection costs. Based on this foundation, we design algorithms that detect the outliers based on these classes of new outlier semantics that successfully leverage our optimization principles. Our comprehensive experimental study demonstrates that our proposed MEX strategy drives the detection costs 100-fold down into the practical realm for applications that analyze high-volume trajectory streams in near real time. <s> BIB022 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> An important issue in processing data from sensors is outlier detection. Plenty of methods for solving this task exist - applying rules, Support Vector Machines, Naive Bayes. They are not computationally intensive and give good results where the border between outliers and inliers is linear. However, when the border's shape is highly non-linear, more sophisticated methods should be applied, with the requirement of not being computationally intensive. Deep learning architecture is applied to solve this problem and results are compared with the ones obtained by applying shallow architectures. <s> BIB023 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> Often the challenge associated with tasks like fraud and spam detection is the lack of all likely patterns needed to train suitable supervised learning models. This problem is accentuated when the fraudulent patterns are not only scarce, they also change over time. Fraudulent patterns change because fraudsters continue to innovate novel ways to circumvent measures put in place to prevent fraud. Limited data and continuously changing patterns make learning significantly difficult. We hypothesize that good behavior does not change with time and data points representing good behavior have a consistent spatial signature under different groupings. Based on this hypothesis we propose an approach that detects outliers in large data sets by assigning a consistency score to each data point using an ensemble of clustering methods. Our main contribution is proposing a novel method that can detect outliers in large datasets and is robust to changing patterns. We also argue that area under the ROC curve, although a commonly used metric to evaluate outlier detection methods, is not the right metric.
Since outlier detection problems have a skewed distribution of classes, precision-recall curves are better suited because precision compares false positives to true positives (outliers) rather than true negatives (inliers) and therefore is not affected by the problem of class imbalance. We show empirically that area under the precision-recall curve is better than ROC as an evaluation metric. The proposed approach is tested on the modified version of the Landsat satellite dataset, the modified version of the ann-thyroid dataset, and a large real world credit card fraud detection dataset available through Kaggle, where we show significant improvement over the baseline methods. <s> BIB024 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> Releasing social network data could seriously breach user privacy. User profile and friendship relations are inherently private. Unfortunately, sensitive information may be predicted out of released data through data mining techniques. Therefore, sanitizing network data prior to release is necessary. In this paper, we explore how to launch an inference attack exploiting social networks with a mixture of non-sensitive attributes and social relationships. We map this issue to a collective classification problem and propose a collective inference model. In our model, an attacker utilizes user profile and social relationships in a collective manner to predict sensitive information of related victims in a released social network dataset. To protect against such attacks, we propose a data sanitization method collectively manipulating user profile and friendship relations. Besides sanitizing friendship relations, the proposed method can take advantage of various data-manipulating methods. We show that we can easily reduce the adversary's prediction accuracy on sensitive information, while resulting in less accuracy decrease on non-sensitive information, on three social network datasets. This is the first work to employ collective methods involving various data-manipulating methods and social relationships to protect against inference attacks in social networks. <s> BIB025 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> Urban traffic data consists of observations like number and speed of cars or other vehicles at certain locations as measured by deployed sensors. These numbers can be interpreted as traffic flow which in turn relates to the capacity of streets and the demand of the traffic system. City planners are interested in studying the impact of various conditions on the traffic flow, leading to unusual patterns, i.e., outliers. Existing approaches to outlier detection in urban traffic data take into account only individual flow values (i.e., an individual observation). This can be interesting for real time detection of sudden changes. Here, we face a different scenario: the city planners want to learn from historical data how special circumstances (e.g., events or festivals) relate to unusual patterns in the traffic flow, in order to support improved planning of both events and the layout of the traffic system. Therefore, we propose to consider the sequence of traffic flow values observed within some time interval. Such flow sequences can be modeled as probability distributions of flows. We adapt an established outlier detection method, the local outlier factor (LOF), to handling flow distributions rather than individual observations.
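The argument above for preferring precision-recall over ROC under class imbalance is easy to reproduce; the following sketch uses synthetic score distributions with 1% outliers, and all numbers are illustrative.

import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

# With 1% outliers, a mediocre detector can keep a high ROC AUC while its
# average precision (PR AUC) stays revealingly low.
rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(9900), np.ones(100)])          # 1% outliers
scores = np.concatenate([rng.normal(0.0, 1.0, 9900),        # inlier scores
                         rng.normal(1.5, 1.0, 100)])        # outlier scores

print("ROC AUC:", round(roc_auc_score(y, scores), 3))       # looks strong
print("PR  AUC:", round(average_precision_score(y, scores), 3))  # far lower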
We apply the outlier detection online to extend the database with new flow distributions that are considered inliers. For validation, we consider a special case of our framework for comparison with state-of-the-art outlier detection on flows. In addition, a real case study on urban traffic flow data showcases that our method finds meaningful outliers in the traffic flow data. <s> BIB026 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> This paper reviews the use of outlier detection approaches in urban traffic analysis. We divide existing solutions into two main categories: flow outlier detection and trajectory outlier detection. The first category groups solutions that detect flow outliers and includes statistical, similarity and pattern mining approaches. The second category contains solutions where the trajectory outliers are derived, including off-line processing for trajectory outliers and online processing for sub-trajectory outliers. Solutions in each of these categories are described, illustrated, and discussed, and open perspectives and research trends are drawn. Compared to the state-of-the-art survey papers, the contribution of this paper lies in providing a deep analysis of all the kinds of representations in urban traffic data, including flow values, segment flow values, trajectories, and sub-trajectories. In this context, we can better understand the intuition, limitations, and benefits of the existing outlier urban traffic detection algorithms. As a result, practitioners can receive some guidance for selecting the most suitable methods for their particular case. <s> BIB027 </s> Progress in Outlier Detection Techniques: A Survey <s> I. INTRODUCTION <s> Anomaly detection is an important problem that has been well-studied within diverse research areas and application domains. The aim of this survey is two-fold: firstly, we present a structured and comprehensive overview of research methods in deep learning-based anomaly detection. Furthermore, we review the adoption of these methods for anomaly detection across various application domains and assess their effectiveness. We have grouped state-of-the-art research techniques into different categories based on the underlying assumptions and approach adopted. Within each category we outline the basic anomaly detection technique, along with its variants, and present key assumptions, to differentiate between normal and anomalous behavior. For each category, we also present the advantages and limitations, and discuss the computational complexity of the techniques in real application domains. Finally, we outline open issues in research and challenges faced while adopting these techniques. <s> BIB028
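Since the work above adapts the local outlier factor to flow distributions, a hedged stand-in illustrates the idea with standard LOF over per-window summary features; the two summary features (mean and peak flow) and n_neighbors are assumptions made for illustration.

import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
normal_days = rng.normal(loc=[100, 140], scale=5, size=(200, 2))
event_days = rng.normal(loc=[160, 240], scale=5, size=(3, 2))   # festivals
X = np.vstack([normal_days, event_days])

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)            # -1 marks outliers, 1 marks inliers
print(np.where(labels == -1)[0])       # indices of the unusual flow windows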
Outlier detection remains an essential and extensive research branch in data mining due to its use in a wide range of applications. By identifying outliers, researchers can obtain vital knowledge that assists in making better decisions about data. Detecting outliers also yields significant actionable information in a wide variety of applications such as fraud detection BIB014 , BIB024 , intrusion detection in cybersecurity BIB015 , and health diagnosis BIB016 . Despite the ambiguity in providing a clear definition, an outlier is generally considered a data point which is significantly different from other data points or which does not conform to the expected normal pattern of the phenomenon it represents. Outlier detection techniques strive to solve the problem of discovering patterns that do not conform to expected behaviors. Defining the usual behavior and the normal region can be complicated because of: • inaccurate boundaries between outlying and normal behavior • normal behavior that continues to evolve, so that the current notion of normality may not be representative in the future • different applications with conflicting notions of outliers, which makes it hard to apply techniques developed in one field to another • noise in the data which mimics real outliers and therefore makes it challenging to distinguish and remove them. Although outlier detection faces these challenges, several outlier detection techniques have been proposed that use different methodologies and algorithms to address them BIB001 . Some of the commonly encountered difficulties relate to the nature of the input data, the outlier type, data labels, accuracy, and computational complexity in terms of CPU time and memory consumption BIB002 - BIB004 . Researchers continue to find better solutions to these challenges, together with the problems of detecting outliers efficiently in distributed data streams BIB017 , RFID reading streams BIB018 , large multidimensional data BIB019 , BIB011 , wireless sensor networks , trajectories BIB021 , and data quality and cleaning BIB020 . For example, consider the challenges present in large multidimensional data: whether the data is moderately or extremely large, it almost always contains outliers, and in most cases the number of outliers grows with the size of the data BIB006 . Therefore, with a large volume of data, it is essential to design scalable outlier detection techniques (Volume). As data grow in size, the computational cost grows proportionally, rendering the process slow and expensive. It is also of great importance that outliers are detected in a timely manner, to minimize dirty data, prevent data contamination, and let the data provide well-timed value (Velocity and Value). Furthermore, when varieties of data are present, structured, mixed-valued, semi-structured, and unstructured (Variety), computing outliers can be daunting and complicated. Other application areas confronted with challenges include mobile social networks, security surveillance BIB025 , BIB012 , trajectory streams BIB022 , and traffic management BIB027 , BIB026 .
These areas demand constant discovery of abnormal objects to deliver crucial information promptly. Many other outlier detection areas share similar and new challenges; they will be referred to in subsequent sections of this paper. As a result of the inherent importance of outlier detection in various areas, considerable research effort has gone into surveying outlier detection (OD) methods BIB007 - BIB010 . Despite the increasing number of existing reviews, outlier detection remains a broad and active research topic, with newly proposed methods and essential issues still to be addressed. Therefore, this article serves a vital role in keeping researchers abreast of the latest progress in outlier detection techniques. To the best of our knowledge, most surveys conducted so far only address specific areas rather than providing in-depth coverage of up-to-date research studies, as can be seen in Table 1 . For example, the review in focuses only on data streams, BIB009 focuses on high dimensional numeric data, BIB013 , BIB008 on dynamic networks, and the most recent on deep learning BIB028 . The most comprehensive ones , BIB008 , BIB005 , despite containing many insights, do not review most of the primary state-of-the-art methods, since most of them were published at least five years ago. In recent years, more contemporary studies have been conducted, especially in the areas of deep learning , BIB023 and ensemble techniques BIB003 , . Therefore, these recent studies and discoveries need a review. Our survey presents a comprehensive review of the most prominent state-of-the-art outlier detection methods, including both conventional and emerging challenges. This survey is different from others because it captures and presents a more comprehensive review of state-of-the-art literature, as well as consolidating and complementing existing studies in the outlier detection domain. In addition, we did extensive research to bring forth significant categories of outlier detection approaches and to critically discuss and evaluate them. We further discuss commonly adopted evaluation criteria, as well as the tools and available public databases for outlier detection techniques. We believe this survey will significantly benefit researchers and practitioners, as it gives a thorough understanding of the advantages, disadvantages, open challenges, and gaps associated with state-of-the-art outlier detection methods. This will provide them with better insight into what needs to be focused on in the future. In summary, the novel and significant contributions of the paper are: • A review of outlier detection application areas. Unlike other surveys, we add new application areas that need more attention. • We expand on the categories of outlier detection algorithms with additional distinct methods compared to previous surveys. We introduce state-of-the-art algorithms and discuss them, highlighting their strengths and weaknesses. We mainly cite and discuss recent studies that were done after most of the significant surveys , BIB008 . • We significantly expand the discussions for each of the distinct categories, in comparison to previous surveys, by presenting the pros, cons, open challenges, and shortfalls of recent methods. We also offer a summary of the performance of some state-of-the-art algorithms, issues solved, drawbacks, and possible solutions. • We present some of the contemporary open challenges in evaluating outlier detection algorithms.
We then introduce standard tools and benchmark datasets commonly used in outlier detection research, and extend this with a discussion of OD tool selection and the challenges in choosing suitable datasets. • We identify some remaining challenges and finally recommend possible research directions for future studies. The paper is organized as follows: In Section 2, we commence our study by providing a comprehensive background on outlier detection, through a detailed explanation of its most significant features and foundations: the definition, characteristics, causes, and application areas. In Section 3, we formally categorize outlier detection (OD) methods into distinct areas and then discuss these techniques briefly. We include the performance, issues addressed, and drawbacks of these methods, with open research questions and challenges for future work. Section 4 discusses evaluation constraints in outlier detection, essential tools used for OD, and some analysis of benchmark datasets. In Section 5, we conclude the paper with some open challenges and recommendations for future work.
Progress in Outlier Detection Techniques: A Survey <s> 2) HOW TO HANDLE OUTLIERS <s> Abstract In this paper, a two-phase clustering algorithm for outliers detection is proposed. We first modify the traditional k -means algorithm in Phase 1 by using a heuristic “if one new input pattern is far enough away from all clusters' centers, then assign it as a new cluster center”. It results that the data points in the same cluster may be most likely all outliers or all non-outliers. And then we construct a minimum spanning tree (MST) in Phase 2 and remove the longest edge. The small clusters, the tree with less number of nodes, are selected and regarded as outlier. The experimental results show that our process works well. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) HOW TO HANDLE OUTLIERS <s> Many outlier detection methods do not merely provide the decision for a single data object being or not being an outlier. Instead, many approaches give an “outlier score” or “outlier factor” indicating “how much” the respective data object is an outlier. Such outlier scores differ widely in their range, contrast, and expressiveness between different outlier models. Even for one and the same outlier model, the same score can indicate a different degree of “outlierness” in different data sets or regions of different characteristics in one data set. Here, we demonstrate a visualization tool based on a unification of outlier scores that allows to compare and evaluate outlier scores visually even for high dimensional data. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) HOW TO HANDLE OUTLIERS <s> High-dimensional data in Euclidean space pose special challenges to data mining algorithms. These challenges are often indiscriminately subsumed under the term ‘curse of dimensionality’, more concrete aspects being the so-called ‘distance concentration effect’, the presence of irrelevant attributes concealing relevant information, or simply efficiency issues. In about just the last few years, the task of unsupervised outlier detection has found new specialized solutions for tackling high-dimensional data in Euclidean space. These approaches fall under mainly two categories, namely considering or not considering subspaces (subsets of attributes) for the definition of outliers. The former are specifically addressing the presence of irrelevant attributes, the latter do consider the presence of irrelevant attributes implicitly at best but are more concerned with general issues of efficiency and effectiveness. Nevertheless, both types of specialized outlier detection algorithms tackle challenges specific to high-dimensional data. In this survey article, we discuss some important aspects of the ‘curse of dimensionality’ in detail and survey specialized algorithms for outlier detection from both categories. © 2012 Wiley Periodicals, Inc. Statistical Analysis and Data Mining, 2012 © 2012 Wiley Periodicals, Inc. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) HOW TO HANDLE OUTLIERS <s> An outlier removal based data cleaning technique is proposed to clean manually pre-segmented human skin data in colour images. The 3-dimensional colour data is projected onto three 2-dimensional planes, from which outliers are removed. The cleaned 2 dimensional data projections are merged to yield a 3D clean RGB data. This data is finally used to build a look up table and a single Gaussian classifier for the purpose of human skin detection in colour images. 
<s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) HOW TO HANDLE OUTLIERS <s> Benchmarks are derived from several data sets found at the UC Irvine Machine Learning Repository: https://archive.ics.uci.edu/ml/index.html <s> BIB005 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) HOW TO HANDLE OUTLIERS <s> Learning expressive low-dimensional representations of ultrahigh-dimensional data, e.g., data with thousands/millions of features, has been a major way to enable learning methods to address the curse of dimensionality. However, existing unsupervised representation learning methods mainly focus on preserving the data regularity information and learning the representations independently of subsequent outlier detection methods, which can result in suboptimal and unstable performance of detecting irregularities (i.e., outliers). This paper introduces a ranking model-based framework, called RAMODO, to address this issue. RAMODO unifies representation learning and outlier detection to learn low-dimensional representations that are tailored for a state-of-the-art outlier detection approach - the random distance-based approach. This customized learning yields more optimal and stable representations for the targeted outlier detectors. Additionally, RAMODO can leverage little labeled data as prior knowledge to learn more expressive and application-relevant representations. We instantiate RAMODO to an efficient method called REPEN to demonstrate the performance of RAMODO. Extensive empirical results on eight real-world ultrahigh dimensional data sets show that REPEN (i) enables a random distance-based detector to obtain significantly better AUC performance and two orders of magnitude speedup; (ii) performs substantially better and more stably than four state-of-the-art representation learning methods; and (iii) leverages less than 1% labeled data to achieve up to 32% AUC improvement. <s> BIB006
There is still considerable discussion on what should be considered an outlier. The most widely applied rule of thumb is to flag a data point as an outlier when it lies three or more standard deviations from the mean (a minimal version of this rule is sketched at the end of this subsection). This rule, however, is a weak basis for a general definition, since it does not hold in all scenarios. This is especially true in recent times, when we are faced with large, dynamic, and unstructured data. It is therefore imperative to deliberate on some crucial questions to determine how to handle outliers, for example, whether it is prudent to remove outliers or to acknowledge them as part of the data. Outliers in data can sometimes have a negative impact: in machine learning and deep learning pipelines, they can lead to longer training processes, less accurate models, and ultimately degraded results. Alongside the development of new detection techniques, new approaches have been proposed to deal with outliers. In some cases BIB001 , BIB002 , visual examination of the data is preferred, to get a clear picture of the degree of outlierness of a data point. In other cases , univariate techniques are used to search for data points with extreme values on a single variable, multivariate techniques examine combinations across all variables, and the Minkowski error reduces the influence of prospective outliers during the training phase. There is also a great deal of controversy as to what to do once outliers are identified. In many situations, simply answering why outliers are present in the data can inform the decision about what to do with them. In some scenarios, outliers are illegitimately included BIB004 , while in others they are a genuine part of the data . In high dimensional numeric data computation BIB003 , BIB006 , , critical factors like the curse of dimensionality need to be considered. Researchers have recently tried to use accurate, uncontaminated data suitable for the outlier detection process BIB005 - before starting the detection procedure. Generally, dealing with outliers depends on the application domain. For example, in cases where outliers may cause serious issues, such as erroneous instrument readings, safety-critical environments, or real-time situations (fraud detection/intrusion detection), the outliers can be purged or an alarm can be raised. In contrast, in benign scenarios, such as a population census in which a few people stand out in features like height, the outliers can simply be noted and verified, since they occur naturally; there is no need to delete them as in the former case. In most cases, deciding how to handle outliers requires intuition, analytic evidence from experiments, and careful deliberation. Other noteworthy questions in the outlier detection process concern the context and scenario, and the purpose of detecting the outliers: it is essential to know why the outliers are to be identified and what they ultimately signify. In the subsequent sections, we will see that different methods and application areas call for different measures on how to deal with outliers.
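As a concrete reference point for the discussion above, the three-sigma rule of thumb and a common robust variant can be sketched as follows; the planted values and the 3.5 cutoff for the modified z-score are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(50, 5, 500), [95.0, 2.0]])  # two planted outliers

# Classic three-sigma rule: flag points whose z-score magnitude is >= 3.
z = (x - x.mean()) / x.std()
print(np.where(np.abs(z) >= 3)[0])     # -> the two planted points

# Robust alternative: modified z-score based on the median and MAD,
# which is less affected by the outliers it is trying to find.
mad = np.median(np.abs(x - np.median(x)))
z_mod = 0.6745 * (x - np.median(x)) / mad
print(np.where(np.abs(z_mod) >= 3.5)[0])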
Progress in Outlier Detection Techniques: A Survey <s> C. APPLICATION AREAS OF OUTLIER DETECTION <s> Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise due to mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error or simply through natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences. It can identify errors and remove their contaminating effect on the data set and as such to purify the data for processing. The original outlier detection methods were arbitrary but now, principled and systematic techniques are used, drawn from the full gamut of Computer Science and Statistics. In this paper, we introduce a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> C. APPLICATION AREAS OF OUTLIER DETECTION <s> Data mining for intrusion detection can be divided into several sub-topics, among which unsupervised clustering (which has controversial properties). Unsupervised clustering for intrusion detection aims to i) group behaviours together depending on their similarity and ii) detect groups containing only one (or very few) behaviour(s). Such isolated behaviours seem to deviate from the model of normality; therefore, they are considered as malicious. Obviously, not all atypical behaviours are attacks or intrusion attempts. This represents one drawback of intrusion detection methods based on clustering.We take into account the addition of a new feature to isolated behaviours before they are considered malicious. This feature is based on the possible repeated occurrences of the bahaviour on many information systems. Based on this feature, we propose a new outlier mining method which we validate through a set of experiments. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> C. APPLICATION AREAS OF OUTLIER DETECTION <s> An outlier removal based data cleaning technique is proposed to clean manually pre-segmented human skin data in colour images. The 3-dimensional colour data is projected onto three 2-dimensional planes, from which outliers are removed. The cleaned 2 dimensional data projections are merged to yield a 3D clean RGB data. This data is finally used to build a look up table and a single Gaussian classifier for the purpose of human skin detection in colour images. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> C. APPLICATION AREAS OF OUTLIER DETECTION <s> Traditional anomaly detection on social media mostly focuses on individual point anomalies while anomalous phenomena usually occur in groups. Therefore it is valuable to study the collective behavior of individuals and detect group anomalies. Existing group anomaly detection approaches rely on the assumption that the groups are known, which can hardly be true in real world social media applications. In this paper, we take a generative approach by proposing a hierarchical Bayes model: Group Latent Anomaly Detection (GLAD) model. GLAD takes both pair-wise and point-wise data as input, automatically infers the groups and detects group anomalies simultaneously. To account for the dynamic properties of the social media data, we further generalize GLAD to its dynamic extension d-GLAD. 
We conduct extensive experiments to evaluate our models on both synthetic and real world datasets. The empirical results demonstrate that our approach is effective and robust in discovering latent groups and detecting group anomalies. <s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> C. APPLICATION AREAS OF OUTLIER DETECTION <s> With the rapid proliferation of GPS-equipped devices, a myriad of trajectory data representing the mobility of various moving objects in two-dimensional space have been generated. This paper aims to detect the anomalous trajectories with the help of the historical trajectory dataset and the popular routes. In this paper, both spatial and temporal abnormalities are taken into consideration simultaneously to improve the accuracy of the detection. Previous work has developed a novel time-dependent popular routes based algorithm named TPRO. TPRO focuses on finding out all outliers in the historical trajectory dataset. But in most cases, people do not care about which trajectory in the dataset is abnormal. They only yearn for the detection result of a new trajectory that is not included in the dataset. So this paper develops the upgraded version of TPRO, named TPRRO. TPRRO is a real-time outlier detection algorithm and it contains the off-line preprocess step and the on-line detection step. In the off-line preprocess step, TTI (short for time-dependent transfer index) and hot TTG (short for time-dependent transfer graph) cache are constructed according to the historical trajectory dataset. Then in the on-line detection step, TTI and hot TTG cache are used to speed up the detection progress. The experimental results show that TPRRO is more efficient than TPRO in detecting outliers.
Firstly, a Cell-based method is used to transform a dataset having a very large number of objects into a significant small set of weighted cells based on predefined lower bound and upper bound sizes. A weighted distance function is defined to measure distances between two cells based on their coordinates and weights. Then, a rank-based outlier detection method with different depths is used to calculate outlier scores of cells. Finally, cells are ranked based on scores, outlier objects are identified from ranked cells and eliminated from the provided dataset. Based on experiment results, this proposed method is appropriate for datasets that have a very large number of objects. <s> BIB007 </s> Progress in Outlier Detection Techniques: A Survey <s> C. APPLICATION AREAS OF OUTLIER DETECTION <s> Social media anomaly detection is of critical importance to prevent malicious activities such as bullying, terrorist attack planning, and fraud information dissemination. With the recent popularity of social media, new types of anomalous behaviors arise, causing concerns from various parties. While a large amount of work have been dedicated to traditional anomaly detection problems, we observe a surge of research interests in the new realm of social media anomaly detection. In this paper, we present a survey on existing approaches to address this problem. We focus on the new type of anomalous phenomena in the social media and review the recent developed techniques to detect those special types of anomalies. We provide a general overview of the problem domain, common formulations, existing methodologies and potential directions. With this work, we hope to call out the attention from the research community on this challenging problem and open up new directions that we can contribute in the future. <s> BIB008 </s> Progress in Outlier Detection Techniques: A Survey <s> C. APPLICATION AREAS OF OUTLIER DETECTION <s> Errors are prevalent in time series data, such as GPS trajectories or sensor readings. Existing methods focus more on anomaly detection but not on repairing the detected anomalies. By simply filtering out the dirty data via anomaly detection, applications could still be unreliable over the incomplete time series. Instead of simply discarding anomalies, we propose to (iteratively) repair them in time series data, by creatively bonding the beauty of temporal nature in anomaly detection with the widely considered minimum change principle in data repairing. Our major contributions include: (1) a novel framework of iterative minimum repairing (IMR) over time series data, (2) explicit analysis on convergence of the proposed iterative minimum repairing, and (3) efficient estimation of parameters in each iteration. Remarkably, with incremental computation, we reduce the complexity of parameter estimation from O(n) to O(1). Experiments on real datasets demonstrate the superiority of our proposal compared to the state-of-the-art approaches. In particular, we show that (the proposed) repairing indeed improves the time series classification application. <s> BIB009 </s> Progress in Outlier Detection Techniques: A Survey <s> C. APPLICATION AREAS OF OUTLIER DETECTION <s> In the last decade, outlier detection for temporal data has received much attention from data mining and machine learning communities. While other works have addressed this problem by two-way approaches (similarity and clustering), we propose in this paper an embedded technique dealing with both methods simultaneously. 
We reformulate the task of outlier detection as a weighted clustering problem based on entropy and dynamic time warping for time series. The outliers are then detected by an optimization problem of a new proposed cost function adapted to this kind of data. Finally, we provide some experimental results for validating our proposal and comparing it with other methods of detection. <s> BIB010 </s> Progress in Outlier Detection Techniques: A Survey <s> C. APPLICATION AREAS OF OUTLIER DETECTION <s> The detection of abnormal moving objects over high-volume trajectory streams is critical for real-time applications ranging from military surveillance to transportation management. Yet this outlier detection problem, especially along both the spatial and temporal dimensions, remains largely unexplored. In this work, we propose a rich taxonomy of novel classes of neighbor-based trajectory outlier definitions that model the anomalous behavior of moving objects for a large range of real-time applications. Our theoretical analysis and empirical study on two real-world datasets—the Beijing Taxi trajectory data and the Ground Moving Target Indicator data stream—and one generated Moving Objects dataset demonstrate the effectiveness of our taxonomy in effectively capturing different types of abnormal moving objects. Furthermore, we propose a general strategy for efficiently detecting these new outlier classes called the minimal examination (MEX) framework. The MEX framework features three core optimization principles, which leverage spatiotemporal as well as the predictability properties of the neighbor evidence to minimize the detection costs. Based on this foundation, we design algorithms that detect the outliers based on these classes of new outlier semantics that successfully leverage our optimization principles. Our comprehensive experimental study demonstrates that our proposed MEX strategy drives the detection costs 100-fold down into the practical realm for applications that analyze high-volume trajectory streams in near real time. <s> BIB011 </s> Progress in Outlier Detection Techniques: A Survey <s> C. APPLICATION AREAS OF OUTLIER DETECTION <s> The problem of outlier detection is extremely challenging in many domains such as text, in which the attribute values are typically non-negative, and most values are zero. In such cases, it often becomes difficult to separate the outliers from the natural variations in the patterns in the underlying data. In this paper, we present a matrix factorization method, which is naturally able to distinguish the anomalies with the use of low rank approximations of the underlying data. Our iterative algorithm TONMF is based on block coordinate descent (BCD) framework. We define blocks over the term-document matrix such that the function becomes solvable. Given most recently updated values of other matrix blocks, we always update one block at a time to its optimal. Our approach has significant advantages over traditional methods for text outlier detection. Finally, we present experimental results illustrating the effectiveness of our method over competing methods. <s> BIB012 </s> Progress in Outlier Detection Techniques: A Survey <s> C. APPLICATION AREAS OF OUTLIER DETECTION <s> Financial fraud, such as money laundering, is known to be a serious process of crime that makes illegitimately obtained funds go to terrorism or other criminal activity.
This kind of illegal activity involves complex networks of trade and financial transactions, which makes it difficult to detect the fraud entities and discover the features of fraud. Fortunately, the trading/transaction network and features of entities in the network can be constructed from the complex networks of the trade and financial transactions. The trading/transaction network reveals the interaction between entities, and thus anomaly detection on trading networks can reveal the entities involved in the fraud activity; while features of entities are the description of entities, and anomaly detection on features can reflect details of the fraud activities. Thus, network and features provide complementary information for fraud detection, which has potential to improve fraud detection performance. However, the majority of existing methods focus on network or feature information separately, and thus do not utilize both. In this paper, we propose a novel fraud detection framework, CoDetect, which can leverage both network information and feature information for financial fraud detection. In addition, CoDetect can simultaneously detect financial fraud activities and the feature patterns associated with the fraud activities. Extensive experiments on both synthetic data and real-world data demonstrate the efficiency and the effectiveness of the proposed framework in combating financial fraud, especially for money laundering. <s> BIB013 </s> Progress in Outlier Detection Techniques: A Survey <s> C. APPLICATION AREAS OF OUTLIER DETECTION <s> This paper reviews the use of outlier detection approaches in urban traffic analysis. We divide existing solutions into two main categories: flow outlier detection and trajectory outlier detection. The first category groups solutions that detect flow outliers and includes statistical, similarity and pattern mining approaches. The second category contains solutions where the trajectory outliers are derived, including off-line processing for trajectory outliers and online processing for sub-trajectory outliers. Solutions in each of these categories are described, illustrated, and discussed, and open perspectives and research trends are drawn. Compared to the state-of-the-art survey papers, the contribution of this paper lies in providing a deep analysis of all the kinds of representations in urban traffic data, including flow values, segment flow values, trajectories, and sub-trajectories. In this context, we can better understand the intuition, limitations, and benefits of the existing outlier urban traffic detection algorithms. As a result, practitioners can receive some guidance for selecting the most suitable methods for their particular case. <s> BIB014 </s> Progress in Outlier Detection Techniques: A Survey <s> C. APPLICATION AREAS OF OUTLIER DETECTION <s> Online retailers execute a very large number of price updates when compared to brick-and-mortar stores. Even a few mis-priced items can have a significant business impact and result in a loss of customer trust. Early detection of anomalies in an automated real-time fashion is an important part of such a pricing system. In this paper, we describe unsupervised and supervised anomaly detection approaches we developed and deployed for a large-scale online pricing system at Walmart. Our system detects anomalies both in batch and real-time streaming settings, and the items flagged are reviewed and actioned based on priority and business impact.
We found that having the right architecture design was critical to facilitate model performance at scale, and business impact and speed were important factors influencing model selection, parameter choice, and prioritization in a production environment for a large-scale system. We conducted analyses on the performance of various approaches on a test set using real-world retail data and fully deployed our approach into production. We found that our approach was able to detect the most important anomalies with high precision. <s> BIB015 </s> Progress in Outlier Detection Techniques: A Survey <s> C. APPLICATION AREAS OF OUTLIER DETECTION <s> The smart campus is becoming a reality with the advancement of information and communication technologies. For energy efficiency, it is essential to detect abnormal energy consumption in a smart campus, which is important for a “smart” campus. However, the obtained data are usually continuously generated by ubiquitous sensing devices, and the abnormal patterns hidden in the data are usually unknown, which makes detecting anomalies in such a context more challenging. Moreover, evaluating the quality of anomaly detection algorithms is difficult without labeled datasets. If the data are annotated well, classical criteria such as the receiver operating characteristic or precision recall curves can be used to compare the performance of different anomaly detection algorithms. In a smart campus environment, it is difficult to acquire labeled data to train a model due to the limited capabilities of the sensing devices. Therefore, distributed intelligence is preferred. In this paper, we present a multi-agent-based unsupervised anomaly detection method. We tackle these challenges in two stages with this method. First, we label the data using ensemble models. Second, we propose a method based on deep learning techniques to detect anomalies in an unsupervised fashion. The result of the first stage is used to evaluate the performance of the proposed method. We validate the proposed method with several datasets, and the experimental results demonstrate the effectiveness of our method. <s> BIB016
Outlier detection, with its ever-growing interest, has applications in wide-ranging areas. The application areas where outlier detection is applied are so diverse that it is impossible to cover them thoroughly in a single survey, because of space limitations. Therefore, in this paper, we list and introduce existing and recent application areas, and we refer our readers to some previous surveys that exhaustively cover the many application domains in which OD methods are applied. Chandola et al. BIB014 provided a broad outline and in-depth treatment of outlier detection application domains. The survey BIB001 also presented an exhaustive list and discussion of applications that adopt outlier detection. Some existing application areas include credit card fraud detection BIB013, intrusion detection BIB002, defect detection from behavioral patterns of industrial machines, sensor networks, finding unusual patterns in time-series data BIB009, BIB010, trajectories BIB011, BIB005, e-commerce BIB015, energy consumption BIB016, data quality and cleaning BIB006, BIB003, textual outliers BIB012, big data analysis BIB007, social media BIB008, BIB004, and so on. Recently, detecting outliers has become essential in these application domains. We consider only a few new application areas of interest below, as a short introduction.
Progress in Outlier Detection Techniques: A Survey <s> 7) SENSOR NETWORKS AND DATABASES <s> Wireless sensor networks (WSNs) have received considerable attention for multiple types of applications. In particular, outlier detection in WSNs has been an area of vast interest. Outlier detection becomes even more important for the applications involving harsh environments; however, it has not received extensive treatment in the literature. The identification of outliers in WSNs can be used for filtration of false data, finding faulty nodes and discovering events of interest. This paper presents a survey of the essential characteristics for the analysis of outlier detection techniques in harsh environments. These characteristics include: input data type, spatio-temporal and attribute correlations, user specified thresholds, outlier types (local and global), type of approach (distributed/centralized), outlier identification (event or error), outlier degree, outlier score, susceptibility to dynamic topology, non-stationarity and inhomogeneity. Moreover, the prioritization of various characteristics has been discussed for outlier detection techniques in harsh environments. The paper also gives a brief overview of the classification strategies for outlier detection techniques in WSNs and discusses the feasibility of various types of techniques for WSNs deployed in harsh environments. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> 7) SENSOR NETWORKS AND DATABASES <s> Outlier detection (OD) constitutes an important issue for many research areas namely data mining, medicines, and sensor networks. It is helpful mainly in identifying intrusion, fraud, errors, defects, noise and so on. In fact, outlier measurements are essential improvements to quality of information, as they are not conforming to expected normal behaviour. Given the importance of sensed measurements collected via wireless sensor networks, a novel OD process dubbed density-based spatial clustering of applications with noise (DBSCAN)-OD has been developed based on the algorithm DBSCAN, as a background for OD. With respect to the classic DBSCAN approach, two processes have been jointly combined, the first of computing parameters, while the second concerns class identification in spatial temporal databases. Through both of these modules, one is able to consider real-time application cases as centralised in the base station for the purpose of separating outliers from normal sensors. For the sake of evaluating the authors' proposed solution, a diversity of synthetic databases has been applied as generated from real measurements of Intel Berkeley lab. The reached simulation findings indicate well that their devised method can prove to help effectively in detecting outliers with an accuracy rate of 99%. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> 7) SENSOR NETWORKS AND DATABASES <s> This study proposes a distributed outlier detection algorithm based on credibility feedback in wireless sensor networks. The algorithm consists of three stages, which are evaluating the initial credibility of sensor nodes, evaluating the final credibility based on credibility feedback and Bayesian theorem, and adjusting for the outlier set. Simulation results verify that the algorithm can achieve high detection accuracy and low false alarm rate, even when the network contains a large number of outliers. <s> BIB003
Detecting outliers in sensor environments, such as wireless sensor networks BIB002, BIB003, target tracking environments BIB001, and body sensor networks, has helped in ensuring quality network routing and in obtaining accurate results from sensors. It also helps in monitoring computer network performance, for example, to detect network bottlenecks.
Progress in Outlier Detection Techniques: A Survey <s> 8) DATA QUALITY AND DATA CLEANING <s> An outlier removal based data cleaning technique is proposed to clean manually pre-segmented human skin data in colour images. The 3-dimensional colour data is projected onto three 2-dimensional planes, from which outliers are removed. The cleaned 2 dimensional data projections are merged to yield a 3D clean RGB data. This data is finally used to build a look up table and a single Gaussian classifier for the purpose of human skin detection in colour images. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> 8) DATA QUALITY AND DATA CLEANING <s> Enterprise's archives are inevitably affected by the presence of data quality problems (also called glitches). This article proposes the application of a new method to analyze the quality of datasets stored in the tables of a database, with no knowledge of the semantics of the data and without the need to define repositories of rules. The proposed method is based on proper revisions of different approaches for outlier detection that are combined to boost overall performance and accuracy. A novel transformation algorithm is conceived that treats the items in database tables as data points in real coordinate space of n dimensions, so that fields containing dates and fields containing text are processed to calculate distances between those data points. The implementation of an iterative approach ensures that global and local outliers are discovered even if they are subject, primarily in datasets with multiple outliers or clusters of outliers, to masking and swamping effects. The application of the method to a set of archives, some of which have been studied extensively in the literature, provides very promising experimental results and outperforms the application of a single other technique. Finally, a list of future research directions is highlighted. <s> BIB002
Data from different application areas may contain or generate measurement errors and dirty data. The process of outlier detection BIB002, BIB001 can therefore enhance data quality and cleaning. Cleaning and correcting data in this way is essential for training high-quality models and for the fast computation and prediction of accurate results.
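As a small illustration of outlier-based cleaning (our own example, not from the cited works; we use a median/MAD rule rather than the mean and standard deviation, because the latter are themselves contaminated by the dirty values being removed):

```python
import numpy as np

def clean_column(values, threshold=3.5):
    """Drop entries far from the median, measured in MAD units.

    Robust variant of the z-score rule for data cleaning: the median and
    the median absolute deviation (MAD) are barely affected by the dirty
    values that we are trying to remove.
    """
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    mad = mad if mad > 0 else 1e-12               # guard against MAD = 0
    robust_z = 0.6745 * (values - med) / mad       # 0.6745 ~ MAD of N(0, 1)
    return values[np.abs(robust_z) < threshold]

readings = [20.1, 19.8, 20.3, 19.9, 20.0, 999.0]   # 999.0: a sensor glitch
print(clean_column(readings))                      # the glitch is removed
```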
Progress in Outlier Detection Techniques: A Survey <s> 9) TIME-SERIES MONITORING AND DATA STREAMS <s> In the statistics community, outlier detection for time series data has been studied for decades. Recently, with advances in hardware and software technology, there has been a large body of work on temporal outlier detection from a computational perspective within the computer science community. In particular, advances in hardware technology have enabled the availability of various forms of temporal data collection mechanisms, and advances in software technology have enabled a variety of data management mechanisms. This has fueled the growth of different kinds of data sets such as data streams, spatio-temporal data, distributed streams, temporal networks, and time series data, generated by a multitude of applications. There arises a need for an organized and detailed study of the work done in the area of outlier detection with respect to such temporal datasets. In this survey, we provide a comprehensive and structured overview of a large set of interesting outlier definitions for various forms of temporal data, novel techniques, and application scenarios in which specific definitions and techniques have been widely used. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> 9) TIME-SERIES MONITORING AND DATA STREAMS <s> Data mining is one of the most exciting fields of research for the researcher. As data is getting digitized, systems are getting connected and integrated, scope of data generation and analytics has increased exponentially. Today, most of the systems generate non-stationary data of huge, size, volume, occurrence speed, fast changing etc. these kinds of data are called data streams. One of the most recent trend i.e. IOT (Internet Of Things) is also promising lots of expectation of people which will ease the use of day to day activities and it could also connect systems and people together. This situation will also lead to generation of data streams, thus present and future scope of data stream mining is highly promising. Characteristics of data stream possess many challenges for the researcher; this makes analytics of such data difficult and also acts as source of inspiration for researcher. Outlier detection plays important role in any application. In this paper we reviewed different techniques of outlier detection for stream data and their issues in detail and presented results of the same. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> 9) TIME-SERIES MONITORING AND DATA STREAMS <s> Multimedia networks hold the promise of facilitating large-scale, real-time data processing in complex environments. Their foreseeable applications will help protect and monitor military, environmental, safety-critical, or domestic infrastructures and resources. Cloud infrastructures promise to provide high performance and cost effective solutions to large scale data processing problems. This paper focused on the outlier detection over distributed data stream in real time, proposed kernel density estimation (KDE) based outlier detection algorithm KDEDisStrOut in Storm, firstly formalized the problem of outlier detection using the kernel density estimation technique and update the transported data incrementally between the child node and the coordinator node which reduces the communication cost. 
Then the paper adopted an exponential decay policy to keep pace with the transient and evolving nature of stream data, and adaptively changed the weights of different data in the sliding window, which made the data analysis more reasonable. Theoretical analysis and experiments on Storm with synthetic and real data show that the KDEDisStrOut algorithm is efficient and effective compared with existing outlier detection algorithms, and more suitable for data streams. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> 9) TIME-SERIES MONITORING AND DATA STREAMS <s> Errors are prevalent in time series data, such as GPS trajectories or sensor readings. Existing methods focus more on anomaly detection but not on repairing the detected anomalies. By simply filtering out the dirty data via anomaly detection, applications could still be unreliable over the incomplete time series. Instead of simply discarding anomalies, we propose to (iteratively) repair them in time series data, by creatively bonding the beauty of temporal nature in anomaly detection with the widely considered minimum change principle in data repairing. Our major contributions include: (1) a novel framework of iterative minimum repairing (IMR) over time series data, (2) explicit analysis on convergence of the proposed iterative minimum repairing, and (3) efficient estimation of parameters in each iteration. Remarkably, with incremental computation, we reduce the complexity of parameter estimation from O(n) to O(1). Experiments on real datasets demonstrate the superiority of our proposal compared to the state-of-the-art approaches. In particular, we show that (the proposed) repairing indeed improves the time series classification application. <s> BIB004
Detecting outliers in time series data BIB001, BIB004 and abnormal patterns in data streams BIB003, BIB002 is essential, because such abnormal patterns influence the fast computation and estimation of correct results.
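As a rough illustration of how such stream monitoring is often set up (our own sketch, not a method from the cited works; the class name, window size, and warm-up rule are all illustrative choices), incoming values can be scored against the statistics of a sliding window of recent values:

```python
from collections import deque
import math

class WindowedDetector:
    """Flag stream values far from the mean of the last `size` values.

    Illustrative only: real stream detectors (e.g., kernel-density or
    forest-based ones) handle concept drift and multivariate data
    explicitly, rather than with a plain sliding-window z-score.
    """
    def __init__(self, size=50, threshold=3.0):
        self.window = deque(maxlen=size)
        self.threshold = threshold

    def update(self, x):
        flagged = False
        if len(self.window) >= 10:                 # warm-up period
            m = sum(self.window) / len(self.window)
            var = sum((v - m) ** 2 for v in self.window) / len(self.window)
            std = math.sqrt(var) or 1e-12
            flagged = abs(x - m) / std > self.threshold
        if not flagged:
            self.window.append(x)                  # keep the model uncontaminated
        return flagged

det = WindowedDetector()
stream = [10 + 0.1 * (i % 5) for i in range(100)]
print(any(det.update(x) for x in stream), det.update(50.0))  # False True
```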
Progress in Outlier Detection Techniques: A Survey <s> A. DENSITY-BASED APPROACHES <s> For many KDD applications, such as detecting criminal activities in E-commerce, finding the rare instances or the outliers, can be more interesting than finding the common patterns. Existing work in outlier detection regards being an outlier as a binary property. In this paper, we contend that for many scenarios, it is more meaningful to assign to each object a degree of being an outlier. This degree is called the local outlier factor (LOF) of an object. It is local in that the degree depends on how isolated the object is with respect to the surrounding neighborhood. We give a detailed formal analysis showing that LOF enjoys many desirable properties. Using real-world datasets, we demonstrate that LOF can be used to find outliers which appear to be meaningful, but can otherwise not be identified with existing approaches. Finally, a careful performance evaluation of our algorithm confirms we show that our approach of finding local outliers can be practical. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> A. DENSITY-BASED APPROACHES <s> Outlier detection is concerned with discovering exceptional behaviors of objects in data sets.It is becoming a growingly useful tool in applications such as credit card fraud detection, discovering criminal behaviors in e-commerce, identifying computer intrusion, detecting health problems, etc. In this paper, we introduce a connectivity-based outlier factor (COF) scheme that improves the effectiveness of an existing local outlier factor (LOF) scheme when a pattern itself has similar neighbourhood density as an outlier. We give theoretical and empirical analysis to demonstrate the improvement in effectiveness and the capability of the COF scheme in comparison with the LOF scheme. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> A. DENSITY-BASED APPROACHES <s> Outlier detection is an integral part of data mining and has attracted much attention recently [M. Breunig et al., (2000)], [W. Jin et al., (2001)], [E. Knorr et al., (2000)]. We propose a new method for evaluating outlierness, which we call the local correlation integral (LOCI). As with the best previous methods, LOCI is highly effective for detecting outliers and groups of outliers (a.k.a. micro-clusters). In addition, it offers the following advantages and novelties: (a) It provides an automatic, data-dictated cutoff to determine whether a point is an outlier-in contrast, previous methods force users to pick cut-offs, without any hints as to what cut-off value is best for a given dataset. (b) It can provide a LOCI plot for each point; this plot summarizes a wealth of information about the data in the vicinity of the point, determining clusters, micro-clusters, their diameters and their inter-cluster distances. None of the existing outlier-detection methods can match this feature, because they output only a single number for each point: its outlierness score, (c) Our LOCI method can be computed as quickly as the best previous methods, (d) Moreover, LOCI leads to a practically linear approximate method, aLOCI (for approximate LOCI), which provides fast highly-accurate outlier detection. To the best of our knowledge, this is the first work to use approximate computations to speed up outlier detection. 
Experiments on synthetic and real world data sets show that LOCI and aLOCI can automatically detect outliers and micro-clusters, without user-required cut-offs, and that they quickly spot both expected and unexpected outliers. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> A. DENSITY-BASED APPROACHES <s> Outlier detection can lead to discovering unexpected and interesting knowledge, which is critical important to some areas such as monitoring of criminal activities in electronic commerce, credit card fraud, etc. In this paper, we developed an efficient density-based outlier detection method for large datasets. Our contributions are: a) we introduce a relative density factor (RDF); b) based on RDF, we propose an RDF-based outlier detection method which can efficiently prune the data points which are deep in clusters, and detect outliers only within the remaining small subset of the data; c) the performance of our method is further improved by means of a vertical data representation, P-trees. We tested our method with NHL and NBA data. Our method shows an order of magnitude speed improvement compared to the contemporary approaches. <s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> A. DENSITY-BASED APPROACHES <s> An outlier is an observation that deviates so much from other observations as to arouse suspicion that it was generated by a different mechanism. Outlier detection has many applications, such as data cleaning, fraud detection and network intrusion. The existence of outliers can indicate individuals or groups that exhibit a behavior that is very different from most of the individuals of the dataset. In this paper we design two parallel algorithms, the first one is for finding out distance-based outliers based on nested loops along with randomization and the use of a pruning rule. The second parallel algorithm is for detecting density-based local outliers. In both cases data parallelism is used. We show that both algorithms reach near linear speedup. Our algorithms are tested on four real-world datasets coming from the Machine Learning Database Repository at the UCI. <s> BIB005 </s> Progress in Outlier Detection Techniques: A Survey <s> A. DENSITY-BASED APPROACHES <s> Mining outliers in database is to find exceptional objects that deviate from the rest of the data set. Besides classical outlier analysis algorithms, recent studies have focused on mining local outliers, i.e., the outliers that have density distribution significantly different from their neighborhood. The estimation of density distribution at the location of an object has so far been based on the density distribution of its k-nearest neighbors [2,11]. However, when outliers are in the location where the density distributions in the neighborhood are significantly different, for example, in the case of objects from a sparse cluster close to a denser cluster, this may result in wrong estimation. To avoid this problem, here we propose a simple but effective measure on local outliers based on a symmetric neighborhood relationship. The proposed measure considers both neighbors and reverse neighbors of an object when estimating its density distribution. As a result, outliers so discovered are more meaningful. To compute such local outliers efficiently, several mining algorithms are developed that detects top-n outliers based on our definition. 
A comprehensive performance evaluation and analysis shows that our methods are not only efficient in the computation but also more effective in ranking outliers. <s> BIB006 </s> Progress in Outlier Detection Techniques: A Survey <s> A. DENSITY-BASED APPROACHES <s> One of the common endeavours in engineering applications is outlier detection, which aims to identify inconsistent records from large amounts of data. Although outlier detection schemes in data mining discipline are acknowledged as a more viable solution to efficient identification of anomalies from these data repository, current outlier mining algorithms require the input of domain parameters. These parameters are often unknown, difficult to determine and vary across different datasets containing different cluster features. This paper presents a novel resolution-based outlier notion and a nonparametric outlier-mining algorithm, which can efficiently identify and rank top listed outliers from a wide variety of datasets. The algorithm generates reasonable outlier results by taking both local and global features of a dataset into account. Experiments are conducted using both synthetic datasets and a real life construction equipment dataset from a large road building contractor. Comparison with the current outlier mining algorithms indicates that the proposed algorithm is more effective and can be integrated into a decision support system to serve as a universal detector of potentially inconsistent records. <s> BIB007 </s> Progress in Outlier Detection Techniques: A Survey <s> A. DENSITY-BASED APPROACHES <s> Detecting outliers which are grossly different from or inconsistent with the remaining dataset is a major challenge in real-world KDD applications. Existing outlier detection methods are ineffective on scattered real-world datasets due to implicit data patterns and parameter setting issues. We define a novel Local Distance-based Outlier Factor (LDOF) to measure the outlier-ness of objects in scattered datasets which addresses these issues. LDOF uses the relative location of an object to its neighbours to determine the degree to which the object deviates from its neighbourhood. We present theoretical bounds on LDOF's false-detection probability. Experimentally, LDOF compares favorably to classical KNN and LOF based outlier detection. In particular it is less sensitive to parameter values. <s> BIB008 </s> Progress in Outlier Detection Techniques: A Survey <s> A. DENSITY-BASED APPROACHES <s> Many outlier detection methods do not merely provide the decision for a single data object being or not being an outlier but give also an outlier score or "outlier factor" signaling "how much" the respective data object is an outlier. A major problem for any user not very acquainted with the outlier detection method in question is how to interpret this "factor" in order to decide for the numeric score again whether or not the data object indeed is an outlier. Here, we formulate a local density based outlier detection method providing an outlier "score" in the range of [0, 1] that is directly interpretable as a probability of a data object for being an outlier. <s> BIB009 </s> Progress in Outlier Detection Techniques: A Survey <s> A. DENSITY-BASED APPROACHES <s> Outlier detection research has been seeing many new algorithms every year that often appear to be only slightly different from existing methods along with some experiments that show them to "clearly outperform" the others. 
However, few approaches come along with a clear analysis of existing methods and a solid theoretical differentiation. Here, we provide a formalized method of analysis to allow for a theoretical comparison and generalization of many existing methods. Our unified view improves understanding of the shared properties and of the differences of outlier detection models. By abstracting the notion of locality from the classic distance-based notion, our framework facilitates the construction of abstract methods for many special data types that are usually handled with specialized algorithms. In particular, spatial neighborhood can be seen as a special case of locality. Here we therefore compare and generalize approaches to spatial outlier detection in a detailed manner. We also discuss temporal data like video streams, or graph data such as community networks. Since we reproduce results of specialized approaches with our general framework, and even improve upon them, our framework provides reasonable baselines to evaluate the true merits of specialized approaches. At the same time, seeing spatial outlier detection as a special case of local outlier detection, opens up new potentials for analysis and advancement of methods. <s> BIB010 </s> Progress in Outlier Detection Techniques: A Survey <s> A. DENSITY-BASED APPROACHES <s> Outlier mining is a major task in data analysis. Outliers are objects that highly deviate from regular objects in their local neighborhood. Density-based outlier ranking methods score each object based on its degree of deviation. In many applications, these ranking methods degenerate to random listings due to low contrast between outliers and regular objects. Outliers do not show up in the scattered full space, they are hidden in multiple high contrast subspace projections of the data. Measuring the contrast of such subspaces for outlier rankings is an open research challenge. In this work, we propose a novel subspace search method that selects high contrast subspaces for density-based outlier ranking. It is designed as pre-processing step to outlier ranking algorithms. It searches for high contrast subspaces with a significant amount of conditional dependence among the subspace dimensions. With our approach, we propose a first measure for the contrast of subspaces. Thus, we enhance the quality of traditional outlier rankings by computing outlier scores in high contrast projections only. The evaluation on real and synthetic data shows that our approach outperforms traditional dimensionality reduction techniques, naive random projections as well as state-of-the-art subspace search techniques and provides enhanced quality for outlier ranking. <s> BIB011 </s> Progress in Outlier Detection Techniques: A Survey <s> A. DENSITY-BASED APPROACHES <s> The problem of unsupervised outlier detection is challenging, especially when the structure of data is unknown. This paper presents a new density-based outlier detection technique that detects the top-n outliers. It overcomes the limitations of existing approaches, like low accuracy and high sensitivity to parameters. Our approach provides a score to each object called Dynamic-Window Outlier Factor (DWOF). DWOF improves Resolution-based Outlier Factor method (ROF) to consider varying-density clusters, which improves outliers’ ranking even when providing same outliers. Experiments show that DWOF’s average accuracy is better than existing approaches and less sensitive to its parameter. 
<s> BIB012 </s> Progress in Outlier Detection Techniques: A Survey <s> A. DENSITY-BASED APPROACHES <s> Outlier detection is one of the key problems in the data mining area which can reveal rare phenomena and behaviors. In this paper, we will examine the problem of density-based local outlier detection on uncertain data sets described by some discrete instances. We propose a new density-based local outlier concept based on uncertain data. In order to quickly detect outliers, an algorithm is proposed that does not require the unfolding of all possible worlds. The performance of our method is verified through a number of simulation experiments. The experimental results show that our method is an effective way to solve the problem of density-based local outlier detection on uncertain data. <s> BIB013 </s> Progress in Outlier Detection Techniques: A Survey <s> A. DENSITY-BASED APPROACHES <s> Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS Forestto systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request. <s> BIB014 </s> Progress in Outlier Detection Techniques: A Survey <s> A. DENSITY-BASED APPROACHES <s> An integrated framework for density-based cluster analysis, outlier detection, and data visualization is introduced in this article. The main module consists of an algorithm to compute hierarchical estimates of the level sets of a density, following Hartigan’s classic model of density-contour clusters and trees. Such an algorithm generalizes and improves existing density-based clustering techniques with respect to different aspects. It provides as a result a complete clustering hierarchy composed of all possible density-based clusters following the nonparametric model adopted, for an infinite range of density thresholds. The resulting hierarchy can be easily processed so as to provide multiple ways for data visualization and exploration. It can also be further postprocessed so that: (i) a normalized score of “outlierness” can be assigned to each data object, which unifies both the global and local perspectives of outliers into a single definition; and (ii) a “flat” (i.e., nonhierarchical) clustering solution composed of clusters extracted from local cuts through the cluster tree (possibly corresponding to different density thresholds) can be obtained, either in an unsupervised or in a semisupervised way. 
In the unsupervised scenario, the algorithm corresponding to this postprocessing module provides a global, optimal solution to the formal problem of maximizing the overall stability of the extracted clusters. If partially labeled objects or instance-level constraints are provided by the user, the algorithm can solve the problem by considering both constraints violations/satisfactions and cluster stability criteria. An asymptotic complexity analysis, both in terms of running time and memory space, is described. Experiments are reported that involve a variety of synthetic and real datasets, including comparisons with state-of-the-art, density-based clustering and (global and local) outlier detection methods. <s> BIB015 </s> Progress in Outlier Detection Techniques: A Survey <s> A. DENSITY-BASED APPROACHES <s> The outlier detection is a popular issue in the area of data management and multimedia analysis, and it can be used in many applications such as detection of noisy images, credit card fraud detection, network intrusion detection. The density-based outlier is an important definition of outlier, whose target is to compute a Local Outlier Factor (LOF) for each tuple in a data set to represent the degree of this tuple to be an outlier. It shows several significant advantages comparing with other existing definitions. This paper focuses on the problem of distributed density-based outlier detection for large-scale data. First, we propose a Gird-Based Partition algorithm (GBP) as a data preparation method. GBP first splits the data set into several grids, and then allocates these grids to the datanodes in a distributed environment. Second, we propose a Distributed LOF Computing method (DLC) for detecting density-based outliers in parallel, which only needs a small amount of network communications. At last, the efficiency and effectiveness of the proposed approaches are verified through a series of simulation experiments. <s> BIB016 </s> Progress in Outlier Detection Techniques: A Survey <s> A. DENSITY-BASED APPROACHES <s> A local density-based approach for outlier detection is proposed.The theoretical properties of the proposed outlierness score are derived.Three types of nearest neighbors are presented. This paper presents a simple and effective density-based outlier detection approach with local kernel density estimation (KDE). A Relative Density-based Outlier Score (RDOS) is introduced to measure local outlierness of objects, in which the density distribution at the location of an object is estimated with a local KDE method based on extended nearest neighbors of the object. Instead of using only k nearest neighbors, we further consider reverse nearest neighbors and shared nearest neighbors of an object for density distribution estimation. Some theoretical properties of the proposed RDOS including its expected value and false alarm probability are derived. A comprehensive experimental study on both synthetic and real-life data sets demonstrates that our approach is more effective than state-of-the-art outlier detection methods. <s> BIB017 </s> Progress in Outlier Detection Techniques: A Survey <s> A. DENSITY-BASED APPROACHES <s> Most outlier detection algorithms are based on lazy learning or imply quadratic complexity. Both characteristics make them unsuitable for big data and stream data applications and preclude their applicability in systems that must operate autonomously. 
In this paper we propose a new algorithm—called SDO (Sparse Data Observers)—to estimate outlierness based on low density models of data. SDO is an eager learner; therefore, computational costs in application phases are severely reduced. We perform tests with a wide variation of synthetic datasets as well as the main datasets published in the literature for anomaly detection testing. Results show that SDO satisfactorily competes with the best ranked outlier detection alternatives. The good detection performance coupled with a low complexity makes SDO highly flexible and adaptable to stand-alone frameworks that must detect outliers fast with accuracy rates equivalent to lazy learning algorithms. <s> BIB018 </s> Progress in Outlier Detection Techniques: A Survey <s> A. DENSITY-BASED APPROACHES <s> Outlier detection is an important data mining technique to identify interesting and novel patterns, trends and anomalies from data. Density-based methods are among the most popular class of methods used in outlier detection. However, these methods suffer from the low density patterns problem that could lead to poor performance. In this paper, a novel relative density-based outlier detection algorithm is proposed, which utilizes a new measure of an object's neighborhood density. This approach takes into account an important factor for density: relative neighborhood. Experiments on both simulated and real data demonstrate that the proposed algorithm achieves better performance than other alternatives. <s> BIB019 </s> Progress in Outlier Detection Techniques: A Survey <s> A. DENSITY-BASED APPROACHES <s> After the local outlier factor was first proposed, there is a large family of local outlier detection approaches derived from it. Since the existing approaches only focus on the extent of overall separation between an object and its neighbors, and ignore the degree of dispersion between them, the precision of these approaches will be affected by various degrees in the scattered datasets. In addition, the outlier data occupy a relatively small amount in the dataset, but the existing approaches need to perform local outlier factor calculation on all data during the outlier detection, which greatly reduces the efficiency of the algorithms. In this paper, we redefine a local outlier factor called local deviation coefficient (LDC) by taking full advantage of the distribution of the object and its neighbors. And then, we propose a safe non-outlier objects elimination approach named as rough clustering based on multi-level queries (RCMLQ) to preprocess the datasets to eliminate the non-outlier objects to the utmost. Finally, an efficient local outlier detection approach named as efficient density-based local outlier detection for scattered data (E2DLOS) is proposed based on the LDC and RCMLQ. The RCMLQ greatly reduces the amount of data that needs to be quantified for local outlier factor and the LDC is more sensitive to the degree of anomaly of the scattered datasets, and so the E2DLOS improves the existing local outlier detection approaches in time efficiency and detection accuracy. Experiments show that the LDC can better reflect the true abnormal situations of the data for the scattered datasets. And the RCMLQ can be used in parallel with the traditional methods of improving the efficiency of the nearest neighbor search, which can further improve the efficiency of the E2DLOS algorithm by about 16%. <s> BIB020
Applying density-based methods is one of the earliest known approaches to outlier detection problems. The core principle of density-based outlier detection is that outliers are found in low-density regions, whereas non-outliers (inliers) are assumed to appear in dense neighborhoods: objects that differ considerably from their nearest neighbors, i.e., that lie far from their closest neighbors, are flagged and treated as outliers. These methods compare each point's local density with the densities of its local neighbors. Density-based outlier detection methods apply more complex mechanisms to model outliers than distance-based methods do. Notwithstanding this, their simplicity and effectiveness have made them widely adopted for detecting outliers, and some algorithms designed with this approach have served as baselines BIB001, BIB006 for many new algorithms BIB008 - BIB017.

Breunig et al. BIB001 proposed the Local Outlier Factor (LOF) method, one of the first fundamental density-based outlier detection methods, loosely related to density-based clustering. The technique makes use of the k-nearest neighbors (kNN): within the kNN set of each point, LOF computes the local reachability density (lrd) and compares it with those of the neighbors of each member of that kNN set. The local reachability density (a density estimate that reduces variability) of an object p is defined as:

$$\mathrm{lrd}_k(p) = 1 \bigg/ \left( \frac{\sum_{o \in N_k(p)} \text{reach-dist}_k(p, o)}{|N_k(p)|} \right),$$

where $\text{reach-dist}_k(p, o) = \max\{k\text{-dist}(o),\, d(p, o)\}$ is the reachability distance of p with respect to o. The final local outlier factor score is given as:

$$\mathrm{LOF}_k(p) = \frac{1}{|N_k(p)|} \sum_{o \in N_k(p)} \frac{\mathrm{lrd}_k(o)}{\mathrm{lrd}_k(p)},$$

where $\mathrm{lrd}_k(p)$ and $\mathrm{lrd}_k(o)$ are the local reachability densities of p and o, respectively. The main idea of the approach is that the outlier degree of an observation is determined by the clustering structure in its adjacent neighborhood. The LOF score is at its peak when the lrd of the test point is small compared to the estimates of its nearest neighbors. Storing the kNN and lrd values simultaneously when computing the LOF scores of all data points incurs O(k) additional operations per point. It is therefore prudent to apply a valid index, because in the absence of a useful index, a sequential search incurs O(n²) time for a data set of size n.

Because of these shortcomings, Schubert et al. BIB010 observed that the LOF density estimate can be simplified, and proposed the SimplifiedLOF, which replaces LOF's reachability distance with the kNN distance:

$$\mathrm{SimplifiedLOF}_k(p) = \frac{\frac{1}{|N_k(p)|}\sum_{o \in N_k(p)} \mathrm{dens}(o)}{\mathrm{dens}(p)}, \qquad \mathrm{dens}(p) = \frac{1}{k\text{-dist}(p)}.$$

Even though the SimplifiedLOF showed improved performance, its computational complexity is similar to that of LOF. In a later study, Tang et al. BIB002 introduced an improvement to LOF BIB001 and SimplifiedLOF BIB010 called the Connectivity-based Outlier Factor (COF). The method is closely similar to LOF, the only difference being the way the density estimation of the records is computed: COF uses a chaining distance, the shortest path, to estimate the local densities of the neighbors, while LOF uses the Euclidean distance when selecting the k-nearest neighbors. The drawback of this approach is the indirect assumption made about the data distribution, which can result in incorrect density estimation. The key idea proposed by the authors is to differentiate ''low density'' from ''isolativity'', where isolativity is defined as the degree of an object's connectivity to other objects. The COF value at p with respect to its k-neighborhood is expressed as

$$\mathrm{COF}_k(p) = \frac{|N_k(p)| \cdot \text{ac-dist}_{N_k(p)}(p)}{\sum_{o \in N_k(p)} \text{ac-dist}_{N_k(o)}(o)},$$

where $\text{ac-dist}_{N_k(p)}(p)$ is the average chaining distance from p to $N_k(p)$.
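For concreteness, the lrd/LOF definitions above translate almost line for line into code. The following sketch is our own illustrative O(n²) implementation (function names and toy data are ours; it ignores distance ties, and production implementations such as scikit-learn's LocalOutlierFactor use spatial indexes for the kNN search):

```python
import numpy as np

def lof_scores(X, k=3):
    """Direct NumPy transcription of the lrd/LOF formulas above."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)               # exclude self-distances
    knn = np.argsort(D, axis=1)[:, :k]        # indices of the k nearest neighbors
    k_dist = D[np.arange(n), knn[:, -1]]      # distance to the k-th neighbor

    # reach-dist_k(p, o) = max{k-dist(o), d(p, o)};  lrd_k(p) = 1 / mean(reach-dist)
    lrd = np.empty(n)
    for p in range(n):
        reach = np.maximum(k_dist[knn[p]], D[p, knn[p]])
        lrd[p] = 1.0 / reach.mean()

    # LOF_k(p) = mean of lrd_k(o) / lrd_k(p) over o in N_k(p)
    return np.array([(lrd[knn[p]] / lrd[p]).mean() for p in range(n)])

X = np.vstack([np.random.RandomState(0).normal(size=(30, 2)), [[6.0, 6.0]]])
print(lof_scores(X, k=5)[-1])   # the isolated point gets a LOF score well above 1
```

Inliers inside the cluster score close to 1 (their density matches their neighbors'), which matches the intuition behind the formula.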
COF adjusts the SimplifiedLOF's density estimate to account for 'connectedness' via a minimum spanning tree (MST) of the neighborhood. A cost of O(k²) is incurred when computing the MST of the kNNs. The method still maintains a time complexity similar to LOF, except in cases where the data sets are characterized by connective data patterns.

After a couple of techniques, it was still unclear which threshold score should mark a point as an outlier in LOF. Kriegel et al. BIB009 therefore formulated a more robust local density estimate for an outlier detection method called Local Outlier Probabilities (LoOP), which combines the idea of providing an outlier 'score' with a probabilistic, statistically oriented approach. It makes use of a density estimation based on the distance distribution, and the local outlier score is defined as a probability. LoOP thus addresses the issue of LOF outputting an outlier score instead of an outlier probability; the advantage of the probability score is that it may allow a better comparison of outlier records across different datasets. The LoOP score indicating that a point is an outlier is given as:

$$\mathrm{LoOP}_S(o) = \max\left\{0,\ \mathrm{erf}\!\left(\frac{\mathrm{PLOF}_{\lambda,S}(o)}{\mathrm{nPLOF}\cdot\sqrt{2}}\right)\right\},$$

where $\mathrm{PLOF}_{\lambda,S}(o)$ is the probabilistic local outlier factor of an object o with respect to a significance λ and a context set S(o) ⊆ D, and nPLOF is the aggregated normalization value. Points within a dense region will have a LoOP value close to 0, while values closer to 1 indicate density-based outliers. Similar to SimplifiedLOF BIB010, LoOP normalizes its outlier detection score, which gives it the same complexity of O(k) per point as in BIB010. LoOP, like other previous local outlier algorithms, computes the local density estimation using the neighborhood set; however, it computes the density differently: it follows the assumption of a ''half-Gaussian'' distribution and applies the probabilistic set distance (standard deviation).

LOF BIB001 and COF BIB002 fall short of handling the issue of multi-granularity correctly. Papadimitriou et al. BIB003 proposed a technique with the LOcal Correlation Integral, called LOCI, and its outlier metric, the multi-granularity deviation factor (MDEF), to handle this drawback. Points whose MDEF deviates by three or more standard deviations from that of their neighborhood are marked as outliers. LOCI deals well with local density variations in the feature space and also detects both distant clusters and secluded outliers. The MDEF of a point $p_i$ at a radius r is mathematically defined as:

$$\mathrm{MDEF}(p_i, r, \alpha) = 1 - \frac{n(p_i, \alpha r)}{\hat{n}(p_i, r, \alpha)},$$

where $n(p_i, \alpha r)$ and $\hat{n}(p_i, r, \alpha)$ are the number of objects in the αr-neighborhood of $p_i$ and the average of this count over all objects p in the r-neighborhood of $p_i$, respectively. Estimating the numerator and denominator of the fraction on the right-hand side allows faster computation of the MDEF.

All the previous algorithms have shown that it is crucial to choose an appropriate k for excellent detection performance; in LOCI, a maximization approach is used to address this issue. The method adopts the half-Gaussian distribution to estimate the local density, similar to LoOP; however, instead of using the distance, the aggregate of the records in the neighborhood is used. Another point worth noting is that LoOP estimates the local density differently from LOCI: rather than comparing ratios of local densities, LOCI examines two neighborhoods of different sizes. Even though LOCI showed good performance, it has a longer runtime, and Papadimitriou et al.
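The final normalization step of LoOP is simple enough to sketch directly. The following is our own minimal transcription of the formula above (computing the PLOF values themselves requires the probabilistic set distances of each object's context set, which we do not reproduce here; we take nPLOF as λ times the quadratic mean of the PLOF values, as in the original formulation):

```python
import numpy as np
from scipy.special import erf

def loop_from_plof(plof, lam=3.0):
    """Turn raw PLOF values into LoOP outlier probabilities in [0, 1].

    nPLOF aggregates the PLOF values as lambda times their quadratic mean;
    erf then maps the standardized score to a probability-like value.
    """
    plof = np.asarray(plof, dtype=float)
    nplof = lam * np.sqrt(np.mean(plof ** 2))
    return np.maximum(0.0, erf(plof / (nplof * np.sqrt(2.0))))

scores = loop_from_plof([0.1] * 20 + [4.0])
print(scores[0], scores[-1])   # inliers score near 0, the outlying value near 0.9
```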
LOF BIB001 and COF BIB002 fall short of handling the issue of multi-granularity correctly. Papadimitriou et al. BIB003 proposed a technique built on the LOcal Correlation Integral, called LOCI, with its outlier metric, the multi-granularity deviation factor (MDEF), to handle this drawback. Points whose MDEF deviates by at least three standard deviations from that of their neighborhood are marked as outliers. LOCI deals well with local density variations in the feature space and detects both distant clusters and secluded outliers. The MDEF of a point $p_i$ at radius $r$ is mathematically defined as

$\mathrm{MDEF}(p_i, r, \alpha) = 1 - \frac{n(p_i, \alpha r)}{\hat{n}(p_i, r, \alpha)}$,

where $n(p_i, \alpha r)$ is the number of objects in the $\alpha r$-neighborhood of $p_i$ and $\hat{n}(p_i, r, \alpha)$ is the average of $n(p, \alpha r)$ over all objects $p$ in the $r$-neighborhood of $p_i$. MDEF can be computed faster by approximating, rather than counting exactly, the numerator and denominator of this fraction. All the previous algorithms have shown that choosing an appropriate $k$ is crucial for excellent detection performance; LOCI addresses this issue with a maximization approach over a range of scales. Like LoOP, LOCI adopts the half-Gaussian distribution to estimate the local density; however, instead of using distances, it uses the aggregate counts of the records in the neighborhood. Another point worth noting is that LoOP estimates the local density differently from LOCI: instead of comparing ratios of local densities, LOCI examines two differently sized neighborhoods. Even though LOCI shows good performance, it has a long runtime, so Papadimitriou et al. BIB003 also proposed an approximate version of LOCI, called aLOCI, which applies quad-trees, under some constraints, to speed up the counting of the two neighborhoods.

Ren et al. BIB004 proposed a technique that performs more efficiently than existing methods such as LOF BIB001 and LOCI BIB003 as a result of its ability to prune data points that lie deep inside a cluster, and it shows better scalability as the data size increases. Their Relative Density Factor (RDF) method uses a vertical data model (P-trees) to detect outliers. RDF measures the degree of outlierness, and outliers are points with high RDF values. The RDF of a point $P$ at radius $r$ is the ratio of its neighborhood density factor to its density factor:

$\mathrm{RDF}(P, r) = \frac{\mathrm{DF}_{nbr}(P, r)}{\mathrm{DF}(P, r)}$,

where $\mathrm{DF}(P, r)$, the density factor, is defined as the ratio of the number of neighbors of $P$ to the radius $r$, and $\mathrm{DF}_{nbr}(P, r)$ is the neighborhood density factor of the point $P$. Jin et al. BIB006 proposed INFLuenced Outlierness (INFLO), another local outlier detection technique, similar to LOF, that uses a symmetric neighborhood relationship to mine outliers. In LOF, for a data set with closely located clusters of different densities, the instances at the cluster borders are not scored correctly; INFLO addresses this shortcoming and solves the problem of inaccurate space representation in LOF. INFLO uses different descriptions of the neighborhood for the reference set and the context set: its score is computed from both the k-nearest neighbors (NNs) and the reverse k-nearest neighbors (RNNs) of a data point, which yields an enhanced estimation of the neighborhood's density distribution. INFLO is defined as the 'ratio of the average density of objects in $IS_k(p)$ to $p$'s local density':

$\mathrm{INFLO}_k(p) = \frac{\mathrm{den}_{avg}(IS_k(p))}{\mathrm{den}(p)}$, with $\mathrm{den}_{avg}(IS_k(p)) = \frac{1}{|IS_k(p)|}\sum_{o \in IS_k(p)} \mathrm{den}(o)$,

where $\mathrm{den}(o)$ and $\mathrm{den}(p)$ are the densities of $o$ and $p$, respectively, and $IS_k(p)$ is the k-influence space of $p$, i.e., the union of its k-nearest neighbors and its reverse k-nearest neighbors. The higher the INFLO value, the higher the probability that the object is an outlier.
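The symmetric-neighborhood idea is easiest to see in code. Below is a minimal sketch of the INFLO score (not the authors' implementation); the convention den(p) = 1 / k-distance(p) and all helper names are our own assumptions:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def inflo_scores(X, k=10):
    """Sketch of INFLO: density ratio over the k-influence space
    IS_k(p) = kNN(p) union RkNN(p). Assumes den(p) = 1 / k-distance(p)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, idx = nn.kneighbors(X)          # column 0 is the point itself
    kdist = dist[:, -1]                   # k-distance of each point
    knn = idx[:, 1:]                      # k nearest neighbors
    den = 1.0 / np.maximum(kdist, 1e-12)  # local density estimate

    # Reverse kNN: o is in RkNN(p) iff p is in kNN(o)
    rknn = [[] for _ in range(len(X))]
    for o, neighbors in enumerate(knn):
        for p in neighbors:
            rknn[p].append(o)

    scores = np.empty(len(X))
    for p in range(len(X)):
        influence = list(set(knn[p]) | set(rknn[p]))
        scores[p] = den[influence].mean() / den[p]
    return scores  # larger => more likely an outlier
```

Building the reverse-neighbor lists densely, as here, costs O(nk) after the kNN queries; the published method instead mines only the top-n outliers to keep this cost down.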
In 2014, still using the density-based approach to tackle local outlier detection problems, Cao et al. BIB013 proposed a novel density-based local outlier detection (UDLO) notion for uncertain data that are characterized by some discrete instances. An exact algorithm is recommended to compute the density of an instance, rather than using the naive method of finding all k-neighbors to calculate the outliers, as in LOF. However, their approach applies only the Euclidean distance metric; investigating whether other distance computation methods improve the algorithm's performance could be a future study. After the introduction of LOF BIB001 , several variations of it were established, such as COF BIB002 , INFLO BIB006 , and LOCI BIB003 ; however, these algorithms are challenged by the distance computations in high-dimensional data sets. Keller et al. BIB011 proposed a high-contrast subspace method (HiCS) to improve the evaluation and ranking of outliers whose outlier scores are closely related. Extending the focus beyond local outliers to include global outliers, Campello et al. BIB015 proposed a new and effective outlier detection measure called the Global-Local Outlier Score from Hierarchies (GLOSH). It is capable of simultaneously detecting both global and local outlier types based on a complete statistical interpretation. Although GLOSH does not outperform the other techniques in all cases, it has the strength of scaling well across different tasks; since it is based on a specific k-nearest-neighbor density estimate, it also has some limitations, and a future study could investigate how other density estimates would improve this work.

Momtaz et al. BIB012 deviate a little from the central focus of most previous algorithms for computing local outliers: they introduced a novel density-based outlier detection technique that detects the top-n outliers by giving every object a score called the Dynamic-Window Outlier Factor (DWOF). This algorithm is a modified and improved version of the Resolution-based Outlier Factor (ROF) algorithm of Fan et al. BIB007 and overcomes some of ROF's setbacks, such as low accuracy and high sensitivity to parameters. With the massive flow of high-dimensional data, new research motivations are linked with improving the effectiveness and efficiency of outlier detection algorithms for big data. Wu et al. BIB014 proposed an algorithm for detecting outliers in big data streams that uses RS-Forest, a fast and accurate density estimator, together with a semi-supervised one-class machine-learning algorithm. Bai et al. BIB016 considered density-based outlier detection in big data and proposed a Distributed LOF Computing (DLC) method that detects outliers in parallel. The main idea is twofold: a preprocessing stage that uses the Grid-Based Partitioning (GBP) algorithm, followed by a DLC stage for outlier detection. Despite its improved performance, however, it still does not scale as well as the Parallel LOF Algorithm (PLOFA) of Lozano et al. BIB005 ; improving the scalability of the algorithm can be an interesting research problem for a future direction. Tang and He BIB017 proposed an outlier detection method using local KDE, in which a Relative Density-Based Outlier Score (RDOS) measures the local outlierness. The local KDE method, with an extension to the object's nearest neighbors, is used to estimate the density distribution at the object's location; they place more emphasis on the reverse and shared nearest neighbors than on the k-nearest neighbors of an object for this density distribution estimation. In their method, only the Euclidean distance metric is applied, similar to UDLO in BIB013 ; a related extension for a future study is to investigate the effect of other distance measures and to extend the work to real-life applications. Vázquez et al. BIB018 proposed a novel algorithm, Sparse Data Observers (SDO), that detects outliers based on low-density models of the data. SDO reduces the quadratic complexity experienced by most lazy-learner outlier detection algorithms: it is an eager learner that severely reduces the computational cost while still performing well compared with other best-ranked outlier detection algorithms. Ning et al. BIB019 proposed a relative density-based outlier detection method that uses a novel technique to measure an object's neighborhood density. Su et al. BIB020 proposed E2DLOS, an efficient density-based scheme for local outlier detection in scattered data. They rename the local outlier factor the Local Deviation Coefficient (LDC), which takes full advantage of the distribution of the object and of its neighbors.
They also proposed a safe non-outlier object removal method, named rough clustering based on multi-level queries (RCMLQ), to preprocess the data sets and remove all non-outlier objects, which reduces the amount of data for which the local outlier factor must be computed. The proposed method combines LDC and RCMLQ and, according to their experiments, improves on existing local outlier detection methods in both detection accuracy and time efficiency. We present a summary in Table 2, showing the progress of some key algorithms mentioned above. In this overview, it is essential to note that when we say one method outperforms another, it does not necessarily mean that it is superior in all scenarios and data sets. The analysis and summary presented here are based on the experiments reported by the authors of these papers. A method may outperform another only for the particular parameters, scenario, or assumptions used in the experiment; we cannot claim that a method is superior in all cases, since we did not rerun the experiments under identical parameter settings and environments. This caveat applies to all the following tables (Tables 2-5) in this paper.
Progress in Outlier Detection Techniques: A Survey <s> 1) DENSITY-BASED APPROACHES-ADVANTAGES, DISADVANTAGES, CHALLENGES, AND GAPS a: ADVANTAGES <s> Spatial data mining, i.e., discovery of interesting characteristics and patterns that may implicitly exist in spatial databases, is a challenging task due to the huge amounts of spatial data and to the new conceptual nature of the problems which must account for spatial distance. Clustering and region oriented queries are common problems in this domain. Several approaches have been presented in recent years, all of which require at least one scan of all individual objects (points). Consequently, the computational complexity is at least linearly proportional to the number of objects to answer each query. In this paper, we propose a hierarchical statistical information grid based approach for spatial data mining to reduce the cost further. The idea is to capture statistical information associated with spatial cells in such a manner that whole classes of queries and clustering problems can be answered without recourse to the individual objects. In theory, and confirmed by empirical studies, this approach outperforms the best previous method by at least an order of magnitude, especially when the data set is very large. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) DENSITY-BASED APPROACHES-ADVANTAGES, DISADVANTAGES, CHALLENGES, AND GAPS a: ADVANTAGES <s> For many KDD applications, such as detecting criminal activities in E-commerce, finding the rare instances or the outliers, can be more interesting than finding the common patterns. Existing work in outlier detection regards being an outlier as a binary property. In this paper, we contend that for many scenarios, it is more meaningful to assign to each object a degree of being an outlier. This degree is called the local outlier factor (LOF) of an object. It is local in that the degree depends on how isolated the object is with respect to the surrounding neighborhood. We give a detailed formal analysis showing that LOF enjoys many desirable properties. Using real-world datasets, we demonstrate that LOF can be used to find outliers which appear to be meaningful, but can otherwise not be identified with existing approaches. Finally, a careful performance evaluation of our algorithm confirms we show that our approach of finding local outliers can be practical. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) DENSITY-BASED APPROACHES-ADVANTAGES, DISADVANTAGES, CHALLENGES, AND GAPS a: ADVANTAGES <s> Outlier detection is an integral part of data mining and has attracted much attention recently [M. Breunig et al., (2000)], [W. Jin et al., (2001)], [E. Knorr et al., (2000)]. We propose a new method for evaluating outlierness, which we call the local correlation integral (LOCI). As with the best previous methods, LOCI is highly effective for detecting outliers and groups of outliers (a.k.a. micro-clusters). In addition, it offers the following advantages and novelties: (a) It provides an automatic, data-dictated cutoff to determine whether a point is an outlier-in contrast, previous methods force users to pick cut-offs, without any hints as to what cut-off value is best for a given dataset. (b) It can provide a LOCI plot for each point; this plot summarizes a wealth of information about the data in the vicinity of the point, determining clusters, micro-clusters, their diameters and their inter-cluster distances. 
None of the existing outlier-detection methods can match this feature, because they output only a single number for each point: its outlierness score, (c) Our LOCI method can be computed as quickly as the best previous methods, (d) Moreover, LOCI leads to a practically linear approximate method, aLOCI (for approximate LOCI), which provides fast highly-accurate outlier detection. To the best of our knowledge, this is the first work to use approximate computations to speed up outlier detection. Experiments on synthetic and real world data sets show that LOCI and aLOCI can automatically detect outliers and micro-clusters, without user-required cut-offs, and that they quickly spot both expected and unexpected outliers. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) DENSITY-BASED APPROACHES-ADVANTAGES, DISADVANTAGES, CHALLENGES, AND GAPS a: ADVANTAGES <s> Mining outliers in database is to find exceptional objects that deviate from the rest of the data set. Besides classical outlier analysis algorithms, recent studies have focused on mining local outliers, i.e., the outliers that have density distribution significantly different from their neighborhood. The estimation of density distribution at the location of an object has so far been based on the density distribution of its k-nearest neighbors [2,11]. However, when outliers are in the location where the density distributions in the neighborhood are significantly different, for example, in the case of objects from a sparse cluster close to a denser cluster, this may result in wrong estimation. To avoid this problem, here we propose a simple but effective measure on local outliers based on a symmetric neighborhood relationship. The proposed measure considers both neighbors and reverse neighbors of an object when estimating its density distribution. As a result, outliers so discovered are more meaningful. To compute such local outliers efficiently, several mining algorithms are developed that detects top-n outliers based on our definition. A comprehensive performance evaluation and analysis shows that our methods are not only efficient in the computation but also more effective in ranking outliers. <s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) DENSITY-BASED APPROACHES-ADVANTAGES, DISADVANTAGES, CHALLENGES, AND GAPS a: ADVANTAGES <s> Many outlier detection methods do not merely provide the decision for a single data object being or not being an outlier but give also an outlier score or "outlier factor" signaling "how much" the respective data object is an outlier. A major problem for any user not very acquainted with the outlier detection method in question is how to interpret this "factor" in order to decide for the numeric score again whether or not the data object indeed is an outlier. Here, we formulate a local density based outlier detection method providing an outlier "score" in the range of [0, 1] that is directly interpretable as a probability of a data object for being an outlier. <s> BIB005 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) DENSITY-BASED APPROACHES-ADVANTAGES, DISADVANTAGES, CHALLENGES, AND GAPS a: ADVANTAGES <s> We propose a new statistical approach to the problem of inlier-based outlier detection, i.e., finding outliers in the test set based on the training set consisting only of inliers. Our key idea is to use the ratio of training and test data densities as an outlier score. 
This approach is expected to have better performance even in high-dimensional problems since methods for directly estimating the density ratio without going through density estimation are available. Among various density ratio estimation methods, we employ the method called unconstrained least-squares importance fitting (uLSIF) since it is equipped with natural cross-validation procedures, allowing us to objectively optimize the value of tuning parameters such as the regularization parameter and the kernel width. Furthermore, uLSIF offers a closed-form solution as well as a closed-form formula for the leave-one-out error, so it is computationally very efficient and is scalable to massive datasets. Simulations with benchmark and real-world datasets illustrate the usefulness of the proposed approach. <s> BIB006
In density-based methods, the density estimates used are nonparametric: they do not rely on an assumed distribution to fit the data. Several density-based techniques BIB002 , BIB004 , BIB005 , BIB003 have served as fundamental baselines for many subsequent algorithms and have experimentally been shown to hold up well against modern methods, often outperforming competitors such as existing statistical and distance-based approaches , BIB001 , BIB006 . Since these methods analyze outliers through an object's neighborhood density BIB002 , BIB003 , they have an advantage in identifying crucial outliers missed by most other outlier detection methods, and they can efficiently rule out false outliers that lie near dense neighbors. They require only minimal prior knowledge (for instance, no assumed probability distribution) and typically only a single parameter to tune, and they are also known for their ability to compute local outliers efficiently.
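As a concrete illustration of the 'single tuning parameter' point, the sketch below runs an off-the-shelf LOF implementation (scikit-learn's LocalOutlierFactor) on toy data of our own; the neighborhood size is essentially the only knob:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
# A dense cluster, a sparse cluster, and two planted outliers.
X = np.vstack([
    rng.normal(0.0, 0.3, size=(100, 2)),
    rng.normal(5.0, 1.0, size=(50, 2)),
    np.array([[2.5, 2.5], [8.0, -2.0]]),
])

lof = LocalOutlierFactor(n_neighbors=20)   # k is the main parameter
labels = lof.fit_predict(X)                # -1 = outlier, 1 = inlier
scores = -lof.negative_outlier_factor_     # larger => more outlying

print("flagged:", np.where(labels == -1)[0])
print("largest scores:", np.sort(scores)[-5:])
```

Note that no distributional assumption is made anywhere: the score comes entirely from the neighborhood density contrast.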
Progress in Outlier Detection Techniques: A Survey <s> b: DISADVANTAGES, CHALLENGES, AND GAPS <s> For many KDD applications, such as detecting criminal activities in E-commerce, finding the rare instances or the outliers, can be more interesting than finding the common patterns. Existing work in outlier detection regards being an outlier as a binary property. In this paper, we contend that for many scenarios, it is more meaningful to assign to each object a degree of being an outlier. This degree is called the local outlier factor (LOF) of an object. It is local in that the degree depends on how isolated the object is with respect to the surrounding neighborhood. We give a detailed formal analysis showing that LOF enjoys many desirable properties. Using real-world datasets, we demonstrate that LOF can be used to find outliers which appear to be meaningful, but can otherwise not be identified with existing approaches. Finally, a careful performance evaluation of our algorithm confirms we show that our approach of finding local outliers can be practical. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> b: DISADVANTAGES, CHALLENGES, AND GAPS <s> High-dimensional data in Euclidean space pose special challenges to data mining algorithms. These challenges are often indiscriminately subsumed under the term ‘curse of dimensionality’, more concrete aspects being the so-called ‘distance concentration effect’, the presence of irrelevant attributes concealing relevant information, or simply efficiency issues. In about just the last few years, the task of unsupervised outlier detection has found new specialized solutions for tackling high-dimensional data in Euclidean space. These approaches fall under mainly two categories, namely considering or not considering subspaces (subsets of attributes) for the definition of outliers. The former are specifically addressing the presence of irrelevant attributes, the latter do consider the presence of irrelevant attributes implicitly at best but are more concerned with general issues of efficiency and effectiveness. Nevertheless, both types of specialized outlier detection algorithms tackle challenges specific to high-dimensional data. In this survey article, we discuss some important aspects of the ‘curse of dimensionality’ in detail and survey specialized algorithms for outlier detection from both categories. © 2012 Wiley Periodicals, Inc. Statistical Analysis and Data Mining, 2012 © 2012 Wiley Periodicals, Inc. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> b: DISADVANTAGES, CHALLENGES, AND GAPS <s> Anomaly detection is the process of identifying unexpected items or events in datasets, which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied on unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection, fraud detection as well as in the life science and medical domain. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. 
By publishing the source code and the datasets, this paper aims to be a new well-funded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, computational effort, the impact of parameter settings as well as the global/local anomaly detection behavior is outlined. As a conclusion, we give an advise on algorithm selection for typical real-world tasks. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> b: DISADVANTAGES, CHALLENGES, AND GAPS <s> A great deal of attention has been given to deep learning over the past several years, and new deep learning techniques are emerging with improved functionality. Many computer and network applications actively utilize such deep learning algorithms and report enhanced performance through them. In this study, we present an overview of deep learning methodologies, including restricted Bolzmann machine-based deep belief network, deep neural network, and recurrent neural network, as well as the machine learning techniques relevant to network anomaly detection. In addition, this article introduces the latest work that employed deep learning techniques with the focus on network anomaly detection through the extensive literature survey. We also discuss our local experiments showing the feasibility of the deep learning approach to network traffic analysis. <s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> b: DISADVANTAGES, CHALLENGES, AND GAPS <s> Anomaly detection is an important problem that has been well-studied within diverse research areas and application domains. The aim of this survey is two-fold, firstly we present a structured and comprehensive overview of research methods in deep learning-based anomaly detection. Furthermore, we review the adoption of these methods for anomaly across various application domains and assess their effectiveness. We have grouped state-of-the-art research techniques into different categories based on the underlying assumptions and approach adopted. Within each category we outline the basic anomaly detection technique, along with its variants and present key assumptions, to differentiate between normal and anomalous behavior. For each category, we present we also present the advantages and limitations and discuss the computational complexity of the techniques in real application domains. Finally, we outline open issues in research and challenges faced while adopting these techniques. <s> BIB005
Even though some density-based methods show improved performance, they are more complicated and, in most cases, computationally more expensive than statistical methods. They are sensitive to parameter settings, such as the size of the neighborhood, and several factors must be considered cautiously, which results in expensive computations. In regions of varying density, matters become more complicated and performance degrades. Because of their inherent complexity, and because their outlierness measures are not updated incrementally, some density-based algorithms, such as INFLO and MDEF-based LOCI, cannot handle data streams resourcefully and can be a poor choice for outlier detection in data stream scenarios. High-dimensional data are also challenging when the outlier scores are closely related to each other. To discuss further, Table 2 presents a summary of well-known density-based outlier detection algorithms, handpicked because of space limitations. We include the performance, the issues addressed, and the drawbacks of standard algorithms, and show the progress of how these algorithms have evolved. For one of the most popular density-based methods, LOF BIB001 , it is crucial to note that in an outlier detection setting where local outliers are not of interest, the algorithm can create many false alarms. More generally, since density-based methods are nonparametric, the sample size is considered too small for high-dimensional data spaces BIB002 ; additional re-sampling, to draw new samples, can be adopted to enhance performance. We also note that, since most density-based methods rely on nearest-neighbor computations, the choice of k is very significant for the evaluation of these algorithms. Finding the nearest neighbors in nearest-neighbor-based outlier detection algorithms usually incurs a computational cost of about $O(n^2)$. A rare exception is LOCI, where extending the radius r raises the complexity to $O(n^3)$, which makes it very slow for larger data sets. An improved version is aLOCI, whose faster runtime depends on the number of quad-trees utilized. Goldstein et al. BIB003 compared COF and LOF and found that LOF's spherical density estimation can be a poor choice for efficiently detecting outliers; COF estimates its local density by connecting the regular records with each other to address this drawback. INFLO yields improved outlier scores when clusters of different densities lie close to each other. Table 2 gives the remaining summary of key points for the different algorithms. Some learning-based techniques, such as subspace learning, can be computationally expensive, and discovering the subspaces relevant to the outliers is challenging. In areas like deep learning, the growing volume of data makes it a big challenge for traditional methods to scale well enough to detect outliers, so designing deep-learning outlier detection techniques that capture complex structures in large-scale data is crucial. In addition, the traditional manual process of extracting features from the data has many disadvantages; finding better ways to learn hierarchical, discriminative features from the data is therefore vital. The lack of an accurate representation of the boundary between normal and abnormal data also presents challenges for both traditional methods and deep-learning-based methods.
Addressing these challenges can be interesting future work. There are still few studies that use unsupervised deep architectures such as Long Short-Term Memory networks (LSTMs), Recurrent Neural Networks (RNNs), or Deep Belief Networks (DBNs) for outlier detection. For in-depth coverage and further references, we suggest the surveys of Chalapathy et al. BIB005 and Kwon et al. BIB004 .
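Many of the deep learning detectors referenced in these surveys score points by reconstruction error. The following is a minimal sketch of that idea only, using scikit-learn's MLPRegressor as a stand-in for a deep autoencoder; the architecture and the 1% threshold are our own assumptions, not taken from any of the cited works:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def reconstruction_scores(X, seed=0):
    """Train X -> X through a narrow bottleneck; points that the model
    reconstructs poorly are scored as more anomalous."""
    Xs = StandardScaler().fit_transform(X)
    ae = MLPRegressor(hidden_layer_sizes=(16, 2, 16), activation="tanh",
                      max_iter=2000, random_state=seed)
    ae.fit(Xs, Xs)                        # autoencoding: targets = inputs
    return ((ae.predict(Xs) - Xs) ** 2).sum(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
X[:5] += 6.0                              # plant a few gross outliers
err = reconstruction_scores(X)
print(np.where(err > np.quantile(err, 0.99))[0])  # likely includes 0..4
```

The bottleneck forces the network to learn the dominant structure of the data, so points off that structure cannot be reconstructed well; deep variants replace the small MLP with deeper encoders and recurrent layers for sequence data.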
Progress in Outlier Detection Techniques: A Survey <s> a: GAUSSIAN MIXTURE MODEL METHODS <s> For many KDD applications, such as detecting criminal activities in E-commerce, finding the rare instances or the outliers, can be more interesting than finding the common patterns. Existing work in outlier detection regards being an outlier as a binary property. In this paper, we contend that for many scenarios, it is more meaningful to assign to each object a degree of being an outlier. This degree is called the local outlier factor (LOF) of an object. It is local in that the degree depends on how isolated the object is with respect to the surrounding neighborhood. We give a detailed formal analysis showing that LOF enjoys many desirable properties. Using real-world datasets, we demonstrate that LOF can be used to find outliers which appear to be meaningful, but can otherwise not be identified with existing approaches. Finally, a careful performance evaluation of our algorithm confirms we show that our approach of finding local outliers can be practical. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> a: GAUSSIAN MIXTURE MODEL METHODS <s> Outlier detection is concerned with discovering exceptional behaviors of objects in data sets.It is becoming a growingly useful tool in applications such as credit card fraud detection, discovering criminal behaviors in e-commerce, identifying computer intrusion, detecting health problems, etc. In this paper, we introduce a connectivity-based outlier factor (COF) scheme that improves the effectiveness of an existing local outlier factor (LOF) scheme when a pattern itself has similar neighbourhood density as an outlier. We give theoretical and empirical analysis to demonstrate the improvement in effectiveness and the capability of the COF scheme in comparison with the LOF scheme. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> a: GAUSSIAN MIXTURE MODEL METHODS <s> Outlier detection is an integral part of data mining and has attracted much attention recently [M. Breunig et al., (2000)], [W. Jin et al., (2001)], [E. Knorr et al., (2000)]. We propose a new method for evaluating outlierness, which we call the local correlation integral (LOCI). As with the best previous methods, LOCI is highly effective for detecting outliers and groups of outliers (a.k.a. micro-clusters). In addition, it offers the following advantages and novelties: (a) It provides an automatic, data-dictated cutoff to determine whether a point is an outlier-in contrast, previous methods force users to pick cut-offs, without any hints as to what cut-off value is best for a given dataset. (b) It can provide a LOCI plot for each point; this plot summarizes a wealth of information about the data in the vicinity of the point, determining clusters, micro-clusters, their diameters and their inter-cluster distances. None of the existing outlier-detection methods can match this feature, because they output only a single number for each point: its outlierness score, (c) Our LOCI method can be computed as quickly as the best previous methods, (d) Moreover, LOCI leads to a practically linear approximate method, aLOCI (for approximate LOCI), which provides fast highly-accurate outlier detection. To the best of our knowledge, this is the first work to use approximate computations to speed up outlier detection. 
Experiments on synthetic and real world data sets show that LOCI and aLOCI can automatically detect outliers and micro-clusters, without user-required cut-offs, and that they quickly spot both expected and unexpected outliers. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> a: GAUSSIAN MIXTURE MODEL METHODS <s> Outlier detection has recently become an important problem in many data mining applications. In this paper, a novel unsupervised algorithm for outlier detection is proposed. First we apply a provably globally optimal Expectation Maximization (EM) algorithm to fit a Gaussian Mixture Model (GMM) to a given data set. In our approach, a Gaussian is centered at each data point, and hence, the estimated mixture proportions can be interpreted as probabilities of being a cluster center for all data points. The outlier factor at each data point is then defined as a weighted sum of the mixture proportions with weights representing the similarities to other data points. The proposed outlier factor is thus based on global properties of the data set. This is in contrast to most existing approaches to outlier detection, which are strictly local. Our experiments performed on several simulated and real life data sets demonstrate superior performance of the proposed approach. Moreover, we also demonstrate the ability to detect unusual shapes. <s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> a: GAUSSIAN MIXTURE MODEL METHODS <s> We utilize outlier detection by principal component analysis (PCA) as an effective step to automate snakes/active contours for object detection. The principle of our approach is straightforward: we allow snakes to evolve on a given image and classify them into desired object and non-object classes. To perform the classification, an annular image band around a snake is formed. The annular band is considered as a pattern image for PCA. Extensive experiments have been carried out on oil-sand and leukocyte images and the performance of the proposed method has been compared with two other automatic initialization and two gradient-based outlier detection techniques. Results show that the proposed algorithm improves the performance of automatic initialization techniques and validates snakes more accurately than other outlier detection methods, even when considerable object localization error is present. <s> BIB005 </s> Progress in Outlier Detection Techniques: A Survey <s> a: GAUSSIAN MIXTURE MODEL METHODS <s> Special Complex non-Gaussian processes may have dynamic operation scenario shifts so that the traditional Outlier detection approaches become ill-suited. This paper proposes a new outlier detection approach based on using subspace learning and Gaussian mixture model (GMM) in energy disaggregation. Locality preserving projections (LPP) of subspace learning can optimally preserve the neighborhood structure, reveal the intrinsic manifold structure of the data and keep outliers far away from the normal sample compared with the principal component analysis (PCA). The results show proposed approach can significantly improve performance of outlier detection in energy disaggregation, increase the fraction true-positive from 93.8% to 97%, decrease the fraction false-positive from 35.48% to 25.8%. <s> BIB006
The Gaussian model is one of the most prevalent statistical approaches used to detect outliers. In this model, the training phase uses maximum likelihood estimation (MLE) to estimate the mean and variance of the Gaussian distribution, and the test stage applies statistical discordancy tests (e.g., box-plot or mean-variance tests). Yang et al. BIB004 introduced an unsupervised outlier detection method based on a globally optimal exemplar-based GMM (Gaussian Mixture Model). In their technique, a provably globally optimal expectation maximization (EM) algorithm is first applied to fit the GMM to the given data set. The outlier factor of a data point is the sum of the mixture proportions weighted by the point's relationship to the other data points; at point $x_k$ it is defined as

$F_k = \sum_j s_{kj}\,\pi_j(t_h)$,

where $s_{kj}\pi_j(t_h)$ reflects the depth of point $x_k$'s influence on another point $x_j$, $s_{kj}$ is the connection strength, $t_h$ is the final EM iteration, and $\pi_j$ measures the significance of point $j$. The smaller $F_k$ is, the more likely the data point $x_k$ is to be flagged as an outlier. This technique contrasts with other existing methods BIB001 , BIB002 , BIB003 , which focus solely on local rather than global properties. By fitting the GMM at every data point of a given data set, the technique of Yang et al. BIB004 can also address the inability of clustering-based techniques to detect outliers in the presence of noisy data. We note that, notwithstanding the method's capacity to identify unusual shapes quickly, it has a high complexity ($O(n^3)$ for a single iteration and $O(Nn^3)$ for $N$ iterations); an algorithm that reduces this computational complexity would be more scalable and could be a future study. In 2015, for a more robust approach to outlier detection, the use of GMM with locality preserving projections was proposed by Tang et al. BIB006 , who combined GMM and subspace learning for robust outlier detection in energy disaggregation. In their approach, the locality preserving projection (LPP) of subspace learning is used to preserve the neighborhood structure efficiently and to reveal the intrinsic manifold structure of the data; outliers are kept far away from the normal samples, which is the reverse of the principal component analysis (PCA) based method of Saha et al. BIB005 . This study addresses the research gap of the previous methods, LOF BIB001 and COF BIB002 , which fail to detect outliers in multiple-state processes and multi-Gaussian states. In the experimental evaluation, the proposed method showed improved performance (an increase in true positives from 93.8% to 97% and a decrease in false positives from 35.48% to 25.8%); however, its computational complexity relative to other techniques is not reported in the literature.
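The exemplar-based algorithm itself is involved, but the core GMM idea, fitting a mixture and treating points of low estimated probability as outliers, can be sketched in a few lines. This illustrates the general approach only, not Yang et al.'s globally optimal EM; the component count and threshold are our own assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0, 1, size=(200, 2)),     # normal operating mode 1
    rng.normal(6, 1, size=(200, 2)),     # normal operating mode 2
    np.array([[3.0, 3.0], [-5.0, 7.0]]), # planted outliers
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
loglik = gmm.score_samples(X)            # per-point log-likelihood

# Flag the lowest-likelihood points (here: the bottom 1%) as outliers.
threshold = np.quantile(loglik, 0.01)
print(np.where(loglik < threshold)[0])
```

Standard EM, as used here, is only locally optimal and sensitive to initialization, which is precisely the weakness the globally optimal exemplar-based variant was designed to remove.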
Progress in Outlier Detection Techniques: A Survey <s> b: REGRESSION METHODS <s> In this paper, we present a new algorithm for detecting multiple outliers in linear regression. The algorithm is based on a non-iterative robust covariance matrix and concentration steps used in LTS estimation. A robust covariance matrix is constructed to calculate Mahalanobis distances of independent variables which are then used as weights in weighted least squares estimation. A few concentration steps are then performed using the observations that have smallest residuals. We generate random data sets for $n=10^3, 10^4, 10^5$ and $p=5,10$ to show up the capabilities of the algorithm. In our Monte Carlo simulations, it is shown that our algorithm has very low masking and swamping ratios when the number of observations is up to $10^4$ in the case of maximum contamination in X-Space. It is also shown that, the algorithm is successful in the case of Y-Space outliers when the contamination level, sample size and number of parameters are up to $30\%$, $n=10^5$, and $p=10$, respectively. Bias, variance and MSE statistics are calculated for different scenarios. The reported computation time of our implementation is quite short. It is concluded that the presented algorithm is suitable and applicable for detecting multiple outliers in regression analysis with its small masking and swamping ratios, accurate estimates of regression parameters except the intercept, and short computation time in large data sets and high level of contamination. A future work is required for reducing bias and variance of the intercept estimator in the model. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> b: REGRESSION METHODS <s> We present an improved outlier detection method using a regression model. A synthesized signal using the measurements of different sensors is applied for the estimation of the model parameters. The artificial and real dataset are used to verify the proposed method. The preliminary experiments show improvement in the regression-based outlier detection method. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> b: REGRESSION METHODS <s> Artificial Neural Networks provide models for a large class of natural and artificial phenomena that are difficult to handle using classical parametric techniques. They offer a potential solution to fit all the data, including any outliers, instead of removing them. This paper compares the predictive performance of linear and nonlinear models in outlier detection. The best-subsets regression algorithm for the selection of minimum variables in a linear regression model is used by removing predictors that are irrelevant to the task to be learned. Then, the ANN is trained by the Multi-Layer Perceptron to improve the classification and prediction of the linear model based on standard nonlinear functions which are inherent in ANNs. Comparison of linear and nonlinear models was carried out by analyzing the Receiver Operating Characteristic curves in terms of accuracy and misclassification rates for linear and nonlinear models. The results for linear and nonlinear models achieved 68% and 93%, respectively, with better fit for the nonlinear model. <s> BIB003
Detecting outliers using regression models is one of the most straightforward approaches to the outlier detection problem. The model chosen can be either linear or nonlinear, depending on the problem to be solved. When adopting this technique, the first (training) stage constructs a regression model that fits the data; the test stage then evaluates every data instance against the model. A data point is labeled an outlier when a remarkable deviation occurs between its actual value and the value predicted by the regression model. Over the years, standard regression-based approaches to outlier detection have included thresholding on the Mahalanobis distance, robust least squares with bi-square weights, mixture models, and an alternative variational Bayesian approach to regression . In contrast to these techniques, a different method was proposed by Satman BIB001 to detect outliers in linear regression. The algorithm is centered on a non-iterative robust covariance matrix and on concentration steps used in least trimmed squares estimation. It has the advantage of detecting multiple outliers in a short time, which keeps the computational cost low; however, since the intercept estimator still exhibits small systematic biases, a future study could aim to reduce its bias and variance. Park et al. BIB002 proposed another regression-based outlier detection technique, this time centered on sensor measurements: it uses a weighted-summation approach to build a synthesized independent variable from the observed values. Since the method was only tested in a single environment, proposing techniques that attain precise model estimation across different sensor settings and situations would be an interesting future study. Recently, in 2017, Dalatu et al. BIB003 carried out a comparative study of linear and nonlinear regression models for outlier detection by analyzing receiver operating characteristic (ROC) curves in terms of accuracy and misclassification rate. The study gives researchers insight into the predictive performance of the two kinds of regression models: the nonlinear models (93% accuracy) fit considerably better than the linear models (68% accuracy), which suggests that adopting a nonlinear model can be more effective in general situations.
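In its simplest linear form, the train-then-test recipe described above reduces to flagging large standardized residuals. Below is a minimal sketch of our own; the 3-sigma cut-off is a conventional choice, not taken from the cited papers:

```python
import numpy as np

def regression_outliers(x, y, z_cut=3.0):
    """Fit y ~ a*x + b by least squares and flag points whose
    standardized residual exceeds z_cut."""
    a, b = np.polyfit(x, y, deg=1)         # training stage: fit the model
    residuals = y - (a * x + b)            # test stage: actual - predicted
    z = (residuals - residuals.mean()) / residuals.std()
    return np.where(np.abs(z) > z_cut)[0]

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 200)
y[[10, 50]] += 8.0                         # plant two response-space outliers
print(regression_outliers(x, y))           # likely prints [10 50]
```

The robust variants cited above exist because plain least squares, as used here, is itself distorted by the outliers it is meant to find (masking and swamping); concentration steps and trimmed estimators mitigate exactly that.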
Progress in Outlier Detection Techniques: A Survey <s> 2) NON-PARAMETRIC METHODS <s> For many KDD applications, such as detecting criminal activities in E-commerce, finding the rare instances or the outliers, can be more interesting than finding the common patterns. Existing work in outlier detection regards being an outlier as a binary property. In this paper, we contend that for many scenarios, it is more meaningful to assign to each object a degree of being an outlier. This degree is called the local outlier factor (LOF) of an object. It is local in that the degree depends on how isolated the object is with respect to the surrounding neighborhood. We give a detailed formal analysis showing that LOF enjoys many desirable properties. Using real-world datasets, we demonstrate that LOF can be used to find outliers which appear to be meaningful, but can otherwise not be identified with existing approaches. Finally, a careful performance evaluation of our algorithm confirms we show that our approach of finding local outliers can be practical. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) NON-PARAMETRIC METHODS <s> Outlier detection is an integral part of data mining and has attracted much attention recently [M. Breunig et al., (2000)], [W. Jin et al., (2001)], [E. Knorr et al., (2000)]. We propose a new method for evaluating outlierness, which we call the local correlation integral (LOCI). As with the best previous methods, LOCI is highly effective for detecting outliers and groups of outliers (a.k.a. micro-clusters). In addition, it offers the following advantages and novelties: (a) It provides an automatic, data-dictated cutoff to determine whether a point is an outlier-in contrast, previous methods force users to pick cut-offs, without any hints as to what cut-off value is best for a given dataset. (b) It can provide a LOCI plot for each point; this plot summarizes a wealth of information about the data in the vicinity of the point, determining clusters, micro-clusters, their diameters and their inter-cluster distances. None of the existing outlier-detection methods can match this feature, because they output only a single number for each point: its outlierness score, (c) Our LOCI method can be computed as quickly as the best previous methods, (d) Moreover, LOCI leads to a practically linear approximate method, aLOCI (for approximate LOCI), which provides fast highly-accurate outlier detection. To the best of our knowledge, this is the first work to use approximate computations to speed up outlier detection. Experiments on synthetic and real world data sets show that LOCI and aLOCI can automatically detect outliers and micro-clusters, without user-required cut-offs, and that they quickly spot both expected and unexpected outliers. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) NON-PARAMETRIC METHODS <s> Outlier detection has recently become an important problem in many industrial and financial applications. In this paper, a novel unsupervised algorithm for outlier detection with a solid statistical foundation is proposed. First we modify a nonparametric density estimate with a variable kernel to yield a robust local density estimation. Outliers are then detected by comparing the local density of each point to the local density of its neighbors. Our experiments performed on several simulated data sets have demonstrated that the proposed approach can outperform two widely used outlier detection algorithms (LOF and LOCI). 
<s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) NON-PARAMETRIC METHODS <s> this paper, an attempt has been made to develop a statistical model for the sensor data stream, estimating density for distribution of data and flagging a particular value as an outlier in the best possible manner without compromising with the performance. A statistical modeling technique transforms the raw sensor readings into meaningful information which will yield effective output, hence offering a more reliable way to gain insight into the physical phenomena under observation. We have proposed a model that is based on the approximation of the sensor data distribution. Our approach takes into consideration various characteristics and features of streaming sensor data. We processed and evaluated our proposed scheme with a set of experiments with datasets which is taken from Intel Berkeley research lab. The experimental evaluation shows that our algorithm can achieve very high precision and recall rates for identifying outliers and demonstrate the effectiveness of the proposed approach. <s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) NON-PARAMETRIC METHODS <s> The probability density function (PDF) is an effective data model for a variety of stream mining tasks. As such, accurate estimates of the PDF are essential to reducing the uncertainties and errors associated with mining results. The nonparametric adaptive kernel density estimator (AKDE) provides accurate, robust, and asymptotically consistent estimates of a PDF. However, due to AKDE's extensive computational requirements, it cannot be directly applied to the data stream environment. This paper describes the development of an AKDE approximation approach that heeds the constraints of the data stream environment and supports efficient processing of multiple queries. To this end, this work proposes (1) the concept of local regions to provide a partition-based variable bandwidth to capture local density structures and enhance estimation quality; (2) a suite of linear-pass methods to construct the local regions and kernel objects online; (3) an efficient multiple queries evaluation algorithm; (4) a set of approximate techniques to increase the throughput of multiple density queries processing; and (5) a fixed-size memory time-based sliding window that updates the kernel objects in linear time. Comprehensive experiments were conducted with real-world and synthetic data sets to validate the effectiveness and efficiency of the approach. <s> BIB005 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) NON-PARAMETRIC METHODS <s> Based on the widely known kernel density estimator of the probability function, a new algorithm is proposed in order to detect outliers and to provide a robust estimation of location and scatter. With the help of the Gaussian Transform, a robust weighted kernel estimation of the density probability function is calculated, referring to the whole of the data, including the outliers. In the next step, the data points having the smallest values according to the robust pdf are removed as the least probable to belong to the clean data. The program based on this algorithm is more accurate even on greatly correlated outliers with the data, and even with outliers with small Euclidean distance from the data. In case the data have many variables, we can use Principal Component algorithm by (Introduction to Multivariate Statistical Analysis in Chemometrics. 
CRC press, 2008) [1] with the same efficiency in the detection of outliers. <s> BIB006 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) NON-PARAMETRIC METHODS <s> Multimedia networks hold the promise of facilitating large-scale, real-time data processing in complex environments. Their foreseeable applications will help protect and monitor military, environmental, safety-critical, or domestic infrastructures and resources. Cloud infrastructures promise to provide high performance and cost effective solutions to large scale data processing problems. This paper focused on the outlier detection over distributed data stream in real time, proposed kernel density estimation (KDE) based outlier detection algorithm KDEDisStrOut in Storm, firstly formalized the problem of outlier detection using the kernel density estimation technique and update the transported data incrementally between the child node and the coordinator node which reduces the communication cost. Then the paper adopted the exponential decay policy to keep pace with the transient and evolving natures of stream data and changed the weight of different data in the sliding window adaptively made the data analysis more reasonable. Theoretical analysis and experiments on Storm with synthetic and real data show that the KDEDisStrOut algorithm is efficient and effective compared with existing outlier detection algorithms, and more suitable for data streams. <s> BIB007 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) NON-PARAMETRIC METHODS <s> Big Data and cloud computing are complementary technological paradigms with a core focus on scalability, agility, and on-demand availability. The rise of cloud computing and cloud data stores have been a precursor and facilitator to the emergence of big data. Cloud computing turns traditional siloed computing assets into shared pools of resources that are based on an underlying internet foundation. As a result a number of enterprises are building efficient and agile cloud environments, and cloud providers continue to expand service offerings. Many cloud providers offer online collaboration service which is basically loosely-coupled in nature. Online anomaly detection aims to detect anomalies in data flowing in a streaming fashion. Such stream data is commonplace in today's cloud centric collaborations which enables participating domains to dynamically interoperate through sharing and accessing of information. Accordingly to forestall unauthorized disclosure of the shared resources and conceivable misappropriation, there is a need to identify anomalous access requests. To the best of our knowledge, the detection of anomalous access requests in cloud-based collaborations through non-parametric statistical technique has not been studied in earlier works. This paper proposes an online anomaly detection algorithm based on non-parametric statistical technique to detect anomalous access requests in cloud environment at runtime. <s> BIB008 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) NON-PARAMETRIC METHODS <s> This paper presents an unsupervised, density-based approach to anomaly detection. The purpose is to define a smooth yet effective measure of outlierness that can be used to detect anomalies in nonl ... <s> BIB009
Kernel Density Estimation Methods: Kernel density estimation (KDE) is a common non-parametric approach for detecting outliers BIB006 . An unsupervised approach to outlier detection using kernel functions was presented in BIB003 by Latecki et al., in which each point's local density is compared to the local density of its neighbors. In the experimental evaluation, the proposed technique achieved better detection performance than some popular density-based methods BIB001 , BIB002 in most cases; however, the method still lacks applicability to very large, high-dimensional real-life databases, which can be an extension of that study in the future. Later, Gao et al. proposed a better approach that addresses some of these shortcomings: to tackle inaccurate outlier detection in complex and large data sets, they adopted variable kernel density estimation, and to remove LOF's dependence on the parameter k, which measures the weight of the local neighborhood, they adopted a weighted neighborhood density estimate. Overall, their method shows improved performance and good scalability for large data sets, with less computational time than LOF and the method of Latecki et al. BIB003 . Kumar and Verma BIB004 use KDE to estimate the sensor data distribution in order to detect malicious nodes. Boedihardjo et al. BIB005 adopt a KDE-based approach in a data stream environment, despite the challenges of applying KDE methods directly in such a setting: they proposed an approximation of the adaptive kernel density estimator (AKDE) for robust and accurate estimates of the probability density function (PDF). Although the technique produces better estimation quality than the original KDE, it comes at a computational cost of $O(n^2)$; even so, it performs better in most respects than the original KDE, and the authors were able to meet the stringent constraints of this kind of environment. Further studies for multivariate streams can build on this work. KDE-based outlier detection methods show improved performance in some respects, but they are known for their extensive computational cost. Uddin et al. then used KDE for outlier detection in a different application area, the power grid. Zheng et al. BIB007 , in another study, use KDE on distributed streams in a multimedia network for detecting outliers. Smrithy et al. BIB008 proposed a non-parametric online outlier detection algorithm to detect outliers in big data streams. An adaptive kernel-density-based technique using the Gaussian kernel was also studied by Zhang et al. BIB009 for detecting anomalies in nonlinear systems. Qin et al. proposed a novel local outlier semantics that makes excellent use of KDE to detect local outliers from data streams effectively. Their work addresses the shortcoming of existing methods, which are ill-equipped for today's high-velocity data streams owing to their high complexity and their sensitivity to data updates. They designed KELOS, an approach for continuously identifying the top-N KDE-based local outliers over streaming data.
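Underneath all of these variants is the same scoring step: estimate a density at each point and rank the points by it. The following is a minimal leave-one-out sketch with a fixed-bandwidth Gaussian kernel; the bandwidth choice is our own, whereas the published methods above use adaptive or weighted bandwidths:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def kde_outlier_scores(X, bandwidth=0.5):
    """Leave-one-out KDE: score each point by the log-density that the
    remaining points assign to it. Low density => likely outlier."""
    n = len(X)
    scores = np.empty(n)
    for i in range(n):
        rest = np.delete(X, i, axis=0)
        kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(rest)
        scores[i] = kde.score_samples(X[i:i + 1])[0]
    return scores

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(150, 2)), [[6.0, 6.0]]])
s = kde_outlier_scores(X)
print(np.argsort(s)[:3])   # lowest-density points; index 150 should appear
```

The naive loop performs n density evaluations over n-1 points each, i.e., the quadratic cost that motivates the streaming approximations (AKDE, KELOS) discussed above.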
To conclude, one big setback of most KDE methods is that they usually suffer from high computational cost and the curse of dimensionality, which can make them unreliable in practice. Despite KDE's better performance compared with other non-parametric outlier detection approaches, relatively few studies adopt KDE-based techniques for this problem.
Progress in Outlier Detection Techniques: A Survey <s> 3) OTHER STATISTICAL METHODS <s> For many KDD applications, such as detecting criminal activities in E-commerce, finding the rare instances or the outliers, can be more interesting than finding the common patterns. Existing work in outlier detection regards being an outlier as a binary property. In this paper, we contend that for many scenarios, it is more meaningful to assign to each object a degree of being an outlier. This degree is called the local outlier factor (LOF) of an object. It is local in that the degree depends on how isolated the object is with respect to the surrounding neighborhood. We give a detailed formal analysis showing that LOF enjoys many desirable properties. Using real-world datasets, we demonstrate that LOF can be used to find outliers which appear to be meaningful, but can otherwise not be identified with existing approaches. Finally, a careful performance evaluation of our algorithm confirms we show that our approach of finding local outliers can be practical. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> 3) OTHER STATISTICAL METHODS <s> Outlier detection is concerned with discovering exceptional behaviors of objects in data sets.It is becoming a growingly useful tool in applications such as credit card fraud detection, discovering criminal behaviors in e-commerce, identifying computer intrusion, detecting health problems, etc. In this paper, we introduce a connectivity-based outlier factor (COF) scheme that improves the effectiveness of an existing local outlier factor (LOF) scheme when a pattern itself has similar neighbourhood density as an outlier. We give theoretical and empirical analysis to demonstrate the improvement in effectiveness and the capability of the COF scheme in comparison with the LOF scheme. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> 3) OTHER STATISTICAL METHODS <s> In this paper we propose a new definition of distance-based outlier that considers for each point the sum of the distances from its k nearest neighbors, called weight. Outliers are those points having the largest values of weight. In order to compute these weights, we find the k nearest neighbors of each point in a fast and efficient way by linearizing the search space through the Hilbert space filling curve. The algorithm consists of two phases, the first provides an approximated solution, within a small factor, after executing at most d + 1 scans of the data set with a low time complexity cost, where d is the number of dimensions of the data set. During each scan the number of points candidate to belong to the solution set is sensibly reduced. The second phase returns the exact solution by doing a single scan which examines further a little fraction of the data set. Experimental results show that the algorithm always finds the exact solution during the first phase after d ? d + 1 steps and it scales linearly both in the dimensionality and the size of the data set. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> 3) OTHER STATISTICAL METHODS <s> Outliers may provide useful information about the development and manufacturing process. Analysts use various statistical methods to evaluate outliers and to reduce their impact on the analysis. This article describes some of the more commonly used identification methods. 
<s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> 3) OTHER STATISTICAL METHODS <s> Mining outliers in database is to find exceptional objects that deviate from the rest of the data set. Besides classical outlier analysis algorithms, recent studies have focused on mining local outliers, i.e., the outliers that have density distribution significantly different from their neighborhood. The estimation of density distribution at the location of an object has so far been based on the density distribution of its k-nearest neighbors [2,11]. However, when outliers are in the location where the density distributions in the neighborhood are significantly different, for example, in the case of objects from a sparse cluster close to a denser cluster, this may result in wrong estimation. To avoid this problem, here we propose a simple but effective measure on local outliers based on a symmetric neighborhood relationship. The proposed measure considers both neighbors and reverse neighbors of an object when estimating its density distribution. As a result, outliers so discovered are more meaningful. To compute such local outliers efficiently, several mining algorithms are developed that detects top-n outliers based on our definition. A comprehensive performance evaluation and analysis shows that our methods are not only efficient in the computation but also more effective in ranking outliers. <s> BIB005 </s> Progress in Outlier Detection Techniques: A Survey <s> 3) OTHER STATISTICAL METHODS <s> We propose a new statistical approach to the problem of inlier-based outlier detection, i.e., finding outliers in the test set based on the training set consisting only of inliers. Our key idea is to use the ratio of training and test data densities as an outlier score. This approach is expected to have better performance even in high-dimensional problems since methods for directly estimating the density ratio without going through density estimation are available. Among various density ratio estimation methods, we employ the method called unconstrained least-squares importance fitting (uLSIF) since it is equipped with natural cross-validation procedures, allowing us to objectively optimize the value of tuning parameters such as the regularization parameter and the kernel width. Furthermore, uLSIF offers a closed-form solution as well as a closed-form formula for the leave-one-out error, so it is computationally very efficient and is scalable to massive datasets. Simulations with benchmark and real-world datasets illustrate the usefulness of the proposed approach. <s> BIB006 </s> Progress in Outlier Detection Techniques: A Survey <s> 3) OTHER STATISTICAL METHODS <s> When analyzing data, outlying observations cause problems because they may strongly influence the result. Robust statistics aims at detecting the outliers by searching for the model fitted by the majority of the data. We present an overview of several robust methods and outlier detection tools. We discuss robust procedures for univariate, low-dimensional, and high-dimensional data such as estimation of location and scatter, linear regression, principal component analysis, and classification.
<s> BIB007 </s> Progress in Outlier Detection Techniques: A Survey <s> 3) OTHER STATISTICAL METHODS <s> Unsupervised anomaly detection is the process of finding outliers in data sets without prior training. In this paper, a histogram-based outlier detection (HBOS) algorithm is presented, which scores records in linear time. It assumes independence of the features making it much faster than multivariate approaches at the cost of less precision. A comparative evaluation on three UCI data sets and 10 standard algorithms shows that it can detect global outliers as reliably as state-of-the-art algorithms, but it performs poorly on local outlier problems. HBOS is in our experiments up to 5 times faster than clustering based algorithms and up to 7 times faster than nearest-neighbor based methods. <s> BIB008 </s> Progress in Outlier Detection Techniques: A Survey <s> 3) OTHER STATISTICAL METHODS <s> Automatically identifying that a certain page in a set of documents is printed with a different printer than the rest of the documents can give an important clue for a possible forgery attempt. Different printers vary in their produced printing quality, which is especially noticeable at the edges of printed characters. In this paper, a system using the difference in edge roughness to distinguish laser printed pages from inkjet printed pages is presented. Several feature extraction methods have been developed and evaluated for that purpose. In contrast to previous work, this system uses unsupervised anomaly detection to detect documents printed by a different printing technique than the majority of the documents among a set. This approach has the advantage that no prior training using genuine documents has to be done. Furthermore, we created a dataset featuring 1200 document images from different domains (invoices, contracts, scientific papers) printed by 7 different inkjet and 13 laser printers. Results show that the presented feature extraction method achieves the best outlier rank score in comparison to state-of-the-art features. <s> BIB009 </s> Progress in Outlier Detection Techniques: A Survey <s> 3) OTHER STATISTICAL METHODS <s> With the rapid expansion of data scale, big data mining and analysis has attracted increasing attention. Outlier detection as an important task of data mining is widely used in many applications. However, conventional outlier detection methods have difficulty handling large-scale datasets. In addition, most of them typically can only identify global outliers and are over sensitive to parameters variation. In this paper, we propose a robust method for robust local outlier detection with statistical parameters, which incorporates the clustering-based ideas in dealing with big data. Firstly, this method finds some density peaks of the dataset by the 3σ standard. Secondly, each remaining data object in the dataset is assigned to the same cluster as its nearest neighbor of higher density. Finally, we use Chebyshev's inequality and density peak reachability to identify local outliers of each group.
The experimental results demonstrate the efficiency and accuracy of the proposed method in identifying both global and local outliers. Moreover, the method also proved more robust than typical outlier detection methods, such as LOF and DBSCAN. <s> BIB010 </s> Progress in Outlier Detection Techniques: A Survey <s> 3) OTHER STATISTICAL METHODS <s> An integrated framework for density-based cluster analysis, outlier detection, and data visualization is introduced in this article. The main module consists of an algorithm to compute hierarchical estimates of the level sets of a density, following Hartigan's classic model of density-contour clusters and trees. Such an algorithm generalizes and improves existing density-based clustering techniques with respect to different aspects. It provides as a result a complete clustering hierarchy composed of all possible density-based clusters following the nonparametric model adopted, for an infinite range of density thresholds. The resulting hierarchy can be easily processed so as to provide multiple ways for data visualization and exploration. It can also be further postprocessed so that: (i) a normalized score of "outlierness" can be assigned to each data object, which unifies both the global and local perspectives of outliers into a single definition; and (ii) a "flat" (i.e., nonhierarchical) clustering solution composed of clusters extracted from local cuts through the cluster tree (possibly corresponding to different density thresholds) can be obtained, either in an unsupervised or in a semisupervised way. In the unsupervised scenario, the algorithm corresponding to this postprocessing module provides a global, optimal solution to the formal problem of maximizing the overall stability of the extracted clusters. If partially labeled objects or instance-level constraints are provided by the user, the algorithm can solve the problem by considering both constraints violations/satisfactions and cluster stability criteria. An asymptotic complexity analysis, both in terms of running time and memory space, is described. Experiments are reported that involve a variety of synthetic and real datasets, including comparisons with state-of-the-art, density-based clustering and (global and local) outlier detection methods. <s> BIB011
Many statistical approaches have been proposed; among the more straightforward for identifying outliers are the histogram BIB008 and statistical tests BIB004 such as the boxplot, the trimmed mean, the Extreme Studentized Deviate test, and the Dixon-type test BIB004 . The trimmed mean is comparatively more resistant to outliers than the others, while the Extreme Studentized Deviate test is the appropriate choice for identifying single outliers. The Dixon-type test has the advantage of performing well on small sample sizes because it does not need to assume normality of the data. Barnett et al. discuss several tests for optimizing different distribution models to detect outliers effectively; the optimization can depend on the actual parameters of the conforming distributions, that is, the expected space for outliers and the number of outliers. Rousseeuw and Hubert BIB007 also give a broader discussion of statistical techniques for outlier detection. Using a histogram-based approach, Goldstein and Dengel BIB008 proposed the Histogram-Based Outlier Score (HBOS) algorithm, which uses static and dynamic bin-width histograms to model univariate feature densities; these histograms are then used to calculate the outlier score of each data instance. Although the algorithm showed improved performance on some metrics, such as computational speed, when compared to popular OD methods like LOF BIB001 , COF BIB002 , and INFLO BIB005 , it falls short on local outlier detection problems because the proposed density estimation cannot model local outliers. Hido et al. BIB006 proposed a new statistical approach for inlier-based outlier detection problems using direct density ratio estimation. The main idea is to use the ratio of the training and test data densities as the outlier score. The method of unconstrained least-squares importance fitting (uLSIF) was applied because it comes with natural cross-validation procedures that allow the tuning parameters, such as the kernel width and the regularization parameter, to be optimized accurately. Compared to non-parametric KDE, the proposed technique has the advantage of avoiding the hard density estimation computation. The method also showed improved accuracy; even though it did not outperform the other methods in all cases, it proved more efficient from a broader perspective. Improving the accuracy of the density ratio estimation is an important direction for future work on this approach. Du et al. BIB010 proposed another robust technique with statistical parameters for the local outlier detection problem, called Robust Local Outlier Detection (RLOD). The study was motivated by the fact that most OD methods focus on identifying global outliers, and many of them BIB009 , BIB003 are very sensitive to parameter changes. The framework has three stages. In the first stage, the authors find density peaks in the dataset using the 3σ standard. In the second stage, each remaining data object is assigned to the same cluster as its nearest neighbor of higher density. In the final stage, Chebyshev's inequality and density peak reachability are used to recognize the local outliers in each group.
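To make the statistical ingredients of RLOD BIB010 concrete, the fragment below illustrates a 3σ rule and Chebyshev's inequality in isolation; it is only a sketch of those two building blocks, not the authors' algorithm, whose density-peak clustering and reachability steps are omitted here.

```python
import numpy as np

def three_sigma_flags(values):
    """Flag values lying more than 3 standard deviations from the mean
    (the 3-sigma rule used to pick out unusually high densities)."""
    mu, sigma = values.mean(), values.std()
    return np.abs(values - mu) > 3 * sigma

def chebyshev_tail_bound(k):
    """Chebyshev's inequality: P(|X - mu| >= k*sigma) <= 1/k**2 for any
    distribution with finite variance, so no normality is assumed."""
    return 1.0 / k**2

# Example: points beyond 3*sigma carry at most ~11% probability mass,
# whatever the group's distribution.
print(chebyshev_tail_bound(3))  # 0.111...
```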
The method detects both local and global outliers, as in the technique of Campello et al. BIB011 , and the authors showed experimentally that it outperforms other methods BIB001 in terms of running time and detection rate. They recommend further work on improving efficiency through robust distributed and parallel computing. Other studies have also applied statistical methods to outlier computation. Table 3 summarizes the progress of some key algorithms mentioned above.
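Returning to the histogram approach of Goldstein and Dengel BIB008 discussed above, a rough sketch of the idea follows: one static-width histogram per feature, an assumption of feature independence, and a score that sums negative log bin densities. The bin count of 10 and the small density floor are assumptions of this illustration; the actual HBOS also supports dynamic bin widths.

```python
import numpy as np

def hbos_like_score(X, n_bins=10):
    """Simplified HBOS-like score: per-feature static-width histograms,
    feature independence assumed; higher score = more outlying."""
    X = np.asarray(X, dtype=float)
    scores = np.zeros(len(X))
    for j in range(X.shape[1]):
        hist, edges = np.histogram(X[:, j], bins=n_bins, density=True)
        # Locate each sample's bin (interior edges only, so every value
        # falls into one of the n_bins bins, including the extremes).
        idx = np.clip(np.digitize(X[:, j], edges[1:-1]), 0, n_bins - 1)
        density = np.maximum(hist[idx], 1e-12)  # avoid log(0) in empty bins
        scores += -np.log(density)              # sum over independent features
    return scores
```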
Progress in Outlier Detection Techniques: A Survey <s> Advantages <s> Outlier detection has recently become an important problem in many data mining applications. In this paper, a novel unsupervised algorithm for outlier detection is proposed. First we apply a provably globally optimal Expectation Maximization (EM) algorithm to fit a Gaussian Mixture Model (GMM) to a given data set. In our approach, a Gaussian is centered at each data point, and hence, the estimated mixture proportions can be interpreted as probabilities of being a cluster center for all data points. The outlier factor at each data point is then defined as a weighted sum of the mixture proportions with weights representing the similarities to other data points. The proposed outlier factor is thus based on global properties of the data set. This is in contrast to most existing approaches to outlier detection, which are strictly local. Our experiments performed on several simulated and real life data sets demonstrate superior performance of the proposed approach. Moreover, we also demonstrate the ability to detect unusual shapes. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> Advantages <s> We utilize outlier detection by principal component analysis (PCA) as an effective step to automate snakes/active contours for object detection. The principle of our approach is straightforward: we allow snakes to evolve on a given image and classify them into desired object and non-object classes. To perform the classification, an annular image band around a snake is formed. The annular band is considered as a pattern image for PCA. Extensive experiments have been carried out on oil-sand and leukocyte images and the performance of the proposed method has been compared with two other automatic initialization and two gradient-based outlier detection techniques. Results show that the proposed algorithm improves the performance of automatic initialization techniques and validates snakes more accurately than other outlier detection methods, even when considerable object localization error is present. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> Advantages <s> We propose a new statistical approach to the problem of inlier-based outlier detection, i.e., finding outliers in the test set based on the training set consisting only of inliers. Our key idea is to use the ratio of training and test data densities as an outlier score. This approach is expected to have better performance even in high-dimensional problems since methods for directly estimating the density ratio without going through density estimation are available. Among various density ratio estimation methods, we employ the method called unconstrained least-squares importance fitting (uLSIF) since it is equipped with natural cross-validation procedures, allowing us to objectively optimize the value of tuning parameters such as the regularization parameter and the kernel width. Furthermore, uLSIF offers a closed-form solution as well as a closed-form formula for the leave-one-out error, so it is computationally very efficient and is scalable to massive datasets. Simulations with benchmark and real-world datasets illustrate the usefulness of the proposed approach. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> Advantages <s> Unsupervised anomaly detection is the process of finding outliers in data sets without prior training.
In this paper, a histogram-based outlier detection (HBOS) algorithm is presented, which scores records in linear time. It assumes independence of the features making it much faster than multivariate approaches at the cost of less precision. A comparative evaluation on three UCI data sets and 10 standard algorithms shows that it can detect global outliers as reliably as state-of-the-art algorithms, but it performs poorly on local outlier problems. HBOS is in our experiments up to 5 times faster than clustering based algorithms and up to 7 times faster than nearest-neighbor based methods. <s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> Advantages <s> Special complex non-Gaussian processes may have dynamic operation scenario shifts so that the traditional outlier detection approaches become ill-suited. This paper proposes a new outlier detection approach based on using subspace learning and Gaussian mixture model (GMM) in energy disaggregation. Locality preserving projections (LPP) of subspace learning can optimally preserve the neighborhood structure, reveal the intrinsic manifold structure of the data and keep outliers far away from the normal sample compared with the principal component analysis (PCA). The results show the proposed approach can significantly improve performance of outlier detection in energy disaggregation, increasing the fraction of true positives from 93.8% to 97% and decreasing the fraction of false positives from 35.48% to 25.8%. <s> BIB005 </s> Progress in Outlier Detection Techniques: A Survey <s> Advantages <s> With the rapid expansion of data scale, big data mining and analysis has attracted increasing attention. Outlier detection as an important task of data mining is widely used in many applications. However, conventional outlier detection methods have difficulty handling large-scale datasets. In addition, most of them typically can only identify global outliers and are over sensitive to parameters variation. In this paper, we propose a robust method for robust local outlier detection with statistical parameters, which incorporates the clustering-based ideas in dealing with big data. Firstly, this method finds some density peaks of the dataset by the 3σ standard. Secondly, each remaining data object in the dataset is assigned to the same cluster as its nearest neighbor of higher density. Finally, we use Chebyshev's inequality and density peak reachability to identify local outliers of each group. The experimental results demonstrate the efficiency and accuracy of the proposed method in identifying both global and local outliers. Moreover, the method also proved more robust than typical outlier detection methods, such as LOF and DBSCAN. <s> BIB006
i. They are mathematically justifiable and, once the models are built, evaluation is fast, because most models are kept in a compact form and perform well given the probabilistic model. ii. The models generally fit quantitative real-valued data sets or some quantitative ordinal data distributions; ordinal data can be transformed to suitable values for processing, which improves processing time for complex data. iii. They are easy to implement, even though limited to specific problems. Disadvantages, Challenges, And Gaps: i. Because parametric models depend on the assumption of a distribution model, the quality of the results is often unreliable in practical situations and applications, owing to the lack of prior knowledge about the underlying distribution. ii. Since most models apply to a univariate feature space, they are typically not applicable in multidimensional scenarios. In outlier detection problems, outlier-free data are important for building reliable systems, because outliers can drastically affect system efficiency; it is therefore prudent to identify and remove those that degrade the system's accuracy. Most of the drawbacks of statistical methods center on detection accuracy, the lack of efficient techniques for very large data sets, the curse of dimensionality, and computational cost. Statistical methods can be effective in the outlier detection process when the correct distribution model is captured. In some real-life situations, for instance in sensor stream distributions, there is no prior knowledge available to be learned; when the data do not follow the predetermined distribution, such methods may become impractical. Non-parametric methods are therefore often more appealing, since they do not depend on assumed distribution characteristics. This also holds for big data streams, where the data distribution cannot be assumed. For outliers dispersed evenly across a dataset, applying statistical techniques becomes complicated. Consequently, parametric methods are not applicable to big data streams, whereas non-parametric methods are. In addition, defining the threshold of a standard distribution to differentiate the outliers carries a high probability of inaccurate labeling. In the parametric case, using Gaussian mixture models, a point worth noting is the daunting task of adapting Gaussian techniques to compute outliers in high-dimensional data subspaces and in data streams. The method of Yang et al. BIB001 , for instance, has high complexity; an algorithm that reduces this computational complexity would be more scalable. Regression techniques are likewise not suited to high-dimensional subspace data. For a more efficient and robust solution to finding and discovering outliers, robust regression is more appropriate than ordinary regression, because outliers can distort the latter. In the non-parametric case, KDE performs better in most situations, despite its sensitivity to outliers and the difficulty of obtaining a good estimate of the nominal data density in polluted data sets; on multivariate data it scales well and is computationally inexpensive. Histogram models work well for univariate data but are not suitable for multivariate data, because they cannot capture the interactions between the different attributes.
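Referring back to the Gaussian mixture discussion above, the sketch below is a minimal parametric illustration (not the globally optimal EM procedure of Yang et al. BIB001 ): it fits a two-component GMM and flags the lowest-likelihood points. The component count and the 1% cut-off are assumed values for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = np.vstack([
    rng.normal(0, 1, size=(300, 2)),   # cluster 1
    rng.normal(6, 1, size=(300, 2)),   # cluster 2
    [[20.0, -15.0]],                   # an injected outlier
])

# Fit a 2-component GMM; the number of components is an assumed value.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Points with low likelihood under the fitted mixture are candidate outliers.
log_likelihood = gmm.score_samples(X)
threshold = np.quantile(log_likelihood, 0.01)   # flag the lowest 1%
print(np.where(log_likelihood <= threshold)[0])
```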
Some statistical techniques are poorly suited to recent kinds of data and application areas; nevertheless, they remain practical approaches to outlier detection problems. The method of Tang et al. BIB005 , compared with the PCA method BIB002 , gives a robust improvement for outlier and noise detection. HBOS BIB004 achieves a computational speed exceeding even some clustering-based algorithms and other types of algorithms (LOCI, LOF, INFLO), making it suitable for large-scale, near real-time applications, while the method of Hido et al. BIB003 is more scalable to massive data sets and that of Du et al. BIB006 offers a more robust analysis.
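The inlier-based scoring idea of Hido et al. BIB003 can be caricatured as follows. Note that uLSIF estimates the density ratio directly without density estimation, whereas this sketch, purely for illustration, forms the ratio from two KDE fits; the bandwidth is an assumed value.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def density_ratio_scores(X_train, X_test, bandwidth=0.5):
    """Conceptual inlier score w(x) ~ p_train(x) / p_test(x).
    Low scores mark test points that are unlikely under the inlier
    (training) model. uLSIF fits this ratio directly; the two KDEs
    here are only an illustrative stand-in."""
    kde_train = KernelDensity(bandwidth=bandwidth).fit(X_train)
    kde_test = KernelDensity(bandwidth=bandwidth).fit(X_test)
    log_ratio = kde_train.score_samples(X_test) - kde_test.score_samples(X_test)
    return np.exp(log_ratio)
```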
Progress in Outlier Detection Techniques: A Survey <s> C. DISTANCE-BASED APPROACHES <s> This paper deals with finding outliers (exceptions) in large, multidimensional datasets. The identification of outliers can lead to the discovery of truly unexpected knowledge in areas such as electronic commerce, credit card fraud, and even the analysis of performance statistics of professional athletes. Existing methods that we have seen for finding outliers in large datasets can only deal efficiently with two dimensions/attributes of a dataset. Here, we study the notion of DB- (Distance-Based) outliers. While we provide formal and empirical evidence showing the usefulness of DB-outliers, we focus on the development of algorithms for computing such outliers. First, we present two simple algorithms, both having a complexity of O(kN^2), k being the dimensionality and N being the number of objects in the dataset. These algorithms readily support datasets with many more than two attributes. Second, we present an optimized cell-based algorithm that has a complexity that is linear wrt N, but exponential wrt k. Third, for datasets that are mainly disk-resident, we present another version of the cell-based algorithm that guarantees at most 3 passes over a dataset. We provide <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> C. DISTANCE-BASED APPROACHES <s> This paper deals with finding outliers (exceptions) in large, multidimensional datasets. The identification of outliers can lead to the discovery of truly unexpected knowledge in areas such as electronic commerce, credit card fraud, and even the analysis of performance statistics of professional athletes. Existing methods that we have seen for finding outliers can only deal efficiently with two dimensions/attributes of a dataset. In this paper, we study the notion of DB (distance-based) outliers. Specifically, we show that (i) outlier detection can be done efficiently for large datasets, and for k-dimensional datasets with large values of k (e.g., $k \ge 5$); and (ii), outlier detection is a meaningful and important knowledge discovery task. First, we present two simple algorithms, both having a complexity of $O(k \: N^2)$, k being the dimensionality and N being the number of objects in the dataset. These algorithms readily support datasets with many more than two attributes. Second, we present an optimized cell-based algorithm that has a complexity that is linear with respect to N, but exponential with respect to k. We provide experimental results indicating that this algorithm significantly outperforms the two simple algorithms for $k \leq 4$. Third, for datasets that are mainly disk-resident, we present another version of the cell-based algorithm that guarantees at most three passes over a dataset. Again, experimental results show that this algorithm is by far the best for $k \leq 4$. Finally, we discuss our work on three real-life applications, including one on spatio-temporal data (e.g., a video surveillance application), in order to confirm the relevance and broad applicability of DB outliers. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> C. DISTANCE-BASED APPROACHES <s> A distance-based outlier detection method that finds the top outliers in an unlabeled data set and provides a subset of it, called outlier detection solving set, that can be used to predict the outlierness of new unseen objects, is proposed.
The solving set includes a sufficient number of points that permits the detection of the top outliers by considering only a subset of all the pairwise distances from the data set. The properties of the solving set are investigated, and algorithms for computing it, with subquadratic time requirements, are proposed. Experiments on synthetic and real data sets to evaluate the effectiveness of the approach are presented. A scaling analysis of the solving set size is performed, and the false positive rate, that is, the fraction of new objects misclassified as outliers using the solving set instead of the overall data set, is shown to be negligible. Finally, to investigate the accuracy in separating outliers from inliers, ROC analysis of the method is accomplished. Results obtained show that using the solving set instead of the data set guarantees a comparable quality of the prediction, but at a lower computational cost. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> C. DISTANCE-BASED APPROACHES <s> This paper presents a k-nearest neighbors (kNN) method to detect outliers in large-scale traffic data collected daily in every modern city. Outliers include hardware and data errors as well as abnormal traffic behaviors. The proposed kNN method detects outliers by exploiting the relationship among neighborhoods in data points. The farther a data point is beyond its neighbors, the more possible the data is an outlier. Traffic data here was recorded in a video format, and converted to spatial-temporal (ST) traffic signals by statistics. The ST signals are then transformed to a two-dimensional (2D) (x, y)-coordinate plane by Principal Component Analysis (PCA) for dimension reduction. The distance-based kNN method is evaluated by unsupervised and semi-supervised approaches. The semi-supervised approach reaches 96.19% accuracy. <s> BIB004
Distance-based methods detect outliers by computing the distances between points: a data point that lies far from its nearest neighbors is regarded as an outlier. The most commonly used distance-based outlier detection definitions are centered on the concept of the local neighborhood, the k-nearest neighbor (KNN) BIB004 , and the traditional distance threshold. One of the earliest studies on computing distance-based outliers, by Knorr and Ng BIB001 , defined them as follows: Definition: In a dataset T, an object O is a DB(p, D)-outlier if at least a fraction p of the objects in T lies at a distance greater than D from O. Other well-known definitions of distance-based outliers, given a distance measure on a feature space, define outliers as: i. Points with fewer than p different samples within the distance d BIB002 . ii. The top n examples whose distance to the kth nearest neighbor is the greatest . iii. The top n examples whose average distance to the k nearest neighbors is the greatest BIB003 . The abbreviation DB(p, D) denotes a Distance-Based outlier detected using the parameters p and D. DB outlier detection methods are moderate non-parametric approaches that scale well to large data sets of medium to high dimensionality. Compared with statistical techniques, they tend to have a more robust foundation and are more flexible and computationally efficient. In the subsequent sections, we classify the distance-based methods into the following groups: distance-based computation using k-nearest neighbors, pruning techniques, and data-stream-related works. A brute-force sketch of the DB(p, D) definition is given below, after which we describe some of the most commonly used distance-based approaches.
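The brute-force reading of the DB(p, D) definition might look as follows; the O(N^2) scan and the example values of p and D are illustrative, and practical algorithms replace the scan with cell or index structures.

```python
import numpy as np

def db_outliers(X, p=0.95, D=2.0):
    """Brute-force DB(p, D) outliers (after Knorr and Ng): an object is
    an outlier if at least a fraction p of all other objects lies
    farther than distance D from it. O(N^2); for illustration only."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    flags = []
    for i in range(n):
        dists = np.linalg.norm(X - X[i], axis=1)
        far = np.count_nonzero(dists > D)   # the point itself has distance 0
        flags.append(far >= p * (n - 1))
    return np.array(flags)
```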
Progress in Outlier Detection Techniques: A Survey <s> 1) K-NEAREST NEIGHBOR METHODS <s> This paper deals with finding outliers (exceptions) in large, multidimensional datasets. The identification of outliers can lead to the discovery of truly unexpected knowledge in areas such as electronic commerce, credit card fraud, and even the analysis of performance statistics of professional athletes. Existing methods that we have seen for finding outliers in large datasets can only deal efficiently with two dimensions/attributes of a dataset. Here, we study the notion of DB- (Distance-Based) outliers. While we provide formal and empirical evidence showing the usefulness of DB-outliers, we focus on the development of algorithms for computing such outliers. First, we present two simple algorithms, both having a complexity of O(kN^2), k being the dimensionality and N being the number of objects in the dataset. These algorithms readily support datasets with many more than two attributes. Second, we present an optimized cell-based algorithm that has a complexity that is linear wrt N, but exponential wrt k. Third, for datasets that are mainly disk-resident, we present another version of the cell-based algorithm that guarantees at most 3 passes over a dataset. We provide <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) K-NEAREST NEIGHBOR METHODS <s> This paper deals with finding outliers (exceptions) in large, multidimensional datasets. The identification of outliers can lead to the discovery of truly unexpected knowledge in areas such as electronic commerce, credit card fraud, and even the analysis of performance statistics of professional athletes. Existing methods that we have seen for finding outliers can only deal efficiently with two dimensions/attributes of a dataset. In this paper, we study the notion of DB (distance-based) outliers. Specifically, we show that (i) outlier detection can be done efficiently for large datasets, and for k-dimensional datasets with large values of k (e.g., $k \ge 5$); and (ii), outlier detection is a meaningful and important knowledge discovery task. First, we present two simple algorithms, both having a complexity of $O(k \: N^2)$, k being the dimensionality and N being the number of objects in the dataset. These algorithms readily support datasets with many more than two attributes. Second, we present an optimized cell-based algorithm that has a complexity that is linear with respect to N, but exponential with respect to k. We provide experimental results indicating that this algorithm significantly outperforms the two simple algorithms for $k \leq 4$. Third, for datasets that are mainly disk-resident, we present another version of the cell-based algorithm that guarantees at most three passes over a dataset. Again, experimental results show that this algorithm is by far the best for $k \leq 4$. Finally, we discuss our work on three real-life applications, including one on spatio-temporal data (e.g., a video surveillance application), in order to confirm the relevance and broad applicability of DB outliers. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) K-NEAREST NEIGHBOR METHODS <s> For many KDD applications, such as detecting criminal activities in E-commerce, finding the rare instances or the outliers can be more interesting than finding the common patterns. Existing work in outlier detection regards being an outlier as a binary property.
In this paper, we contend that for many scenarios, it is more meaningful to assign to each object a degree of being an outlier. This degree is called the local outlier factor (LOF) of an object. It is local in that the degree depends on how isolated the object is with respect to the surrounding neighborhood. We give a detailed formal analysis showing that LOF enjoys many desirable properties. Using real-world datasets, we demonstrate that LOF can be used to find outliers which appear to be meaningful, but can otherwise not be identified with existing approaches. Finally, a careful performance evaluation of our algorithm confirms that our approach of finding local outliers can be practical. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) K-NEAREST NEIGHBOR METHODS <s> Outlier detection is concerned with discovering exceptional behaviors of objects in data sets. It is becoming an increasingly useful tool in applications such as credit card fraud detection, discovering criminal behaviors in e-commerce, identifying computer intrusion, detecting health problems, etc. In this paper, we introduce a connectivity-based outlier factor (COF) scheme that improves the effectiveness of an existing local outlier factor (LOF) scheme when a pattern itself has similar neighbourhood density as an outlier. We give theoretical and empirical analysis to demonstrate the improvement in effectiveness and the capability of the COF scheme in comparison with the LOF scheme. <s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) K-NEAREST NEIGHBOR METHODS <s> A distance-based outlier detection method that finds the top outliers in an unlabeled data set and provides a subset of it, called outlier detection solving set, that can be used to predict the outlierness of new unseen objects, is proposed. The solving set includes a sufficient number of points that permits the detection of the top outliers by considering only a subset of all the pairwise distances from the data set. The properties of the solving set are investigated, and algorithms for computing it, with subquadratic time requirements, are proposed. Experiments on synthetic and real data sets to evaluate the effectiveness of the approach are presented. A scaling analysis of the solving set size is performed, and the false positive rate, that is, the fraction of new objects misclassified as outliers using the solving set instead of the overall data set, is shown to be negligible. Finally, to investigate the accuracy in separating outliers from inliers, ROC analysis of the method is accomplished. Results obtained show that using the solving set instead of the data set guarantees a comparable quality of the prediction, but at a lower computational cost. <s> BIB005 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) K-NEAREST NEIGHBOR METHODS <s> Defining outliers by their distance to neighboring data points has been shown to be an effective non-parametric approach to outlier detection. In recent years, many research efforts have looked at developing fast distance-based outlier detection algorithms. Several of the existing distance-based outlier detection algorithms report log-linear time performance as a function of the number of data points on many real low-dimensional datasets. However, these algorithms are unable to deliver the same level of performance on high-dimensional datasets, since their scaling behavior is exponential in the number of dimensions.
In this paper, we present RBRP, a fast algorithm for mining distance-based outliers, particularly targeted at high-dimensional datasets. RBRP scales log-linearly as a function of the number of data points and linearly as a function of the number of dimensions. Our empirical evaluation demonstrates that we outperform the state-of-the-art algorithm, often by an order of magnitude. <s> BIB006 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) K-NEAREST NEIGHBOR METHODS <s> Outlier detection has recently become an important problem in many data mining applications. In this paper, a novel unsupervised algorithm for outlier detection is proposed. First we apply a provably globally optimal Expectation Maximization (EM) algorithm to fit a Gaussian Mixture Model (GMM) to a given data set. In our approach, a Gaussian is centered at each data point, and hence, the estimated mixture proportions can be interpreted as probabilities of being a cluster center for all data points. The outlier factor at each data point is then defined as a weighted sum of the mixture proportions with weights representing the similarities to other data points. The proposed outlier factor is thus based on global properties of the data set. This is in contrast to most existing approaches to outlier detection, which are strictly local. Our experiments performed on several simulated and real life data sets demonstrate superior performance of the proposed approach. Moreover, we also demonstrate the ability to detect unusual shapes. <s> BIB007 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) K-NEAREST NEIGHBOR METHODS <s> Detecting outliers which are grossly different from or inconsistent with the remaining dataset is a major challenge in real-world KDD applications. Existing outlier detection methods are ineffective on scattered real-world datasets due to implicit data patterns and parameter setting issues. We define a novel Local Distance-based Outlier Factor (LDOF) to measure the outlier-ness of objects in scattered datasets which addresses these issues. LDOF uses the relative location of an object to its neighbours to determine the degree to which the object deviates from its neighbourhood. We present theoretical bounds on LDOF's false-detection probability. Experimentally, LDOF compares favorably to classical KNN and LOF based outlier detection. In particular it is less sensitive to parameter values. <s> BIB008 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) K-NEAREST NEIGHBOR METHODS <s> We propose an original outlier detection schema that detects outliers in varying subspaces of a high dimensional feature space. In particular, for each object in the data set, we explore the axis-parallel subspace spanned by its neighbors and determine how much the object deviates from the neighbors in this subspace. In our experiments, we show that our novel subspace outlier detection is superior to existing full-dimensional approaches and scales well to high dimensional databases. <s> BIB009 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) K-NEAREST NEIGHBOR METHODS <s> In this paper, we present a new algorithm for detecting multiple outliers in linear regression. The algorithm is based on a non-iterative robust covariance matrix and concentration steps used in LTS estimation. A robust covariance matrix is constructed to calculate Mahalanobis distances of independent variables which are then used as weights in weighted least squares estimation.
A few concentration steps are then performed using the observations that have smallest residuals. We generate random data sets for $n=10^3, 10^4, 10^5$ and $p=5,10$ to show up the capabilities of the algorithm. In our Monte Carlo simulations, it is shown that our algorithm has very low masking and swamping ratios when the number of observations is up to $10^4$ in the case of maximum contamination in X-Space. It is also shown that the algorithm is successful in the case of Y-Space outliers when the contamination level, sample size and number of parameters are up to $30\%$, $n=10^5$, and $p=10$, respectively. Bias, variance and MSE statistics are calculated for different scenarios. The reported computation time of our implementation is quite short. It is concluded that the presented algorithm is suitable and applicable for detecting multiple outliers in regression analysis with its small masking and swamping ratios, accurate estimates of regression parameters except the intercept, and short computation time in large data sets and high level of contamination. A future work is required for reducing bias and variance of the intercept estimator in the model. <s> BIB010 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) K-NEAREST NEIGHBOR METHODS <s> Based on local information: local density and local uncertainty level, a new outlier detection algorithm is designed in this paper to calculate uncertain local outlier factor (ULOF) for each point in an uncertain dataset. In this algorithm, all concepts, definitions and formulations for conventional local outlier detection approach (LOF) are generalized to include uncertainty information. The least squares algorithm on multi-times curve fitting is used to generate an approximate probability density function of distance between two points. An iteration algorithm is proposed to evaluate K–η–distance and a pruning strategy is adopted to reduce the size of candidate set of nearest-neighbors. The comparison between ULOF algorithm and the state-of-the-art approaches has been made. Results of several experiments on synthetic and real data sets demonstrate the effectiveness of the proposed approach. <s> BIB011 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) K-NEAREST NEIGHBOR METHODS <s> We propose a new approach for outlier detection, based on a ranking measure that focuses on the question of whether a point is 'central' for its nearest neighbours. Using our notations, a low cumulative rank implies that the point is central. For instance, a point centrally located in a cluster has a relatively low cumulative sum of ranks because it is among the nearest neighbours of its own nearest neighbours, but a point at the periphery of a cluster has a high cumulative sum of ranks because its nearest neighbours are closer to each other than the point. Use of ranks eliminates the problem of density calculation in the neighbourhood of the point and this improves the performance. Our method performs better than several density-based methods on some synthetic data sets as well as on some real data sets. <s> BIB012 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) K-NEAREST NEIGHBOR METHODS <s> Use of neighborhood rank-difference for outlier score. Dynamic (dataset specific) k for construction of influence/decision space. High rank-power for both synthetic and real datasets. Presence of outliers critically affects many pattern classification tasks.
In this paper, we propose a novel dynamic outlier detection method based on neighborhood rank difference. In particular, reverse and the forward nearest neighbor rank difference is employed to capture the variations in densities of a test point with respect to various training points. In the first step of our method, we determine the influence space for a given dataset. A score for outlierness is proposed in the second step using the rank difference as well as the absolute density within this influence space. Experiments on synthetic and some UCI machine learning repository datasets clearly indicate the supremacy of our method over some recently published approaches. <s> BIB013 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) K-NEAREST NEIGHBOR METHODS <s> This paper presents a k-nearest neighbors (kNN) method to detect outliers in large-scale traffic data collected daily in every modern city. Outliers include hardware and data errors as well as abnormal traffic behaviors. The proposed kNN method detects outliers by exploiting the relationship among neighborhoods in data points. The farther a data point is beyond its neighbors, the more possible the data is an outlier. Traffic data here was recorded in a video format, and converted to spatial-temporal (ST) traffic signals by statistics. The ST signals are then transformed to a two-dimensional (2D) (x, y)-coordinate plane by Principal Component Analysis (PCA) for dimension reduction. The distance-based kNN method is evaluated by unsupervised and semi-supervised approaches. The semi-supervised approach reaches 96.19% accuracy. <s> BIB014 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) K-NEAREST NEIGHBOR METHODS <s> Today's real-world databases typically contain millions of items with many thousands of fields. As a result, traditional distribution-based outlier detection techniques have more and more restricted capabilities and novel k-nearest neighbors based approaches have become more and more popular. However, the problems with these k-nearest neighbors based methods are that they are very sensitive to the value of k, may have different rankings for top n outliers, are very computationally expensive for large datasets, and doubts exist in general whether they would work well for high dimensional datasets. To partially circumvent these problems, we propose in this paper a new global outlier factor and a new local outlier factor and an efficient outlier detection algorithm developed upon them that is easy to implement and can provide competing performances with existing solutions. Experiments performed on both synthetic and real data sets demonstrate the efficacy of our method. A new k-nearest neighbors (kNN) based outlier detection scheme is proposed. It is built upon two new MST-inspired outlier scores, a global one and a local one. A set of state-of-the-art outlier detectors are applied to some high dimensional data. A fast approximate kNN search algorithm is used to accelerate the mining process. The proposed method can provide competing performances with existing solutions.
<s> BIB015 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) K-NEAREST NEIGHBOR METHODS <s> Outlier detection in high-dimensional data presents various challenges resulting from the "curse of dimensionality." A prevailing view is that distance concentration, i.e., the tendency of distances in high-dimensional data to become indiscernible, hinders the detection of outliers by making distance-based methods label all points as almost equally good outliers. In this paper, we provide evidence supporting the opinion that such a view is too simple, by demonstrating that distance-based methods can produce more contrasting outlier scores in high-dimensional settings. Furthermore, we show that high dimensionality can have a different impact, by reexamining the notion of reverse nearest neighbors in the unsupervised outlier-detection context. Namely, it was recently observed that the distribution of points' reverse-neighbor counts becomes skewed in high dimensions, resulting in the phenomenon known as hubness. We provide insight into how some points (antihubs) appear very infrequently in $k$-NN lists of other points, and explain the connection between antihubs, outliers, and existing unsupervised outlier-detection methods. By evaluating the classic $k$-NN method, the angle-based technique designed for high-dimensional data, the density-based local outlier factor and influenced outlierness methods, and antihub-based methods on various synthetic and real-world data sets, we offer novel insight into the usefulness of reverse neighbor counts in unsupervised outlier detection. <s> BIB016 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) K-NEAREST NEIGHBOR METHODS <s> Recent research studies on outlier detection have focused on examining the nearest neighbor structure of a data object to measure its outlierness degree. This leads to two weaknesses: the size of nearest neighborhood, which should be predetermined, greatly affects the final detection results, and the outlierness scores produced by existing methods are not sufficiently diverse to allow precise ranking of outliers. To overcome these problems, in this research paper, a novel outlier detection method involving an iterative random sampling procedure is proposed. The proposed method is inspired by the simple notion that outlying objects are less easily selected than inlying objects in blind random sampling, and therefore, more inlierness scores are given to selected objects. We develop a new measure called the observability factor (OF) by utilizing this idea. In order to offer a heuristic guideline to determine the best size of nearest neighborhood, we additionally propose using the entropy of OF scores. An intensive numerical evaluation based on various synthetic and real-world datasets shows the superiority and effectiveness of the proposed method. <s> BIB017 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) K-NEAREST NEIGHBOR METHODS <s> Outlier detection is an important task in data mining with numerous applications, including credit card fraud detection, video surveillance, etc. Although many outlier detection algorithms have been proposed, most of them face a serious problem: it is very difficult to select an appropriate parameter when they run on a dataset. In this paper we use the method of Natural Neighbor to adaptively obtain the parameter, named Natural Value.
We also propose a novel notion, the Natural Outlier Factor (NOF), to measure the outliers and provide the algorithm based on Natural Neighbor (NaN) that does not require any parameters to compute the NOF of the objects in the database. The formal analysis and experiments show that this method can achieve good performance in outlier detection. <s> BIB018 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) K-NEAREST NEIGHBOR METHODS <s> A local density-based approach for outlier detection is proposed. The theoretical properties of the proposed outlierness score are derived. Three types of nearest neighbors are presented. This paper presents a simple and effective density-based outlier detection approach with local kernel density estimation (KDE). A Relative Density-based Outlier Score (RDOS) is introduced to measure local outlierness of objects, in which the density distribution at the location of an object is estimated with a local KDE method based on extended nearest neighbors of the object. Instead of using only k nearest neighbors, we further consider reverse nearest neighbors and shared nearest neighbors of an object for density distribution estimation. Some theoretical properties of the proposed RDOS including its expected value and false alarm probability are derived. A comprehensive experimental study on both synthetic and real-life data sets demonstrates that our approach is more effective than state-of-the-art outlier detection methods. <s> BIB019
Using these methods to compute outliers has been one of the most popular approaches among researchers; it is not the same as k-nearest neighbor classification, and these methods are mostly used for detecting global outliers. First, the k nearest neighbors of every record are found, and these neighbors are then used to compute an outlier score. The methods mainly examine a given object's neighborhood to determine whether it is close to its neighbors or lies in a low-density region; the key concept is to exploit the neighborhood information to detect the outliers. Knorr and Ng BIB001 and Ramaswamy et al. were among the first to propose techniques for detecting outliers in large data sets, marking significant progress over the then state-of-the-art. Knorr and Ng BIB001 proposed a non-parametric approach which, in contrast to some previous statistical techniques BIB007 , BIB010 , assumes that users lack knowledge about the underlying distribution. The index-based and nested-loop-based algorithms were the two algorithms proposed, with a computational complexity of O(kN^2), where k is the dimensionality and N the number of objects in the dataset. Later, Ramaswamy et al. proposed a cell-based algorithm that is linear with respect to N and exponential with respect to k, optimizing the previous algorithm BIB001 ; its computational complexity is lower than that of the two earlier methods. Ramaswamy et al. sought to remedy several shortcomings of BIB001 , such as the need to specify the distance, the ranking method adopted, and the computational cost. To address the problem of determining the distance, their approach does not require users to specify the distance parameter but instead adopts the distance to the kth nearest neighbor. In the expanded version of BIB001 , spatial indexing structures such as the KD-tree, X-tree, and R-tree are used to find the nearest neighbors of each candidate BIB002 : the index structure is queried for the closest k points of each example and, in line with the outlier definition, the top n candidates are finally selected. One main concern with this method is that the index structures break down as the dimensionality increases. Angiulli et al. BIB005 depart somewhat from the traditional approach of developing techniques that detect outliers in a given input dataset, towards one that learns a model and predicts outliers in incoming data. The detection process finds the top n outliers of a given dataset, that is, the n objects with the highest weight; an unseen object is predicted to be an outlier by checking whether its weight is greater than or equal to the nth highest weight. This process has O(n^2) complexity. Ghoting et al. BIB006 proposed an algorithm called Recursive Binning and Re-Projection (RBRP) to increase the computational speed on high-dimensional datasets and to improve on the drawbacks of previous methods BIB001 , . The key difference from the earlier algorithms is that it supports fast identification of a point's approximate nearest neighbors; for its purposes, only the approximate nearest neighbors matter for efficiency. It scales linearly as a function of the number of dimensions and log-linearly in the number of data points.
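For concreteness, the ranking used by Ramaswamy et al. — order points by the distance to their kth nearest neighbor and report the top n — can be sketched as below; the brute-force distance matrix stands in for the paper's index- and partition-based algorithms, and the values of k and n are assumptions of the example.

```python
import numpy as np

def top_n_knn_outliers(X, k=5, n=10):
    """Rank points by the distance to their k-th nearest neighbor
    (Ramaswamy et al.-style) and return the indices of the top-n.
    Brute-force O(N^2) distances; real systems use spatial indexes."""
    X = np.asarray(X, dtype=float)
    diff = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diff ** 2).sum(-1))     # pairwise Euclidean distances
    np.fill_diagonal(dists, np.inf)          # ignore self-distance
    kth_dist = np.sort(dists, axis=1)[:, k - 1]
    return np.argsort(kth_dist)[::-1][:n]    # largest k-th-NN distances first
```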
One key difference from other methods is that, instead of the exact nearest neighbors, approximate nearest neighbors are used, which makes the computation faster. In 2009, rather than following the trend of detecting global outliers with distance-based computation, researchers turned to local outlier detection. Zhang et al. BIB008 proposed a local distance-based outlier detection method called the Local Distance-based Outlier Factor (LDOF). Their study shows improved performance over a range of neighborhood sizes when compared to LOF BIB003 . Its pairwise distance computation costs O(k^2), similar to COF BIB004 . It is comparable in performance to the k-nearest neighbor outlier detection techniques, but less sensitive to parameter values. Liu et al. BIB011 , in a later study, extended the traditional LOF to uncertain data. Huang et al. BIB012 proposed a method called the Rank-Based Detection Algorithm (RBDA) to rank the neighbors; it provides a feasible way to keep proximity meaningful for high-dimensional data. For illustration, in BIB009 the fundamental assumption is that objects are close to each other, or share similar neighbors, when they are produced by the same mechanism. RBDA uses the ranks of nearby individual objects as the degree of proximity of an object; it does not take into consideration the objects' distance information with respect to their neighbors. Bhattacharya et al. BIB013 propose a method that further uses both the ranks of the nearest neighbors and the reverse nearest neighbors, ensuring that each candidate object's outlier score is measured effectively. In another study, Dang et al. BIB014 applied k-nearest neighbors to detect outliers in large-scale traffic data collected daily in modern cities. Outliers are detected by exploiting the relationship among neighborhoods; an outlier here is a data point that lies far from its neighbors. They reported success detection rates of 95.5% and 96.2%, respectively, outperforming statistical approaches such as KDE (95%) and GMM (80.9%). A shortcoming of their work, however, is that only a single distance-based metric was considered; a future study with more elaborate variations, such as different weights on multiple distances, could improve the outlier detection rate. In another study, to improve the effectiveness of the KNN neighbor search, Wang et al. BIB015 applied a minimum spanning tree. Radovanović et al. BIB016 presented a reverse nearest neighbor approach to tackle one of the biggest challenges in computing outliers in high-dimensional data sets, namely the ''curse of dimensionality.'' They showed that their approach can be applied effectively in both low- and high-dimensional settings, and, compared with the original KNN method , it improved the detection rate. Their primary focus is the influence of high dimensionality and the hubness phenomenon; an antihub technique was then proposed, which improved the contrast between outlier scores. Huang et al. BIB018 implemented the concept of the natural neighbor to acquire neighborhood information. Ha et al. BIB017 proposed a heuristic approach that determines a suitable value for k by employing iterative random sampling. Most recently, Tang et al. BIB019 proposed a method that determines outlier scores with local KDE.
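A small sketch of the LDOF score of Zhang et al. BIB008 mentioned above follows: the score divides a point's average distance to its k nearest neighbors by the average pairwise distance among those neighbors, so values well above 1 suggest outlierness. The brute-force neighbor search and the default k are assumptions of this illustration.

```python
import numpy as np

def ldof_scores(X, k=10):
    """LDOF(x) = (mean distance from x to its k NNs) /
                 (mean pairwise distance among those k NNs).
    Scores well above 1 indicate that x deviates from its neighborhood."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    diff = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)                # exclude self-distance
    scores = np.empty(n)
    for i in range(n):
        nn = np.argsort(dists[i])[:k]              # k nearest neighbors of x
        d_x = dists[i, nn].mean()                  # mean distance to neighbors
        inner = dists[np.ix_(nn, nn)]              # neighbor-to-neighbor dists
        inner = inner[np.isfinite(inner)]          # drop the inf diagonal
        scores[i] = d_x / inner.mean()
    return scores
```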
It examines different types of neighborhoods, including the reverse nearest neighbors, the shared nearest neighbors, and the k nearest neighbors. Neighbor-based detection methods are independent of the data distribution model and are easy to understand and interpret; however, they are sensitive to parameter settings and sometimes deficient in performance.
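To make the scoring scheme described above concrete, the following minimal sketch (our own illustration, assuming Python with NumPy; the function names are hypothetical and not taken from the cited works) ranks every point by the distance to its kth nearest neighbor, in the spirit of the definition adopted by Ramaswamy et al., and reports the top n points as outliers. The brute-force pairwise-distance step mirrors the O(kN^2) nested-loop scheme discussed earlier; the index-based and approximate-neighbor methods above exist precisely to avoid it.

```python
import numpy as np

def knn_outlier_scores(X, k):
    """Score each point by the distance to its k-th nearest neighbor;
    larger scores indicate stronger (global) outliers."""
    diff = X[:, None, :] - X[None, :, :]          # brute force: all O(N^2) pairs
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)                # a point is not its own neighbor
    dist.sort(axis=1)                             # ascending distances per row
    return dist[:, k - 1]                         # distance to the k-th neighbor

def top_n_outliers(X, k, n):
    """Indices of the n points with the highest k-NN distance score."""
    return np.argsort(knn_outlier_scores(X, k))[::-1][:n]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    X[:3] += 8.0                                  # plant three obvious outliers
    print(top_n_outliers(X, k=5, n=3))            # expected: 0, 1, 2 in some order
```

The same skeleton also illustrates why these methods are called global: a point's score depends only on absolute distances, not on how dense its local region is compared with the regions of its neighbors.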
Progress in Outlier Detection Techniques: A Survey <s> 2) PRUNING METHODS <s> This paper deals with finding outliers (exceptions) in large, multidimensional datasets. The identification of outliers can lead to the discovery of truly unexpected knowledge in areas such as electronic commerce, credit card fraud, and even the analysis of performance statistics of professional athletes. Existing methods that we have seen for finding outliers in large datasets can only deal efficiently with two dimensions/attributes of a dataset. Here, we study the notion of DB- (Distance-Based) outliers. While we provide formal and empirical evidence showing the usefulness of DB-outliers, we focus on the development of algorithms for computing such outliers. First, we present two simple algorithms, both having a complexity of $O(k \: N^2)$, k being the dimensionality and N being the number of objects in the dataset. These algorithms readily support datasets with many more than two attributes. Second, we present an optimized cell-based algorithm that has a complexity that is linear wrt N, but exponential wrt k. Third, for datasets that are mainly disk-resident, we present another version of the cell-based algorithm that guarantees at most 3 passes over a dataset. We provide <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) PRUNING METHODS <s> This paper deals with finding outliers (exceptions) in large, multidimensional datasets. The identification of outliers can lead to the discovery of truly unexpected knowledge in areas such as electronic commerce, credit card fraud, and even the analysis of performance statistics of professional athletes. Existing methods that we have seen for finding outliers can only deal efficiently with two dimensions/attributes of a dataset. In this paper, we study the notion of DB (distance-based) outliers. Specifically, we show that (i) outlier detection can be done efficiently for large datasets, and for k-dimensional datasets with large values of k (e.g., $k \ge 5$); and (ii), outlier detection is a meaningful and important knowledge discovery task. First, we present two simple algorithms, both having a complexity of $O(k \: N^2)$, k being the dimensionality and N being the number of objects in the dataset. These algorithms readily support datasets with many more than two attributes. Second, we present an optimized cell-based algorithm that has a complexity that is linear with respect to N, but exponential with respect to k. We provide experimental results indicating that this algorithm significantly outperforms the two simple algorithms for $k \leq 4$. Third, for datasets that are mainly disk-resident, we present another version of the cell-based algorithm that guarantees at most three passes over a dataset. Again, experimental results show that this algorithm is by far the best for $k \leq 4$. Finally, we discuss our work on three real-life applications, including one on spatio-temporal data (e.g., a video surveillance application), in order to confirm the relevance and broad applicability of DB outliers. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) PRUNING METHODS <s> Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task.
We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) PRUNING METHODS <s> "One person's noise is another person's signal". Outlier detection is used to clean up datasets and also to discover useful anomalies, such as criminal activities in electronic commerce, computer intrusion attacks, terrorist threats, agricultural pest infestations, etc. Thus, outlier detection is critically important in the information-based society. This paper focuses on finding outliers in large datasets using distance-based methods. First, to speedup outlier detections, we revise Knorr and Ng's distance-based outlier definition; second, a vertical data structure, instead of traditional horizontal structures, is adopted to facilitate efficient outlier detection further. We tested our methods against national hockey league dataset and show an order of magnitude of speed improvement compared to the contemporary distance-based outlier detection approaches. <s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) PRUNING METHODS <s> In this work a novel algorithm, named DOLPHIN, for detecting distance-based outliers is presented. The proposed algorithm performs only two sequential scans of the dataset. It needs to store into main memory a portion of the dataset, to efficiently search for neighbors and early prune inliers. The strategy pursued by the algorithm allows to keep this portion very small. Both theoretical justification and empirical evidence that the size of the stored data amounts only to a few percent of the dataset are provided. Another important feature of DOLPHIN is that the memory-resident data are indexed by using a suitable proximity search approach. This allows to search for nearest neighbors looking only at a small subset of the main memory stored data. Temporal and spatial cost analysis show that the novel algorithm achieves both near linear CPU and I/O cost. DOLPHIN has been compared with state of the art methods, showing that it outperforms existing ones. <s> BIB005 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) PRUNING METHODS <s> Outlier detection finds many applications, especially in domains that have scope for abnormal behavior. In this paper, we present a new technique for detecting distance-based outliers, aimed at reducing execution time associated with the detection process. Our approach operates in two phases and employs three pruning rules. In the first phase, we partition the data into clusters, and make an early estimate on the lower bound of outlier scores. Based on this lower bound, the second phase then processes relevant clusters using the traditional block nested-loop algorithm. Here two efficient pruning rules are utilized to quickly discard more non-outliers and reduce the search space. Detailed analysis of our approach shows that the additional overhead of the first phase is offset by the reduction in cost of the second phase. 
We also demonstrate the superiority of our approach over existing distance-based outlier detection methods by extensive empirical studies on real datasets. <s> BIB006
Bay et al. BIB003 presented a nested-loop algorithm that uses randomization and a simple pruning rule. By modifying the basic nested-loop algorithm, which is recognized for its quadratic O(N^2) performance, they were able to obtain near-linear running time on most of the data sets on which the previous method BIB001 behaved quadratically. However, the algorithm rests on several assumptions, such as the data being in random order, and its performance degrades when these do not hold. Since most previous research BIB002 , BIB001 was unable to meet the demand of simultaneously minimizing both the CPU cost and the I/O cost, Angiulli et al. BIB005 presented a novel algorithm called Detecting OutLiers PusHing data into an Index (DOLPHIN) to address these challenges. The proposed algorithm performs only two sequential scans of the data set, whereas that of BIB003 implements a block-nested-loop analysis of the disk pages, which results in a quadratic input and output cost. Ren et al. BIB004 presented an improved version of the technique of Ramaswamy et al. , a vertical distance-based outlier detection method that detects outliers in large data sets by also applying pruning and a ''by-neighbor'' labeling technique. In their study, as an alternative to the traditional horizontal structures, a vertical structure is adopted to facilitate efficient outlier detection. The technique is implemented in two phases (with and without pruning) using P-Trees for the outlier detection. According to the authors, a future direction is to explore the use of P-Trees in other outlier detection methods, such as the density-based approach. In another work, Vu et al. BIB006 introduced the MultI-Rule Outlier (MIRO) approach, which, similarly to BIB004 , uses pruning to speed up the detection of outliers.
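The randomization-and-pruning idea of Bay et al. BIB003 can be sketched as follows (a simplified, point-at-a-time rendition under hypothetical names, assuming Python with NumPy; the original processes the data in blocks and assumes the data arrive in random order). The cutoff is the weakest score in the running top-n list, and a candidate is abandoned as soon as k neighbors within the cutoff have been found:

```python
import numpy as np

def pruning_top_n_outliers(X, k, n, seed=0):
    """Randomized nested loop with a pruning cutoff (Bay-style sketch).
    Scores points by the distance to their k-th nearest neighbor and
    returns the top-n (score, original_index) pairs."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))       # scan the data in random order
    cutoff = 0.0                          # weakest score in the current top-n
    top = []                              # [(score, original_index), ...]

    for i in order:
        neigh = []                        # k smallest distances found so far
        pruned = False
        for j in order:
            if i == j:
                continue
            d = float(np.linalg.norm(X[i] - X[j]))
            neigh.append(d)
            neigh.sort()
            del neigh[k:]                 # keep only the k closest distances
            # k neighbors already within the cutoff: cannot be a top-n outlier
            if len(neigh) == k and neigh[-1] <= cutoff:
                pruned = True
                break
        if not pruned and len(neigh) == k:
            top.append((neigh[-1], int(i)))
            top.sort(reverse=True)        # best (largest) scores first
            del top[n:]
            if len(top) == n:
                cutoff = top[-1][0]       # raise the bar for later candidates
    return top
```

In the worst case this is still quadratic, but once the cutoff rises, most non-outliers are pruned after only a few distance computations, which is where the near-linear average-case behavior reported in BIB003 comes from.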
Progress in Outlier Detection Techniques: A Survey <s> 3) IN DATA STREAMS <s> Sensor networks have recently found many popular applications in a number of different settings. Sensors at different locations can generate streaming data, which can be analyzed in real-time to identify events of interest. In this paper, we propose a framework that computes in a distributed fashion an approximation of multi-dimensional data distributions in order to enable complex applications in resource-constrained sensor networks.We motivate our technique in the context of the problem of outlier detection. We demonstrate how our framework can be extended in order to identify either distance- or density-based outliers in a single pass over the data, and with limited memory requirements. Experiments with synthetic and real data show that our method is efficient and accurate, and compares favorably to other proposed techniques. We also demonstrate the applicability of our technique to other related problems in sensor networks. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> 3) IN DATA STREAMS <s> In this work a method for detecting distance-based outliers in data streams is presented. We deal with the sliding window model, where outlier queries are performed in order to detect anomalies in the current window. Two algorithms are presented. The first one exactly answers outlier queries, but has larger space requirements. The second algorithm is directly derived from the exact one, has limited memory requirements and returns an approximate answer based on accurate estimations with a statistical guarantee. Several experiments have been accomplished, confirming the effectiveness of the proposed approach and the high quality of approximate solutions. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> 3) IN DATA STREAMS <s> This work proposes a method for detecting distance-based outliers in data streams under the sliding window model. The novel notion of one-time outlier query is introduced in order to detect anomalies in the current window at arbitrary points-in-time. Three algorithms are presented. The first algorithm exactly answers to outlier queries, but has larger space requirements than the other two. The second algorithm is derived from the exact one, reduces memory requirements and returns an approximate answer based on estimations with a statistical guarantee. The third algorithm is a specialization of the approximate algorithm working with strictly fixed memory requirements. Accuracy properties and memory consumption of the algorithms have been theoretically assessed. Moreover experimental results have confirmed the effectiveness of the proposed approach and the good quality of the solutions. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> 3) IN DATA STREAMS <s> The discovery of complex patterns such as clusters, outliers, and associations from huge volumes of streaming data has been recognized as critical for many domains. However, pattern detection with sliding window semantics, as required by applications ranging from stock market analysis to moving object tracking remains largely unexplored. Applying static pattern detection algorithms from scratch to every window is prohibitively expensive due to their high algorithmic complexity. This work tackles this problem by developing the first solution for incremental detection of neighbor-based patterns specific to sliding window scenarios. 
The specific pattern types covered in this work include density-based clusters and distance-based outliers. Incremental pattern computation in highly dynamic streaming environments is challenging, because purging a large amount of to-be-expired data from previously formed patterns may cause complex pattern changes including migration, splitting, merging and termination of these patterns. Previous incremental neighbor-based pattern detection algorithms, which were typically not designed to handle sliding windows, such as incremental DBSCAN, are not able to solve this problem efficiently in terms of both CPU and memory consumption. To overcome this, we exploit the "predictability" property of sliding windows to elegantly discount the effect of expiring objects on the remaining pattern structures. Our solution achieves minimal CPU utilization, while still keeping the memory utilization linear in the number of objects in the window. Our comprehensive experimental study, using both synthetic as well as real data from domains of stock trades and moving object monitoring, demonstrates superiority of our proposed strategies over alternate methods in both CPU and memory utilization. <s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> 3) IN DATA STREAMS <s> Anomaly detection is considered an important data mining task, aiming at the discovery of elements (also known as outliers) that show significant diversion from the expected case. More specifically, given a set of objects the problem is to return the suspicious objects that deviate significantly from the typical behavior. As in the case of clustering, the application of different criteria lead to different definitions for an outlier. In this work, we focus on distance-based outliers: an object x is an outlier if there are less than k objects lying at distance at most R from x. The problem offers significant challenges when a stream-based environment is considered, where data arrive continuously and outliers must be detected on-the-fly. There are a few research works studying the problem of continuous outlier detection. However, none of these proposals meets the requirements of modern stream-based applications for the following reasons: (i) they demand a significant storage overhead, (ii) their efficiency is limited and (iii) they lack flexibility. In this work, we propose new algorithms for continuous outlier monitoring in data streams, based on sliding windows. Our techniques are able to reduce the required storage overhead, run faster than previously proposed techniques and offer significant flexibility. Experiments performed on real-life as well as synthetic data sets verify our theoretical study. <s> BIB005 </s> Progress in Outlier Detection Techniques: A Survey <s> 3) IN DATA STREAMS <s> The discovery of distance-based outliers from huge volumes of streaming data is critical for modern applications ranging from credit card fraud detection to moving object monitoring. In this work, we propose the first general framework to handle the three major classes of distance-based outliers in streaming environments, including the traditional distance-threshold based and the nearest-neighbor-based definitions. Our LEAP framework encompasses two general optimization principles applicable across all three outlier types. First, our “minimal probing” principle uses a lightweight probing operation to gather minimal yet sufficient evidence for outlier detection. 
This principle overturns the state-of-the-art methodology that requires routinely conducting expensive complete neighborhood searches to identify outliers. Second, our “lifespan-aware prioritization” principle leverages the temporal relationships among stream data points to prioritize the processing order among them during the probing process. Guided by these two principles, we design an outlier detection strategy which is proven to be optimal in CPU costs needed to determine the outlier status of any data point during its entire life. Our comprehensive experimental studies, using both synthetic as well as real streaming data, demonstrate that our methods are 3 orders of magnitude faster than state-of-the-art methods for a rich diversity of scenarios tested yet scale to high dimensional streaming data. <s> BIB006 </s> Progress in Outlier Detection Techniques: A Survey <s> 3) IN DATA STREAMS <s> Data mining is one of the most exciting fields of research for the researcher. As data is getting digitized, systems are getting connected and integrated, scope of data generation and analytics has increased exponentially. Today, most of the systems generate non-stationary data of huge, size, volume, occurrence speed, fast changing etc. these kinds of data are called data streams. One of the most recent trend i.e. IOT (Internet Of Things) is also promising lots of expectation of people which will ease the use of day to day activities and it could also connect systems and people together. This situation will also lead to generation of data streams, thus present and future scope of data stream mining is highly promising. Characteristics of data stream possess many challenges for the researcher; this makes analytics of such data difficult and also acts as source of inspiration for researcher. Outlier detection plays important role in any application. In this paper we reviewed different techniques of outlier detection for stream data and their issues in detail and presented results of the same. <s> BIB007 </s> Progress in Outlier Detection Techniques: A Survey <s> 3) IN DATA STREAMS <s> Continuous outlier detection in data streams has important applications in fraud detection, network security, and public health. The arrival and departure of data objects in a streaming manner impose new challenges for outlier detection algorithms, especially in time and space efficiency. In the past decade, several studies have been performed to address the problem of distance-based outlier detection in data streams (DODDS), which adopts an unsupervised definition and does not have any distributional assumptions on data values. Our work is motivated by the lack of comparative evaluation among the state-of-the-art algorithms using the same datasets on the same platform. We systematically evaluate the most recent algorithms for DODDS under various stream settings and outlier rates. Our extensive results show that in most settings, the MCOD algorithm offers the superior performance among all the algorithms, including the most recent algorithm Thresh_LEAP. <s> BIB008
Lately, much incoming data arrives as a continuous flow, and storing all of it can be impractical because it must be processed quickly. In data streams, distance-based approaches face significant challenges such as the notion of time, multi-dimensionality, concept drift, and uncertainty BIB007 . Researchers have treated these as interesting challenges and have focused on designing algorithms to detect outliers in the data stream environment. A data stream is considered to be a large volume of unbounded, incoming sequential data. Since the mining of such data is highly dependent on time intervals, the computation is usually done in windows. The two well-known data stream window models are the landmark window and the sliding window BIB003 . In the former, a fixed time point in the stream is identified, and the points between that time point and the current time are analyzed; in the latter, the window is bounded by two sliding endpoints. Angiulli et al. BIB003 proposed a novel idea for one-time outlier queries in data streams, different from the continuous-query approach presented by the authors in , BIB001 . They proposed three Stream Outlier Miner (STORM) algorithms to detect outliers in data streams using the distance-based method. The first computes exact answers to outlier queries, while the other two retrieve approximate answers. The exact algorithm (Exact-Storm) makes use of a stream manager (which collects the incoming streams) and a suitable data structure (used by the query manager to answer outlier queries). One shortcoming of this algorithm is the cost of storing all the window objects; it is not suitable for very large windows, since they cannot fit into memory. To tackle this issue, the approximate algorithm (Approx-Storm) adapts Exact-Storm through two approximations: reducing the number of data points stored in each window and decreasing the space used to store each data point's neighbors. The final algorithm (Approx-fixed-memory) aims to minimize memory usage by keeping only a controlled fraction of the safe inliers. Yang et al. BIB004 proposed several methods (Abstract-C, Abstract-M, Exact-N, and Extra-N) for the incremental detection of neighbor-based patterns in sliding-window scenarios over data streams. The older static approach to pattern detection is costly and results in high complexity; the authors therefore address the handling of sliding windows, which was not supported in earlier incremental neighbor-based pattern detection algorithms such as incremental DBSCAN . Their experimental studies show lower CPU usage while maintaining memory usage linear in the number of objects in the window. Among these algorithms, Abstract-C is the only one that is distance-based, while the others are more closely linked with density-based clustering methods. Table 3 gives further details and summaries of these methods. Kontaki et al. BIB005 proposed algorithms that tackle issues in event detection in data streams BIB002 and in sliding-window scenarios over data streams BIB004 , both of which are characterized by continuous outlier detection. In the technique of Angiulli et al. BIB002 , two of the algorithms use the sliding window in parallel with the step function in the process of detecting the outliers.
The main objective in BIB005 is to minimize storage consumption, improve efficiency, and make the algorithm more flexible. The authors designed three algorithms to this end: Continuous Outlier Detection (COD), Advanced Continuous Outlier Detection (ACOD), and Micro-Cluster-Based Continuous Outlier Detection (MCOD). Here k is the parameter for the number of neighbors, and R is the distance parameter for the outlier detection. The first algorithm, COD, comes in two versions and supports a fixed radius with multiple k values, while ACOD supports multiple k and R values. The final algorithm, MCOD, minimizes the number of range queries and thereby reduces the amount of distance computation that needs to be done. The key difference between STORM and COD is the decline in the number of objects examined at each slide, while compared with Abstract-C, COD is much faster and requires less space. Another algorithm designed specifically for high-volume data streams, called Thresh_LEAP, was proposed by Cao et al. BIB006 ; it mitigates expensive range queries by not storing all the data points of a window in a single index structure. Leveraging modern distributed multi-core clusters to improve the scalability of outlier detection can be an exciting direction for future studies. In Table 4 , building on the survey by Tamboli et al. , which compared some distance-based outlier detection algorithms using the Massive Online Analysis tool , we add other methods that were not included in their work. In addition, Tran et al. BIB008 performed an evaluation study with detailed experiments on outlier detection methods in data streams; among all the algorithms presented, they conclude that MCOD has the best performance in most settings.
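The sliding-window setting in which these algorithms operate can be illustrated with a deliberately naive sketch (ours, not the published STORM, COD, or MCOD implementations; R and k are the distance and neighbor-count parameters as used above). A point is an outlier of the current window when fewer than k other window points lie within distance R of it:

```python
from collections import deque
import numpy as np

def window_outliers(stream, window, R, k):
    """Answer a distance-based outlier query on a count-based sliding
    window after each arrival; yields (time, outlier positions in window)."""
    win = deque(maxlen=window)            # oldest points expire automatically
    for t, p in enumerate(stream):
        win.append(np.asarray(p, dtype=float))
        pts = list(win)
        outliers = [
            i for i, q in enumerate(pts)
            if sum(1 for j, r in enumerate(pts)
                   if j != i and np.linalg.norm(q - r) <= R) < k
        ]
        yield t, outliers
```

Each arrival here costs O(W^2) for a window of W points, because the whole window is rescanned. STORM's per-point neighbor bookkeeping, the notion of safe inliers (points that already have at least k succeeding neighbors and therefore can never become outliers before they expire), and MCOD's micro-clusters all exist precisely to avoid this full recomputation.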
Progress in Outlier Detection Techniques: A Survey <s> Disadvantages, Challenges, And Gaps: <s> The outlier detection problem has important applications in the field of fraud detection, network robustness analysis, and intrusion detection. Most such applications are high dimensional domains in which the data can contain hundreds of dimensions. Many recent algorithms use concepts of proximity in order to find outliers based on their relationship to the rest of the data. However, in high dimensional space, the data is sparse and the notion of proximity fails to retain its meaningfulness. In fact, the sparsity of high dimensional data implies that every point is an almost equally good outlier from the perspective of proximity-based definitions. Consequently, for high dimensional data, the notion of finding meaningful outliers becomes substantially more complex and non-obvious. In this paper, we discuss new techniques for outlier detection which find the outliers by studying the behavior of projections from the data set. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> Disadvantages, Challenges, And Gaps: <s> Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> Disadvantages, Challenges, And Gaps: <s> Defining outliers by their distance to neighboring data points has been shown to be an effective non-parametric approach to outlier detection. In recent years, many research efforts have looked at developing fast distance-based outlier detection algorithms. Several of the existing distance-based outlier detection algorithms report log-linear time performance as a function of the number of data points on many real low-dimensional datasets. However, these algorithms are unable to deliver the same level of performance on high-dimensional datasets, since their scaling behavior is exponential in the number of dimensions. In this paper, we present RBRP, a fast algorithm for mining distance-based outliers, particularly targeted at high-dimensional datasets. RBRP scales log-linearly as a function of the number of data points and linearly as a function of the number of dimensions. Our empirical evaluation demonstrates that we outperform the state-of-the-art algorithm, often by an order of magnitude. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> Disadvantages, Challenges, And Gaps: <s> The problem of distance-based outlier detection is difficult to solve efficiently in very large datasets because of potential quadratic time complexity. We address this problem and develop sequential and distributed algorithms that are significantly more efficient than state-of-the-art methods while still guaranteeing the same outliers. 
By combining simple but effective indexing and disk block accessing techniques, we have developed a sequential algorithm iOrca that is up to an order-of-magnitude faster than the state-of-the-art. The indexing scheme is based on sorting the data points in order of increasing distance from a fixed reference point and then accessing those points based on this sorted order. To speed up the basic outlier detection technique, we develop two distributed algorithms (DOoR and iDOoR) for modern distributed multi-core clusters of machines, connected on a ring topology. The first algorithm passes data blocks from each machine around the ring, incrementally updating the nearest neighbors of the points passed. By maintaining a cutoff threshold, it is able to prune a large number of points in a distributed fashion. The second distributed algorithm extends this basic idea with the indexing scheme discussed earlier. In our experiments, both distributed algorithms exhibit significant improvements compared to the state-of-the-art distributed method [13]. <s> BIB004
i. They share some drawbacks with statistical and density-based approaches in high-dimensional spaces, as their performance declines due to the curse of dimensionality. The objects in the data often have discrete attributes, which makes it challenging to define distances between such objects.
ii. Search techniques such as the neighborhood and KNN searches in high-dimensional space are expensive for distance-based approaches, and on large data sets their scalability is not cost effective.
iii. Most existing distance-based methods cannot deal with data streams because it is difficult for them to maintain the data distribution of the local neighborhood and to find the KNN in the stream; methods specially designed to tackle data streams are the exception.
Discussing further, in Table 4 we present a comprehensive overview of well-known distance-based outlier detection algorithms. We summarize the different techniques in terms of their computational complexity (running time and memory consumption), the issues they address, and their drawbacks. Distance-based methods are a widely adopted approach since they have a strong theoretical basis and are computationally effective. However, they face some challenges. One critical drawback of most distance-based methods is their inability to scale well to very high-dimensional data sets BIB001 . Issues like the curse of dimensionality continue to be an evolving challenge: as the data dimension grows, the descriptive ability of the distance measures weakens, and it becomes quite tricky to apply indexing techniques to search for the neighbors. In multivariate data sets, computing the distance between data instances can be computationally demanding, consequently resulting in a lack of scalability. Even though researchers have worked on these problems, we still believe better algorithms can be designed that simultaneously achieve low memory cost and low computational time. To address the issue of quadratic complexity, researchers have proposed several significant algorithms and optimization techniques, such as applying compact data structures BIB003 , BIB004 and using pruning and randomization BIB002 , among many others. Another challenge worth noting is the inability of distance-based techniques to identify local outliers, since distance-based calculations are often done with respect to global information. For k-nearest neighbor approaches, the structure of the dataset plays a vital role in determining an appropriate KNN score. For most of the algorithms mentioned, choosing an appropriate threshold, when one is required, is among the most complex tasks, and the choice of k and of the other input parameters also strongly influences the results obtained. Furthermore, for detecting outliers in data streams, the fundamental requirement is computational speed. We believe that designing algorithms that support fast computation in both single and multiple data streams using distance-based techniques will be an exciting challenge for future directions. For growing research areas such as big data, which demand the processing of ever larger data sets, it is imperative to design robust distance-based algorithms that scale well, with low computational cost (running time and memory), on large, up-to-date real data sets in both batch and stream settings.
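The concentration effect mentioned above can be observed directly with a small self-contained illustration (ours, not an experiment from the surveyed works): as the dimensionality grows, the ratio between the largest and smallest pairwise distance in a uniform sample shrinks toward 1, eroding exactly the contrast that distance-based outlier scores rely on.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
for d in (2, 10, 100, 1000):
    X = rng.random((n, d))                           # n uniform points in [0, 1]^d
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # squared pairwise distances
    pair = np.sqrt(np.maximum(d2[np.triu_indices(n, k=1)], 0.0))
    print(f"d={d:4d}  max/min pairwise distance ratio = {pair.max() / pair.min():.2f}")
```

As the ratio approaches 1, "nearest" and "farthest" neighbors become nearly indistinguishable, which is why high-dimensional settings call for subspace projections BIB001 or rank- and hubness-based scores rather than raw distances.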
Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> The main purpose of this paper is to describe a process for partitioning an N-dimensional population into k sets on the basis of a sample. The process, which is called 'k-means,' appears to give partitions which are reasonably efficient in the sense of within-class variance. That is, if p is the probability mass function for the population, $S = \{S_1, S_2, \ldots, S_k\}$ is a partition of $E^N$, and $u_i$, $i = 1, 2, \ldots, k$, is the conditional mean of p over the set $S_i$, then $W^2(S) = \sum_{i=1}^{k} \int_{S_i} |z - u_i|^2 \, dp(z)$ tends to be low for the partitions S generated by the method. We say 'tends to be low,' primarily because of intuitive considerations, corroborated to some extent by mathematical analysis and practical computational experience. Also, the k-means procedure is easily programmed and is computationally economical, so that it is feasible to process very large samples on a digital computer. Possible applications include methods for similarity grouping, nonlinear prediction, approximating multivariate distributions, and nonparametric tests for independence among several variables. In addition to suggesting practical classification methods, the study of k-means has proved to be theoretically interesting. The k-means concept represents a generalization of the ordinary sample mean, and one is naturally led to study the pertinent asymptotic behavior, the object being to establish some sort of law of large numbers for the k-means. This problem is sufficiently interesting, in fact, for us to devote a good portion of this paper to it. The k-means are defined in section 2.1, and the main results which have been obtained on the asymptotic behavior are given there. The rest of section 2 is devoted to the proofs of these results. Section 3 describes several specific possible applications, and reports some preliminary results from computer experiments conducted to explore the possibilities inherent in the k-means idea. The extension to general metric spaces is indicated briefly in section 4. The original point of departure for the work described here was a series of problems in optimal classification (MacQueen [9]) which represented special <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> 1. Introduction. 2. Partitioning Around Medoids (Program PAM). 3. Clustering large Applications (Program CLARA). 4. Fuzzy Analysis. 5. Agglomerative Nesting (Program AGNES). 6. Divisive Analysis (Program DIANA). 7. Monothetic Analysis (Program MONA). Appendix 1. Implementation and Structure of the Programs. Appendix 2. Running the Programs. Appendix 3. Adapting the Programs to Your Needs. Appendix 4. The Program CLUSPLOT. References. Author Index. Subject Index. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> Clustering algorithms are attractive for the task of class identification in spatial databases. However, the application to large spatial databases rises the following requirements for clustering algorithms: minimal requirements of domain knowledge to determine the input parameters, discovery of clusters with arbitrary shape and good efficiency on large databases. The well-known clustering algorithms offer no solution to the combination of these requirements. In this paper, we present the new clustering algorithm DBSCAN relying on a density-based notion of clusters which is designed to discover clusters of arbitrary shape.
DBSCAN requires only one input parameter and supports the user in determining an appropriate value for it. We performed an experimental evaluation of the effectiveness and efficiency of DBSCAN using synthetic data and real data of the SEQUOIA 2000 benchmark. The results of our experiments demonstrate that (1) DBSCAN is significantly more effective in discovering clusters of arbitrary shape than the well-known algorithm CLAR-ANS, and that (2) DBSCAN outperforms CLARANS by a factor of more than 100 in terms of efficiency. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> Spatial data mining, i.e., discovery of interesting characteristics and patterns that may implicitly exist in spatial databases, is a challenging task due to the huge amounts of spatial data and to the new conceptual nature of the problems which must account for spatial distance. Clustering and region oriented queries are common problems in this domain. Several approaches have been presented in recent years, all of which require at least one scan of all individual objects (points). Consequently, the computational complexity is at least linearly proportional to the number of objects to answer each query. In this paper, we propose a hierarchical statistical information grid based approach for spatial data mining to reduce the cost further. The idea is to capture statistical information associated with spatial cells in such a manner that whole classes of queries and clustering problems can be answered without recourse to the individual objects. In theory, and confirmed by empirical studies, this approach outperforms the best previous method by at least an order of magnitude, especially when the data set is very large. <s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> Clustering, in data mining, is useful for discovering groups and identifying interesting distributions in the underlying data. Traditional clustering algorithms either favor clusters with spherical shapes and similar sizes, or are very fragile in the presence of outliers. We propose a new clustering algorithm called CURE that is more robust to outliers, and identifies clusters having non-spherical shapes and wide variances in size. CURE achieves this by representing each cluster by a certain fixed number of points that are generated by selecting well scattered points from the cluster and then shrinking them toward the center of the cluster by a specified fraction. Having more than one representative point per cluster allows CURE to adjust well to the geometry of non-spherical shapes and the shrinking helps to dampen the effects of outliers. To handle large databases, CURE employs a combination of random sampling and partitioning . A random sample drawn from the data set is first partitioned and each partition is partially clustered. The partial clusters are then clustered in a second pass to yield the desired clusters. Our experimental results confirm that the quality of clusters produced by CURE is much better than those found by existing algorithms. Furthermore, they demonstrate that random sampling and partitioning enable CURE to not only outperform existing algorithms but also to scale well for large databases without sacrificing clustering quality. <s> BIB005 </s> Progress in Outlier Detection Techniques: A Survey <s> D. 
CLUSTERING-BASED APPROACHES <s> Data mining applications place special requirements on clustering algorithms including: the ability to find clusters embedded in subspaces of high dimensional data, scalability, end-user comprehensibility of the results, non-presumption of any canonical data distribution, and insensitivity to the order of input records. We present CLIQUE, a clustering algorithm that satisfies each of these requirements. CLIQUE identifies dense clusters in subspaces of maximum dimensionality. It generates cluster descriptions in the form of DNF expressions that are minimized for ease of comprehension. It produces identical results irrespective of the order in which input records are presented and does not presume any specific mathematical form for data distribution. Through experiments, we show that CLIQUE efficiently finds accurate cluster in large high dimensional datasets. <s> BIB006 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> Many applications require the management of spatial data in a multidimensional feature space. Clustering large spatial databases is an important problem, which tries to find the densely populated regions in the feature space to be used in data mining, knowledge discovery, or efficient information retrieval. A good clustering approach should be efficient and detect clusters of arbitrary shape. It must be insensitive to the noise (outliers) and the order of input data. We propose WaveCluster, a novel clustering approach based on wavelet transforms, which satisfies all the above requirements. Using the multiresolution property of wavelet transforms, we can effectively identify arbitrarily shaped clusters at different degrees of detail. We also demonstrate that WaveCluster is highly efficient in terms of time complexity. Experimental results on very large datasets are presented, which show the efficiency and effectiveness of the proposed approach compared to the other recent clustering methods. <s> BIB007 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> For many KDD applications, such as detecting criminal activities in E-commerce, finding the rare instances or the outliers, can be more interesting than finding the common patterns. Existing work in outlier detection regards being an outlier as a binary property. In this paper, we contend that for many scenarios, it is more meaningful to assign to each object a degree of being an outlier. This degree is called the local outlier factor (LOF) of an object. It is local in that the degree depends on how isolated the object is with respect to the surrounding neighborhood. We give a detailed formal analysis showing that LOF enjoys many desirable properties. Using real-world datasets, we demonstrate that LOF can be used to find outliers which appear to be meaningful, but can otherwise not be identified with existing approaches. Finally, a careful performance evaluation of our algorithm confirms we show that our approach of finding local outliers can be practical. <s> BIB008 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> The clustering problem is a difficult problem for the data stream domain. This is because the large volumes of data arriving in a stream renders most traditional algorithms too inefficient. In recent years, a few one-pass clustering algorithms have been developed for the data stream problem. 
Although such methods address the scalability issues of the clustering problem, they are generally blind to the evolution of the data and do not address the following issues: (1) The quality of the clusters is poor when the data evolves considerably over time. (2) A data stream clustering algorithm requires much greater functionality in discovering and exploring clusters over different portions of the stream. The widely used practice of viewing data stream clustering algorithms as a class of one-pass clustering algorithms is not very useful from an application point of view. For example, a simple one-pass clustering algorithm over an entire data stream of a few years is dominated by the outdated history of the stream. The exploration of the stream over different time windows can provide the users with a much deeper understanding of the evolving behavior of the clusters. At the same time, it is not possible to simultaneously perform dynamic clustering over all possible time horizons for a data stream of even moderately large volume. This paper discusses a fundamentally different philosophy for data stream clustering which is guided by application-centered requirements. The idea is to divide the clustering process into an online component which periodically stores detailed summary statistics and an offline component which uses only this summary statistics. The offline component is utilized by the analyst who can use a wide variety of inputs (such as time horizon or number of clusters) in order to provide a quick understanding of the broad clusters in the data stream. The problems of efficient choice, storage, and use of this statistical data for a fast data stream turns out to be quite tricky. For this purpose, we use the concepts of a pyramidal time frame in conjunction with a microclustering approach. Our performance experiments over a number of real and synthetic data sets illustrate the effectiveness, efficiency, and insights provided by our approach. <s> BIB009 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> The data stream problem has been studied extensively in recent years, because of the great ease in collection of stream data. The nature of stream data makes it essential to use algorithms which require only one pass over the data. Recently, single-scan, stream analysis methods have been proposed in this context. However, a lot of stream data is high-dimensional in nature. High-dimensional data is inherently more complex in clustering, classification, and similarity search. Recent research discusses methods for projected clustering over high-dimensional data sets. This method is however difficult to generalize to data streams because of the complexity of the method and the large volume of the data streams. In this paper, we propose a new, high-dimensional, projected data stream clustering method, called HPStream. The method incorporates a fading cluster structure, and the projection based clustering methodology. It is incrementally updatable and is highly scalable on both the number of dimensions and the size of the data streams, and it achieves better clustering quality in comparison with the previous stream clustering methods. Our performance study with both real and synthetic data sets demonstrates the efficiency and effectiveness of our proposed framework and implementation methods. <s> BIB010 </s> Progress in Outlier Detection Techniques: A Survey <s> D.
CLUSTERING-BASED APPROACHES <s> Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise due to mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error or simply through natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences. It can identify errors and remove their contaminating effect on the data set and as such to purify the data for processing. The original outlier detection methods were arbitrary but now, principled and systematic techniques are used, drawn from the full gamut of Computer Science and Statistics. In this paper, we introduce a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review. <s> BIB011 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> Efficient clustering in dynamic spatial databases is currently an open problem with many potential applications. Most traditional spatial clustering algorithms are inadequate because they do not have an efficient support for incremental clustering.In this paper, we propose DClust, a novel clustering technique for dynamic spatial databases. DClust is able to provide multi-resolution view of the clusters, generate arbitrary shapes clusters in the presence of noise, generate clusters that are insensitive to ordering of input data and support incremental clustering efficiently. DClust utilizes the density criterion that captures arbitrary cluster shapes and sizes to select a number of representative points, and builds the Minimum Spanning Tree (MST) of these representative points, called R-MST. After the initial clustering, a summary of the cluster structure is built. This summary enables quick localization of the effect of data updates on the current set of clusters. Our experimental results show that DClust outperforms existing spatial clustering methods such as DBSCAN, C2P, DENCLUE, Incremental DBSCAN and BIRCH in terms of clustering time and accuracy of clusters found. <s> BIB012 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> Clustering is an important task in mining evolving data streams. Beside the limited memory and one-pass constraints, the nature of evolving data streams implies the following requirements for stream clustering: no assumption on the number of clusters, discovery of clusters with arbitrary shape and ability to handle outliers. While a lot of clustering algorithms for data streams have been proposed, they offer no solution to the combination of these requirements. In this paper, we present DenStream, a new approach for discovering clusters in an evolving data stream. The “dense” micro-cluster (named core-micro-cluster) is introduced to summarize the clusters with arbitrary shape, while the potential core-micro-cluster and outlier micro-cluster structures are proposed to maintain and distinguish the potential clusters and outliers. A novel pruning strategy is designed based on these concepts, which guarantees the precision of the weights of the micro-clusters with limited memory. Our performance study over a number of real and synthetic data sets demonstrates the effectiveness and efficiency of our method. <s> BIB013 </s> Progress in Outlier Detection Techniques: A Survey <s> D. 
CLUSTERING-BASED APPROACHES <s> Existing data-stream clustering algorithms such as CluStream are based on k-means. These clustering algorithms are incompetent to find clusters of arbitrary shapes and cannot handle outliers. Further, they require the knowledge of k and user-specified time window. To address these issues, this paper proposes D-Stream, a framework for clustering stream data using a density-based approach. The algorithm uses an online component which maps each input data record into a grid and an offline component which computes the grid density and clusters the grids based on the density. The algorithm adopts a density decaying technique to capture the dynamic changes of a data stream. Exploiting the intricate relationships between the decay factor, data density and cluster structure, our algorithm can efficiently and effectively generate and adjust the clusters in real time. Further, a theoretically sound technique is developed to detect and remove sporadic grids mapped to by outliers in order to dramatically improve the space and time efficiency of the system. The technique makes high-speed data stream clustering feasible without degrading the clustering quality. The experimental results show that our algorithm has superior quality and efficiency, can find clusters of arbitrary shapes, and can accurately recognize the evolving behaviors of real-time data streams. <s> BIB014 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> In this work a method for detecting distance-based outliers in data streams is presented. We deal with the sliding window model, where outlier queries are performed in order to detect anomalies in the current window. Two algorithms are presented. The first one exactly answers outlier queries, but has larger space requirements. The second algorithm is directly derived from the exact one, has limited memory requirements and returns an approximate answer based on accurate estimations with a statistical guarantee. Several experiments have been accomplished, confirming the effectiveness of the proposed approach and the high quality of approximate solutions. <s> BIB015 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> Outlier detection has recently become an important problem in many industrial and financial applications. This problem is further complicated by the fact that in many cases, outliers have to be detected from data streams that arrive at an enormous pace. In this paper, an incremental LOF (local outlier factor) algorithm, appropriate for detecting outliers in data streams, is proposed. The proposed incremental LOF algorithm provides equivalent detection performance as the iterated static LOF algorithm (applied after insertion of each data record), while requiring significantly less computational time. In addition, the incremental LOF algorithm also dynamically updates the profiles of data points. This is a very important property, since data profiles may change over time. The paper provides theoretical evidence that insertion of a new data point as well as deletion of an old data point influence only limited number of their closest neighbors and thus the number of updates per such insertion/deletion does not depend on the total number of points N in the data set.
Our experiments performed on several simulated and real life data sets have demonstrated that the proposed incremental LOF algorithm is computationally efficient, while at the same time very successful in detecting outliers and changes of distributional behavior in various data stream applications <s> BIB016 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> Anomaly detection is currently an important and active research problem in many fields and involved in numerous applications. Most of the existing methods are based on distance measure. But in case of data stream these methods are not very efficient as computational point of view. Most of the exiting work on outlier detection in data stream declare a point as an outlier/inlier as soon as it arrive due to limited memory resources as compared to the huge data stream, to declare an outlier as it arrive often can lead us to a wrong decision, because of dynamic nature of the incoming data. In this paper we introduced a clustering based approach, which divide the stream in chunks and cluster each chunk using k-mean in fixed number of clusters. Instead of keeping only the summary information, which often used in case of clustering data stream, we keep the candidate outliers and mean value of every cluster for the next fixed number of steam chunks, to make sure that the detected candidate outliers are the real outliers. By employing the mean value of the clusters of previous chunk with mean values of the current chunk of stream, we decide better outlierness for data stream objects. Several experiments on different dataset confirm that our technique can find better outliers with low computational cost than the other exiting distance based approaches of outlier detection in data stream. <s> BIB017 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> Anomaly detection is an important problem that has been researched within diverse research areas and application domains. Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain. For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with. <s> BIB018 </s> Progress in Outlier Detection Techniques: A Survey <s> D. 
CLUSTERING-BASED APPROACHES <s> Data stream clustering is an important task in data stream mining. In this paper, we propose SDStream, a new method for performing density-based data stream clustering over sliding windows. SDStream adopts the CluStream clustering framework. In the online component, the potential core-micro-cluster and outlier micro-cluster structures are introduced to maintain the potential clusters and outliers. They are stored in the form of an Exponential Histogram of Cluster Feature (EHCF) in main memory and are maintained by the maintenance of EHCFs. Outdated micro-clusters which need to be deleted are found by the value of t in the Temporal Cluster Feature (TCF). In the offline component, the final clusters of arbitrary shape are generated from all the potential core-micro-clusters maintained online by the DBSCAN algorithm. Experimental results show that SDStream, which can generate clusters of arbitrary shape, has a much higher clustering quality than CluStream, which generates spherical clusters. <s> BIB019 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> The detection of outliers has gained considerable interest in data mining with the realization that outliers can be the key discovery to be made from very large databases. Outliers arise due to various reasons such as mechanical faults, changes in system behavior, fraudulent behavior, human error and instrument error. Indeed, for many applications the discovery of outliers leads to more interesting and useful results than the discovery of inliers. Detection of outliers can lead to identification of system faults so that administrators can take preventive measures before they escalate. It is possible that anomaly detection may enable detection of new attacks. Outlier detection is an important anomaly detection approach. In this paper, we present a comprehensive survey of well-known distance-based, density-based and other techniques for outlier detection and compare them. We provide definitions of outliers and discuss their detection based on supervised and unsupervised learning in the context of network anomaly detection. <s> BIB020 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> With the increase of sensor and monitoring applications, data mining on streaming data is receiving increasing research attention. As data is continuously generated, mining algorithms need to be able to analyze the data in a one-pass fashion. In many applications the rate at which the data objects arrive varies greatly. This has led to anytime mining algorithms for classification or clustering. They successfully mine data until the a priori unknown point of interruption by the next data in the stream. In this work we investigate anytime outlier detection. Anytime outlier detection denotes the problem of determining within any period of time whether an object in a data stream is anomalous. The more time is available, the more reliable the decision should be. We introduce AnyOut, an algorithm capable of solving anytime outlier detection, and investigate different approaches to build up the underlying data structure. We propose a confidence measure for AnyOut that allows improving the performance on constant data streams. We evaluate our method in thorough experiments and demonstrate its performance in comparison with established algorithms for outlier detection. <s> BIB021 </s> Progress in Outlier Detection Techniques: A Survey <s> D.
CLUSTERING-BASED APPROACHES <s> Outlier detection is a fundamental and active research problem in many fields and is involved in numerous applications. Many of the existing methods are based on distance measures, but for stream data such methods are not efficient. Most of the previous work on outlier detection declares outliers online, with lower accuracy that may lead to wrong decisions; moreover, existing work on outlier detection in data streams declares a point an outlier/inlier as soon as it arrives, owing to memory resources that are limited relative to the huge data stream, and declaring an outlier on arrival can often lead to a wrong decision because of the dynamic nature of the incoming data. The aim of this study is to present a clustering-based algorithm that detects outliers in stream data by concentrating on finding the real outliers over a period of time: candidate outliers received earlier are reconsidered in order to identify the real outliers in the stream. The accuracy of this method is higher than that of the other methods. <s> BIB022 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> Two new incremental models for online anomaly detection in data streams at nodes in wireless sensor networks are discussed. These models are incremental versions of a model that uses ellipsoids to detect first, second, and higher-ordered anomalies in arrears. The incremental versions can also be used this way but have additional capabilities offered by processing data incrementally as they arrive in time. Specifically, they can detect anomalies 'on-the-fly' in near real time. They can also be used to track temporal changes in near real-time because of sensor drift, cyclic variation, or seasonal changes. One of the new models has a mechanism that enables graceful degradation of inputs in the distant past (fading memory). Three real datasets from single sensors in deployed environmental monitoring networks are used to illustrate various facets of the new models. Examples compare the incremental version with the previous batch and dynamic models and show that the incremental versions can detect various types of dynamic anomalies in near real time. <s> BIB023 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> Anomaly detection in data streams plays a vital role in on-line data mining applications. A major challenge for anomaly detection is the dynamically changing nature of many monitoring environments. This causes a problem for traditional anomaly detection techniques in data streams, which assume a relatively static monitoring environment. In an environment that is intermittently changing (known as switching data streams), static approaches can yield a high error rate in terms of false positives. To cope with dynamic environments, we require an approach that can learn from the history of normal behaviour in data streams, while accounting for the fact that not all time periods in the past are equally relevant. Consequently, we have proposed a relevance-weighted ensemble model for learning normal behaviour, which forms the basis of our anomaly detection scheme. The advantage of this approach is that it can improve the accuracy of detection by using relevant history, while remaining computationally efficient. Our solution provides a novel contribution through the use of ensemble techniques for anomaly detection in switching data streams.
Our empirical results on real and synthetic data streams show that we can achieve substantial improvements compared to a recent anomaly detection algorithm for data streams. <s> BIB024 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> Evolvable Takagi–Sugeno (T–S) models are fuzzy-rule-based models with the ability to continuously learn and adapt to incoming samples from data streams. The model adjusts both premise and consequent parameters to enhance the performance of the model. This paper introduces a new methodology for the estimation of the premise parameters in the evolvable T–S (eTS) model. Incremental updates for the weighted sample mean and inverse of the covariance matrix enable us to construct an evolvable fuzzy rule base that is used to detect outliers and regime changes in the input stream. We compare our model with Angelov's eTS+ model on artificial and real data. <s> BIB025 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> Outlier detection is an important issue in the realm of data mining. Several applications rely on outlier detection, such as intrusion detection, fraud detection, medical and public health data, image processing, etc. Clustering-based outlier detection algorithms are considered among the most important outlier detection approaches. They provide a high detection rate; however, they suffer from high false positives. In this paper, we propose a clustering-based outlier detection algorithm that supports searching for outliers not only in small clusters but also in large clusters with an optimized calculation methodology. The experimental results demonstrate the good performance of the algorithm in terms of detection accuracy by increasing the detection rate, decreasing the false positives, and minimizing outlierness factor calculations. <s> BIB026 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> Text clustering is a challenging problem due to the high-dimensional and large-volume characteristics of text datasets. In this paper, we propose a collapsed Gibbs Sampling algorithm for the Dirichlet Process Multinomial Mixture model for text clustering (abbr. to GSDPMM) which does not need to specify the number of clusters in advance and can cope with the high-dimensional problem of text clustering. Our extensive experimental study shows that GSDPMM can achieve significantly better performance than three other clustering methods and can achieve high consistency on both long and short text datasets. We found that GSDPMM has low time and space complexity and can scale well with huge text datasets. We also propose some novel and effective methods to detect the outliers in the dataset and obtain the representative words of each cluster. <s> BIB027 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> Anomaly detection in data streams has become a major research problem in the era of ubiquitous sensing. We are collecting large amounts of data from non-stationary environments, which makes traditional anomaly detection techniques ineffective. In this paper we propose an unsupervised cluster-based algorithm for modelling normal behaviour in non-stationary data streams and detecting anomalous data points. We show that our method scales linearly with the number of observed data points, while the complexity of our model is independent of the size of the data stream.
We have employed a selective clustering approach to optimize the computation time needed to model the normal data. Our experiments on large-scale synthetic and real life datasets show that the accuracy of the proposed algorithm is comparable to the state-of-the-art techniques reported in the literature while providing substantial improvements in terms of computation time. <s> BIB028 </s> Progress in Outlier Detection Techniques: A Survey <s> D. CLUSTERING-BASED APPROACHES <s> Clustering data streams is an emerging challenge with a wide range of applications in areas including Wireless Sensor Networks, the Internet of Things, finance and social media. In an evolving data stream, a clustering algorithm is desired to both (a) assign observations to clusters and (b) identify anomalies in real-time. Current state-of-the-art algorithms in the literature do not address feature (b) as they only consider the spatial proximity of data, which results in (1) poor clustering and (2) poor demonstration of the temporal evolution of data in noisy environments. In this paper, we propose an online clustering algorithm that considers the temporal proximity of observations as well as their spatial proximity to identify anomalies in real-time. It identifies the evolution of clusters in noisy streams, incrementally updates the model and calculates the minimum window length over the evolving data stream without jeopardizing performance. To the best of our knowledge, this is the first online clustering algorithm that identifies anomalies in real-time and discovers the temporal evolution of clusters. Our contributions are supported by synthetic as well as real-world data experiments. <s> BIB029
Clustering-based techniques generally rely on clustering methods to describe the behavior of the data: clusters that comprise significantly fewer data points than the other clusters are labeled as outliers. It is important to note that clustering is distinct from the outlier detection process; the main aim of clustering methods is to recognize the clusters, while the aim of outlier detection is to detect the outliers. The performance of clustering-based techniques is highly dependent on the effectiveness of the clustering algorithm in capturing the cluster structure of the normal instances . Clustering-based methods are unsupervised since they do not require any prior knowledge. Numerous research studies have used clustering-based techniques, and some of them are furnished with mechanisms to minimize the adverse influence of the outliers. Zhang , in his work, introduced many clustering-based algorithms and divided them into different groups. As most of these algorithms were not proposed within this decade, we deem it unnecessary to repeat them here and refer our readers to the original references of the studies for a detailed introduction to the algorithms listed below. Clustering-based outlier detection algorithms have been grouped into the following subgroups.
i. Partitioning Clustering methods: also known as distance-based clustering algorithms. Here, the number of clusters is either randomly chosen or given in advance. Examples of algorithms that fall under this group include PAM , CLARANS , K-Means BIB001 , and CLARA .
ii. Hierarchical Clustering methods: they partition the set of objects into groups at different levels, forming a tree-like structure, and usually require the maximum number of clusters in order to build the levels. Examples include MST , CURE BIB005 , and CHAMELEON .
iii. Density-based clustering methods: they do not require the number of clusters to be given in advance, as partitioning methods such as K-Means do. Given the radius of the cluster, they model the clusters as dense regions. Examples include DBSCAN BIB003 and DENCLUE .
Other groups include:
iv. Grid-based clustering methods: STING BIB004 , Wavecluster BIB007 , DCluster BIB012 .
v. Clustering methods for High-Dimensional Data: CLIQUE BIB006 , HPStream BIB010 .
In addition to the algorithms above, which have been covered in the existing literature BIB011 , BIB018 , , BIB020 , a two-phase algorithm called DenStream was proposed by Cao et al. BIB013 and D-Stream by Chen et al. BIB014 . Both use density-based clustering to address online and offline outlier detection. In DenStream, the initial (online) phase records summary information about the data stream, and the latter (offline) phase clusters the summarized data. Outliers are detected by introducing outlier micro-clusters to differentiate real data points from outliers, and the main distinction between the two is weight: if the weight of a micro-cluster is less than the density threshold for outlier micro-clusters, the micro-cluster is a real outlier, and the algorithm removes it. To show its effectiveness, DenStream was evaluated against CluStream BIB009 and showed improved performance in terms of memory, since snapshots are saved on disk rather than in memory.
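To make the weight test above concrete, the following is a minimal, much-simplified sketch of a damped-window micro-cluster scheme in Python. The flat (center, weight) structure, the decay rate, the radius, and the threshold values are all illustrative assumptions of this sketch, not DenStream's published components or defaults.

import numpy as np

LAM, BETA_MU, RADIUS, FLOOR = 0.01, 3.0, 0.5, 0.25   # illustrative constants

def step(mcs, x, dt=1.0):
    """Absorb one point x into the micro-cluster list mcs; return the
    micro-clusters currently classified as outlier micro-clusters."""
    for mc in mcs:                                    # damped-window decay
        mc["w"] *= 2.0 ** (-LAM * dt)
    x = np.asarray(x, dtype=float)
    near = [mc for mc in mcs if np.linalg.norm(mc["c"] - x) < RADIUS]
    if near:                                          # absorb into nearest
        mc = min(near, key=lambda m: np.linalg.norm(m["c"] - x))
        mc["w"] += 1.0
        mc["c"] += (x - mc["c"]) / mc["w"]            # shift center toward x
    else:                                             # open a new candidate
        mcs.append({"c": x.copy(), "w": 1.0})
    mcs[:] = [mc for mc in mcs if mc["w"] > FLOOR]    # prune faded clusters
    return [mc for mc in mcs if mc["w"] < BETA_MU]    # the weight test

In the real algorithm the micro-clusters also carry linear and squared sums so that radii can be computed exactly; a center plus a weight is the barest caricature that still exhibits the weight test.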
Despite these strengths, the method still falls short in some areas, for example, in finding arbitrary-shape clusters at multiple levels of granularity and in adapting to dynamic parameters in the data stream. Even though the technique was proposed in 2006, we still believe future work can address these issues since, to the best of our knowledge, the problems persist. The other technique, D-Stream BIB014 , is similar to DenStream in having an online and an offline component, except that it is a density grid-based clustering algorithm. Detecting outliers here is not as difficult as in the previous method, owing to the introduction of dense, sparse, and sporadic grids that characterize the noise: grids whose density falls below the defined threshold are considered outliers. The algorithm also outperforms CluStream in both running time and clustering quality. Since the algorithms in BIB013 and BIB014 use damped window models, Ren et al. BIB019 proposed SDStream, an algorithm that uses the sliding window model. Assent et al. BIB021 proposed AnyOut to quickly compute and detect outliers in streaming data at any time. To identify outliers under constantly varying arrival rates, AnyOut uses ClusTree to build a precise tree structure; the ClusTree is appropriate for anytime clustering. Elahi et al. BIB017 proposed a k-means-based outlier detection technique for data streams that splits the stream into chunks for processing; however, it does not fit well for grouped outliers. The experimental results illustrated that their method achieved better performance than some existing techniques BIB015 , BIB016 for discovering significant outliers over the data stream. Nevertheless, the authors believe that integrating distance-based methods more firmly with the clustering algorithm could yield better results. Moreover, finding other ways to assign an outlierness degree to the detected outliers in the data stream is another promising research question. In another study , using a k-means concept similar to MacQueen et al. BIB001 together with a weighting rule, the authors proposed a clustering-based framework to detect outliers in changing data streams. They assign a weight to each feature according to its relevance; the weighted attributes are significant because they curb the effect of noise attributes during the algorithm's processing. Compared to LOF BIB008 , this technique consumes less time, shows a higher outlier detection rate, and has a lower false alarm rate. Even though the work improved on the baseline algorithm (LOF), it falls short in extending the algorithm to real-world data sets and investigating its effects there; addressing this issue and designing new scales for the outlierness degree in data streams would make an exciting future study. Hosein et al. BIB022 proposed a clustering-based technique, an advanced incremental k-means algorithm, for detecting outliers in big data streams. In another work, an unsupervised outlier detection scheme that uses both density-based and partitioning-based schemes for streaming data was proposed by Bhosale et al. , whose main idea is based on partitioning clustering techniques BIB002 , , assigning weights (using weighted k-means clustering) to attributes based on their relevance and adaptivity; a sketch of the shared chunk-based skeleton follows.
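Several of the stream schemes above, notably BIB017 and the weighted variants, share a chunk-and-cluster skeleton: cluster the current chunk, hold back candidate outliers, and confirm them against a later chunk. A rough sketch of that skeleton, assuming scikit-learn, a NumPy array as the stream, and illustrative values for the chunk size, the number of clusters, and the distance threshold, might look like this:

import numpy as np
from sklearn.cluster import KMeans

def chunk_outliers(stream, chunk=200, k=4, thresh=3.0):
    """Indices of points confirmed as outliers across consecutive chunks.
    Assumes every chunk holds at least k points."""
    pending = np.array([], dtype=int)   # candidates awaiting confirmation
    confirmed = []
    for start in range(0, len(stream), chunk):
        block = stream[start:start + chunk]
        centers = KMeans(n_clusters=k, n_init=10,
                         random_state=0).fit(block).cluster_centers_
        def dmin(pts):                  # distance to the nearest centroid
            return np.min(np.linalg.norm(
                pts[:, None, :] - centers[None, :, :], axis=2), axis=1)
        # Candidates from the previous chunk that are still far from every
        # current centroid are confirmed as real outliers.
        if len(pending):
            confirmed.extend(pending[dmin(stream[pending]) > thresh].tolist())
        pending = start + np.where(dmin(block) > thresh)[0]
    return confirmed

The deferral is the point of the design: a point that merely arrived early in a shifting region is rescued once later centroids move toward it, which is exactly the wrong-decision problem the chunked methods describe.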
Bhosale et al.'s technique is incremental and can adapt to concept evolution, and it achieves a higher outlier detection rate than BIB017 . The authors suggested extending the work to mixed and categorical data as future research. Moshtaghi et al. BIB023 used a clustering approach to propose a model that labels objects outside the cluster boundaries as outliers; the mean and covariance matrix are incrementally updated to track the underlying distribution changes in the data stream. Similarly, Moshtaghi et al. in another work proposed eTSAD BIB025 , an approach that models streaming data with established elliptical fuzzy rules, where the fuzzy parameters of incoming data are updated as in BIB023 ; this helps in the detection of outliers. Salehi et al. BIB024 proposed an ensemble technique for evolving data streams: instead of modeling and updating the data stream over time, they generate an ensemble of clustering models, and the outlierness value of an incoming data point is calculated by utilizing only the applicable set of clustering models. Chenaghlou et al. BIB028 proposed an efficient outlier detection algorithm that introduces the concept of active clusters for better time and memory efficiency: the input data is split into chunks, active clusters are identified for each arriving chunk, and the models of the underlying distributions are updated. Rizk et al. BIB026 proposed an optimized calculation algorithm that enhances the search for outliers in both large and small clusters. Chenaghlou et al. BIB029 extended their work in BIB028 with an algorithm that detects outliers in real time and also discovers the temporal evolution of the clusters. Yin et al. BIB027 proposed new and effective methods for detecting outliers in text clustering: documents with a low probability of fitting an existing cluster are treated as outliers, and clusters that hold only one document in the result of Gibbs Sampling of the Dirichlet Process Multinomial Mixture (GSDPMM) are classified as outliers in the data set. Since GSDPMM has great potential for incremental clustering, applying it to incremental clustering is an interesting direction for future work. When designing clustering-based algorithms for outlier detection, the following questions are usually taken into consideration.
i. Whether the object belongs to a cluster, and whether objects outside every cluster can be regarded as outliers.
ii. Whether the object lies near to or far from its cluster; if it is distant, can it be regarded as an outlier?
iii. Whether the object belongs to an insignificantly small or sparse cluster, and how to label the objects within such a cluster.
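Questions (i) and (iii) can be made concrete in a few lines. The sketch below, which assumes scikit-learn and treats the cluster count and the 2% size threshold as illustrative choices rather than prescriptions from any surveyed method, labels every member of an unusually small cluster as an outlier:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(500, 2)),   # dense normal region
               rng.uniform(6.0, 8.0, size=(5, 2))])   # tiny, distant group

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
sizes = np.bincount(labels)                 # per-cluster sizes

# Question (iii) as a rule: a cluster holding under 2% of the data is
# "insignificantly small", and all of its members are labeled outliers.
is_outlier = sizes[labels] < 0.02 * len(X)
print("flagged indices:", np.where(is_outlier)[0])

Question (ii) would instead replace the size rule with a distance rule, flagging points that lie far from their nearest centroid.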
Progress in Outlier Detection Techniques: A Survey <s> VOLUME 7, 2019 <s> We consider clustering problems in which the available attributes can be split into two independent subsets, such that either subset suffices for learning. Example applications of this multi-view setting include clustering of Web pages which have an intrinsic view (the pages themselves) and an extrinsic view (e.g., anchor texts of inbound hyperlinks); multi-view learning has so far been studied in the context of classification. We develop and study partitioning and agglomerative, hierarchical multi-view clustering algorithms for text data. We find empirically that the multi-view versions of k-means and EM greatly improve on their single-view counterparts. By contrast, we obtain negative results for agglomerative hierarchical multi-view clustering. Our analysis explains this surprising phenomenon. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> VOLUME 7, 2019 <s> Outlier detection has recently become an important problem in many industrial and financial applications. In this paper, a novel feature bagging approach for detecting outliers in very large, high dimensional and noisy databases is proposed. It combines results from multiple outlier detection algorithms that are applied using different set of features. Every outlier detection algorithm uses a small subset of features that are randomly selected from the original feature set. As a result, each outlier detector identifies different outliers, and thus assigns to all data records outlier scores that correspond to their probability of being outliers. The outlier scores computed by the individual outlier detection algorithms are then combined in order to find the better quality outliers. Experiments performed on several synthetic and real life data sets show that the proposed methods for combining outputs from multiple outlier detection algorithms provide non-trivial improvements over the base algorithm. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> VOLUME 7, 2019 <s> Most existing model-based approaches to anomaly detection construct a profile of normal instances, then identify instances that do not conform to the normal profile as anomalies. This paper proposes a fundamentally different model-based method that explicitly isolates anomalies instead of profiles normal points. To our best knowledge, the concept of isolation has not been explored in current literature. The use of isolation enables the proposed method, iForest, to exploit sub-sampling to an extent that is not feasible in existing methods, creating an algorithm which has a linear time complexity with a low constant and a low memory requirement. Our empirical evaluation shows that iForest performs favourably to ORCA, a near-linear time complexity distance-based method, LOF and random forests in terms of AUC and processing time, and especially in large data sets. iForest also works well in high dimensional problems which have a large number of irrelevant attributes, and in situations where training set does not contain any anomalies. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> VOLUME 7, 2019 <s> Outlier scores provided by different outlier models differ widely in their meaning, range, and contrast between different outlier models and, hence, are not easily comparable or interpretable. 
We propose a unification of outlier scores provided by various outlier models and a translation of the arbitrary “outlier factors” to values in the range [0, 1] interpretable as values describing the probability of a data object of being an outlier. As an application, we show that this unification facilitates enhanced ensembles for outlier detection. <s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> VOLUME 7, 2019 <s> Traditional clustering algorithms identify just a single clustering of the data. Today's complex data, however, allow multiple interpretations leading to several valid groupings hidden in different views of the database. Each of these multiple clustering solutions is valuable and interesting as different perspectives on the same data and several meaningful groupings for each object are given. Especially for high dimensional data, where each object is described by multiple attributes, alternative clusters in different attribute subsets are of major interest. In this tutorial, we describe several real world application scenarios for multiple clustering solutions. We abstract from these scenarios and provide the general challenges in this emerging research area. We describe state-of-the-art paradigms, we highlight specific techniques, and we give an overview of this topic by providing a taxonomy of the existing clustering methods. By focusing on open challenges, we try to attract young researchers for participating in this emerging research field. <s> BIB005 </s> Progress in Outlier Detection Techniques: A Survey <s> VOLUME 7, 2019 <s> Outlier detection research is currently focusing on the development of new methods and on improving the computation time for these methods. Evaluation however is rather heuristic, often considering just precision in the top k results or using the area under the ROC curve. These evaluation procedures do not allow for assessment of similarity between methods. Judging the similarity of or correlation between two rankings of outlier scores is an important question in itself but it is also an essential step towards meaningfully building outlier detection ensembles, where this aspect has been completely ignored so far. In this study, our generalized view of evaluation methods allows both to evaluate the performance of existing methods as well as to compare different methods w.r.t. their detection performance. Our new evaluation framework takes into consideration the class imbalance problem and offers new insights on similarity and redundancy of existing outlier detection methods. As a result, the design of effective ensemble methods for outlier detection is considerably enhanced. <s> BIB006 </s> Progress in Outlier Detection Techniques: A Survey <s> VOLUME 7, 2019 <s> Outlier detection and ensemble learning are well established research directions in data mining yet the application of ensemble techniques to outlier detection has been rarely studied. Here, we propose and study subsampling as a technique to induce diversity among individual outlier detectors. We show analytically and experimentally that an outlier detector based on a subsample per se, besides inducing diversity, can, under certain conditions, already improve upon the results of the same outlier detector on the complete dataset. Building an ensemble on top of several subsamples is further improving the results. 
While in the literature so far the intuition that ensembles improve over single outlier detectors has just been transferred from the classification literature, here we also justify analytically why ensembles are also expected to work in the unsupervised area of outlier detection. As a side effect, running an ensemble of several outlier detectors on subsamples of the dataset is more efficient than ensembles based on other means of introducing diversity and, depending on the sample rate and the size of the ensemble, can be even more efficient than just the single outlier detector on the complete data. <s> BIB007 </s> Progress in Outlier Detection Techniques: A Survey <s> VOLUME 7, 2019 <s> Ensemble analysis is a widely used meta-algorithm for many data mining problems such as classification and clustering. Numerous ensemble-based algorithms have been proposed in the literature for these problems. Compared to the clustering and classification problems, ensemble analysis has been studied in a limited way in the outlier detection literature. In some cases, ensemble analysis techniques have been implicitly used by many outlier analysis algorithms, but the approach is often buried deep into the algorithm and not formally recognized as a general-purpose meta-algorithm. This is in spite of the fact that this problem is rather important in the context of outlier analysis. This paper discusses the various methods which are used in the literature for outlier ensembles and the general principles by which such analysis can be made more effective. A discussion is also provided on how outlier ensembles relate to the ensemble-techniques used commonly for other data mining problems. <s> BIB008 </s> Progress in Outlier Detection Techniques: A Survey <s> VOLUME 7, 2019 <s> Outlier detection and ensemble learning are well established research directions in data mining yet the application of ensemble techniques to outlier detection has been rarely studied. Building an ensemble requires learning of diverse models and combining these diverse models in an appropriate way. We propose data perturbation as a new technique to induce diversity in individual outlier detectors as well as a rank accumulation method for the combination of the individual outlier rankings in order to construct an outlier detection ensemble. In an extensive evaluation, we study the impact, potential, and shortcomings of this new approach for outlier detection ensembles. We show that this ensemble can significantly improve over weak performing base methods. <s> BIB009 </s> Progress in Outlier Detection Techniques: A Survey <s> VOLUME 7, 2019 <s> Ensemble analysis has recently been studied in the context of the outlier detection problem. In this paper, we investigate the theoretical underpinnings of outlier ensemble analysis. In spite of the significant differences between the classification and the outlier analysis problems, we show that the theoretical underpinnings between the two problems are actually quite similar in terms of the bias-variance trade-off. We explain the existing algorithms within this traditional framework, and clarify misconceptions about the reasoning underpinning these methods. We propose more effective variants of subsampling and feature bagging. We also discuss the impact of the combination function and discuss the specific trade-offs of the average and maximization functions. We use these insights to propose new combination functions that are robust in many settings. 
<s> BIB010 </s> Progress in Outlier Detection Techniques: A Survey <s> VOLUME 7, 2019 <s> The problem of detecting a small number of outliers in a large dataset is an important task in many fields from fraud detection to high-energy physics. Two approaches have emerged to tackle this problem: unsupervised and supervised. Supervised approaches require a sufficient amount of labeled data and are challenged by novel types of outliers and inherent class imbalance, whereas unsupervised methods do not take advantage of available labeled training examples and often exhibit poorer predictive performance. We propose BORE (a Bagged Outlier Representation Ensemble) which uses unsupervised outlier scoring functions (OSFs) as features in a supervised learning framework. BORE is able to adapt to arbitrary OSF feature representations, to the imbalance in labeled data as well as to prediction-time constraints on computational cost. We demonstrate the good performance of BORE compared to a variety of competing methods in the non-budgeted and the budgeted outlier detection problem on 12 real-world datasets. <s> BIB011 </s> Progress in Outlier Detection Techniques: A Survey <s> VOLUME 7, 2019 <s> Ensemble techniques for classification and clustering have long proven effective, yet anomaly ensembles have been barely studied. In this work, we tap into this gap and propose a new ensemble approach for anomaly mining, with application to event detection in temporal graphs. Our method aims to combine results from heterogeneous detectors with varying outputs, and leverage the evidence from multiple sources to yield better performance. However, trusting all the results may deteriorate the overall ensemble accuracy, as some detectors may fall short and provide inaccurate results depending on the nature of the data in hand. This suggests that being selective in which results to combine is vital in building effective ensembles; hence "less is more". In this paper we propose SELECT; an ensemble approach for anomaly mining that employs novel techniques to automatically and systematically select the results to assemble in a fully unsupervised fashion. We apply our method to event detection in temporal graphs, where SELECT successfully utilizes five base detectors and seven consensus methods under a unified ensemble framework. We provide extensive quantitative evaluation of our approach on five real-world datasets (four with ground truth), including Enron email communications, New York Times news corpus, and World Cup 2014 Twitter news feed. Thanks to its selection mechanism, SELECT yields superior performance compared to individual detectors alone, the full ensemble (naively combining all results), and an existing diversity-based ensemble. <s> BIB012 </s> Progress in Outlier Detection Techniques: A Survey <s> VOLUME 7, 2019 <s> Ensemble methods for classification and clustering have been effectively used for decades, while ensemble learning for outlier detection has only been studied recently. In this work, we design a new ensemble approach for outlier detection in multi-dimensional point data, which provides improved accuracy by reducing error through both bias and variance. Although classification and outlier detection appear as different problems, their theoretical underpinnings are quite similar in terms of the bias-variance trade-off [1], where outlier detection is considered as a binary classification task with unobserved labels but a similar bias-variance decomposition of error.
In this paper, we propose a sequential ensemble approach called CARE that employs a two-phase aggregation of the intermediate results in each iteration to reach the final outcome. Unlike existing outlier ensembles which solely incorporate a parallel framework by aggregating the outcomes of independent base detectors to reduce variance, our ensemble incorporates both the parallel and sequential building blocks to reduce bias as well as variance by (i) successively eliminating outliers from the original dataset to build a better data model on which outlierness is estimated (sequentially), and (ii) combining the results from individual base detectors and across iterations (parallelly). Through extensive experiments on sixteen real-world datasets mainly from the UCI machine learning repository [2], we show that CARE performs significantly better than or at least similar to the individual baselines. We also compare CARE with the state-of-the-art outlier ensembles where it also provides significant improvement when it is the winner and remains close otherwise. <s> BIB013 </s> Progress in Outlier Detection Techniques: A Survey <s> VOLUME 7, 2019 <s> Outlier detection methods have used approximate neighborhoods in filter-refinement approaches. Outlier detection ensembles have used artificially obfuscated neighborhoods to achieve diverse ensemble members. Here we argue that outlier detection models could be based on approximate neighborhoods in the first place, thus gaining in both efficiency and effectiveness. It depends, however, on the type of approximation, as only some seem beneficial for the task of outlier detection, while no (large) benefit can be seen for others. In particular, we argue that space-filling curves are beneficial approximations, as they have a stronger tendency to underestimate the density in sparse regions than in dense regions. In comparison, LSH and NN-Descent do not have such a tendency and do not seem to be beneficial for the construction of outlier detection ensembles. <s> BIB014 </s> Progress in Outlier Detection Techniques: A Survey <s> VOLUME 7, 2019 <s> In many domains, important events are not represented as the common scenario, but as deviations from the rule. The importance and impact associated with these particular, outnumbered, deviant, and sometimes even previously unseen events is directly related to the application domain (e.g., breast cancer detection, satellite image classification, etc.). The detection of these rare events or outliers has recently been gaining popularity as evidenced by the wide variety of algorithms currently available. These algorithms are based on different assumptions about what constitutes an outlier, a characteristic pointing toward their integration in an ensemble to improve their individual detection rate. However, there are two factors that limit the use of current ensemble outlier detection approaches: first, in most cases, outliers are not detectable in full dimensionality, but instead are located in specific subspaces of data; and second, despite the expected improvement on detection rate achieved using an ensemble of detectors, the computational efficiency of the ensemble will increase linearly as the number of components increases. In this article, we propose an ensemble approach that identifies outliers based on different subsets of features and subsamples of data, providing more robust results while improving the computational efficiency of similar ensemble outlier detection approaches.
<s> BIB015 </s> Progress in Outlier Detection Techniques: A Survey <s> VOLUME 7, 2019 <s> A new semi-supervised ensemble algorithm called XGBOD (Extreme Gradient Boosting Outlier Detection) is proposed, described and demonstrated for the enhanced detection of outliers from normal observations in various practical datasets. The proposed framework combines the strengths of both supervised and unsupervised machine learning methods by creating a hybrid approach that exploits each of their individual performance capabilities in outlier detection. XGBOD uses multiple unsupervised outlier mining algorithms to extract useful representations from the underlying data that augment the predictive capabilities of an embedded supervised classifier on an improved feature space. The novel approach is shown to provide superior performance in comparison to competing individual detectors, the full ensemble and two existing representation learning based algorithms across seven outlier datasets. <s> BIB016 </s> Progress in Outlier Detection Techniques: A Survey <s> VOLUME 7, 2019 <s> Ensemble techniques have been applied to the unsupervised outlier detection problem in some scenarios. Challenges are the generation of diverse ensemble members and the combination of individual results into an ensemble. For the latter challenge, some methods tried to design smaller ensembles out of a wealth of possible ensemble members, to improve the diversity and accuracy of the ensemble (relating to the ensemble selection problem in classification). We propose a boosting strategy for combinations showing improvements on benchmark datasets. <s> BIB017 </s> Progress in Outlier Detection Techniques: A Survey <s> VOLUME 7, 2019 <s> In unsupervised outlier ensembles, the absence of ground truth makes the combination of base outlier detectors a challenging task. Specifically, existing parallel outlier ensembles lack a reliable way of selecting competent base detectors, affecting accuracy and stability, during model combination. In this paper, we propose a framework---called Locally Selective Combination in Parallel Outlier Ensembles (LSCP)---which addresses the issue by defining a local region around a test instance using the consensus of its nearest neighbors in randomly selected feature subspaces. The top-performing base detectors in this local region are selected and combined as the model's final output. Four variants of the LSCP framework are compared with seven widely used parallel frameworks. Experimental results demonstrate that one of these variants, LSCP_AOM, consistently outperforms baselines on the majority of twenty real-world datasets. <s> BIB018 </s> Progress in Outlier Detection Techniques: A Survey <s> VOLUME 7, 2019 <s> Selecting and combining the outlier scores of different base detectors used within outlier ensembles can be quite challenging in the absence of ground truth. In this paper, an unsupervised outlier detector combination framework called DCSO is proposed, demonstrated and assessed for the dynamic selection of most competent base detectors, with an emphasis on data locality. The proposed DCSO framework first defines the local region of a test instance by its k nearest neighbors and then identifies the top-performing base detectors within the local region. Experimental results on ten benchmark datasets demonstrate that DCSO provides consistent performance improvement over existing static combination approaches in mining outlying objects. 
To facilitate interpretability and reliability of the proposed method, DCSO is analyzed using both theoretical frameworks and visualization techniques, and presented alongside empirical parameter setting instructions that can be used to improve the overall performance. <s> BIB019
Ensemble-based techniques for outlier detection have had very few reports compared to other OD methods BIB002 , , BIB007 - BIB010 . Nevertheless, they are often used in recent outlier detection problems BIB016 , BIB012 and carry more open challenges. Ensemble techniques are used in cases where one must answer whether an outlier detector should be linear-model-based, distance-based, or based on some other kind of model. They are usually applied in classification and clustering problems: they combine the results of dissimilar models to produce a more robust model, reducing the dependency of any single model on a particular dataset or data locality. However, ensemble methods in the context of outlier detection are known to be very difficult. In recent years, several techniques have been introduced, including the following: (i) bagging BIB002 and boosting BIB012 for classification problems; (ii) isolation forest BIB003 for parallel techniques; (iii) BIB013 for sequential methods; and (iv) Extreme Gradient Boosting Outlier Detection (XGBOD) BIB016 and the Bagged Outlier Representation Ensemble (BORE) BIB011 for hybrid methods. Lazarevic et al. BIB002 proposed the first known ensemble method for improving outlier detection. It makes use of feature bagging to handle very large, high-dimensional datasets: the technique combines the outputs of multiple outlier detection algorithms, each applied to a randomly designated subset of features. Each algorithm randomly selects a small subset of the original feature set and assigns every data record an outlier score corresponding to its probability of being an outlier; the scores obtained from the different algorithms are then combined to find better-quality outliers (a sketch is given below). Their experiments show that the combined method can produce better outlier detection performance because it focuses on smaller feature projections and combines multiple distinct predictions. However, fully characterizing these methods for very large and high-dimensional datasets would be motivating future work, as would examining the impact of shifting data distributions on detecting the outliers in each round of the combined methods (not limited to distance-based approaches). Aggarwal BIB008 presented a study on outlier ensemble analysis, which has recently provoked great interest in the literature BIB017 , BIB014 . He discussed various outlier ensemble methods, how such analyses can be made more effective, and how these methods connect to ensemble methods in other data mining problems. Some examples of outlier ensembles in the context of classification and clustering were given: in the classification context, boosting BIB017 and bagging (bootstrap aggregating) BIB002 are two ensemble-based methods that have been proposed; in the clustering context, multiview clustering BIB001 and alternative clustering BIB005 serve as examples. Another critical part of his study is how to categorize ensemble outlier analysis problems: whether they are independent or sequential ensembles, and whether they are data-centered or model-centered ensembles.
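Before elaborating on that categorization, the feature-bagging scheme of Lazarevic et al. BIB002 is easy to make concrete. In the sketch below, LOF stands in as the base detector and the per-round scores are averaged; the round count and the use of plain averaging are assumptions of this sketch, since the original work also studies other combination rules:

import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def feature_bagging_scores(X, rounds=10, seed=0):
    """Average LOF scores over random feature subsets.
    Assumes X has at least 2 features and well over 20 rows."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    total = np.zeros(len(X))
    for _ in range(rounds):
        # Each round draws between d/2 and d-1 of the original features.
        feats = rng.choice(d, size=rng.integers(d // 2, d), replace=False)
        lof = LocalOutlierFactor(n_neighbors=20).fit(X[:, feats])
        total += -lof.negative_outlier_factor_   # higher = more outlying
    return total / rounds

Because each detector sees a different projection, outliers hiding in different feature subsets each get a chance to surface in at least some rounds.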
Returning to this categorization: ensemble algorithms are classified by ''component independence,'' which asks whether the different components of the ensemble are independent of one another. For example, in boosting, the results depend on a prior execution, so the components are not independent; bagging is the opposite, with components independent of one another. Under ''component type,'' each component of an ensemble is described by its model choice or data choice; the ''model-centered'' type is independent, while the ''data-centered'' type is sequential. However, one cannot draw an ultimate conclusion, because the categorization may depend on the foundation of the data and models. Succeeding studies , BIB004 , BIB006 that focused on using ensembles for outlier detection faced numerous challenges, including how to compare scores produced by different functions, how to fit mixture models to the outlier scores to give a score combination, and how to support the combination of different detectors or methods into one ensemble. Schubert et al. BIB006 compared outlier rankings based on their scores using similarity measures; as an application, a greedy ensemble technique was proposed, which shows the significance of ensemble performance gained through diversifying approaches. Earlier, in 2010, Nguyen et al. studied the difficulties of ensemble OD methods for high-dimensional datasets. They proposed a unified framework that combines non-compatible outlier detection algorithms: instead of applying the same approach each time to determine the outlier score, various detection methods are applied to approximate it. Using a formal concept of the outlier score, they propose Heterogeneous Detector Ensemble on random Subspaces (HeDES) with a combination of functions to address the issue of heterogeneity. Unlike the Lazarevic et al. BIB002 framework, HeDES can bring together techniques that produce different outlier scores and score types, for instance real-valued versus binary-valued scores. Even though their experimental studies show that the framework is effective at detecting outliers in real-world data sets, we believe a systematic follow-up that experiments with all possible combination functions, together with an extension of the analysis to larger and higher-dimensional datasets, could be interesting future work. Zimek et al. BIB007 proposed a random subsampling technique to estimate the nearest neighbors and, from them, the local density. Subsampling draws the training objects from a given dataset without replacement; this can improve and enhance the performance of the outlier detection method, and coupling other outlier detection algorithms with subsampling can give different results and higher efficiency (a sketch follows this paragraph). In another work, Zimek et al. BIB009 considered ensemble outlier detection from the perspective of learning theory. To construct the outlier detection ensemble, the authors proposed a data perturbation technique that induces diversity among outlier detectors, together with a method that combines the distinct outlier rankings. Their approach mainly utilizes distance and density estimation in Euclidean data spaces.
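A rough sketch of that subsampling idea, under the assumption that LOF in scikit-learn's novelty mode is an acceptable stand-in for the paper's neighborhood estimation and with an illustrative sample rate and ensemble size, might look like this:

import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def subsampling_ensemble(X, members=10, rate=0.1, seed=0):
    """Average outlier scores over detectors fit on subsamples.
    Assumes len(X) is comfortably above 50."""
    rng = np.random.default_rng(seed)
    n, total = len(X), np.zeros(len(X))
    for _ in range(members):
        # Each member sees only a subsample drawn without replacement...
        idx = rng.choice(n, size=max(int(rate * n), 50), replace=False)
        lof = LocalOutlierFactor(n_neighbors=10, novelty=True).fit(X[idx])
        # ...but scores the whole dataset against that subsample.
        total += -lof.score_samples(X)        # higher = more outlying
    return total / members

Because each member estimates neighbors only from its own subsample, the density estimates in sparse regions are perturbed differently per member, which is the source of the diversity the authors analyze.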
To get a more consistent density estimate in that approach, the attribute values of each point are perturbed by adding small randomized amounts of noise. All i perturbed, bootstrapped data sets are then passed through a selected outlier detection algorithm, which records each data point's identity, aggregates the scores, and ranks the positions; the i outlier scorings (or rankings) are then combined to attain a steadier, more dependable outlier scoring of the data. Pasillas-Diaz et al. BIB015 considered both subsampling and feature bagging: feature bagging is used to obtain the various feature subsets at each iteration, while subsampling calculates the outlier scores on the different subsets of data. One key drawback of their method is the difficulty of obtaining the variance of the objects through feature bagging; also, the size of the subsampled dataset influences the sensitivity of the final result. Zhao et al. BIB019 proposed Dynamic Combination of Detector Scores for Outlier Ensembles (DCSO), an unsupervised outlier detector combination framework. DCSO addresses the challenge of selecting and combining the outlier scores of different base detectors in the absence of ground truth. It selects the most suitable base detectors with a focus on data locality: DCSO first defines the local region of a test instance by its k nearest neighbors and then identifies the base detectors that perform best within that local region. Zhao et al. BIB018 proposed the Locally Selective Combination in Parallel Outlier Ensembles (LSCP) framework to address the same issues as BIB019 , using a similar approach and presenting four variants of the framework; a simplified sketch of this locality idea follows. For more detail and broader discussion of outlier ensemble techniques, the book by Aggarwal et al. is comprehensive and rich in detail: although most of the studies covered there predate 2017, it presents the different types of ensemble methods, categorizes them, and gives an overview of outlier ensemble design frameworks.
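The locality mechanism shared by DCSO and LSCP can be caricatured in a few lines. The sketch below assumes the base detectors have already scored the training set, uses the detectors' average score over the local region as a pseudo ground truth, and picks the locally best-correlated detector; the choice of Pearson correlation and of k mirrors one DCSO variant and is an assumption of this sketch:

import numpy as np
from sklearn.neighbors import NearestNeighbors

def select_detector(train_X, train_scores, test_x, k=30):
    """train_scores: (n_train, n_detectors) base-detector outlier scores.
    Returns the index of the detector judged most competent locally."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_X)
    region = nn.kneighbors(test_x.reshape(1, -1),
                           return_distance=False)[0]
    local = train_scores[region]               # (k, n_detectors)
    pseudo = local.mean(axis=1)                # pseudo ground truth
    corr = np.nan_to_num([np.corrcoef(local[:, j], pseudo)[0, 1]
                          for j in range(local.shape[1])])
    return int(np.argmax(corr))                # caller applies this detector

The caller then scores the test instance with the selected detector only; LSCP-style variants instead combine a small set of the locally best detectors rather than committing to a single one.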
Progress in Outlier Detection Techniques: A Survey <s> Disadvantages, Challenges, And Gaps <s> Outlier detection has recently become an important problem in many industrial and financial applications. In this paper, a novel feature bagging approach for detecting outliers in very large, high dimensional and noisy databases is proposed. It combines results from multiple outlier detection algorithms that are applied using different set of features. Every outlier detection algorithm uses a small subset of features that are randomly selected from the original feature set. As a result, each outlier detector identifies different outliers, and thus assigns to all data records outlier scores that correspond to their probability of being outliers. The outlier scores computed by the individual outlier detection algorithms are then combined in order to find the better quality outliers. Experiments performed on several synthetic and real life data sets show that the proposed methods for combining outputs from multiple outlier detection algorithms provide non-trivial improvements over the base algorithm. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> Disadvantages, Challenges, And Gaps <s> Most existing model-based approaches to anomaly detection construct a profile of normal instances, then identify instances that do not conform to the normal profile as anomalies. This paper proposes a fundamentally different model-based method that explicitly isolates anomalies instead of profiles normal points. To our best knowledge, the concept of isolation has not been explored in current literature. The use of isolation enables the proposed method, iForest, to exploit sub-sampling to an extent that is not feasible in existing methods, creating an algorithm which has a linear time complexity with a low constant and a low memory requirement. Our empirical evaluation shows that iForest performs favourably to ORCA, a near-linear time complexity distance-based method, LOF and random forests in terms of AUC and processing time, and especially in large data sets. iForest also works well in high dimensional problems which have a large number of irrelevant attributes, and in situations where training set does not contain any anomalies. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> Disadvantages, Challenges, And Gaps <s> Outlier scores provided by different outlier models differ widely in their meaning, range, and contrast between different outlier models and, hence, are not easily comparable or interpretable. We propose a unification of outlier scores provided by various outlier models and a translation of the arbitrary “outlier factors” to values in the range [0, 1] interpretable as values describing the probability of a data object of being an outlier. As an application, we show that this unification facilitates enhanced ensembles for outlier detection. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> Disadvantages, Challenges, And Gaps <s> Ensembles for unsupervised outlier detection is an emerging topic that has been neglected for a surprisingly long time (although there are reasons why this is more difficult than supervised ensembles or even clustering ensembles). Aggarwal recently discussed algorithmic patterns of outlier detection ensembles, identified traces of the idea in the literature, and remarked on potential as well as unlikely avenues for future transfer of concepts from supervised ensembles. 
Complementary to his points, here we focus on the core ingredients for building an outlier ensemble, discuss the first steps taken in the literature, and identify challenges for future research. <s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> Disadvantages, Challenges, And Gaps <s> Outlier detection and ensemble learning are well established research directions in data mining yet the application of ensemble techniques to outlier detection has been rarely studied. Building an ensemble requires learning of diverse models and combining these diverse models in an appropriate way. We propose data perturbation as a new technique to induce diversity in individual outlier detectors as well as a rank accumulation method for the combination of the individual outlier rankings in order to construct an outlier detection ensemble. In an extensive evaluation, we study the impact, potential, and shortcomings of this new approach for outlier detection ensembles. We show that this ensemble can significantly improve over weak performing base methods. <s> BIB005 </s> Progress in Outlier Detection Techniques: A Survey <s> Disadvantages, Challenges, And Gaps <s> Ensemble analysis has recently been studied in the context of the outlier detection problem. In this paper, we investigate the theoretical underpinnings of outlier ensemble analysis. In spite of the significant differences between the classification and the outlier analysis problems, we show that the theoretical underpinnings between the two problems are actually quite similar in terms of the bias-variance trade-off. We explain the existing algorithms within this traditional framework, and clarify misconceptions about the reasoning underpinning these methods. We propose more effective variants of subsampling and feature bagging. We also discuss the impact of the combination function and discuss the specific trade-offs of the average and maximization functions. We use these insights to propose new combination functions that are robust in many settings. <s> BIB006 </s> Progress in Outlier Detection Techniques: A Survey <s> Disadvantages, Challenges, And Gaps <s> Ensemble techniques for classification and clustering have long proven effective, yet anomaly ensembles have been barely studied. In this work, we tap into this gap and propose a new ensemble approach for anomaly mining, with application to event detection in temporal graphs. Our method aims to combine results from heterogeneous detectors with varying outputs, and leverage the evidence from multiple sources to yield better performance. However, trusting all the results may deteriorate the overall ensemble accuracy, as some detectors may fall short and provide inaccurate results depending on the nature of the data in hand. This suggests that being selective in which results to combine is vital in building effective ensembles; hence "less is more". In this paper we propose SELECT; an ensemble approach for anomaly mining that employs novel techniques to automatically and systematically select the results to assemble in a fully unsupervised fashion. We apply our method to event detection in temporal graphs, where SELECT successfully utilizes five base detectors and seven consensus methods under a unified ensemble framework. We provide extensive quantitative evaluation of our approach on five real-world datasets (four with ground truth), including Enron email communications, New York Times news corpus, and World Cup 2014 Twitter news feed.
Thanks to its selection mechanism, SELECT yields superior performance compared to individual detectors alone, the full ensemble (naively combining all results), and an existing diversity-based ensemble. <s> BIB007
i. Ensemble techniques for outlier detection are poorly developed compared to their counterparts for other data mining problems. This stems from the difficulty of evaluating the individual members of an ensemble; moreover, selecting the right meta-detectors is itself a hard task.
ii. For real datasets, outlier analysis is difficult to evaluate because of the combination of a small sample space and the unsupervised nature of the task. As a consequence, an algorithm can make incorrect intermediate decisions, and it is hard to make those decisions robust without triggering over-fitting.
Although outlier ensemble techniques have shown promising results, there is still room for improvement. Ensemble analysis can be particularly useful for noisy data and streaming scenarios, because in these settings individual detectors are hampered by poor data quality and tight processing-time budgets, which make their results unreliable. More techniques are being proposed to address the challenge of model combination. To address these challenges and the others raised by Zimek et al. BIB004 , several additional methods have been proposed , BIB006 , BIB003 - BIB002 to improve ensemble-based outlier detection, but most of them are meta-methods, with the exception of the approach suggested in BIB001 . Delving deeper into outlier ensemble techniques, Zimek et al. BIB004 presented several open questions and challenges in using ensemble methods for outlier detection. Although emerging research has started to address these open problems BIB005 , the questions of how to induce diversity among detectors and how to combine outlier rankings remain open and engaging directions for future research. Some techniques BIB004 , BIB007 are static and do not involve any detector selection; the absence of a selection process BIB007 hinders performance when identifying previously unseen outlier cases. Another aspect that has received little attention is data locality. An open research problem is to take data locality into account: instead of evaluating the competence of a base detector only from a global view, the local region around the test objects can be considered as well, which would support both the detector selection and combination processes. Further essential research problems for ensemble outlier methods are devising principles for creating a variety of models and finding meaningful ways of combining outlier rankings.
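To make the combination step concrete, the sketch below applies feature bagging in the spirit of BIB001 , using scikit-learn's LocalOutlierFactor as the base detector and a per-round min-max normalisation of the scores (one simple instance of the score unification idea of BIB003 ) before averaging. This is a minimal sketch under stated assumptions, not the algorithm of the cited papers: the number of rounds, the neighbourhood size, and the feature-subset sizes are illustrative choices.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def feature_bagging_scores(X, n_rounds=10, n_neighbors=20, seed=0):
    """Average LOF scores computed on random feature subsets
    (feature-bagging sketch; assumes len(X) > n_neighbors)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    combined = np.zeros(n)
    for _ in range(n_rounds):
        # each round draws a random subset of between d//2 and d-1 features
        k = int(rng.integers(max(1, d // 2), d)) if d > 1 else 1
        feats = rng.choice(d, size=k, replace=False)
        lof = LocalOutlierFactor(n_neighbors=n_neighbors)
        lof.fit(X[:, feats])
        raw = -lof.negative_outlier_factor_      # higher = more outlying
        # min-max normalise so scores from different rounds are comparable
        raw = (raw - raw.min()) / (raw.max() - raw.min() + 1e-12)
        combined += raw
    return combined / n_rounds
```

Averaging is only one choice of combination function; BIB006 analyses the trade-offs between averaging and maximisation and proposes combination functions that are more robust in many settings.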
Progress in Outlier Detection Techniques: A Survey <s> 1) ACTIVE LEARNING <s> The outlier detection problem has important applications in the field of fraud detection, network robustness analysis, and intrusion detection. Most such applications are most important for high-dimensional domains in which the data can contain hundreds of dimensions. Many recent algorithms have been proposed for outlier detection that use several concepts of proximity in order to find the outliers based on their relationship to the other points in the data. However, in high-dimensional space, the data are sparse and concepts using the notion of proximity fail to retain their effectiveness. In fact, the sparsity of high-dimensional data can be understood in a different way so as to imply that every point is an equally good outlier from the perspective of distance-based definitions. Consequently, for high-dimensional data, the notion of finding meaningful outliers becomes substantially more complex and nonobvious. In this paper, we discuss new techniques for outlier detection that find the outliers by studying the behavior of projections from the data set. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) ACTIVE LEARNING <s> Traditional outlier mining methods identify outliers from a global point of view. It is usually difficult to find deviated data points in low-dimensional subspaces using these methods. The concept lattice, due to its straight-forwardness, conciseness and completeness in knowledge expression, has become an effective tool for data analysis and knowledge discovery. In this paper, a concept lattice based outlier mining algorithm (CLOM) for low-dimensional subspaces is proposed, which treats the intent of every concept lattice node as a subspace. First, sparsity and density coefficients, which measure outliers in low-dimensional subspaces, are defined and discussed. Second, the intent of a concept lattice node is regarded as a subspace, and sparsity subspaces are identified based on a predefined sparsity coefficient threshold. At this stage, whether the intent of any ancestor node of a sparsity subspace is a density subspace is identified based on a predefined density coefficient threshold. If it is a density subspace, then the objects in the extent of the node whose intent is a sparsity subspace are defined as outliers. Experimental results on a star spectral database show that CLOM is effective in mining outliers in low-dimensional subspaces. The accuracy of the results is also greatly improved. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) ACTIVE LEARNING <s> Anomaly detection is being regarded as an unsupervised learning task as anomalies stem from adversarial or unlikely events with unknown distributions. However, the predictive performance of purely unsupervised anomaly detection often fails to match the required detection rates in many tasks and there exists a need for labeled data to guide the model generation. Our first contribution shows that classical semi-supervised approaches, originating from a supervised classifier, are inappropriate and hardly detect new and unknown anomalies. We argue that semi-supervised anomaly detection needs to ground on the unsupervised learning paradigm and devise a novel algorithm that meets this requirement. Although being intrinsically non-convex, we further show that the optimization problem has a convex equivalent under relatively mild assumptions. 
Additionally, we propose an active learning strategy to automatically filter candidates for labeling. In an empirical study on network intrusion detection data, we observe that the proposed learning methodology requires much less labeled data than the state-of-the-art, while achieving higher detection accuracies. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) ACTIVE LEARNING <s> Traditional outlier mining methods identify outliers from a global point of view. These methods are inefficient to find locally biased data points outliers in low dimensional subspaces. Constrained concept lattices can be used as an effective formal tool for data analysis because constrained concept lattices have the characteristics of high constructing efficiency, practicability and pertinency. In this paper, we propose an outlier mining algorithm that treats the intent of any constrained concept lattice node as a subspace. We introduce sparsity and density coefficients to measure outliers in low dimensional subspaces. The intent of any constrained concept lattice node is regarded as a subspace, and sparsity subspaces are searched by traversing the constrained concept lattice according to a sparsity coefficient threshold. If the intent of any father node of the sparsity subspace is a density subspace according to a density coefficient threshold, then objects contained in the extent of the sparsity subspace node are considered as bias data points or outliers. Our experimental results show that the proposed algorithm performs very well for high red-shift spectral data sets. <s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) ACTIVE LEARNING <s> Unsupervised anomaly detection algorithms search for outliers and then predict that these outliers are the anomalies. When deployed, however, these algorithms are often criticized for high false positive and high false negative rates. One cause of poor performance is that not all outliers are anomalies and not all anomalies are outliers. In this paper, we describe an Active Anomaly Discovery (AAD) method for incorporating expert feedback to adjust the anomaly detector so that the outliers it discovers are more in tune with the expert user's semantic understanding of the anomalies. The AAD approach is designed to operate in an interactive data exploration loop. In each iteration of this loop, our algorithm first selects a data instance to present to the expert as a potential anomaly and then the expert labels the instance as an anomaly or as a nominal data point. Our algorithm updates its internal model with the instance label and the loop continues until a budget of B queries is spent. The goal of our approach is to maximize the total number of true anomalies in the B instances presented to the expert. We show that when compared to other state-of-the-art algorithms, AAD is consistently one of the best performers. <s> BIB005 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) ACTIVE LEARNING <s> Outlier detection has been an active area of research for a few decades. We propose a new definition of outlier that is useful for high-dimensional data. According to this definition, given a dictionary of atoms learned using the sparse coding objective, the outlierness of a data point depends jointly on two factors: the frequency of each atom in reconstructing all data points (or its negative log activity ratio, NLAR) and the strength by which it is used in reconstructing the current point. 
A Rarity based Outlier Detection algorithm in a Sparse coding framework (RODS) that consists of two components, NLAR learning and outlier scoring, is developed. This algorithm is unsupervised; both the offline and online variants are presented. It is governed by very few manually-tunable parameters and operates in linear time. We demonstrate the superior performance of the RODS in comparison with various state-of-the-art outlier detection algorithms on several benchmark datasets. We also demonstrate its effectiveness using three real-world case studies: saliency detection in images, abnormal event detection in videos, and change detection in data streams. Our evaluations shows that RODS outperforms competing algorithms reported in the outlier detection, saliency detection, video event detection, and change detection literature. <s> BIB006 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) ACTIVE LEARNING <s> This work formalizes the new framework for anomaly detection, called active anomaly detection. This framework has, in practice, the same cost of unsupervised anomaly detection but with the possibility of much better results. We show that unsupervised anomaly detection is an undecidable problem and that a prior needs to be assumed for the anomalies probability distribution in order to have performance guarantees. Finally, we also present a new layer that can be attached to any deep learning model designed for unsupervised anomaly detection to transform it into an active anomaly detection method, presenting results on both synthetic and real anomaly detection datasets. <s> BIB007 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) ACTIVE LEARNING <s> Outlier detection is an important topic in machine learning and has been used in a wide range of applications. In this paper, we approach outlier detection as a binary-classification issue by sampling potential outliers from a uniform reference distribution. However, due to the sparsity of data in high-dimensional space, a limited number of potential outliers may fail to provide sufficient information to assist the classifier in describing a boundary that can separate outliers from normal data effectively. To address this, we propose a novel Single-Objective Generative Adversarial Active Learning (SO-GAAL) method for outlier detection, which can directly generate informative potential outliers based on the mini-max game between a generator and a discriminator. Moreover, to prevent the generator from falling into the mode collapsing problem, the stop node of training should be determined when SO-GAAL is able to provide sufficient information. But without any prior information, it is extremely difficult for SO-GAAL. Therefore, we expand the network structure of SO-GAAL from a single generator to multiple generators with different objectives (MO-GAAL), which can generate a reasonable reference distribution for the whole dataset. We empirically compare the proposed approach with several state-of-the-art outlier detection methods on both synthetic and real-world datasets. The results show that MO-GAAL outperforms its competitors in the majority of cases, especially for datasets with various cluster types or high irrelevant variable ratio.
The experiment codes are available at: https://github.com/leibinghe/GAAL-based-outlier-detection <s> BIB008 </s> Progress in Outlier Detection Techniques: A Survey <s> 1) ACTIVE LEARNING <s> Anomaly detection (AD) task corresponds to identifying the true anomalies from a given set of data instances. AD algorithms score the data instances and produce a ranked list of candidate anomalies, which are then analyzed by a human to discover the true anomalies. However, this process can be laborious for the human analyst when the number of false-positives is very high. Therefore, in many real-world AD applications including computer security and fraud prevention, the anomaly detector must be configurable by the human analyst to minimize the effort on false positives. ::: In this paper, we study the problem of active learning to automatically tune ensemble of anomaly detectors to maximize the number of true anomalies discovered. We make four main contributions towards this goal. First, we present an important insight that explains the practical successes of AD ensembles and how ensembles are naturally suited for active learning. Second, we present several algorithms for active learning with tree-based AD ensembles. These algorithms help us to improve the diversity of discovered anomalies, generate rule sets for improved interpretability of anomalous instances, and adapt to streaming data settings in a principled manner. Third, we present a novel algorithm called GLocalized Anomaly Detection (GLAD) for active learning with generic AD ensembles. GLAD allows end-users to retain the use of simple and understandable global anomaly detectors by automatically learning their local relevance to specific data instances using label feedback. Fourth, we present extensive experiments to evaluate our insights and algorithms. Our results show that in addition to discovering significantly more anomalies than state-of-the-art unsupervised baselines, our active learning algorithms under the streaming-data setup are competitive with the batch setup. <s> BIB009
Active learning is a semi-supervised learning paradigm in which the algorithm interacts with a user or other information source to obtain the desired outputs BIB005 , BIB007 . For example, for real datasets with huge amounts of unlabeled data, manual labeling is expensive; such a scenario demands that the learning algorithm actively query the information source or user. The algorithm then exploits the small fraction of instances labeled by the user to improve the re-trained model. In other words, active learning is a setting in which the learning algorithm can request labels for selected instances in order to make better predictions. Active learning for outlier detection has recently been embraced in different research domains BIB002 - BIB008 . Aggarwal et al. BIB001 use active learning in outlier detection to address two issues: the lack of clear reasons why outliers are flagged, and the relatively high computational demand of density-estimation-based OD methods. In their approach, they first apply classification techniques to a labeled dataset augmented with artificially generated potential outliers. Active learning is then used to reduce the classification effort through a selective sampling mechanism known as "ensemble-based minimum margin active learning." Gornitz et al. BIB003 proposed another anomaly detection approach built on an active learning strategy: to obtain good predictive performance, they alternate between active label queries and model updates, applying the active learning rule after the method has been trained on the unlabeled and newly labeled examples. Das et al. BIB006 , BIB004 used an active approach in which the human analyst is queried to obtain better results. They actively select the most informative data instances for querying, but give no clear insight into, or interpretation of, this design choice; a follow-up study tries to address these issues. In 2019, Das et al. BIB009 proposed an ensemble-based active outlier detection method called GLocalized Anomaly Detection (GLAD), which studies how to automatically tune ensembles of outlier detectors in active learning settings. In GLAD, end-users retain the use of simple and understandable global outlier detectors, whose local relevance to specific data instances is learned automatically from label feedback. Fine-tuning the ensemble in this way maximizes the number of true outliers discovered. Such a framework is referred to as human-in-the-loop, because the human analyst provides label feedback in every round of iteration. Even though active learning for outlier detection has recently gained traction, it is still under-represented in the literature, and more work needs to be done. Discovering true outliers can be laborious for the human analyst, so future techniques should be designed and configured to minimize the analyst's effort spent on false positives. In addition, better insights into, and interpretations of, the outlier scores and results produced by different algorithms are needed.
Active learning in the context of outlier detection also needs solid interpretation and explanation before it can be fully understood by the research community. Finally, the design of active learning algorithms that handle data streams is another promising research challenge.
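The human-in-the-loop procedure described above can be summarised in a short schematic loop. The sketch below is not the AAD or GLAD algorithm itself: the `expert_label` callback is a hypothetical stand-in for the human analyst, and the distance-weighted score update is a deliberately simple, invented placeholder for the learned model updates used in methods such as BIB005 .

```python
import numpy as np

def active_anomaly_loop(X, scores, expert_label, budget=20):
    """Query the analyst about the top-ranked unlabelled point, then
    re-weight the scores with the feedback (toy stand-in for AAD-style
    updates); returns the collected labels and the adjusted scores."""
    scores = np.asarray(scores, dtype=float).copy()
    labelled = {}                                  # index -> True/False
    for _ in range(budget):
        ranking = np.argsort(-scores)              # most anomalous first
        idx = next(i for i in ranking if i not in labelled)
        labelled[idx] = expert_label(idx)          # ask the human analyst
        # toy update: boost the neighbourhood of confirmed anomalies,
        # damp the neighbourhood of points confirmed as nominal
        dists = np.linalg.norm(X - X[idx], axis=1)
        influence = np.exp(-dists / (dists.std() + 1e-12))
        scores += influence if labelled[idx] else -influence
    return labelled, scores
```

The goal mirrors the one stated in BIB005 : spend a budget of B queries so that as many of the instances shown to the expert as possible are true anomalies.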
Progress in Outlier Detection Techniques: A Survey <s> 2) SUBSPACE LEARNING <s> The outlier detection problem has important applications in the field of fraud detection, network robustness analysis, and intrusion detection. Most such applications are most important for high-dimensional domains in which the data can contain hundreds of dimensions. Many recent algorithms have been proposed for outlier detection that use several concepts of proximity in order to find the outliers based on their relationship to the other points in the data. However, in high-dimensional space, the data are sparse and concepts using the notion of proximity fail to retain their effectiveness. In fact, the sparsity of high-dimensional data can be understood in a different way so as to imply that every point is an equally good outlier from the perspective of distance-based definitions. Consequently, for high-dimensional data, the notion of finding meaningful outliers becomes substantially more complex and nonobvious. In this paper, we discuss new techniques for outlier detection that find the outliers by studying the behavior of projections from the data set. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) SUBSPACE LEARNING <s> Outlier detection is an important data mining task for consistency checks, fraud detection, etc. Binary decision making on whether or not an object is an outlier is not appropriate in many applications and moreover hard to parametrize. Thus, recently, methods for outlier ranking have been proposed. Determining the degree of deviation, they do not require setting a decision boundary between outliers and the remaining data. High dimensional and heterogeneous (continuous and categorical attributes) data, however, pose a problem for most outlier ranking algorithms. In this work, we propose our OutRank approach for ranking outliers in heterogeneous high dimensional data. We introduce a consistent model for different attribute types. Our novel scoring functions transform the analyzed structure of the data to a meaningful ranking. Promising results in preliminary experiments show the potential for successful outlier ranking in high dimensional data. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) SUBSPACE LEARNING <s> Traditional outlier mining methods identify outliers from a global point of view. It is usually difficult to find deviated data points in low-dimensional subspaces using these methods. The concept lattice, due to its straight-forwardness, conciseness and completeness in knowledge expression, has become an effective tool for data analysis and knowledge discovery. In this paper, a concept lattice based outlier mining algorithm (CLOM) for low-dimensional subspaces is proposed, which treats the intent of every concept lattice node as a subspace. First, sparsity and density coefficients, which measure outliers in low-dimensional subspaces, are defined and discussed. Second, the intent of a concept lattice node is regarded as a subspace, and sparsity subspaces are identified based on a predefined sparsity coefficient threshold. At this stage, whether the intent of any ancestor node of a sparsity subspace is a density subspace is identified based on a predefined density coefficient threshold. If it is a density subspace, then the objects in the extent of the node whose intent is a sparsity subspace are defined as outliers. 
Experimental results on a star spectral database show that CLOM is effective in mining outliers in low-dimensional subspaces. The accuracy of the results is also greatly improved. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) SUBSPACE LEARNING <s> We propose an original outlier detection schema that detects outliers in varying subspaces of a high dimensional feature space. In particular, for each object in the data set, we explore the axis-parallel subspace spanned by its neighbors and determine how much the object deviates from the neighbors in this subspace. In our experiments, we show that our novel subspace outlier detection is superior to existing full-dimensional approaches and scales well to high dimensional databases. <s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) SUBSPACE LEARNING <s> Outlier mining is an important data analysis task to distinguish exceptional outliers from regular objects. For outlier mining in the full data space, there are well established methods which are successful in measuring the degree of deviation for outlier ranking. However, in recent applications traditional outlier mining approaches miss outliers as they are hidden in subspace projections. Especially, outlier ranking approaches measuring deviation on all available attributes miss outliers deviating from their local neighborhood only in subsets of the attributes. In this work, we propose a novel outlier ranking based on the objects deviation in a statistically selected set of relevant subspace projections. This ensures to find objects deviating in multiple relevant subspaces, while it excludes irrelevant projections showing no clear contrast between outliers and the residual objects. Thus, we tackle the general challenges of detecting outliers hidden in subspaces of the data. We provide a selection of subspaces with high contrast and propose a novel ranking based on an adaptive degree of deviation in arbitrary subspaces. In thorough experiments on real and synthetic data we show that our approach outperforms competing outlier ranking approaches by detecting outliers in arbitrary subspace projections. <s> BIB005 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) SUBSPACE LEARNING <s> Outlier mining is a major task in data analysis. Outliers are objects that highly deviate from regular objects in their local neighborhood. Density-based outlier ranking methods score each object based on its degree of deviation. In many applications, these ranking methods degenerate to random listings due to low contrast between outliers and regular objects. Outliers do not show up in the scattered full space, they are hidden in multiple high contrast subspace projections of the data. Measuring the contrast of such subspaces for outlier rankings is an open research challenge. In this work, we propose a novel subspace search method that selects high contrast subspaces for density-based outlier ranking. It is designed as pre-processing step to outlier ranking algorithms. It searches for high contrast subspaces with a significant amount of conditional dependence among the subspace dimensions. With our approach, we propose a first measure for the contrast of subspaces. Thus, we enhance the quality of traditional outlier rankings by computing outlier scores in high contrast projections only. 
The evaluation on real and synthetic data shows that our approach outperforms traditional dimensionality reduction techniques, naive random projections as well as state-of-the-art subspace search techniques and provides enhanced quality for outlier ranking. <s> BIB006 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) SUBSPACE LEARNING <s> Outlier detection and ensemble learning are well established research directions in data mining yet the application of ensemble techniques to outlier detection has been rarely studied. Here, we propose and study subsampling as a technique to induce diversity among individual outlier detectors. We show analytically and experimentally that an outlier detector based on a subsample per se, besides inducing diversity, can, under certain conditions, already improve upon the results of the same outlier detector on the complete dataset. Building an ensemble on top of several subsamples is further improving the results. While in the literature so far the intuition that ensembles improve over single outlier detectors has just been transferred from the classification literature, here we also justify analytically why ensembles are also expected to work in the unsupervised area of outlier detection. As a side effect, running an ensemble of several outlier detectors on subsamples of the dataset is more efficient than ensembles based on other means of introducing diversity and, depending on the sample rate and the size of the ensemble, can be even more efficient than just the single outlier detector on the complete data. <s> BIB007 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) SUBSPACE LEARNING <s> We propose a new approach for outlier detection, based on a ranking measure that focuses on the question of whether a point is ‘central’ for its nearest neighbours. Using our notations, a low cumulative rank implies that the point is central. For instance, a point centrally located in a cluster has a relatively low cumulative sum of ranks because it is among the nearest neighbours of its own nearest neighbours, but a point at the periphery of a cluster has a high cumulative sum of ranks because its nearest neighbours are closer to each other than the point. Use of ranks eliminates the problem of density calculation in the neighbourhood of the point and this improves the performance. Our method performs better than several density-based methods on some synthetic data sets as well as on some real data sets. <s> BIB008 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) SUBSPACE LEARNING <s> Outlier detection has been an active area of research for a few decades. We propose a new definition of outlier that is useful for high-dimensional data. According to this definition, given a dictionary of atoms learned using the sparse coding objective, the outlierness of a data point depends jointly on two factors: the frequency of each atom in reconstructing all data points (or its negative log activity ratio, NLAR) and the strength by which it is used in reconstructing the current point. A Rarity based Outlier Detection algorithm in a Sparse coding framework (RODS) that consists of two components, NLAR learning and outlier scoring, is developed. This algorithm is unsupervised; both the offline and online variants are presented. It is governed by very few manually-tunable parameters and operates in linear time.
We demonstrate the superior performance of the RODS in comparison with various state-of-the-art outlier detection algorithms on several benchmark datasets. We also demonstrate its effectiveness using three real-world case studies: saliency detection in images, abnormal event detection in videos, and change detection in data streams. Our evaluations shows that RODS outperforms competing algorithms reported in the outlier detection, saliency detection, video event detection, and change detection literature. <s> BIB009 </s> Progress in Outlier Detection Techniques: A Survey <s> 2) SUBSPACE LEARNING <s> Outlier detection in high-dimensional data is a challenging yet important task, as it has applications in, e.g., fraud detection and quality control. State-of-the-art density-based algorithms perform well because they 1) take the local neighbourhoods of data points into account and 2) consider feature subspaces. In highly complex and high-dimensional data, however, existing methods are likely to overlook important outliers because they do not explicitly take into account that the data is often a mixture distribution of multiple components. We therefore introduce GLOSS, an algorithm that performs local subspace outlier detection using global neighbourhoods. Experiments on synthetic data demonstrate that GLOSS more accurately detects local outliers in mixed data than its competitors. Moreover, experiments on real-world data show that our approach identifies relevant outliers overlooked by existing methods, confirming that one should keep an eye on the global perspective even when doing local outlier detection. <s> BIB010
The outlier detection methods discussed so far typically detect outliers in the full data space, considering all dimensions. However, outliers often manifest only as unusual neighborhood behavior in a low-dimensional subspace. For objects with many attributes, Zimek et al. BIB007 note that only subsets of relevant attributes provide valuable information, while the remaining attributes contribute little or nothing to the task and may even mask the outliers from the OD model; it is therefore preferable to look for outliers in a suitable subspace. In the outlier detection field, subspace learning is widely studied and applied to high-dimensional problems. The main objective of subspace-learning-based OD approaches is to discover meaningful outliers efficiently by examining different subsets of dimensions of the dataset. These approaches are mostly divided into sparse subspace BIB003 , BIB009 and relevant subspace BIB008 , BIB002 , BIB005 methods. The former project the high-dimensional data points onto sparse, low-dimensional subspaces; objects falling into a sparse subspace can be labeled as outliers, since such regions have low density. One big drawback of these methods is the time consumed in exploring sparse projections of the entire high-dimensional space. To address this drawback, Aggarwal et al. BIB001 proposed a method that explores the subspaces more effectively by means of an evolutionary algorithm; however, its performance depends strongly on the initial population. Another method in the sparse-subspace line is that of Zhang et al. BIB003 , which uses a concept lattice to represent the subspace relationships; sparse subspaces are again those with low density coefficients. Constructing the concept lattice, however, is complex, which hinders the efficiency of the method. Dutta et al. BIB009 proposed a different way to obtain sparse subspaces, applying sparse coding to map objects into a linearly transformed space. Relevant-subspace OD methods, in contrast, examine the local information in subspaces of informative features, which is useful for identifying outliers. Huang et al. BIB008 proposed Subspace Outlier Detection (SOD), a relevant-subspace method in which every object is examined with respect to the subspace spanned by its shared nearest neighbors. They use the ranks of nearby objects as the degree of proximity, but do not take into account the objects' distances to their neighbors; SOD focuses mainly on the variances of the features. Muller et al. BIB005 proposed another method for determining the relevant subspaces that exploits significant statistical relationships among the features, unlike SOD, which only considers their variances; a significant drawback of their method, however, is its computational demand. In a similar study, Kriegel et al. BIB004 applied principal component analysis to obtain the relevant subspaces and detected outliers via a Mahalanobis-distance computation based on a gamma distribution. The key difference from the previous study BIB005 is that a large amount of local data is needed to identify the deviation trend, which in turn limits the flexibility and scalability of the method.
To address this flexibility issue, Keller et al. BIB006 designed a flexible OD technique that decouples subspace search from outlier ranking: High Contrast Subspaces (HiCS) are first obtained via Monte Carlo sampling, and the LOF scores computed in these subspaces are then combined. In another study, Stein et al. BIB010 proposed a local subspace OD method that adopts global neighborhoods; HiCS is again used to obtain the relevant subspaces, but the LoOP technique is used instead of LOF to calculate the outlier scores. Although subspace-learning OD methods are effective and useful in many cases, they are generally computationally expensive, because they must first explore the subspaces of a high-dimensional space. Discovering the subspaces in which the outliers are actually visible is another difficult task. Designing effective methods to handle these challenges is an exciting direction for future research on subspace OD methods. A minimal sketch of the relevant-subspace idea follows.
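The sketch below scores each point by its deviation from its neighbourhood, measured only on the low-variance attributes of that neighbourhood, loosely in the spirit of SOD BIB008 . It is an illustrative simplification: the neighbourhood size and the variance threshold are assumptions, and the shared-nearest-neighbour refinement of the original method is omitted.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def relevant_subspace_scores(X, k=20):
    """Score each point by its deviation from the mean of its k nearest
    neighbours, restricted to the attributes in which those neighbours
    have low variance (the 'relevant subspace' for that point)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                 # column 0 is the point itself
    scores = np.empty(len(X))
    for i, neigh in enumerate(idx[:, 1:]):
        ref = X[neigh]
        var = ref.var(axis=0)
        relevant = var < var.mean()           # attributes where neighbours agree
        if not relevant.any():                # fall back to the full space
            relevant[:] = True
        diff = X[i, relevant] - ref[:, relevant].mean(axis=0)
        # normalise by subspace size so scores of different points compare
        scores[i] = np.sqrt((diff ** 2).sum() / relevant.sum())
    return scores
```

A point that lies in a cluster but deviates in exactly the attributes where its neighbours agree receives a high score, even if it looks unremarkable in the full space.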
Progress in Outlier Detection Techniques: A Survey <s> 3) GRAPH-BASED LEARNING METHODS <s> This paper introduces a stochastic graph-based algorithm, called OutRank, for detecting outliers in data. We consider two approaches for constructing a graph representation of the data, based on the object similarity and number of shared neighbors between objects. The heart of this approach is the Markov chain model that is built upon this graph, which assigns an outlier score to each object. Using this framework, we show that our algorithm is more robust than the existing outlier detection schemes and can effectively address the inherent problems of such schemes. Empirical studies conducted on both real and synthetic data sets show that significant improvements in detection rate and false alarm rate are achieved using the proposed framework. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> 3) GRAPH-BASED LEARNING METHODS <s> Detecting anomalies in data is a vital task, with numerous high-impact applications in areas such as security, finance, health care, and law enforcement. While numerous techniques have been developed in past years for spotting outliers and anomalies in unstructured collections of multi-dimensional points, with graph data becoming ubiquitous, techniques for structured graph data have been of focus recently. As objects in graphs have long-range correlations, a suite of novel technology has been developed for anomaly detection in graph data. This survey aims to provide a general, comprehensive, and structured overview of the state-of-the-art methods for anomaly detection in data represented as graphs. As a key contribution, we give a general framework for the algorithms categorized under various settings: unsupervised versus (semi-)supervised approaches, for static versus dynamic graphs, for attributed versus plain graphs. We highlight the effectiveness, scalability, generality, and robustness aspects of the methods. What is more, we stress the importance of anomaly attribution and highlight the major techniques that facilitate digging out the root cause, or the `why', of the detected anomalies for further analysis and sense-making. Finally, we present several real-world applications of graph-based anomaly detection in diverse domains, including financial, auction, computer traffic, and social networks. We conclude our survey with a discussion on open theoretical and practical challenges in the field. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> 3) GRAPH-BASED LEARNING METHODS <s> Large number of outlier detection methods have emerged in recent years due to their importance in many real-world applications. The graph-based methods, which can effectively capture the inter-dependencies of related objects, is one of the most powerful methods in this area. However, most of the graph-based methods ignore the local information around each node, which leads to a high false-positive rate for outlier detection. In this study, we present a new outlier detection model, which combines the graph representation with the local information around each object to construct a local information graph, and calculates the outlier score by performing a random walk process on the graph. Local information graph is constructed to capture the asymmetric inter-dependencies relationship between various types of data objects. Based on two different types of restart vectors to solve the dangling link problem, we propose two distinct algorithms for outlier detection. 
Experiments on synthetic datasets indicate that the proposed algorithms could efficiently distinguish outlier objects in different distributed datasets. Furthermore, the results on a number of real-world datasets also show that our approaches outperform the state-of-the-art outlier detection methods. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> 3) GRAPH-BASED LEARNING METHODS <s> Owing to its wide applications in both industry and academia, a large number of new approaches are emerging every year in the field of outlier detection. Among which, neighborhood-based approaches are adopted by a great number of researchers and they still represent the mainstream in the field. However, how to determine appropriate local information from the definition of neighbors is an arduous problem which still has no widely accepted solution. In this study, we propose a new outlier detection model utilizing multiple neighborhood graphs, each of which is based on changed neighbors to capture various local information from different perspectives. An outlier score for each object is then deduced by performing random walk on the predefined graphs. Experiments on ten real-world datasets suggested that the proposed model could obtain promising results compared with four state-of-the-art algorithms by the measure of ROC AUC and precision at n. <s> BIB004
Graph data is becoming ubiquitous in many domains, and graph-based learning for OD has attracted a number of researchers. Because objects in graphs exhibit long-range correlations, a suite of new techniques has been proposed for outlier detection in graph data. Akoglu et al. BIB002 presented a comprehensive survey of graph-based outlier detection techniques, covering state-of-the-art methods together with open research challenges and questions, and motivating the adoption of graphs for outlier detection: graph-based approaches are valuable because they capture the inter-dependencies in the data, offer insightful representations, and come with robust algorithmic machinery. Moonesinghe et al. BIB001 proposed OutRank, one of the first graph-based outlier detection frameworks. From the original dataset they build fully connected undirected graphs and run a Markov random walk on them; the stationary distribution values of the random walk serve as the outlier scores. More recently, Wang et al. BIB003 proposed a method that combines the graph representation with the local information around each object. They address the high false-positive rates of graph-based OD methods, which usually result from neglecting the local information around each node: the local information around each object is used to construct a local information graph, and outliers are detected by computing outlier scores through a random walk on this graph. In another study, Wang et al. BIB004 proposed an OD method that captures different local information from different perspectives, using multiple neighborhood graphs; the outlier scores are again deduced through a random walk on the predefined graphs. All these methods show improved performance, as claimed by their authors. Since graph-based learning methods have not yet been widely embraced, they remain a promising domain for future outlier detection research.
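The random-walk idea behind OutRank BIB001 can be sketched in a few lines: build a similarity graph over the data, turn it into a row-stochastic transition matrix, and treat points the walk rarely visits as outliers. This is a sketch under stated assumptions, not the original algorithm: the Gaussian similarity, the PageRank-style restart term (added here purely to guarantee convergence of the power iteration), and all parameter values are illustrative.

```python
import numpy as np

def outrank_style_scores(X, sigma=1.0, restart=0.15, iters=100):
    """Random walk on a fully connected similarity graph; points with
    low stationary probability score as outliers."""
    # Gaussian similarity between all pairs of points
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    S = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(S, 0.0)
    P = S / S.sum(axis=1, keepdims=True)      # row-stochastic transitions
    n = len(X)
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):                    # power iteration with restart
        pi = restart / n + (1.0 - restart) * (pi @ P)
    return 1.0 - pi / pi.max()                # higher = more outlying
```

Intuitively, a point that is only weakly similar to the rest of the data receives little probability mass from its neighbours, so the walk visits it rarely and its outlier score is high.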
Progress in Outlier Detection Techniques: A Survey <s> 4) DEEP LEARNING METHODS <s> An important issue in processing data from sensors is outlier detection. Plenty of methods for solving this task exist - applying rules, Support Vector Machines, Naive Bayes. They are not computationally intensive and give good results where border between outliers and inliers is linear. However, when the border's shape is highly non-linear, more sophisticated methods should be applied, with the requirement of not being computationally intensive. Deep learning architecture is applied to solve this problem and results are compared with the ones obtained by applying shallow architectures. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> 4) DEEP LEARNING METHODS <s> A great deal of attention has been given to deep learning over the past several years, and new deep learning techniques are emerging with improved functionality. Many computer and network applications actively utilize such deep learning algorithms and report enhanced performance through them. In this study, we present an overview of deep learning methodologies, including restricted Bolzmann machine-based deep belief network, deep neural network, and recurrent neural network, as well as the machine learning techniques relevant to network anomaly detection. In addition, this article introduces the latest work that employed deep learning techniques with the focus on network anomaly detection through the extensive literature survey. We also discuss our local experiments showing the feasibility of the deep learning approach to network traffic analysis. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> 4) DEEP LEARNING METHODS <s> Deep autoencoders, and other deep neural networks, have demonstrated their effectiveness in discovering non-linear features across many problem domains. However, in many real-world problems, large outliers and pervasive noise are commonplace, and one may not have access to clean training data as required by standard deep denoising autoencoders. Herein, we demonstrate novel extensions to deep autoencoders which not only maintain a deep autoencoders' ability to discover high quality, non-linear features but can also eliminate outliers and noise without access to any clean training data. Our model is inspired by Robust Principal Component Analysis, and we split the input data X into two parts, $X = L_{D} + S$, where $L_{D}$ can be effectively reconstructed by a deep autoencoder and $S$ contains the outliers and noise in the original data X. Since such splitting increases the robustness of standard deep autoencoders, we name our model a "Robust Deep Autoencoder (RDA)". Further, we present generalizations of our results to grouped sparsity norms which allow one to distinguish random anomalies from other types of structured corruptions, such as a collection of features being corrupted across many instances or a collection of instances having more corruptions than their fellows. Such "Group Robust Deep Autoencoders (GRDA)" give rise to novel anomaly detection approaches whose superior performance we demonstrate on a selection of benchmark problems. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> 4) DEEP LEARNING METHODS <s> PCA is a classical statistical technique whose simplicity and maturity has seen it find widespread use as an anomaly detection technique. 
However, it is limited in this regard by being sensitive to gross perturbations of the input, and by seeking a linear subspace that captures normal behaviour. The first issue has been dealt with by robust PCA, a variant of PCA that explicitly allows for some data points to be arbitrarily corrupted, however, this does not resolve the second issue, and indeed introduces the new issue that one can no longer inductively find anomalies on a test set. This paper addresses both issues in a single model, the robust autoencoder. This method learns a nonlinear subspace that captures the majority of data points, while allowing for some data to have arbitrary corruption. The model is simple to train and leverages recent advances in the optimisation of deep neural networks. Experiments on a range of real-world datasets highlight the model's effectiveness. <s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> 4) DEEP LEARNING METHODS <s> Anomaly detection is a critical step towards building a secure and trustworthy system. The primary purpose of a system log is to record system states and significant events at various critical points to help debug system failures and perform root cause analysis. Such log data is universally available in nearly all computer systems. Log data is an important and valuable resource for understanding system status and performance issues; therefore, the various system logs are naturally excellent source of information for online monitoring and anomaly detection. We propose DeepLog, a deep neural network model utilizing Long Short-Term Memory (LSTM), to model a system log as a natural language sequence. This allows DeepLog to automatically learn log patterns from normal execution, and detect anomalies when log patterns deviate from the model trained from log data under normal execution. In addition, we demonstrate how to incrementally update the DeepLog model in an online fashion so that it can adapt to new log patterns over time. Furthermore, DeepLog constructs workflows from the underlying system log so that once an anomaly is detected, users can diagnose the detected anomaly and perform root cause analysis effectively. Extensive experimental evaluations over large log data have shown that DeepLog has outperformed other existing log-based anomaly detection methods based on traditional data mining methodologies. <s> BIB005 </s> Progress in Outlier Detection Techniques: A Survey <s> 4) DEEP LEARNING METHODS <s> Videos represent the primary source of information for surveillance applications. Video material is often available in large quantities but in most cases it contains little or no annotation for supervised learning. This article reviews the state-of-the-art deep learning based methods for video anomaly detection and categorizes them based on the type of model and criteria of detection. We also perform simple studies to understand the different approaches and provide the criteria of evaluation for spatio-temporal anomaly detection. <s> BIB006 </s> Progress in Outlier Detection Techniques: A Survey <s> 4) DEEP LEARNING METHODS <s> Unsupervised anomaly detection on multi- or high-dimensional data is of great importance in both fundamental machine learning research and industrial applications, for which density estimation lies at the core. 
Although previous approaches based on dimensionality reduction followed by density estimation have made fruitful progress, they mainly suffer from decoupled model learning with inconsistent optimization goals and incapability of preserving essential information in the low-dimensional space. In this paper, we present a Deep Autoencoding Gaussian Mixture Model (DAGMM) for unsupervised anomaly detection. Our model utilizes a deep autoencoder to generate a low-dimensional representation and reconstruction error for each input data point, which is further fed into a Gaussian Mixture Model (GMM). Instead of using decoupled two-stage training and the standard Expectation-Maximization (EM) algorithm, DAGMM jointly optimizes the parameters of the deep autoencoder and the mixture model simultaneously in an end-to-end fashion, leveraging a separate estimation network to facilitate the parameter learning of the mixture model. The joint optimization, which well balances autoencoding reconstruction, density estimation of latent representation, and regularization, helps the autoencoder escape from less attractive local optima and further reduce reconstruction errors, avoiding the need of pre-training. Experimental results on several public benchmark datasets show that, DAGMM significantly outperforms state-of-the-art anomaly detection techniques, and achieves up to 14% improvement based on the standard F1 score. <s> BIB007 </s> Progress in Outlier Detection Techniques: A Survey <s> 4) DEEP LEARNING METHODS <s> It is important to detect anomalous inputs when deploying machine learning systems. The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples. At the same time, diverse image and text data are available in enormous quantities. We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE). This enables anomaly detectors to generalize and detect unseen anomalies. In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance. We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue. We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance. <s> BIB008 </s> Progress in Outlier Detection Techniques: A Survey <s> 4) DEEP LEARNING METHODS <s> Anomaly detection in supercomputers is a very difficult problem due to the big scale of the systems and the high number of components. The current state of the art for automated anomaly detection employs Machine Learning methods or statistical regression models in a supervised fashion, meaning that the detection tool is trained to distinguish among a fixed set of behaviour classes (healthy and unhealthy states).We propose a novel approach for anomaly detection in HighPerformance Computing systems based on a Machine (Deep) Learning technique, namely a type of neural network called autoencoder. The key idea is to train a set of autoencoders to learn the normal (healthy) behaviour of the supercomputer nodes and, after training, use them to identify abnormal conditions. 
This is different from previous approaches which where based on learning the abnormal condition, for which there are much smaller datasets (since it is very hard to identify them to begin with).We test our approach on a real supercomputer equipped with a fine-grained, scalable monitoring infrastructure that can provide large amount of data to characterize the system behaviour. The results are extremely promising: after the training phase to learn the normal system behaviour, our method is capable of detecting anomalies that have never been seen before with a very good accuracy (values ranging between 88% and 96%). <s> BIB009 </s> Progress in Outlier Detection Techniques: A Survey <s> 4) DEEP LEARNING METHODS <s> We propose a one-class neural network (OC-NN) model to detect anomalies in complex data sets. OC-NN combines the ability of deep networks to extract progressively rich representation of data with the one-class objective of creating a tight envelope around normal data. The OC-NN approach breaks new ground for the following crucial reason: data representation in the hidden layer is driven by the OC-NN objective and is thus customized for anomaly detection. This is a departure from other approaches which use a hybrid approach of learning deep features using an autoencoder and then feeding the features into a separate anomaly detection method like one-class SVM (OC-SVM). The hybrid OC-SVM approach is suboptimal because it is unable to influence representational learning in the hidden layers. A comprehensive set of experiments demonstrate that on complex data sets (like CIFAR and PFAM), OC-NN significantly outperforms existing state-of-the-art anomaly detection methods. <s> BIB010 </s> Progress in Outlier Detection Techniques: A Survey <s> 4) DEEP LEARNING METHODS <s> The prevalence of networked sensors and actuators in many real-world systems such as smart buildings, factories, power plants, and data centers generate substantial amounts of multivariate time series data for these systems. The rich sensor data can be continuously monitored for intrusion events through anomaly detection. However, conventional threshold-based anomaly detection methods are inadequate due to the dynamic complexities of these systems, while supervised machine learning methods are unable to exploit the large amounts of data due to the lack of labeled data. On the other hand, current unsupervised machine learning approaches have not fully exploited the spatial-temporal correlation and other dependencies amongst the multiple variables (sensors/actuators) in the system for detecting anomalies. In this work, we propose an unsupervised multivariate anomaly detection method based on Generative Adversarial Networks (GANs). Instead of treating each data stream independently, our proposed MAD-GAN framework considers the entire variable set concurrently to capture the latent interactions amongst the variables. We also fully exploit both the generator and discriminator produced by the GAN, using a novel anomaly score called DR-score to detect anomalies by discrimination and reconstruction. We have tested our proposed MAD-GAN using two recent datasets collected from real-world CPS: the Secure Water Treatment (SWaT) and the Water Distribution (WADI) datasets. Our experimental results showed that the proposed MAD-GAN is effective in reporting anomalies caused by various cyber-intrusions compared in these complex real-world systems. 
<s> BIB011 </s> Progress in Outlier Detection Techniques: A Survey <s> 4) DEEP LEARNING METHODS <s> Anomaly detection is an important problem that has been well-studied within diverse research areas and application domains. The aim of this survey is two-fold, firstly we present a structured and comprehensive overview of research methods in deep learning-based anomaly detection. Furthermore, we review the adoption of these methods for anomaly across various application domains and assess their effectiveness. We have grouped state-of-the-art research techniques into different categories based on the underlying assumptions and approach adopted. Within each category we outline the basic anomaly detection technique, along with its variants and present key assumptions, to differentiate between normal and anomalous behavior. For each category, we present we also present the advantages and limitations and discuss the computational complexity of the techniques in real application domains. Finally, we outline open issues in research and challenges faced while adopting these techniques. <s> BIB012
Recently, more attention has been given to deep learning in many areas, including several studies related to outlier detection problems , BIB001 BIB002 , BIB006 - BIB011 . Most recently, Chalapathy and Chawla BIB012 presented a comprehensive survey of deep learning methods for outlier detection. They review how deep learning methods are used in various outlier detection applications and evaluate their effectiveness. The use of deep learning techniques in detecting outliers is important for one or several of these reasons: (1) the need for better ways of detecting outliers in large-scale data; (2) the need for better ways of learning hierarchical discriminative features from the data; and (3) the need for better ways to set the boundary between normal and unusual behavior in continuously evolving data sets. Deep learning can be based on supervised, semi-supervised, and unsupervised approaches to learning data representations. For example, employing deep learning in fraud and anti-money laundering systems can detect and identify the relationships within the data, and subsequently enable researchers to learn which data points are dissimilar to the rest and then predict outliers. In supervised deep OD methods, binary or multiclass classifiers are trained using the labels of the normal and abnormal data instances. Supervised models, for instance those framed as multi-class classifiers, help in identifying abnormal behaviors such as fraudulent health-care transactions BIB012 . Although supervised methods are shown to have improved performance, semi-supervised and unsupervised methods are mostly adopted. This is because supervised methods suffer from the scarcity of labeled training data and from class imbalance, which makes them sub-optimal compared to the others. In semi-supervised deep OD methods, the ease of obtaining labels for the normal instances compared to the outliers makes them more widely appealing. They make good use of the prevailing normal positive classes to differentiate the outliers. Semi-supervised techniques can be applied to train deep autoencoders on data samples free of outliers. With enough normal-class training samples, the autoencoders reconstruct normal instances with small errors and abnormal events with noticeably larger errors. In unsupervised deep OD methods, the outliers are detected exclusively from the essential features of the data instances. Here, the data samples are unlabeled, and unsupervised OD techniques are used to label them. In most unsupervised deep OD models, autoencoders play a central role BIB007 , . Most emerging research studies adopting deep learning techniques for OD utilize unsupervised methods. Using deep learning for unsupervised outlier detection problems has been shown to be effective BIB003 , BIB004 . These models are mostly categorized into architectures adopting autoencoders and hybrid models . The autoencoder-related models assess the anomalies through reconstruction errors, i.e., employing the magnitude of the residual vector, whereas in the hybrid models the autoencoder is used as a feature extractor and the hidden layers supply the representation of the input. In another study on deep learning models, Hendrycks et al. BIB008 proposed Outlier Exposure (OE) to improve outlier detection performance. OE exposes the model to an auxiliary dataset of known outliers during training, enabling the detector to learn heuristics that generalize to unseen anomalies.
This helps in differentiating between outliers and in-distribution samples. In another study, Du et al. BIB005 proposed DeepLog, a universal framework that adopts a deep neural network approach for online log outlier detection and analysis. DeepLog utilizes Long Short-Term Memory (LSTM) networks to model the system log, learning and encoding entire log messages. Here, anomaly detection is performed at the level of each log entry, in contrast to the per-session approach of other methods. Borghesi et al. BIB009 proposed a new way of detecting anomalies in High-Performance Computing Systems (HPCS) by adopting a type of neural network called the autoencoder. They first choose a set of autoencoders and train them to learn the normal behavior of the supercomputer nodes; after the training phase, the autoencoders are applied to identify abnormal conditions. Based on their training objectives, deep OD methods can employ Deep Hybrid Models (DHM) or One-Class Neural Networks (OC-NN) BIB012 . The DHM uses deep neural networks, primarily autoencoders, for feature extraction, and the learned hidden representation of the autoencoder serves as the input for most OD algorithms, such as One-Class SVM. Although hybrid approaches maximize outlier detection performance, a notable limitation is the absence of a trainable objective designed solely for outlier detection; DHMs are therefore limited in extracting rich differential features to detect the outliers. To address this drawback, Chalapathy et al. BIB010 and Ruff et al. proposed one-class neural networks and deep one-class classification, respectively. The One-Class Neural Network (OC-NN) combines the ability of deep networks to extract rich feature representations of the data with the one-class objective of creating a close-fitting envelope around the normal data. Deep learning-based OD techniques remain an active area of research and are promising for future work. In the discussion section, we propose and recommend some open challenges for future research work.
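To make the reconstruction-error principle behind DeepLog, the HPC autoencoders of Borghesi et al. BIB009 , and the semi-supervised training scheme described above concrete, the following minimal Python sketch trains an autoencoder on data presumed normal and flags test points whose reconstruction error is unusually large. This is an illustrative sketch only, not a reproduction of any cited method: the layer sizes, training schedule, 95th-percentile threshold, and the helper name fit_and_score are all assumed choices.

    # Illustrative autoencoder-based outlier detection (PyTorch).
    # Hyperparameters (layer sizes, epochs, 95th-percentile cutoff) are
    # assumed values chosen for the example, not taken from the cited works.
    import numpy as np
    import torch
    import torch.nn as nn

    class AutoEncoder(nn.Module):
        def __init__(self, n_features, hidden=8):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
            self.decoder = nn.Linear(hidden, n_features)

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def fit_and_score(train_normal, test, epochs=200):
        """Train on data presumed normal; score test points by reconstruction error."""
        model = AutoEncoder(train_normal.shape[1])
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        x_train = torch.tensor(train_normal, dtype=torch.float32)
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(x_train), x_train)
            loss.backward()
            opt.step()
        with torch.no_grad():
            train_err = ((model(x_train) - x_train) ** 2).mean(dim=1).numpy()
            x_test = torch.tensor(test, dtype=torch.float32)
            test_err = ((model(x_test) - x_test) ** 2).mean(dim=1).numpy()
        threshold = np.percentile(train_err, 95)  # assumed cutoff on normal errors
        return test_err, test_err > threshold     # outlier scores and boolean flags

    # Toy usage: normal points cluster near the origin; the five shifted
    # points appended to the test set should receive the largest errors.
    rng = np.random.default_rng(0)
    normal = rng.normal(size=(500, 16))
    test = np.vstack([rng.normal(size=(20, 16)), rng.normal(5.0, 1.0, size=(5, 16))])
    scores, flags = fit_and_score(normal, test)

The same scoring logic carries over to sequence models such as DeepLog's LSTM, where the reconstruction check becomes a comparison between the predicted next log key and the one actually observed.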
Progress in Outlier Detection Techniques: A Survey <s> Ranking Objects Candidates for the Outlier (ROCO) <s> ELKI is a unified software framework, designed as a tool suitable for evaluation of different algorithms on high dimensional real-valued feature-vectors. A special case of high dimensional real-valued feature-vectors are time series data where traditional distance measures like Lp-distances can be applied. However, also a broad range of specialized distance measures like, e.g., dynamic time-warping, or generalized distance measures like second order distances, e.g., shared-nearest-neighbor distances, have been proposed. The new version ELKI 0.2 now is extended to time series data and offers a selection of these distance measures. It can serve as a visualization- and evaluation-tool for the behavior of different distance measures on time series data. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> Ranking Objects Candidates for the Outlier (ROCO) <s> Many outlier detection methods do not merely provide the decision for a single data object being or not being an outlier. Instead, many approaches give an “outlier score” or “outlier factor” indicating “how much” the respective data object is an outlier. Such outlier scores differ widely in their range, contrast, and expressiveness between different outlier models. Even for one and the same outlier model, the same score can indicate a different degree of “outlierness” in different data sets or regions of different characteristics in one data set. Here, we demonstrate a visualization tool based on a unification of outlier scores that allows to compare and evaluate outlier scores visually even for high dimensional data. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> Ranking Objects Candidates for the Outlier (ROCO) <s> We survey unsupervised machine learning algorithms in the context of outlier detection. This task challenges state-of-the-art methods from a variety of research fields to applications including fraud detection, intrusion detection, medical diagnoses and data cleaning. The selected methods are benchmarked on publicly available datasets and novel industrial datasets. Each method is then submitted to extensive scalability, memory consumption and robustness tests in order to build a full overview of the algorithms’ characteristics. <s> BIB003
The authors classified their work according to whether these algorithms go through a clustering preprocessing phase, the type of pruning scheme used, and whether candidate objects can be ranked according to neighbors and outliers. From their study, it is apparent that one cannot justify the effectiveness of any single optimization, or combination of optimizations, over another, as it always depends on the characteristics of the dataset. In another study, Achtert et al. BIB002 propose a visualization tool , BIB001 to compare and evaluate outlier scores for high-dimensional data sets. Over the years, many approaches have expressed the degree to which an object is considered an outlier through an outlier score or factor. However, these outlier scores or factors vary in their contrast, range, and definition among different outlier models, which makes it quite difficult for a user unfamiliar with a given OD method to interpret the outlier score or factor. In some cases, even for the same or a similar outlier model, the same score within one data set, or across data sets, can depict a different degree of outlierness. For illustration, the same outlier score x in database y and database z can indicate considerably different degrees of outlierness. This also makes it very tedious to interpret and compare outlier scores. In addition, different models make different assumptions, which might directly influence the interpretation of the degree of outlierness and how an outlier is defined in the same or in different datasets. Few improved evaluation techniques have been proposed in recent studies; most studies concentrate on introducing new methods to improve the detection rate and computational time. In contrast to classification problems, evaluating the performance of outlier detection algorithms is more complicated. Researchers have provided several commonly adopted measurements to evaluate outlier detection algorithm performance BIB003 . They are defined as follows: i. Precision -this denotes the ratio of the number of correct outliers m to the total number of reported outliers t. In a particular application, setting t can be difficult; therefore, t is usually set to the number of outliers in the ground truth. ii. R-precision -this refers to the proportion of correct outliers among the top-R ranked objects, where R is the number of ground-truth outliers. R-precision alone does not contain enough information, because the number of true outliers is very small compared to the total size of the data. iii. Average precision -this denotes the average of the precision scores over the ranks of the true outlier points; it combines recall and precision.
Progress in Outlier Detection Techniques: A Survey <s> iv. Receiver Operating Characteristic (ROC) and Area <s> For many KDD applications, such as detecting criminal activities in E-commerce, finding the rare instances or the outliers, can be more interesting than finding the common patterns. Existing work in outlier detection regards being an outlier as a binary property. In this paper, we contend that for many scenarios, it is more meaningful to assign to each object a degree of being an outlier. This degree is called the local outlier factor (LOF) of an object. It is local in that the degree depends on how isolated the object is with respect to the surrounding neighborhood. We give a detailed formal analysis showing that LOF enjoys many desirable properties. Using real-world datasets, we demonstrate that LOF can be used to find outliers which appear to be meaningful, but can otherwise not be identified with existing approaches. Finally, a careful performance evaluation of our algorithm confirms that our approach of finding local outliers can be practical. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> iv. Receiver Operating Characteristic (ROC) and Area <s> Outlier detection is concerned with discovering exceptional behaviors of objects in data sets. It is becoming a growingly useful tool in applications such as credit card fraud detection, discovering criminal behaviors in e-commerce, identifying computer intrusion, detecting health problems, etc. In this paper, we introduce a connectivity-based outlier factor (COF) scheme that improves the effectiveness of an existing local outlier factor (LOF) scheme when a pattern itself has similar neighbourhood density as an outlier. We give theoretical and empirical analysis to demonstrate the improvement in effectiveness and the capability of the COF scheme in comparison with the LOF scheme. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> iv. Receiver Operating Characteristic (ROC) and Area <s> Mining outliers in database is to find exceptional objects that deviate from the rest of the data set. Besides classical outlier analysis algorithms, recent studies have focused on mining local outliers, i.e., the outliers that have density distribution significantly different from their neighborhood. The estimation of density distribution at the location of an object has so far been based on the density distribution of its k-nearest neighbors [2,11]. However, when outliers are in the location where the density distributions in the neighborhood are significantly different, for example, in the case of objects from a sparse cluster close to a denser cluster, this may result in wrong estimation. To avoid this problem, here we propose a simple but effective measure on local outliers based on a symmetric neighborhood relationship. The proposed measure considers both neighbors and reverse neighbors of an object when estimating its density distribution. As a result, outliers so discovered are more meaningful. To compute such local outliers efficiently, several mining algorithms are developed that detects top-n outliers based on our definition. A comprehensive performance evaluation and analysis shows that our methods are not only efficient in the computation but also more effective in ranking outliers. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> iv.
Receiver Operating Characteristic (ROC) and Area <s> Many outlier detection methods do not merely provide the decision for a single data object being or not being an outlier but give also an outlier score or "outlier factor" signaling "how much" the respective data object is an outlier. A major problem for any user not very acquainted with the outlier detection method in question is how to interpret this "factor" in order to decide for the numeric score again whether or not the data object indeed is an outlier. Here, we formulate a local density based outlier detection method providing an outlier "score" in the range of [0, 1] that is directly interpretable as a probability of a data object for being an outlier. <s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> iv. Receiver Operating Characteristic (ROC) and Area <s> Outlier detection research is currently focusing on the development of new methods and on improving the computation time for these methods. Evaluation however is rather heuristic, often considering just precision in the top k results or using the area under the ROC curve. These evaluation procedures do not allow for assessment of similarity between methods. Judging the similarity of or correlation between two rankings of outlier scores is an important question in itself but it is also an essential step towards meaningfully building outlier detection ensembles, where this aspect has been completely ignored so far. In this study, our generalized view of evaluation methods allows both to evaluate the performance of existing methods as well as to compare different methods w.r.t. their detection performance. Our new evaluation framework takes into consideration the class imbalance problem and offers new insights on similarity and redundancy of existing outlier detection methods. As a result, the design of effective ensemble methods for outlier detection is considerably enhanced. <s> BIB005 </s> Progress in Outlier Detection Techniques: A Survey <s> iv. Receiver Operating Characteristic (ROC) and Area <s> Novelty detection is especially important for monitoring safety-critical systems in which novel conditions rarely occur and knowledge about novelty in that system is often limited or unavailable. There are a large number of studies in the area of novelty detection, but there is a lack of a comprehensive experimental evaluation of existing novelty detection methods. This paper aims to fill this void by conducting experimental evaluation of representative novelty detection methods. It presents a state-of-the-art review of novelty detection, with a focus on methods reported in the last few years. In addition, a rigorous comparative evaluation of four widely used methods, representative of different categories of novelty detectors, is carried out using 10 benchmark datasets with different scale, dimensionality and problem complexity. The experimental results demonstrate that the k-NN novelty detection method exhibits competitive overall performance to the other methods in terms of the AUC metric. <s> BIB006 </s> Progress in Outlier Detection Techniques: A Survey <s> iv. Receiver Operating Characteristic (ROC) and Area <s> Special Complex non-Gaussian processes may have dynamic operation scenario shifts so that the traditional Outlier detection approaches become ill-suited. This paper proposes a new outlier detection approach based on using subspace learning and Gaussian mixture model(GMM) in energy disaggregation. 
Locality preserving projections (LPP) of subspace learning can optimally preserve the neighborhood structure, reveal the intrinsic manifold structure of the data and keep outliers far away from the normal sample compared with the principal component analysis (PCA). The results show the proposed approach can significantly improve the performance of outlier detection in energy disaggregation, increasing the true-positive fraction from 93.8% to 97% and decreasing the false-positive fraction from 35.48% to 25.8%. <s> BIB007 </s> Progress in Outlier Detection Techniques: A Survey <s> iv. Receiver Operating Characteristic (ROC) and Area <s> The evaluation of unsupervised outlier detection algorithms is a constant challenge in data mining research. Little is known regarding the strengths and weaknesses of different standard outlier detection models, and the impact of parameter choices for these algorithms. The scarcity of appropriate benchmark datasets with ground truth annotation is a significant impediment to the evaluation of outlier methods. Even when labeled datasets are available, their suitability for the outlier detection task is typically unknown. Furthermore, the biases of commonly-used evaluation measures are not fully understood. It is thus difficult to ascertain the extent to which newly-proposed outlier detection methods improve over established methods. In this paper, we perform an extensive experimental study on the performance of a representative set of standard k nearest neighborhood-based methods for unsupervised outlier detection, across a wide variety of datasets prepared for this purpose. Based on the overall performance of the outlier detection methods, we provide a characterization of the datasets themselves, and discuss their suitability as outlier detection benchmark sets. We also examine the most commonly-used measures for comparing the performance of different methods, and suggest adaptations that are more suitable for the evaluation of outlier detection results. <s> BIB008 </s> Progress in Outlier Detection Techniques: A Survey <s> iv. Receiver Operating Characteristic (ROC) and Area <s> Much of the world's data is streaming, time-series data, where anomalies give significant information in critical situations, examples abound in domains such as finance, IT, security, medical, and energy. Yet detecting anomalies in streaming data is a difficult task, requiring detectors to process data in real-time, not batches, and learn while simultaneously making predictions. There are no benchmarks to adequately test and score the efficacy of real-time anomaly detectors. Here we propose the Numenta Anomaly Benchmark (NAB), which attempts to provide a controlled and repeatable environment of open-source tools to test and measure anomaly detection algorithms on streaming data. The perfect detector would detect all anomalies as soon as possible, trigger no false alarms, work with real-world time-series data across a variety of domains, and automatically adapt to changing statistics. Rewarding these characteristics is formalized in NAB, using a scoring algorithm designed for streaming data. NAB evaluates detectors on a benchmark dataset with labeled, real-world time-series data. We present these components, and give results and analyses for several open source, commercially-used algorithms. The goal for NAB is to provide a standard, open source framework with which the research community can compare and evaluate different algorithms for detecting anomalies in streaming data.
<s> BIB009 </s> Progress in Outlier Detection Techniques: A Survey <s> iv. Receiver Operating Characteristic (ROC) and Area <s> Anomaly detection is the process of identifying unexpected items or events in datasets, which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied on unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection, fraud detection as well as in the life science and medical domain. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-funded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, computational effort, the impact of parameter settings as well as the global/local anomaly detection behavior is outlined. As a conclusion, we give an advise on algorithm selection for typical real-world tasks. <s> BIB010
Under the Curve (AUC) -the ROC is a graphical plot that shows the true positive rate against the false positive rate, where the true or false positive rate signifies the number of outliers or inliers ranked among the top-t potential outliers, with t taken from the ground truth. The AUC summarizes the detection performance of the outlier detection method in a single number. v. Correlation coefficient -a numerical measure of the statistical relationship between two variables, for instance Spearman's rank correlation or Pearson correlation; more importance is placed on the possible outliers ranked at the top. vi. Rank power (RP) -it rewards rankings that place the true outliers at the top and the normal objects at the bottom, and thus comprehensively evaluates the ranking of true outliers. (A short computational sketch of these measures is given at the end of this discussion.) Most of the evaluation methods are rather heuristic and focus on precision, the receiver operating characteristic (ROC) curve, and the area under the curve (AUC) when presenting results. The drawback of these evaluation procedures is that they make no provision for a similarity check among methods, yet knowing how similar or correlated the rankings of outlier scores are is a very significant step towards constructing better OD methods. AUC completely disregards small variations among scores and only considers the ranking. It is also inferior for unbalanced class problems when compared to techniques such as the area under the precision-recall curve, which is better at highlighting small detection changes. Despite these drawbacks, AUC, ROC, and precision-recall still serve as the de facto standard for evaluating many outlier detection problems. Since knowing how similar or correlated the rankings of outlier scores are is such a significant step towards constructing better OD methods, Schubert et al. BIB005 in their study gave a global view that permits evaluating the performance of different approaches against each other. The proposed framework considers the problem of class imbalance and offers new understanding of the similarity and redundancy of prevailing outlier detection techniques. To give a better evaluation of both outlier rankings and scores, a suitable correlation measure was established for comparing rankings while taking the outlier scores into account. In another study, Goldstein et al. BIB007 proposed a comparatively universal evaluation of nineteen different unsupervised outlier detection algorithms on ten publicly available datasets. The main aim was to address the gap in the existing literature BIB006 regarding thorough evaluation of outlier detection algorithms. One notable trend in the existing research literature is the comparison of newly proposed algorithms with some previous or state-of-the-art methods. However, most of these studies fail to publish the datasets with the appropriate preprocessing or to indicate for which application scenarios the methods are most suitable. They also mostly lack a clear understanding of the effect of the parameter k and an established criterion for whether an outlier is local or global. The authors address these issues by performing an evaluation study that reveals the effect of parameter settings, the computational cost, and the overall strengths and weaknesses of the different algorithms.
The list of algorithms was categorized into nearest-neighbor based methods, statistical methods, clustering-based methods, subspace methods, and classifier-based techniques, and these algorithms were then compared. For the KNN methods, the choice of the parameter k is very significant, as it influences the outlier score; other important factors are the dataset, its dimensionality, and normalization. They experimented to investigate the influence of the parameter and then evaluated the nearest-neighbor algorithms. Key findings from their study were that local outlier detection algorithms such as LOF BIB001 , INFLO BIB003 , COF BIB002 and LoOP BIB004 are not suitable for detecting global outliers, since they showed poor performance on datasets comprised of global outliers; the converse holds when global outlier detection methods are applied to local outlier detection problems. In addition, they found that the clustering-based algorithms were in most cases inferior to the nearest-neighbor based algorithms. Therefore, it is recommended to apply nearest-neighbor techniques for a global task, while for a local task, local outlier algorithms like LOF are more suitable than clustering-based methods. Another issue in evaluating most OD models is the scarcity of knowledge about the strengths and weaknesses of these outlier detection models, the lack of suitable benchmark datasets for the outlier detection task, and biases in the evaluation process that are not well understood. Campos et al. BIB008 , similar to BIB010 , did an experimental study across a wide variety of datasets to observe the performance of different unsupervised outlier detection algorithms. In their study, they classified different datasets and deliberated on how suitable they are as standard outlier detection benchmarks. They also discuss and examine the commonly used measures for comparing outlier detection performance. Some common misconceptions the authors clarify include, for instance, the effect of ground-truth datasets containing a large number of outliers. It is sometimes believed that these outliers will distort the evaluation of the methods, but this does not hold in all scenarios. Still, datasets with large proportions of outliers are usually not suitable for evaluating outlier detection techniques, because outliers are supposed to be rare; a small percentage of outliers and normalized datasets usually produce much better performance in most cases. Another critical misconception concerns the influence of dimensionality: an increase in dimensionality often results in a high computational cost but is not directly proportional to the overall performance, especially in terms of the detection rate. Another important area to consider is the evaluation of outliers in data streams. Outlier detection in data streams is usually a difficult task, because the data must be learned and processed in real time while concurrently making good predictions. Except for the Numenta Anomaly Benchmark (NAB) framework of Lavin et al. BIB009 , there is a lack of benchmarks to effectively test and score the effectiveness of real-time outlier detection methods. With more recent studies concentrated in this domain, there is a need to propose efficient and rigorous benchmarks to evaluate real-time outlier detection algorithms in data streams effectively.
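To make the measures listed under items i-vi concrete (the sketch promised above), the snippet below computes precision at t, R-precision, average precision, and ROC AUC for a ranked list of outlier scores. The toy labels and scores are invented for illustration; the threshold-free measures come from scikit-learn.

    # Illustrative computation of common OD evaluation measures.
    # y_true marks ground-truth outliers (1) vs. inliers (0); scores are
    # hypothetical outlier scores where larger means "more outlying".
    import numpy as np
    from sklearn.metrics import roc_auc_score, average_precision_score

    y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
    scores = np.array([0.1, 0.4, 0.9, 0.2, 0.7, 0.3, 0.2, 0.8, 0.1, 0.5])

    def precision_at_t(y_true, scores, t):
        """Fraction of true outliers among the t top-ranked objects."""
        top = np.argsort(-scores)[:t]
        return y_true[top].mean()

    n_outliers = int(y_true.sum())                      # R for R-precision
    print("precision@5 :", precision_at_t(y_true, scores, 5))
    print("R-precision :", precision_at_t(y_true, scores, n_outliers))
    print("average prec:", average_precision_score(y_true, scores))
    print("ROC AUC     :", roc_auc_score(y_true, scores))

Note how R-precision is simply precision at t with t set to the number of ground-truth outliers, which is why setting t well matters so much in practice.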
Progress in Outlier Detection Techniques: A Survey <s> B. TOOLS FOR OUTLIER DETECTION <s> For many KDD applications, such as detecting criminal activities in E-commerce, finding the rare instances or the outliers, can be more interesting than finding the common patterns. Existing work in outlier detection regards being an outlier as a binary property. In this paper, we contend that for many scenarios, it is more meaningful to assign to each object a degree of being an outlier. This degree is called the local outlier factor (LOF) of an object. It is local in that the degree depends on how isolated the object is with respect to the surrounding neighborhood. We give a detailed formal analysis showing that LOF enjoys many desirable properties. Using real-world datasets, we demonstrate that LOF can be used to find outliers which appear to be meaningful, but can otherwise not be identified with existing approaches. Finally, a careful performance evaluation of our algorithm confirms that our approach of finding local outliers can be practical. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> B. TOOLS FOR OUTLIER DETECTION <s> Outlier detection is concerned with discovering exceptional behaviors of objects in data sets. It is becoming a growingly useful tool in applications such as credit card fraud detection, discovering criminal behaviors in e-commerce, identifying computer intrusion, detecting health problems, etc. In this paper, we introduce a connectivity-based outlier factor (COF) scheme that improves the effectiveness of an existing local outlier factor (LOF) scheme when a pattern itself has similar neighbourhood density as an outlier. We give theoretical and empirical analysis to demonstrate the improvement in effectiveness and the capability of the COF scheme in comparison with the LOF scheme. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> B. TOOLS FOR OUTLIER DETECTION <s> Outlier detection is an integral part of data mining and has attracted much attention recently [M. Breunig et al., (2000)], [W. Jin et al., (2001)], [E. Knorr et al., (2000)]. We propose a new method for evaluating outlierness, which we call the local correlation integral (LOCI). As with the best previous methods, LOCI is highly effective for detecting outliers and groups of outliers (a.k.a. micro-clusters). In addition, it offers the following advantages and novelties: (a) It provides an automatic, data-dictated cutoff to determine whether a point is an outlier-in contrast, previous methods force users to pick cut-offs, without any hints as to what cut-off value is best for a given dataset. (b) It can provide a LOCI plot for each point; this plot summarizes a wealth of information about the data in the vicinity of the point, determining clusters, micro-clusters, their diameters and their inter-cluster distances. None of the existing outlier-detection methods can match this feature, because they output only a single number for each point: its outlierness score, (c) Our LOCI method can be computed as quickly as the best previous methods, (d) Moreover, LOCI leads to a practically linear approximate method, aLOCI (for approximate LOCI), which provides fast highly-accurate outlier detection. To the best of our knowledge, this is the first work to use approximate computations to speed up outlier detection.
Experiments on synthetic and real world data sets show that LOCI and aLOCI can automatically detect outliers and micro-clusters, without user-required cut-offs, and that they quickly spot both expected and unexpected outliers. <s> BIB003 </s> Progress in Outlier Detection Techniques: A Survey <s> B. TOOLS FOR OUTLIER DETECTION <s> Most existing model-based approaches to anomaly detection construct a profile of normal instances, then identify instances that do not conform to the normal profile as anomalies. This paper proposes a fundamentally different model-based method that explicitly isolates anomalies instead of profiles normal points. To our best knowledge, the concept of isolation has not been explored in current literature. The use of isolation enables the proposed method, iForest, to exploit sub-sampling to an extent that is not feasible in existing methods, creating an algorithm which has a linear time complexity with a low constant and a low memory requirement. Our empirical evaluation shows that iForest performs favourably to ORCA, a near-linear time complexity distance-based method, LOF and random forests in terms of AUC and processing time, and especially in large data sets. iForest also works well in high dimensional problems which have a large number of irrelevant attributes, and in situations where training set does not contain any anomalies. <s> BIB004 </s> Progress in Outlier Detection Techniques: A Survey <s> B. TOOLS FOR OUTLIER DETECTION <s> Many outlier detection methods do not merely provide the decision for a single data object being or not being an outlier but give also an outlier score or "outlier factor" signaling "how much" the respective data object is an outlier. A major problem for any user not very acquainted with the outlier detection method in question is how to interpret this "factor" in order to decide for the numeric score again whether or not the data object indeed is an outlier. Here, we formulate a local density based outlier detection method providing an outlier "score" in the range of [0, 1] that is directly interpretable as a probability of a data object for being an outlier. <s> BIB005 </s> Progress in Outlier Detection Techniques: A Survey <s> B. TOOLS FOR OUTLIER DETECTION <s> Many outlier detection methods do not merely provide the decision for a single data object being or not being an outlier. Instead, many approaches give an “outlier score” or “outlier factor” indicating “how much” the respective data object is an outlier. Such outlier scores differ widely in their range, contrast, and expressiveness between different outlier models. Even for one and the same outlier model, the same score can indicate a different degree of “outlierness” in different data sets or regions of different characteristics in one data set. Here, we demonstrate a visualization tool based on a unification of outlier scores that allows to compare and evaluate outlier scores visually even for high dimensional data. <s> BIB006 </s> Progress in Outlier Detection Techniques: A Survey <s> B. TOOLS FOR OUTLIER DETECTION <s> Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. 
It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from http://scikit-learn.sourceforge.net. <s> BIB007 </s> Progress in Outlier Detection Techniques: A Survey <s> B. TOOLS FOR OUTLIER DETECTION <s> PyOD is an open-source Python toolbox for performing scalable outlier detection on multivariate data. Uniquely, it provides access to a wide range of outlier detection algorithms, including established outlier ensembles and more recent neural network-based approaches, under a single, well-documented API designed for use by both practitioners and researchers. With robustness and scalability in mind, best practices such as unit testing, continuous integration, code coverage, maintainability checks, interactive examples and parallelization are emphasized as core components in the toolbox's development. PyOD is compatible with both Python 2 and 3 and can be installed through Python Package Index (PyPI) or this https URL. <s> BIB008 </s> Progress in Outlier Detection Techniques: A Survey <s> B. TOOLS FOR OUTLIER DETECTION <s> Online retailers execute a very large number of price updates when compared to brick-and-mortar stores. Even a few mis-priced items can have a significant business impact and result in a loss of customer trust. Early detection of anomalies in an automated real-time fashion is an important part of such a pricing system. In this paper, we describe unsupervised and supervised anomaly detection approaches we developed and deployed for a large-scale online pricing system at Walmart. Our system detects anomalies both in batch and real-time streaming settings, and the items flagged are reviewed and actioned based on priority and business impact. We found that having the right architecture design was critical to facilitate model performance at scale, and business impact and speed were important factors influencing model selection, parameter choice, and prioritization in a production environment for a large-scale system. We conducted analyses on the performance of various approaches on a test set using real-world retail data and fully deployed our approach into production. We found that our approach was able to detect the most important anomalies with high precision. <s> BIB009 </s> Progress in Outlier Detection Techniques: A Survey <s> B. TOOLS FOR OUTLIER DETECTION <s> The smart campus is becoming a reality with the advancement of information and communication technologies. For energy efficiency, it is essential to detect abnormal energy consumption in a smart campus, which is important for a “smart” campus. However, the obtained data are usually continuously generated by ubiquitous sensing devices, and the abnormal patterns hidden in the data are usually unknown, which makes detecting anomalies in such a context more challenging. Moreover, evaluating the quality of anomaly detection algorithms is difficult without labeled datasets. If the data are annotated well, classical criteria such as the receiver operating characteristic or precision recall curves can be used to compare the performance of different anomaly detection algorithms. In a smart campus environment, it is difficult to acquire labeled data to train a model due to the limited capabilities of the sensing devices. Therefore, distributed intelligence is preferred. In this paper, we present a multi-agent-based unsupervised anomaly detection method. 
We tackle these challenges in two stages with this method. First, we label the data using ensemble models. Second, we propose a method based on deep learning techniques to detect anomalies in an unsupervised fashion. The result of the first stage is used to evaluate the performance of the proposed method. We validate the proposed method with several datasets, and the experimental results demonstrate the effectiveness of our method. <s> BIB010
In outlier detection, many tools and datasets have been used. Here, we introduce some popular tools used for outlier detection processes and some outlier detection databases. The prevalence of outlier detection in industrial applications has driven the development of many software tools, such as those described below. 1) Scikit-learn Outlier Detection BIB007 . The scikit-learn project offers some machine learning tools that can be applied to outlier detection problems. It includes algorithms such as LOF BIB001 and Isolation Forest BIB004 . 2) Python Outlier Detection (PyOD) BIB008 . PyOD is used for detecting outliers in multivariate data. It is a scalable Python tool that has been used in many research and commercial projects, and it includes recent deep learning and outlier ensemble models BIB009 , BIB010 , . 3) Environment for Developing KDD-Applications Supported by Index-Structures (ELKI) BIB006 . ELKI is an open-source data mining framework that provides a collection of data mining algorithms, including OD algorithms. It allows easy and fair assessment and benchmarking of OD algorithms, and it is written in Java. 4) Rapid Miner. The extension of this tool contains many popular unsupervised outlier detection algorithms such as LOF, COF BIB002 , LOCI BIB003 , and LoOP BIB005 . 5) MATLAB. MATLAB also supports many outlier detection algorithms and functions; algorithms can be implemented easily because MATLAB is user-friendly. 6) Massive Online Analysis (MOA) tool . MOA is an open-source framework that provides a collection of data stream mining algorithms. It includes some distance-based outlier detection algorithms, such as COD, ACOD, Abstract-C, and MCOD, and some tools for evaluation.
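As a brief usage illustration for the first tool above, the following sketch runs LOF and Isolation Forest from scikit-learn on the same toy data. The contamination rate and the generated data are assumptions made for the example; PyOD exposes a very similar fit/score interface for its detectors.

    # Minimal usage sketch of two scikit-learn outlier detectors.
    # The contamination rate (expected outlier fraction) is an assumed value.
    import numpy as np
    from sklearn.ensemble import IsolationForest
    from sklearn.neighbors import LocalOutlierFactor

    rng = np.random.default_rng(42)
    X = np.vstack([rng.normal(0, 1, size=(200, 2)),     # dense inlier cluster
                   rng.uniform(-6, 6, size=(10, 2))])   # scattered outliers

    lof = LocalOutlierFactor(n_neighbors=20, contamination=0.05)
    lof_labels = lof.fit_predict(X)                     # -1 marks outliers

    iforest = IsolationForest(contamination=0.05, random_state=42)
    iforest_labels = iforest.fit_predict(X)             # -1 marks outliers

    print("LOF outliers:    ", np.where(lof_labels == -1)[0])
    print("iForest outliers:", np.where(iforest_labels == -1)[0])

Comparing the two label vectors on the same data is also a simple way to observe the local-versus-global distinction discussed in the evaluation section above.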
Progress in Outlier Detection Techniques: A Survey <s> C. DATASETS FOR OUTLIER DETECTION <s> In recent years, many new techniques have been developed for mining and managing uncertain data. This is because of the new ways of collecting data which has resulted in enormous amounts of inconsistent or missing data. Such data is often remodeled in the form of uncertain data. In this paper, we will examine the problem of outlier detection with uncertain data sets. The outlier detection problem is particularly challenging for the uncertain case, because the outlier-like behavior of a data point may be a result of the uncertainty added to the data point. Furthermore, the uncertainty added to the other data points may skew the overall data distribution in such a way that true outliers may be masked. Therefore, it is critical to be able to remove the effects of the uncertainty added both at the aggregate level as well as at the level of individual data points. In this paper, we will examine a density based approach to outlier detection, and show how to use it to remove the uncertainty from the underlying data. We present experimental results illustrating the effectiveness of the method. <s> BIB001 </s> Progress in Outlier Detection Techniques: A Survey <s> C. DATASETS FOR OUTLIER DETECTION <s> Outlier detection has attracted substantial attention in many applications and research areas; some of the most prominent applications are network intrusion detection or credit card fraud detection. Many of the existing approaches are based on calculating distances among the points in the dataset. These approaches cannot easily adapt to current datasets that usually contain a mix of categorical and continuous attributes, and may be distributed among different geographical locations. In addition, current datasets usually have a large number of dimensions. These datasets tend to be sparse, and traditional concepts such as Euclidean distance or nearest neighbor become unsuitable. We propose a fast distributed outlier detection strategy intended for datasets containing mixed attributes. The proposed method takes into consideration the sparseness of the dataset, and is experimentally shown to be highly scalable with the number of points and the number of attributes in the dataset. Experimental results show that the proposed outlier detection method compares very favorably with other state-of-the art outlier detection strategies proposed in the literature and that the speedup achieved by its distributed version is very close to linear. <s> BIB002 </s> Progress in Outlier Detection Techniques: A Survey <s> C. DATASETS FOR OUTLIER DETECTION <s> The evaluation of unsupervised outlier detection algorithms is a constant challenge in data mining research. Little is known regarding the strengths and weaknesses of different standard outlier detection models, and the impact of parameter choices for these algorithms. The scarcity of appropriate benchmark datasets with ground truth annotation is a significant impediment to the evaluation of outlier methods. Even when labeled datasets are available, their suitability for the outlier detection task is typically unknown. Furthermore, the biases of commonly-used evaluation measures are not fully understood. It is thus difficult to ascertain the extent to which newly-proposed outlier detection methods improve over established methods. 
In this paper, we perform an extensive experimental study on the performance of a representative set of standard k nearest neighborhood-based methods for unsupervised outlier detection, across a wide variety of datasets prepared for this purpose. Based on the overall performance of the outlier detection methods, we provide a characterization of the datasets themselves, and discuss their suitability as outlier detection benchmark sets. We also examine the most commonly-used measures for comparing the performance of different methods, and suggest adaptations that are more suitable for the evaluation of outlier detection results. <s> BIB003
Outlier detection methods have been applied to different kinds of data, such as regular and high-dimensional data sets BIB002 , streaming datasets, network data, uncertain data BIB001 , and time series data. In the outlier detection literature, two types of data are mostly considered and required for evaluating the performance of the algorithms: real-world datasets and synthetic datasets. The real-world datasets can be obtained from publicly available databases. Some of the most popular and useful databases that contain real-world datasets for outlier detection include the following: 1) The UCI repository . The UCI repository has hundreds of freely available data sets, and many OD methods use the repository to evaluate algorithm performance. However, the majority of these datasets are designed for classification methods. In outlier detection scenarios, the generally used approach is to preprocess the datasets so that the outliers are the objects in the minority class and the rest are considered normal. 2) Outlier Detection Datasets (ODDS) [51] . Unlike the UCI repository, ODDS provides open access to a collection of datasets specifically suitable for the outlier detection process. The datasets are grouped into different types, including multi-dimensional datasets, univariate and multivariate time series datasets, and time series graph datasets. 3) ELKI Outlier Datasets . ELKI has a collection of data sets for outlier detection and also many data sets for evaluating OD methods. These data sets are used to study the performance of several OD algorithms and parameters. 4) Unsupervised Anomaly Detection Dataverse . These datasets are used for evaluating unsupervised outlier detection algorithms against standard baselines. They are obtained from multiple sources, with the majority of the data sets coming from supervised machine learning datasets. It is important to note that with real-world data sets, a lot of data is not publicly accessible due to privacy and security concerns. Synthetic datasets are often created under defined constraints and conditions. Compared to real-world datasets, synthetic datasets are mostly less complex and eccentric, and they allow better validation of the OD algorithms' performance. For the outlier detection process, since most of the adopted data are not purpose-specific for OD methods, the repurposing of supervised classification data has been widely adopted. In many studies, the data have been treated as they are, rather than manipulated. As stated earlier, in outlier detection experiments, evaluating OD methods requires both real-world and synthetic data sets. Also, many benchmark datasets are required to develop an algorithm that captures a broader view of the problems; the availability of many benchmark datasets also enables more robust reporting and presentation of results. Most supervised classification datasets require some preprocessing for outlier detection tasks. Two important aspects are considered in the preprocessing phase BIB003 : for semantically significant outlier datasets, the outliers are drawn from the minority classes and the rest of the data is treated as normal; and when choosing a data set for OD methods, the data should offer precise and meaningful attributes that fit the problem definition. For example, for an OD method related to data streams, it is better to use streaming data rather than other kinds of data.
The selected algorithm should fit the data in terms of the right attribute types, the correct distribution model, the required speed and scalability, and other anticipated incremental capabilities, so that new objects can be managed and modeled well upon arrival. Other concerns in dealing with datasets include how to handle the downsampling of data, dealing with duplicate data, transforming categorical attributes to numeric types, normalization, and dealing with missing values. In future work, it will be crucial to study how to evaluate datasets for outlier detection methods and which key attributes to take into consideration.
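The minority-class repurposing and normalization steps discussed above can be sketched as follows. The function name to_outlier_dataset and the 2% target outlier proportion are illustrative assumptions rather than a prescribed standard.

    # Sketch of repurposing a labeled classification dataset for outlier
    # detection: the minority class becomes the outlier class (label 1),
    # all other classes become inliers (label 0).
    # The 2% target outlier proportion is an assumed, illustrative choice.
    import numpy as np
    from collections import Counter

    def to_outlier_dataset(X, y, outlier_fraction=0.02, seed=0):
        """X: (n, d) feature array; y: (n,) class labels as a NumPy array."""
        rng = np.random.default_rng(seed)
        counts = Counter(y.tolist())
        minority = min(counts, key=counts.get)        # smallest class -> outliers
        inlier_idx = np.where(y != minority)[0]
        outlier_idx = np.where(y == minority)[0]
        # Downsample the outlier class so outliers stay rare in the result.
        n_keep = max(1, int(outlier_fraction * len(inlier_idx)))
        outlier_idx = rng.choice(outlier_idx, size=min(n_keep, len(outlier_idx)),
                                 replace=False)
        idx = np.concatenate([inlier_idx, outlier_idx])
        X_new, y_new = X[idx], (y[idx] == minority).astype(int)
        # Min-max normalize each attribute to [0, 1] (constant columns left at 0).
        span = X_new.max(axis=0) - X_new.min(axis=0)
        X_new = (X_new - X_new.min(axis=0)) / np.where(span == 0, 1, span)
        return X_new, y_new

Keeping the outlier proportion small, as this sketch does, reflects the point made above that outliers are supposed to be rare if a benchmark is to be semantically meaningful.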
Methodological Review: A review of causal inference for biomedical informatics <s> Introduction <s> There occurs on some occasions a difficulty in deciding the direction of causality between two related variables and also whether or not feedback is occurring. Testable definitions of causality and feedback are proposed and illustrated by use of simple two-variable models. The important problem of apparent instantaneous causality is discussed and it is suggested that the problem often arises due to slowness in recording information or because a sufficiently wide class of possible causal variables has not been used. It can be shown that the cross spectrum between two variables can be decomposed into two parts, each relating to a single causal arm of a feedback situation. Measures of causal lag and causal strength can then be constructed. A generalisation of this result with the partial cross spectrum is suggested. <s> BIB001 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Introduction <s> A discussion of matching, randomization, random sampling, and other methods of controlling extraneous variation is presented. The objective is to specify the benefits of randomization in estimating causal effects of treatments. The basic conclusion is that randomization should be employed whenever possible but that the use of carefully controlled nonrandomized data to estimate causal effects is a reasonable and necessary procedure in many cases. Recent psychological and educational literature has included extensive criticism of the use of nonrandomized studies to estimate causal effects of treatments (e.g., Campbell & Erlebacher, 1970). The implication in much of this literature is that only properly randomized experiments can lead to useful estimates of causal effects. If taken as applying to all fields of study, this position is untenable. Since the extensive use of randomized experiments is limited to the last half century, and in fact is not used in much scientific investigation today, one is led to the conclusion that most scientific "truths" have been established without using randomized experiments. In addition, most of us successfully determine the causal effects of many of our everyday actions, even interpersonal behaviors, without the benefit of randomization. Even if the position that causal effects of treatments can only be well established from randomized experiments is taken as applying only to the social sciences in which <s> BIB002 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Introduction <s> What assumptions and methods allow us to turn observations into causal knowledge, and how can even incomplete causal knowledge be used in planning and prediction to influence and control our environment? In this book Peter Spirtes, Clark Glymour, and Richard Scheines address these questions using the formalism of Bayes networks, with results that have been applied in diverse areas of research in the social, behavioral, and physical sciences. The authors show that although experimental and observational study designs may not always permit the same inferences, they are subject to uniform principles.
They axiomatize the connection between causal structure and probabilistic independence, explore several varieties of causal indistinguishability, formulate a theory of manipulation, and develop asymptotically reliable procedures for searching over equivalence classes of causal models, including models of categorical data and structural equation models with and without latent variables. The authors show that the relationship between causality and probability can also help to clarify such diverse topics in statistics as the comparative power of experimentation versus observation, Simpson's paradox, errors in regression models, retrospective versus prospective sampling, and variable selection. The second edition contains a new introduction and an extensive survey of advances and applications that have appeared since the first edition was published in 1993. <s> BIB003 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Introduction <s> Background: Causal reasoning as a way to make a diagnosis seems convincing. Modern medicine depends on the search for causes of disease and it seems fair to assert that such knowledge is employed in diagnosis. Causal reasoning as it has been presented neglects to some extent the conception of multifactorial disease causes. <s> BIB004 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Introduction <s> OBJECTIVE ::: To determine if inadequate approaches to randomized controlled trial design and execution are associated with evidence of bias in estimating treatment effects. ::: ::: ::: DESIGN ::: An observational study in which we assessed the methodological quality of 250 controlled trials from 33 meta-analyses and then analyzed, using multiple logistic regression models, the associations between those assessments and estimated treatment effects. ::: ::: ::: DATA SOURCES ::: Meta-analyses from the Cochrane Pregnancy and Childbirth Database. ::: ::: ::: MAIN OUTCOME MEASURES ::: The associations between estimates of treatment effects and inadequate allocation concealment, exclusions after randomization, and lack of double-blinding. ::: ::: ::: RESULTS ::: Compared with trials in which authors reported adequately concealed treatment allocation, trials in which concealment was either inadequate or unclear (did not report or incompletely reported a concealment approach) yielded larger estimates of treatment effects (P < .001). Odds ratios were exaggerated by 41% for inadequately concealed trials and by 30% for unclearly concealed trials (adjusted for other aspects of quality). Trials in which participants had been excluded after randomization did not yield larger estimates of effects, but that lack of association may be due to incomplete reporting. Trials that were not double-blind also yielded larger estimates of effects (P = .01), with odds ratios being exaggerated by 17%. ::: ::: ::: CONCLUSIONS ::: This study provides empirical evidence that inadequate methodological approaches in controlled trials, particularly those representing poor allocation concealment, are associated with bias. Readers of trial reports should be wary of these pitfalls, and investigators must improve their design, execution, and reporting of trials. <s> BIB005 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Introduction <s> The paper discusses the evolving concept of causation in epidemiology and its potential interaction with logic and scientific philosophy.
Causes are contingent but the necessity which binds them to their effects relies on contrary-to-fact conditionals, i.e. conditional statements whose antecedent is false. Chance instead of determinism plays a growing role in science and, although rarely acknowledged yet, in epidemiology: causes are multiple and chancy; a prior event causes a subsequent event if the probability distribution of the subsequent event changes conditionally upon the probability of the prior event. There are no known sufficient causes in epidemiology. We merely observe tendencies toward sufficiency or tendencies toward necessity: cohort studies evaluate the first tendencies, and case-control studies the latter. In applied sciences, such as medicine and epidemiology, causes are intrinsically connected with goals and effective strategies: they are recipes which have a potential harmful or successful use; they are contrastive since they make a difference between circumstances in which they are present and those in which they are absent: causes do not explain Event E but event E rather than event F. Causation is intrinsically linked with the notion of ``what is pathological''. Any definition of causation will inevitably collapse into the use made of epidemiologic methods. The progressive methodological sophistication of the last forty years is in perfect alignment with a gradual implicit overhaul of our concept of causation. <s> BIB006 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Introduction <s> Randomized controlled trials (RCTs) are essential for evaluating the efficacy of clinical interventions, where the causal chain between the agent and the outcome is relatively short and simple and where results may be safely extrapolated to other settings. However, causal chains in public health interventions are complex, making RCT results subject to effect modification in different populations. Both the internal and external validity of RCT findings can be greatly enhanced by observational studies using adequacy or plausibility designs. For evaluating large-scale interventions, studies with plausibility designs are often the only feasible option and may provide valid evidence of impact. There is an urgent need to develop evaluation standards and protocols for use in circumstances where RCTs are not appropriate. <s> BIB007 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Introduction <s> The randomized controlled trial (RCT) is not a gold standard: it is a good experimental design in some circumstances, but that's all. Potential shortcomings in the design and implementation of RCTs are often mentioned in passing, yet most researchers consider that RCTs are always superior to all other types of evidence. This paper examines the limitations of RCTs and shows that some types of evidence commonly supposed to be inferior to all RCTs are actually superior to many. This has important consequences for research methodology, for quality of care in clinical medicine, and—especially—for research funding policy. Because every study design may have problems in particular applications, studies should be evaluated by appropriate criteria, and not primarily according to the simplistic RCT/non-RCT dichotomy promoted by some prominent advocates of the evidence-based medicine movement and by the research evaluation guidelines based on its principles.
<s> BIB008 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Introduction <s> Causal diagrams are rigorous tools for controlling confounding. They also can be used to describe complex causal systems, which is done routinely in communicable disease epidemiology. The use of change diagrams has advantages over static diagrams, because change diagrams are more tractable, relate better to interventions, and have clearer interpretations. Causal diagrams are a useful basis for modeling. They make assumptions explicit, provide a framework for analysis, generate testable predictions, explore the effects of interventions, and identify data gaps. Causal diagrams can be used to integrate different types of information and to facilitate communication both among public health experts and between public health experts and experts in other fields. Causal diagrams allow the use of instrumental variables, which can help control confounding and reverse causation. <s> BIB009 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Introduction <s> Randomised controlled trials (RCTs) must be internally valid (i.e., design and conduct must eliminate the possibility of bias), but to be clinically useful, the result must also be relevant to a definable group of patients in a particular clinical setting (i.e., they must be externally valid). Lack of external validity is the most frequent criticism by clinicians of RCTs, systematic reviews, and guidelines, and is one explanation for the widespread underuse in routine practice of many treatments that have been shown to be beneficial in trials and are recommended in guidelines [1]. Yet medical journals, funding agencies, ethics committees, the pharmaceutical industry, and governmental regulators seem to give external validity a low priority. Admittedly, whereas the determinants of internal validity are intuitive and can generally be worked out from first principles, understanding of the determinants of the external validity of an RCT requires clinical rather than statistical expertise, and often depends on a detailed understanding of the particular clinical condition under study and its management in routine clinical practice. However, reliable judgments about the external validity of RCTs are essential if treatments are to be used correctly in as many patients as possible in routine clinical practice. ::: ::: The results of RCTs or systematic reviews will never be relevant to all patients and all settings, but they should be designed and reported in a way that allows clinicians to judge to whom the results can reasonably be applied. Table 1 lists some of the important potential determinants of external validity, each of which is reviewed briefly below. Many of the considerations will only be relevant in certain types of trials, for certain interventions, or in certain clinical settings, but they can each sometimes undermine external validity. Moreover, the list is not exhaustive and requires more detailed annotation and explanation than is possible in this short review.
::: ::: ::: ::: Table 1 ::: ::: Main Issues That Can Affect External Validity and Should Be Addressed in Reports of the Results of Randomised Controlled Trials or Systematic Reviews and Considered by Clinicians ::: ::: ::: ::: Some of the issues that determine external validity are relevant to the distinction between pragmatic trials and explanatory trials [2], but it would be wrong to assume that pragmatic trials necessarily have greater external validity than explanatory trials. For example, broad eligibility criteria, limited collection of baseline data, and inclusion of centres with a range of expertise and differing patient populations have many advantages, but they can also make it very difficult to generalise the overall average effect of treatment to a particular clinical setting. <s> BIB010 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Introduction <s> The claims of randomized controlled trials (RCTs) to be the gold standard rest on the fact that the ideal RCT is a deductive method: if the assumptions of the test are met, a positive result implies the appropriate causal conclusion. This is a feature that RCTs share with a variety of other methods, which thus have equal claim to being a gold standard. This article describes some of these other deductive methods and also some useful non-deductive methods, including the hypothetico-deductive method. It argues that with all deductive methods, the benefit that the conclusions follow deductively in the ideal case comes with a great cost: narrowness of scope. This is an instance of the familiar trade-off between internal and external validity. RCTs have high internal validity but the formal methodology puts severe constraints on the assumptions a target population must meet to justify exporting a conclusion from the test population to the target. The article reviews one such set of assumptions to show the kind of knowledge required. The overall conclusion is that to draw causal inferences about a target population, which method is best depends case-by-case on what background knowledge we have or can come to obtain. There is no gold standard. <s> BIB011 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Introduction <s> In their recent book, Is Inequality Bad for Our Health?, Daniels, Kennedy, and Kawachi claim that to “act justly in health policy, we must have knowledge about the causal pathways through which socioeconomic (and other) inequalities work to produce differential health outcomes.” One of the central problems with this approach is its dependency on “knowledge about the causal pathways.” A widely held belief is that the randomized clinical trial (RCT) is, and ought to be the “gold standard” of evaluating the causal efficacy of interventions. However, often the only data available are non-experimental, observational data. For such data, the necessary randomization is missing. Because the randomization is missing, it seems to follow that it is not possible to make epistemically warranted claims about the causal pathways. Although we are not sanguine about the difficulty in using observational data to make warranted causal claims, we are not as pessimistic as those who believe that the only warranted causal claims are claims based on data from (idealized) RCTs. 
We argue that careful, thoughtful study design, informed by expert knowledge, that incorporates propensity score matching methods in conjunction with instrumental variable analyses, provides the possibility of warranted causal claims using observational data. <s> BIB012 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Introduction <s> Numerous methods for causality assessment of adverse drug reactions (ADRs) have been published. The aim of this review is to provide an overview of these methods and discuss their strengths and weaknesses. We conducted electronic searches in MEDLINE (via PubMed), EMBASE and the Cochrane databases to find all assessment methods. Thirty-four different methods were found, falling into three broad categories: expert judgement/global introspection, algorithms and probabilistic methods (Bayesian approaches). Expert judgements are individual assessments based on previous knowledge and experience in the field using no standardized tool to arrive at conclusions regarding causality. Algorithms are sets of specific questions with associated scores for calculating the likelihood of a cause-effect relationship. Bayesian approaches use specific findings in a case to transform the prior estimate of probability into a posterior estimate of probability of drug causation. The prior probability is calculated from epidemiological information and the posterior probability combines this background information with the evidence in the individual case to come up with an estimate of causation. As a result of problems of reproducibility and validity, no single method is universally accepted. Different causality categories are adopted in each method, and the categories are assessed using different criteria. Because assessment methods are also not entirely devoid of individual judgements, inter-rater reliability can be low. In conclusion, there is still no method universally accepted for causality assessment of ADRs. <s> BIB013 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Introduction <s> OBJECTIVES ::: Most contemporary epidemiologic studies require complex analytical methods to adjust for bias and confounding. New methods are constantly being developed, and older more established methods are yet appropriate. Careful application of statistical analysis techniques can improve causal inference of comparative treatment effects from nonrandomized studies using secondary databases. A Task Force was formed to offer a review of the more recent developments in statistical control of confounding. ::: ::: ::: METHODS ::: The Task Force was commissioned and a chair was selected by the ISPOR Board of Directors in October 2007. This Report, the third in this issue of the journal, addressed methods to improve causal inference of treatment effects for nonrandomized studies. ::: ::: ::: RESULTS ::: The Task Force Report recommends general analytic techniques and specific best practices where consensus is reached including: use of stratification analysis before multivariable modeling, multivariable regression including model performance and diagnostic testing, propensity scoring, instrumental variable, and structural modeling techniques including marginal structural models, where appropriate for secondary data. Sensitivity analyses and discussion of extent of residual confounding are discussed. 
::: ::: ::: CONCLUSIONS ::: Valid findings of causal therapeutic benefits can be produced from nonrandomized studies using an array of state-of-the-art analytic techniques. Improving the quality and uniformity of these studies will improve the value to patients, physicians, and policymakers worldwide. <s> BIB014 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Introduction <s> This article examines definitions of cause in the epidemiological literature. Those definitions describe causes as factors that make a difference to the distribution of disease or to individual health status. In philosophical terms, they are "difference-makers." I argue that those definitions are underpinned by an epistemology and a methodology that hinge upon the notion of variation, contra the dominant Humean paradigm according to which we infer causality from regularity. Furthermore, despite the fact that causes are defined in terms of difference-making, this doesn't fix the causal metaphysics but rather reflects the "variational" epistemology and methodology of epidemiology. I suggest that causality in epidemiology ought to be interpreted according to Williamson's epistemic theory. In this approach, causal attribution depends on the available evidence and on the methods used. In turn, evidence to establish causal claims requires both difference-making and mechanistic considerations. <s> BIB015 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Introduction <s> Background ::: External validity of study results is an important issue from a clinical point of view. From a methodological point of view, however, the concept of external validity is more complex than it seems to be at first glance. ::: ::: Methods ::: Methodological review to address the concept of external validity. ::: ::: Results ::: External validity refers to the question whether results are generalizable to persons other than the population in the original study. The only formal way to establish the external validity would be to repeat the study for that specific target population. We propose a three-way approach for assessing the external validity for specified target populations. (i) The study population might not be representative for the eligibility criteria that were intended. It should be addressed whether the study population differs from the intended source population with respect to characteristics that influence outcome. (ii) The target population will, by definition, differ from the study population with respect to geographical, temporal and ethnical conditions. Pondering external validity means asking the question whether these differences may influence study results. (iii) It should be assessed whether the study's conclusions can be generalized to target populations that do not meet all the eligibility criteria. ::: ::: Conclusion ::: Judging the external validity of study results cannot be done by applying given eligibility criteria to a single target population. Rather, it is a complex reflection in which prior knowledge, statistical considerations, biological plausibility and eligibility criteria all have place. <s> BIB016 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Introduction <s> What kinds of evidence reliably support predictions of effectiveness for health and social care interventions? 
There is increasing reliance, not only for health care policy and practice but also for more general social and economic policy deliberation, on evidence that comes from studies whose basic logic is that of JS Mill's method of difference. These include randomized controlled trials, case–control studies, cohort studies, and some uses of causal Bayes nets and counterfactual-licensing models like ones commonly developed in econometrics. The topic of this paper is the 'external validity' of causal conclusions from these kinds of studies. We shall argue two claims. Claim, negative: external validity is the wrong idea; claim, positive: 'capacities' are almost always the right idea, if there is a right idea to be had. If we are right about these claims, it makes big problems for policy decisions. Many advice guides for grading policy predictions give top grades to a proposed policy if it has two good Mill's-method-of-difference studies that support it. But if capacities are to serve as the conduit for support from a method-of-difference study to an effectiveness prediction, much more evidence, and much different in kind, is required. We will illustrate the complexities involved with the case of multisystemic therapy, an internationally adopted intervention to try to diminish antisocial behaviour in young people. <s> BIB017 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Introduction <s> 1. Introduction to probabilities, graphs, and causal models 2. A theory of inferred causation 3. Causal diagrams and the identification of causal effects 4. Actions, plans, and direct effects 5. Causality and structural models in the social sciences 6. Simpson's paradox, confounding, and collapsibility 7. Structural and counterfactual models 8. Imperfect experiments: bounds and counterfactuals 9. Probability of causation: interpretation and identification Epilogue: the art and science of cause and effect. <s> BIB018
One of the core concerns of all branches of medicine is causality. Pharmacovigilance aims to find the adverse effects of drugs BIB013, doctors diagnose patients based on their symptoms and history BIB004, comparative effectiveness involves determining the relative risks and benefits of treatments BIB014, basic medical research elucidates novel causes of disease, epidemiology seeks causal relationships between environmental and other factors and disease BIB006 BIB015, and health policy uses the information gained from these areas to determine effective strategies for promoting health and preventing disease BIB009. Biomedical informatics spans many of these areas, so advances in computational approaches to causal inference could have a major impact on everything from clinical decision support to public health. After hundreds of years of work in philosophy and medicine on how to address these questions, the prevailing wisdom is that when it comes to health, highly controlled experiments such as randomized controlled trials (RCTs) are the only ones that can answer them. While an ideal RCT can eliminate confounding, allowing reliable inference of causal relationships, this ideal is not always achieved in practice BIB005 and the internal validity that this ensures (that the study can answer the questions being asked) often comes at the expense of external validity (generalizability to other populations and situations) BIB016 BIB010. Even determining how to use the results of RCTs to treat patients is a difficult problem, leading to the development of checklists for assessing external validity and the proposal to combine RCTs with observational studies BIB007. As a result, it has been argued that RCTs should not be considered the ''gold standard'' for causal inference and that there is in fact no such standard BIB011 BIB017 BIB008. On the other hand, the increasing prevalence of electronic health records has allowed us to conduct studies on large heterogeneous populations, addressing some of the external validity problems of RCTs. However, relying on observational data for causal inference requires a reassessment of inference methods in order to ensure we maintain internal validity and understand the types of questions these data can answer. While there has been some recent work discussing how we can draw causal conclusions from observational data in the context of biomedical inference BIB012, there is also a significant and underutilized body of work from artificial intelligence and statistics BIB001 BIB018 BIB002 BIB003 on causal inference from primarily observational data. This article aims to bridge this gap by introducing biomedical researchers to current methods for causal inference, and discussing how these relate to informatics problems, focusing in particular on the inference of causal relationships from observational data such as from EHRs. In this work, when we refer to causal inference we mean the process of uncovering causal relationships from data (while causal explanation refers to reasoning about why particular events occurred) and our discussion will focus on algorithms for doing this in an automated way. We will introduce a number of concepts related to causality throughout the paper, and in Fig. 2 show how these processes are generally assumed to be connected in the methods described. Note that this depiction is not necessarily complete and the processes can be connected in other ways.
For example, there may be no connection between inference and explanation, or additional steps requiring experimentation on systems rather than only observational data. While many informatics problems implicitly involve determining causality, there has been less of a focus on discussing how we can do this than there has been in epidemiology. We will begin by reviewing some basic concepts in causality, covering a few of the ways both philosophers and epidemiologists have suggested that we can identify causes. We then turn our attention to the primary focus of this article: a survey of methods for automated inference of causal relationships from data and discussion of their relation and applicability to biomedical problems. For the sake of space we will not provide an exhaustive account of all inference methods, but aim to cover the primary approaches to inference and those most applicable to large-scale inference from observational data. As a result, we focus on Bayesian and dynamic Bayesian networks, Granger causality, and temporal-logic based inference. Some primary omissions are structural equation models (SEM) (which can be related to Bayesian networks) and potential outcomes approaches such as the Rubin Causal Model BIB002 BIB013. We begin by discussing the problem of finding general (type-level) relationships, which relates to finding the effects of causes, and then discuss the problem of explanation (finding token-level relationships), which aims to find the causes of effects. This is the difference between asking whether smoking will cause a person to develop lung cancer (what effect will result from the cause) versus asking whether an individual's lung cancer was caused by her years of smoking (the cause of an observed effect).
Methodological Review: A review of causal inference for biomedical informatics <s> Why causality? <s> This chapter discusses direction of time. For the construction of clock, the direction of flow was not important but the rate of flow was more important. Relation to the direction of the flow of time, the expression of the law in terms of the increase of disorder, is very much more helpful. There remains a fascinating possibility of linking cosmological processes to the direction of flow of time. All the evidence available suggests that the universe is expanding so that one has a unidirectional phenomenon on the cosmic scale. It is tempting to link this to the other unidirectional phenomenon that is so all pervading, time; but at present this is as far as scientists appear to have got. The chapter presents microscopic systems. These are made up of a very large number of microscopic systems, i.e. individual particles in inter-action with other individual particles. <s> BIB001 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Why causality? <s> Providing explanations of the conclusions of decision-support systems can be viewed as presenting inference results in a manner that enhances the user's insight into how these results were obtained. The ability to explain inferences has been demonstrated to be an important factor in making medical decision-support systems acceptable for clinical use. Although many researchers in artificial intelligence have explored the automatic generation of explanations for decision-support systems based on symbolic reasoning, research in automated explanation of probabilistic results has been limited. We present the results of an evaluation study of INSITE, a program that explains the reasoning of decision-support systems based on Bayesian belief networks. In the domain of anesthesia, we compared subjects who had access to a belief network with explanations of the inference results to control subjects who used the same belief network without explanations. We show that, compared to control subjects, the explanation subjects demonstrated greater diagnostic accuracy, were more confident about their conclusions, were more critical of the belief network, and found the presentation of the inference results more clear. <s> BIB002 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Why causality? <s> Background: Causal reasoning as a way to make a diagnosis seems convincing. Modern medicine depends on the search for causes of disease and it seems fair to assert that such knowledge is employed in diagnosis. Causal reasoning as it has been presented neglects to some extent the conception of multifactorial disease causes. <s> BIB003 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Why causality? <s> Background: The widespread availability of new computational methods and tools for data analysis and predictive modeling requires medical informatics researchers and practitioners to systematically select the most appropriate strategy to cope with clinical prediction problems. In particular, the collection of methods known as ‘data mining’ offers methodological and technical solutions to deal with the analysis of medical data and construction of prediction models. 
A large variety of these methods requires general and simple guidelines that may help practitioners in the appropriate selection of data mining tools, construction and validation of predictive models, along with the dissemination of predictive models within clinical environments. Purpose: The goal of this review is to discuss the extent and role of the research area of predictive data mining and to propose a framework to cope with the problems of constructing, assessing and exploiting data mining models in clinical medicine. Methods: We review the recent relevant work published in the area of predictive data mining in clinical medicine, highlighting critical issues and summarizing the approaches in a set of learned lessons. Results: The paper provides a comprehensive review of the state of the art of predictive data mining in clinical medicine and gives guidelines to carry out data mining studies in this <s> BIB004 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Why causality? <s> BACKGROUND ::: Electronic health record (EHR) databases contain vast amounts of information about patients. Machine learning techniques such as Boosting and support vector machine (SVM) can potentially identify patients at high risk for serious conditions, such as heart disease, from EHR data. However, these techniques have not yet been widely tested. ::: ::: ::: OBJECTIVE ::: To model detection of heart failure more than 6 months before the actual date of clinical diagnosis using machine learning techniques applied to EHR data. To compare the performance of logistic regression, SVM, and Boosting, along with various variable selection methods in heart failure prediction. ::: ::: ::: RESEARCH DESIGN ::: Geisinger Clinic primary care patients with data in the EHR data from 2001 to 2006 diagnosed with heart failure between 2003 and 2006 were identified. Controls were randomly selected matched on sex, age, and clinic for this nested case-control study. ::: ::: ::: MEASURES ::: Area under the curve (AUC) of receiver operator characteristic curve was computed for each method using 10-fold cross-validation. The number of variables selected by each method was compared. ::: ::: ::: RESULTS ::: Logistic regression with model selection based on Bayesian information criterion provided the most parsimonious model, with about 10 variables selected on average, while maintaining a high AUC (0.77 in 10-fold cross-validation). Boosting with strict variable importance threshold provided similar performance. ::: ::: ::: CONCLUSIONS ::: Heart failure was predicted more than 6 months before clinical diagnosis, with AUC of about 0.76, using logistic regression and Boosting. These results were achieved even with strict model selection criteria. SVM had the poorest performance, possibly because of imbalanced data. <s> BIB005
Before delving into the question of how to go about finding them, we may first wonder whether we need causes at all, or if associations could be used instead. Let us look at the three primary uses of causal relationships - prediction, explanation, and policy - and the degree to which each depends on the relationships used being causal. BIB003 Predictions, such as determining how likely it is that someone will develop lung cancer after exposure to secondhand smoke, can frequently be made on the basis of associations alone (and there is much work in informatics on doing this BIB004 BIB005), but this can be problematic as we do not know why the predictions work and thus cannot tell when they will stop working. For example, we may be able to predict the rate of lung cancer in a region based on the amount of matches sold, but the correspondence between matches and smoking may be unstable. As shown in Fig. 1, where arrows denote causal influence, many variables may affect match sales while lung cancer only depends on smoking. Thus match sales may initially seem to be a good predictor of lung cancer if that dependency is stronger than the others, but when there are anomalous events such as blackouts, there will be no corresponding change in lung cancer rates. Once we have data on smoking, information about match sales becomes redundant. Similarly, black box models based on associations may also have redundant variables, leading to unnecessary medical tests if these are then applied for diagnostic purposes. There are two types of explanations that we seek: explanations for the relationship between two phenomena (why they are associated) and explanations for particular events (why they occurred at all, or why they occurred in the manner they did). In the first case, we generally want explanations for inferences and predictive rules, particularly if these are to be used for tasks such as clinical decision support BIB002, but explaining the relationship between, say, matches and lung cancer means identifying the causal relationships between smoking and match sales and smoking and lung cancer. In general, explaining associations means describing how the elements either cause one another or have a common cause BIB001. For example, a seeming adverse drug event may in fact be a symptom of the disease being treated, making it associated with the drug prescribed even though both are caused by the underlying disease. In the case of explaining a particular event, we want to describe why a patient fell ill or diagnose her based on her symptoms and history (finding the cause of her illness). Associations are of no use in this case, as this type of explanation means providing information about the causal history of the event. Finally, in order to create effective policies or strategies, like population-level campaigns to discourage smoking or individual-level plans such as giving a patient quinine to treat her malaria, we need to know that smoking causes cancer (or some other negative outcome) and that the quinine has the ability to cure the patient. That is, if there were instead a gene that caused people to both enjoy smoking and to have a higher risk of cancer (with no other relationship between smoking and cancer), then suggesting that people not smoke would not be an effective way of lowering their cancer risk.
We need to know that our interventions are targeting something that can alter either the level or probability of the effect, and that we are not instead manipulating something merely associated with the effect.
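The fragility of purely associational predictors can be made concrete with a small simulation that mirrors the structure of Fig. 1: smoking causes lung cancer and also drives match sales, while an external event (a blackout) shifts match sales alone. This is a minimal sketch in Python; all of the probabilities and variable names are illustrative assumptions, not estimates from any real study.

```python
# Toy model of Fig. 1: smoking -> cancer, smoking -> match sales, plus an
# external shock (blackout) that affects match sales only. All numbers here
# are assumed for illustration.
import random

random.seed(0)

def simulate(n, blackout=False):
    """Generate (matches, smoking, cancer) triples."""
    data = []
    for _ in range(n):
        smoking = random.random() < 0.3
        # Smoking raises cancer risk; nothing else does in this toy model.
        cancer = random.random() < (0.15 if smoking else 0.02)
        # Match sales depend on smoking, unless a blackout drives them up
        # for everyone regardless of smoking status.
        p_matches = 0.9 if blackout else (0.7 if smoking else 0.2)
        matches = random.random() < p_matches
        data.append((matches, smoking, cancer))
    return data

def p_cancer_given(data, idx):
    """Empirical P(cancer | column idx is True); 0 = matches, 1 = smoking."""
    rows = [row for row in data if row[idx]]
    return sum(row[2] for row in rows) / len(rows)

normal = simulate(100_000)
shifted = simulate(100_000, blackout=True)
print("P(cancer | matches): %.3f -> %.3f"
      % (p_cancer_given(normal, 0), p_cancer_given(shifted, 0)))
print("P(cancer | smoking): %.3f -> %.3f"
      % (p_cancer_given(normal, 1), p_cancer_given(shifted, 1)))
```

Under the shifted distribution the match-sales estimate collapses toward the marginal cancer rate, while the smoking-based estimate is essentially unchanged - exactly the instability described above.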
Methodological Review: A review of causal inference for biomedical informatics <s> Probabilistic causality <s> This paper contains a suggested quantitative explication of probabilistic causality in terms of physical probability. The main result is to show that, starting from very reasonable desiderata, there is a unique meaning, up to a continuous increasing transformation, that can be attached to 'the tendency of one event to cause another one'. A reasonable explicatum will also be suggested for the degree to which one event caused another one. It may be possible to find other reasonable explicata for tendency to cause, but, if so, the assumptions made here will have to be changed. I believe that the first clear-cut application in science will be to the foundations of statistics, such as to an improved understanding of the function of randomisation, but I am content for the present to regard the work as contributing to the philosophy of science, and especially to what may be called the 'mathematics of philosophy'. Light may also be shed on problems of allocating blame and credit. I hope to consider applications to statistics on another occasion. In a previous note I have tried to give an interpretation of 'an event F caused another event E' without making reference to time. It was presumably clear from the last three paragraphs, which were added in <s> BIB001 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Probabilistic causality <s> In many domains we face the problem of determining the underlying causal structure from time-course observations of a system. Whether we have neural spike trains in neuroscience, gene expression levels in systems biology, or stock price movements in finance, we want to determine why these systems behave the way they do. For this purpose we must assess which of the myriad possible causes are significant while aiming to do so with a feasible computational complexity. At the same time, there has been much work in philosophy on what it means for something to be a cause, but comparatively little attention has been paid to how we can identify these causes. Algorithmic approaches from computer science have provided the first steps in this direction, but fail to capture the complex, probabilistic and temporal nature of the relationships we seek. ::: This dissertation presents a novel approach to the inference of general (type-level) and singular (token-level) causes. The approach combines philosophical notions of causality with algorithmic approaches built on model checking and statistical techniques for false discovery rate control. By using a probabilistic computation tree logic to describe both cause and effect, we allow for complex relationships and explicit description of the time between cause and effect as well as the probability of this relationship being observed (e.g. "a and b until c, causing d in 10–20 time units"). Using these causal formulas and their associated probabilities, we develop a novel measure for the significance of a cause for its effect, thus allowing discovery of those that are statistically interesting, determined using the concepts of multiple hypothesis testing and false discovery control. We develop algorithms for testing these properties in time-series observations and for relating the inferred general relationships to token-level events (described as sequences of observations).
Finally, we illustrate these ideas with example data from both neuroscience and finance, comparing the results to those found with other inference methods. The results demonstrate that our approach achieves superior control of false discovery rates, due to its ability to appropriately represent and infer temporal information. <s> BIB002
One of the main problems with regularity theories is that, whether this is due to our lack of knowledge about the full set of conditions required for a cause to produce its effect or is an underlying feature of the relationship itself, many relationships are probabilistic. While the regularity models introduced above allow us to attribute some fraction of the set to ''other causes,'' they do not allow us to reason quantitatively about how much of a difference each of those components makes to the probability of the effect given other potential explanations for it. The basic idea of probabilistic theories of causality BIB001 is that a cause raises the probability of, and occurs prior to, its effect. The condition that a cause, C, raises the probability of its effect, E, is described using conditional probabilities as: P(E|C) > P(E). Note that P(E) is sometimes replaced with P(E|¬C), which is equivalent in all non-deterministic cases BIB002. However, these conditions of temporal priority and probability raising are neither necessary nor sufficient for a causal relationship. One of the classic examples illustrating this is that of a falling barometer and rain. The barometer falling occurs before and may seem to raise the probability of rain, but decreasing air pressure is causing both. In biomedical cases, a scenario with a similar structure may be a disease causing two symptoms where one regularly precedes the other. The primary difference between the various probabilistic theories of causality is in how they distinguish between genuine and spurious causes. Suppes' approach is to look for earlier events that account at least as well for the effect, so that the later cause only increases the probability of the effect by some small epsilon. In the case of rain given above, including information about the barometer will not affect the probability of the effect once we know about the earlier event of decreasing air pressure. Another method, that of Eells, is to take sets of background contexts (comprised of all variables held fixed in all possible ways) and then test how much a potential cause raises the probability of its effect with respect to each of these, leading to a measure of the average degree of significance of a cause for its effect. It should be noted though that in both cases we must decide at what level of epsilon, or what average significance value, something should be considered causal. Similarly, the background context approach is difficult to implement in practice due to both computational complexity (N variables lead to 2^N background contexts) and the availability of data (each context will not be seen often enough to be statistically significant).
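A minimal sketch may help make the Eells-style background-context test concrete. Assuming records are represented as dictionaries of binary variables (an illustrative choice, not a prescribed format), the code below checks whether a candidate cause raises the probability of the effect within every assignment of the remaining variables; the loop itself exhibits the 2^N blow-up discussed above.

```python
# Sketch of Eells-style probability raising within background contexts.
# Records are assumed to be dicts of binary variables, e.g.
# {"smoking": True, "asbestos": False, "cancer": True}.
from itertools import product

def prob(records, event, given):
    """Empirical P(event | given), or None if the condition is never seen."""
    rows = [r for r in records if all(r[k] == v for k, v in given.items())]
    if not rows:
        return None
    return sum(1 for r in rows if r[event]) / len(rows)

def raises_in_all_contexts(records, cause, effect, background):
    """Check P(E | C, ctx) > P(E | not C, ctx) in each background context,
    i.e. each of the 2^N ways of holding the background variables fixed."""
    results = {}
    for values in product([False, True], repeat=len(background)):
        ctx = dict(zip(background, values))
        p_with = prob(records, effect, {**ctx, cause: True})
        p_without = prob(records, effect, {**ctx, cause: False})
        # Skip contexts never observed with and without the cause.
        if p_with is not None and p_without is not None:
            results[values] = p_with > p_without
    return results

# e.g. raises_in_all_contexts(records, "smoking", "cancer", ["asbestos", "radon"])
```

Even for modest N, most contexts will contain too few records for a stable estimate, which is the data-availability objection noted above; a fuller implementation would add a significance test in each context before averaging the degree of probability change.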
Methodological Review: A review of causal inference for biomedical informatics <s> Hill criteria <s> In this paper, criteria used by many epidemiologists as aids in causal inference are reviewed and revised. The revised scheme emphasizes the distinction between essential properties of a cause and criteria useful for deciding on the presence of these properties in a given case. A systematic procedure for causal inference tests each essential causal property in turn against appropriate criteria. For a pragmatic epidemiology in which all determinants serve as causes, their essential properties are held to be association, time order, and direction, in an ascending hierarchy. Criteria for association are probabilistic and can be enhanced by strength and consistency. Given association, criteria for time order of the relevant variables follow from access to observation, which is dependent on design. Given association and time order, causal direction (or consequential change) calls on an array of criteria, namely, consistency and survivability, strength, specificity in cause and in effect, predictive performance, and coherence in all its forms (e.g., theoretical, factual, biologic, and statistical). The evolution of such criteria is traced through the epidemiologic literature in the light of historical context. Although Popper's philosophy cannot directly serve an inherently inductive judgmental process, his notion of survivability has here been added, alongside replicability, as a subclass of consistency. This criterion is proposed to bridge the gap between the particularity of designs and the generality required of causal relations. Designs are ordered and described in the framework of testing survivability. Finally, definitions are offered for the list of criteria deployed. <s> BIB001 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Hill criteria <s> The paper discusses the evolving concept of causation in epidemiology and its potential interaction with logic and scientific philosophy. Causes are contingent but the necessity which binds them to their effects relies on contrary-to-fact conditionals, i.e. conditional statements whose antecedent is false. Chance instead of determinism plays a growing role in science and, although rarely acknowledged yet, in epidemiology: causes are multiple and chancy; a prior event causes a subsequent event if the probability distribution of the subsequent event changes conditionally upon the probability of the prior event. There are no known sufficient causes in epidemiology. We merely observe tendencies toward sufficiency or tendencies toward necessity: cohort studies evaluate the first tendencies, and case-control studies the latter. In applied sciences, such as medicine and epidemiology, causes are intrinsically connected with goals and effective strategies: they are recipes which have a potential harmful or successful use; they are contrastive since they make a difference between circumstances in which they are present and those in which they are absent: causes do not explain Event E but event E rather than event F. Causation is intrinsically linked with the notion of ``what is pathological''. Any definition of causation will inevitably collapse into the use made of epidemiologic methods. The progressive methodological sophistication of the last forty years is in perfect alignment with a gradual implicit overhaul of our concept of causation.
<s> BIB002 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Hill criteria <s> Austin Bradford Hill's landmark 1965 paper contains several important lessons for the current conduct of epidemiology. Unfortunately, it is almost exclusively cited as the source of the "Bradford-Hill criteria" for inferring causation when association is observed, despite Hill's explicit statement that cause-effect decisions cannot be based on a set of rules. Overlooked are Hill's important lessons about how to make decisions based on epidemiologic evidence. He advised epidemiologists to avoid over-emphasizing statistical significance testing, given the observation that systematic error is often greater than random error. His compelling and intuitive examples point out the need to consider costs and benefits when making decisions about health-promoting interventions. These lessons, which offer ways to dramatically increase the contribution of health science to decision making, are as needed today as they were when Hill presented them. <s> BIB003 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Hill criteria <s> Bradford Hill's considerations published in 1965 had an enormous influence on attempts to separate causal from non-causal explanations of observed associations. These considerations were often applied as a checklist of criteria, although they were by no means intended to be used in this way by Hill himself. Hill, however, avoided defining explicitly what he meant by "causal effect". This paper provides a fresh point of view on Hill's considerations from the perspective of counterfactual causality. I argue that counterfactual arguments strongly contribute to the question of when to apply the Hill considerations. Some of the considerations, however, involve many counterfactuals in a broader causal system, and their heuristic value decreases as the complexity of a system increases; the danger of misapplying them can be high. The impacts of these insights for study design and data analysis are discussed. The key analysis tool to assess the applicability of Hill's considerations is multiple bias modelling (Bayesian methods and Monte Carlo sensitivity analysis); these methods should be used much more frequently. <s> BIB004 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Hill criteria <s> OBJECTIVES ::: Most contemporary epidemiologic studies require complex analytical methods to adjust for bias and confounding. New methods are constantly being developed, and older more established methods are yet appropriate. Careful application of statistical analysis techniques can improve causal inference of comparative treatment effects from nonrandomized studies using secondary databases. A Task Force was formed to offer a review of the more recent developments in statistical control of confounding. ::: ::: ::: METHODS ::: The Task Force was commissioned and a chair was selected by the ISPOR Board of Directors in October 2007. This Report, the third in this issue of the journal, addressed methods to improve causal inference of treatment effects for nonrandomized studies.
::: ::: ::: RESULTS ::: The Task Force Report recommends general analytic techniques and specific best practices where consensus is reached including: use of stratification analysis before multivariable modeling, multivariable regression including model performance and diagnostic testing, propensity scoring, instrumental variable, and structural modeling techniques including marginal structural models, where appropriate for secondary data. Sensitivity analyses and discussion of extent of residual confounding are discussed. ::: ::: ::: CONCLUSIONS ::: Valid findings of causal therapeutic benefits can be produced from nonrandomized studies using an array of state-of-the-art analytic techniques. Improving the quality and uniformity of these studies will improve the value to patients, physicians, and policymakers worldwide. <s> BIB005 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Hill criteria <s> Nancy Cartwright begins her recent book, Hunting Causes and Using Them, by noting that while a few years ago real causal claims were in dispute, nowadays “causality is back, and with a vengeance.” In the case of the social sciences, Keith Morrison writes that “Social science asks ‘why?’. Detecting causality or its corollary—prediction—is the jewel in the crown of social science research.” With respect to the health sciences, Judea Pearl writes that the “research questions that motivate most studies in the health sciences are causal in nature.” However, not all data used by people interested in making causal claims come from experiments that use random assignment to control and treatment groups. Indeed, much research in the social and health science depends on non-experimental, observational data. Thus, one of the most important problems in the social and health sciences concerns making warranted causal claims using non-experimental, observational data; viz., “Can observational data be used to make etiological inferences leading to warranted causal claims?” This paper examines one method of warranting causal claims that is especially widespread in epidemiology and the health sciences generally—the use of causal criteria. It is argued that cases of complex causation generally, and redundant causation—both causal overdetermination and causal preemption—specifically, undermine the use of such criteria to warrant causal claims. <s> BIB006
Likely the most influential and widely applied work on identifying causality in the health sciences is that of Hill, an epidemiologist and statistician. While Hill's approach was intended to help epidemiologists determine the relationship between environmental conditions and a particular disease (where it is rarely possible to create randomized experiments or trials to test specific hypotheses), the recent use of observational data such as EHRs for inference has brought some of the goals of biomedical inference closer to those of epidemiology, while at the same time epidemiological studies on larger and larger cohorts now require the application of inference methods for analyzing these data. Hill described nine features that may be used to evaluate whether an association is causal. Note that these are not features of causes themselves, but rather ways we can recognize them. They are: (1) strength: how strong is the association between cause and effect; (2) consistency: the relationship is present in multiple places, and the results are replicable; (3) specificity: whether the cause leads to a particular effect or group of effects (e.g. smoking causing illness versus smoking causing lung cancer); (4) temporality: the cause precedes the effect; (5) biological gradient: does the level of the effect or risk of it occurring increase with an increase in the level of the cause (e.g. a dose-response curve); (6) plausibility: is there some mechanism that could potentially connect cause and effect, given current biological knowledge; (7) coherence: the relationships should not conflict with what we know of the disease; (8) experiment: experimental results are useful evidence toward a causal relationship; and (9) analogy: after finding the relationship between, say, HPV and cervical cancer we may more readily accept that a virus could cause another type of cancer. Another epidemiologist, Susser, independently described a similar set of criteria, taking association, temporal priority, and what he calls direction as essential properties of causal relationships, and distinguishing between these properties and the criteria - such as Hill's - used to find them. That is, Susser's three criteria are what he believes make something a cause, and points such as Hill's are essentially heuristics that help us find these features. Susser's first two points are shared by the probabilistic view and Hill's viewpoints, while direction - which stipulates that a change in the cause leads to a change in the effect and a change in the effect is a result of a change in the cause BIB001 - is most similar to counterfactual [45] and manipulability theories of causality. BIB005 There are also many similarities between these suggestions and both the probabilistic and regularity views of causality. BIB002 It is critical to note that, like the philosophical theories which all face counterexamples, Hill's list of viewpoints is not a checklist for causality, and none of these (aside from temporality) are required for something to be a cause. Rather, this is a list of points to help evaluate evidence toward causality.
Despite this, and Hill's statement that he does not believe there are hard and fast rules for evidence toward causality, the list has long been mislabeled as the ''Hill criteria.'' There has been recent work clarifying this point, with (among many others BIB004 BIB006) Rothman and Greenland addressing why each viewpoint is neither necessary nor sufficient, and Phillips and Goodman BIB003 discussing the broader picture of what we are missing by treating Hill's work as a causality checklist.
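Although, as just emphasized, Hill's viewpoints are not a checklist, a few of them can at least be screened for computationally when weighing evidence from observational records. The sketch below scores three of them - strength (as an odds ratio), temporality (exposure recorded before outcome), and biological gradient (risk non-decreasing with dose) - over hypothetical record dictionaries; the field names and the reduction of each viewpoint to a single number are illustrative assumptions only.

```python
# Heuristic screens for three of Hill's nine viewpoints. Record fields
# ("exposed", "outcome", "exposure_time", "outcome_time", "dose") are
# hypothetical; these scores are evidence to weigh, not a causality test.

def strength(records):
    """Strength: odds ratio of the outcome for exposed vs. unexposed."""
    a = sum(1 for r in records if r["exposed"] and r["outcome"])
    b = sum(1 for r in records if r["exposed"] and not r["outcome"])
    c = sum(1 for r in records if not r["exposed"] and r["outcome"])
    d = sum(1 for r in records if not r["exposed"] and not r["outcome"])
    return (a * d) / (b * c) if b and c else float("inf")

def temporality(records):
    """Temporality: fraction of exposed cases with exposure before outcome."""
    cases = [r for r in records if r["exposed"] and r["outcome"]]
    if not cases:
        return None
    return sum(1 for r in cases
               if r["exposure_time"] < r["outcome_time"]) / len(cases)

def gradient(records, levels):
    """Biological gradient: is risk non-decreasing across increasing doses?"""
    risks = []
    for level in levels:
        group = [r for r in records if r["dose"] == level]
        if group:
            risks.append(sum(1 for r in group if r["outcome"]) / len(group))
    return all(x <= y for x, y in zip(risks, risks[1:]))
```

Viewpoints such as plausibility, coherence, and analogy resist this kind of scoring, which is one more reason the list works as a guide for evaluating evidence rather than as an algorithm.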
Methodological Review: A review of causal inference for biomedical informatics <s> Singular causality and personalized medicine <s> An expert system, Causal Arrhythmia Analyzer (CAA), is being developed to establish a framework for the recognition of time varying signals of a complex repetitive nature, such as electrocardiograms (ECGs). ::: ::: Using a stratified knowledge base the CAA system discerns several perspectives about the phenomena of underlying entities, such as the physiological event knowledge of the cardiac conduction system and the morphological waveform knowledge of ECG tracings, where conduction events are projected into the observable waveform domain. Projection links have been defined to represent projection in CAA's frame-based formalism and are used to raise hypotheses across different KBs. ::: ::: The CAA system also introduces and uses causal links extensively to represent various causal and temporal relations between concepts in the physiological event domain. Its control structure uses causal links to predict unseen events from recognized events, to confirm these event hypotheses against input data, and to calculate the degree of integrity among causally related events. The meta-knowledge representation of statistical information about events facilitates a default reasoning mechanism and supports this expectation process providing context sensitive statistical information. ::: ::: The CAA system inherits its basic control mechanisms from the ALVEN (A Left VENtricular Wall Motion Analysis) system [Tsotsos 1981], such as the change/focus attention mechanism with similarity links and the hypothesis rating mechanism. ::: ::: A prototype CAA system with a limited number of abnormalities has been implemented using the knowledge representation language PSN (Procedural Semantic Networks) [Levesque & Mylopoulos 1979]. The prototype has so far demonstrated satisfactory results using independently sampled ECG data. <s> BIB001 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Singular causality and personalized medicine <s> Abstract : In order to address some existing problems in computer-aided medical decision making, a computer program called NESTOR has been developed to aid physicians in determining the most likely diagnostic hypothesis to account for a set of patient findings. The domain of hypercalcemic disorders is used to test solution methods that should be applicable to other medical areas. A key design philosophy underlying NESTOR is that the physicians should have control of the computer interaction to determine what is done and when. In order to provide such controllable, interactive aid, specific technical tasks to be addressed. The unifying philosophy in addressing them is the use of knowledge-based methods within a formal probability theory framework. A user interface module gives the physician control over when and how these tasks are used to aid in diagnosing the cause of a patient's condition. This dissertation presents the problems that are addressed by each of the three tasks, and the details of the methods used to address them. In addition, the results of an evaluation of the hypothesis scoring and search techniques are presented and discussed. Additional keywords: artificial intelligence; expert systems; medical applications; computer aided diagnosis; medical computer applications. 
<s> BIB002 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Singular causality and personalized medicine <s> s from how likely it is that the person is a smoker in the first place. More generally, the trouble is that the likelihood principle aims to by-pass decisions about prior probabilities. Perhaps there are some inferential contexts where such decisions can sensibly be avoided. But the inference from Harry's heart attack to his smoking isn't one of them. (From a Bayesian point of view, if in addition to knowing that Prob(H/S) = .9, we know that Prob(H) = .1 and that Prob(S) = 1/10^6, then Harry's heart attack should make us believe that he smoked to the degree 9/10^6.) What about the second case, where we know that Harry both smoked and had high cholesterol, and want to know which of these his heart attack was actually due to? Here the likelihood principle might seem to make more sense. We don't have to worry about prior probabilities, because we know that the causal factors are both present. Surely then the sensible thing is to assign the result to the cause that makes the most difference? But I don't accept the presupposition that the heart attack must be due either to the smoking or to the high cholesterol. As I have it, there are actually four possibilities in the single case: the background factors then present allow smoking to make a difference, but not high cholesterol; the background factors allow high cholesterol to make a difference, but not smoking; the background factors allow both to make a difference; the background factors allow neither to make a difference. All four possibilities are left open by the fact that smoking and high cholesterol are population causes of heart attacks. And I don't see how we can reach any conclusions about the single case without some further assumptions about the structure of background causes. Thus, for instance, if we believe that heart attacks are an indeterministic phenomenon, we will be likely to assume (though we won't have to) that both smoking and high cholesterol increase the chance of heart attacks for all sets of background conditions. And this will incline us to judge that Harry's smoking and his high cholesterol are both (part of) what caused his heart attack. On the other hand, if we think that heart attacks are always determined, then we will think that smoking only makes a difference given certain very specific background conditions (namely, given conditions together with which smoking determined a heart attack); and we will think the same about the conditions required for cholesterol to make a difference. And so in Harry's case we will probably think that his heart attack is due either to his smoking, or to his high cholesterol, but not to both. My way of reading the question of whether Harry's heart attack is due to his smoking or to his high cholesterol is different from Sober's. From my point of view, this question only arises when we are ignorant of certain facts about the particular case. We know that there are certain background factors in conjunction with which smoking makes a difference, and we know that there are certain background factors in conjunction with which high cholesterol makes a difference; but we don't know what those background factors are, and so we don't know whether Harry has them or not.
If we did know what they were, and whether Harry had them, then we would know what difference Harry's smoking and his high cholesterol made to the chance of his heart attack in the actual circumstances, and then we would know whether his smoking, or his high cholesterol, or both, were single-case causes of his heart attack. Sober, however, thinks that there is a further fact of the matter about what 'token caused' Harry's heart attack, even after we know all the probabilistically relevant facts about the particular case. It seems initially plausible to say that, even if smoking is a property cause of heart attacks, there is still a further question as to whether Harry's smoking caused Harry's heart attack. But I suspect that is because we read this as the question: were Harry's actual circumstances such as to allow his smoking to make a difference to his chance of a heart attack? This latter question is a good question, and one we might still want to ask even after we know that smoking is a population cause of heart attacks. But it's not Sober's question. For it's fully answered once we know all the probabilistic facts about the single case, whereas Sober's question is supposed to depend on some yet further fact of the <s> BIB003 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Singular causality and personalized medicine <s> To give a causal explanation of an event is to show how that event could be deduced from a set of universal laws and a set of initial conditions. How can this philosophical definition be related to the diagnostic causal explanations of a practicing expert? An expert physician gives a causal explanation of a disease state by referring to knowledge of the structure of the physiological mechanisms involved (the laws) and to the environment that the mechanism is exposed to (the initial state). The expert's knowledge, however, is qualitative and is thus able to express incomplete knowledge about the exact structure of the mechanism or the precise state of the patient. A qualitative representation for the structure of a mechanism, as well as the QSIM algorithm for deriving qualitative behavioral predictions from that structure and an initial state, are described. Looking in detail at the mechanism whereby the kidney maintains the body's water balance, qualitative predictions are demonstrated of 1) the response of the normal mechanism to a change in its environment, 2) the response of the diseased mechanism to the normal environment, and 3) the response of the diseased mechanism to therapy. To extend these methods to more complex problems, methods are required to abstract a complex mechanism into a hierarchy of simpler models. An abstraction relation based on relative time scales is presented which allows one equilibrium process in a complex system to view faster processes as instantaneous and slower processes as constant. <s> BIB004 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Singular causality and personalized medicine <s> One of the cornerstones of modern medicine is the search for what causes diseases to develop. A conception of multifactorial disease causes has emerged over the years. Theories of disease causation, however, have not quite been developed in accordance with this view.
It is the purpose of this paper to provide a fundamental explication of aspects of causation relevant for discussing causes of disease. <s> BIB005 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Singular causality and personalized medicine <s> Background: Causal reasoning as a way to make a diagnosis seems convincing. Modern medicine depends on the search for causes of disease and it seems fair to assert that such knowledge is employed in diagnosis. Causal reasoning as it has been presented neglects to some extent the conception of multifactorial disease causes. <s> BIB006 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Singular causality and personalized medicine <s> The paper is devoted to presentation of temporal causal networks (TCN) for representing and dealing with causal dependencies propagation over time. A temporal causal network is a causal network incorporating explicit representation of intervals of time during which its symptoms/nodes are valid, not valid, and unknown. The temporal knowledge is represented with use of the so called characteristic functions formalism, which allows for classical binary valued logic specification and can be easily extended towards multivalued or fuzzy logic. The presented approach makes use of specific time constraints propagation algorithm for determining causality propagation over the network in time. This is done by evaluating the characteristic functions used as means for representing time constraints for network nodes. The main application includes simulation, monitoring and elements of diagnostic reasoning for dynamic systems with explicit time representation. <s> BIB007 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Singular causality and personalized medicine <s> Abstract We have added temporal reasoning to the Heart Disease Program (HDP) to take advantage of the temporal constraints inherent in cardiovascular reasoning. Some processes take place over minutes while others take place over months or years and a strictly probabilistic formalism can generate hypotheses that are impossible given the temporal relationships involved. The HDP has temporal constraints on the causal relations specified in the knowledge base and temporal properties on the patient input provided by the user. These are used in two ways. First, they are used to constrain the generation of the pre-computed causal pathways through the model that speed the generation of hypotheses. Second, they are used to generate time intervals for the instantiated nodes in the hypotheses, which are matched and adjusted as nodes are added to each evolving hypothesis. This domain offers a number of challenges for temporal reasoning. Since the nature of diagnostic reasoning is inferring a causal explanation from the evidence, many of the temporal intervals have few constraints and the reasoning has to make maximum use of those that exist. Thus, the HDP uses a temporal interval representation that includes the earliest and latest beginning and ending specified by the constraints. Some of the disease states can be corrected but some of the manifestations may remain. For example, a valve disease such as aortic stenosis produces hypertrophy that remains long after the valve has been replaced. This requires multiple time intervals to account for the existing findings. 
This paper discusses the issues and solutions that have been developed for temporal reasoning integrated with a pseudo-Bayesian probabilistic network in this challenging domain for diagnosis. <s> BIB008 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Singular causality and personalized medicine <s> A slight fault may even cause critical disruptions or remediless damages to the network while a network manager is lost in a large number of alarms. Therefore, the development of a practical and effective system for network fault diagnosis becomes an urgent job. We develop a hierarchical domain-oriented reasoning mechanism suitable for the delegated management architecture. It is based on the causality graph of the sensibly-reduced network fault propagation model from the result of our empirical study. An automated fault diagnosis system called ACView (Alarm Correlation View) for isolating network faults in a multidomain environment is proposed according to the hierarchical reasoning mechanism. This diagnosis system provides not only the process of automated alarm collection and correlation, but also the function of efficient fault localization and identification. <s> BIB009 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Singular causality and personalized medicine <s> The literature on causation distinguishes between causal claims relating properties or types and causal claims relating individuals or tokens. Many authors maintain that corresponding to these two kinds of causal claims are two different kinds of causal relations. Whether to regard causal relations among variables as yet another variety of causation is also controversial. This essay maintains that causal relations obtain among tokens and that type causal claims are generalizations concerning causal relations among these tokens. <s> BIB010 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Singular causality and personalized medicine <s> Making Things Happen: A Theory of Causal Explanation. <s> BIB011
Since the first human genome was sequenced, there has been a surge of interest in ''personalized medicine'': understanding each patient's diagnosis, prognosis, treatment, and general health in an individualized way. However, our knowledge of what treatments work, how often patients die from a particular condition, and what can lead to a certain set of symptoms generally comes from observing sets of patients and combining multiple sources of information. This general (type-level) information, though, is not immediately applicable at the singular (token) level. Making use of causal inferences for personalized reasoning requires understanding how to relate type-level relationships to token-level cases, a task that has been addressed primarily in philosophy. This is the difference between, say, finding that smoking causes lung cancer and determining that smoking caused a particular patient's lung cancer at age 42. While token causal explanation (or reasoning) is essential to medicine, and something humans do constantly in everyday life, it has been difficult to create algorithms for it without human input, since it requires substantial background knowledge and commonsense reasoning. For example, while smoking could be the likeliest cause of lung cancer, a particular patient may have smoked for only a week but had a significant amount of radon exposure, which caused her to develop lung cancer. A doctor looking at the data could make sense of it, but it is difficult for machines, since both prior knowledge and information about the patient will always be incomplete and may deviate from the likeliest scenarios. That is, even without detailed information about the timing of smoking and the development of lung cancer, a doctor could rule out that hypothesis and ask further questions of the patient, while an automated method cannot replicate this type of commonsense reasoning. Even doing this manually is difficult. As mentioned in the introduction, RCTs are one of the primary sources of information used when determining the best course of action for individual patients, but to know that a therapy will work in an individual case as it did in a trial, we need to know not just that it worked but why it worked, so that we can ensure that the conditions necessary for effectiveness are present and that no conditions preventing efficacy are present. However, if we aim to determine, say, whether a patient's symptoms are an instance of an adverse drug event or are due to the underlying disease being treated, we need to tackle this problem. There has been no consensus among philosophers about how to relate the type and token levels, leading to a plurality of approaches, such as treating type-level relationships as generalizations of token-level ones BIB010, learning type-level relationships first and then applying these to token-level cases BIB003 BIB011, or treating the two levels as separate things, each requiring its own theory. Computational techniques in this area come primarily from the knowledge representation and reasoning community, which is focused on the problem of fault diagnosis (finding the cause of malfunctions in computer systems based on their visible errors). There are a number of approaches to this problem BIB007 BIB009, but in general they assume there is a model of the system relative to which its behavior is being explained.
The biomedical case is much more difficult, as we must build this model ourselves, and causality here is more complex than simply changing the truth values of binary variables. There has been some work on distinguishing between the two levels of causality in the context of medicine BIB005, and on relating the idea of token causality to the problem of diagnosis BIB006, though the problems of inference and diagnosis based on causal information have generally been treated separately. A number of methods have been proposed for automating explanation for the purpose of medical diagnosis, using techniques such as qualitative simulation BIB004 and expert systems working from databases of causal knowledge BIB001 or probabilistic models BIB002 BIB008. However, like the case of fault diagnosis, these approaches generally begin with a body of knowledge or a model, but creating such models is difficult when we have only observational data and partial knowledge. Instead, it is desirable to connect causal relationships or structures inferred from data to token-level explanation. This is particularly useful in the case of inference from EHR data, where the population being studied is the same one being treated.
Methodological Review: A review of causal inference for biomedical informatics <s> Bayesian networks <s> This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems. <s> BIB001 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Bayesian networks <s> What assumptions and methods allow us to turn observations into causal knowledge, and how can even incomplete causal knowledge be used in planning and prediction to influence and control our environment? In this book Peter Spirtes, Clark Glymour, and Richard Scheines address these questions using the formalism of Bayes networks, with results that have been applied in diverse areas of research in the social, behavioral, and physical sciences. The authors show that although experimental and observational study designs may not always permit the same inferences, they are subject to uniform principles. They axiomatize the connection between causal structure and probabilistic independence, explore several varieties of causal indistinguishability, formulate a theory of manipulation, and develop asymptotically reliable procedures for searching over equivalence classes of causal models, including models of categorical data and structural equation models with and without latent variables. The authors show that the relationship between causality and probability can also help to clarify such diverse topics in statistics as the comparative power of experimentation versus observation, Simpson's paradox, errors in regression models, retrospective versus prospective sampling, and variable selection. The second edition contains a new introduction and an extensive survey of advances and applications that have appeared since the first edition was published in 1993. <s> BIB002 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Bayesian networks <s> We propose a new definition of actual causes, using structural equations to model counterfactuals. We show that the definition yields a plausible and elegant account of causation that handles well examples which have caused problems for other definitions and resolves major difficulties in the traditional account. <s> BIB003 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Bayesian networks <s> <s> BIB004 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Bayesian networks <s> Causal Probabilistic Networks (CPNs), (a.k.a. Bayesian Networks, or Belief Networks) are well-established representations in biomedical applications such as decision support systems and predictive modeling or mining of causal hypotheses. CPNs (a) have well-developed theory for induction of causal relationships, and (b) are suitable for creating sound and practical decision support systems. While several public domain and commercial tools exist for modeling and inference with CPNs, very few software tools and libraries exist currently that give access to algorithms for CPN induction. To that end, we have developed a software library, called Causal Explorer, that implements a suite of global, local and partial CPN induction algorithms. The toolkit emphasizes causal discovery algorithms. Approximately half of the algorithms are enhanced implementations of well-established algorithms, and the remaining ones are novel local and partial algorithms that scale to thousands of variables and thus are particularly suitable for modeling in massive datasets. <s> BIB005 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Bayesian networks <s> This study focused on the development and application of an efficient algorithm to induce causal relationships from observational data. The algorithm, called BLCD, is based on a causal Bayesian network framework. BLCD initially uses heuristic greedy search to derive the Markov Blanket (MB) of a node that serves as the "locality" for the identification of pair-wise causal relationships. BLCD takes as input a dataset and outputs potential causes of the form variable X causally influences variable Y. Identification of the causal factors of diseases and outcomes can help formulate better management, prevention and control strategies for the improvement of health care. In this study we focused on investigating factors that may contribute causally to infant mortality in the United States. We used the U.S. Linked Birth/Infant Death dataset for 1991 with more than four million records and about 200 variables for each record. Our sample consisted of 41,155 records randomly selected from the whole dataset. Each record had maternal, paternal and child factors and the outcome at the end of the first year--whether the infant survived or not. Using the infant birth and death dataset as input, BLCD output six purported causal relationships. Three out of the six relationships seem plausible. Even though we have not yet discovered a clinically novel causal link, we plan to look for novel causal pathways using the full sample. <s> BIB006 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Bayesian networks <s> Although BNs have been used successfully for many medical diagnosis problems, there have been few applications to epidemiological data where data mining methods play a significant role.
In this paper, we look at the application of BNs to epidemiological data, specifically assessment of risk for coronary heart disease (CHD). We build the BNs: (1) by knowledge engineering BNs from two epidemiological models of CHD in the literature; (2) by applying a causal BN learner. We evaluate these BNs using cross-validation. We compared performance in predicting CHD events over 10 years, measuring area under the ROC curve and Bayesian information reward. The knowledge engineered BNs performed as well as logistic regression, while being easier to interpret. These BNs will serve as the baseline in future efforts to extend BN technology to better handle epidemiological data, specifically to model CHD. <s> BIB007 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Bayesian networks <s> We describe anytime search procedures that (1) find disjoint subsets of recorded variables for which the members of each subset are d-separated by a single common unrecorded cause, if such exists; (2) return information about the causal relations among the latent factors so identified. We prove the procedure is point-wise consistent assuming (a) the causal relations can be represented by a directed acyclic graph (DAG) satisfying the Markov Assumption and the Faithfulness Assumption; (b) unrecorded variables are not caused by recorded variables; and (c) dependencies are linear. We compare the procedure with standard approaches over a variety of simulated structures and sample sizes, and illustrate its practical value with brief studies of social science data sets. Finally, we consider generalizations for non-linear systems. <s> BIB008 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Bayesian networks <s> Much of the recent work on the epistemology of causation has centered on two assumptions, known as the Causal Markov Condition and the Causal Faithfulness Condition. Philosophical discussions of the latter condition have exhibited situations in which it is likely to fail. This paper studies the Causal Faithfulness Condition as a conjunction of weaker conditions. We show that some of the weaker conjuncts can be empirically tested, and hence do not have to be assumed a priori. Our results lead to two methodologically significant observations: (1) some common types of counterexamples to the Faithfulness condition constitute objections only to the empirically testable part of the condition; and (2) some common defenses of the Faithfulness condition do not provide justification or evidence for the testable parts of the condition. It is thus worthwhile to study the possibility of reliable causal inference under weaker Faithfulness conditions. As it turns out, the modification needed to make standard procedures work under a weaker version of the Faithfulness condition also has the practical effect of making them more robust when the standard Faithfulness condition actually holds. This, we argue, is related to the possibility of controlling error probabilities with finite sample size ("uniform consistency") in causal inference. <s> BIB009 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Bayesian networks <s> This article examines definitions of cause in the epidemiological literature. Those definitions describe causes as factors that make a difference to the distribution of disease or to individual health status. In philosophical terms, they are "difference-makers." 
I argue that those definitions are underpinned by an epistemology and a methodology that hinge upon the notion of variation, contra the dominant Humean paradigm according to which we infer causality from regularity. Furthermore, despite the fact that causes are defined in terms of difference-making, this doesn't fix the causal metaphysics but rather reflects the "variational" epistemology and methodology of epidemiology. I suggest that causality in epidemiology ought to be interpreted according to Williamson's epistemic theory. In this approach, causal attribution depends on the available evidence and on the methods used. In turn, evidence to establish causal claims requires both difference-making and mechanistic considerations. <s> BIB010 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Bayesian networks <s> 1. Introduction to probabilities, graphs, and causal models 2. A theory of inferred causation 3. Causal diagrams and the identification of causal effects 4. Actions, plans, and direct effects 5. Causality and structural models in the social sciences 6. Simpson's paradox, confounding, and collapsibility 7. Structural and counterfactual models 8. Imperfect experiments: bounds and counterfactuals 9. Probability of causation: interpretation and identification Epilogue: the art and science of cause and effect. <s> BIB011 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Bayesian networks <s> We show that there is a general, informative and reliable procedure for discovering causal relations when, for all the investigator knows, both latent variables and selection bias may be at work. Given information about conditional independence and dependence relations between measured variables, even when latent variables and selection bias may be present, there are sufficient conditions for reliably concluding that there is a causal path from one variable to another, and sufficient conditions for reliably concluding when no such causal path exists. <s> BIB012
One of the first steps toward computational causal inference was the development of theories connecting graphical models to causal concepts BIB011 BIB002. Graphical model based causal inference has found applications in areas such as epidemiology BIB007 and finding causes of infant mortality BIB006, and there are a number of software tools available for doing this inference BIB005. These methods take a set of data and produce a directed acyclic graph (DAG) called a Bayesian network (BN) showing the causal structure of the system. BNs are used to describe the independence relations among a set of variables, where variables are represented by nodes and edges between them represent conditional dependence (and missing edges denote independence). Fig. 4 shows a simple BN depicting that smoking causes both lung cancer and stained fingers, while lung cancer and stained fingers are independent conditional on smoking. The basic premise is that using conditional dependencies and a few assumptions, the edges can be directed from cause to effect without necessarily relying on temporal data (though this can be used when available). In order to infer these graphs from data, three main assumptions are required: the causal Markov condition (CMC), faithfulness, and causal sufficiency. CMC states that a node in the graph is independent of all of its non-descendants (everything other than its direct and indirect effects) given its direct causes (parents). This means, for example, that two effects of a common cause (parent) will be independent given the state of that cause. For example, take the structure in Fig. 5. Here C and D are independent given B, while E is independent of all other variables given C. On the other hand, note that every node is either a parent or descendant of B. This allows the probability distribution over a set of variables to be factored and compactly represented in graphical form. If in general we wanted to calculate the probability of node C conditional on all of the variables in this dataset, we would have P(C|A,B,D,E). However, given this graph and CMC, we know that C is independent of the rest of the variables given A and B, and thus this is equivalent to P(C|A,B). This means that we can factor the probability distribution for a set of variables as P(X_1, ..., X_n) = ∏_i P(X_i | pa(X_i)), where pa(X_i) is the set of parents of X_i in the graph. Note that this connects directly to using graphical models for prediction, where we aim to calculate the probability of a future event given the current state of variables in the model. The faithfulness condition stipulates that the dependence relationships in the underlying structure of the system (the causal Bayesian network) hold in the data. Note that this holds only in the large sample limit, as with little data the observations cannot be assumed to be indicative of the true probabilities. If there are cases where a cause can act through two paths: one where it increases the probability of an effect directly, and one where it increases the probability of an intermediate variable that lowers the probability of the effect, then there could be distributions where these effects exactly cancel out, so that the cause and effect will seem independent. This scenario, referred to as Simpson's paradox, is illustrated in Fig. 6.
There, birth control pills can cause thrombosis, but they also prevent pregnancy, which is itself a cause of thrombosis.
Fig. 6. Illustration of the Simpson's paradox example, where B lowers the probability of P, while both P and B raise the probability of T.
Table 1. Primary features of the algorithms discussed: (a) how they handle time; (b) whether they infer structures such as graphs or individual relationships; (c) whether they take continuous (C), discrete (D), or mixed (M) data; (d) whether they allow cycles (feedback loops); (e) whether they attempt to find latent variables; (f) whether they infer only causal relationships (directed) or also correlations (mixed); (g) whether they can be used to directly calculate the probability of future events; (h) how they are connected to token causality (explanation).
Thus, depending on the exact distribution in a dataset, these paths may cancel out so that there seems to be no impact (or even a preventative effect) of birth control pills on thrombosis. Finally, causal sufficiency means that all common causes of pairs of variables in the set are included in the analysis. For example, using the example of Fig. 4, where smoking causes both lung cancer and stained fingers, a dataset that includes data only on stained fingers and lung cancer, without data on smoking, would not be causally sufficient. This assumption is needed since otherwise two effects of a common cause will seem to be dependent when that common cause is not included. In cases where these assumptions do not hold, a set of graphs representing the dependencies in the data, along with nodes for possible unmeasured common causes, will be inferred. Note that some algorithms for inferring causal Bayesian networks, such as FCI BIB002 BIB012, do not assume sufficiency and can determine whether there are latent (unmeasured) variables, and thus can also determine whether there is an unconfounded relationship between variables. Since a set of graphs is inferred, one can determine whether two variables have an unconfounded relationship in all graphs explaining the data. A similar way that faithfulness can fail is through selection bias, something that is particularly important when analyzing the types of observational data found in the biomedical sciences. Importantly, this can occur without any missing causes or common causes. For example, if we collect data from an emergency department (ED), it may seem as though fever and abdominal pain are statistically dependent, due to the fact that only patients with both symptoms come to the ED, while patients with only one of the symptoms stay home. In all cases, the theoretical guarantees on when causal inferences can be made given these assumptions hold in the large sample limit (as the amount of data approaches infinity) BIB002. The main idea of BN inference from data is to find the graph or set of graphs that best explains the data, but there are a number of ways this can be done. The two primary types of methods are: (1) assigning scores to graphs and searching over the set of possible graphs while attempting to maximize the chosen scoring function, and (2) beginning with an undirected fully connected graph and using repeated conditional independence tests to remove and orient edges in the graph. In the first approach, the idea is that one can begin by generating a possible graph, and then explore the search space by altering this initial graph. The primary differences between algorithms of this type are how the space of graphs is explored (e.g.
beginning with a graph and examining adjacent graphs, periodically restarting to avoid convergence to local minima), and what scoring function is used to evaluate graphs. Two of the main methods for scoring graphs are the Bayesian approach of Cooper and Herskovits BIB001, which calculates the probability of the graph given the data and some prior beliefs about the distribution; and the Bayesian information criterion (BIC), which, being based on the minimum description length, penalizes larger models and aims to find the smallest graph accurately representing the data. Note that this minimality criterion is important since if one simply maximizes the likelihood of the model given the data, this will severely overfit to the particular dataset being observed. The second type of method, based on conditional independence tests, is exemplified by the PC algorithm BIB002. The general idea is to begin with a graph in which every pair of variables is connected by an undirected edge and then, at each iteration, to test whether a pair of variables currently connected by an edge can be rendered independent by conditioning on some set of adjacent variables (iteratively increasing the size of this conditioning set); if so, that edge is removed. After removing edges from the fully connected graph, the remaining edges can then be directed from cause to effect (a minimal sketch of this edge-removal loop is given at the end of this section). One of the primary criticisms of BN methods is that the assumptions made may not normally hold and may be unrealistic to demand BIB004. In practical cases, we may not know if a set of variables is causally sufficient or if a distribution is faithful. However, as mentioned, there are algorithms such as FCI BIB002 BIB012 and others BIB008 that do not assume causal sufficiency and attempt to infer latent variables (though more work is still needed to adapt this for use with time series that have many variables). Similarly, other research has addressed the second primary critique by developing approaches for determining when the faithfulness condition can be tested BIB009. While there has not been nearly as much attention to relating this framework to token causality, one exception is the work of Halpern and Pearl BIB003, which links graphical models to counterfactual theories of causality [45] using structural equation models BIB011 BIB010. Broadly, the counterfactual view of causality says that had the cause not happened, the effect would not have happened either. Pearl's adaptation allows one to test these types of statements in Bayesian networks. For example, one could test whether a patient would still have developed lung cancer had she not smoked. While nothing precludes incorporating temporal information, the theory does not naturally allow for this, as the inferred relationships and structures do not explicitly include time. In many cases, such as diagnosis, there are complex sets of factors that must act in sequence in order to produce an effect. The primary difficulty with Pearl's approach is that we must know which variables are true and false in the token case (e.g. a patient smoked, had low cholesterol and was not exposed to asbestos), but determining these truth values without temporal information is difficult. We might find that smoking causes lung cancer and then want to determine whether a particular patient's smoking caused his lung cancer.
It seems unlikely that his beginning smoking at 9 am should cause his lung cancer at 12 pm, but there is no way to automatically exclude such a case without incorporating timing information. This is an extreme example that can be ruled out with common sense, but it becomes more difficult as we must determine the threshold at which we will consider an event to be an instance of the type-level variable. In summary, when its underlying assumptions hold, the BN framework can provide a complete set of tools for inferring causal relationships, using these to explain individual cases, and making predictions based on the inferred causal models.
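To make the constraint-based approach described above concrete, the following is a minimal illustrative sketch of the edge-removal loop on discrete data, using a chi-squared conditional independence test. It is not the full PC algorithm BIB002 (the edge-orientation phase is omitted, and the conditional independence test is deliberately crude); the variable names, the ci_test helper, and the toy data generating process are all assumptions made for illustration.

from itertools import combinations

import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def ci_test(data, x, y, given, alpha=0.05):
    # Crude test of whether x is independent of y given a set of variables:
    # stratify by each joint setting of the conditioning set and apply a
    # chi-squared test within each stratum. A real implementation would use
    # a better-calibrated test; this is only a sketch.
    strata = [data] if not given else [g for _, g in data.groupby(list(given))]
    for stratum in strata:
        table = pd.crosstab(stratum[x], stratum[y])
        if table.shape[0] < 2 or table.shape[1] < 2:
            continue  # not enough variation in this stratum to test
        _, p, _, _ = chi2_contingency(table)
        if p < alpha:
            return False  # dependence detected in some stratum
    return True

def skeleton(data):
    # PC-style skeleton discovery: start fully connected, remove an edge
    # whenever some conditioning set renders its endpoints independent.
    variables = list(data.columns)
    edges = {frozenset(pair) for pair in combinations(variables, 2)}
    depth = 0
    while depth <= len(variables) - 2:
        for edge in list(edges):
            x, y = tuple(edge)
            # Candidate conditioning variables: current neighbors of x, minus y
            neighbors = {v for e in edges if x in e for v in e} - {x, y}
            for cond in combinations(sorted(neighbors), depth):
                if ci_test(data, x, y, cond):
                    edges.discard(edge)  # independence found: drop the edge
                    break
        depth += 1
    return edges

# Toy data consistent with Fig. 4: smoking causes lung cancer and stained fingers.
rng = np.random.default_rng(0)
smoking = rng.random(5000) < 0.3
cancer = rng.random(5000) < np.where(smoking, 0.4, 0.05)
fingers = rng.random(5000) < np.where(smoking, 0.8, 0.1)
data = pd.DataFrame({"smoking": smoking, "cancer": cancer, "fingers": fingers})
print(skeleton(data))  # expect edges smoking-cancer and smoking-fingers only

On data like this, the cancer-fingers edge survives the marginal (depth 0) test but is removed at depth 1, since conditioning on their common cause (smoking) renders the two effects independent, exactly as the CMC predicts.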
Methodological Review: A review of causal inference for biomedical informatics <s> Dynamic Bayesian networks <s> Motivation: Signaling pathways are dynamic events that take place over a given period of time. In order to identify these pathways, expression data over time are required. Dynamic Bayesian network (DBN) is an important approach for predicting the gene regulatory networks from time course expression data. However, two fundamental problems greatly reduce the effectiveness of current DBN methods. The first problem is the relatively low accuracy of prediction, and the second is the excessive computational time. Results: In this paper, we present a DBN-based approach with increased accuracy and reduced computational time compared with existing DBN methods. Unlike previous methods, our approach limits potential regulators to those genes with either earlier or simultaneous expression changes (up- or down-regulation) in relation to their target genes. This allows us to limit the number of potential regulators and consequently reduce the search space. Furthermore, we use the time difference between the initial change in the expression of a given regulator gene and its potential target gene to estimate the transcriptional time lag between these two genes. This method of time lag estimation increases the accuracy of predicting gene regulatory networks. Our approach is evaluated using time-series expression data measured during the yeast cell cycle. The results demonstrate that this approach can predict regulatory networks with significantly improved accuracy and reduced computational time compared with existing DBN approaches. Availability: The programs described in this paper can be obtained from the corresponding author upon request. Contact: sconzen@medicine.bsd.uchicago.edu <s> BIB001 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Dynamic Bayesian networks <s> Motivation: Many biomedical and clinical research problems involve discovering causal relationships between observations gathered from temporal events. Dynamic Bayesian networks are a powerful modeling approach to describe causal or apparently causal relationships, and support complex medical inference, such as future response prediction, automated learning, and rational decision making. Although many engines exist for creating Bayesian networks, most require a local installation and significant data manipulation to be practical for a general biologist or clinician. No software pipeline currently exists for interpretation and inference of dynamic Bayesian networks learned from biomedical and clinical data. Results: miniTUBA is a web-based modeling system that allows clinical and biomedical researchers to perform complex medical/clinical inference and prediction using dynamic Bayesian network analysis with temporal datasets. The software allows users to choose different analysis parameters (e.g. Markov lags and prior topology), and continuously update their data and refine their results. miniTUBA can make temporal predictions to suggest interventions based on an automated learning process pipeline using all data provided. Preliminary tests using synthetic data and laboratory research data indicate that miniTUBA accurately identifies regulatory network structures from temporal data.
Availability: miniTUBA is available at http://www.minituba.org Contact: yongqunh@med.umich.edu <s> BIB002 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Dynamic Bayesian networks <s> Prognostic models in medicine are usually been built using simple decision rules, proportional hazards models, or Markov models. Dynamic Bayesian networks (DBNs) offer an approach that allows for the incorporation of the causal and temporal nature of medical domain knowledge as elicited from domain experts, thereby allowing for detailed prognostic predictions. The aim of this paper is to describe the considerations that must be taken into account when constructing a DBN for complex medical domains and to demonstrate their usefulness in practice. To this end, we focus on the construction of a DBN for prognosis of carcinoid patients, compare performance with that of a proportional hazards model, and describe predictions for three individual patients. We show that the DBN can make detailed predictions, about not only patient survival, but also other variables of interest, such as disease progression, the effect of treatment, and the development of complications. Strengths and limitations of our approach are discussed and compared with those offered by traditional methods. <s> BIB003 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Dynamic Bayesian networks <s> Diagnosing ventilator-associated pneumonia in mechanically ventilated patients in intensive care units is seen as a clinical challenge. The difficulty in diagnosing ventilator-associated pneumonia stems from the lack of a simple yet accurate diagnostic test. To assist clinicians in diagnosing and treating patients with pneumonia, a decision-theoretic network had been designed with the help of domain experts. A major limitation of this network is that it does not represent pneumonia as a dynamic process that evolves over time. In this paper, we construct a dynamic Bayesian network that explicitly captures the development of the disease over time. We discuss how probability elicitation from domain experts served to quantify the dynamics involved and how the nature of the patient data helps reduce the computational burden of inference. We evaluate the diagnostic performance of our dynamic model for a number of real patients and report promising results. <s> BIB004 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Dynamic Bayesian networks <s> Dynamic Bayesian networks have been applied widely to reconstruct the structure of regulatory processes from time series data. The standard approach is based on the assumption of a homogeneous Markov chain, which is not valid in many real-world scenarios. Recent research efforts addressing this shortcoming have considered undirected graphs, directed graphs for discretized data, or over-flexible models that lack any information sharing among time series segments. In the present article, we propose a non-stationary dynamic Bayesian network for continuous data, in which parameters are allowed to vary among segments, and in which a common network structure provides essential information sharing across segments. Our model is based on a Bayesian multiple change-point process, where the number and location of the change-points is sampled from the posterior distribution. 
<s> BIB005 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Dynamic Bayesian networks <s> Coordination among cortical neurons is believed to be a key element in mediating many high-level cortical processes such as perception, attention, learning, and memory formation. Inferring the structure of the neural circuitry underlying this coordination is important to characterize the highly nonlinear, time-varying interactions between cortical neurons in the presence of complex stimuli. In this work, we investigate the applicability of dynamic Bayesian networks (DBNs) in inferring the effective connectivity between spiking cortical neurons from their observed spike trains. We demonstrate that DBNs can infer the underlying nonlinear and time-varying causal interactions between these neurons and can discriminate between mono-and polysynaptic links between them under certain constraints governing their putative connectivity. We analyzed conditionally Poisson spike train data mimicking spiking activity of cortical networks of small and moderately large size. The performance was assessed and compared to other methods under systematic variations of the network structure to mimic a wide range of responses typically observed in the cortex. Results demonstrate the utility of DBN in inferring the effective connectivity in cortical networks. <s> BIB006 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Dynamic Bayesian networks <s> Learning dynamic Bayesian network structures provides a principled mechanism for identifying conditional dependencies in time-series data. An important assumption of traditional DBN structure learning is that the data are generated by a stationary process, an assumption that is not true in many important settings. In this paper, we introduce a new class of graphical model called a non-stationary dynamic Bayesian network, in which the conditional dependence structure of the underlying data-generation process is permitted to change over time. Non-stationary dynamic Bayesian networks represent a new framework for studying problems in which the structure of a network is evolving over time. Some examples of evolving networks are transcriptional regulatory networks during an organism's development, neural pathways during learning, and traffic patterns during the day. We define the non-stationary DBN model, present an MCMC sampling algorithm for learning the structure of the model from time-series data under different assumptions, and demonstrate the effectiveness of the algorithm on both simulated and biological data. <s> BIB007
While Bayesian networks are a useful method for representing and inferring causal relationships between variables in the absence of time, most biomedical cases of interest have strong temporal components. One method of inferring temporal relationships is to extend BNs to include timing information. Dynamic Bayesian networks (DBNs) use a set of BNs to show how variables at one time may influence those at another. That is, we could have a BN representing the system at time t and then another at t + 1 (or t + 2, t + 3, etc.), with connections between the graphs showing how a variable at t + 1 depends on itself or another variable at time t. In the simplest case, a system may be stationary and Markov, and modeled with a BN of its initial state and a DBN with two time slices, showing how variables at time t_i influence those at t_{i+1}. This is shown in Fig. 7, where there is one graph showing that at the initial time (zero) A influences both B and C. Then there are two more graphs, showing how variables at time i influence those at the subsequent time i + 1. DBNs have been applied to finding gene regulatory networks BIB001, inferring neural connectivity networks from spike train data BIB006, and developing prognostic and diagnostic models BIB004 BIB003; and there are a number of software packages for inferring them BIB002. Recent work has also extended DBNs to the case of nonstationary time series, where there are so-called changepoints at which the structure of the system (how the variables are connected) changes. Some approaches find such times for the entire system BIB007, while others can find them individually for each variable BIB005. This approach faces two primary limitations. First, like BNs, there are no current methods for testing complex relationships. While variables may be defined arbitrarily, we are not aware of any structured method for forming and testing hypotheses involving conjunctions or sequences of variables. For example, there is no automated way of determining that smoking for a period of 15 years while having a particular genetic mutation leads to lung cancer in 5-10 years after that with probability 0.5, while smoking for a year and then ceasing smoking leads to lung cancer in 30-40 years with probability 0.01. Second, each connection between each time slice is inferred separately (e.g. we separately find that c at time t causes e at time t + 2 and at t + 3), leading to both significant computational complexity and reduced inference power. Since it is not possible to search exhaustively over all possible graphs, one must employ heuristics, but these can be sensitive to the parameters chosen. More critically, few relationships involving health have discrete time lags. When using observational data such as from EHRs, even if the relationship does have a precise timing, it is unlikely that patients will be measured at exactly the correct time points, since patients are not measured in a synchronized manner. In order to use DBN methods for these inference problems, one can choose specific time points such as ''3 months'' or ''6 months'' before diagnosis and then group all events happening in ranges of times to be at these specific time points. However, finding these time ranges is frequently the goal of inference.
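As an illustration of the two-slice representation described above, the following sketch encodes a toy stationary DBN as an initial-state distribution plus per-variable transition tables, and samples a trajectory from it. The structure (A influencing B and C across one time step, loosely echoing Fig. 7) and all probability values are invented for illustration, not taken from any of the cited systems.

import numpy as np

rng = np.random.default_rng(1)

# Initial-state BN (time 0): marginal probability that each variable is true.
initial = {"A": 0.5, "B": 0.3, "C": 0.2}

# Two-slice transition model: P(X at t+1 is true | parents at time t).
# Each entry gives the variable's parents and its conditional probability
# table (CPT), keyed by the tuple of parent values. Numbers are invented.
transition = {
    "A": (("A",), {(True,): 0.9, (False,): 0.1}),  # A tends to persist
    "B": (("A", "B"), {(True, True): 0.95, (True, False): 0.6,
                       (False, True): 0.4, (False, False): 0.05}),
    "C": (("A",), {(True,): 0.7, (False,): 0.1}),  # A influences C one step later
}

def sample_trajectory(steps):
    # Sample a sequence of states from the initial BN plus the 2-slice DBN.
    state = {v: bool(rng.random() < p) for v, p in initial.items()}
    trajectory = [state]
    for _ in range(steps):
        prev = trajectory[-1]
        nxt = {}
        for var, (parents, cpt) in transition.items():
            p_true = cpt[tuple(prev[p] for p in parents)]
            nxt[var] = bool(rng.random() < p_true)
        trajectory.append(nxt)
    return trajectory

for t, state in enumerate(sample_trajectory(5)):
    print(t, state)

Learning such a model from data amounts to estimating each CPT from counts over consecutive observations, which only works when observations are sampled at the fixed lag the model assumes; this is exactly the limitation noted above for irregularly sampled EHR data.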
Methodological Review: A review of causal inference for biomedical informatics <s> Granger causality <s> There occurs on some occasions a difficulty in deciding the direction of causality between two related variables and also whether or not feedback is occurring. Testable definitions of causality and feedback are proposed and illustrated by use of simple two-variable models. The important problem of apparent instantaneous causality is discussed and it is suggested that the problem often arises due to slowness in recording information or because a sufficiently wide class of possible causal variables has not been used. It can be shown that the cross spectrum between two variables can be decomposed into two parts, each relating to a single causal arm of a feedback situation. Measures of causal lag and causal strength can then be constructed. A generalisation of this result with the partial cross spectrum is suggested. <s> BIB001 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Granger causality <s> A general definition of causality is introduced and then specialized to become operational. By considering simple examples a number of advantages, and also difficulties, with the definition are discussed. Tests based on the definitions are then considered and the use of post-sample data emphasized, rather than relying on the same data to fit a model and use it to test causality. It is suggested that a Bayesian viewpoint should be taken in interpreting the results of these tests. Finally, the results of a study relating advertising and consumption are briefly presented. <s> BIB002 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Granger causality <s> Multi-electrode neurophysiological recordings produce massive quantities of data. Multivariate time series analysis provides the basic framework for analyzing the patterns of neural interactions in these data. It has long been recognized that neural interactions are directional. Being able to assess the directionality of neuronal interactions is thus a highly desired capability for understanding the cooperative nature of neural computation. Research over the last few years has shown that Granger causality is a key technique to furnish this capability. The main goal of this article is to provide an expository introduction to the concept of Granger causality. Mathematical frameworks for both bivariate Granger causality and conditional Granger causality are developed in detail with particular emphasis on their spectral representations. The technique is demonstrated in numerical examples where the exact answers of causal influences are known. It is then applied to analyze multichannel local field potentials recorded from monkeys performing a visuomotor task. Our results are shown to be physiologically interpretable and yield new insights into the dynamical organization of large-scale oscillatory cortical networks. <s> BIB003 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Granger causality <s> Background: In computational biology, one often faces the problem of deriving the causal relationship among different elements such as genes, proteins, metabolites, neurons and so on, based upon multi-dimensional temporal data. Currently, there are two common approaches used to explore the network structure among elements. One is the Granger causality approach, and the other is the dynamic Bayesian network inference approach.
Both have at least a few thousand publications reported in the literature. A key issue is to choose which approach is used to tackle the data, in particular when they give rise to contradictory results. Results: In this paper, we provide an answer by focusing on a systematic and computationally intensive comparison between the two approaches on both synthesized and experimental data. For synthesized data, a critical point of the data length is found: the dynamic Bayesian network outperforms the Granger causality approach when the data length is short, and vice versa. We then test our results in experimental data of short length which is a common scenario in current biological experiments: it is again confirmed that the dynamic Bayesian network works better. Conclusion: When the data size is short, the dynamic Bayesian network inference performs better than the Granger causality approach; otherwise the Granger causality approach is better. <s> BIB004 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Granger causality <s> In many domains we face the problem of determining the underlying causal structure from time-course observations of a system. Whether we have neural spike trains in neuroscience, gene expression levels in systems biology, or stock price movements in finance, we want to determine why these systems behave the way they do. For this purpose we must assess which of the myriad possible causes are significant while aiming to do so with a feasible computational complexity. At the same time, there has been much work in philosophy on what it means for something to be a cause, but comparatively little attention has been paid to how we can identify these causes. Algorithmic approaches from computer science have provided the first steps in this direction, but fail to capture the complex, probabilistic and temporal nature of the relationships we seek. This dissertation presents a novel approach to the inference of general (type-level) and singular (token-level) causes. The approach combines philosophical notions of causality with algorithmic approaches built on model checking and statistical techniques for false discovery rate control. By using a probabilistic computation tree logic to describe both cause and effect, we allow for complex relationships and explicit description of the time between cause and effect as well as the probability of this relationship being observed (e.g. "a and b until c, causing d in 10–20 time units"). Using these causal formulas and their associated probabilities, we develop a novel measure for the significance of a cause for its effect, thus allowing discovery of those that are statistically interesting, determined using the concepts of multiple hypothesis testing and false discovery control. We develop algorithms for testing these properties in time-series observations and for relating the inferred general relationships to token-level events (described as sequences of observations). Finally, we illustrate these ideas with example data from both neuroscience and finance, comparing the results to those found with other inference methods. The results demonstrate that our approach achieves superior control of false discovery rates, due to its ability to appropriately represent and infer temporal information.
<s> BIB005 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Granger causality <s> We propose a definition of causality for time series in terms of the effect of an intervention in one component of a multivariate time series on another component at some later point in time. Conditions for identifiability, comparable to the back-door and front-door criteria, are presented and can also be verified graphically. Computation of the causal effect is derived and illustrated for the linear case. <s> BIB006
Another approach to inference from time series is that of Granger BIB001 BIB002, whose methodology was developed primarily for finance but has also been applied to other areas such as microarray analysis and neuronal spike train data BIB003. Similarly to DBNs, the approach attempts to find whether one variable is informative about another at some specific lagged time. Unlike DBNs, the approach does not attempt to find the set of relationships that best explains a particular dataset, but rather evaluates each relationship's significance individually. One time series X is said to Granger-cause another time series Y at time t + 1 if, with $W_t$ being all available knowledge up until time t, $P(Y_{t+1} \mid W_t) \neq P(Y_{t+1} \mid W_t \setminus X_t)$. That is, $X_t$ contains some information about $Y_{t+1}$ that is not part of the rest of the set $W_t$. This is usually tested with regressions to determine how informative the lagged values of X are about Y. For example, say we have three time series: match sales, incidence of lung cancer, and rate of smoking for a particular neighborhood (as shown in Fig. 1). Then, to determine whether the rate of smoking predicts the incidence of lung cancer 10 years later (let us assume no in- and out-migration), we would compare the probabilities when we have the full history of smoking rate and match sales up until 10 years before, versus when we remove the information about smoking. If the probabilities differ, then smoking would be said to Granger-cause lung cancer. Note that while this approach is used for causal inference, the relationships found do not all have a causal interpretation in the sense we have described. For example, if the relationship between smoking, stained fingers and lung cancer is as shown in Fig. 6, but people's fingers become stained before they develop lung cancer, then stained fingers will be found to Granger-cause lung cancer (particularly if stained fingers provide an indication of how much a person smoked). Recalling the purposes we described earlier, Granger causes may be suitable for prediction, but cannot be used for explanation or policy. That is, we could not explain a patient's lung cancer as being due to their stained fingers, nor can we prevent lung cancer by providing gloves to smokers. It has been verified experimentally that the primary types of errors Granger causality makes are those mistaking the correlation between common effects of a cause for a causal relationship. However, it is less prone to overfitting to the dataset than either DBNs or BNs, since it assesses the relationships individually rather than inferring the model that best explains all of the variables BIB005. Other comparisons, such as BIB004, have found less of a difference between Granger causality and BNs, but that work used a different methodology. There, each algorithm was used to analyze multiple datasets representing the same underlying structure, where the consensus of all inferences was taken (i.e. the causal relationships that were found in every run). When taking the consensus, it is possible to severely overfit to each individual dataset while still performing well overall if the true relationships are identified in each inference. Thus this approach may overstate the benefit of DBNs over Granger causality, since Kleinberg BIB005 found that results varied considerably between inferences (over 75% intersection between inferences for Granger causality, over 40% for DBNs).
Further, outside of datasets constructed for comparison, we cannot always replicate this approach of taking the consensus of multiple inferences. In some cases there is only one dataset that cannot be partitioned (e.g. a particular year of the stock market occurs once) or the partitioning is difficult since it requires more data. Extensions to Granger causality have attempted to address its shortcomings, such as extending the framework to allow analysis of multiple time series generated by nonlinear models, as well as to find the lags between cause and effect as part of the inference process BIB006 and reformulating the problem in terms of graphical models to allow the possibility of handling latent variables. However, like BNs and DBNs, this approach has no intrinsic way of specifying and inferring complex relationships.
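To make the regression formulation above concrete, the following is a minimal sketch of a bivariate Granger test. It is not taken from the reviewed works; the lag order, the synthetic series and all names are assumptions for the example. It compares a restricted autoregression of Y on its own lags against an unrestricted model that also includes lags of X, and judges the improvement with an F-test on the residual sums of squares.

```python
# Minimal bivariate Granger-causality test: does X help predict Y?
import numpy as np
from scipy import stats

def granger_f_test(x, y, lags=2):
    """F statistic and p-value for the hypothesis 'x Granger-causes y'."""
    n = len(y)
    Y = y[lags:]                                   # targets y_t for t >= lags
    ylags = np.column_stack([y[lags - k:n - k] for k in range(1, lags + 1)])
    xlags = np.column_stack([x[lags - k:n - k] for k in range(1, lags + 1)])
    ones = np.ones((n - lags, 1))
    Zr = np.hstack([ones, ylags])                  # restricted: lags of y only
    Zu = np.hstack([ones, ylags, xlags])           # unrestricted: adds lags of x
    rss_r = np.sum((Y - Zr @ np.linalg.lstsq(Zr, Y, rcond=None)[0]) ** 2)
    rss_u = np.sum((Y - Zu @ np.linalg.lstsq(Zu, Y, rcond=None)[0]) ** 2)
    df1 = lags                                     # number of restrictions
    df2 = (n - lags) - Zu.shape[1]                 # residual degrees of freedom
    F = ((rss_r - rss_u) / df1) / (rss_u / df2)
    return F, stats.f.sf(F, df1, df2)

# Synthetic example: x drives y with a one-step delay.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

print(granger_f_test(x, y))   # small p-value: x Granger-causes y
print(granger_f_test(y, x))   # large p-value: y does not Granger-cause x
```

Note that, exactly as discussed above, such a test would also report a "cause" for any variable that merely precedes the effect (the stained-fingers case); the F-test measures incremental predictability, not causal relevance.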
Methodological Review: A review of causal inference for biomedical informatics <s> A temporal and logical approach <s> We are given a large database of customer transactions, where each transaction consists of customer-id, transaction time, and the items bought in the transaction. We introduce the problem of mining sequential patterns over such databases. We present three algorithms to solve this problem, and empirically evaluate their performance using synthetic data. Two of the proposed algorithms, AprioriSome and AprioriAll, have comparable performance, albeit AprioriSome performs a little better when the minimum number of customers that must support a sequential pattern is low. Scale-up experiments show that both AprioriSome and AprioriAll scale linearly with the number of customer transactions. They also have excellent scale-up properties with respect to the number of transactions per customer and the number of items in a transaction. <s> BIB001 </s> Methodological Review: A review of causal inference for biomedical informatics <s> A temporal and logical approach <s> We present a logic for stating properties such as, "after a request for service there is at least a 98% probability that the service will be carried out within 2 seconds". The logic extends the temporal logic CTL by Emerson, Clarke and Sistla with time and probabilities. Formulas are interpreted over discrete time Markov chains. We give algorithms for checking that a given Markov chain satisfies a formula in the logic. The algorithms require a polynomial number of arithmetic operations, in size of both the formula and the Markov chain. A simple example is included to illustrate the algorithms. <s> BIB002 </s> Methodological Review: A review of causal inference for biomedical informatics <s> A temporal and logical approach <s> Causal diagrams are rigorous tools for controlling confounding. They also can be used to describe complex causal systems, which is done routinely in communicable disease epidemiology. The use of change diagrams has advantages over static diagrams, because change diagrams are more tractable, relate better to interventions, and have clearer interpretations. Causal diagrams are a useful basis for modeling. They make assumptions explicit, provide a framework for analysis, generate testable predictions, explore the effects of interventions, and identify data gaps. Causal diagrams can be used to integrate different types of information and to facilitate communication both among public health experts and between public health experts and experts in other fields. Causal diagrams allow the use of instrumental variables, which can help control confounding and reverse causation. <s> BIB003 </s> Methodological Review: A review of causal inference for biomedical informatics <s> A temporal and logical approach <s> In many domains we face the problem of determining the underlying causal structure from time-course observations of a system.
Whether we have neural spike trains in neuroscience, gene expression levels in systems biology, or stock price movements in finance, we want to determine why these systems behave the way they do. For this purpose we must assess which of the myriad possible causes are significant while aiming to do so with a feasible computational complexity. At the same time, there has been much work in philosophy on what it means for something to be a cause, but comparatively little attention has been paid to how we can identify these causes. Algorithmic approaches from computer science have provided the first steps in this direction, but fail to capture the complex, probabilistic and temporal nature of the relationships we seek. This dissertation presents a novel approach to the inference of general (type-level) and singular (token-level) causes. The approach combines philosophical notions of causality with algorithmic approaches built on model checking and statistical techniques for false discovery rate control. By using a probabilistic computation tree logic to describe both cause and effect, we allow for complex relationships and explicit description of the time between cause and effect as well as the probability of this relationship being observed (e.g. “a and b until c, causing d in 10–20 time units”). Using these causal formulas and their associated probabilities, we develop a novel measure for the significance of a cause for its effect, thus allowing discovery of those that are statistically interesting, determined using the concepts of multiple hypothesis testing and false discovery control. We develop algorithms for testing these properties in time-series observations and for relating the inferred general relationships to token-level events (described as sequences of observations). Finally, we illustrate these ideas with example data from both neuroscience and finance, comparing the results to those found with other inference methods. The results demonstrate that our approach achieves superior control of false discovery rates, due to its ability to appropriately represent and infer temporal information. <s> BIB004 </s> Methodological Review: A review of causal inference for biomedical informatics <s> A temporal and logical approach <s> Computational analysis of time-course data with an underlying causal structure is needed in a variety of domains, including neural spike trains, stock price movements, and gene expression levels. However, it can be challenging to determine from just the numerical time course data alone what is coordinating the visible processes, to separate the underlying prima facie causes into genuine and spurious causes and to do so with a feasible computational complexity. For this purpose, we have been developing a novel algorithm based on a framework that combines notions of causality in philosophy with algorithmic approaches built on model checking and statistical techniques for multiple hypotheses testing. The causal relationships are described in terms of temporal logic formulae, reframing the inference problem in terms of model checking. The logic used, PCTL, allows description of both the time between cause and effect and the probability of this relationship being observed.
We show that equipped with these causal formulae with their associated probabilities we may compute the average impact a cause makes to its effect and then discover statistically significant causes through the concepts of multiple hypothesis testing (treating each causal relationship as a hypothesis), and false discovery control. By exploring a well-chosen family of potentially all significant hypotheses with reasonably minimal description length, it is possible to tame the algorithm's computational complexity while exploring the nearly complete search-space of all prima facie causes. We have tested these ideas in a number of domains and illustrate them here with two examples. <s> BIB005
While one could use arbitrarily defined variables with both graphical models and Granger causality, there is no automated method for testing these unstructured relationships that can include properties being true for durations of time, sequences of factors, and conjunctions of variables probabilistically leading to effects in some time windows. On the other hand, data mining techniques BIB001, created for inferring complex patterns and sets of predictive features, do not have the causal interpretations that we have said are needed for prediction, explanation, and policy development. Further, we want not only to infer general properties about populations (such as the relationship between various environmental exposures and disease) but also to use this information to reason about individual patients for disease detection and treatment suggestion. In this section we discuss an approach developed by Kleinberg and Mishra BIB005 BIB004 that combines the philosophical theories of probabilistic causality with temporal logic and statistics for inference of complex, time-dependent, causal relationships in time series data (such as EHRs), addressing both type-level causal inference and token-level explanation. The approach is based on the core principles of probabilistic causality: that a cause is earlier than its effect (temporal priority) and that it raises the probability of its effect, where probabilistic computation tree logic (PCTL) formulas BIB002 are used to represent the causal relationships. In addition to being able to represent properties such as variables being true for durations of time, this also allows a direct representation of the time window between cause and effect. For example, instead of relationships being only ''a causes b'', this method can reason about and infer relationships such as ''asbestos exposure and smoking until a particular genetic mutation occurs causes lung cancer in 1-3 years with probability 0.2''. The overall method is to generate a set of logical formulas, test which are satisfied by the data, and then compute a measure of causal significance that compares possible causes against other explanations to assess the average difference a cause makes to the probability of its effect. The testing is relative to a set of time series data (such as EHRs) and returns a set of significant relationships, rather than a graph structure BIB003. To do this, a set of logical formulas (representing potential causal relationships) is initially created using background knowledge or by generating all possible logical formulas between the variables in the dataset up to some maximum size. With c and e being PCTL formulas (in the simplest case, they may be atomic propositions), prima facie (potential) causes are defined as those where c has nonzero probability, the unconditional probability of e is less than some value p, and: $c \leadsto_{\geq p}^{\geq r, \leq s} e$ (5)
Methodological Review: A review of causal inference for biomedical informatics <s> Pr;6s <s> Benjamini and Hochberg suggest that the false discovery rate may be the appropriate error rate to control in many applied multiple testing problems. A simple procedure was given there as an FDR controlling procedure for independent test statistics and was shown to be much more powerful than comparable procedures which control the traditional familywise error rate. We prove that this same procedure also controls the false discovery rate when the test statistics have positive regression dependency on each of the test statistics corresponding to the true null hypotheses. This condition for positive dependency is general enough to cover many problems of practical interest, including the comparisons of many treatments with a single control, multivariate normal test statistics with positive correlation matrix and multivariate t. Furthermore, the test statistics may be discrete, and the tested hypotheses composite without posing special difficulties. For all other forms of dependency, a simple conservative modification of the procedure controls the false discovery rate. Thus the range of problems for which a procedure with proven FDR control can be offered is greatly increased. 1.1. Simultaneous hypotheses testing. The control of the increased type I error when testing simultaneously a family of hypotheses is a central issue in the area of multiple comparisons. Rarely are we interested only in whether all hypotheses are jointly true or not, which is the test of the intersection null hypothesis. In most applications, we infer about the individual hypotheses, realizing that some of the tested hypotheses are usually true—we hope not all—and some are not. We wish to decide which ones are not true, indicating (statistical) discoveries. An important such problem is that of multiple endpoints in a clinical trial: a new treatment is compared with an existing one in terms of a large number of potential benefits (endpoints). <s> BIB001 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Pr;6s <s> Current scientific techniques in genomics and image processing routinely produce hypothesis testing problems with hundreds or thousands of cases to consider simultaneously. This poses new difficulties for the statistician, but also opens new opportunities. In particular, it allows empirical estimation of an appropriate null hypothesis. The empirical null may be considerably more dispersed than the usual theoretical null distribution that would be used for any one case considered separately. An empirical Bayes analysis plan for this situation is developed, using a local version of the false discovery rate to examine the inference issues. Two genomics problems are used as examples to show the importance of correctly choosing the null hypothesis. <s> BIB002 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Pr;6s <s> A primary problem in causal inference is the following: From a set of time course data, such as that generated by gene expression microarrays, is it possible to infer all significant causal relationships between the elements described by this data? In prior work [2], we have proposed a framework that combines notions of causality in philosophy, with the algorithmic approaches built on model checking and statistical techniques for multiple hypotheses testing. 
The causal relationships can be then described in terms of temporal logic formulas, reframing the problem in terms of model checking. The logic used, PCTL, allows description of both the time between cause and effect and the probability of this relationship being observed. Borrowing from philosophy, we define prima facie causes in terms of probability raising, and then determine genuine causality by computing the average difference a prima facie cause makes to the occurrence of its effect, given each of the other prima facie causes of that effect. However, it faces many interesting issues confronted in statistical theories of hypotheses testing, namely, given these causal formulas with their associated probabilities and our average computed differences, instead of choosing an arbitrary threshold, how do we decide which are “significant”? To address this problem rigorously, we use the concepts of multiple hypothesis testing (treating each causal relationship as a hypothesis), and false discovery control. In particular, we apply the empirical Bayesian formulation proposed by Efron in [1]. This method uses an empirical rather than theoretical null, which has been shown to be better equipped for cases where the test statistics are dependent - as may be true in the case of complex causal structures. <s> BIB003 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Pr;6s <s> We describe a new framework for causal inference and its application to return time series. In this system, causal relationships are represented as logical formulas, allowing us to test arbitrarily complex hypotheses in a computationally efficient way. We simulate return time series using a common factor model, and show that on this data the method described significantly outperforms Granger causality (a primary approach to this type of problem). Finally we apply the method to real return data, showing that the method can discover novel relationships between stocks. The approach described is a general one that will allow combination of price and volume data with qualitative information at varying time scales (from interest rate announcements, to earnings reports to news stories) shedding light on some of the previously invisible common causes of seemingly correlated price movements. <s> BIB004 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Pr;6s <s> In many domains we face the problem of determining the underlying causal structure from time-course observations of a system. Whether we have neural spike trains in neuroscience, gene expression levels in systems biology, or stock price movements in finance, we want to determine why these systems behave the way they do. For this purpose we must assess which of the myriad possible causes are significant while aiming to do so with a feasible computational complexity. At the same time, there has been much work in philosophy on what it means for something to be a cause, but comparatively little attention has been paid to how we can identify these causes. Algorithmic approaches from computer science have provided the first steps in this direction, but fail to capture the complex, probabilistic and temporal nature of the relationships we seek. This dissertation presents a novel approach to the inference of general (type-level) and singular (token-level) causes.
The approach combines philosophical notions of causality with algorithmic approaches built on model checking and statistical techniques for false discovery rate control. By using a probabilistic computation tree logic to describe both cause and effect, we allow for complex relationships and explicit description of the time between cause and effect as well as the probability of this relationship being observed (e.g. “a and b until c, causing d in 10–20 time units”). Using these causal formulas and their associated probabilities, we develop a novel measure for the significance of a cause for its effect, thus allowing discovery of those that are statistically interesting, determined using the concepts of multiple hypothesis testing and false discovery control. We develop algorithms for testing these properties in time-series observations and for relating the inferred general relationships to token-level events (described as sequences of observations). Finally, we illustrate these ideas with example data from both neuroscience and finance, comparing the results to those found with other inference methods. The results demonstrate that our approach achieves superior control of false discovery rates, due to its ability to appropriately represent and infer temporal information. <s> BIB005 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Pr;6s <s> While type causality helps us to understand general relationships such as the etiology of a disease (smoking causing lung cancer), token causality aims to explain causal connections in specific instantiated events, such as the diagnosis of a patient (Ravi's developing lung cancer after a 20-year smoking habit). Understanding why something happened, as in these examples, is central to reasoning in such diverse cases as the diagnosis of patients, understanding why the US financial market collapsed in 2007 and finding a causal explanation for Obama's victory over Clinton in the US primary. However, despite centuries of work in philosophy and decades of research in computer science, the problem of how to rigorously formalize token causality and how to automate such reasoning has remained unsolved. In this paper, we show how to use type-level causal relationships, represented as temporal logic formulas, together with philosophical principles, to reason about these token-level cases. <s> BIB006 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Pr;6s <s> Computational analysis of time-course data with an underlying causal structure is needed in a variety of domains, including neural spike trains, stock price movements, and gene expression levels. However, it can be challenging to determine from just the numerical time course data alone what is coordinating the visible processes, to separate the underlying prima facie causes into genuine and spurious causes and to do so with a feasible computational complexity. For this purpose, we have been developing a novel algorithm based on a framework that combines notions of causality in philosophy with algorithmic approaches built on model checking and statistical techniques for multiple hypotheses testing. The causal relationships are described in terms of temporal logic formulae, reframing the inference problem in terms of model checking. The logic used, PCTL, allows description of both the time between cause and effect and the probability of this relationship being observed. 
We show that equipped with these causal formulae with their associated probabilities we may compute the average impact a cause makes to its effect and then discover statistically significant causes through the concepts of multiple hypothesis testing (treating each causal relationship as a hypothesis), and false discovery control. By exploring a well-chosen family of potentially all significant hypotheses with reasonably minimal description length, it is possible to tame the algorithm's computational complexity while exploring the nearly complete search-space of all prima facie causes. We have tested these ideas in a number of domains and illustrate them here with two examples. <s> BIB007
In formula (5), r and s are times such that $1 \leq r \leq s \leq \infty$ and $r \neq \infty$. This formula means that the probability of e happening in between r and s time units after c is at least p (the conditional probability of e given c). This representation is equivalent to that of Suppes (described in Section 3.2). Then, to determine whether a particular prima facie cause c is a significant (also called just-so) cause of an effect e, where X is the set of all prima facie causes of e, we compute: $\varepsilon_{avg}(c, e) = \frac{\sum_{x \in X \setminus \{c\}} \left[ P(e \mid c \wedge x) - P(e \mid \neg c \wedge x) \right]}{|X \setminus \{c\}|}.$ Something with a low value of this measure may be a spurious cause of the effect (perhaps due to a common cause of it and the effect) or may be a genuine cause but a weak one. If this value is exactly equal to zero, we cannot conclude that c has no influence on e, since its positive and negative influence may have canceled out. The primary strengths of this type of pairwise testing are that, in contrast to some methods for searching over graphs, the order of testing does not matter, and the computational complexity is significantly reduced. Note that this is testing the absolute increase in probability. If one instead used a ratio of the two probabilities, then the cause of a low-probability effect that leads to a 3-fold increase in probability (e.g. 0.001 to 0.003) would seem as significant as one that leads to the same order of magnitude change in a higher probability event (e.g. 0.1 to 0.3). While these may both be causal, for practical purposes the latter one provides a better opportunity for potential intervention. One must then determine which values of $\varepsilon_{avg}$ are significant. Note that we are generally testing a large number of causal hypotheses, where we expect only a small portion of those tested to be genuinely causal, so the large number of tests conducted can be used to our advantage, allowing us to treat the problem as a multiple hypothesis testing and false discovery control one BIB001, where we can use an empirical null hypothesis BIB002 BIB003. The method cited for fdr control relies on two primary assumptions: in the absence of causal relationships the $\varepsilon_{avg}$ values will be normally distributed, and there are a small number of true positives in the set. These also allow us to determine when we do not have enough data to test our hypotheses, as the results will differ significantly from a normal distribution. This approach has been validated on synthetically generated data sets in multiple areas (neuronal spike train BIB007 and stock market data BIB004) and compared extensively against BN, Granger, and DBN methods. It was shown that in cases where temporal information is important, it leads to significantly lower false discovery rates than the other approaches BIB005. Note that unlike the BN and DBN methods described, since a model is not inferred, there is no immediate way of calculating the joint probabilities that can be useful for prognosis. While the goal of this approach is to infer relationships rather than such a model, it is possible that one can use it to find the relationships and evaluate their timings and then use this prior information when building a BN or DBN. One would still need to define joint probability distributions for complex events; however, assuming there are many fewer actual relationships than those initially tested, this reduces the complexity significantly. Another limitation is that there is no attempt to infer latent variables, and this becomes more difficult as the relationships tested become more complex.
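As a concrete illustration of the probability-raising test and the $\varepsilon_{avg}$ computation above, the following is a minimal sketch for binary time series with a single fixed window of 1 to w time units. It deliberately simplifies the approach described here: full PCTL formulas, per-hypothesis windows and the fdr step are omitted, and the variable names and synthetic data are illustrative assumptions, not anything from the reviewed work.

```python
# Sketch of prima facie causes and eps_avg for binary time series.
# Each variable is a boolean array indexed by time; a cause at time t
# may produce the effect within the window (t, t + w].
import numpy as np

def cond_prob(effect, condition, w):
    """Estimate P(effect occurs within <= w time units | condition now)."""
    times = np.flatnonzero(condition[:-w])
    if times.size == 0:
        return np.nan
    return float(np.mean([effect[t + 1:t + 1 + w].any() for t in times]))

def prima_facie(cause, effect, w):
    """Probability raising: P(e soon after c) > P(e soon after any time)."""
    background = cond_prob(effect, np.ones(len(effect), dtype=bool), w)
    return cond_prob(effect, cause, w) > background

def eps_avg(c, effect, others, w):
    """Average difference c makes to e, holding each other cause x fixed."""
    diffs = [cond_prob(effect, c & x, w) - cond_prob(effect, ~c & x, w)
             for x in others]
    diffs = [d for d in diffs if not np.isnan(d)]
    return float(np.mean(diffs)) if diffs else float("nan")

# Synthetic data: a causes e after a 1-3 step delay; b is a noisy copy
# of a (like the stained-fingers example), so it raises P(e) but should
# make little difference once a is taken into account.
rng = np.random.default_rng(1)
T, w = 50000, 3
a = rng.random(T) < 0.15
b = (a & (rng.random(T) < 0.8)) | (~a & (rng.random(T) < 0.05))
recent_a = np.roll(a, 1) | np.roll(a, 2) | np.roll(a, 3)
e = rng.random(T) < np.where(recent_a, 0.5, 0.02)

X = {n: v for n, v in {"a": a, "b": b}.items() if prima_facie(v, e, w)}
for name, v in X.items():
    others = [u for n2, u in X.items() if n2 != name] or [np.ones(T, bool)]
    print(name, round(eps_avg(v, e, others, w), 3))  # a scores far above b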
It has also been connected to token-level causal inference and explanation BIB006, allowing for explanation of complex events in a way that incorporates temporal information in both the type-level relationships and token-level observations. The premise of the approach is that, even though it is unclear philosophically how to relate type and token level causality, type-level relationships are good evidence toward token causality. However, since causes can be logical formulas, such as a ∧ b, we may be unable to determine whether they are true, such as if we only know that a happened and not whether b did too. Beginning with a set of inferred type-level causes and a sequence of token-level observations consisting of truth values of variables and their times (such as a particular patient's EHR), one can test which formulas are satisfied by the sequence of observations and, in the case where we cannot determine a formula's truth value, its probability can be calculated given the observation sequence. For example, the token-level scenario may be the following sequence, beginning from observation of a system, to occurrence of the effect, e. Thus the observation sequence V is the set of things true at each timepoint. Here a is true at time zero, both b and c are true at time 1, a is true again at time 2 and then there are no further observations until the effect e occurs at time 5. Where V is an observation sequence, the token-level significance of a particular cause c for an effect e (omitting the temporal subscripts for ease of notation) is: $\varepsilon_{avg}(c, e) \times P(c \mid V)$ (8). This weights the type-level significance scores $\varepsilon_{avg}$ by the probability of the cause token-occurring given the sequence of observations, $P(c \mid V)$. The result is a ranking of possible causes of an effect that weights the type-level significance scores by the token-level probabilities. Since it does not take a counterfactual approach to explanation, this method can handle many of the counterexamples found in the philosophical literature, allowing explanation of overdetermined events BIB005.
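A toy sketch of the weighting in Eq. (8) follows. It assumes, for simplicity, that the timing constraints of each formula have already been checked, that unobserved conjuncts of a cause are independent and contribute only their marginal probability, and that all names and numbers are hypothetical; these are simplifications of the approach described above, not its actual implementation.

```python
# Toy token-level ranking: weight each type-level score eps_avg(c, e)
# by the probability that the cause c occurred in this particular case.
def token_probability(conjuncts, observed, marginals):
    """Approximate P(c | V) for c given as a conjunction of propositions."""
    p = 1.0
    for prop in conjuncts:
        if prop in observed:
            p *= 1.0 if observed[prop] else 0.0   # truth value known from V
        else:
            p *= marginals[prop]                  # unobserved: use P(prop)
    return p

def rank_explanations(causes, observed, marginals):
    """Rank candidate causes of e by eps_avg(c, e) * P(c | V)."""
    scored = {name: eps * token_probability(conj, observed, marginals)
              for name, (conj, eps) in causes.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical numbers: smoking is observed true, asbestos is unknown.
causes = {"smoking": (["smoking"], 0.25),
          "smoking and asbestos": (["smoking", "asbestos"], 0.40)}
print(rank_explanations(causes, {"smoking": True}, {"asbestos": 0.1}))
# smoking (0.25) outranks the conjunction (0.40 * 0.1 = 0.04)
```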
Methodological Review: A review of causal inference for biomedical informatics <s> Causal explanation <s> A discussion of matching, randomization, random sampling, and other methods of controlling extraneous variation is presented. The objective is to specify the benefits of randomization in estimating causal effects of treatments. The basic conclusion is that randomization should be employed whenever possible but that the use of carefully controlled nonrandomized data to estimate causal effects is a reasonable and necessary procedure in many cases. Recent psychological and educational literature has included extensive criticism of the use of nonrandomized studies to estimate causal effects of treatments (e.g., Campbell & Erlebacher, 1970). The implication in much of this literature is that only properly randomized experiments can lead to useful estimates of causal effects. If taken as applying to all fields of study, this position is untenable. Since the extensive use of randomized experiments is limited to the last half century, and in fact is not used in much scientific investigation today, one is led to the conclusion that most scientific "truths" have been established without using randomized experiments. In addition, most of us successfully determine the causal effects of many of our everyday actions, even interpersonal behaviors, without the benefit of randomization. Even if the position that causal effects of treatments can only be well established from randomized experiments is taken as applying only to the social sciences in which <s> BIB001 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Causal explanation <s> The literature on causal discovery has focused on interventions that involve randomly assigning values to a single variable. But such a randomized intervention is not the only possibility, nor is it always optimal. In some cases it is impossible or it would be unethical to perform such an intervention. We provide an account of ‘hard' and ‘soft' interventions and discuss what they can contribute to causal discovery. We also describe how the choice of the optimal intervention(s) depends heavily on the particular experimental setup and the assumptions that can be made. <s> BIB002 </s> Methodological Review: A review of causal inference for biomedical informatics <s> Causal explanation <s> Making Things Happen: A Theory of Causal Explanation develops an interventionist account of causation and causal explanation, on which causes are factors that could be manipulated to change their effects. <s> BIB003
Giving the reasons why an event occurred by citing the causal relationships related to the situation (e.g. it is known that a patient smoked and then developed lung cancer; the relationship between smoking and lung cancer explains why he developed lung cancer) or providing information about an event (e.g. if the relationship between smoking and lung cancer is known, then a particular patient's lung cancer can be explained by providing information on his smoking). This can also be called causal reasoning.
Causal inference: The process of finding causal relationships. Here we mean the process of doing this in an automated way from data. This is sometimes referred to as causal discovery.
Confounding: In this context, confounding is when variables may seem causally related, but the relationship is fully explained by another factor such as a common cause.
Necessary cause: If a cause is necessary, the effect cannot occur without it.
Prediction: The process of using a causal model of a system to find the probability of future events and, ideally, what will result from an intervention on the system. This is sometimes referred to as causal inference or causal reasoning.
Sufficient cause: A sufficient cause is one such that whenever it is true, it brings about the effect.
A further challenge is regime change, such as a diabetic whose glucose is under control most of the time, but who has periods of hypoglycemia. During these times there may be different sets of causal relationships governing their health. While methods that incorporate time windows can account for this type of behavior to some extent, it will be important to take this into account in a more explicit way in order to allow both accurate prediction as well as better inference (such as based on a time series that has been segmented into periods of relative stationarity). We have endeavored to cover the primary methods for causal inference, but it is not possible to discuss all approaches in depth. We now highlight some key omissions and areas for further reading. First, we focused on the inference of causal relationships from data, and did not discuss the creation of causal models, which may be done using background knowledge or a combination of prior knowledge to create the structure of the model and then inference of the probabilities or numerical relationships in the structure from data. One of the key approaches in this area is structural equation modeling (SEM), which relates to the path analysis approach of Wright. Secondly, in many cases we want to understand what will happen if we change something in a system, intervening by forcing a variable to take a certain value, and it is also possible to understand causality in terms of such interventions (where causes are roughly ways of manipulating effects) BIB003 BIB002. One approach to quantifying the effect of interventions is the Rubin causal model (RCM), or potential outcomes approach BIB001.
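As a pointer for the potential outcomes approach, the following minimal sketch (a standard textbook illustration rather than anything specified in the review; the data are synthetic) estimates the average treatment effect by a difference in group means, which is unbiased under randomized treatment assignment.

```python
# Difference-in-means estimate of the average treatment effect (ATE),
# E[Y(1) - Y(0)], valid when treatment is randomly assigned.
import numpy as np

def ate_difference_in_means(y, treated):
    """ATE estimate with a simple large-sample standard error."""
    y1, y0 = y[treated], y[~treated]
    ate = y1.mean() - y0.mean()
    se = np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0))
    return ate, se

# Synthetic randomized trial: the true effect is 2.0.
rng = np.random.default_rng(2)
n = 1000
treated = rng.random(n) < 0.5
y = 1.0 + 2.0 * treated + rng.normal(size=n)
print(ate_difference_in_means(y, treated))
```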
A survey of fault tolerance in cloud computing <s> Introduction <s> Cloud computing represents a new IT model for consumption and delivery of the services over the Internet. Cloud computing is a natural evolution of the widespread adoption of virtualization, service-oriented architecture and utility computing. <s> BIB001 </s> A survey of fault tolerance in cloud computing <s> Introduction <s> Cloud computing is the biggest buzz in the computer world these days -- maybe too big of a buzz. Cloud computing means different things to different people. Cloud computing is not a small, undeveloped branch of IT. Research firm IDC thinks that cloud computing will reach $42 billion in 2012. You can do everything on cloud from running applications to storing data off-site. You can run entire operating systems on the cloud. This paper is for anyone who may have recently heard the term "cloud computing" for the first time and needs to know what it is and how it helps them. <s> BIB002 </s> A survey of fault tolerance in cloud computing <s> Introduction <s> With the immense growth of internet and its users, Cloud computing, with its incredible possibilities in ease, Quality of service and on-interest administrations, has turned into a guaranteeing figuring stage for both business and nonbusiness computation customers. It is an adoptable technology as it provides integration of software and resources which are dynamically scalable. The dynamic environment of cloud results in various unexpected faults and failures. The ability of a system to react gracefully to an unexpected equipment or programming malfunction is known as fault tolerance. In order to achieve robustness and dependability in cloud computing, failure should be assessed and handled effectively. Various fault detection methods and architectural models have been proposed to increase fault tolerance ability of cloud. The objective of this paper is to propose an algorithm using Artificial Neural Network for fault detection which will overcome the gaps of previously implemented algorithms and provide a fault tolerant model. <s> BIB003 </s> A survey of fault tolerance in cloud computing <s> Introduction <s> Cloud computing provides services as a type of Internet-based computing using data centers that contain servers, storage and networks. For this reason, the cloud computing its great potentials in low cost and on-demand services. In recent years, the end user is highly increased to utilize the services in cloud computing. However, the faulty of infrastructure, software and application are the major problem in cloud computing. Fault tolerance uses techniques that concerned to guarantee availability, reliability of critical services and application execution. This paper discusses the existing fault tolerance techniques and challenges to minimize failure impact on the system and application execution in cloud computing. <s> BIB004 </s> A survey of fault tolerance in cloud computing <s> Introduction <s> Cloud computing offers variety of services from software instance to resource provisioning. As the user demands increase, there is a necessity to enhance cloud offerings. But still in some cases fault-tolerance is the major challenge for cloud environment. Multiple request to access the same server sometime leads to server over loaded and may increase faults and cause unreliability for the server. 
Few fault-tolerance techniques like self-healing, job migration, static load balancing and replication are existed but they are not fully reliable and effective for cloud environment. In this paper, we propose pro-active approach for fault-tolerance based on Processing power, Memory and Network parameters to increase resource reliability. Through this approach, we first calculate the reliability of each Virtual Machine (VM) based on success rate of task execution and then schedule the task on highly reliable VM. This approach provides comparatively good results for the VM reliability and system stability. <s> BIB005 </s> A survey of fault tolerance in cloud computing <s> Introduction <s> Distributed Systems have swiftly evolved from network of personal computers to cluster and then to grid, moving on to the era of cloud computing and now the latest one as Internet of things (IoT). With these rapid enhancements, the scale and complexity of systems providing cloud computing services have also increased tremendously. The major challenge faced by cloud service providers today is to provide an efficient, cost-effective, and reliable solution for seamless delivery of services to users. To achieve this research community is constantly working hard on different related issues like scheduling, power consumption, high availability, customer retention, resource provisioning, reliability and minimizing the probability of failures, etc. Reliability of service is an important parameter. With a large number of components in the cloud, the probability of failures is becoming a norm rather than an exception while delivering services to users. This emphasizes the need to develop fault tolerant schemes for cloud environment to deliver the required level of reliability. In this work, we have proposed a novel fault detection and mitigation approach. The novelty of approach lies in the method of detecting the fault based on running status of the job. The detection algorithm periodically monitors the progress of job on virtual machines (VMs) and reports the stalled job due to failed VM to fault tolerant manager (FTM). This not only reduces the resources wastage but ensures timely delivery of services to avoid any penalty due to service level agreement (SLA) violation. The validation of the proposed approach is done using CloudSim simulator. The performance analysis reveals the effectiveness of the proposed approach. <s> BIB006
Cloud computing refers to accessing, configuring and manipulating resources (such as software and hardware) at a remote location BIB002. It has also been defined in terms of distributed computing: ''A Cloud is a type of parallel and distributed system containing a set of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources based on service-level agreements established through negotiation between the service provider and consumers''. According to the U.S. National Institute of Standards and Technology (NIST) definition: ''Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (for example servers, networks, storage, services, and applications) that can be quickly provisioned and released with least management effort or service provider interaction''. Cloud computing offers various resources in the form of services to end users on an on-demand basis. It enables businesses and users to use applications without installing them on physical machines and allows access to required resources over the Internet. It provides features such as high performance, pay-as-you-go pricing, connectivity, interactivity, reliability, ease of programmability, efficiency, scalability, management of large amounts of data and elasticity, transforming IT from a product to a service BIB001, as depicted in Fig. 1. Cloud computing, as a fast-advancing technology, is increasingly being used to host many business and enterprise applications. However, the extensive use of cloud-based services for hosting business or enterprise applications leads to service reliability and availability issues for both service providers and users BIB006. These issues are intrinsic to cloud computing because of its highly distributed nature, the heterogeneity of resources and the massive scale of operation. Consequently, several types of faults may occur in the cloud environment, leading to failures and performance degradation. The major types of faults BIB003 BIB004 are listed as follows:
- Network fault: Since cloud computing resources are accessed over a network (the Internet), a predominant cause of failures in cloud computing is network faults. These faults may occur due to partitions in the network, packet loss or corruption, congestion, failure of the destination node or link, etc.
- Physical faults: These are faults that mainly occur in hardware resources, such as faults in CPUs, memory or storage, failure of power, etc.
- Process faults: Faults may occur in processes because of resource shortages, bugs in software, incompetent processing capabilities, etc.
- Service expiry fault: If a resource's service time expires while an application that leased it is still using it, this leads to service failures.
Failures can lead to the breakdown or shutdown of a system. However, distributed computing, and thus cloud computing, is characterized by the notion of partial failures. A fault may occur in any constituent node, process or network component. This leads to a partial failure and, consequently, performance degradation instead of a complete breakdown. Though this results in more robust and dependable systems, faults should be handled effectively by proper fault tolerance mechanisms for high-performance computing. Fault tolerance enables the system to serve requests even when some of its components are not working properly BIB006 BIB005.
Fault tolerance (FT) is the capability of a system to keep performing its anticipated function in spite of faults. In other words, FT is related to reliability, successful operation, and the absence of breakdowns. An FT-based system should be able to handle faults in particular software or hardware components, failures of power or other varieties of unexpected adversities and still fulfil its specification.
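To illustrate this definition, the sketch below serves a request despite individual component faults by combining retry with failover across replicated servers, in the spirit of the retry and replication techniques mentioned in the cited works. It is purely illustrative: the replica names, failure probabilities and function names are assumptions, not part of any surveyed system.

```python
# A request survives partial failures via retry and failover across
# replicas; only a total failure of all replicas surfaces to the caller.
import random

class ReplicaDown(Exception):
    pass

def call_replica(name, request):
    """Hypothetical service call; each replica fails independently."""
    if random.random() < 0.4:                  # simulated crash/network fault
        raise ReplicaDown(name)
    return f"{name} handled {request!r}"

def fault_tolerant_call(replicas, request, retries_per_replica=2):
    """Mask partial failures: retry a replica, then fail over to the next."""
    for name in replicas:
        for _ in range(retries_per_replica):
            try:
                return call_replica(name, request)
            except ReplicaDown:
                continue                       # transient fault: try again
    raise RuntimeError("all replicas failed")  # complete breakdown only

print(fault_tolerant_call(["vm-1", "vm-2", "vm-3"], "GET /index"))
```

With three replicas and two attempts each, the request fails only if all six attempts fail, which is the sense in which partial failures are masked rather than propagated.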
A survey of fault tolerance in cloud computing <s> Cloud deployment models <s> The computational world is becoming very large and complex. Cloud Computing has emerged as a popular computing model to support processing large volumetric data using clusters of commodity computers. According to J.Dean and S. Ghemawat [1], Google currently processes over 20 terabytes of raw web data. It's some fascinating, large-scale processing of data that makes your head spin and appreciate the years of distributed computing fine-tuning applied to today's large problems. The evolution of cloud computing can handle such massive data as per on demand service. Nowadays the computational world is opting for pay-for-use models and Hype and discussion aside, there remains no concrete definition of cloud computing. In this paper, we first develop a comprehensive taxonomy for describing cloud computing architecture. Then we use this taxonomy to survey several existing cloud computing services developed by various projects world-wide such as Google, force.com, Amazon. We use the taxonomy and survey results not only to identify similarities and differences of the architectural approaches of cloud computing, but also to identify areas requiring further research. <s> BIB001 </s> A survey of fault tolerance in cloud computing <s> Cloud deployment models <s> Cloud computing represents a new IT model for consumption and delivery of the services over the Internet. Cloud computing is a natural evolution of the widespread adoption of virtualization, service-oriented architecture and utility computing. <s> BIB002 </s> A survey of fault tolerance in cloud computing <s> Cloud deployment models <s> Cloud computing is the biggest buzz in the computer world these days -- maybe too big of a buzz. Cloud computing means different things to different people. Cloud computing is not a small, undeveloped branch of IT. Research firm IDC thinks that cloud computing will reach $42 billion in 2012. You can do everything on cloud from running applications to storing data off-site. You can run entire operating systems on the cloud. This paper is for anyone who may have recently heard the term "cloud computing" for the first time and needs to know what it is and how it helps them. <s> BIB003 </s> A survey of fault tolerance in cloud computing <s> Cloud deployment models <s> The recent emergence of cloud computing has drastically altered everyone's perception of infrastructure architectures, software delivery and development models. Projecting as an evolutionary step, following the transition from mainframe computers to client/server deployment models, cloud computing encompasses elements from grid computing, utility computing and autonomic computing, into an innovative deployment architecture. This rapid transition towards the clouds, has fuelled concerns on a critical issue for the success of information systems, communication and information security. From a security perspective, a number of unchartered risks and challenges have been introduced from this relocation to the clouds, deteriorating much of the effectiveness of traditional protection mechanisms. As a result the aim of this paper is twofold; firstly to evaluate cloud security by identifying unique security requirements and secondly to attempt to present a viable solution that eliminates these potential threats. This paper proposes introducing a Trusted Third Party, tasked with assuring specific security characteristics within a cloud environment. 
The proposed solution calls upon cryptography, specifically Public Key Infrastructure operating in concert with SSO and LDAP, to ensure the authentication, integrity and confidentiality of involved data and communications. The solution, presents a horizontal level of service, available to all implicated entities, that realizes a security mesh, within which essential trust is maintained. <s> BIB004
The cloud deployment model is based on the motive and environment in which a cloud service is expected to be used. The selection of the deployment model determines the incurred cost, power consumption by resources and other capital expenses BIB001. The most commonly used deployment models in cloud environments are public cloud, private cloud, community cloud, and hybrid cloud.
- Public Cloud: The public cloud permits the general public to access the systems and services offered by an enterprise provider. It provides flexibility, scalability and location independence at very low cost, since multi-tenancy is generally used BIB003 BIB002 BIB001. Resources are dynamically provisioned on an on-demand basis from a remote third-party provider who offers resources using a multi-tenant approach.
- Private Cloud: The private cloud is used within a particular organization, i.e., the cloud resources and services can be accessed or used only inside that organization. This model ensures high application and data security and privacy BIB003 BIB001.
- Community Cloud: This model is used by various enterprises/organizations simultaneously and serves a particular community with shared concerns (for example, security requirements, mission, and compliance considerations). This model may be owned, operated, and managed by one or more organizations inside the community, a third party, or both BIB002 BIB004.
- Hybrid Cloud: The hybrid cloud is an alliance of the public cloud and the private cloud. In this deployment, critical operations (e.g. those requiring secure processing) are carried out using private cloud services and non-critical operations are carried out using the public cloud BIB002 BIB001.
Public clouds are most suitable in scenarios where organizations wish to use collaboration services like chat and video conferencing but sufficient IT resources or infrastructure are not available locally. In contrast, if strict security and privacy are issues of high priority, a private deployment model should be used. On the other hand, for an organization that possesses a large IT infrastructure and is also expanding its capabilities, a hybrid deployment model should be the choice.
A survey of fault tolerance in cloud computing <s> Cloud service model <s> The computational world is becoming very large and complex. Cloud Computing has emerged as a popular computing model to support processing large volumetric data using clusters of commodity computers. According to J.Dean and S. Ghemawat [1], Google currently processes over 20 terabytes of raw web data. It's some fascinating, large-scale processing of data that makes your head spin and appreciate the years of distributed computing fine-tuning applied to today's large problems. The evolution of cloud computing can handle such massive data as per on demand service. Nowadays the computational world is opting for pay-for-use models and Hype and discussion aside, there remains no concrete definition of cloud computing. In this paper, we first develop a comprehensive taxonomy for describing cloud computing architecture. Then we use this taxonomy to survey several existing cloud computing services developed by various projects world-wide such as Google, force.com, Amazon. We use the taxonomy and survey results not only to identify similarities and differences of the architectural approaches of cloud computing, but also to identify areas requiring further research. <s> BIB001 </s> A survey of fault tolerance in cloud computing <s> Cloud service model <s> Cloud computing is the biggest buzz in the computer world these days -- maybe too big of a buzz. Cloud computing means different things to different people. Cloud computing is not a small, undeveloped branch of IT. Research firm IDC thinks that cloud computing will reach $42 billion in 2012. You can do everything on cloud from running applications to storing data off-site. You can run entire operating systems on the cloud. This paper is for anyone who may have recently heard the term "cloud computing" for the first time and needs to know what it is and how it helps them. <s> BIB002 </s> A survey of fault tolerance in cloud computing <s> Cloud service model <s> Reliability is a critical requirement for any system. To achieve high reliability the fault tolerance must be accomplished. Fault tolerance refers to the task must be executed even in occurring the fault. Cloud computing has emerged that grants users with access to remote computing resources. Although the current development of the cloud computing technology there are more challenges and chances of errors occur during execution. In this paper, the proposed model tolerates the faults by using replication and resubmission techniques. Then it decides which the best virtual machine depending on the reliability assessments. Then it reschedules the task once the failure occurs to the highest reliability processing node instead of replicating this task to all available nodes. Additionally, we compare our proposed model with another model that used replication and resubmission without any improvement. And we evaluate the experiments by a CloudSim simulator. We conclude that the proposed model can provide comparable performance with the traditional replication and resubmission techniques. <s> BIB003
Though cloud computing has evolved greatly in recent years, services still fall into three major service models BIB002 BIB001. The basic service models are shown in Fig. 3.
- Software-as-a-Service (SaaS): In this model, software applications are presented by the cloud service provider in the form of services to the consumers/end users BIB002 BIB001. An application delivered as a service removes the need to install and execute the application on the user's computer and simplifies maintenance. Examples include web conferencing services, email applications, social media platforms, etc. SaaS providers include Amazon AWS, Google Compute Engine, Microsoft Azure, IBM SmartCloud Enterprise, CloudStack, OpenStack, OpenNebula, CloudForge, Citrix, Qstack and so on (https://www.datamation.com/cloud-computing/50-leading-saas-companies.html).
- Platform-as-a-Service (PaaS): This model provides a platform to develop, run, test and manage applications in the cloud BIB001 BIB003. A user can lease an environment with a software stack from a CSP and use it for custom application development. PaaS providers include Acquia Cloud, Amazon AWS, App Agile, Apprenda, AppScale, Bluemix, Cloud 66, Cloudways and so on (https://stackify.com/top-paas-providers/).
- Infrastructure-as-a-Service (IaaS): The IaaS model provides access to primary resources, i.e. physical machines, storage, networks, servers, virtual machines on the cloud, etc. BIB003. The IaaS provider offers services such as dynamic virtual machine provisioning and on-demand storage facilities. IaaS providers include Salesforce, Microsoft, Amazon Web Services, Slack, Zendesk, GitHub, Oracle, Cisco and so on (https://stackify.com/top-iaas-providers/).
- Anything-as-a-Service (XaaS): XaaS is a further service model in which anything or everything may be offered as a service. The cloud system is capable of maintaining a huge amount of resources to fulfil personal, granular, and specific requirements using Security-as-a-Service, Identity-as-a-Service, Communication-as-a-Service, DaaS (Database-as-a-Service) or Strategy-as-a-Service and so on.
A survey of fault tolerance in cloud computing <s> Fault tolerance approaches in distributed systems <s> Proactive fault tolerance (FT) in high-performance computing is a concept that prevents compute node failures from impacting running parallel applications by preemptively migrating application parts away from nodes that are about to fail. This paper provides a foundation for proactive FT by defining its architecture and classifying implementation options. This paper further relates prior work to the presented architecture and classification, and discusses the challenges ahead for needed supporting technologies. <s> BIB001 </s> A survey of fault tolerance in cloud computing <s> Fault tolerance approaches in distributed systems <s> With the immense growth of internet and its users, Cloud computing, with its incredible possibilities in ease, Quality of service and on-interest administrations, has turned into a guaranteeing figuring stage for both business and nonbusiness computation customers. It is an adoptable technology as it provides integration of software and resources which are dynamically scalable. The dynamic environment of cloud results in various unexpected faults and failures. The ability of a system to react gracefully to an unexpected equipment or programming malfunction is known as fault tolerance. In order to achieve robustness and dependability in cloud computing, failure should be assessed and handled effectively. Various fault detection methods and architectural models have been proposed to increase fault tolerance ability of cloud. The objective of this paper is to propose an algorithm using Artificial Neural Network for fault detection which will overcome the gaps of previously implemented algorithms and provide a fault tolerant model. <s> BIB002 </s> A survey of fault tolerance in cloud computing <s> Fault tolerance approaches in distributed systems <s> Cloud computing provides services as a type of Internet-based computing using data centers that contain servers, storage and networks. For this reason, the cloud computing its great potentials in low cost and on-demand services. In recent years, the end user is highly increased to utilize the services in cloud computing. However, the faulty of infrastructure, software and application are the major problem in cloud computing. Fault tolerance uses techniques that concerned to guarantee availability, reliability of critical services and application execution. This paper discusses the existing fault tolerance techniques and challenges to minimize failure impact on the system and application execution in cloud computing. <s> BIB003 </s> A survey of fault tolerance in cloud computing <s> Fault tolerance approaches in distributed systems <s> Cloud computing offers variety of services from software instance to resource provisioning. As the user demands increase, there is a necessity to enhance cloud offerings. But still in some cases fault-tolerance is the major challenge for cloud environment. Multiple request to access the same server sometime leads to server over loaded and may increase faults and cause unreliability for the server. Few fault-tolerance techniques like self-healing, job migration, static load balancing and replication are existed but they are not fully reliable and effective for cloud environment. In this paper, we propose pro-active approach for fault-tolerance based on Processing power, Memory and Network parameters to increase resource reliability. 
Through this approach, we first calculate the reliability of each Virtual Machine (VM) based on success rate of task execution and then schedule the task on highly reliable VM. This approach provides comparatively good results for the VM reliability and system stability. <s> BIB004 </s> A survey of fault tolerance in cloud computing <s> Fault tolerance approaches in distributed systems <s> Cloud computing provides support for hosting client's application. Cloud is a distributed platform that provides hardware, software and network resources to both execute consumer's application and also to store and mange user's data. Cloud is also used to execute scientific workflow applications that are in general complex in nature when compared to other applications. Since cloud is a distributed platform, it is more prone to errors and failures. In such an environment, avoiding a failure is difficult and identifying the source of failure is also complex. Because of this, fault tolerance mechanisms are implemented on the cloud platform. This ensures that even if there are failures in the environment, critical data of the client is not lost and user's application running on cloud is not affected in any manner. Fault tolerance mechanisms also help in improving the cloud's performance by proving the services to the users as required on demand. In this paper a survey of existing fault tolerance mechanisms for the cloud platform are discussed. This paper also discusses the failures, fault tolerant clustering methods and fault tolerant models that are specific for scientific workflow applications. <s> BIB005 </s> A survey of fault tolerance in cloud computing <s> Fault tolerance approaches in distributed systems <s> Distributed Systems have swiftly evolved from network of personal computers to cluster and then to grid, moving on to the era of cloud computing and now the latest one as Internet of things (IoT). With these rapid enhancements, the scale and complexity of systems providing cloud computing services have also increased tremendously. The major challenge faced by cloud service providers today is to provide an efficient, cost-effective, and reliable solution for seamless delivery of services to users. To achieve this research community is constantly working hard on different related issues like scheduling, power consumption, high availability, customer retention, resource provisioning, reliability and minimizing the probability of failures, etc. Reliability of service is an important parameter. With a large number of components in the cloud, the probability of failures is becoming a norm rather than an exception while delivering services to users. This emphasizes the need to develop fault tolerant schemes for cloud environment to deliver the required level of reliability. In this work, we have proposed a novel fault detection and mitigation approach. The novelty of approach lies in the method of detecting the fault based on running status of the job. The detection algorithm periodically monitors the progress of job on virtual machines (VMs) and reports the stalled job due to failed VM to fault tolerant manager (FTM). This not only reduces the resources wastage but ensures timely delivery of services to avoid any penalty due to service level agreement (SLA) violation. The validation of the proposed approach is done using CloudSim simulator. The performance analysis reveals the effectiveness of the proposed approach. 
<s> BIB006 </s> A survey of fault tolerance in cloud computing <s> Fault tolerance approaches in distributed systems <s> This paper presents a comprehensive survey of the state-of-the-art work on fault tolerance methods proposed for cloud computing. The survey classifies fault-tolerance methods into three categories: 1) ReActive Methods (RAMs); 2) PRoactive Methods (PRMs); and 3) ReSilient Methods (RSMs). RAMs allow the system to enter into a fault status and then try to recover the system. PRMs tend to prevent the system from entering a fault status by implementing mechanisms that enable them to avoid errors before they affect the system. On the other hand, recently emerging RSMs aim to minimize the amount of time it takes for a system to recover from a fault. Machine Learning and Artificial Intelligence have played an active role in RSM domain in such a way that the recovery time is mapped to a function to be optimized (i.e by converging the recovery time to a fraction of milliseconds). As the system learns to deal with new faults, the recovery time will become shorter. In addition, current issues and challenges in cloud fault tolerance are also discussed to identify promising areas for future research. <s> BIB007
Fault tolerance is crucial for a system as it permits the system to offer the required services even in the presence of component failures, i.e., one or multiple faults (Charity and Hua, 2016; Valle, 2008). Failures in a system occur as a result of errors, which are in turn caused by faults (Fig. 4). These are described as follows: Faults: A fault is the inability of a system to perform its required task, caused by some abnormal state or bug present in one or more parts of the system BIB002 BIB003 . The various types of faults that may occur in a system are classified as depicted in Fig. 5 . Error: A system component can move into an error state, i.e., an incorrect condition, due to the presence of faults. A component behaving erroneously may result in the system's partial or even complete failure BIB002 . A distributed system can contain various types of errors, as shown in Fig. 6 . Failure: A failure refers to the misbehaviour of a system that may be observed by a user (a human or some other computer system). A failure is recognized only when the system's output or outcome is incorrect BIB002 BIB005 . Failures may be classified as depicted in Fig. 7 . Fault tolerance approaches are necessary as they aid in detecting and handling faults that may occur either due to hardware (H/W) failures or software (S/W) faults. Fault tolerance is especially crucial on a cloud platform as it gives assurance regarding the performance, reliability and availability of applications. To achieve robustness in cloud computing, failures need to be assessed and handled effectively BIB006 BIB002 . The fault tolerance approaches identified from the literature can be categorized (Fig. 8) as follows: Reactive fault tolerance: This approach is mainly used to reduce the impact of failures on the cloud system after the failures/faults have actually occurred, thereby providing robustness and reliability BIB004 . Reactive fault tolerance approaches have been explored for the cloud as well as other distributed systems; these are listed in Table 1 . Proactive fault tolerance: This approach predicts faults proactively and substitutes suspected components with correctly running ones, i.e., it avoids recovery from faults and errors BIB004 BIB007 BIB001 . An overview of proactive FT techniques is given in Table 2 . Table 1 Reactive fault tolerance mechanisms with their descriptions.
A survey of fault tolerance in cloud computing <s> Reactive Fault Tolerance Techniques <s> Fault tolerance is a major concern to guarantee availability and reliability of critical services as well as application execution. In order to minimize failure impact on the system and application execution, failures should be anticipated and proactively handled. Fault tolerance techniques are used to predict these failures and take an appropriate action before failures actually occur. This paper discusses the existing fault tolerance techniques in cloud computing based on their policies, tools used and research challenges. Cloud virtualized system architecture has been proposed. In the proposed system autonomic fault tolerance has been implemented. The experimental results demonstrate that the proposed system can deal with various software faults for server applications in a cloud virtualized environment. <s> BIB001 </s> A survey of fault tolerance in cloud computing <s> Reactive Fault Tolerance Techniques <s> Cloud computing datacenter hosts hundreds of thousands of servers that coordinate users' tasks in order to deliver highly available computing service. These servers consist of multiple memory modules, network cards, storage disks, processors etc., each of these components while capable of failing. At such a large scale, hardware component failure is the norm rather than an exception. Hardware failure can lead to performance degradation to users and can result in losses to the business. Fault tolerance is one of the efficient modules that keep hardware in operational mode as much as possible. In this paper, we survey the most famous fault tolerance technique in cloud computing, and list numerous FT methods proposed by the research experts in this field. <s> BIB002 </s> A survey of fault tolerance in cloud computing <s> Reactive Fault Tolerance Techniques <s> In recent years, cloud computing is highly embraced and more organizations consider at least some type of cloud strategy and apply them in their business process. Since failure is probable in cloud data centers and access to available cloud resources is fundamental, evaluation and application of different fault-tolerance methods is inevitable. On the other hand, the increasing growth of cloud storage users motivated us to study fault-tolerance techniques, and their strengths and weaknesses. In this paper, after introducing the concept of fault-tolerance in the context of cloud computing, the fault-tolerant techniques are presented, and after introduction of some measures, a comparative analysis is provided. <s> BIB003 </s> A survey of fault tolerance in cloud computing <s> Reactive Fault Tolerance Techniques <s> With the immense growth of internet and its users, Cloud computing, with its incredible possibilities in ease, Quality of service and on-interest administrations, has turned into a guaranteeing figuring stage for both business and nonbusiness computation customers. It is an adoptable technology as it provides integration of software and resources which are dynamically scalable. The dynamic environment of cloud results in various unexpected faults and failures. The ability of a system to react gracefully to an unexpected equipment or programming malfunction is known as fault tolerance. In order to achieve robustness and dependability in cloud computing, failure should be assessed and handled effectively. Various fault detection methods and architectural models have been proposed to increase fault tolerance ability of cloud.
The objective of this paper is to propose an algorithm using Artificial Neural Network for fault detection which will overcome the gaps of previously implemented algorithms and provide a fault tolerant model. <s> BIB004 </s> A survey of fault tolerance in cloud computing <s> Reactive Fault Tolerance Techniques <s> Cloud computing provides support for hosting client's application. Cloud is a distributed platform that provides hardware, software and network resources to both execute consumer's application and also to store and mange user's data. Cloud is also used to execute scientific workflow applications that are in general complex in nature when compared to other applications. Since cloud is a distributed platform, it is more prone to errors and failures. In such an environment, avoiding a failure is difficult and identifying the source of failure is also complex. Because of this, fault tolerance mechanisms are implemented on the cloud platform. This ensures that even if there are failures in the environment, critical data of the client is not lost and user's application running on cloud is not affected in any manner. Fault tolerance mechanisms also help in improving the cloud's performance by proving the services to the users as required on demand. In this paper a survey of existing fault tolerance mechanisms for the cloud platform are discussed. This paper also discusses the failures, fault tolerant clustering methods and fault tolerant models that are specific for scientific workflow applications. <s> BIB005
Check-pointing BIB002 BIB003: Saves the system's state periodically. In case of a constituent task's failure, the job is restarted from the last check-pointed state rather than from the beginning, which prevents the loss of useful computation.
Job Migration BIB005: If a job fails because it cannot complete its execution on a specific physical machine, it is migrated to some other machine.
Replication BIB004 BIB003: Creates multiple copies of tasks and stores the replicas at different locations. A task can continue its execution in the presence of malfunctions or failures until all of its replicas are destroyed.
S-Guard BIB001: Relies on a rollback and recovery process.
Retry BIB005: A failed task is executed repeatedly on the same resource until it succeeds.
Task Resubmission BIB004 BIB002: The failed task is resubmitted either to the same resource or to a different machine for execution.
Rescue workflow BIB005: Enables a system to continue working after a task/job failure until it becomes impossible to proceed without handling the fault.
Table 2 Proactive fault tolerance techniques with their descriptions.
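As a concrete illustration of two of the reactive techniques listed in Table 1, the following minimal Python sketch combines retry on the same resource with task resubmission to a different machine once the retries on one machine are exhausted. The task interface, failure rate, machine names and retry parameters are invented for illustration; this is a toy model, not a production fault tolerance mechanism.

import random
import time

def run_task(task, machine):
    # Placeholder for real task execution; fails randomly to simulate faults.
    if random.random() < 0.4:
        raise RuntimeError(f"{task} failed on {machine}")
    return f"{task} completed on {machine}"

def execute_with_reactive_ft(task, machines, retries=3, backoff=0.1):
    for machine in machines:                    # task resubmission across machines
        for attempt in range(1, retries + 1):   # retry on the same machine
            try:
                return run_task(task, machine)
            except RuntimeError as err:
                print(f"attempt {attempt} on {machine} failed: {err}")
                time.sleep(backoff * attempt)   # simple linear backoff
    raise RuntimeError(f"{task} failed on all machines")

print(execute_with_reactive_ft("job-42", ["vm-1", "vm-2", "vm-3"]))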
A survey of fault tolerance in cloud computing <s> Proactive Fault Tolerance Techniques Description <s> Most distributed computing environments today are extremely complex and time-consuming for human administrators to manage. Thus, there is increasing demand for the self-healing and self-diagnosing of problems or errors arising in systems operating within today's ubiquitous computing environment. This paper proposes a proactive self-healing system that monitors, diagnoses and heals its own internal problems using self-awareness as contextual information. The proposed system consists of Multi-Agents that analyze the log context, error events and resource status in order to perform self-healing and self-diagnosis. To minimize the resources used by the Adapters, which monitor the logs in an existing system, we place a single process in memory. By this, we mean a single Monitoring Agent monitors the context of the logs generated by the different system components. For rapid and efficient self-healing, we use a 6-step process. The effectiveness of the proposed system is confirmed through practical experiments conducted with a prototype system. <s> BIB001 </s> A survey of fault tolerance in cloud computing <s> Proactive Fault Tolerance Techniques Description <s> Proactive fault tolerance (FT) in high-performance computing is a concept that prevents compute node failures from impacting running parallel applications by preemptively migrating application parts away from nodes that are about to fail. This paper provides a foundation for proactive FT by defining its architecture and classifying implementation options. This paper further relates prior work to the presented architecture and classification, and discusses the challenges ahead for needed supporting technologies. <s> BIB002 </s> A survey of fault tolerance in cloud computing <s> Proactive Fault Tolerance Techniques Description <s> The computational world is becoming very large and complex. Cloud Computing has emerged as a popular computing model to support processing large volumetric data using clusters of commodity computers. According to J.Dean and S. Ghemawat [1], Google currently processes over 20 terabytes of raw web data. It's some fascinating, large-scale processing of data that makes your head spin and appreciate the years of distributed computing fine-tuning applied to today's large problems. The evolution of cloud computing can handle such massive data as per on demand service. Nowadays the computational world is opting for pay-for-use models and Hype and discussion aside, there remains no concrete definition of cloud computing. In this paper, we first develop a comprehensive taxonomy for describing cloud computing architecture. Then we use this taxonomy to survey several existing cloud computing services developed by various projects world-wide such as Google, force.com, Amazon. We use the taxonomy and survey results not only to identify similarities and differences of the architectural approaches of cloud computing, but also to identify areas requiring further research. <s> BIB003 </s> A survey of fault tolerance in cloud computing <s> Proactive Fault Tolerance Techniques Description <s> Fault tolerance is a major concern to guarantee availability and reliability of critical services as well as application execution. In order to minimize failure impact on the system and application execution, failures should be anticipated and proactively handled. 
Fault tolerance techniques are used to predict these failures and take an appropriate action before failures actually occur. This paper discusses the existing fault tolerance techniques in cloud computing based on their policies, tools used and research challenges. Cloud virtualized system architecture has been proposed. In the proposed system autonomic fault tolerance has been implemented. The experimental results demonstrate that the proposed system can deal with various software faults for server applications in a cloud virtualized environment. <s> BIB004 </s> A survey of fault tolerance in cloud computing <s> Proactive Fault Tolerance Techniques Description <s> Cloud computing datacenter hosts hundreds of thousands of servers that coordinate users' tasks in order to deliver highly available computing service. These servers consist of multiple memory modules, network cards, storage disks, processors etc…, each of these components while capable of failing. At such a large scale, hardware component failure is the norm rather than an exception. Hardware failure can lead to performance degradation to users and can result in losses to the business. Fault tolerant is one of efficient modules that keep hardware in operational mode as much as possible. In this paper, we survey the most famous fault tolerance technique in cloud computing, and list numerous FT methods proposed by the research experts in this field. <s> BIB005 </s> A survey of fault tolerance in cloud computing <s> Proactive Fault Tolerance Techniques Description <s> With the immense growth of internet and its users, Cloud computing, with its incredible possibilities in ease, Quality of service and on-interest administrations, has turned into a guaranteeing figuring stage for both business and nonbusiness computation customers. It is an adoptable technology as it provides integration of software and resources which are dynamically scalable. The dynamic environment of cloud results in various unexpected faults and failures. The ability of a system to react gracefully to an unexpected equipment or programming malfunction is known as fault tolerance. In order to achieve robustness and dependability in cloud computing, failure should be assessed and handled effectively. Various fault detection methods and architectural models have been proposed to increase fault tolerance ability of cloud. The objective of this paper is to propose an algorithm using Artificial Neural Network for fault detection which will overcome the gaps of previously implemented algorithms and provide a fault tolerant model. <s> BIB006 </s> A survey of fault tolerance in cloud computing <s> Proactive Fault Tolerance Techniques Description <s> Cloud computing provides support for hosting client's application. Cloud is a distributed platform that provides hardware, software and network resources to both execute consumer's application and also to store and mange user's data. Cloud is also used to execute scientific workflow applications that are in general complex in nature when compared to other applications. Since cloud is a distributed platform, it is more prone to errors and failures. In such an environment, avoiding a failure is difficult and identifying the source of failure is also complex. Because of this, fault tolerance mechanisms are implemented on the cloud platform. This ensures that even if there are failures in the environment, critical data of the client is not lost and user's application running on cloud is not affected in any manner. 
Fault tolerance mechanisms also help in improving the cloud's performance by proving the services to the users as required on demand. In this paper a survey of existing fault tolerance mechanisms for the cloud platform are discussed. This paper also discusses the failures, fault tolerant clustering methods and fault tolerant models that are specific for scientific workflow applications. <s> BIB007
Self-Healing BIB005 BIB001: Uses a divide-and-conquer technique in which a large task is decomposed into multiple chunks, a partitioning mainly done to improve the system's performance. When numerous instances of the same application run on various virtual machines (VMs), failures of application instances are handled automatically. It permits computing devices or systems to identify, diagnose and heal their own problems without depending on an administrator.
Software Rejuvenation BIB006 BIB007: The system undergoes periodic reboots and starts from a fresh state every time.
Pre-emptive Migration BIB004 BIB002: An application is constantly observed and analysed, relying on a feedback-loop control method.
Load Balancing BIB003: Balances the memory and CPU load when it exceeds a certain limit; the excess load of an overloaded CPU is transferred to another CPU that has not exceeded its maximum limit.
Parameters used for fault tolerance in cloud computing: The fault tolerance approaches in cloud computing are evaluated using various parameters to check the efficiency and effectiveness of cloud systems BIB007 BIB004 . The possible parameters are listed in Table 3 .
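As an illustration of the pre-emptive migration technique from Table 2, the following minimal Python sketch monitors a per-node health value (as a feedback loop would provide) and moves virtual machines away from a node before it is predicted to fail, instead of recovering afterwards. The health scores, threshold and node/VM names are invented for illustration.

HEALTH_THRESHOLD = 0.7   # assumed limit below which a node is "about to fail"

nodes = {"node-a": 0.95, "node-b": 0.55, "node-c": 0.88}   # monitored health
placement = {"vm-1": "node-a", "vm-2": "node-b", "vm-3": "node-b"}

def preemptive_migration(placement, nodes, threshold=HEALTH_THRESHOLD):
    healthy = [n for n in nodes if nodes[n] >= threshold]
    for vm, node in list(placement.items()):
        if nodes[node] < threshold and healthy:
            # Move the VM to the healthiest remaining node before the
            # suspected node actually fails.
            target = max(healthy, key=lambda n: nodes[n])
            print(f"migrating {vm}: {node} (health={nodes[node]}) -> {target}")
            placement[vm] = target
    return placement

print(preemptive_migration(placement, nodes))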
A survey of fault tolerance in cloud computing <s> Fault tolerance in distributed computing environments <s> A mobile computing system consists of mobile and stationary nodes, connected to each other by a communication network. The presence of mobile nodes in the system places constraints on the permissible energy consumption and available communication bandwidth. To minimize the lost computation during recovery from node failures, periodic collection of a consistent snapshot of the system (checkpoint) is required. Locating mobile nodes contributes to the checkpointing and recovery costs. Synchronous snapshot collection algorithms, designed for static networks, either force every node in the system to take a new local snapshot, or block the underlying computation during snapshot collection. Hence, they are not suitable for mobile computing systems. If nodes take their local checkpoints independently in an uncoordinated manner, each node may have to store multiple local checkpoints in stable storage. This is not suitable for mobile nodes as they have small memory. This paper presents a synchronous snapshot collection algorithm for mobile systems that neither forces every node to take a local snapshot, nor blocks the underlying computation during snapshot collection. If a node initiates snapshot collection, local snapshots of only those nodes that have directly or transitively affected the initiator since their last snapshots need to be taken. We prove that the global snapshot collection terminates within a finite time of its invocation and the collected global snapshot is consistent. We also propose a minimal rollback/recovery algorithm in which the computation at a node is rolled back only if it depends on operations that have been undone due to the failure of node(s). Both the algorithms have low communication and storage overheads and meet the low energy consumption and low bandwidth constraints of mobile computing systems. <s> BIB001 </s> A survey of fault tolerance in cloud computing <s> Fault tolerance in distributed computing environments <s> A number of checkpointing and message logging algorithms have been proposed to support fault tolerance of mobile computing systems. However, little attention has been paid to the optimistic message logging scheme. Optimistic logging has a lower failure-free operation cost compared to other logging schemes. It also has a lower failure recovery cost compared to the checkpointing schemes. This paper presents an efficient scheme to implement optimistic logging for the mobile computing environment. In the proposed scheme, the task of logging is assigned to the mobile support station so that volatile logging can be utilized. In addition, to reduce the message overhead, the mobile support station takes care of dependency tracking and the potential dependency between mobile hosts is inferred from the dependency between mobile support stations. The performance of the proposed scheme is evaluated by an extensive simulation study. The results show that the proposed scheme requires a small failure-free overhead and the cost of unnecessary rollback caused by the imprecise dependency is adjustable by properly selecting the logging frequency. <s> BIB002 </s> A survey of fault tolerance in cloud computing <s> Fault tolerance in distributed computing environments <s> Mobile computing is going to change the way, computers are used today. 
However mobile computing environment has features like high mobility, frequent disconnections, and lack of resources, such as memory and battery power. Such features make applications, running on mobile devices, more susceptible to faults. Checkpointing is a major technique to confine faults and restart applications faster. In this paper, we present a coordinated checkpointing algorithm for deterministic applications. We are using anti-messages along-with selective logging to achieve faster recovery and reduced energy consumption. Our algorithm is non-blocking in nature and avoids unnecessary computation. We ask only minimum number of processes to take the checkpoint and also take in to account the limited storage available at mobile devices. <s> BIB003 </s> A survey of fault tolerance in cloud computing <s> Fault tolerance in distributed computing environments <s> Accomplishing the distributed snapshots problem in mobile systems is an important issue as well as in distributed systems. This work presents a distributed snapshots protocol for mobile computing systems. In addition, this protocol can be used for achieving an efficient checkpointing protocol in the mobile environment. Specifically, it is a robust adaptation of the classical distributed snapshots protocol, where the mobile hosts can still roam among the different cells within the mobile system. The main benefit of this work is to provide distributed snapshots for a mobile system without adding any restriction to the system, such as FIFO ordering among the application messages as required for a traditional distributed system. <s> BIB004 </s> A survey of fault tolerance in cloud computing <s> Fault tolerance in distributed computing environments <s> Mobile computing systems have many constraints such as low battery power, low bandwidth , high mobility and lack of stable storage which are not presented in static distributed systems. In this paper, we propose an efficient communication-induced checkpointing protocol for mobile computing systems. We also propose an asynchronous recovery protocol based on the checkpointing protocol. Mobile support stations control major parts of the checkpointing and recovery such as storing and tracing the checkpoints, requesting rollback and logging messages, so that mobile hosts do not incur much overhead. The recovery algorithm has no domino effect and a failed process needs to roll back to its latest checkpoint and request only a subset of the processes to rollback to a consistent checkpoint. Our recovery protocol uses selective message logging at the mobile support station to handle the messages lost due to rollback. <s> BIB005 </s> A survey of fault tolerance in cloud computing <s> Fault tolerance in distributed computing environments <s> In this paper, we give a survey on fault tolerant issue in distributed systems. More specially speaking, we talk about one important and basic component called failure detection, which is to detect the failure of the process quickly and accurately. Thus, a good failure detection method will avoid the further system lost due to process crash. This survey provides the related research results and also explored the future directions about failure detection, and it is a good reference for researcher on this topic. <s> BIB006 </s> A survey of fault tolerance in cloud computing <s> Fault tolerance in distributed computing environments <s> This paper deals with decentralized, QoS-aware middleware for checkpointing arrangement in Mobile Grid (MoG) computing systems. 
Checkpointing is more crucial in MoG systems than in their conventional wired counterparts due to host mobility, dynamicity, less reliable wireless links, frequent disconnections, and variations in mobile systems. We've determined the globally optimal checkpoint arrangement to be NP-complete and so consider Reliability Driven (ReD) middleware, employing decentralized QoS-aware heuristics, to construct superior checkpointing arrangements efficiently. With ReD, an MH (mobile host) simply sends its checkpointed data to one selected neighboring MH, and also serves as a stable point of storage for checkpointed data received from a single approved neighboring MH. ReD works to maximize the probability of checkpointed data recovery during job execution, increasing the likelihood that a distributed application, executed on the MoG, completes without sustaining an unrecoverable failure. It allows collaborative services to be offered practically and autonomously by the MoG. Simulations and actual testbed implementation show ReD's favorable recovery probabilities with respect to Random Checkpointing Arrangement (RCA) middleware, a QoS-blind comparison protocol producing random arbitrary checkpointing arrangements. <s> BIB007 </s> A survey of fault tolerance in cloud computing <s> Fault tolerance in distributed computing environments <s> The widespread availability and increasing processing power of mobile devices has lead to a focus towards the development of autonomous mobile computing grids (MoGs). Such mobile grids allow the successful execution of distributed applications; without access to any static nodes or wired networks. However, the implementation of a fault tolerance technique is essential to completely utilize the mobile devices as viable computing resources. Checkpointing is a well explored fault tolerance technique for mobile computing systems. The paper presents an adaptive checkpointing technique for failure recovery of mobile nodes in a MoG. The presented protocol relies on cooperative checkpointing by the constituent nodes in the system. A node uses the stable storage of other nodes in the system to save its checkpoint data, in case the requisite stable storage is not available at the node itself. Further, depending on the availability of resources in the MoG, the scheme replicates a node's checkpoint data at multiple nodes. The results of simulation for the presented scheme verify that the introduction of redundancy with the checkpointing procedure vastly increases the likelihood of successful recovery of a failed node in a MoG. <s> BIB008 </s> A survey of fault tolerance in cloud computing <s> Fault tolerance in distributed computing environments <s> Mobile ad hoc networks (MANETs) are increasingly being employed for expanding the computing capabilities of existing cellular mobile systems and in the implementation of mobile computing grids. However, MANETs are susceptible to various transient as well as permanent failures and a fault tolerance technique is crucial in order to effectively utilize the constituent nodes as viable compute resources. Checkpointing and message logging based rollback recovery is a well established approach to provide fault tolerance in static and cellular mobile distributed systems; yet its use for achieving fault tolerance in MANETs is comparatively less explored. 
The existing recovery algorithms cannot be applied directly to MANETs due to their insufficiency in handling challenges like absence of static infrastructure, frequent node movement, constrained wireless bandwidth and limited stable storage. In this paper, we propose a checkpointing based rollback recovery protocol for clustered MANETs that determines the checkpointing frequency of a mobile node based on its mobility; thereby avoiding unnecessary checkpoints. The protocol uses a popular graph theoretic construct called connected dominating set to lower the communication overhead due to the recovery procedure. The findings of our scheme have been substantiated by the complexity analysis and simulation under varying network conditions. <s> BIB009 </s> A survey of fault tolerance in cloud computing <s> Fault tolerance in distributed computing environments <s> A Mobile Ad Hoc networks is a dynamic environment which due to frequently mobile wireless nodes experiences Communication failures due to network partitioning, and nodes failures exhibiting different faulty actions temporary or long lasting arising out of glitches related to hardware or software. As the mobile nodes are mostly resource constrained, in case of faulty nodes packets forwarding could be lead to further complications. Hence in designing a robust mobile ad hoc network fault tolerance plays a major role. Due to the presence of faulty nodes, the Performance of routing degrades and the reason for the faulty nodes has to be identified to address routing by exploring network redundancies. In our previous work[13],we devised Genetic Algorithm(GA) based Energy efficient QoS routing (GAEEQR) protocol. As extension of previous work, we devised protocol called Fault Tolerance QoS Routing Protocol, which has capability to send the data with an alternative route when route break occurs. This protocol gives the better results than GAEEQR in terms of delay, packet delivery ratio, and throughput and energy consumption. <s> BIB010 </s> A survey of fault tolerance in cloud computing <s> Fault tolerance in distributed computing environments <s> Distributed Systems have swiftly evolved from network of personal computers to cluster and then to grid, moving on to the era of cloud computing and now the latest one as Internet of things (IoT). With these rapid enhancements, the scale and complexity of systems providing cloud computing services have also increased tremendously. The major challenge faced by cloud service providers today is to provide an efficient, cost-effective, and reliable solution for seamless delivery of services to users. To achieve this research community is constantly working hard on different related issues like scheduling, power consumption, high availability, customer retention, resource provisioning, reliability and minimizing the probability of failures, etc. Reliability of service is an important parameter. With a large number of components in the cloud, the probability of failures is becoming a norm rather than an exception while delivering services to users. This emphasizes the need to develop fault tolerant schemes for cloud environment to deliver the required level of reliability. In this work, we have proposed a novel fault detection and mitigation approach. The novelty of approach lies in the method of detecting the fault based on running status of the job. 
The detection algorithm periodically monitors the progress of job on virtual machines (VMs) and reports the stalled job due to failed VM to fault tolerant manager (FTM). This not only reduces the resources wastage but ensures timely delivery of services to avoid any penalty due to service level agreement (SLA) violation. The validation of the proposed approach is done using CloudSim simulator. The performance analysis reveals the effectiveness of the proposed approach. <s> BIB011 </s> A survey of fault tolerance in cloud computing <s> Fault tolerance in distributed computing environments <s> This paper presents a comprehensive survey of the state-of-the-art work on fault tolerance methods proposed for cloud computing. The survey classifies fault-tolerance methods into three categories: 1) ReActive Methods (RAMs); 2) PRoactive Methods (PRMs); and 3) ReSilient Methods (RSMs). RAMs allow the system to enter into a fault status and then try to recover the system. PRMs tend to prevent the system from entering a fault status by implementing mechanisms that enable them to avoid errors before they affect the system. On the other hand, recently emerging RSMs aim to minimize the amount of time it takes for a system to recover from a fault. Machine Learning and Artificial Intelligence have played an active role in RSM domain in such a way that the recovery time is mapped to a function to be optimized (i.e by converging the recovery time to a fraction of milliseconds). As the system learns to deal with new faults, the recovery time will become shorter. In addition, current issues and challenges in cloud fault tolerance are also discussed to identify promising areas for future research. <s> BIB012
Fault tolerance (FT) is an essential concern in the cloud computing platform since it enables the system to provide the required services with good performance in the presence of one or more failures of system components BIB011 BIB012 . In the past, fault tolerance approaches have been applied to many different distributed computing environments apart from cloud computing. Some of them are as follows: Wired Distributed System: A wired distributed system is a collection of autonomous computers that appears as a single coherent system to its clients/users. Each computer in the distributed system has an individual set of resources and can share some general peripheral devices, e.g. a printer. Message passing is generally used for communication in a distributed system. Designing a distributed system is a difficult task due to the existence of components that may be located at different sites, and one of the major challenges the system designer has to face is providing fault tolerance (FT). In general, FT is highly required in distributed network systems, especially in large-scale environments. Users of a distributed system require it to keep working continuously even in the case of technical failures: if one or more members of the system have crashed, the system should still be able to fulfil clients' requests. Therefore, an efficient system must be designed and implemented to handle the partial failure of its components. Failure detection (FD) and process monitoring are the most common FT techniques in distributed systems, and reactive approaches such as checkpointing, replication, retry and resubmission have been used to handle failures in these systems BIB006 (https://www.slideshare.net/sumitjain2013/fault-tolerance-in-distributed-systems). Mobile Computing System: A mobile computing system is a type of distributed system in which some or all of the constituent nodes are mobile computers. Such a system preserves continuous network connectivity even in the presence of host mobility, due to which a host's location within the network may vary with time. Each node in the system works independently, with infrequent asynchronous message communication. The fixed nodes in the mobile system may be interconnected by a static network, and a fixed node (commonly the mobile base station) is used to establish communication between a mobile node and the other nodes in the system. Nodes in the mobile system communicate with each other using messages BIB002 BIB005 . Some restrictions of mobile systems are limited bandwidth, mobile hosts with limited disk space, user mobility, limited battery life, etc. To overcome these limitations of the mobile computing system, fault tolerance approaches are used. The most commonly used FT approach in mobile systems is check-pointing, since the limited resources prevent the use of redundancy-based schemes such as replication. This technique requires processes to be check-pointed at regular intervals, saving their error-free state to stable storage; if any failure arises in a process, it may be recovered by restoring the most recently saved state, an approach called rollback recovery BIB003 (a minimal sketch of this idea is given at the end of this subsection). Checkpointing schemes can be categorized as coordinated, communication-induced, and uncoordinated checkpointing. In a coordinated method, the processes adjust their checkpointing actions by exchanging checkpoint coordination messages.
Coordinated check-pointing policies involve a large message overhead and are therefore not appropriate for mobile systems, which rely on low-bandwidth wireless communication channels. Furthermore, process execution may need to be suspended during checkpointing coordination, which may degrade performance. Uncoordinated checkpointing methods permit processes to take checkpoints at regular intervals without synchronizing with others BIB001 BIB004 , but they may suffer from the domino effect. Communication-induced check-pointing approaches have been used to handle the domino effect BIB002 . Mobile-Grid Computing: Grids are very large-scale distributed systems that spread the work to be done among their constituent systems. Grid computing facilitates the sharing of large-scale resources between loosely coordinated, distributed systems to meet the computational needs of large tasks; it therefore provides users with huge computational, bandwidth and storage resources. It is also possible to use grid computing in conjunction with mobile computing to obtain better performance, and this combination is also important for handling the essential restrictions of mobile devices effectively . However, the integration of mobile and grid devices for sharing computing resources is challenging because of unreliable connections, random node mobility, battery dependence, small communication bandwidth, restricted processing power and fixed storage. Effective execution of distributed applications in mobile grid computing (MoG) is possible only if the faults/failures of mobile devices are handled properly BIB008 . Therefore, FT policies are required to handle the different types of faults in MoGs. The most frequently used FT techniques in MoGs are checkpointing and rollback recovery, which have been used extensively in conventional wired and cellular mobile distributed systems. BIB008 presented an adaptive checkpointing-based approach for MoGs to recover from the failure of mobile nodes. BIB007 presented the ReD (Reliability Driven) middleware approach, which enables the mobile grid scheduler to make informed decisions, selectively submitting work portions to hosts with better check-pointing arrangements that ensure successful completion. MANET (Mobile Ad hoc Network): A MANET is a self-configuring wireless ad hoc network that does not depend on any infrastructure, i.e., it is an infrastructure-less network of wirelessly connected mobile devices. All devices in a MANET are autonomous and can change their paths and directions dynamically, and therefore alter their links to other devices frequently. MANETs are widely used for extending the computing abilities of existing mobile systems and mobile grids (MoGs), but they are vulnerable to several transient as well as permanent failures, so an FT technique must be employed to handle failures effectively. Checkpointing with rollback recovery is a widely used policy to handle faults in static and cellular mobile systems; however, its use in MANETs has been less examined. The preceding recovery-based approaches cannot be applied to MANETs directly because of challenges such as the absence of a static infrastructure, frequent node movement, limited bandwidth and a restricted amount of stable storage.
To handle faults in MANETs, a checkpointing-based rollback recovery protocol has been proposed that determines a mobile node's checkpointing frequency according to its mobility, thereby avoiding unnecessary checkpoints BIB009 BIB010 .
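The checkpoint-and-rollback idea referred to throughout this subsection can be summarized by the following minimal Python sketch, in which state is written to stable storage at a fixed interval and, after a crash, the computation resumes from the latest checkpoint instead of restarting from the beginning. The file name, checkpoint interval and workload are invented for illustration; coordinated, uncoordinated and communication-induced protocols differ in how such checkpoints are synchronized across processes, which this single-process sketch deliberately omits.

import os
import pickle

CHECKPOINT_FILE = "job.ckpt"     # stands in for stable storage
CHECKPOINT_INTERVAL = 10         # steps of work between checkpoints (assumed)

def save_checkpoint(state):
    with open(CHECKPOINT_FILE, "wb") as f:
        pickle.dump(state, f)

def load_checkpoint():
    # Roll back to the latest saved state, or start fresh if none exists.
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "result": 0}

def run(total_steps=100):
    state = load_checkpoint()
    while state["step"] < total_steps:
        state["result"] += state["step"]          # one unit of real work
        state["step"] += 1
        if state["step"] % CHECKPOINT_INTERVAL == 0:
            save_checkpoint(state)                # periodic checkpoint
    return state["result"]

# Rerunning after a crash loses at most CHECKPOINT_INTERVAL - 1 steps of work.
print(run())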
A survey of fault tolerance in cloud computing <s> System model <s> The fat-tree is one of the most widely-used topologies by interconnection network manufacturers. Recently, a deterministic routing algorithm that optimally balances the network traffic in fat--trees was proposed. It can not only achieve almost the same performance than adaptive routing, but also outperforms it for some traffic patterns. Nevertheless, fat-trees require a high number of switches with a non-negligible wiring complexity. In this paper, we propose replacing the fat-tree by an unidirectional multistage interconnection network referred to as reduced unidirectional fat-tree (RUFT) that uses a a simplified version of the aforementioned deterministic routing algorithm. As a consequence, switch hardware is almost reduced to the half, decreasing, in this way, power consumption, arbitration complexity, switch size, and network cost. Evaluation results show that RUFT obtains lower latency than fat-tree for low and medium traffic loads. Furthermore, in large networks, it obtains almost the same throughput than the classical fat-tree. <s> BIB001 </s> A survey of fault tolerance in cloud computing <s> System model <s> A fundamental challenge in data center networking is how to efficiently interconnect an exponentially increasing number of servers. This paper presents DCell, a novel network structure that has many desirable features for data center networking. DCell is a recursively defined structure, in which a high-level DCell is constructed from many low-level DCells and DCells at the same level are fully connected with one another. DCell scales doubly exponentially as the node degree increases. DCell is fault tolerant since it does not have single point of failure and its distributed fault-tolerant routing protocol performs near shortest-path routing even in the presence of severe link or node failures. DCell also provides higher network capacity than the traditional tree-based structure for various types of services. Furthermore, DCell can be incrementally expanded and a partial DCell provides the same appealing features. Results from theoretical analysis, simulations, and experiments show that DCell is a viable interconnection structure for data centers. <s> BIB002 </s> A survey of fault tolerance in cloud computing <s> System model <s> This article presents an efficient and scalable mechanism to overcome the limitations of collective communication in switched interconnection networks in the presence of faults. Considering that current trends in supercomputing are moving toward massively parallel computers, with many thousands of components, reliability becomes a challenge. In such scenario, fat-tree networks that provide hardware support for collective communication suffer from serious performance degradation due to the presence of, even, a single faulty node. This paper describes a new mechanism to provide high-performance collective communication in such situations. The feasibility of the proposed technique is formally demonstrated. We present the design of a new hardware-based routing algorithm for multicast, that is at the base of our proposal. The proposed mechanism is implemented and experimentally evaluated. Our experimental results show that hardware-based multicast trees provide an efficient and scalable solution for collective communication in fat-tree networks, significantly outperforming traditional solutions. 
<s> BIB003 </s> A survey of fault tolerance in cloud computing <s> System model <s> To be agile and cost effective, data centers should allow dynamic resource allocation across large server pools. In particular, the data center network should enable any server to be assigned to any service. To meet these goals, we present VL2, a practical network architecture that scales to support huge data centers with uniform high capacity between servers, performance isolation between services, and Ethernet layer-2 semantics. VL2 uses (1) flat addressing to allow service instances to be placed anywhere in the network, (2) Valiant Load Balancing to spread traffic uniformly across network paths, and (3) end-system based address resolution to scale to large server pools, without introducing complexity to the network control plane. VL2's design is driven by detailed measurements of traffic and fault data from a large operational cloud service provider. VL2's implementation leverages proven network technologies, already available at low cost in high-speed hardware implementations, to build a scalable and reliable network architecture. As a result, VL2 networks can be deployed today, and we have built a working prototype. We evaluate the merits of the VL2 design using measurement, analysis, and experiments. Our VL2 prototype shuffles 2.7 TB of data among 75 servers in 395 seconds - sustaining a rate that is 94% of the maximum possible. <s> BIB004 </s> A survey of fault tolerance in cloud computing <s> System model <s> This paper presents BCube, a new network architecture specifically designed for shipping-container based, modular data centers. At the core of the BCube architecture is its server-centric network structure, where servers with multiple network ports connect to multiple layers of COTS (commodity off-the-shelf) mini-switches. Servers act as not only end hosts, but also relay nodes for each other. BCube supports various bandwidth-intensive applications by speeding-up one-to-one, one-to-several, and one-to-all traffic patterns, and by providing high network capacity for all-to-all traffic. BCube exhibits graceful performance degradation as the server and/or switch failure rate increases. This property is of special importance for shipping-container data centers, since once the container is sealed and operational, it becomes very difficult to repair or replace its components. Our implementation experiences show that BCube can be seamlessly integrated with the TCP/IP protocol stack and BCube packet forwarding can be efficiently implemented in both hardware and software. Experiments in our testbed demonstrate that BCube is fault tolerant and load balancing and it significantly accelerates representative bandwidth-intensive applications. <s> BIB005 </s> A survey of fault tolerance in cloud computing <s> System model <s> Fat trees are a very common communication architecture in current large-scale parallel computers. The probability of failure in these systems increases with the number of components. We present a routing method for deterministically and adaptively routed fat trees, applicable to both distributed and source routing, that is able to handle several concurrent faults and that transparently returns to the original routing strategy once the faulty components have recovered. The method is local and dynamic, completely masking the fault from the rest of the system. It only requires a small extra functionality in the switches to handle rerouting packets around a fault. 
The method guarantees connectedness and deadlock and livelock freedom for up to k -1 benign simultaneous switch and/or link faults where k is half the number of ports in the switches. Our simulation experiments show a graceful degradation of performance as more faults occur. Furthermore, we demonstrate that for most fault combinations, our method will even be able to handle significantly more faults beyond the k -1 limit with high probability. <s> BIB006 </s> A survey of fault tolerance in cloud computing <s> System model <s> The geometrical arrangement of computer resources, remote devices and communication facilities is known as Network structure or Network topology. <s> BIB007 </s> A survey of fault tolerance in cloud computing <s> System model <s> In recent days for computing, distributed computer systems have become very important and popular issue. It delivers high end performance at a low cost. Autonomous computers are connected by means of a communication network in a distributed computing environment which is arranged in a geometrical shape called network topology. In the present paper a detailed study and analysis on network topologies is presented. Definitions of Physical and Logical Topologies are also provided. <s> BIB008 </s> A survey of fault tolerance in cloud computing <s> System model <s> Today's vehicles contain hundreds of circuits, sensors and around 80-120 Electronic Control Units(ECUs). The communication is needed among many circuits and functions of a vehicle. In earlier vehicle systems, this type of communication was handled via a dedicated wire through point-to-point connections. If all possible combinations of switches, sensors, ECUs and other electronic devices in fully featured vehicles are accumulated, the resulting number of connections and dedicated wiring is enormous. Hence networking of these components is necessary to reduce the complexity of electronics inside a vehicle. In-vehicle networking provides a more efficient method for today's complex in-vehicle communications. This paper focuses on the comparison of the performance of ring and star network topologies on the basis of bus load. <s> BIB009 </s> A survey of fault tolerance in cloud computing <s> System model <s> Large-scale data centers enable the new era of cloud computing and provide the core infrastructure to meet the computing and storage requirements for both enterprise information technology needs and cloud-based services. To support the ever-growing cloud computing needs, the number of servers in today’s data centers are increasing exponentially, which in turn leads to enormous challenges in designing an efficient and cost-effective data center network. With data availability and security at stake, the issues with data center networks are more critical than ever. Motivated by these challenges and critical issues, many novel and creative research works have been proposed in recent years. In this paper, we investigate in data center networks and provide a general overview and analysis of the literature covering various research areas, including data center network interconnection architectures, network protocols for data center networks, and network resource sharing in multitenant cloud data centers. We start with an overview on data center networks and together with its requirements navigate the data center network designs. We then present the research literature related to the aforementioned research topics in the subsequent sections. Finally, we draw the conclusions. 
<s> BIB010 </s> A survey of fault tolerance in cloud computing <s> System model <s> Although performance is a key design issue of interconnection networks, fault-tolerance is becoming more important due to the large amount of components of large machines. In this paper, we focus on designing a simple indirect topology with both good performance and fault-tolerance properties. The idea is to take full advantage of the network resources consumed by the topology. To do that, starting from the RUFT topology, which is a simple UMIN topology that does not tolerate any link fault, we first duplicate injection and ejection links connecting these extra links in a particular way. The resulting topology tolerates 3 network link faults and also slightly increases performance with marginal increase in the network hardware cost. Most important, contrary to most of the available topologies, the topology is able to tolerate also faults in the links that connect to end-nodes. We also propose another topology that also duplicates network links, achieving 2x performance improvements and tolerating up to 7 network link faults. These results are better than the ones obtained by a BMIN with a similar amount of resources. <s> BIB011 </s> A survey of fault tolerance in cloud computing <s> System model <s> Data Center Networks (DCNs) are an essential infrastructure that impact the success of cloud computing. A scalable and efficient data center is crucial in both the construction and operation of stable cloud services. In recent years, the growing importance of data center networking has drawn much attention to related issues including connective simplification and service stability. However, existing DCNs lack the necessary agility for multi-tenant demands in the cloud, creating poor responsiveness and limited scalability. In this paper, we present an overview of data center networks for cloud computing and evaluate construction prototypes based on these issues. We provide, specifically, detailed descriptions of several important aspects: the physical architecture, virtualized infrastructure, and DCN routing. Each section of this work discusses and evaluates resolution approaches, and presents the use cases for cloud computing service. In our attempt to build insight relevant to future research, we also present some open research issues. Based on experience gained in both research and industrial trials, the future of data center networking must include careful consideration of the interactions between the important aspects mentioned above. <s> BIB012 </s> A survey of fault tolerance in cloud computing <s> System model <s> Data centers are becoming increasingly popular for their flexibility and processing capabilities in the modern computing environment. They are managed by a single entity (administrator) and allow dynamic resource provisioning, performance optimization as well as efficient utilization of available resources. Each data center consists of massive compute, network and storage resources connected with physical wires. The large scale nature of data centers requires careful planning of compute, storage, network nodes, interconnection as well as inter-communication for their effective and efficient operations. In this paper, we present a comprehensive survey and taxonomy of network topologies either used in commercial data centers, or proposed by researchers working in this space. 
We also compare and evaluate some of those topologies using mininet as well as gem5 simulator for different traffic patterns, based on various metrics including throughput, latency and bisection bandwidth. <s> BIB013 </s> A survey of fault tolerance in cloud computing <s> System model <s> Fat tree topologies have been extensively used as interconnection networks for high performance computing, cluster and data center systems, with their most recent variants able to fairly extend and scale to accommodate higher processing power. While each progressive and evolved fat-tree topology includes some extra advancements, these networks do not fully address all the issues of large scale HPC. We propose a topology called Zoned-Fat tree ( Z -Fat tree,) which is a further extension to the fat trees. The extension relates to the provision of extra degree of connectivity to utilize the extra ports per switches (routing nodes), that are, in some cases, not utilized by the architectural constraints of other variants of fat trees, and hence increases the bisection bandwidth, reduces the latency and supplies additional paths for fault tolerance. To support and profit from the extra links, we propose an adaptive low latency routing for up traffic which is based on a series of leading direction bits predefined at the source; furthermore we suggest a deterministic routing by implementing a dynamic round robin algorithm that overtakes D-mod-K in same cases and guarantees the utilization of all the extra links. We also propose a fault tolerance algorithm, named recoil-and-reroute which makes use of the extra links to ensure higher message delivery even in the presence of faulty links and switches. <s> BIB014
Cloud computing provides various services and scalable computing resources through the internet. On the provider's side, a data center (DC) provides the facility to house computer systems together with their associated components, such as networking, storage, and uninterruptible power supplies BIB010 BIB012 . To provide services to clients, many virtual machines (VMs) run on the physical machines in a cloud DC. These DCs use different types of network topologies, and the fault tolerance approaches used in cloud computing systems depend on the underlying network topology. In this section, we discuss the common network topologies in DCs for the cloud system as well as the reactive and proactive FT approaches used in the cloud.

A network topology is the arrangement of nodes within a network BIB007 . In other words, the topology is the basic building block of the network that connects computer systems to each other. The basic network topologies are the bus, ring, star, mesh, tree, and hybrid topologies (https://www.studytonight.com/computer-networks/network-topology-types) (refer to Table 4).

Bus Topology: - This topology connects all computers and network devices through a single cable and transfers data in one direction. It is very cost effective, used in small networks, and easy to understand and expand. However, if the cable fails, the entire network fails BIB008 . Under heavy network load its performance degrades, the cable length is limited, and it is slower than the ring topology (https://www.studytonight.com/computer-networks/network-topology-types).

Ring Topology: - In this topology, computer systems are connected to each other in a ring structure, where the last device is connected to the first one BIB009 . It is very cheap to install as well as to expand, and transmission is not strongly affected by heavy network traffic. However, the failure of one computer can affect the entire network, network activity is disturbed when computers are added or removed, and troubleshooting is very difficult (https://www.studytonight.com/computer-networks/network-topology-types).

Star Topology: - This topology connects all computer systems to a single hub with the help of cables. The hub acts as the central node and all other nodes are linked to it. This topology provides fast performance and is easy to troubleshoot, set up, and modify. However, it is expensive to use, its installation cost is high, and if the hub fails, the entire network stops working BIB007 BIB009 .

Mesh Topology: - In this topology, all nodes or computer systems are fully linked to each other. It is very robust, failures are easy to diagnose, and it provides privacy and security. However, it is very difficult to install and configure, the cost of cabling is high, and it requires bulk wiring BIB007 BIB008 .

Tree Topology: - This topology contains a root node to which all other computers or nodes are linked (it is also known as a hierarchical topology), and the hierarchy should have at least three levels. It is an extended version of the bus and star topologies, is easy to manage and maintain, and allows errors to be detected easily.
However, it is costly to deploy and heavily cabled (BIB007 ; https://www.studytonight.com/computer-networks/network-topology-types; Santra and Acharjya, 2013).

Hybrid Topology: - This topology is a combination of two or more of the above topologies. It provides reliability, scalability, and flexibility, but its design is complex and its deployment costly (https://www.studytonight.com/computer-networks/network-topology-types; Santra and Acharjya, 2013).

The most commonly used network topologies in the data centers of cloud computing environments are as follows (a sketch that computes the scale of the main ones follows this list):

Fat Tree topology: - This is the most widely used topology for high-performance computing (HPC) and clustering in cloud data centers (DCs). It is a bidirectional multi-stage indirect topology that provides fault tolerance and a good level of performance; however, the hardware it uses is very costly BIB010 BIB003 BIB006 .

RUFT (reduced unidirectional fat tree): - RUFT is a unidirectional MIN (multistage interconnection network) that provides performance similar to the fat tree at a much lower hardware cost. No fault tolerance (FT) approach has made use of this topology BIB011 BIB001 .

RUFT-PL (reduced unidirectional fat tree with parallel links): - RUFT-PL duplicates the network and injection links and distributes network traffic in a balanced way across the dual links to reduce the impact of Head-of-Line (HoL) blocking. It uses the same number of switches as RUFT and the fat tree, but its switches have twice as many unidirectional ports as the RUFT switches (Bermúdez BIB011 ).

FT-RUFT-212 (fault-tolerant RUFT 212): - This topology provides fault tolerance by duplicating the links that connect to and from the end nodes, i.e., it connects the nodes in a planned way that entails only a small increase in hardware cost. It uses the same number of links as RUFT as well as the same connection pattern between switches (Bermúdez BIB011 ).

Z-Fat Tree topology: - The Zoned-Fat tree (Z-Fat tree) is an extension of the fat tree. The extension mainly relates to utilizing the extra ports of each switch by providing an additional degree of connectivity. The main purpose of the Z-Fat tree is to deal with scalability, FT, and routing so as to achieve low latency as well as high bandwidth, and it addresses the optimization problem of building an optimal fat tree topology with minimum complexity BIB014 .

Clos Network topology: - This is a type of multistage network. It provides a non-blocking, multistage switching architecture that reduces the number of ports needed to establish a connection. The network contains three stages (ingress, middle, and egress), each built from a number of crossbar switches BIB010 .

VL2 topology: - This is an agile and very cost-efficient network design, created from switches arranged in a Clos topology. VL2 employs Valiant Load Balancing (VLB) to spread traffic across network paths and uses end-system address resolution to scale to large server pools BIB010 BIB004 (Zhang et al., 2018).

DCell topology: - This topology uses servers with multiple ports and low-end mini-switches to build a recursively defined structure.
Its basic building block, DCell0, contains n servers and a single n-port switch, and every server in a DCell0 is linked to the switch of that same DCell0 (Zhang et al., 2018; BIB013 BIB002 ).

BCube topology: - BCube provides a recursively defined structure that is specifically designed for shipping-container-based modular data centers. Its most basic element, BCube0, is the same as DCell0, i.e., n servers linked to a single n-port switch. To construct a BCube1, n additional switches are used, each linking to one server in every BCube0 BIB010 BIB013 BIB005 .
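To make the scale of these interconnects concrete, the short sketch below computes the standard component counts for a k-ary fat tree, a BCube, and a DCell. The closed-form expressions are the commonly quoted ones for these topologies rather than figures taken from the surveyed papers, so treat this as an illustrative aid only.

```python
def fat_tree_counts(k):
    """k-ary fat tree built from identical k-port switches (k even):
    k pods, each with k/2 edge and k/2 aggregation switches, plus
    (k/2)^2 core switches; each edge switch hosts k/2 servers."""
    half = k // 2
    return {
        "core_switches": half * half,
        "aggregation_switches": k * half,
        "edge_switches": k * half,
        "servers": k * half * half,  # = k^3 / 4
    }

def bcube_servers(n, k):
    """BCube_k built from n-port switches connects n**(k+1) servers;
    BCube_0 is simply n servers on one n-port switch."""
    return n ** (k + 1)

def dcell_servers(n, k):
    """DCell_0 holds n servers; DCell_k is assembled from t_{k-1} + 1
    copies of DCell_{k-1}, so t_k = t_{k-1} * (t_{k-1} + 1)."""
    t = n
    for _ in range(k):
        t = t * (t + 1)
    return t

if __name__ == "__main__":
    print(fat_tree_counts(8))   # 16 core, 32 + 32 pod switches, 128 servers
    print(bcube_servers(8, 1))  # 64 servers
    print(dcell_servers(4, 2))  # 4 -> 20 -> 420 servers over three levels
```

The contrast in growth rates is the point: the fat tree scales polynomially in the switch port count, while the recursive server-centric designs (DCell in particular) grow far faster for the same commodity hardware, which is one reason they are attractive for very large DCs.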
A survey of fault tolerance in cloud computing <s> Reactive approaches <s> This paper deals with decentralized, QoS-aware middleware for checkpointing arrangement in Mobile Grid (MoG) computing systems. Checkpointing is more crucial in MoG systems than in their conventional wired counterparts due to host mobility, dynamicity, less reliable wireless links, frequent disconnections, and variations in mobile systems. We've determined the globally optimal checkpoint arrangement to be NP-complete and so consider Reliability Driven (ReD) middleware, employing decentralized QoS-aware heuristics, to construct superior checkpointing arrangements efficiently. With ReD, an MH (mobile host) simply sends its checkpointed data to one selected neighboring MH, and also serves as a stable point of storage for checkpointed data received from a single approved neighboring MH. ReD works to maximize the probability of checkpointed data recovery during job execution, increasing the likelihood that a distributed application, executed on the MoG, completes without sustaining an unrecoverable failure. It allows collaborative services to be offered practically and autonomously by the MoG. Simulations and actual testbed implementation show ReD's favorable recovery probabilities with respect to Random Checkpointing Arrangement (RCA) middleware, a QoS-blind comparison protocol producing random arbitrary checkpointing arrangements. <s> BIB001 </s> A survey of fault tolerance in cloud computing <s> Reactive approaches <s> The on-demand use, high scalability, and low maintainance cost nature of cloud computing have attracted more and more enterprises to migrate their legacy applications to the cloud environment. Although the cloud platform itself promises high reliability, ensuring high quality of service is still one of the major concerns, since the enterprise applications are usually complicated and consist of a large number of distributed components. Thus, improving the reliability of an application during cloud migration is a challenging and critical research problem. To address this problem, we propose a reliability-based optimization framework, named ROCloud, to improve the application reliability by fault tolerance. ROCloud includes two ranking algorithms. The first algorithm ranks components for the applications that all their components will be migrated to the cloud. The second algorithm ranks components for hybrid applications that only part of their components are migrated to the cloud. Both algorithms employ the application structure information as well as the historical reliability information for component ranking. Based on the ranking result, optimal fault-tolerant strategy will be selected automatically for the most significant components with respect to their predefined constraints. The experimental results show that by refactoring a small number of error-prone components and tolerating faults of the most significant components, the reliability of the application can be greatly improved. <s> BIB002 </s> A survey of fault tolerance in cloud computing <s> Reactive approaches <s> Data deduplication is a technique for eliminating duplicate copies of data, and has been widely used in cloud storage to reduce storage space and upload bandwidth. However, there is only one copy for each file stored in cloud even if such a file is owned by a huge number of users. As a result, deduplication system improves storage utilization while reducing reliability. 
Furthermore, the challenge of privacy for sensitive data also arises when they are outsourced by users to cloud. Aiming to address the above security challenges, this paper makes the first attempt to formalize the notion of distributed reliable deduplication system. We propose new distributed deduplication systems with higher reliability in which the data chunks are distributed across multiple cloud servers. The security requirements of data confidentiality and tag consistency are also achieved by introducing a deterministic secret sharing scheme in distributed storage systems, instead of using convergent encryption as in previous deduplication systems. Security analysis demonstrates that our deduplication systems are secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement the proposed systems and demonstrate that the incurred overhead is very limited in realistic environments. <s> BIB003 </s> A survey of fault tolerance in cloud computing <s> Reactive approaches <s> Cloud computing technology has become an integral trend in the market of information technology. Cloud computing virtualization and its Internet-based lead to various types of failures to occur and thus the need for reliability and availability has become a crucial issue. To ensure cloud reliability and availability, a fault tolerance strategy should be developed and implemented. Most of the early fault tolerant strategies focused on using only one method to tolerate faults. This paper presents an adaptive framework to cope with the problem of fault tolerance in cloud computing environments. The framework employs both replication and checkpointing methods in order to obtain a reliable platform for carrying out customer requests. Also, the algorithm determines the most appropriate fault tolerance method for each selected virtual machine. Simulation experiments are carried out to evaluate the framework's performance. The results of the experiments show that the proposed framework improves the performance of the cloud in terms of throughput, overheads, monetary cost, and availability. <s> BIB004 </s> A survey of fault tolerance in cloud computing <s> Reactive approaches <s> Task clustering has proven to be an effective method to reduce execution overhead and to improve the computational granularity of scientific workflow tasks executing on distributed resources. However, a job composed of multiple tasks may have a higher risk of suffering from failures than a single task job. In this paper, we conduct a theoretical analysis of the impact of transient failures on the runtime performance of scientific workflow executions. We propose a general task failure modeling framework that uses a maximum likelihood estimation-based parameter estimation process to model workflow performance. We further propose three fault-tolerant clustering strategies to improve the runtime performance of workflow executions in faulty execution environments. Experimental results show that failures can have significant impact on executions where task clustering policies are not fault-tolerant, and that our solutions yield makespan improvements in such scenarios. In addition, we propose a dynamic task clustering strategy to optimize the workflow's makespan by dynamically adjusting the clustering granularity when failures arise. A trace-based simulation of five real workflows shows that our dynamic method is able to adapt to unexpected behaviors, and yields better makespans when compared to static methods. 
<s> BIB005 </s> A survey of fault tolerance in cloud computing <s> Reactive approaches <s> With rapid adoption of the cloud computing model, many enterprises have begun deploying cloud-based services. Failures of virtual machines (VMs) in clouds have caused serious quality assurance issues for those services. VM replication is a commonly used technique for enhancing the reliability of cloud services. However, when determining the VM redundancy strategy for a specific service, many state-of-the-art methods ignore the huge network resource consumption issue that could be experienced when the service is in failure recovery mode. This paper proposes a redundant VM placement optimization approach to enhancing the reliability of cloud services. The approach employs three algorithms. The first algorithm selects an appropriate set of VM-hosting servers from a potentially large set of candidate host servers based upon the network topology. The second algorithm determines an optimal strategy to place the primary and backup VMs on the selected host servers with k-fault-tolerance assurance. Lastly, a heuristic is used to address the task-to-VM reassignment optimization problem, which is formulated as finding a maximum weight matching in bipartite graphs. The evaluation results show that the proposed approach outperforms four other representative methods in network resource consumption in the service recovery stage. <s> BIB006 </s> A survey of fault tolerance in cloud computing <s> Reactive approaches <s> Cloud computing is becoming an important solution for providing scalable computing resources via Internet. Because there are tens of thousands of nodes in data center, the probability of server failures is nontrivial. Therefore, it is a critical challenge to guarantee the service reliability. Fault-tolerance strategies, such as checkpoint, are commonly employed. Because of the failure of the edge switches, the checkpoint image may become inaccessible. Therefore, current checkpoint-based fault tolerance method cannot achieve the best effect. In this paper, we propose an optimal checkpoint method with edge switch failure-aware. The edge switch failure-aware checkpoint method includes two algorithms. The first algorithm employs the data center topology and communication characteristic for checkpoint image storage server selection. The second algorithm employs the checkpoint image storage characteristic as well as the data center topology to select the recovery server. Simulation experiments are performed to demonstrate the effectiveness of the proposed method. <s> BIB007 </s> A survey of fault tolerance in cloud computing <s> Reactive approaches <s> Modern day data centers coordinate hundreds of thousands of heterogeneous tasks and aim at delivering highly reliable cloud computing services. Although offering equal reliability to all users benefits everyone at the same time, users may find such an approach either inadequate or too expensive to fit their individual requirements, which may vary dramatically. In this paper, we propose a novel method for providing elastic reliability optimization in cloud computing. Our scheme makes use of peer-to-peer checkpointing and allows user reliability levels to be jointly optimized based on an assessment of their individual requirements and total available resources in the data center. We show that the joint optimization can be efficiently solved by a distributed algorithm using dual decomposition. 
The solution improves resource utilization and presents an additional source of revenue to data center operators. Our validation results suggest a significant improvement of reliability over existing schemes. <s> BIB008 </s> A survey of fault tolerance in cloud computing <s> Reactive approaches <s> Resubmission and replication are two fundamental and widely recognized techniques in distributed computing systems for fault tolerance. The resubmission based strategy has an advantage in resource utilization, while the replication based strategy can reduce the task completed time in the context of fault. However, few researches take these two techniques together for fault-tolerant workflow scheduling, especially in Cloud systems. In this paper, we present a novel fault-tolerant workflow scheduling (ICFWS) algorithm for Cloud systems by combining the aforementioned two strategies together to play their respective advantages for fault tolerance while trying to meet the soft deadline of workflow. First, it divides the soft deadline of workflow into multiple sub-deadlines for all tasks. Then, it selects a reasonable fault-tolerant strategy and reserves suitable resource for each task by taking the imbalance sub-deadlines among tasks and on-demand resource provisioning of Cloud systems into consideration. Finally, an online scheduling and reservation adjustment scheme is designed to select a suitable resource for the task with resubmission strategy and adjust the sub-deadlines as well as fault-tolerant strategies of some unexecuted tasks during the task execution process, respectively. The proposed algorithm is evaluated on both real-world and randomly generated workflows. The results demonstrate that the ICFWS outperforms some well-known approaches on corresponding metrics. <s> BIB009
Some proposed models and frameworks based on reactive fault tolerance are as follows:

BIB006 proposed the OPVMP (optimal redundant virtual machine placement) model, developed to improve the reliability of server-based cloud services using a replication-based fault tolerance method. The approach consists of three steps: selection of the host servers, optimal VM placement, and the recovery strategy decision. A heuristic algorithm is used both for selecting appropriate host servers and for the optimal placement of VMs. Experiments verifying the benefits of the approach were performed in the CloudSim simulator, and the results were compared with five other existing models; the proposed approach used fewer network resources than the other algorithms.

BIB004 developed an adaptive framework to handle fault-related issues in the cloud environment. The framework employs both the checkpointing and the replication fault tolerance techniques to obtain a highly reliable platform for serving client requests. It consists of two algorithms: one selects the VMs that serve the consumers' requests, and the other decides which FT method (replication or checkpointing) to apply to each selected VM. The framework's performance was evaluated in terms of throughput, overhead, availability, and monetary cost, with CloudSim used for the simulation, and was compared with the existing OCI (optimal checkpoint) algorithm. The adaptive nature of the proposed framework improved the performance of the cloud environment compared with the existing algorithms.

BIB007 proposed the EDCKP (edge switch failure-aware checkpointing) model, designed to enhance service reliability in the cloud computing system. A fat tree network topology is assumed, and two algorithms were proposed to address edge switch failures: one selects the storage server for the checkpoint image, and the other selects the recovery server. The proposed model was compared with existing models such as NOCKP (a no-checkpoint method) and NDCKP (a network-topology-aware distributed delta-checkpoint technique). The simulation results illustrated that EDCKP reduces the total execution time and consumes fewer network resources, thereby achieving better service reliability.

BIB001 presented the ReD (Reliability Driven) approach for FT in mobile grid (MoG) environments. ReD enables a mobile host to transmit its checkpointed data to one selected neighbouring mobile host, which in turn serves as a stable storage point for the checkpoints received from a single approved neighbouring mobile host. ReD is used to increase the likelihood of recovering checkpointed data, thereby maximizing the probability that a distributed application executing on the MoG completes without sustaining an unrecoverable failure. In other words, the ReD middleware enables the mobile grid scheduler to make informed decisions by selectively submitting work segments to the hosts with the best checkpointing arrangements, to help guarantee successful completion. ReD was evaluated in both simulator and test-bed environments, and its outcomes were compared with the RCA (Random Checkpointing Arrangement) algorithm.
As a result, the stability control enhancements to ReD produce the greatest payoff in smaller wireless areas, resulting in superior average reliability and, accordingly, fewer break messages. The outcome is practical and useful because, in ReD, every host can determine its neighbourhood density indirectly during message exchange among its neighbours.

BIB008 presented the JCSR (Joint Checkpoint Scheduling and Routing) method to provide flexible reliability optimization in the cloud environment. A peer-to-peer checkpointing method is used that allows user reliability levels to be jointly optimized based on an assessment of their individual requirements and the total resources available in the data center. A distributed algorithm was designed to solve the joint optimization via dual decomposition; the solution improves resource utilization and presents an additional source of revenue to data center operators. The validation results demonstrated a significant improvement in reliability over existing methods. The JCSR heuristic, together with peer-to-peer checkpointing, is used to find a sub-optimal solution to the joint checkpoint scheduling and routing problem by leveraging the dual decomposition process, with the aim of addressing each individual user's optimization problem.

BIB005 presented a task failure modelling framework, in which a theoretical analysis of runtime performance was conducted to understand the impact of transient failures (failures that occur for short periods) on the execution of scientific workflows. To improve a workflow's runtime performance, three fault-tolerant clustering strategies were proposed: the selective re-clustering algorithm (a new clustered job retains only the failed tasks of a clustered job), the dynamic re-clustering algorithm (the granularity, i.e., the size of the cluster or the number of tasks inside a job, is adjusted dynamically), and the vertical re-clustering algorithm (clustered jobs are divided into smaller jobs, reducing the job granularity, and then retried). The WorkflowSim tool was used for the simulation.

BIB002 presented the ROCloud (reliability-based optimization) framework, which enhances application reliability through fault tolerance and is used when migrating legacy applications to the cloud. The framework has three parts: in the first part, the legacy application is analysed; in the second, the important components are ranked; and in the last, an optimal FT method is chosen automatically. Two algorithms were proposed for ranking the components of both ordinary applications (all components of the application migrate to the cloud) and hybrid applications (only some of their components migrate to the cloud). The importance value of every component is computed from the structure of the application, the relationships between component invocations, the failure rates of the components, and the effects of failures. A tool developed in the C++ language was used for the simulation.

BIB003 presented the KNF (k-out-of-n reliability) framework, which deals with the issues of both energy efficiency and fault tolerance. The framework fragments data at the time it is stored by a node.
Other nodes are then able to fetch the data reliably with minimal energy consumption, and mobile nodes are permitted to process the distributed data; the energy consumed in processing the data is also reduced. This framework was evaluated in MATLAB.

BIB009 presented the ICFWS (fault-tolerant workflow scheduling) algorithm, which combines the benefits of both resubmission- and replication-based FT. First, the ICFWS algorithm partitions the soft deadline of the workflow into numerous sub-deadlines. An FT policy (replication or resubmission) is then selected for each task, and all tasks are scheduled for their initial execution, with backup copies of the tasks retained by the replication method. After that, an online reservation and adjustment method fine-tunes the sub-deadlines of any unexecuted tasks during the task execution process. The main features of the discussed models and frameworks for FT are summarized in Table 5.
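Several of the frameworks above must decide between checkpointing and replication for a given VM and, when checkpointing, must also pick a checkpoint frequency. As a rough, hypothetical illustration of that trade-off (not the decision rule of any cited framework), the sketch below combines Young's classical first-order approximation of the optimal checkpoint interval, T_opt = sqrt(2 * C * MTBF), with an assumed threshold for falling back to replication when failures are too frequent for checkpointing to pay off.

```python
import math

def young_interval(checkpoint_cost_s, mtbf_s):
    """Young's first-order approximation of the optimal checkpoint
    interval: T_opt = sqrt(2 * C * MTBF), with C the time taken to
    write one checkpoint and MTBF the mean time between failures."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

def choose_strategy(mtbf_s, checkpoint_cost_s, min_useful_interval_s=600.0):
    """Hypothetical selector: when failures are so frequent that the
    optimal interval falls below a useful minimum, replicate the VM
    instead of checkpointing it (an assumed rule, for illustration)."""
    t_opt = young_interval(checkpoint_cost_s, mtbf_s)
    if t_opt < min_useful_interval_s:
        return "replication", None
    return "checkpointing", t_opt

if __name__ == "__main__":
    # Stable VM: 24 h MTBF, 60 s checkpoint cost ->
    # T_opt = sqrt(2 * 60 * 86400) ~ 3220 s, i.e. checkpoint roughly hourly.
    print(choose_strategy(mtbf_s=86400, checkpoint_cost_s=60))
    # Flaky VM: 20 min MTBF -> T_opt ~ 379 s, below the threshold,
    # so the selector falls back to replication.
    print(choose_strategy(mtbf_s=1200, checkpoint_cost_s=60))
```

In a real system the MTBF estimate would come from monitored failure logs, which is precisely the kind of historical data that the proactive, learning-based approaches discussed next try to exploit.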
A survey of fault tolerance in cloud computing <s> Other miscellaneous approaches used for FT <s> The field of metaheuristics for the application to combinatorial optimization problems is a rapidly growing field of research. This is due to the importance of combinatorial optimization problems for the scientific as well as the industrial world. We give a survey of the nowadays most important metaheuristics from a conceptual point of view. We outline the different components and concepts that are used in the different metaheuristics in order to analyze their similarities and differences. Two very important concepts in metaheuristics are intensification and diversification. These are the two forces that largely determine the behavior of a metaheuristic. They are in some way contrary but also complementary to each other. We introduce a framework, that we call the I&D frame, in order to put different intensification and diversification components into relation with each other. Outlining the advantages and disadvantages of different metaheuristic approaches we conclude by pointing out the importance of hybridization of metaheuristics as well as the integration of metaheuristics and other methods for optimization. <s> BIB001 </s> A survey of fault tolerance in cloud computing <s> Other miscellaneous approaches used for FT <s> This paper surveys work from the field of machine learning on the problem of within-network learning and inference. To give motivation and context to the rest of the survey, we start by presenting some (published) applications of within-network inference. After a brief formulation of this problem and a discussion of probabilistic inference in arbitrary networks, we survey machine learning work applied to networked data, along with some important predecessors--mostly from the statistics and pattern recognition literature. We then describe an application of within-network inference in the domain of suspicion scoring in social networks. We close the paper with pointers to toolkits and benchmark data sets used in machine learning research on classification in network data. We hope that such a survey will be a useful resource to workshop participants, and perhaps will be complemented by others. <s> BIB002 </s> A survey of fault tolerance in cloud computing <s> Other miscellaneous approaches used for FT <s> Metaheuristics are general algorithmic frameworks, often nature-inspired, designed to solve complex optimization problems, and they are a growing research area since a few decades. In recent years, metaheuristics are emerging as successful alternatives to more classical approaches also for solving optimization problems that include in their mathematical formulation uncertain, stochastic, and dynamic information. In this paper metaheuristics such as Ant Colony Optimization, Evolutionary Computation, Simulated Annealing, Tabu Search and others are introduced, and their applications to the class of Stochastic Combinatorial Optimization Problems (SCOPs) is thoroughly reviewed. Issues common to all metaheuristics, open problems, and possible directions of research are proposed and discussed. 
In this survey, the reader familiar to metaheuristics finds also pointers to classical algorithmic approaches to optimization under uncertainty, and useful informations to start working on this problem domain, while the reader new to metaheuristics should find a good tutorial in those metaheuristics that are currently being applied to optimization under uncertainty, and motivations for interest in this field. <s> BIB003 </s> A survey of fault tolerance in cloud computing <s> Other miscellaneous approaches used for FT <s> A fault-tolerant control method combining fault diagnosis and fault-tolerant control is proposed for sensor faults. A BP neural network based on Particle Swarm Optimization algorithm is used to estimate system states and fault parameters of the constructed model for sensor faults. The estimated fault parameters are processed by the modified Bayes classification algorithm to achieve sensor faults diagnosis, separation and estimation on-line, and sensor faults are described as "equivalent bias" vectors to realize fault-tolerant control by compensation algorithm. Simulation results for continuous stirred tank reactor (CSTR) show good convergence of the approach and strong fault-tolerant ability for sensor faults. <s> BIB004 </s> A survey of fault tolerance in cloud computing <s> Other miscellaneous approaches used for FT <s> As the popularity of cloud computing increases, more and more applications are migrated onto them. Web 2.0 applications are the most common example of such applications. These applications require to scale, be highly available, fault tolerant and able to run uninterrupted for long periods of time (or even indefinitely). Moreover as new cloud providers appear there is a natural tendency towards choosing the best provider or a combination of them for deploying the application. Thus multi-cloud scenarios emerge from this situation. However, as multi-cloud resource provisioning is both complex and costly, the choice of which resources to lend and how to allocate them to application components needs to rely on efficient strategies. These need to take into account many factors including deployment and run-time cost, resource load, and application availability in case of failures. For this aim multi-objective scheduling algorithms seem an appropriate choice. This paper presents an algorithm which tries to achieve application high-availability and fault-tolerance while reducing the application cost and keeping the resource load maximized. The proposed algorithm is compared with a classic Round Robin strategy -- used by many commercial clouds -- and the obtained results prove the efficiency of our solution. <s> BIB005 </s> A survey of fault tolerance in cloud computing <s> Other miscellaneous approaches used for FT <s> Data check pointing is an important fault tolerance technique in High Performance Computing (HPC) systems. As the HPC systems move towards exascale, the storage space and time costs of check pointing threaten to overwhelm not only the simulation but also the post-simulation data analysis. One common practice to address this problem is to apply compression algorithms to reduce the data size. However, traditional lossless compression techniques that look for repeated patterns are ineffective for scientific data in which high-precision data is used and hence common patterns are rare to find. 
This paper exploits the fact that in many scientific applications, the relative changes in data values from one simulation iteration to the next are not very significantly different from each other. Thus, capturing the distribution of relative changes in data instead of storing the data itself allows us to incorporate the temporal dimension of the data and learn the evolving distribution of the changes. We show that an order of magnitude data reduction becomes achievable within guaranteed user-defined error bounds for each data point. We propose NUMARCK, North western University Machine learning Algorithm for Resiliency and Check pointing, that makes use of the emerging distributions of data changes between consecutive simulation iterations and encodes them into an indexing space that can be concisely represented. We evaluate NUMARCK using two production scientific simulations, FLASH and CMIP5, and demonstrate a superior performance in terms of compression ratio and compression accuracy. More importantly, our algorithm allows users to specify the maximum tolerable error on a per point basis, while compressing the data by an order of magnitude. <s> BIB006
Some miscellaneous approaches have also been integrated with fault tolerance methods in cloud systems to enhance their performance and to make the systems robust. They are discussed as follows:

Machine learning based approaches: - Machine learning is an application of artificial intelligence which enables systems to learn automatically and to improve from experience without any explicit programming. Its main motive is to develop computer programs which can access data and then use that data to learn BIB006 BIB002 . Machine learning techniques have also been used to develop fault tolerance methods that enhance service reliability. In particular, machine learning is employed in proactive fault tolerance methods, where a failure needs to be predicted, before it occurs in the system, from the system's historical data (Leam, xxxx; BIB004 ). Some machine learning based algorithms are described in Table 6, and a toy failure-prediction sketch is given below.

Meta-heuristics: - A meta-heuristic is a higher-level heuristic designed to create, find, or choose a heuristic. It is mainly used to direct the search procedure, with the objective of examining the search space effectively to locate near-optimal solutions. Meta-heuristic approaches range from simple local search procedures to complex learning procedures BIB005 . Meta-heuristic algorithms approximate the outcome and are, in general, non-deterministic. They are usually not considered problem specific and have therefore been used for the efficient solution of multiple optimization problems . The most commonly used meta-heuristic algorithms are the ACO (ant colony optimization), PSO (particle swarm optimization), and SCO (social cognitive optimization) algorithms, among others BIB001 BIB003 . Meta-heuristic algorithms can be applied in many distinct areas, such as function optimization, fuzzy system control, application scheduling, cloud computing, image processing, clustering, data mining, the training of artificial neural networks, and many more.
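The following toy sketch illustrates the proactive, learning-based idea in the simplest possible terms: a k-nearest-neighbour classifier (one of the algorithms listed in Table 6) is trained on hypothetical host telemetry to flag machines that are likely to fail, so that their VMs can be migrated pre-emptively. The feature set, the data, and the decision behaviour are all invented for illustration and are not taken from any surveyed system.

```python
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical per-host telemetry collected over time:
# [cpu_temperature_C, reallocated_disk_sectors, ecc_errors_per_day]
history = [
    [55, 0, 0], [60, 2, 1], [58, 0, 0], [85, 40, 12],
    [90, 55, 20], [62, 1, 0], [88, 30, 9], [57, 0, 1],
]
# Observed outcome for each record: 1 = the host failed within a week.
failed_within_week = [0, 0, 0, 1, 1, 0, 1, 0]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(history, failed_within_week)

# Classify a currently running host; a positive prediction would trigger
# a proactive action such as live migration of its VMs.
current_host = [[83, 25, 7]]
if model.predict(current_host)[0] == 1:
    print("host at risk: schedule proactive VM migration")
else:
    print("host looks healthy")
```

The same skeleton applies to the other classifiers in Table 6: only the model class changes, while the telemetry-in, migration-decision-out structure is what characterizes proactive FT.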
A survey of fault tolerance in cloud computing <s> Proposed Framework/Model <s> This paper deals with decentralized, QoS-aware middleware for checkpointing arrangement in Mobile Grid (MoG) computing systems. Checkpointing is more crucial in MoG systems than in their conventional wired counterparts due to host mobility, dynamicity, less reliable wireless links, frequent disconnections, and variations in mobile systems. We've determined the globally optimal checkpoint arrangement to be NP-complete and so consider Reliability Driven (ReD) middleware, employing decentralized QoS-aware heuristics, to construct superior checkpointing arrangements efficiently. With ReD, an MH (mobile host) simply sends its checkpointed data to one selected neighboring MH, and also serves as a stable point of storage for checkpointed data received from a single approved neighboring MH. ReD works to maximize the probability of checkpointed data recovery during job execution, increasing the likelihood that a distributed application, executed on the MoG, completes without sustaining an unrecoverable failure. It allows collaborative services to be offered practically and autonomously by the MoG. Simulations and actual testbed implementation show ReD's favorable recovery probabilities with respect to Random Checkpointing Arrangement (RCA) middleware, a QoS-blind comparison protocol producing random arbitrary checkpointing arrangements. <s> BIB001 </s> A survey of fault tolerance in cloud computing <s> Proposed Framework/Model <s> The on-demand use, high scalability, and low maintainance cost nature of cloud computing have attracted more and more enterprises to migrate their legacy applications to the cloud environment. Although the cloud platform itself promises high reliability, ensuring high quality of service is still one of the major concerns, since the enterprise applications are usually complicated and consist of a large number of distributed components. Thus, improving the reliability of an application during cloud migration is a challenging and critical research problem. To address this problem, we propose a reliability-based optimization framework, named ROCloud, to improve the application reliability by fault tolerance. ROCloud includes two ranking algorithms. The first algorithm ranks components for the applications that all their components will be migrated to the cloud. The second algorithm ranks components for hybrid applications that only part of their components are migrated to the cloud. Both algorithms employ the application structure information as well as the historical reliability information for component ranking. Based on the ranking result, optimal fault-tolerant strategy will be selected automatically for the most significant components with respect to their predefined constraints. The experimental results show that by refactoring a small number of error-prone components and tolerating faults of the most significant components, the reliability of the application can be greatly improved. <s> BIB002 </s> A survey of fault tolerance in cloud computing <s> Proposed Framework/Model <s> Data deduplication is a technique for eliminating duplicate copies of data, and has been widely used in cloud storage to reduce storage space and upload bandwidth. However, there is only one copy for each file stored in cloud even if such a file is owned by a huge number of users. As a result, deduplication system improves storage utilization while reducing reliability. 
Furthermore, the challenge of privacy for sensitive data also arises when they are outsourced by users to cloud. Aiming to address the above security challenges, this paper makes the first attempt to formalize the notion of distributed reliable deduplication system. We propose new distributed deduplication systems with higher reliability in which the data chunks are distributed across multiple cloud servers. The security requirements of data confidentiality and tag consistency are also achieved by introducing a deterministic secret sharing scheme in distributed storage systems, instead of using convergent encryption as in previous deduplication systems. Security analysis demonstrates that our deduplication systems are secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement the proposed systems and demonstrate that the incurred overhead is very limited in realistic environments. <s> BIB003 </s> A survey of fault tolerance in cloud computing <s> Proposed Framework/Model <s> Cloud computing technology has become an integral trend in the market of information technology. Cloud computing virtualization and its Internet-based lead to various types of failures to occur and thus the need for reliability and availability has become a crucial issue. To ensure cloud reliability and availability, a fault tolerance strategy should be developed and implemented. Most of the early fault tolerant strategies focused on using only one method to tolerate faults. This paper presents an adaptive framework to cope with the problem of fault tolerance in cloud computing environments. The framework employs both replication and checkpointing methods in order to obtain a reliable platform for carrying out customer requests. Also, the algorithm determines the most appropriate fault tolerance method for each selected virtual machine. Simulation experiments are carried out to evaluate the framework's performance. The results of the experiments show that the proposed framework improves the performance of the cloud in terms of throughput, overheads, monetary cost, and availability. <s> BIB004 </s> A survey of fault tolerance in cloud computing <s> Proposed Framework/Model <s> Cloud computing is becoming an important solution for providing scalable computing resources via Internet. Because there are tens of thousands of nodes in data center, the probability of server failures is nontrivial. Therefore, it is a critical challenge to guarantee the service reliability. Fault-tolerance strategies, such as checkpoint, are commonly employed. Because of the failure of the edge switches, the checkpoint image may become inaccessible. Therefore, current checkpoint-based fault tolerance method cannot achieve the best effect. In this paper, we propose an optimal checkpoint method with edge switch failure-aware. The edge switch failure-aware checkpoint method includes two algorithms. The first algorithm employs the data center topology and communication characteristic for checkpoint image storage server selection. The second algorithm employs the checkpoint image storage characteristic as well as the data center topology to select the recovery server. Simulation experiments are performed to demonstrate the effectiveness of the proposed method. <s> BIB005 </s> A survey of fault tolerance in cloud computing <s> Proposed Framework/Model <s> Modern day data centers coordinate hundreds of thousands of heterogeneous tasks and aim at delivering highly reliable cloud computing services. 
Although offering equal reliability to all users benefits everyone at the same time, users may find such an approach either inadequate or too expensive to fit their individual requirements, which may vary dramatically. In this paper, we propose a novel method for providing elastic reliability optimization in cloud computing. Our scheme makes use of peer-to-peer checkpointing and allows user reliability levels to be jointly optimized based on an assessment of their individual requirements and total available resources in the data center. We show that the joint optimization can be efficiently solved by a distributed algorithm using dual decomposition. The solution improves resource utilization and presents an additional source of revenue to data center operators. Our validation results suggest a significant improvement of reliability over existing schemes. <s> BIB006 </s> A survey of fault tolerance in cloud computing <s> Proposed Framework/Model <s> Resubmission and replication are two fundamental and widely recognized techniques in distributed computing systems for fault tolerance. The resubmission based strategy has an advantage in resource utilization, while the replication based strategy can reduce the task completed time in the context of fault. However, few researches take these two techniques together for fault-tolerant workflow scheduling, especially in Cloud systems. In this paper, we present a novel fault-tolerant workflow scheduling (ICFWS) algorithm for Cloud systems by combining the aforementioned two strategies together to play their respective advantages for fault tolerance while trying to meet the soft deadline of workflow. First, it divides the soft deadline of workflow into multiple sub-deadlines for all tasks. Then, it selects a reasonable fault-tolerant strategy and reserves suitable resource for each task by taking the imbalance sub-deadlines among tasks and on-demand resource provisioning of Cloud systems into consideration. Finally, an online scheduling and reservation adjustment scheme is designed to select a suitable resource for the task with resubmission strategy and adjust the sub-deadlines as well as fault-tolerant strategies of some unexecuted tasks during the task execution process, respectively. The proposed algorithm is evaluated on both real-world and randomly generated workflows. The results demonstrate that the ICFWS outperforms some well-known approaches on corresponding metrics. <s> BIB007 </s> A survey of fault tolerance in cloud computing <s> Proposed Framework/Model <s> The large-scale utilization of cloud computing services for hosting industrial/enterprise applications has led to the emergence of cloud service reliability as an important issue for both cloud service providers and users. To enhance cloud service reliability, two types of fault tolerance schemes, reactive and proactive, have been proposed. Existing schemes rarely consider the problem of coordination among multiple virtual machines (VMs) that jointly complete a parallel application. Without VM coordination, the parallel application execution results will be incorrect. To overcome this problem, we first propose an initial virtual cluster allocation algorithm according to the VM characteristics to reduce the total network resource consumption and total energy consumption in the data center. Then, we model CPU temperature to anticipate a deteriorating physical machine (PM). We migrate VMs from a detected deteriorating PM to some optimal PMs. 
Finally, the selection of the optimal target PMs is modeled as an optimization problem that is solved using an improved particle swarm optimization algorithm. We evaluate our approach against five related approaches in terms of the overall transmission overhead, overall network resource consumption, and total execution time while executing a set of parallel applications. Experimental results demonstrate the efficiency and effectiveness of our approach. <s> BIB008
Table 5 Summary of the proposed models/frameworks based on proactive and reactive fault tolerance.

Proposed Framework/Model | Fault tolerance Approach | Use of Fat-Tree Topology
OPVMP | Reactive | Yes
PCFT BIB008 | Proactive | Yes
Adaptive Framework BIB004 | Reactive | No
EDCKP BIB005 | Reactive | Yes
ReD BIB001 | Reactive | Yes
JCSR BIB006 | Reactive | Yes
ROCloud BIB002 | Reactive | No
KNF framework BIB003 | Reactive | No
SVM-Grid (Zhang et al., 2018) | Proactive | No
ICFWS BIB007 | Reactive | No

Table 6 Summary of Machine Learning algorithms.
A survey of fault tolerance in cloud computing <s> Algorithms Descriptions <s> A fault-tolerant control method combining fault diagnosis and fault-tolerant control is proposed for sensor faults. A BP neural network based on Particle Swarm Optimization algorithm is used to estimate system states and fault parameters of the constructed model for sensor faults. The estimated fault parameters are processed by the modified Bayes classification algorithm to achieve sensor faults diagnosis, separation and estimation on-line, and sensor faults are described as "equivalent bias" vectors to realize fault-tolerant control by compensation algorithm. Simulation results for continuous stirred tank reactor (CSTR) show good convergence of the approach and strong fault-tolerant ability for sensor faults. <s> BIB001 </s> A survey of fault tolerance in cloud computing <s> Algorithms Descriptions <s> Network on Chip (NOC) approach has appeared as a solution for on-chip communications to allow integrating various processors and on-chip memories into a single chip. During the field operation of NOC, the latent hard faults technologically advanced in First In First Out buffers (FIFO). By using on line test techniques, that faults have been spotted. In this technique, the test has to be repeated occasionally to avoid the gathering of faults. Also design router architecture with shared queues (RoShaQ) among input ports which exploits the buffer operation by permitting the sharing multiple buffer queues. The efficient use of shared buffers achieves highest possible throughput when the network load becomes heavy. when the lighter traffic load can achieves low latency by allowing packets to effectively bypass these shared queues. For load balancing in faulty networks the ant colony optimization-based fault-aware routing (ACO-FAR) algorithm is propose. The best probabilistic technique is an ant colony optimization algorithm (ACO) for solving computational problems. This technique can be reduced to result a good and shortest paths through graphs. For responding an obstacle implements the three mechanisms as: 1) fault notification; 2) search the path; and 3) select the path. <s> BIB002 </s> A survey of fault tolerance in cloud computing <s> Algorithms Descriptions <s> Denial of service (DOS) attacks are a serious threat to network security. These attacks are often sourced from virtual machines in the cloud, rather than from the attacker's own machine, to achieve anonymity and higher network bandwidth. Past research focused on analyzing traffic on the destination (victim's) side with predefined thresholds. These approaches have significant disadvantages. They are only passive defenses after the attack, they cannot use the outbound statistical features of attacks, and it is hard to trace back to the attacker with these approaches. In this paper, we propose a DOS attack detection system on the source side in the cloud, based on machine learning techniques. This system leverages statistical information from both the cloud server's hypervisor and the virtual machines, to prevent network packages from being sent out to the outside network. We evaluate nine machine learning algorithms and carefully compare their performance. Our experimental results show that more than 99.7% of four kinds of DOS attacks are successfully detected. Our approach does not degrade performance and can be easily extended to broader DOS attacks. 
<s> BIB003 </s> A survey of fault tolerance in cloud computing <s> Algorithms Descriptions <s> Energy awareness presents an immense challenge for cloud computing infrastructure and the development of next generation data centers. Inefficient resource utilization is one of the greatest causes of energy consumption in data center operations. To address this problem we introduce an Advanced Reinforcement Learning Consolidation Agent (ARLCA) capable of optimizing the distribution of virtual machines across the data center for improved resource management. Determining efficient policies in dynamic environments can be a difficult task, however the proposed Reinforcement Learning (RL) approach learns optimal behaviour in the absence of complete knowledge due to its innate ability to reason under uncertainty. Using real workload data we evaluate our algorithm against a state-of-the-art heuristic, our model shows a significant improvement in energy consumption while also reducing the number of service violations. <s> BIB004 </s> A survey of fault tolerance in cloud computing <s> Algorithms Descriptions <s> The large-scale utilization of cloud computing services for hosting industrial/enterprise applications has led to the emergence of cloud service reliability as an important issue for both cloud service providers and users. To enhance cloud service reliability, two types of fault tolerance schemes, reactive and proactive, have been proposed. Existing schemes rarely consider the problem of coordination among multiple virtual machines (VMs) that jointly complete a parallel application. Without VM coordination, the parallel application execution results will be incorrect. To overcome this problem, we first propose an initial virtual cluster allocation algorithm according to the VM characteristics to reduce the total network resource consumption and total energy consumption in the data center. Then, we model CPU temperature to anticipate a deteriorating physical machine (PM). We migrate VMs from a detected deteriorating PM to some optimal PMs. Finally, the selection of the optimal target PMs is modeled as an optimization problem that is solved using an improved particle swarm optimization algorithm. We evaluate our approach against five related approaches in terms of the overall transmission overhead, overall network resource consumption, and total execution time while executing a set of parallel applications. Experimental results demonstrate the efficiency and effectiveness of our approach. <s> BIB005
Neural Network (NN): NN is a supervised learning approach. With an NN, a computer system is modelled to work like the human brain or nervous system. An NN works by generating connections among processing elements; the processing elements are non-linear and are interconnected through adjustable weights. The outcome is determined by the pattern of connections and their weights.

KNN (K-Nearest Neighbour): KNN is a form of supervised learning. The algorithm stores the available cases and categorizes new cases based on a similarity measure called a distance function (see the short sketch below). It is used to solve both classification and regression predictive problems. The approach has a low calculation time, easily interpreted output, and good predictive power.

SVM (Support Vector Machines) BIB003: SVMs are a kind of supervised learning. The aim of the SVM algorithm is to find a hyperplane in N-dimensional space (where N stands for the number of features) that clearly separates the data points. The algorithm is commonly used in image classification, text and hypertext classification, handwritten text/character recognition, and so on.

Reinforcement Learning (RL) BIB004: RL is a type of ML algorithm that permits machines and software agents to automatically determine the best behaviour in a specific situation so as to maximize performance. It is goal-oriented learning based on interaction with the environment, and it consists of two components, an agent and an environment, where the environment refers to the object the agent acts on.

Meta-heuristic algorithms have been applied to scheduling applications, cloud computing, image processing, clustering, data mining, the training of artificial neural networks, and many more areas. These algorithms have also been applied to improve service reliability, i.e., meta-heuristic algorithms have been integrated with fault tolerance approaches to make a system reliable or enhance its performance (Jialei BIB005 BIB001 BIB002). Some meta-heuristic based algorithms are described in Table 7.

Clustering: In order to implement parallel processing applications in enterprises, clustering is used in the cloud, whereby several computers, VMs (virtual machines), and servers are tightly or loosely connected to work jointly and appear as a single system to their users (known as a computer cluster). Cluster computing is also used for HPC (high-performance computing). However, the long-term trend in HPC demands increasing numbers of nodes inside the parallel computing platform and thus involves a higher probability of failure. To reduce this failure probability, various clustering approaches are available, which can be used with fault tolerance approaches to make the system reliable and robust.
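To make the distance-based classification behind KNN in Table 6 concrete, here is a minimal Python sketch applying it to a toy proactive fault-prediction task; the feature choices, toy records, and the scikit-learn dependency are illustrative assumptions, not part of any surveyed system.

```python
# Minimal sketch: KNN as a (hypothetical) proactive fault classifier.
# Assumes scikit-learn; the features and toy records are invented for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical machine-health records: [CPU temperature (C), memory usage (%)]
X_train = np.array([
    [55, 40], [60, 50], [58, 45],   # machines that stayed healthy
    [85, 90], [90, 95], [88, 85],   # machines that later failed
])
y_train = np.array([0, 0, 0, 1, 1, 1])  # 0 = healthy, 1 = failure-prone

# A new case is labelled by majority vote among its k nearest stored cases,
# with Euclidean distance as the similarity measure described in Table 6.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

print(knn.predict(np.array([[87, 88]])))  # -> [1]: candidate for proactive action
```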
A survey of fault tolerance in cloud computing <s> Algorithms Description <s> Cloud computing environments facilitate applications by providing virtualized resources that can be provisioned dynamically. However, users are charged on a pay-per-use basis. User applications may incur large data retrieval and execution costs when they are scheduled taking into account only the ‘execution time’. In addition to optimizing execution time, the cost arising from data transfers between resources as well as execution costs must also be taken into account. In this paper, we present a particle swarm optimization (PSO) based heuristic to schedule applications to cloud resources that takes into account both computation cost and data transmission cost. We experiment with a workflow application by varying its computation and communication costs. We compare the cost savings when using PSO and existing ‘Best Resource Selection’ (BRS) algorithm. Our results show that PSO can achieve: a) as much as 3 times cost savings as compared to BRS, and b) good distribution of workload onto resources. <s> BIB001 </s> A survey of fault tolerance in cloud computing <s> Algorithms Description <s> There are a mass of researches on the issue of scheduling in cloud computing, most of them, however, are bout workflow and job scheduling. There are fewer researches on service flow scheduling. Here we propose a model of service flow scheduling with various quality of service (QoS) requirements in cloud computing firstly, then we adopt the use of an ant colony optimization (ACO) algorithm to optimize service flow scheduling. In our model, we use default rate to describe the radio of cloud service provider breaking service level agreement (SLA), and also introduce an SLA monitoring module to monitor the running state of cloud services. <s> BIB002 </s> A survey of fault tolerance in cloud computing <s> Algorithms Description <s> This SpringerBrief presents a survey of data center network designs and topologies and compares several properties in order to highlight their advantages and disadvantages. The brief also explores several routing protocols designed for these topologies and compares the basic algorithms to establish connections, the techniques used to gain better performance, and the mechanisms for fault-tolerance. Readers will be equipped to understand how current research on data center networks enables the design of future architectures that can improve performance and dependability of data centers. This concise brief is designed for researchers and practitioners working on data center networks, comparative topologies, fault tolerance routing, and data center management systems. The context provided and information on future directions will also prove valuable for students interested in these topics. <s> BIB003 </s> A survey of fault tolerance in cloud computing <s> Algorithms Description <s> Nature-inspired metaheuristic algorithms, especially those based on swarm intelligence, have attracted much attention in the last ten years. Firefly algorithm appeared in about five years ago, its literature has expanded dramatically with diverse applications. In this paper, we will briefly review the fundamentals of firefly algorithm together with a selection of recent publications. Then, we discuss the optimality associated with balancing exploration and exploitation, which is essential for all metaheuristic algorithms. 
By comparing with intermittent search strategy, we conclude that metaheuristics such as firefly algorithm are better than the optimal intermittent search strategy. We also analyse algorithms and their implications for higher-dimensional optimization problems. <s> BIB004 </s> A survey of fault tolerance in cloud computing <s> Algorithms Description <s> Purpose: The objective of this study is to optimize task scheduling and resource allocation using an improved differential evolution algorithm (IDEA) based on the proposed cost and time models on cloud computing environment. Methods: The proposed IDEA combines the Taguchi method and a differential evolution algorithm (DEA). The DEA has a powerful global exploration capability on macro-space and uses fewer control parameters. The systematic reasoning ability of the Taguchi method is used to exploit the better individuals on micro-space to be potential offspring. Therefore, the proposed IDEA is well enhanced and balanced on exploration and exploitation. The proposed cost model includes the processing and receiving cost. In addition, the time model incorporates receiving, processing, and waiting time. The multi-objective optimization approach, which is the non-dominated sorting technique, not with normalized single-objective method, is applied to find the Pareto front of total cost and makespan. Results: In the five-task five-resource problem, the mean coverage ratios C(IDEA, DEA) of 0.368 and C(IDEA, NSGA-II) of 0.3 are superior to the ratios C(DEA, IDEA) of 0.249 and C(NSGA-II, IDEA) of 0.288, respectively. In the ten-task ten-resource problem, the mean coverage ratios C(IDEA, DEA) of 0.506 and C(IDEA, NSGA-II) of 0.701 are superior to the ratios C(DEA, IDEA) of 0.286 and C(NSGA-II, IDEA) of 0.052, respectively. Wilcoxon matched-pairs signed-rank test confirms there is a significant difference between IDEA and the other methods. In summary, the above experimental results confirm that the IDEA outperforms both the DEA and NSGA-II in finding the better Pareto-optimal solutions. Conclusions: In the study, the IDEA shows its effectiveness to optimize task scheduling and resource allocation compared with both the DEA and the NSGA-II. Moreover, for decision makers, the Gantt charts of task scheduling in terms of having smaller makespan, cost, and both can be selected to make their decision when conflicting objectives are present. <s> BIB005 </s> A survey of fault tolerance in cloud computing <s> Algorithms Description <s> Resource scheduling management design on Cloud computing is an important problem. Scheduling model, cost, quality of service, time, and conditions of the request for access to services are factors to be focused. A good task scheduler should adapt its scheduling strategy to the changing environment and load balancing Cloud task scheduling policy. Therefore, in this paper, Artificial Bee Colony (ABC) is applied to optimize the scheduling of Virtual Machine (VM) on Cloud computing. The main contribution of work is to analyze the difference of VM load balancing algorithm and to reduce the makespan of data processing time. The scheduling strategy was simulated using CloudSim tools. Experimental results indicated that the combination of the proposed ABC algorithm, scheduling based on the size of tasks, and the Longest Job First (LJF) scheduling algorithm performed a good performance scheduling strategy in changing environment and balancing work load which can reduce the makespan of data processing time. 
<s> BIB006 </s> A survey of fault tolerance in cloud computing <s> Algorithms Description <s> Scientific workflows are used to model applications of high throughput computation and complex large scale data analysis. In recent years, Cloud computing is fast evolving as the target platform for such applications among researchers. Furthermore, new pricing models have been pioneered by Cloud providers that allow users to provision resources and to use them in an efficient manner with significant cost reductions. In this paper, we propose a scheduling algorithm that schedules tasks on Cloud resources using two different pricing models (spot and on-demand instances) to reduce the cost of execution whilst meeting the workflow deadline. The proposed algorithm is fault tolerant against the premature termination of spot instances and also robust against performance variations of Cloud resources. Experimental results demonstrate that our heuristic reduces up to 70% execution cost as against using only on-demand instances. <s> BIB007 </s> A survey of fault tolerance in cloud computing <s> Algorithms Description <s> Data deduplication is a technique for eliminating duplicate copies of data, and has been widely used in cloud storage to reduce storage space and upload bandwidth. However, there is only one copy for each file stored in cloud even if such a file is owned by a huge number of users. As a result, deduplication system improves storage utilization while reducing reliability. Furthermore, the challenge of privacy for sensitive data also arises when they are outsourced by users to cloud. Aiming to address the above security challenges, this paper makes the first attempt to formalize the notion of distributed reliable deduplication system. We propose new distributed deduplication systems with higher reliability in which the data chunks are distributed across multiple cloud servers. The security requirements of data confidentiality and tag consistency are also achieved by introducing a deterministic secret sharing scheme in distributed storage systems, instead of using convergent encryption as in previous deduplication systems. Security analysis demonstrates that our deduplication systems are secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement the proposed systems and demonstrate that the incurred overhead is very limited in realistic environments. <s> BIB008 </s> A survey of fault tolerance in cloud computing <s> Algorithms Description <s> Nowadays, Cloud computing is widely used in companies and enterprises. However, there are some challenges in using Cloud computing. The main challenge is resource management, where Cloud computing provides IT resources (e.g., CPU, Memory, Network, Storage, etc.) based on virtualization concept and pay-as-you-go principle. The management of these resources has been a topic of much research. In this paper, a task scheduling algorithm based on Genetic Algorithm (GA) has been introduced for allocating and executing an application’s tasks. The aim of this proposed algorithm is to minimize the completion time and cost of tasks, and maximize resource utilization. The performance of this proposed algorithm has been evaluated using CloudSim toolkit. 
<s> BIB009 </s> A survey of fault tolerance in cloud computing <s> Algorithms Description <s> As Cloud computing is reforming the infrastructure of IT industries, it has become one of the critical security concerns of the defensive mechanisms applied to secure Cloud environment. Even if there are tremendous advancements in defense systems regarding the confidentiality, authentication and access control, there is still a challenge to provide security against availability of associated resources. Denial-of-service (DoS) attack and distributed denial-of-service (DDoS) attack can primarily compromise availability of the system services and can be easily started by using various tools, leading to financial damage or affecting the reputation. These attacks are very difficult to detect and filter, since packets that cause the attack are very much similar to legitimate traffic. DoS attack is considered as the biggest threat to IT industry, and intensity, size and frequency of the attack are observed to be increasing every year. Therefore, there is a need for stronger and universal method to impede these attacks. In this paper, we present an overview of DoS attack and distributed DoS attack that can be carried out in Cloud environment and possible defensive mechanisms, tools and devices. In addition, we discuss many open issues and challenges in defending Cloud environment against DoS attack. This provides better understanding of the DDoS attack problem in Cloud computing environment, current solution space, and future research scope to deal with such attacks efficiently. <s> BIB010 </s> A survey of fault tolerance in cloud computing <s> Algorithms Description <s> Provisioning fault-tolerant scheduling in computational grid is a challenging task. Most of the existing fault tolerant scheduling schemes are either geared toward proactive or reactive. Proactive schemes emphasize on the reasons responsible for generating faults, whereas reactive mechanisms come into effect after failure detection. Unlike most existing mechanisms, we present a novel, dynamic, adaptive, and hybrid fault-tolerant scheduling scheme based on proactive and reactive approaches. In the proactive approach, the resource filtration algorithm picks resources based on resource location, availability, and reliability. Unlike most existing schemes, which rely on remotely connected resources, the proposed algorithm prefers to employ locally available resources as they might have less failure tendency. To cope with the frequent turnover problem, the proposed scheme calculates resource availability time based on various newly identified parameters (e.g., mean time between availability) and picks highly available nodes for task execution. Resource reliability is an indispensable consideration in the proposed scheme and is calculated based on parameters such as jobs success or failure ratio and the types of failures encountered. We employ an optimal resource identification algorithm to determine and select optimal resources for job execution. The performance of the proposed scheme is validated through the GridSim toolkit. Compared with contemporary approaches, experimental results demonstrate the effectiveness and efficiency of the proposed scheme in terms of various performance metrics, such as wall clock time, throughput, waiting and turnaround time, number of checkpoints, and energy consumption. 
<s> BIB011 </s> A survey of fault tolerance in cloud computing <s> Algorithms Description <s> Cloud-computing is an enticing topic; it developed very quickly. Numerous huge cloud organizations, alike as Amazon, Yahoo, Google, offer many cloud-services and have many users. Cloud-computing is an Internet-based computing approach, where the software, resources and the applications are shared between many-to-many computing devices. Cloud Load-balancing (CLB) takes the wealth of the cloud's scalability and the physically to meet rerouted workload and to improve overall availability. In the addition of tasks goods and traffic distribution, CLB technology provides fitness checks to the cloud applications. We used GA approach to handling the LB in cloud-computing. Our proposed work is more appropriate than the current techniques work, as we executed the cloudlets in less time and performing the load-balancing in more profitability. <s> BIB012 </s> A survey of fault tolerance in cloud computing <s> Algorithms Description <s> Abstract Cuckoo search is one of many nature-inspired algorithms used extensively to solve optimisation problems in different fields of engineering. It is a very effective in solving global optimisation because it is able to maintain balance between local and global random walks using switching parameter. The switching parameter for the original Cuckoo search algorithm is fixed at 25% and not enough studies have been done to assess the impact of dynamic switching parameter on the performance of Cuckoo search algorithm. This paper’s contribution is the development of three new Cuckoo search algorithms based on dynamically increasing switching parameters. The three new Cuckoo search algorithms are validated on ten mathematical test functions and their results compared to those of Cuckoo search algorithms with constant and dynamically decreasing switching parameters respectively. Finally, the simulations in this study indicate that, the Cuckoo search algorithm with exponentially increasing switching parameter outperformed the other Cuckoo search algorithms. <s> BIB013
Ant Colony Optimization (ACO) BIB002: An optimization method proposed in the early 1990s by Marco Dorigo. ACO is a heuristic-based, multi-agent optimization method inspired by biological systems and is used for solving complex combinatorial optimization problems.

Particle Swarm Optimization (PSO) BIB001: A self-adaptive and robust global search-based optimization approach developed by Russell Eberhart and James Kennedy in 1995. The algorithm was developed for population-based stochastic optimization and simulates the social behaviour of fish schooling or bird flocking. PSO is easy to implement, as only a few parameters need to be adjusted to obtain a near-optimal result (see the short sketch below).

Artificial Bee Colony (ABC) BIB006 BIB003: The ABC algorithm is inspired by the intelligent behaviour of honey bees. It is a swarm-based meta-heuristic method that provides a population-based search procedure and has been used to solve many distinct types of problems. The main motive of a bee is to determine the locations of food sources with a high nectar amount. The algorithm comprises three kinds of bees: employed bees (which search for food around the food sources held in memory and then share information about the food sources with the onlooker bees), onlooker bees (which pick good food sources from among those found by the employed bees), and scout bees (a few former employed bees that abandon their food sources and look for new ones).

Cuckoo Search Algorithm (CSA) BIB013 (https://www.slideshare.net/AnujaJoshi6/cuckoo-optimization-ppt): An optimization-based algorithm defined in 2009. The algorithm is inspired by the obligate brood parasitism of a few cuckoo species, which lay their eggs in the nests of other host birds. It can also be used to resolve multi-criteria optimization issues and has been applied in many distinct fields such as engineering optimization, stability analysis, reliability problems, and many more.

Bee Colony Optimization (BCO) (https://www.researchgate.net/publication/259383112_Honey_Bees_Inspired_Optimization_Method_The_Bees_Algorithm; Yuce et al., 2015): To deal with combinatorial optimization problems, this algorithm uses two passes, forward and backward. In the forward pass, a partial solution is produced through individual search and collective knowledge, and this solution is subsequently employed in the backward pass. In the backward pass, likelihood information is used to decide whether to continue exploring the present solution in a subsequent forward pass or to begin searching the neighbourhood of newly selected ones.

Firefly Algorithm (FA) (https://www.slideshare.net/supriyashilwant/firefly-algorithm-49723859; BIB004): A meta-heuristic optimization procedure inspired by fireflies in nature. The algorithm is used to solve highly non-linear as well as multi-modal optimization problems efficiently. Its speed of convergence is high, and it has been integrated with other optimization approaches to build hybrid tools.

Genetic Algorithm (GA) BIB011 BIB012 BIB009: Genetic algorithms are a meta-heuristic inspired by the process of natural selection. The algorithm involves five main components: initial population (the process starts with a group of individuals called the population), fitness function (which determines the capability of an individual to compete with other individuals, i.e., how fit an individual is), selection (the fittest individuals are selected and their genes are passed on to the next generation), crossover (for every pair of parents to be mated, a crossover point is selected randomly from the genes, representing mating among individuals), and mutation (some genes are subjected to mutation with a low random probability, i.e., some bits in the bit string are flipped).

Differential Evolution (DE) BIB005: A stochastic, population-based optimization method developed by Storn and Price. The DE algorithm is a type of evolutionary programming used to resolve optimization problems over continuous domains, where every variable is represented by a real number. The algorithm has a simple structure and provides robustness.

Cloud-based services are shared by millions of consumers/users. Due to the distributed nature of the cloud, various security issues may occur. The most common security issues that may impact a cloud user are related to data, availability, authentication and authorization, privileged user access, etc. BIB008. A few commonly occurring attacks are listed in Table 8. Fault tolerance approaches can be used effectively to minimize the effect of these attacks. The FT methods can be applied at three levels:

At hardware level: if an attack on a hardware resource causes a system failure, its effect can be compensated by using additional hardware resources.
At software (s/w) level: fault tolerance techniques such as checkpoint-restart and recovery methods can be used to continue system execution in the event of failures caused by security attacks.
At system level: fault tolerance measures can compensate for failures in system amenities and guarantee the availability of the network and other resources.

Therefore, it is imperative that the viewpoints of security and fault tolerance be aligned with respect to cloud computing systems. Only then will cloud computing services be able to gain the confidence of individual users as well as enterprises with regard to their data and computations. The integration of fault tolerance and security methods should impose a minimal overhead on system performance.

B. Workflow scheduling: The cloud system is widely used by researchers and scientists for the implementation of scientific workflows that perform data analysis and high-throughput computing. Workflows enable easy definition of computational components, data, and their dependencies in a declarative manner. They enable automatic execution, improve the performance of an application, and decrease the amount of time needed to obtain scientific outcomes BIB007.

DDoS attack (Table 8): A Distributed Denial of Service attack can affect the availability or accessibility of cloud services because of the cloud's multi-tenant architecture. It is launched from numerous compromised systems with two key motives BIB010: to overpower the resources of the server, such as CPU time or network bandwidth, so that genuine users cannot access the resource; and to hide the identity of the malicious users or attackers.
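As noted in the PSO entry of Table 7, the following is a minimal Python sketch of particle swarm optimization minimizing a toy objective; the swarm size, inertia and acceleration coefficients, and the sphere function are illustrative assumptions, not parameters taken from the surveyed works.

```python
# Minimal PSO sketch: minimize f(x) = sum(x^2) (toy objective, illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_particles, dim, iters = 20, 2, 100
w, c1, c2 = 0.7, 1.5, 1.5           # inertia and acceleration coefficients (typical values)

def f(x):                           # objective: the "sphere" function
    return np.sum(x**2, axis=1)

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()                  # each particle's best-known position
pbest_val = f(pos)
gbest = pbest[np.argmin(pbest_val)] # swarm's best-known position

for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    # Velocity update: inertia + pull toward personal best + pull toward global best.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = f(pos)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print(gbest, f(gbest[None, :])[0])  # near-optimal solution close to the origin
```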
A survey of fault tolerance in cloud computing <s> SYNFlood attack <s> With the increasingly ubiquitous nature of Social networks and Cloud computing, users are starting to explore new ways to interact with, and exploit these developing paradigms. Social networks are used to reflect real world relationships that allow users to share information and form connections between one another, essentially creating dynamic Virtual Organizations. We propose leveraging the pre-established trust formed through friend relationships within a Social network to form a dynamic“Social Cloud”, enabling friends to share resources within the context of a Social network. We believe that combining trust relationships with suitable incentive mechanisms (through financial payments or bartering) could provide much more sustainable resource sharing mechanisms. This paper outlines our vision of, and experiences with, creating a Social Storage Cloud, looking specifically at possible market mechanisms that could be used to create a dynamic Cloud infrastructure in a Social network environment. <s> BIB001 </s> A survey of fault tolerance in cloud computing <s> SYNFlood attack <s> Botnets are prevailing mechanisms for the facilitation of the distributed denial of service (DDoS) attacks on computer networks or applications. Currently, Botnet-based DDoS attacks on the application layer are latest and most problematic trends in network security threats. Botnet-based DDoS attacks on the application layer limits resources, curtails revenue, and yields customer dissatisfaction, among others. DDoS attacks are among the most difficult problems to resolve online, especially, when the target is the Web server. In this paper, we present a comprehensive study to show the danger of Botnet-based DDoS attacks on application layer, especially on the Web server and the increased incidents of such attacks that has evidently increased recently. Botnetbased DDoS attacks incidents and revenue losses of famous companies and government websites are also described. This provides better understanding of the problem, current solution space, and future research scope to defend against such attacks efficiently. <s> BIB002 </s> A survey of fault tolerance in cloud computing <s> SYNFlood attack <s> Simulation is one of the most popular evaluation methods in scientific workflow studies. However, existing workflow simulators fail to provide a framework that takes into consideration heterogeneous system overheads and failures. They also lack the support for widely used workflow optimization techniques such as task clustering. In this paper, we introduce WorkflowSim, which extends the existing CloudSim simulator by providing a higher layer of workflow management. We also indicate that to ignore system overheads and failures in simulating scientific workflows could cause significant inaccuracies in the predicted workflow runtime. To further validate its value in promoting other research work, we introduce two promising research areas for which WorkflowSim provides a unique and effective evaluation platform. <s> BIB003 </s> A survey of fault tolerance in cloud computing <s> SYNFlood attack <s> Cloud computing is a model for enabling convenient, ubiquitous and on-demand network access to a shared pool of configurable computing resources (e.g. storage, applications, and networks) that can be provisioned with minimal management effort. 
Despite all these benefits, the sharing of resources with other users is a challenge, cloud providers do not commonly facilitate users in sharing their dedicated resources with others. In developing countries it is often too expensive for people to acquire a virtual machine of their own. Users may therefore wish to manage costs and increase computational resource usage by sharing their instances with others. Sadly it is not easy to do this at present. Social networks provide a structure that allows users to interact and share resources (e.g. pictures and videos) on the basis of a trustworthy relationship (e.g. Friendship). This paper highlights a Cloud Resource Bartering model (CRB-model) for sharing user's computational resources through a social network. In our approach we have linked a social network with the computational cloud to create a social cloud (SC) so that users can share their part of the cloud with their social community. A prototype system has been deployed on a social network by using the bartering resource trading mechanism. It is anticipated that this may help users to share their dedicated resources without the need for money changing hands and different communities. <s> BIB004 </s> A survey of fault tolerance in cloud computing <s> SYNFlood attack <s> Safety and reliability are important in the cloud computing environment. This is especially true today as distributed denial-of-service (DDoS) attacks constitute one of the largest threats faced by Internet users and cloud computing services. DDoS attacks target the resources of these services, lowering their ability to provide optimum usage of the network infrastructure. Due to the nature of cloud computing, the methodologies for preventing or stopping DDoS attacks are quite different compared to those used in traditional networks. In this paper, we investigate the effect of DDoS attacks on cloud resources and recommend practical defense mechanisms against different types of DDoS attacks in the cloud environment. <s> BIB005 </s> A survey of fault tolerance in cloud computing <s> SYNFlood attack <s> This article presents a novel framework XSS-Secure, which detects and alleviates the propagation of Cross-Site Scripting (XSS) worms from the Online Social Network (OSN)-based multimedia web applications on the cloud environment. It operates in two modes: training and detection mode. The former mode sanitizes the extracted untrusted variables of JavaScript code in a context-aware manner. This mode stores such sanitized code in sanitizer snapshot repository and OSN web server for further instrumentation in the detection mode. The detection mode compares the sanitized HTTP response (HRES) generated at the OSN web server with the sanitized response stored at the sanitizer snapshot repository. Any variation observed in this HRES message will indicate the injection of XSS worms from the remote OSN servers. XSS-Secure determines the context of such worms, perform the context-aware sanitization on them and finally sanitized HRES is transmitted to the OSN user. The prototype of our framework was developed in Java and integrated its components on the virtual machines of cloud environment. The detection and alleviation capability of our cloud-based framework was tested on the platforms of real world multimedia-based web applications including the OSN-based Web applications. 
Experimental outcomes reveal that our framework is capable enough to mitigate the dissemination of XSS worm from the platforms of non-OSN Web applications as well as OSN web sites with acceptable false negative and false positive rate. <s> BIB006 </s> A survey of fault tolerance in cloud computing <s> SYNFlood attack <s> Task clustering has proven to be an effective method to reduce execution overhead and to improve the computational granularity of scientific workflow tasks executing on distributed resources. However, a job composed of multiple tasks may have a higher risk of suffering from failures than a single task job. In this paper, we conduct a theoretical analysis of the impact of transient failures on the runtime performance of scientific workflow executions. We propose a general task failure modeling framework that uses a maximum likelihood estimation-based parameter estimation process to model workflow performance. We further propose three fault-tolerant clustering strategies to improve the runtime performance of workflow executions in faulty execution environments. Experimental results show that failures can have significant impact on executions where task clustering policies are not fault-tolerant, and that our solutions yield makespan improvements in such scenarios. In addition, we propose a dynamic task clustering strategy to optimize the workflow's makespan by dynamically adjusting the clustering granularity when failures arise. A trace-based simulation of five real workflows shows that our dynamic method is able to adapt to unexpected behaviors, and yields better makespans when compared to static methods. <s> BIB007 </s> A survey of fault tolerance in cloud computing <s> SYNFlood attack <s> Fault tolerance (FT) is essential in many Internet of Things (IoT) applications, in particular in the domains such as medical devices and automotive systems where a single fault in the system can lead to serious consequences. Non-volatile memory (NVM), on the other hand, is commonly used to improve system reliability due to its unique properties to retain data even if the power supply is lost. However, one of the most important drawbacks of NVM is that it imposes significant overhead regarding timing and energy. In this paper, we have proposed a unique technique with the use of NVM to create FT application specific architecture with almost no timing overhead and low energy overhead. We address the implementation of applications that are specified using synchronous data flow model of computation. We combine the use of NVM and classical CMOS transistors so that NVM judiciously stores selected complete states of the pertinent program. It allows the program to resume from the saved state in NVM when faults occur. The frequency of the state selection can be flexibly adjusted for an arbitrarily specified FT timing/energy overhead. Moreover, to find an optimal state selection (with low overhead), we have applied an improved min-cut max-flow algorithm. On a variety of typical benchmarks, the simulation results indicate that our approach incurs only a small overhead over lower bounds. It is also generic in a sense that it can be applied to a wide spectrum of underlying IoT architectures and computational models. 
<s> BIB008 </s> A survey of fault tolerance in cloud computing <s> SYNFlood attack <s> Recently, due to the increasing popularity of enjoying various multimedia services on mobile devices (e.g., smartphones, ipads, and electronic tablets), the generated mobile data traffic has been explosively growing and has become a serve burden on mobile network operators. To address such a serious challenge in mobile networks, an effective approach is to manage data traffic by using complementary technologies (e.g., small cell network, WiFi network, and so on) to achieve mobile data offloading. In this paper, we discuss the recent advances in the techniques of mobile data offloading. Particularly, based on the initiator diversity of data offloading, we classify the existing mobile data offloading technologies into four categories, i.e., data offloading through small cell networks, data offloading through WiFi networks, data offloading through opportunistic mobile networks, and data offloading through heterogeneous networks. Besides, we show a detailed taxonomy of the related mobile data offloading technologies by discussing the pros and cons for various offloading technologies for different problems in mobile networks. Finally, we outline some opening research issues and challenges, which can provide guidelines for future research work. <s> BIB009
SYN flood attack: This attack occurs when the attacker transmits many packets to the server but does not complete the three-way handshake procedure. In this circumstance, the server waits to finish processing all of those packets, which prevents it from processing valid requests. SYN flooding can also be carried out by transferring packets through a spoofed IP address BIB005. It affects IaaS and PaaS.

Botnet-based attack: Esraa Alomari et al. BIB002 presented a comprehensive study on Botnet-based DDoS attacks and their effect on the application layer, particularly on the web server. A Botnet-based DDoS attack on the application layer restricts resources, curtails revenue, and produces consumer dissatisfaction, among other effects. The motives of this attack are:
a. To inflict damage on the victim side.
b. A hidden, personal goal, i.e., to block the available computing resources or reduce the performance of the services required by a destination machine; such attacks are carried out for revenge.
c. To gain popularity in the hacker community. Attacks of this type may also be performed for material gain, i.e., to breach privacy and use the available data/information of others.
Strayer et al. (2008) presented a method to detect botnet attacks by examining flow characteristics, i.e., packet timing, burst duration, and bandwidth, for indications of botnet command-and-control activity. The authors also created an architecture that first removes traffic that is unlikely to be part of a botnet, then classifies the remaining traffic into groups that are likely to be part of a botnet, and finally correlates the probable traffic patterns to find the shared communications that would suggest botnet activity.

XSS (Cross-Site Scripting) attack: In a cloud environment, XSS attacks are generally found on the multimedia web applications of online social networks (OSNs) such as Facebook, LinkedIn, and Twitter. In an XSS attack, the attacker injects untrusted JavaScript code into the OSN web server, stealing the user's login credentials and other sensitive information such as session tokens and/or financial account data. An XSS attack can lead to harsh consequences such as cookie theft, account hijacking, misinformation, DoS (Denial-of-Service) attacks, etc. To deal with this attack, BIB006 proposed the XSS-Secure framework, which detects and mitigates the proliferation of XSS worms from OSN web applications in the cloud system. The developed framework operates in two modes:
Training mode: used to create the secure sanitized JavaScript (JS) code that is encapsulated in the templates of the web pages.
Detection mode: discrepancies in the injected sanitizers are detected in this mode, which also perceives the injection of malicious variables as well as links to JavaScript (JS) code.
The developed framework has been implemented in the Java language, and its components are integrated on the VMs of the cloud system.

Returning to workflow scheduling: novel pricing models and frameworks established by cloud service providers permit users to provision the available resources and to make use of those resources in an efficient way with major cost reductions. Significant price reductions are accompanied by poorer QoS, however, which makes such resources less efficient/reliable and susceptible to failures. To deal with failure, i.e., to make the system reliable, fault tolerance approaches are considered. The most frequently used fault tolerance method in workflows is checkpointing, which can tolerate an instance failure as well as decrease the execution cost BIB007 BIB003.
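Because checkpointing is the workhorse fault tolerance method named above, here is a minimal Python sketch of the checkpoint-restart idea for a long-running task; the file name, state layout, and checkpoint interval are illustrative assumptions, not any surveyed system's API.

```python
# Minimal checkpoint-restart sketch (illustrative, not a surveyed system's API).
import json
import os

CHECKPOINT_FILE = "task_state.json"   # hypothetical checkpoint location

def load_checkpoint():
    """Resume from the last saved state, or start fresh if none exists."""
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as fh:
            return json.load(fh)
    return {"next_item": 0, "partial_sum": 0}

def save_checkpoint(state):
    """Persist progress so a restart after failure loses little work."""
    with open(CHECKPOINT_FILE, "w") as fh:
        json.dump(state, fh)

def run(items, checkpoint_every=100):
    state = load_checkpoint()
    for i in range(state["next_item"], len(items)):
        state["partial_sum"] += items[i]      # the "work" of this task
        state["next_item"] = i + 1
        if state["next_item"] % checkpoint_every == 0:
            save_checkpoint(state)            # periodic checkpoint
    if os.path.exists(CHECKPOINT_FILE):
        os.remove(CHECKPOINT_FILE)            # clean up once the task completes
    return state["partial_sum"]

# If the process (or its VM) fails mid-run, re-invoking run() resumes from the
# last checkpoint instead of recomputing from item 0.
print(run(list(range(1000))))
```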
C. Mobile data offloading: Offloading is the process used to decrease the amount of data carried over cellular bands and to free up bandwidth for other end users. Data offloading refers to the offloading or migration of data or traffic to cloud servers. Mobile data offloading methods are categorized as offloading through small cell networks, through WiFi networks, through opportunistic mobile networks, and through heterogeneous networks BIB009. Offloading can also be done using cloud computing and virtualization techniques, which allow mobile devices to migrate the computational part of an application to powerful cloud servers. As mobile devices are portable, a mobile's unstable network connectivity can affect the offloading decision. To deal with this problem, fault tolerance approaches can be applied effectively.

D. Internet of Things (IoT) applications: IoT has facilitated a number of applications such as smart homes, intelligent automotive, agricultural and industrial applications, and many more. IoT applications have imposed new requirements on FT (fault tolerance) and energy management. In some applications, such as airplane control systems, a single fault can be very dangerous and life-threatening. FT is likewise critical in many other IoT applications, in domains such as medical devices and automotive systems, where a single fault can lead to serious consequences BIB008. Therefore, it is very important to detect and correct faults in IoT applications, i.e., FT is required.

E. Social Network (SN): These represent networks between individuals or organizations using some medium to share interests, thoughts, and activities. SNs make use of internet-based online media to establish contacts with friends, family, customers, and clients. An SN can exist for business purposes, social purposes, or both, using sites like Facebook, Twitter, LinkedIn, etc. SNs are also an important target field for marketers seeking to engage users BIB001. Nowadays, social networks have also been integrated with cloud computing, because the cloud enables convenient, pervasive, and on-demand access to network resources such as applications and storage, which can be provisioned with very nominal management cost. In other words, the cloud provides many sustainable resources to the social network in many different areas such as online marketing, news, jobs, chat, and the sharing of pictures, audio, and video. As the cloud environment is very prone to failure, fault tolerance approaches need to be used with the social cloud to ensure reliable communication among users BIB004.
A survey of fault tolerance in cloud computing <s> Deep learning <s> In order to manage the resources in cloud efficiently, ensure the performance of cloud services and reduce the power consumption, it is critical to predict the workload of virtual machines (VM) accurately. In this paper, a new approach for VM workload prediction based on deep learning was proposed. A deep learning prediction model was designed with a deep belief network (DBN) composed of multiple-layered restricted Boltzmann machines (RBMs) and a regression layer. The DBN is used to extract the high level features from all VMs workload data and the regression layer is used to predict the workload of the VMs in the future. With little prior knowledge, DBN could learn the features efficiently for the VM workload prediction in an unsupervised fashion. Experimental results show that the proposed approach improves the workload prediction performance compared with other widely used workload prediction approaches. <s> BIB001 </s> A survey of fault tolerance in cloud computing <s> Deep learning <s> Cloud computing has become an attractive computing paradigm in both academia and industry. Through virtualization technology, Cloud Service Providers (CSPs) that own data centers can structure physical servers into Virtual Machines (VMs) to provide services, resources, and infrastructures to users. Profit-driven CSPs charge users for service access and VM rental, and reduce power consumption and electric bills so as to increase profit margin. The key challenge faced by CSPs is data center energy cost minimization. Prior works proposed various algorithms to reduce energy cost through Resource Provisioning (RP) and/or Task Scheduling (TS). However, they have scalability issues or do not consider TS with task dependencies, which is a crucial factor that ensures correct parallel execution of tasks. This paper presents DRL-Cloud, a novel Deep Reinforcement Learning (DRL)-based RP and TS system, to minimize energy cost for large-scale CSPs with very large number of servers that receive enormous numbers of user requests per day. A deep Q-learning-based two-stage RP-TS processor is designed to automatically generate the best long-term decisions by learning from the changing environment such as user request patterns and realistic electric price. With training techniques such as target network, experience replay, and exploration and exploitation, the proposed DRL-Cloud achieves remarkably high energy cost efficiency, low reject rate as well as low runtime with fast convergence. Compared with one of the state-of-the-art energy efficient algorithms, the proposed DRL-Cloud achieves up to 320% energy cost efficiency improvement while maintaining lower reject rate on average. For an example CSP setup with 5,000 servers and 200,000 tasks, compared to a fast round-robin baseline, the proposed DRL-Cloud achieves up to 144% runtime reduction. <s> BIB002
Today, cloud computing has become the most attractive computing model in both academia and industry. A cloud service provider that owns a data center (DC) can use virtualization to structure physical machines (PMs) into virtual machines and offer resources, infrastructure, and services to users in the form of a utility. The main problems faced by cloud service providers are task scheduling, prediction of virtual machine workload, resource provisioning, security issues, and minimizing energy cost. Several different methods have been developed to deal with these issues separately, but these methods consume more power and time and provide results of lower accuracy and quality. Each of these problems can be modelled as an optimization problem and solved more effectively using a deep learning approach. Deep learning (DL) is a subclass of ML that learns features directly from data. When the volume of data grows large, classical ML techniques become less appropriate in terms of performance; in such cases, DL has been found to provide better accuracy BIB001. DL uses various layers of non-linear processing elements (hidden layers) for feature extraction as well as transformation; every consecutive layer uses the output of the previous layer as its input. DL approaches have been applied in many areas such as speech and audio recognition, NLP (natural language processing), computer vision, social network filtering, etc. BIB002 (Zhang et al., 2018). It is envisaged that deep learning approaches can also be integrated with fault tolerance methods to predict faults and take the corresponding preventive measures.
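As a much-simplified stand-in for the DBN-based workload predictor of BIB001, the sketch below trains a small neural network on a sliding window of past CPU utilization to predict the next value; the window length, network size, and synthetic workload trace are all illustrative assumptions.

```python
# Sliding-window VM workload prediction with a small neural net (illustrative;
# the surveyed approach BIB001 uses a deep belief network on real traces).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Synthetic CPU-utilization trace: a daily-like cycle plus noise (toy data).
t = np.arange(2000)
cpu = 50 + 30 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 3, t.size)

# Build (window of past values -> next value) training pairs from the trace.
window = 12
X = np.array([cpu[i:i + window] for i in range(len(cpu) - window)])
y = cpu[window:]

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X[:-200], y[:-200])                 # hold out the last 200 steps

pred = model.predict(X[-200:])
mae = np.mean(np.abs(pred - y[-200:]))
print(f"mean absolute error on held-out steps: {mae:.2f} percentage points")
```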
A survey of fault tolerance in cloud computing <s> Blockchain <s> The blockchain technology has emerged as an attractive solution to address performance and security issues in distributed systems. Blockchain's public and distributed peer-to-peer ledger capability benefits cloud computing services which require functions such as, assured data provenance, auditing, management of digital assets, and distributed consensus. Blockchain's underlying consensus mechanism allows to build a tamper-proof environment, where transactions on any digital assets are verified by set of authentic participants or miners. With use of strong cryptographic methods, blocks of transactions are chained together to enable immutability on the records. However, achieving consensus demands computational power from the miners in exchange of handsome reward. Therefore, greedy miners always try to exploit the system by augmenting their mining power. In this paper, we first discuss blockchain's capability in providing assured data provenance in cloud and present vulnerabilities in blockchain cloud. We model the block withholding (BWH) attack in a blockchain cloud considering distinct pool reward mechanisms. BWH attack provides rogue miner ample resources in the blockchain cloud for disrupting honest miners' mining efforts, which was verified through simulations. <s> BIB001 </s> A survey of fault tolerance in cloud computing <s> Blockchain <s> Blockchain has drawn attention as the next-generation financial technology due to its security that suits the informatization era. In particular, it provides security through the authentication of peers that share virtual cash, encryption, and the generation of hash value. According to the global financial industry, the market for security-based blockchain technology is expected to grow to about USD 20 billion by 2020. In addition, blockchain can be applied beyond the Internet of Things (IoT) environment; its applications are expected to expand. Cloud computing has been dramatically adopted in all IT environments for its efficiency and availability. In this paper, we discuss the concept of blockchain technology and its hot research trends. In addition, we will study how to adapt blockchain security to cloud computing and its secure solutions in detail. <s> BIB002
The blockchain is one of the newly emerging technologies in computer science. It is a digitized, disruptive, and decentralized technology. A blockchain holds a chain of blocks, and each block holds information or data without any central supervision; the data stored inside the blocks depend on the type of blockchain. This technology is widely used to validate transactions of digital currencies. Using this approach, the authenticity of everyone present in the block can be verified even though there is no centralized authority BIB001 BIB002. There are many different types of blockchains available, which are defined in Table 9 (Types of Blockchains and brief descriptions; https://www.guru99.com/blockchaintutorial.html). A few concepts used by the blockchain are:
Decentralized: there is no central authority supervising anything.
Consensus mechanism: the mechanism through which the decentralized network comes to a consensus on a particular matter.
Miners: users who make use of their computational power to mine the blocks.
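To illustrate the chain-of-blocks idea described above, here is a minimal Python sketch of hash-linked blocks; it omits consensus, mining difficulty, and networking, and every field name is an illustrative assumption.

```python
# Minimal hash-linked chain of blocks (illustrative; no consensus or mining).
import hashlib
import json
import time

def block_hash(block):
    """Hash the block's contents deterministically with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash):
    return {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}

# Build a small chain: each block commits to the hash of its predecessor,
# so altering any earlier block breaks every later link.
chain = [make_block("genesis", "0" * 64)]
for payload in ["tx: A->B 5", "tx: B->C 2"]:
    chain.append(make_block(payload, block_hash(chain[-1])))

def chain_is_valid(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(chain_is_valid(chain))       # True
chain[1]["data"] = "tx: A->B 500"  # tamper with a middle block
print(chain_is_valid(chain))       # False: tampering is detected
```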
A survey of fault tolerance in cloud computing <s> Types of Blockchain <s> Blockchain has drawn attention as the next-generation financial technology due to its security that suits the informatization era. In particular, it provides security through the authentication of peers that share virtual cash, encryption, and the generation of hash value. According to the global financial industry, the market for security-based blockchain technology is expected to grow to about USD 20 billion by 2020. In addition, blockchain can be applied beyond the Internet of Things (IoT) environment; its applications are expected to expand. Cloud computing has been dramatically adopted in all IT environments for its efficiency and availability. In this paper, we discuss the concept of blockchain technology and its hot research trends. In addition, we will study how to adapt blockchain security to cloud computing and its secure solutions in detail. <s> BIB001 </s> A survey of fault tolerance in cloud computing <s> Types of Blockchain <s> Nowadays, more and more companies migrate business from their own servers to the cloud. With the influx of computational requests, datacenters consume tremendous energy every day, attracting great attention in the energy efficiency dilemma. In this paper, we investigate the energy-aware resource management problem in cloud datacenters, where green energy with unpredictable capacity is connected. Via proposing a robust blockchain-based decentralized resource management framework, we save the energy consumed by the request scheduler. Moreover, we propose a reinforcement learning method embedded in a smart contract to further minimize the energy cost. Because the reinforcement learning method is informed from the historical knowledge, it relies on no request arrival and energy supply. Experimental results on Google cluster traces and real-world electricity price show that our approach is able to reduce the datacenters cost significantly compared with other benchmark algorithms. <s> BIB002
Apart from digital currencies, this technology has been used to provide effective security solutions for multiple applications. Due to the dispersed behaviour of the cloud, many organizations and individuals using the cloud to store their data face security issues. Blockchain technology can be used to make the cloud system more secure and trustworthy. Besides, it can also be employed to solve other fault-tolerance-related problems, such as asynchronous communication and unpredictable network delay (i.e., backup and delay), and so on BIB001 BIB002.
Meta‐analysis and Mendelian randomization: A review <s> | INTRODUCTION <s> Associations between modifiable exposures and disease seen in observational epidemiology are sometimes confounded and thus misleading, despite our best efforts to improve the design and analysis of studies. Mendelian randomization-the random assortment of genes from parents to offspring that occurs during gamete formation and conception-provides one method for assessing the causal nature of some environmental exposures. The association between a disease and a polymorphism that mimics the biological link between a proposed exposure and disease is not generally susceptible to the reverse causation or confounding that may distort interpretations of conventional observational studies. Several examples where the phenotypic effects of polymorphisms are well documented provide encouraging evidence of the explanatory power of Mendelian randomization and are described. The limitations of the approach include confounding by polymorphisms in linkage disequilibrium with the polymorphism under study, that polymorphisms may have several phenotypic effects associated with disease, the lack of suitable polymorphisms for studying modifiable exposures of interest, and canalization-the buffering of the effects of genetic variation during development. Nevertheless, Mendelian randomization provides new opportunities to test causality and demonstrates how investment in the human genome project may contribute to understanding and preventing the adverse effects on human health of modifiable exposures. <s> BIB001 </s> Meta‐analysis and Mendelian randomization: A review <s> | INTRODUCTION <s> 1. Introduction to probabilities, graphs, and causal models 2. A theory of inferred causation 3. Causal diagrams and the identification of causal effects 4. Actions, plans, and direct effects 5. Causality and structural models in the social sciences 6. Simpson's paradox, confounding, and collapsibility 7. Structural and counterfactual models 8. Imperfect experiments: bounds and counterfactuals 9. Probability of causation: interpretation and identification Epilogue: the art and science of cause and effect. <s> BIB002
The primary aim of observational epidemiology is to determine the root causes of illness, with the focus of many epidemiological analyses being to examine whether exposure to a particular risk factor modifies the severity, or the likelihood of developing, a disease. Causal conclusions are rarely justified following a traditional analysis, even when strong statistical associations are measured between an exposure and an outcome, because it is never certain that all confounders of the association have been identified, measured, and appropriately adjusted for. Mendelian randomization (MR) BIB001 offers an alternative way to probe the issue of causality in epidemiological research, by using genetic variants that are hypothesized to satisfy the instrumental variable (IV) assumptions. Directed acyclic graphs (DAGs) are a useful tool, both to explain the rationale for an MR study and to clarify the IV assumptions that its validity rests on. Figure 1 shows a DAG relating the simplest single unit of genetic variation, a single nucleotide polymorphism (SNP) G, to an exposure, X, and an outcome, Y, in the presence of unmeasured confounding, represented by U. The true causal effect of the exposure on the outcome is denoted by the arrow from X to Y in Figure 1 and the parameter β. The "associational" estimate obtained from a simple regression of the outcome on the exposure could be systematically different from this causal effect, because confounding may be responsible for all, or part of, its magnitude. From the DAG in Figure 1, this can be understood by noting that the association between X and Y is contributed to by the direct effect path X → Y and the "back door" path X ← U → Y. BIB002 Suppose, however, that a SNP G exists which robustly predicts a proportion of the exposure that is unrelated to any confounders of the exposure-outcome relationship. This is represented by the path G → X and the absence of a path between G and U. If, in addition, G can only influence the outcome through the exposure, as represented by the absence of a direct path from G to Y, then this SNP is said to be a "valid IV" (these conditions are restated compactly at the end of this passage). G is usually coded as 0, 1, or 2 to reflect the number of exposure-increasing alleles an individual carries. This assumes that G exerts a linear per-allele effect on X. The exposure itself is typically a continuous measure, for example, a person's blood pressure, body mass index, or cholesterol level. It will sometimes represent a binary health behavior, for example, whether an individual is a current smoker. The outcome of interest can be continuous but is often a binary variable, usually representing the presence or absence of a disease.
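For readers who prefer a symbolic statement, the three IV conditions sketched in the DAG can be written compactly as follows. This is a minimal restatement in standard IV notation; the IV1-IV3 labels and the conditional-independence formalization are our own shorthand, not notation taken from the source.

```latex
% Compact restatement of the IV conditions encoded in the DAG of Figure 1.
% The IV1-IV3 labels and this formalization are our own shorthand.
\begin{itemize}
  \item IV1 (relevance): $G \not\perp X$, i.e. the SNP predicts the exposure (the path $G \to X$).
  \item IV2 (exchangeability): $G \perp U$, i.e. no path between the SNP and the confounders.
  \item IV3 (exclusion restriction): $G \perp Y \mid (X, U)$, i.e. the SNP affects the outcome only through the exposure.
\end{itemize}
```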
Meta‐analysis and Mendelian randomization: A review <s> | QUANTIFYING THE CAUSAL EFFECT IN MR USING THE RATIO ESTIMATE AND TSLS <s> 1. Introduction to probabilities, graphs, and causal models 2. A theory of inferred causation 3. Causal diagrams and the identification of causal effects 4. Actions, plans, and direct effects 5. Causality and structural models in the social sciences 6. Simpson's paradox, confounding, and collapsibility 7. Structural and counterfactual models 8. Imperfect experiments: bounds and counterfactuals 9. Probability of causation: interpretation and identification Epilogue: the art and science of cause and effect. <s> BIB001 </s> Meta‐analysis and Mendelian randomization: A review <s> | QUANTIFYING THE CAUSAL EFFECT IN MR USING THE RATIO ESTIMATE AND TSLS <s> Instrumental variable analysis is an approach for obtaining causal inferences on the effect of an exposure (risk factor) on an outcome from observational data. It has gained in popularity over the past decade with the use of genetic variants as instrumental variables, known as Mendelian randomization. An instrumental variable is associated with the exposure, but not associated with any confounder of the exposure-outcome association, nor is there any causal pathway from the instrumental variable to the outcome other than via the exposure. Under the assumption that a single instrumental variable or a set of instrumental variables for the exposure is available, the causal effect of the exposure on the outcome can be estimated. There are several methods available for instrumental variable estimation; we consider the ratio method, two-stage methods, likelihood-based methods, and semi-parametric methods. Techniques for obtaining statistical inferences and confidence intervals are presented. The statistical properties of estimates from these methods are compared, and practical advice is given about choosing a suitable analysis method. In particular, bias and coverage properties of estimators are considered, especially with weak instruments. Settings particularly relevant to Mendelian randomization are prioritized in the paper, notably the scenario of a continuous exposure and a continuous or binary outcome. <s> BIB002 </s> Meta‐analysis and Mendelian randomization: A review <s> | QUANTIFYING THE CAUSAL EFFECT IN MR USING THE RATIO ESTIMATE AND TSLS <s> Mendelian randomization (MR) is a method of exploiting genetic variation to unbiasedly estimate a causal effect in presence of unmeasured confounding. MR is being widely used in epidemiology and other related areas of population science. In this paper, we study statistical inference in the increasingly popular two-sample summary-data MR design. We show a linear model for the observed associations approximately holds in a wide variety of settings when all the genetic variants satisfy the exclusion restriction assumption, or in genetic terms, when there is no pleiotropy. In this scenario, we derive a maximum profile likelihood estimator with provable consistency and asymptotic normality. However, through analyzing real datasets, we find strong evidence of both systematic and idiosyncratic pleiotropy in MR, echoing the omnigenic model of complex traits that is recently proposed in genetics. We model the systematic pleiotropy by a random effects model, where no genetic variant satisfies the exclusion restriction condition exactly. In this case we propose a consistent and asymptotically normal estimator by adjusting the profile score. 
We then tackle the idiosyncratic pleiotropy by robustifying the adjusted profile score. We demonstrate the robustness and efficiency of the proposed methods using several simulated and real datasets. <s> BIB003
If SNP G is a valid IV, and the exposure can be assumed to causally affect the outcome in a linear fashion with no effect modification, then the underlying SNP-outcome association (denoted by Γ) should be the product of the underlying SNP-exposure association (denoted by γ) and the causal effect of the exposure on the outcome, β. That is,

$\Gamma = \gamma \beta$ .    (1)

From Equation 1, the simplest estimate for β ($\hat{\beta}_R$, where R stands for "ratio") is obtained by dividing the SNP-outcome association estimate by the SNP-exposure association estimate to give:

$\hat{\beta}_R = \hat{\Gamma} / \hat{\gamma}$ .    (2)

The standard error of the ratio estimate can be approximated via a Taylor series expansion of $\hat{\beta}_R$ using the delta method. BIB002 The ratio estimate in (2) is calculated from two summary estimates, but it is also equivalent to the estimate obtained by the following two-step procedure applied to individual level data on Y, X, and G:

Step 1: Regress the exposure on the SNP via the model

$X = \gamma_0 + \gamma G + \varepsilon_X$ ,    (3)

and obtain the fitted values $\hat{X}$.

Step 2: Regress the outcome on the fitted values from step 1, $\hat{X}$, via the model

$Y = \beta_0 + \beta \hat{X} + \varepsilon_Y$ ,    (4)

and report its estimated regression coefficient, $\hat{\beta}$.

This is referred to as "two-stage least squares" (TSLS). When multiple SNPs are available as IVs, they can easily be incorporated into a TSLS analysis to yield a single causal estimate, by calculating fitted values based on a multivariable regression of the exposure on all SNPs together in model (3). In that case, γ and G would represent vectors of association parameters and SNP values for each individual. This automatically allows for any potential correlation between the SNPs, for example, due to linkage disequilibrium (LD). Standard errors for the TSLS estimate in (4) must take into account the uncertainty in the first stage model (3); this correction is performed as standard in most software packages. Equation 1, the ratio estimate in (2), and the TSLS procedure in (3) and (4) are only strictly correct when Y is itself continuous. When Y is binary, logistic regression is typically used to quantify the G-Y association in (1) or the association between the genetically predicted exposure and outcome in (4). In this case, the causal effect of a unit change in the exposure on the risk of Y will have a magnitude that depends on the reference level of X chosen (and so will vary across the range of X). This is due to the noncollapsibility of the odds ratio. However, because genetic effects generally explain a very small amount of variation in the exposure, the range of genetically predicted exposure levels is very narrow around the center of the distribution of X. Modeling the causal effect of moving between different levels of the genetically predicted exposure as a constant value therefore provides a very good approximation to the true "local" causal effect. For further discussion, see appendix 1 in Zhao et al. BIB003
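To make the procedure above concrete, here is a minimal Python sketch (not from the source; all simulation parameters are invented) that simulates a confounded exposure-outcome relationship with a valid SNP instrument and compares the ratio estimate of Equation 2 with the naive, confounded regression slope. With a single instrument, the ratio estimate coincides with the TSLS estimate from steps 1 and 2.

```python
import numpy as np

# Minimal simulation of the ratio estimator (Eq. 2) versus a naive,
# confounded regression. All parameters are invented for illustration;
# beta = 0.5 is the true causal effect of X on Y.
rng = np.random.default_rng(0)
n, beta = 100_000, 0.5

G = rng.binomial(2, 0.3, size=n)      # SNP coded 0/1/2 (allele count)
U = rng.normal(size=n)                # unmeasured confounder
X = 0.4 * G + U + rng.normal(size=n)  # exposure: gamma = 0.4 per allele
Y = beta * X + U + rng.normal(size=n) # outcome: causal effect + confounding

def slope(g, t):
    """Least-squares slope of t regressed on g."""
    return np.cov(g, t)[0, 1] / np.var(g, ddof=1)

gamma_hat = slope(G, X)               # SNP-exposure association (Eq. 3)
Gamma_hat = slope(G, Y)               # SNP-outcome association
beta_ratio = Gamma_hat / gamma_hat    # ratio estimate (Eq. 2), close to 0.5
beta_naive = slope(X, Y)              # confounded OLS slope, biased upwards

print(f"ratio (IV) estimate: {beta_ratio:.3f}")
print(f"naive OLS estimate:  {beta_naive:.3f}")
```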
Meta‐analysis and Mendelian randomization: A review <s> | THE CHANGING FACE OF STUDY DESIGN: TWO-SAMPLE SUMMARY DATA MR <s> 1. Introduction to probabilities, graphs, and causal models 2. A theory of inferred causation 3. Causal diagrams and the identification of causal effects 4. Actions, plans, and direct effects 5. Causality and structural models in the social sciences 6. Simpson's paradox, confounding, and collapsibility 7. Structural and counterfactual models 8. Imperfect experiments: bounds and counterfactuals 9. Probability of causation: interpretation and identification Epilogue: the art and science of cause and effect. <s> BIB001 </s> Meta‐analysis and Mendelian randomization: A review <s> | THE CHANGING FACE OF STUDY DESIGN: TWO-SAMPLE SUMMARY DATA MR <s> Mendelian randomization (MR) is a method for estimating the causal relationship between an exposure and an outcome using a genetic factor as an instrumental variable (IV) for the exposure. In the traditional MR setting, data on the IV, exposure, and outcome are available for all participants. However, obtaining complete exposure data may be difficult in some settings, due to high measurement costs or lack of appropriate biospecimens. We used simulated data sets to assess statistical power and bias for MR when exposure data are available for a subset (or an independent set) of participants. We show that obtaining exposure data for a subset of participants is a cost-efficient strategy, often having negligible effects on power in comparison with a traditional complete-data analysis. The size of the subset needed to achieve maximum power depends on IV strength, and maximum power is approximately equal to the power of traditional IV estimators. Weak IVs are shown to lead to bias towards the null when the subsample is small and towards the confounded association when the subset is relatively large. Various approaches for confidence interval calculation are considered. These results have important implications for reducing the costs and increasing the feasibility of MR studies. <s> BIB002 </s> Meta‐analysis and Mendelian randomization: A review <s> | THE CHANGING FACE OF STUDY DESIGN: TWO-SAMPLE SUMMARY DATA MR <s> Genome-wide association studies, which typically report regression coefficients summarizing the associations of many genetic variants with various traits, are potentially a powerful source of data for Mendelian randomization investigations. We demonstrate how such coefficients from multiple variants can be combined in a Mendelian randomization analysis to estimate the causal effect of a risk factor on an outcome. The bias and efficiency of estimates based on summarized data are compared to those based on individual-level data in simulation studies. We investigate the impact of gene–gene interactions, linkage disequilibrium, and ‘weak instruments’ on these estimates. Both an inverse-variance weighted average of variant-specific associations and a likelihood-based approach for summarized data give similar estimates and precision to the two-stage least squares method for individual-level data, even when there are gene–gene interactions. However, these summarized data methods overstate precision when variants are in linkage disequilibrium. If the P-value in a linear regression of the risk factor for each variant is less than , then weak instrument bias will be small. 
We use these methods to estimate the causal association of low-density lipoprotein cholesterol (LDL-C) on coronary artery disease using published data on five genetic variants. A 30% reduction in LDL-C is estimated to reduce coronary artery disease risk by 67% (95% CI: 54% to 76%). We conclude that Mendelian randomization investigations using summarized data from uncorrelated variants are similarly efficient to those using individual-level data, although the necessary assumptions cannot be so fully assessed. <s> BIB003
[Figure 2. Meta-analysis of the association of rs1205 with C-reactive protein (left) and heart disease (right) in studies contributing towards the C-reactive protein coronary heart disease genetics collaboration. Estimates reflect the mean difference in log CRP per allele (left) and odds ratio of CHD per allele (right). CHD, coronary heart disease; CRP, C-reactive protein.]

The CRP example is typical of a traditional MR study design, in that it made use of individual level data and utilized a small number of correlated genetic variants with a known functional role on an exposure, to first estimate study-specific causal effects and then meta-analyze the results across studies. Unfortunately, the level of cooperation and administrative burden required to share individual level data in this way has meant that this model is relatively inefficient for the large-scale pursuit of MR analyses. In recent years, however, it has become possible in theory for anyone to conduct an MR analysis by combining summary estimates of SNP-trait associations from two genome-wide association studies (GWASs), released into the public domain by international disease consortia. This has become known as two-sample summary data MR. BIB002 BIB003 Specifically, suppose that a single common SNP is measured in two separate GWASs (eg, "studies 1 and 2") where study 1 measured its association with trait X and study 2 measured its association with trait Y. A ratio estimate for the causal effect of X on Y can be obtained by dividing the SNP-Y association estimate from study 2 by the SNP-X association estimate from study 1, just as in formula (2). Typically, GWASs report summary data estimates of associations with a trait for the strongest SNP within a specific genomic region, across many regions encompassing the entire genome. This has led to a dramatic increase in the number of uncorrelated variants that can, in principle, be used within an MR analysis. Ratio estimates for each SNP are then combined to yield an overall causal effect using the standard inverse variance weighted (IVW) meta-analysis formula:

$\hat{\beta}_{IVW} = \frac{\sum_{j} w_j \hat{\beta}_{Rj}}{\sum_{j} w_j}$

Here, $\hat{\beta}_{Rj}$ represents the ratio estimate obtained from the jth SNP, and $w_j$ is its inverse variance weight. Traditionally, so-called "first order" weights are used, which assume that the denominator of the ratio estimate, $\hat{\gamma}_j$, is estimated with negligible uncertainty and can therefore be treated as a constant. This is referred to as the "no measurement error" (NOME) assumption and means that $w_j = \hat{\gamma}_j^2 / \sigma_{Yj}^2$, where $\sigma_{Yj}$ is the standard error of the jth SNP-outcome association estimate. A nonexhaustive list of GWASs with publicly available data that has been used in two-sample summary data MR studies is given in Table 1. For a more complete list of consortia, see Haycock et al.
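As an illustration of the IVW formula above, the following Python sketch pools hypothetical summary statistics using first order weights. The numbers are invented for the sketch and are not taken from any consortium in Table 1.

```python
import numpy as np

# Hedged sketch of the IVW estimator for two-sample summary data MR.
# beta_x / beta_y are hypothetical SNP-exposure / SNP-outcome association
# estimates for five independent SNPs; se_y are the SNP-outcome standard
# errors.
beta_x = np.array([0.12, 0.08, 0.15, 0.05, 0.10])
beta_y = np.array([0.060, 0.045, 0.070, 0.030, 0.048])
se_y   = np.array([0.010, 0.012, 0.011, 0.015, 0.009])

ratio = beta_y / beta_x                   # per-SNP ratio estimates
w = beta_x**2 / se_y**2                   # "first order" (NOME) weights
beta_ivw = np.sum(w * ratio) / np.sum(w)  # IVW pooled causal estimate
se_ivw = 1.0 / np.sqrt(np.sum(w))         # fixed-effect standard error

print(f"IVW estimate: {beta_ivw:.3f} (SE {se_ivw:.3f})")
```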
Meta‐analysis and Mendelian randomization: A review <s> | Precision and weak instrument bias of MR-Egger regression <s> Mendelian randomization investigations are becoming more powerful and simpler to perform, due to the increasing size and coverage of genome-wide association studies and the increasing availability of summarized data on genetic associations with risk factors and disease outcomes. However, when using multiple genetic variants from different gene regions in a Mendelian randomization analysis, it is highly implausible that all the genetic variants satisfy the instrumental variable assumptions. This means that a simple instrumental variable analysis alone should not be relied on to give a causal conclusion. In this article, we discuss a range of sensitivity analyses that will either support or question the validity of causal inference from a Mendelian randomization analysis with multiple genetic variants. We focus on sensitivity analyses of greatest practical relevance for ensuring robust causal inferences, and those that can be undertaken using summarized data. Aside from cases in which the justification of the instrumental variable assumptions is supported by strong biological understanding, a Mendelian randomization analysis in which no assessment of the robustness of the findings to violations of the instrumental variable assumptions has been made should be viewed as speculative and incomplete. In particular, Mendelian randomization investigations with large numbers of genetic variants without such sensitivity analyses should be treated with skepticism. <s> BIB001 </s> Meta‐analysis and Mendelian randomization: A review <s> | Precision and weak instrument bias of MR-Egger regression <s> Pleiotropy, the phenomenon of a single genetic variant influencing multiple traits, is likely widespread in the human genome. If pleiotropy arises because the single nucleotide polymorphism (SNP) influences one trait, which in turn influences another ('vertical pleiotropy'), then Mendelian randomization (MR) can be used to estimate the causal influence between the traits. Of prime focus among the many limitations to MR is the unprovable assumption that apparent pleiotropic associations are mediated by the exposure (i.e. reflect vertical pleiotropy), and do not arise due to SNPs influencing the two traits through independent pathways ('horizontal pleiotropy'). The burgeoning treasure trove of genetic associations yielded through genome wide association studies makes for a tantalizing prospect of phenome-wide causal inference. Recent years have seen substantial attention devoted to the problem of horizontal pleiotropy, and in this review we outline how newly developed methods can be used together to improve the reliability of MR. <s> BIB002
In practice, the IVW approach is likely to yield far more precise estimates than MR-Egger regression. An important factor affecting only the precision of MR-Egger is the amount of variation between the set of SNP-exposure associations, once they have been oriented in the positive direction. That is, it works best when there are SNPs with small, medium, and large associations relative to one another. This is true when fitting any sort of univariable linear regression model with an intercept, because the explanatory variable of the regression (in this case $\hat{\gamma}_j$) must exhibit some variation, the more the better. When such variation is not present over and above what would be expected from the SNP-exposure association standard errors, $\sigma_{Xj}$ (as represented by the horizontal error bars in Figure 4A), MR-Egger would suffer complete regression dilution bias. That is, its estimate would be shrunk on average to zero. Rather than being used in its original guise to quantify heterogeneity among causal estimates, Higgins' $I^2$ statistic has been repurposed in MR to quantify the expected dilution of MR-Egger regression estimates, by calculating it with respect to the SNP-exposure summary data $(\hat{\gamma}_j, \sigma^2_{Xj})$. This is referred to as $I^2_{GX}$. An $I^2_{GX}$ close to 1 would indicate no dilution, whereas an $I^2_{GX}$ of 0.5 would indicate a likely 50% dilution. Note that an $I^2_{GX}$ of 1 is equivalent to the NOME assumption being satisfied. This could be achieved even if there were very little variation between the SNP-exposure association estimates, as long as they are very precise. $I^2_{GX}$ is therefore a measure of the collective strength of a set of instruments for MR-Egger. The errors-in-variables technique of simulation extrapolation has successfully been applied to correct for this weak instrument bias when $I^2_{GX}$ is sufficiently low. Further research is ongoing to extend the modified weights in Equation 10, so that they can be used for both IVW and MR-Egger regression. Because of its relative imprecision, MR-Egger regression is not advocated as a replacement for the standard IVW approach. Indeed, it is best utilized within the context of a sensitivity analysis, BIB002 BIB001 and given most credence when it provides a demonstrably better fit to the data.
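The dilution diagnostic described here is straightforward to compute from summary data. Below is a small Python sketch (inputs invented) that computes $I^2_{GX}$ as Higgins' $I^2$ applied to the SNP-exposure estimates, that is, $I^2_{GX} = (Q_{GX} - (L - 1)) / Q_{GX}$, where $Q_{GX}$ is Cochran's Q of the $\hat{\gamma}_j$ with weights $1/\sigma^2_{Xj}$.

```python
import numpy as np

# Hedged sketch of the I^2_GX diagnostic described above: Higgins' I^2
# computed on hypothetical SNP-exposure summary data (gamma_hat_j, se_Xj).
# Values near 1 suggest negligible dilution of the MR-Egger estimate;
# a value of 0.5 would suggest roughly 50% dilution.
gamma_hat = np.array([0.12, 0.08, 0.15, 0.05, 0.10, 0.20])  # invented
se_x      = np.array([0.010, 0.009, 0.012, 0.008, 0.011, 0.010])

w = 1.0 / se_x**2
gamma_bar = np.sum(w * gamma_hat) / np.sum(w)  # precision-weighted mean
Q_gx = np.sum(w * (gamma_hat - gamma_bar)**2)  # Cochran's Q of gamma_hat
L = len(gamma_hat)
I2_gx = max(0.0, (Q_gx - (L - 1)) / Q_gx)      # truncated at zero

# With these made-up, very precise inputs I2_GX comes out close to 1.
print(f"Q_GX = {Q_gx:.1f}, I2_GX = {I2_gx:.3f}")
```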
Meta‐analysis and Mendelian randomization: A review <s> | Robust meta-analytic approaches <s> Developments in genome-wide association studies and the increasing availability of summary genetic association data have made application of Mendelian randomization relatively straightforward. However, obtaining reliable results from a Mendelian randomization investigation remains problematic, as the conventional inverse-variance weighted method only gives consistent estimates if all of the genetic variants in the analysis are valid instrumental variables. We present a novel weighted median estimator for combining data on multiple genetic variants into a single causal estimate. This estimator is consistent even when up to 50% of the information comes from invalid instrumental variables. In a simulation analysis, it is shown to have better finite-sample Type 1 error rates than the inverse-variance weighted method, and is complementary to the recently proposed MR-Egger (Mendelian randomization-Egger) regression method. In analyses of the causal effects of low-density lipoprotein cholesterol and high-density lipoprotein cholesterol on coronary artery disease risk, the inverse-variance weighted method suggests a causal effect of both lipid fractions, whereas the weighted median and MR-Egger regression methods suggest a null effect of high-density lipoprotein cholesterol that corresponds with the experimental evidence. Both median-based and MR-Egger regression methods should be considered as sensitivity analyses for Mendelian randomization investigations with multiple genetic variants. <s> BIB001 </s> Meta‐analysis and Mendelian randomization: A review <s> | Robust meta-analytic approaches <s> Summary Background The benefits of blood pressure lowering treatment for prevention of cardiovascular disease are well established. However, the extent to which these effects differ by baseline blood pressure, presence of comorbidities, or drug class is less clear. We therefore performed a systematic review and meta-analysis to clarify these differences. Method For this systematic review and meta-analysis, we searched MEDLINE for large-scale blood pressure lowering trials, published between Jan 1, 1966, and July 7, 2015, and we searched the medical literature to identify trials up to Nov 9, 2015. All randomised controlled trials of blood pressure lowering treatment were eligible for inclusion if they included a minimum of 1000 patient-years of follow-up in each study arm. No trials were excluded because of presence of baseline comorbidities, and trials of antihypertensive drugs for indications other than hypertension were eligible. We extracted summary-level data about study characteristics and the outcomes of major cardiovascular disease events, coronary heart disease, stroke, heart failure, renal failure, and all-cause mortality. We used inverse variance weighted fixed-effects meta-analyses to pool the estimates. Results We identified 123 studies with 613 815 participants for the tabular meta-analysis. Meta-regression analyses showed relative risk reductions proportional to the magnitude of the blood pressure reductions achieved. Every 10 mm Hg reduction in systolic blood pressure significantly reduced the risk of major cardiovascular disease events (relative risk [RR] 0·80, 95% CI 0·77–0·83), coronary heart disease (0·83, 0·78–0·88), stroke (0·73, 0·68–0·77), and heart failure (0·72, 0·67–0·78), which, in the populations studied, led to a significant 13% reduction in all-cause mortality (0·87, 0·84–0·91). 
However, the effect on renal failure was not significant (0·95, 0·84–1·07). Similar proportional risk reductions (per 10 mm Hg lower systolic blood pressure) were noted in trials with higher mean baseline systolic blood pressure and trials with lower mean baseline systolic blood pressure (all p trend >0·05). There was no clear evidence that proportional risk reductions in major cardiovascular disease differed by baseline disease history, except for diabetes and chronic kidney disease, for which smaller, but significant, risk reductions were detected. β blockers were inferior to other drugs for the prevention of major cardiovascular disease events, stroke, and renal failure. Calcium channel blockers were superior to other drugs for the prevention of stroke. For the prevention of heart failure, calcium channel blockers were inferior and diuretics were superior to other drug classes. Risk of bias was judged to be low for 113 trials and unclear for 10 trials. Heterogeneity for outcomes was low to moderate; the $I^2$ statistic for heterogeneity for major cardiovascular disease events was 41%, for coronary heart disease 25%, for stroke 26%, for heart failure 37%, for renal failure 28%, and for all-cause mortality 35%. Interpretation Blood pressure lowering significantly reduces vascular risk across various baseline blood pressure levels and comorbidities. Our results provide strong support for lowering blood pressure to systolic blood pressures less than 130 mm Hg and providing blood pressure lowering treatment to individuals with a history of cardiovascular disease, coronary heart disease, stroke, diabetes, heart failure, and chronic kidney disease. Funding National Institute for Health Research and Oxford Martin School. <s> BIB002 </s> Meta‐analysis and Mendelian randomization: A review <s> | Robust meta-analytic approaches <s> Background: Mendelian randomization (MR) is being increasingly used to strengthen causal inference in observational studies. Availability of summary data of genetic associations for a variety of phenotypes from large genome-wide association studies (GWAS) allows straightforward application of MR using summary data methods, typically in a two-sample design. In addition to the conventional inverse variance weighting (IVW) method, recently developed summary data MR methods, such as the MR-Egger and weighted median approaches, allow a relaxation of the instrumental variable assumptions. Methods: Here, a new method - the mode-based estimate (MBE) - is proposed to obtain a single causal effect estimate from multiple genetic instruments. The MBE is consistent when the largest number of similar (identical in infinite samples) individual-instrument causal effect estimates comes from valid instruments, even if the majority of instruments are invalid. We evaluate the performance of the method in simulations designed to mimic the two-sample summary data setting, and demonstrate its use by investigating the causal effect of plasma lipid fractions and urate levels on coronary heart disease risk. Results: The MBE presented less bias and lower type-I error rates than other methods under the null in many situations. Its power to detect a causal effect was smaller compared with the IVW and weighted median methods, but was larger than that of MR-Egger regression, with sample size requirements typically smaller than those available from GWAS consortia.
Conclusions: The MBE relaxes the instrumental variable assumptions, and should be used in combination with other approaches in sensitivity analyses. <s> BIB003
The InSIDE assumption is likely to be violated when a SNP is associated with the exposure of interest through a confounder of the exposure-outcome relationship (as represented by the dotted arrow between G and U in Figure 1). This is because it would make the magnitude of a SNP's pleiotropy correlated with its strength as an instrument, which invalidates both the IVW and MR-Egger analyses. For this reason, robust meta-analytic methods have been proposed, BIB001 BIB003 which do not rely on the InSIDE assumption and are being increasingly implemented alongside IVW and MR-Egger. Specifically, rather than calculating an inverse variance weighted mean of all ratio estimates (ie, the IVW estimate):
• The "weighted median" estimate BIB001 calculates the median of the IVW empirical distribution function of ratio estimates.
• The mode-based estimate (MBE) BIB003 calculates the modal value of the same weighted empirical distribution function.
Currently, both approaches use first order inverse variance weights to define their empirical distribution functions (a minimal numerical sketch of the weighted median, together with per-SNP contributions to Cochran's Q, is given at the end of this section). The weighted median can provide a consistent estimate for the causal effect even if up to half of the SNPs violate InSIDE (ie, provided most SNPs do not). The MBE can provide a consistent estimate if valid SNPs (ie, those with a zero value of $\alpha_j$ in Equation 5) form the largest subset of all SNPs that have the same value of $\alpha_j$. In order to improve the robustness of IVW and MR-Egger regression, outlier detection and removal strategies have also been proposed. For example, in Bowden et al, the individual contribution of each SNP to Cochran's Q can be assessed informally against a $\chi^2_1$ distribution, to see whether a small number of SNPs are driving the apparent heterogeneity and are therefore candidates for removal in a sensitivity analysis. This approach will be demonstrated in the applied example below. Studentized residuals and Cook's distance have also been used in MR studies to detect influential SNPs that merit closer inspection. The Galbraith radial plot has additionally been repurposed for detecting outlying variants in MR. As an applied example, we examined the causal effect of systolic blood pressure (SBP; the exposure) on CHD (the outcome). SNP-exposure association estimates were obtained from the International Consortium for Blood Pressure (ICBP) GWAS consortium for 26 variants that were robustly associated with SBP at genome-wide statistical significance levels. Log-odds ratio estimates of SNP-CHD association were collected from the Coronary Artery Disease Genome-wide Replication and Meta-analysis (CARDIOGRAM) consortium. Both data sources are publicly accessible (see Table 1); however, we provide these data as Supporting Information for the interested reader. Figure 5 shows a scatter plot of the SNP-CHD versus the SNP-SBP associations, along with their 95% confidence intervals, and its corresponding funnel plot. Causal effect estimates for the log-odds ratio of CHD for a 1 mmHg increase in SBP were obtained via the IVW and MR-Egger approaches. Estimates for the weighted median estimator and MBE are also shown. The SNP-exposure association estimates were sufficiently precise (a mean F statistic of 61) and sufficiently varied (an $I^2_{GX}$ statistic of 0.96) for the NOME assumption to approximately hold. We therefore use first order weights for all estimators in the analysis. Full results are given in Table 2. To improve their clinical relevance, the estimates in Table 2 are shown as odds ratios and reflect the effect of a 5 mmHg increase in SBP.
The IVW, weighted median, and MBE approaches all suggest a positive causal effect of SBP on CHD. MR-Egger regression, by contrast, infers that directional pleiotropy is largely driving the analysis and suggests a causal effect close to zero. We would expect the Q and Q′ statistics to be equal to their degrees of freedom (25 and 24, respectively) under the null hypothesis of no heterogeneity. Since they are both more than twice this value, substantial heterogeneity around the IVW and MR-Egger estimates is detected that could be due to horizontal pleiotropy. The difference $Q - Q' = 8.5$ is extreme under a $\chi^2_1$ distribution, which suggests that MR-Egger is a better fit to the data. A more detailed outlier analysis revealed that this heterogeneity was largely driven by a single outlying variant, rs17249754, in the ATP2B1 gene (shown as a square rather than a circle in Figure 5). It alone contributes a value of 28.3 to Q, which equates to 42% of its total; the next largest individual SNP contribution is 8.4. Since rs17249754 is a relatively strong and potentially pleiotropic instrument in the analysis, this could lead to the InSIDE assumption being violated, and be responsible for the large discrepancy between the IVW and MR-Egger results. Repeating the analysis after the removal of rs17249754 shows that the estimates are indeed in broad agreement (Table 2), and statistical heterogeneity around the IVW and MR-Egger estimates is substantially reduced (as noted by the values of Q and Q′). Furthermore, the difference $Q - Q' = 0.7$ now indicates that MR-Egger does not provide a substantially better fit to the data. The weighted median and MBE results are least affected by the removal of rs17249754, highlighting their inherent robustness to outliers. In summary, our MR analysis supports the hypothesis that SBP is causally related to CHD risk, which aligns these findings with meta-analyses of equivalent trial evidence. BIB002
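To close the loop on the robust methods used in this example, here is a hedged Python sketch (with invented summary data, including one deliberately outlying SNP) of the weighted median estimator and of per-SNP contributions to Cochran's Q, each of which can be screened informally against a $\chi^2_1$ distribution to flag candidate outliers.

```python
import numpy as np

# Hedged sketch of two robust tools from this section. The summary data
# are invented; the last SNP is a deliberate outlier.
beta_x = np.array([0.12, 0.08, 0.15, 0.05, 0.10])
beta_y = np.array([0.060, 0.045, 0.070, 0.030, 0.148])  # last SNP outlying
se_y   = np.array([0.010, 0.012, 0.011, 0.015, 0.009])

ratio = beta_y / beta_x                   # per-SNP ratio estimates
w = beta_x**2 / se_y**2                   # first order weights

# Weighted median: the 50% point of the IVW empirical distribution
# function of the ordered ratio estimates (midpoint convention).
order = np.argsort(ratio)
cum = (np.cumsum(w[order]) - 0.5 * w[order]) / np.sum(w)
beta_wm = np.interp(0.5, cum, ratio[order])

# Per-SNP contributions to Cochran's Q about the IVW estimate; under
# homogeneity each contribution is approximately chi-squared(1).
beta_ivw = np.sum(w * ratio) / np.sum(w)
q_j = w * (ratio - beta_ivw)**2

print(f"weighted median = {beta_wm:.3f} (robust to the outlier)")
print(f"Q = {q_j.sum():.1f}; largest single-SNP contribution = {q_j.max():.1f}")
```

Running this sketch shows the outlying SNP dominating the total Q while leaving the weighted median essentially unchanged, mirroring the behavior of rs17249754 in the applied example above.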