A survey of power management techniques in mobile computing operating systems <s> I <s> Recent advances in hardware and communication technology have made mobile computing possible. It is expected, [BIV92], that in the near future, tens of millions of users will carry a portable computer with a wireless connection to a worldwide information network. This rapidly expanding technology poses new challenging problems. The mobile computing environment is an environment characterized by frequent disconnections, significant limitations of bandwidth and power, resource restrictions and fast-changing locations. The peculiarities of the new environment make old software systems inadequate and raise new challenging research questions. In this report we attempt to investigate the impact of mobility on today's software systems, report on how research starts dealing with mobility and state some problems that remain open. <s> BIB001 </s> A survey of power management techniques in mobile computing operating systems <s> I <s> We consider wireless broadcasting of data as a way of disseminating information to a massive number of users. Organizing and accessing information on wireless communication channels is different from the problem of organizing and accessing data on the disk. We describe two methods, (1,m) Indexing and Distributed Indexing, for organizing and accessing broadcast data. We demonstrate that the proposed algorithms lead to significant improvement of battery life, while retaining a low access time. <s> BIB002
The motivation behind searching for and exploiting unique organization and access methods stems from the potential savings in power resulting from being able to wait for expected incoming data while in a "doze" mode BIB001 . When the mobile computer is receiving, it (its CPU) must be in the "active" mode. As was argued in section 1.1, the power consumed by the CPU and memory (in active mode) is not trivial. As pointed out by Imielinski et al., the ratio of power consumption in active mode to that in doze mode is on the order of 5000 for the Hobbit chip from AT&T BIB002 . The question is how to organize the data to be broadcast so that it can be accessed by a mobile receiver in a manner that provides for optimal switching between active and doze modes. Due to the dynamic nature of mobile computing in terms of wireless communication cell migration, changing information content, and the multiplexing of many different files over the same communication channels, the authors propose broadcasting the directory of a broadcast data file along with the data file in the form of an index. Without an index, the client would have to filter (listen to) the entire broadcast in the worst case, or half the broadcast on average. This is undesirable because such filtering requires the mobile unit to be in its active mode, consuming power unnecessarily. Therefore every broadcast (channel) contains all of the information needed: the file and the index. Again, the question is how to organize the data to be broadcast for optimal access by a mobile receiver. For use in evaluating potential methods of organization and access, the authors introduce two parameters, access time and tuning time. The access time is the average time between identification of desired data in the index portion of the broadcast and download of the data in the data file portion of the broadcast. The tuning time is the amount of time spent by a mobile client actually listening to a broadcast channel. The goal of the authors was to find algorithms for allocating the index together with the data on a broadcast channel, and to do so in a manner that strikes a balance between the optimal access time algorithm and the optimal tuning time algorithm. The first organization method, called "(1,m) Indexing", broadcasts the entire index m times (equally spaced) during the broadcast of one version of the data file. In other words, the entire index is broadcast every 1/m fraction of the data file. In the second method, "Distributed Indexing", the (1,m) method is improved upon by eliminating much of the redundancy in the m broadcasts of the data file index. Their key observation is that only certain portions of the index tree are necessary between broadcasts of particular segments of the data file. Specifically, the periodically broadcast index segments need only index the data file segment that follows them. By using fixed-size "buckets" of data (both index and file data), the two methods allow the mobile client to "doze" for deterministic amounts of time, only to awake just prior to the next necessary listening event, what the authors call a "probe". In their evaluations, the authors found that both schemes achieve tuning times that are almost as good as that of an algorithm with optimal tuning time. In terms of access time, both algorithms exhibit a savings that is a respectable compromise between the two extremes of an algorithm with optimal access time and an algorithm with optimal tuning time.
The Distributed Indexing scheme is always better than the (1,m) scheme. In examples of practical implementations, the authors again compare their algorithms to the extreme cases of an optimal tuning time algorithm and an optimal access time algorithm. In one example for the (1,m) algorithm, they show a per-query reduction of power by a factor of 120 over the optimal access algorithm, but a 45% increase in access time. For the same (1,m) example, they found that the power consumption was very similar to that of the optimal tuning algorithm, but that the access time had improved to 70% of that of the optimal tuning algorithm. In an example of the distributed indexing scheme, they found per-query power consumption 100 times smaller than that of an optimal access algorithm, while the access time increased by only 10%. Again, when compared to an optimal tuning algorithm, they found similar power consumption but an improved access time of 53% of that for an optimal tuning algorithm. The conclusion of the authors is that by using their distributed indexing scheme in periodic broadcasts, a 100-fold reduction in energy can be realized. This savings can of course be used for other purposes such as extended battery life or extra queries.
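To make the interleaving concrete, the sketch below simulates a (1,m) broadcast cycle and a single client query. It is a minimal illustration, not the authors' code: the bucket counts, the value of m, and the simplification that the client listens to an entire index copy (rather than a root-to-leaf path) are assumptions made for readability.

```python
# Minimal sketch of (1,m) indexing: m full index copies are interleaved with the
# data file, and a client alternates between "active" (listening) and "doze" modes.
# Access time and tuning time are measured in bucket units.

def build_broadcast(num_data_buckets: int, num_index_buckets: int, m: int):
    """Return one broadcast cycle as a list of ('index', i) / ('data', j) buckets."""
    schedule, data = [], 0
    seg = num_data_buckets // m                      # data buckets between index copies
    for _ in range(m):
        schedule += [("index", i) for i in range(num_index_buckets)]   # full index copy
        schedule += [("data", data + j) for j in range(seg)]
        data += seg
    return schedule

def simulate_query(schedule, target_bucket, start_pos):
    """Client tunes in at start_pos and wants data bucket target_bucket."""
    n = len(schedule)
    tuning, elapsed, pos = 0, 0, start_pos
    # 1) initial probe: listen to one bucket (it would carry the offset of the next index copy)
    tuning += 1; pos += 1; elapsed += 1
    # 2) doze until the next index copy begins
    while schedule[pos % n][0] != "index":
        pos += 1; elapsed += 1
    # 3) actively read the index copy (simplified: listen to the whole copy)
    while schedule[pos % n][0] == "index":
        tuning += 1; pos += 1; elapsed += 1
    # 4) doze until the wanted data bucket is broadcast, then download it
    while schedule[pos % n] != ("data", target_bucket):
        pos += 1; elapsed += 1
    tuning += 1; elapsed += 1
    return elapsed, tuning

if __name__ == "__main__":
    bcast = build_broadcast(num_data_buckets=1200, num_index_buckets=12, m=4)
    access, tuning = simulate_query(bcast, target_bucket=900, start_pos=37)
    print(f"access time = {access} buckets, tuning time = {tuning} buckets")
```

The point of the simulation is that tuning time stays small (one probe, one index copy, one data bucket) regardless of where in the cycle the client tunes in, while access time grows with the amount of index replication, which is exactly the trade-off the two schemes balance.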
Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> INTRODUCTION <s> Discover BIM: A better way to build better buildings. Building Information Modeling (BIM) is a new approach to design, construction, and facility management in which a digital representation of the building process is used to facilitate the exchange and interoperability of information in digital format. BIM is beginning to change the way buildings look, the way they function, and the ways in which they are designed and built. BIM Handbook: A Guide to Building Information Modeling for Owners, Managers, Designers, Engineers, and Contractors provides an in-depth understanding of BIM technologies, the business and organizational issues associated with its implementation, and the profound advantages that effective use of BIM can provide to all members of a project team. The Handbook: Introduces Building Information Modeling and the technologies that support it Reviews BIM and its related technologies, in particular parametric and object-oriented modeling, its potential benefits, its costs, and needed infrastructure Explains how designing, constructing, and operating buildings with BIM differs from pursuing the same activities in the traditional way using drawings, whether paper or electronic Discusses the present and future influences of BIM on regulatory agencies; legal practice associated with the building industry; and manufacturers of building products Presents a rich set of BIM case studies and describes various BIM tools and technologies Shows how specific disciplines (owners, designers, contractors, and fabricators) can adopt and implement BIM in their companies Explores BIM's current and future impact on industry and society Painting a colorful and thorough picture of the state of the art in Building Information Modeling, the BIM Handbook guides readers to successful implementations, helping them to avoid needless frustration and costs and take full advantage of this paradigm-shifting approach to build better buildings, that consume fewer materials, and require less time, labor, and capital resources. <s> BIB001 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> INTRODUCTION <s> Abstract Building information modelling (BIM) has been a dominant topic in information technology in construction research since this memorable acronym replaced the boring "product modelling in construction" and the academic "conceptual modelling of buildings". The ideal of having a complete, coherent, true digital representation of buildings has become a goal of scientific research, software development and industrial application. In this paper, the author asks and answers ten key questions about BIM, including what it is, how it will develop, how real are the promises and fears of BIM and what is its impact. The arguments in the answers are based on an understanding of BIM that considers BIM in the frame of structure-function-behavior paradigm. As a structure, BIM is a database with many remaining database challenges. The function of BIM is building information management. Building information was managed before the invention of digital computers and is managed today with computers. The goal is efficient support of business processes, such as with database-management systems.
BIM behaves as a socio-technical system; it changes institutions, businesses, business models, education, workplaces and careers and is also changed by the environment in which it operates. Game theory and institutional theory provide a good framework to study its adoption. The most important contribution of BIM is not that it is a tool of automation or integration but a tool of further specialization. Specialization is a key to the division of labor, which results in using more knowledge, in higher productivity and in greater creativity. <s> BIB002
Building Information Modelling (BIM) has received much attention in academia and the architecture/engineering/construction sector BIB001 . BIM is defined by the US National BIM Standard as "A digital representation of physical and functional characteristics of a facility and a shared knowledge resource for information about a facility forming a reliable basis for decisions during its life-cycle; defined as existing from earliest conception to demolition" BIB001 . In broader terms, "BIM refers to a combination or a set of technologies and organizational solutions that are expected to increase inter-organizational and disciplinary collaboration in the construction industry and to improve the productivity and quality of the design, construction, and maintenance of buildings" BIB002 . According to the report on construction industry informatization development in China, BIM currently involves many kinds of technology, such as 3D scanning, the Internet of Things (IoT), Geographic Information Systems (GIS) and 3D printing, and is applicable to many aspects of building management. According to Isikdag [3] , the first evolution of BIM was from being a shared warehouse of information to an information management strategy. BIM is now evolving from an information management strategy into a construction management method; sensor networks and the IoT are technologies needed in this evolution. The information provided by sensors, when integrated with the building information, becomes valuable in transforming the building information into meaningful, full-state information that is more accurate and up-to-date. Therefore, this paper provides a brief review to evaluate and clarify the state of the art in the integration of BIM and sensor technology. A systematic approach was adopted in reviewing related publications. Methods of integrating the two technologies were reviewed. A brief summary is given to highlight research gaps and recommend future research.
Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Integration Methods <s> Only very few constructed facilities today have a complete record of as-built information. Despite the growing use of Building Information Modelling and the improvement in as-built records, several more years will be required before guidelines that require as-built data modelling will be implemented for the majority of constructed facilities, and this will still not address the stock of existing buildings. A technical solution for scanning buildings and compiling Building Information Models is needed. However, this is a multidisciplinary problem, requiring expertise in scanning, computer vision and videogrammetry, machine learning, and parametric object modelling. This paper outlines the technical approach proposed by a consortium of researchers that has gathered to tackle the ambitious goal of automating as-built modelling as far as possible. The top level framework of the proposed solution is presented, and each process, input and output is explained, along with the steps needed to validate them. Preliminary experiments on the earlier stages (i.e. processes) of the framework proposed are conducted and results are shown; the work toward implementation of the remainder is ongoing. <s> BIB001 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Integration Methods <s> Rehabilitation of the existing building stock is a key measure for reaching the proposed reduction in energy consumption and CO2 emissions in all countries. Building Information Models stand as an optimal solution for works management and decision-making assessment, due to their capacity to coordinate all the information needed for the diagnosis of the building and the planning of the rehabilitation works. If these models are generated from laser scanning point clouds automatically textured with thermographic and RGB images, their capacities are exponentially increased, since also their visualization and not only the consultation of their data increases the information available from the building. Since laser scanning, infrared thermography and photography are techniques that acquire information of the object as-is, the resulting BIM includes information on the real condition of the building in the moment of inspection, consequently helping to a more efficient planning of the rehabilitation works, enabling the repair of the most severe faults. This paper proposes a methodology for the automatic generation of textured as-built models, starting with data acquisition and continuing with geometric and thermographic data processing. <s> BIB002 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Integration Methods <s> This paper explores how gamification can provide the platform for integrating Building Information Modeling (BIM) together with the emergent Internet of Things (IoT). The goal of the research is to foster the creation of a testable and persistent virtual building via gaming technology that combines both BIM and IoT. The author discusses the features of each subject area in brief, and points towards the advantages and challenges of integration via gaming technology. 
Hospitals are the specific architectural typology discussed in the paper, as hospitals have particular properties which make them good candidates for study. <s> BIB003 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Integration Methods <s> Background The emerging Building Information Modelling (BIM) in the Architectural, Engineering and Construction (AEC) / Facility Management (FM) industry promotes life cycle process and collaborative way of working. Currently, many efforts have been contributed for professional integrated design / construction / maintenance process, there are very few practical methods that can enable a professional designer to effectively interact and collaborate with end-users/clients on a functional level. Method This paper tries to address the issue via the utilisation of computer game software combined with Building Information Modelling (BIM). Game-engine technology is used due to its intuitive controls, immersive 3D technology and network capabilities that allow for multiple simultaneous users. BIM has been specified due to the growing trend in industry for the adoption of the design method and the 3D nature of the models, which suit a game engine's capabilities. Results The prototype system created in this paper is based around a designer creating a structure using BIM and this being transferred into the game engine automatically through a two-way data transferring channel. This model is then used in the game engine across a number of network connected client ends to allow end-users to change/add elements to the design, and those changes will be synchronized back to the original design conducted by the professional designer. The system has been tested for its robustness and functionality against the development requirements, and the results showed promising potential to support more collaborative and interactive design process. Conclusion It was concluded that this process of involving the end-user could be very useful in certain circumstances to better elaborate the end user's requirement to design team in real time and in an efficient way. <s> BIB004 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Integration Methods <s> Building Information Modelling (BIM) has become a very important part of the construction industry, and it has not yet reached its full potential, but the use of BIM is not just limited to the construction industry. The aim of BIM is to provide a complete solution for the life cycle of the built environment from the design stage to construction and then to operation. One of the biggest challenges faced by the facility managers is to manage the operation of the infrastructure sustainably; however, this can be achieved through the installation of a Building Management System (BMS). Currently, the use of BIM in facilities management is limited because it does not offer real-time building data integration, which is vital for infrastructure operation. This chapter investigates the integration of real-time data from the BMS system into a BIM model, which would aid facility managers to interact with the real-world environment inside the BIM model. 
We present the use of web socket functionality to transmit and receive data in real-time over the Internet in a 3D game environment to provide a user-friendly system for the facility managers to help them operate their infrastructure more effectively. This novel and interactive approach would provide rich information about the built environment, which would not have been possible without the integration of BMS with BIM. <s> BIB005
This part discusses how BIM can be integrated with sensor technology and mainly covers three subthemes: what kind of sensor should be chosen; how the sensors should be arranged and distributed in the building; and how to integrate BIM with the data collected from sensors, which includes data processing, analysis and presentation technology. The first two subthemes are introduced in different application studies that are mainly focused on information integration technology. Brilakis et al. BIB001 developed an automated algorithm for generating parametric BIM using data acquired by LiDAR (Light Detection and Ranging) or photogrammetry. The algorithm established a classification of building material prototypes, shapes and relationships to each other. The algorithm then recognizes, from the classification, the exact element that fits the spatial and visual descriptions. Modelers are only responsible for model checking and special elements. On this basis, Lagüela et al. BIB002 put forward an automated method for generating textured models. Xiong et al. proposed another integration method, which learns the surface features of different elements and the contextual relationships between objects, labels them as walls, ceilings or floors, and finally conducts detailed analysis to locate openings in the surfaces. Further, Isikdag explained in detail the methods for integrating information provided by the IoT and sensors with BIM, in which several technical problems were solved and a complete framework was presented. The above subthemes provide a macro-level, theoretical technical framework, while some other researchers focused more on actual implementation. Rowland BIB003 proposed gamification as a future direction for integrating BIM and IoT through a study of hospitals; this research showed that gamification can enable better interaction between people and buildings. Edwards et al. BIB004 proposed a prototype that uses a game engine to improve end-users' participation in design work, demonstrating that using a game engine for information interaction is convenient. Moreover, Khalid et al. BIB005 conducted a more detailed evaluation of databases and data formats. In that research, two kinds of database were compared, MongoDB (a NoSQL database) and MySQL (a SQL database), and two data formats, XML and JSON, were evaluated. In addition, the Unity 3D game engine proved to be efficient in dealing with scenes that have a large number of vertices.
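As a concrete illustration of the kind of data exchange discussed above (real-time sensor readings delivered to a BIM or game-engine client over a WebSocket, as in Khalid et al. BIB005), the sketch below serializes one reading as JSON and pushes it to a server. The endpoint URL, the field names and the IFC GlobalId are hypothetical, not a schema taken from the reviewed papers.

```python
# Minimal sketch: push a sensor reading, keyed to a BIM element, to a
# visualization client over a WebSocket as JSON.
import asyncio
import json
from datetime import datetime, timezone

import websockets  # pip install websockets


def make_reading(element_guid: str, sensor_id: str, kind: str, value: float, unit: str) -> str:
    """Serialize one reading as JSON; JSON is used here because it is compact and
    parsed natively by browser and game-engine clients."""
    return json.dumps({
        "element": element_guid,   # IFC GlobalId of the BIM element the sensor is attached to (illustrative)
        "sensor": sensor_id,
        "type": kind,              # e.g. "temperature", "co2", "power"
        "value": value,
        "unit": unit,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })


async def push_reading(url: str, payload: str) -> None:
    # One-shot send; a real system would keep the connection open and stream readings.
    async with websockets.connect(url) as ws:
        await ws.send(payload)


if __name__ == "__main__":
    msg = make_reading("2O2Fr$t4X7Zf8NOew3FLOH", "T-101", "temperature", 22.8, "degC")
    asyncio.run(push_reading("ws://localhost:8765/bim", msg))
```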
Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Sustainable Building <s> Evaluating a building's performance usually requires a high number of sensors especially if individual rooms are analyzed. This paper introduces a simple and scalable model-based virtual sensor that allows analysis of a buildings' heat consumption down to room level using mainly simple temperature sensors. The approach is demonstrated with different sensor models for a case study of a building that contains a hybrid HVAC system and uses fossil and renewable energy-sources. The results show that, even with simple sensor models, reasonable estimations of rooms' heat consumption are possible and that rooms with high heat consumption are identified. Further, the paper illustrates how virtual sensors for thermal comfort can support the decision making to identify the best ways to optimize building system efficiency while reducing the building monitoring cost. <s> BIB001 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Sustainable Building <s> Opportunities for improving energy efficiency can be recognized in many different ways. Energy benchmarking is a critical step of building retrofit projects because it provides baseline energy information that could help building stakeholders identify energy performance, understand energy requirements, and prioritize potential retrofit opportunities. Sub-metering is one of the important energy benchmarking options for owners of aging commercial buildings in order to obtain critical energy information and develop an energy baseline model; however, it oftentimes lacks baseline energy models collecting granular energy information. This paper discusses the implementation of cost effective energy baseline models supported by wireless sensor networks and Building Information Modeling (BIM). The research team focused on integrating the theories and technologies of BIM, wireless sensor networks, and energy simulations that can be employed and adopted in building retrofit practices. The research activities conducted in this project provide an understanding of the current status and investigate the potentials of the system that would impact the future implementation. The result from a proof of concept project is summarized in order to demonstrate the effectiveness of the proposed system. <s> BIB002 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Sustainable Building <s> Abstract The increase in data center operating costs is driving innovation to improve their energy efficiency. Previous research has investigated computational and physical control intervention strategies to alleviate the competition between energy consumption and thermal performance in data center operation. This study contributes to the body of knowledge by proposing a cyber-physical systems (CPS) approach to innovatively integrate building information modeling (BIM) and wireless sensor networks (WSN). In the proposed framework, wireless sensors are deployed strategically to monitor thermal performance parameters in response to runtime server load distribution. 
Sensor data are collected and contextualized in reference to the building information model that captures the geometric and functional characteristics of the data center, which will be used as inputs of continuous simulations aiming to predict real-time thermal performance of server working environment. Comparing the simulation results against historical performance data via machine learning and data mining, facility managers can quickly pinpoint thermal hot zones and actuate intervention procedures to improve energy efficiency. This BIM-WSN integration also facilitates smarter power management by capping runtime power demand within peak power capacity of data centers and alerting power outage emergencies. This paper lays out the BIM-WSN integration framework, explains the working mechanism, and discusses the feasibility of implementation in future work. <s> BIB003 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Sustainable Building <s> The research presents a methodology and tool development which delineates the performance-based building active control system. We demonstrate the integration of environment sensors, the parametric engine, and interactive facade components by using the BIM-based system called Sync-BIM. It is developed by the BIM-based parametric engine called Dynamo. The Dynamo engine works as the building brain to determine the interactive control scenarios between buildings and surroundings micro-climate conditions. There are three sequential procedures, 1. data input, 2. scenario processing, and 3. command output, to loop the interactive control scenarios. The kinetic facade prototype embedded with the Sync-BIM system adopts the daylight values as the parameter to control the transformation of facade units. The kinetic facade units dynamically harvest the daylight via opening ratios for the sake of higher building energy performance. <s> BIB004 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Sustainable Building <s> The building sector releases 36% of global CO2 emissions, with 66% of emissions occurring during the operation stage of the life cycle of a building. While current research focuses on using Building Information Modelling (BIM) for energy management of a building, there is little research on visualizing building carbon emission data in BIM to support decision making during the operation phase. This paper proposes an approach for gathering, analyzing and visualizing building carbon emissions data by integrating BIM and carbon estimation models, to assist the building operation management teams in discovering carbon emissions problems and reducing total carbon emission. Data requirements, carbon emission estimation algorithms, and the integration mechanism with BIM are investigated in this paper. A case is used to demonstrate the proposed approach. The approach described in this paper provides the inhabitants important graphical representation of data to determine a building's sustainability performance, and can allow policy makers and building managers to make informed decisions and respond quickly to emergency situations. <s> BIB005
This theme is concerned with two parts: energy consumption and environmental protection. Building energy consumption research focuses on energy monitoring and on establishing methods to improve energy performance or save energy, while environmental protection research concerns saving resources and reducing carbon emissions. Using sensors to monitor and record the use of energy and resources in buildings is the basis of these methods. There are two methods to monitor building energy consumption. One is to embed various sensors into buildings to collect related data such as temperature, humidity, CO2 and power consumption; the other is to conduct external scanning of the building to acquire its thermal conditions. Woo and Gleason BIB002 established a wireless sensor network (WSN) to collect various data related to energy usage in a building, then used these data to assist building retrofit design with the participation of BIM. Rather than establishing a building retrofit design assistance system, Dong et al. focused on an energy Fault Detection and Diagnostics (FDD) system; a building energy management system (BEMS) integrating FDD and BIM was established to save energy. Ploennigs et al. BIB001 BIB003 used a WSN to monitor the operational energy consumption of a data center, introducing BIM to predict the real-time thermal performance of the server working environment; by comparing predicted outcomes with historical data, operators can quickly discover thermal hot zones and intervene to improve energy efficiency. In contrast to the above research directions, Shen and Wu BIB004 were concerned with adjusting a building's kinetic façade to gain higher energy performance using daylight data acquired via sensors. In the environmental protection subtheme, Howell et al. were concerned with the rational use and conservation of natural resources. They used a sensor network and BIM to monitor the usage of water resources and established an intelligent system to manage water smartly. Similarly, Mousa et al. BIB005 were concerned with carbon emissions from buildings. They established a quantitative relationship between carbon emissions and the energy and natural gas consumption data collected by sensors, and founded a carbon emission model via BIM, which can assist carbon emission management and related decision making.
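As a rough illustration of the carbon-from-energy estimation described for Mousa et al. BIB005, the sketch below converts metered electricity and natural-gas consumption per BIM zone into CO2-equivalent figures. The emission factors, zone names and consumption values are assumed placeholders, not values from that work; real factors depend on the local grid mix and fuel data.

```python
# Minimal sketch: estimate CO2-equivalent emissions per BIM zone from sub-metered
# energy data.  The factors below are illustrative assumptions only.

ELECTRICITY_FACTOR_KG_PER_KWH = 0.40   # assumed grid factor, kg CO2e / kWh
GAS_FACTOR_KG_PER_KWH = 0.18           # assumed natural-gas factor, kg CO2e / kWh


def zone_emissions(electricity_kwh: float, gas_kwh: float) -> float:
    """Return kg CO2e for one zone over the metering interval."""
    return (electricity_kwh * ELECTRICITY_FACTOR_KG_PER_KWH
            + gas_kwh * GAS_FACTOR_KG_PER_KWH)


# Sensor/sub-meter totals per BIM zone for one day (illustrative numbers).
metered = {
    "Zone-L2-Office": {"electricity_kwh": 310.0, "gas_kwh": 95.0},
    "Zone-L1-Lobby":  {"electricity_kwh": 120.0, "gas_kwh": 40.0},
}

for zone, m in metered.items():
    kg = zone_emissions(m["electricity_kwh"], m["gas_kwh"])
    # The result could be written back to the zone's BIM properties for visualization.
    print(f"{zone}: {kg:.1f} kg CO2e")
```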
Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Site Management <s> Abstract Tower crane operators often operate a tower crane with blind spots. To solve this problem, video camera systems and anti-collision systems are often deployed. However, the current video camera systems do not provide accurate distance and understanding of the crane's surroundings. A collision-detection system provides location information only as numerical data. This study introduces a newly developed tower crane navigation system that provides three-dimensional information about the building and surroundings and the position of the lifted object in real time using various sensors and a building information modeling (BIM) model. The system quality was evaluated in terms of two aspects, "ease of use" and "usefulness," based on the Technology Acceptance Model (TAM) theory. The perceived ease of use of the system was improved from the initial 3.2 to 4.4 through an iterative design process. The tower crane navigation system was deployed on an actual construction site for 71 days, and the use patterns were video recorded. The results clearly indicated that the tower crane operators relied heavily on the tower crane navigation system during blind lifts (93.33%) compared to the text-based anti-collision system (6.67%). <s> BIB001 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Site Management <s> We introduce a formalized ontology modeling construction sequencing rationale. The presented ontology with BIM can infer state of progress in case of limited visibility. And also in case of higher LoDs and more detailed WBS compared to the underlying 4D BIM. The ontology and classification mechanisms are validated using Charrette test. Their application is shown together with BIM and as-built data on real-world projects. Over the last few years, new methods that detect construction progress deviations by comparing laser scanning or image-based point clouds with 4D BIM are developed. To create complete as-built models, these methods require the visual sensors to have proper line-of-sight and field-of-view to building elements. For reporting progress deviations, they also require Building Information Modeling (BIM) and schedule Work-Breakdown-Structure (WBS) with high Level of Development (LoD). While certain logics behind sequences of construction activities can augment 4D BIM with lower LoDs to support making inferences about states of progress under limited visibility, their application in visual monitoring systems has not been explored. To address these limitations, this paper formalizes an ontology that models construction sequencing rationale such as physical relationships among components. It also presents a classification mechanism that integrates this ontology with BIM to infer states of progress for partially and fully occluded components. The ontology and classification mechanism are validated using a Charrette test and by presenting their application together with BIM and as-built data on real-world projects. The results demonstrate the effectiveness and generality of the proposed ontology. It also illustrates how the classification mechanism augments 4D BIM at lower LoDs and WBS to enable visual progress assessment for partially and fully occluded BIM elements and provide detailed operational-level progress information.
<s> BIB002 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Site Management <s> Building information models (BIMs) provide opportunities to serve as an information repository to store and deliver as-built information. Since a building is not always constructed exactly as the design information specifies, there will be discrepancies between a BIM created in the design phase (called as-designed BIM) and the as-built conditions. Point clouds captured by laser scans can be used as a reference to update an as-designed BIM into an as-built BIM (i.e., the BIM that captures the as-built information). Occlusions and construction progress prevent a laser scan performed at a single point in time to capture a complete view of building components. Progressively scanning a building during the construction phase and combining the progressively captured point cloud data together can provide the geometric information missing in the point cloud data captured previously. However, combining all point cloud data will result in large file sizes and might not always guarantee additional building component information. This paper provides the details of an approach developed to help engineers decide on which progressively captured point cloud data to combine in order to get more geometric information and eliminate large file sizes due to redundant point clouds. <s> BIB003 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Site Management <s> According to US Bureau of Labor Statistics (BLS), in 2013 around three hundred deaths occurred in the US construction industry due to exposure to hazardous environment. A study of international safety regulations suggests that lack of oxygen and temperature extremes contribute to hazardous work environments particularly in confined spaces. Real-time monitoring of these confined work environments through wireless sensor technology is useful for assessing their thermal conditions. Moreover, Building Information Modeling (BIM) platform provides an opportunity to incorporate sensor data for improved visualization through new add-ins in BIM software. In an attempt to reduce Health and Safety (HS notifies HS and ultimately attempts to analyze sensor data to reduce emergency situations encountered by workers operating in confined environments. However, fusing the BIM data with sensor data streams will challenge the traditional approaches to data management due to huge volume of data. This work reports upon these challenges encountered in the prototype system. <s> BIB004 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Site Management <s> Background ::: Deep excavations in urban areas have the potential to cause unfavorable effects on ground stability and nearby structures. Thus, it is necessary to evaluate and monitor the environmental impact during deep excavation construction processes. Generally, construction project teams will set up monitoring instruments to control and monitor the overall environmental status, especially during the construction of retaining walls, main excavations, and when groundwater is involved. Large volumes of monitoring data and project information are typically created as the construction project progresses, making it increasingly difficult to manage them comprehensively. 
<s> BIB005 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Site Management <s> Building Information Modelling (BIM) has become a very important part of the construction industry, and it has not yet reached its full potential, but the use of BIM is not just limited to the construction industry. The aim of BIM is to provide a complete solution for the life cycle of the built environment from the design stage to construction and then to operation. One of the biggest challenges faced by the facility managers is to manage the operation of the infrastructure sustainably; however, this can be achieved through the installation of a Building Management System (BMS). Currently, the use of BIM in facilities management is limited because it does not offer real-time building data integration, which is vital for infrastructure operation. This chapter investigates the integration of real-time data from the BMS system into a BIM model, which would aid facility managers to interact with the real-world environment inside the BIM model. We present the use of web socket functionality to transmit and receive data in real-time over the Internet in a 3D game environment to provide a user-friendly system for the facility managers to help them operate their infrastructure more effectively. This novel and interactive approach would provide rich information about the built environment, which would not have been possible without the integration of BMS with BIM. <s> BIB006 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Site Management <s> AbstractConstruction sites need to be monitored continuously to detect unsafe conditions and protect workers from potential injuries and fatal accidents. In current practices, construction-safety monitoring relies heavily on manual observation, which is labor-intensive and error-prone. Due to the complex environment of construction sites, it is extremely challenging for safety inspectors to continuously monitor and manually identify all incidents that may expose workers to safety risks. There exist many research efforts applying sensing technologies to construction sites to reduce the manual efforts associated with construction-safety monitoring. However, several bottlenecks are identified in applying these technologies to the onsite safety monitoring process, including (1) recognition and registration of potential hazards, (2) real-time detection of unsafe incidents, and (3) reporting and sharing of the detected incidents with relevant participants in a timely manner. The objective of this study was to c... <s> BIB007
This theme includes many aspects, such as operation of site equipment, monitoring of the site environment, site safety management and construction quality management. Various types of sensors are used in site management because it involves many aspects. Alizadehsalehi and Yitmen conducted a survey of construction companies and found that, in terms of popularity, the Global Positioning System (GPS) and WSN are important technologies for automated construction project progress monitoring (ACCPM). Moreover, Siddiqui introduced a site distribution scheme and management strategy for sensors. The use of 3D laser scanning to generate point clouds is helpful for project progress monitoring, but several issues were raised by Han et al. BIB002 : lack of detail in as-planned BIM, high-level work breakdown structures (WBS) in construction schedules, and static and dynamic occlusions with incomplete data collection. Another study, by Gao et al. BIB003 , concerned updating BIM according to the as-built model generated by scanning devices; a progressively captured point cloud method was developed to evaluate redundant information in the point clouds and decide which point clouds should be merged. This update method can help project managers acquire the actual BIM so that they can make reasonable decisions. Rather than focusing on generating the actual as-built BIM and progress management, some other researchers were concerned with site safety. Riaz et al. BIB004 discussed the focus of CoSMoS (Confined Space Monitoring System) for real-time safety management to reduce the effects of environmental hazards in the construction industry. They proposed using sensors to collect real-time site environment data and storing the data in a SQL Server database; CoSMoS is invoked as a software add-in from the Revit Architecture GUI to realize data visualization, which differs from the conclusion of Khalid et al. BIB006 that a NoSQL database is preferred. In addition, Park et al. BIB007 approached site safety management in a different way, demonstrating an automated safety monitoring approach that integrates Bluetooth Low Energy (BLE) based location tracking, BIM, and cloud-based communication. Wu et al. BIB005 were concerned not only with safety management but also with environmental protection, suggesting the use of various sensors to monitor the project and the integration of project data, 3D models, stratum data, analysis data and monitoring data into BIM to establish a BIM-based monitoring system for urban deep excavation projects. In site management, sensors can not only support progress management and safety control but are also helpful for equipment operation. Lee et al. BIB001 proposed a BIM- and sensor-based tower crane navigation system to help crane operators with blind spots.
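A minimal sketch of the location-against-hazard-zone check underlying approaches such as Park et al. BIB007 is given below. The zone geometry, coordinates and alert handling are invented for illustration and are far simpler than the cited system, which also involves BIM-based hazard registration and cloud-based reporting.

```python
# Minimal sketch: flag a worker whose BLE-estimated position falls inside a hazard
# zone extracted from the BIM.  Zones are axis-aligned boxes in site coordinates;
# names and coordinates are made up for the example.
from dataclasses import dataclass


@dataclass
class HazardZone:
    name: str
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    z_min: float
    z_max: float

    def contains(self, x: float, y: float, z: float) -> bool:
        return (self.x_min <= x <= self.x_max and
                self.y_min <= y <= self.y_max and
                self.z_min <= z <= self.z_max)


def check_worker(position, zones):
    """Return the names of hazard zones containing the worker's position."""
    x, y, z = position
    return [zone.name for zone in zones if zone.contains(x, y, z)]


zones = [
    HazardZone("open shaft, level 3", 12.0, 15.0, 4.0, 7.0, 9.0, 12.5),
    HazardZone("crane swing radius", -5.0, 25.0, -5.0, 25.0, 0.0, 60.0),
]

# Position estimated from BLE beacons (site coordinates, metres).
alerts = check_worker((13.1, 5.2, 10.0), zones)
if alerts:
    print("ALERT: worker inside:", ", ".join(alerts))  # would be pushed to supervisors in real time
```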
Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Operation and Maintenance <s> This paper presents results of the first phase of the research project ''Serious Human Rescue Game'' at Technische Universitat Darmstadt. It presents a new serious gaming approach based on Building Information Modeling (BIM) for the exploration of the effect of building condition on human behavior during the evacuation process. In reality it is impossible to conduct rescue tests in burning buildings to study the human behavior. Therefore, the current methods of data-collecting for existing evacuation simulation models have limitations regarding the individual human factors. To overcome these limitations the research hypothesis is that the human behavior can be explored with a serious computer game: The decisions of a person during the game should be comparable to decisions during an extreme situation in the real world. To verify this hypothesis, this paper introduces a serious gaming approach for analyzing the human behavior in extreme situations. To implement a serious game, developers generally make use of 3D-modeling software to generate the game content. After this, the game logic needs to be added to the content with special software development kits for computer games. Every new game scenario has to be built manually from scratch. This is time-consuming and a great share of modeling work needs to be executed twice (e.g., 3D-modeling), at first by the architect for the parametric building model and the second time by the game designer for the 3D-game content. The key idea of the presented approach is to use the capabilities of BIM together with engineering simulations (fire, smoke) to build realistic serious game scenarios in a new and efficient way. This paper presents the first phase results of the research project mainly focusing on the conceptual design of the serious game prototype. The validation concept is also presented. The inter-operability between building information modeling applications and serious gaming platforms should allow different stakeholders to simulate building-related scenarios in a new, interactive and efficient way. <s> BIB001 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Operation and Maintenance <s> Abstract Rapid transit systems are considered a sustainable mode of transportation compared to other modes of transportation taking into consideration number of passengers, energy consumed and amount of pollution emitted. Building Information Modeling (BIM) is utilized in this research along with a global ranking system to monitor Indoor Environmental Quality (IEQ) in subway stations. The research is concerned with developing global rating system for subway stations' networks. The developed framework is capable of monitoring indoor temperature and Particulate Matter (PM) concentration levels in subway stations. A rating system is developed using Simos' ranking method in order to determine the weights of different components contributing to the whole level of service of a subway station as well as maintenance priority indices. A case study is presented to illustrate the use of the proposed system. The developed ranking system showed its effectiveness in ranking maintenance actions globally. 
<s> BIB002 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Operation and Maintenance <s> Abstract The ability to locate people quickly and accurately in buildings is critical to the success of building fire emergency response operations, and can potentially contribute to the reduction of various building fire-caused casualties and injuries. This paper introduces an environment aware beacon deployment algorithm designed by the authors to support a sequence based localization schema for locating first responders and trapped occupants at building fire emergency scenes. The algorithm is designed to achieve dual objectives of improving room-level localization accuracy and reducing the effort required to deploy an ad-hoc sensor network, as the required sensing infrastructure is presumably unavailable at most emergency scenes. The deployment effort is measured by the number of beacons to deploy, and the location accessibility to deploy the beacons. The proposed algorithm is building information modeling (BIM) centered, where BIM is integrated to provide the geometric information of the sensing area as input to the algorithm for computing space division quality, a metric that measures the likelihood of correct room-level estimations and associated deployment effort. BIM also provides a graphical interface for user interaction. Metaheuristics are integrated to efficiently search for a satisfactory solution in order to reduce the computational time, which is crucial for the success of emergency response operations. The algorithm was evaluated by simulation, where two building fire emergency scenarios were simulated. The tabu search, which employs dynamically generated constraints to guide the search for optimum solutions, was found to be the most efficient among three tuned tested metaheuristics. The algorithm yielded an average room-level accuracy of 87.1% and 32.1% less deployment effort on average compared with random beacon placements. The robustness of the algorithm was also examined as the deployed ad-hoc sensor network is subject to various hazards at emergency scenes. Results showed that the room-level accuracy could remain above 80% when up to 54% of all deployed nodes were damaged. The tradeoff between the space division quality and deployment effort was also examined, which revealed the relationship between the total deployment effort and localization accuracy. <s> BIB003 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Operation and Maintenance <s> Abstract Increasing size and complexity of indoor structures and increased urbanization has led to much more complication in urban disaster management. Contrary to outdoor environments, first responders and planners have limited information regarding the indoor areas in terms of architectural and semantic information as well as how they interact with their surroundings in case of indoor disasters. Availability of such information could help decision makers interact with building information and thus make more efficient planning prior to entering the disaster site. 
In addition as the indoor travel times required to reach specific areas in the building could be much longer compared to outdoor, visualizing the exact location of building rooms and utilities in 3D helps in visually communicating the indoor spatial information which could eventually result in decreased routing uncertainty inside the structures and help in more informed navigation strategies. This work aims at overcoming the insufficiencies of existing indoor modelling approaches by proposing a new Indoor Emergency Spatial Model (IESM) based on IFC. The model integrates 3D indoor architectural and semantic information required by first responders during indoor disasters with outdoor geographical information to improve situational awareness about both interiors of buildings as well as their interactions with outdoor components. The model is implemented and tested using the Esri GIS platform. The paper discusses the effectiveness of the model in both decision making and navigation by demonstrating the model's indoor spatial analysis capabilities and how it improves destination travel times. <s> BIB004
This direction includes many aspects, such as indoor environment monitoring and conditioning, user experience optimisation, emergency management and facility management. Marzouk and Abdelaty BIB002 used a WSN to collect PM10, PM2.5, temperature and humidity data and proposed a global ranking system integrated with BIM to monitor the environmental quality of subway stations, through which maintenance priority indices (MPIs) were developed to help managers allocate funds. Costa et al. were also concerned with indoor environment monitoring, combining energy saving with improvements to the indoor environment and users' experience; CO2, VOCs, humidity, temperature and occupancy rate were detected via sensors. Other researchers addressed emergency management in a more detailed way. Li et al. BIB003 developed a BIM-centred positioning algorithm based on Sequence Based Localization (SBL), which aims to rationalise sensor deployment cost while obtaining higher positioning accuracy. Rather than concentrating on positioning, Tashakkori et al. BIB004 established an outdoor/indoor 3D emergency spatial model to help rescuers understand the building and its surroundings, optimize the rescue route and realize indoor navigation, where the dynamic and semantic building information is collected by indoor environment sensors. Similarly, Rüppel and Schatz BIB001 established a serious human rescue game and chose the agent-based simulation software FDS+Evac for further analysis, using cameras and Radio-Frequency Identification (RFID) to test and verify the game.
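To illustrate the idea behind sequence-based localization as used by Li et al. BIB003, the sketch below ranks beacons by received signal strength and matches that ranking against precomputed per-room rankings derived from the geometry. The beacon IDs, distances, RSSI values and the choice of Spearman's footrule as the rank-distance metric are illustrative assumptions, not details taken from the cited algorithm.

```python
# Minimal sketch of sequence-based room-level localization: the room whose
# precomputed beacon-distance ranking best matches the observed signal-strength
# ranking is chosen as the estimate.

def ranking(values, reverse=False):
    """Map item -> rank (0 = nearest / strongest)."""
    ordered = sorted(values, key=values.get, reverse=reverse)
    return {item: rank for rank, item in enumerate(ordered)}


def footrule(rank_a, rank_b):
    """Spearman's footrule: sum of absolute rank differences over common items."""
    return sum(abs(rank_a[i] - rank_b[i]) for i in rank_a)


# Precomputed per-room beacon rankings by distance (e.g. from the BIM geometry).
room_rankings = {
    "Room 101": ranking({"B1": 2.0, "B2": 6.5, "B3": 9.0, "B4": 12.0}),
    "Room 102": ranking({"B1": 7.0, "B2": 2.5, "B3": 5.0, "B4": 10.0}),
    "Room 103": ranking({"B1": 11.0, "B2": 6.0, "B3": 2.0, "B4": 4.5}),
}

# Observed RSSI (dBm): stronger (less negative) is assumed closer, hence reverse=True.
observed = ranking({"B1": -72.0, "B2": -55.0, "B3": -63.0, "B4": -80.0}, reverse=True)

best_room = min(room_rankings, key=lambda r: footrule(room_rankings[r], observed))
print("estimated room:", best_room)
```

Because only the ordering of signal strengths is used, the approach tolerates the noisy absolute RSSI values typical of fire scenes better than distance trilateration would, which is part of the appeal of sequence-based schemes.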
Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Structural Health Monitoring <s> AbstractBuilding information modelling (BIM) represents the process of development and use of a computer generated model to simulate the planning, design, construction and operation of a building. The utilisation of building information models has increased in recent years due to their economic benefits in design and construction phases and in building management. BIM has been widely applied in the design and construction of new buildings but rarely in the management of existing ones. The point of creating a BIM model for an existing building is to produce accurate information related to the building, including its physical and functional characteristics, geometry and inner spatial relationships. The case study provides a critical appraisal of the process of both collecting accurate survey data using a terrestrial laser scanner combined with a total station and creating a BIM model as the basis of a digital management model. The case study shows that it is possible to detect and define facade damage by in... <s> BIB001 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Structural Health Monitoring <s> In the construction process, real-time quality control and early defects detection are still the most significant approach to reducing project schedule and cost overrun. Current approaches for quality control on construction sites are time-consuming and ineffective since they only provide data at specific locations and times to represent the work in place, which limit a quality manager’s abilities to easily identify and manage defects. The purpose of this paper is to develop an integrated system of Building Information Modelling (BIM) and Light Detection and Ranging (LiDAR) to come up with real-time onsite quality information collecting and processing for construction quality control. Three major research activities were carried out systematically, namely, literature review and investigation, system development and system evaluation. The proposed BIM and LiDAR-based construction quality control system were discussed in five sections: LiDAR-based real-time tracking system, BIM-based real-time checking system, quality control system, point cloud coordinate transformation system, and data processing system. Then, the system prototype was developed for demonstrating the functions of flight path control and real-time construction quality deviation analysis. Finally, three case studies or pilot projects were selected to evaluate the developed system. The results show that the system is able to efficiently identify potential construction defects and support real-time quality control. <s> BIB002 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Structural Health Monitoring <s> Abstract Approximately 20% of accidents in construction industry occur while workers are moving through a construction site. Current construction hazard identification mostly relies on safety managers' capabilities to detect hazards. Consequently, numerous hazards remain unidentified, and unidentified hazards mean that hazards are not included in the safety management process. 
To enhance the capability of hazard identification, this paper proposes an automated hazardous area identification model based on the deviation between the optimal route (shortest path)—which is determined by extracting nodes from objects in a building information model (BIM)—and the actual route of a laborer collected from the real-time location system (RTLS). The hazardous area identification framework consists of six DBs and three modules. The unidentified hazardous area identification module identifies potentially hazardous areas (PHAs) in laborers' paths. The filtering hazardous area module reduces the range of possible hazardous areas to improve the efficiency of safety management. The monitoring and output generation module provides reports including hazardous area information. The suggested model can identify a hazard automatically and decrease the time laborers are exposed to a hazard. This can help improve both the effectiveness of the hazard identification process and enhance the safety for laborers. <s> BIB003
This theme concerns the monitoring of the mechanical state of structures and the discovery of structural defects. Structural defects can be divided into two types: partial (local) defects, such as cracks and excessive deflection of concrete elements, and integral (global) defects, such as poor verticality and flatness of structural elements. Kim et al. BIB003 addressed manual hazard inspections by using RFID to trace workers and record their routes, planning an optimal route into the construction site, reducing the potentially hazardous areas workers pass through and generating a report on hazardous-area information. Other researchers focused on automated structural health examination rather than manual inspection. Mill et al. BIB001 used laser scanning for outdoor building surveys and total stations for indoor surveys, establishing a geodetic network and building a BIM by importing and merging the data; the resulting model can detect and define the degree of façade damage. Wang et al. BIB002 proposed a different detection method, a system integrating BIM and LiDAR for real-time construction quality control. Additionally, Zhang and Bai proposed a breakage-triggered strain sensing approach based on RFID tags to check whether structural deformation exceeds a threshold: modifying the RFID tag changes the power returned to the RFID reader/antenna. This method helps engineers recognise the strain status of a structure and make decisions.
Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Positioning and Tracing <s> The purposes of this research are to develop and evaluate a framework that utilizes the integration of commercially-available Radio Frequency Identification (RFID) and a BIM model for real-time resource location tracking within an indoor environment. A focus of this paper is to introduce the framework and explain why building models currently lack the integration of sensor data. The need will be explained with potential applications in construction and facility management. Algorithms to process RFID signals and integrate the generated information in BIM will be presented. Furthermore, to demonstrate the benefits of location tracking technology and its integration in BIM, the paper provides a preliminary demonstration on tracking valuable assets inside buildings in real-time. The preliminary results provided the feasibility of integrating passive RFID with BIM for indoor settings. <s> BIB001 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Positioning and Tracing <s> Abstract The ability to locate people quickly and accurately in buildings is critical to the success of building fire emergency response operations, and can potentially contribute to the reduction of various building fire-caused casualties and injuries. This paper introduces an environment aware beacon deployment algorithm designed by the authors to support a sequence based localization schema for locating first responders and trapped occupants at building fire emergency scenes. The algorithm is designed to achieve dual objectives of improving room-level localization accuracy and reducing the effort required to deploy an ad-hoc sensor network, as the required sensing infrastructure is presumably unavailable at most emergency scenes. The deployment effort is measured by the number of beacons to deploy, and the location accessibility to deploy the beacons. The proposed algorithm is building information modeling (BIM) centered, where BIM is integrated to provide the geometric information of the sensing area as input to the algorithm for computing space division quality, a metric that measures the likelihood of correct room-level estimations and associated deployment effort. BIM also provides a graphical interface for user interaction. Metaheuristics are integrated to efficiently search for a satisfactory solution in order to reduce the computational time, which is crucial for the success of emergency response operations. The algorithm was evaluated by simulation, where two building fire emergency scenarios were simulated. The tabu search, which employs dynamically generated constraints to guide the search for optimum solutions, was found to be the most efficient among three tuned tested metaheuristics. The algorithm yielded an average room-level accuracy of 87.1% and 32.1% less deployment effort on average compared with random beacon placements. The robustness of the algorithm was also examined as the deployed ad-hoc sensor network is subject to various hazards at emergency scenes. Results showed that the room-level accuracy could remain above 80% when up to 54% of all deployed nodes were damaged. The tradeoff between the space division quality and deployment effort was also examined, which revealed the relationship between the total deployment effort and localization accuracy. 
<s> BIB002 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Positioning and Tracing <s> AbstractConstruction sites need to be monitored continuously to detect unsafe conditions and protect workers from potential injuries and fatal accidents. In current practices, construction-safety monitoring relies heavily on manual observation, which is labor-intensive and error-prone. Due to the complex environment of construction sites, it is extremely challenging for safety inspectors to continuously monitor and manually identify all incidents that may expose workers to safety risks. There exist many research efforts applying sensing technologies to construction sites to reduce the manual efforts associated with construction-safety monitoring. However, several bottlenecks are identified in applying these technologies to the onsite safety monitoring process, including (1) recognition and registration of potential hazards, (2) real-time detection of unsafe incidents, and (3) reporting and sharing of the detected incidents with relevant participants in a timely manner. The objective of this study was to c... <s> BIB003
This direction develops methods to locate or trace facilities or people inside a building using sensors. Positioning and tracing can be applied on many occasions, such as emergency management, site security management, user experience optimisation and facility management. Costin et al. BIB001 showed that resource location tracking is feasible using BIM and passive RFID, and their 2015 research developed the method further, choosing Tekla Structures as the BIM platform and Trimble ThingMagic as the RFID technology. By integrating BIM and RFID through the Application Programming Interfaces (APIs) of the software and hardware, an algorithm was developed for indoor positioning and tracing; it reduces erroneous readings by 64% and achieves a best accuracy of 1.66 m. The main difference between Costin et al.'s work and the research of Li et al. BIB002 mentioned above lies in the details of the algorithm. Instead of RFID, BLE can also be used for positioning and tracing. Park et al. BIB003 used BLE to locate objects and developed a self-corrective, knowledge-based hybrid tracking system. The system uses BLE beacons to acquire absolute positions and motion sensors to acquire relative positions, and integrates BIM to obtain the geometric information of the building to improve the robustness of tracing. The results show that this hybrid system can reduce the positioning error rate by 42%.
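To make the hybrid tracking idea above more concrete, the following minimal Python sketch fuses dead-reckoned positions from motion sensors with occasional absolute fixes from BLE beacons, and clamps the estimate to a walkable area that could be taken from a BIM export. All names, parameter values and the simple correction rule are illustrative assumptions, not the system described by Park et al.

```python
import numpy as np

def fuse_position(prev_est, motion_delta, beacon_fix=None, alpha=0.7,
                  bim_bounds=((0.0, 50.0), (0.0, 30.0))):
    """One update step of a toy hybrid tracker.

    prev_est     -- previous position estimate, shape (2,)
    motion_delta -- relative displacement from motion sensors (dead reckoning)
    beacon_fix   -- absolute position from a BLE beacon, or None if unavailable
    alpha        -- weight given to the beacon fix when one is available
    bim_bounds   -- (min, max) per axis of the walkable area, e.g. from a BIM model
    """
    est = np.asarray(prev_est, dtype=float) + np.asarray(motion_delta, dtype=float)
    if beacon_fix is not None:
        # Self-correction: pull the dead-reckoned estimate towards the absolute fix.
        est = alpha * np.asarray(beacon_fix, dtype=float) + (1.0 - alpha) * est
    # Use the building geometry to keep the estimate inside the walkable area.
    for axis, (lo, hi) in enumerate(bim_bounds):
        est[axis] = min(max(est[axis], lo), hi)
    return est

# Example: drift accumulates until a beacon fix corrects it.
pos = np.array([10.0, 5.0])
pos = fuse_position(pos, motion_delta=[0.8, 0.1])                       # no beacon in range
pos = fuse_position(pos, motion_delta=[0.7, 0.0], beacon_fix=[12.0, 5.3])
print(pos)
```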
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Methodological Background <s> This book primarily discusses issues related to the mining aspects of data streams and it is unique in its primary focus on the subject. This volume covers mining aspects of data streams comprehensively: each contributed chapter contains a survey on the topic, the key ideas in the field for that particular topic, and future research directions. The book is intended for a professional audience composed of researchers and practitioners in industry. This book is also appropriate for advanced-level students in computer science. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Methodological Background <s> Fixations are widely analysed in human vision, gaze-based interaction, and experimental psychology research. However, robust fixation detection in mobile settings is profoundly challenging given the prevalence of user and gaze target motion. These movements feign a shift in gaze estimates in the frame of reference defined by the eye tracker's scene camera. To address this challenge, we present a novel fixation detection method for head-mounted eye trackers. Our method exploits that, independent of user or gaze target motion, target appearance remains about the same during a fixation. It extracts image information from small regions around the current gaze position and analyses the appearance similarity of these gaze patches across video frames to detect fixations. We evaluate our method using fine-grained fixation annotations on a five-participant indoor dataset (MPIIEgoFixation) with more than 2,300 fixations in total. Our method outperforms commonly used velocity- and dispersion-based algorithms, which highlights its significant potential to analyse scene image information for eye movement detection. <s> BIB002
In this section, we introduce the basics of stream clustering. Most importantly, we describe how data streams are typically aggregated and how algorithms adapt to changes over time. For a consistent notation, we denote vectors by boldface symbols and formally define a data stream as a sequence X = (x_1, x_2, ..., x_N) of potentially unbounded length, where x_t is a single observation with d dimensions at time t. To calculate the distance between clusters, an appropriate distance measure needs to be used. For numerical data, the Euclidean distance between the centroids of the clusters is common, whereas for binary, ordinal, nominal or text data, measures such as the Jaccard index, simple matching coefficient or cosine similarity can be used. In general, finding a good clustering solution is defined as an optimization task. The underlying goal is to maximize intra-cluster homogeneity while simultaneously maximizing inter-cluster heterogeneity. This ensures that objects within the same cluster are similar but different clusters are well separated. There are various strategies that aim to achieve this task. Popular strategies include minimizing intra-cluster distances, minimizing the radii of clusters or finding maximum likelihood estimates. A popular example is the k-means algorithm, which minimizes the within-cluster sum of squares, i.e., the squared distance from data points to their cluster centroids. In a streaming scenario, these optimization objectives are subject to several restrictions regarding the availability and order of the data as well as resource and time limitations. For example, the large volume of data makes it undesirable or infeasible to store all observations of the stream. Typically, observations can only be evaluated once and are discarded afterwards. This requires extracting sufficient information from observations before discarding them. Similarly, the order of observations cannot be influenced. As an illustrative example, let us consider the case of eye tracking, which is typically used to analyze how people perceive content such as websites or advertisements. It records the movement of the eye and detects where a person is looking. An example of a stream of eye tracking data is visualized in Figure 1, showing the pupil positions at three different points in time (grey points) BIB002. In this context, stream clustering can be applied to find the areas of interest or subjects that the person is looking at.

Figure 1 Stream of eye tracking data BIB002 at three different points in time. Grey points denote the normalized pupil centers; their opacity and size are relative to their recency. Circles mark the centers of micro-clusters and crosses the centers of macro-clusters. Both are scaled relative to the number of observations assigned to them.

Throughout this paper, we discuss common strategies that can be used to identify clusters under the streaming restrictions. For example, we could use similarity thresholds to decide whether an observation fits into an existing cluster (Figure 2(a)). Alternatively, we could split the data space into a grid and only store the location of densely populated cells (Figure 2(b)). Other approaches include fitting a model to represent the observed data (Figure 2(c)) or projecting high-dimensional data to a lower-dimensional space (Figure 2(d)).

Figure 2 Categories of stream clustering algorithms.

Generally, these strategies allow capturing the location of dense areas in the data space.
These regions can be considered clusters and they can even be merged when they become too similar over time. However, it is not possible to ever split a cluster again, since the underlying data was discarded and only the centre of the dense region was stored BIB001. To avoid this problem, many stream clustering algorithms divide the process into two phases: an online and an offline component.

Figure 3 Exemplary two-phase stream clustering using a grid-based approach: the online component summarizes the data stream into 'micro-clusters', which are reclustered offline into 'macro-clusters'.
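To make the two-phase idea concrete, the following self-contained Python sketch maintains micro-clusters online using a simple distance threshold (the strategy of Figure 2(a)) and reclusters their centers offline with weighted k-means on demand. It is a generic illustration, not the implementation of any specific surveyed algorithm; the threshold value and the use of scikit-learn's KMeans are arbitrary choices.

```python
import numpy as np
from sklearn.cluster import KMeans

class MicroCluster:
    def __init__(self, x):
        self.n = 1                            # number of absorbed observations
        self.linear_sum = np.array(x, dtype=float)

    def center(self):
        return self.linear_sum / self.n

    def insert(self, x):
        self.n += 1
        self.linear_sum += x

class ThresholdStream:
    def __init__(self, radius=1.0):
        self.radius = radius                  # similarity threshold for absorption
        self.micro = []                       # list of MicroCluster

    def learn_one(self, x):
        # Online phase: absorb the point or open a new micro-cluster.
        x = np.asarray(x, dtype=float)
        if self.micro:
            dists = [np.linalg.norm(mc.center() - x) for mc in self.micro]
            i = int(np.argmin(dists))
            if dists[i] <= self.radius:
                self.micro[i].insert(x)
                return
        self.micro.append(MicroCluster(x))

    def get_macro_clusters(self, k=2):
        # Offline phase: weighted k-means on the micro-cluster centers.
        centers = np.array([mc.center() for mc in self.micro])
        weights = np.array([mc.n for mc in self.micro])
        km = KMeans(n_clusters=k, n_init=10).fit(centers, sample_weight=weights)
        return km.cluster_centers_

stream = ThresholdStream(radius=1.5)
rng = np.random.default_rng(0)
for x in np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(8, 1, (200, 2))]):
    stream.learn_one(x)
print(len(stream.micro), "micro-clusters")
print(stream.get_macro_clusters(k=2))
```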
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Time Window Models <s> Consider the problem of monitoring tens of thousands of time series data streams in an online fashion and making decisions based on them. In addition to single stream statistics such as average and standard deviation, we also want to find high correlations among all pairs of streams. A stock market trader might use such a tool to spot arbitrage opportunities. This paper proposes efficient methods for solving this problem based on Discrete Fourier Transforms and a three level time interval hierarchy. Extensive experiments on synthetic data and real world financial trading data show that our algorithm beats the direct computation approach by several orders of magnitude. It also improves on previous Fourier Transform approaches by allowing the efficient computation of time-delayed correlation over any size sliding window and any time delay. Correlation also lends itself to an efficient grid-based data structure. The result is the first algorithm that we know of to compute correlations over thousands of data streams in real time. The algorithm is incremental, has fixed response time, and can monitor the pairwise correlations of 10,000 streams on a single PC. The algorithm is embarrassingly parallelizable. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Time Window Models <s> Existing data-stream clustering algorithms such as CluStream arebased on k-means. These clustering algorithms are incompetent tofind clusters of arbitrary shapes and cannot handle outliers. Further, they require the knowledge of k and user-specified time window. To address these issues, this paper proposes D-Stream, a framework for clustering stream data using adensity-based approach. The algorithm uses an online component which maps each input data record into a grid and an offline component which computes the grid density and clusters the grids based on the density. The algorithm adopts a density decaying technique to capture the dynamic changes of a data stream. Exploiting the intricate relationships between the decay factor, data density and cluster structure, our algorithm can efficiently and effectively generate and adjust the clusters in real time. Further, a theoretically sound technique is developed to detect and remove sporadic grids mapped to by outliers in order to dramatically improve the space and time efficiency of the system. The technique makes high-speed data stream clustering feasible without degrading the clustering quality. The experimental results show that our algorithm has superior quality and efficiency, can find clusters of arbitrary shapes, and can accurately recognize the evolving behaviors of real-time data streams. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Time Window Models <s> Clustering real-time stream data is an important and challenging problem. Existing algorithms such as CluStream are based on the k-means algorithm. These clustering algorithms have difficulties finding clusters of arbitrary shapes and handling outliers. Further, they require the knowledge of k and user-specified time window. To address these issues, this article proposes D-Stream, a framework for clustering stream data using a density-based approach. 
Our algorithm uses an online component that maps each input data record into a grid and an offline component that computes the grid density and clusters the grids based on the density. The algorithm adopts a density decaying technique to capture the dynamic changes of a data stream and a attraction-based mechanism to accurately generate cluster boundaries. Exploiting the intricate relationships among the decay factor, attraction, data density, and cluster structure, our algorithm can efficiently and effectively generate and adjust the clusters in real time. Further, a theoretically sound technique is developed to detect and remove sporadic grids mapped by outliers in order to dramatically improve the space and time efficiency of the system. The technique makes high-speed data stream clustering feasible without degrading the clustering quality. The experimental results show that our algorithm has superior quality and efficiency, can find clusters of arbitrary shapes, and can accurately recognize the evolving behaviors of real-time data streams. <s> BIB003 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Time Window Models <s> Data stream in a popular research topic in big data era. There are many research results on data stream clustering domain. This paper firstly has a brief introduction to data stream methodologies, such as sampling, sliding windows, etc. Finally, it presents a survey on data streams clustering techniques. <s> BIB004 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Time Window Models <s> As more and more applications produce streaming data, clustering data streams has become an important technique for data and knowledge engineering. A typical approach is to summarize the data stream in real-time with an online process into a large number of so called micro-clusters. Micro-clusters represent local density estimates by aggregating the information of many data points in a defined area. On demand, a (modified) conventional clustering algorithm is used in a second offline step to recluster the micro-clusters into larger final clusters. For reclustering, the centers of the micro-clusters are used as pseudo points with the density estimates used as their weights. However, information about density in the area between micro-clusters is not preserved in the online process and reclustering is based on possibly inaccurate assumptions about the distribution of data within and between micro-clusters (e.g., uniform or Gaussian). This paper describes DBSTREAM, the first micro-cluster-based online clustering component that explicitly captures the density between micro-clusters via a shared density graph. The density information in this graph is then exploited for reclustering based on actual density between adjacent micro-clusters. We discuss the space and time complexity of maintaining the shared density graph. Experiments on a wide range of synthetic and real data sets highlight that using shared density improves clustering quality over other popular data stream clustering methods which require the creation of a larger number of smaller micro-clusters to achieve comparable results. <s> BIB005
As shown in our eye tracking example, the underlying distribution of the stream will often change over time. This is also known as drift or concept-shift. To handle this, algorithms can employ time window models. This approach aims to 'forget' older data to avoid historic data biasing the analysis towards outdated patterns. There exist four main types of time window models (Figure 4) [Silva et al., 2013].

Figure 4 Overview of time window models BIB001, [Silva et al., 2013]

The damped time window assigns a weight to each micro-cluster based on the number of observations assigned to it. In each iteration, the weight is faded by a factor such as 2^(-λ), where the decay factor λ influences the rate of decay. Since fading the weight in every iteration is computationally costly, the weight can either be updated in fixed time intervals or whenever a cluster is updated BIB002. In this case, the fading can be performed with respect to the elapsed time, ω(∆t) = 2^(-λ∆t), where ∆t denotes the time since the cluster was last updated (a short code sketch of this fading scheme follows at the end of this overview). In Figure 1, we applied the same fading function to reduce the size and opacity of older data. In some cases, clusters are implicitly decayed over time by considering their weight relative to the total number of observations. An alternative is the sliding time window, which only considers the most recent observations or micro-clusters in the stream. This is usually based on a First-In-First-Out (FIFO) principle, where the oldest data point in the window is removed once a new data point becomes available. The window can be of fixed or variable length. While a small window size can adapt quickly to concept drift, a larger window size considers more observations and can be more accurate for stable streams. In addition, a landmark time window is a very simple approach which separates the data stream into disjunct chunks based on events. Landmarks can be defined based on the passed time or other occurrences. The landmark time window summarizes all data points that arrive after the landmark. Whenever a new landmark occurs, all the data in the window is removed and new data is captured. This category also includes algorithms that do not specifically consider changes over time and therefore require the user to regularly restart the clustering. Finally, the pyramidal time model or tilted time window uses different granularity levels based on the recency of data. This approach summarizes recent data more accurately whereas older data is gradually aggregated.

Due to the increasing relevance of stream clustering, a number of survey papers began to summarize and structure the field. Most notably, one survey provides an overview of the two largest research threads, namely distance-based and grid-based algorithms. In total, the authors discuss ten distance-based approaches, mostly extensions of DenStream, and nine grid-based approaches, mostly extensions of D-Stream BIB002 BIB003. The authors describe the algorithms, name input parameters and also empirically evaluate some of the algorithms. In addition, the authors highlight interrelations between the algorithms in a timeline. We utilize this timeline and extend it with more algorithms and additional categories. However, their paper focusses only on distance-based and grid-based algorithms, while we have taken more categories and more algorithms into account.
Additionally, [Silva et al., 2013] introduced a taxonomy for categorizing stream clustering algorithms, e.g., regarding the reclustering algorithm or the time window model used. The authors describe a total of 13 stream clustering algorithms and categorize them according to their taxonomy. In addition, application scenarios, data sources and available toolsets are presented. However, a drawback is that many of the discussed algorithms are one-pass clustering algorithms that need extensions to suit the streaming case. In another survey, the authors discuss 19 algorithms and are among the first to highlight the research area of Neural Gas (NG) for stream clustering. However, only a single grid-based algorithm is discussed and other popular algorithms are missing. A further survey focuses on both stream clustering and stream classification and presents a total of 17 algorithms. Considerably shorter overviews are also provided in BIB004 and several further works. In this survey, we cover a total of 51 different stream clustering algorithms. This makes our survey much more exhaustive than all comparable studies. In addition, our paper identifies four common work streams and how they developed over time. We also focus on common problems when applying stream clustering. As an example, we point to a total of 26 available algorithm implementations, as well as three different frameworks for data stream clustering. Furthermore, we address the problem of configuring stream clustering algorithms and present automatic algorithm configuration as an approach to address this problem. Table 1 briefly summarizes the relevant dimensions of our survey. In previous work, we have also performed a rigorous empirical comparison of the most popular stream clustering algorithms. In total, we evaluated ten algorithms on four synthetic and three real-world data sets. In order to obtain the best results, we performed extensive parameter configuration. Our results have shown that DBSTREAM BIB005 produces the highest cluster quality and is able to detect arbitrarily shaped clusters. However, it is sensitive to the insertion order and has many parameters, which makes it difficult to apply in practice. As an alternative, D-Stream BIB002 BIB003 can produce competitive results, but often requires more micro-clusters due to its grid-based approach.
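Returning to the damped time window model described earlier, the following minimal Python sketch illustrates lazy fading: each micro-cluster stores a weight and the timestamp of its last update, and the weight is decayed by ω(∆t) = 2^(−λ∆t) whenever the cluster is touched. The class and parameter names are illustrative, not taken from a specific algorithm.

```python
class FadingWeight:
    """Lazily faded weight of a micro-cluster (damped time window)."""

    def __init__(self, t, lam=0.01):
        self.lam = lam          # decay factor lambda
        self.weight = 1.0       # weight contributed by the first observation
        self.last_update = t    # timestamp of the last update

    def fade(self, t):
        # Apply omega(dt) = 2^(-lambda * dt) for the time elapsed since the last update.
        dt = t - self.last_update
        self.weight *= 2.0 ** (-self.lam * dt)
        self.last_update = t

    def add_observation(self, t):
        # Fade first, then count the new observation with full weight.
        self.fade(t)
        self.weight += 1.0

w = FadingWeight(t=0)
w.add_observation(t=10)
w.add_observation(t=50)
print(round(w.weight, 3))   # older contributions count less than recent ones
w.fade(t=500)
print(round(w.weight, 3))   # after a long pause the cluster has almost no weight
```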
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> Finding useful patterns in large datasets has attracted considerable interest recently, and one of the most widely studied problems in this area is the identification of clusters, or densely populated regions, in a multi-dimensional dataset. Prior work does not adequately address the problem of large datasets and minimization of I/O costs.This paper presents a data clustering method named BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies), and demonstrates that it is especially suitable for very large databases. BIRCH incrementally and dynamically clusters incoming multi-dimensional metric data points to try to produce the best quality clustering with the available resources (i.e., available memory and time constraints). BIRCH can typically find a good clustering with a single scan of the data, and improve the quality further with a few additional scans. BIRCH is also the first clustering algorithm proposed in the database area to handle "noise" (data points that are not part of the underlying pattern) effectively.We evaluate BIRCH's time/space efficiency, data input order sensitivity, and clustering quality through several experiments. We also present a performance comparisons of BIRCH versus CLARANS, a clustering method proposed recently for large datasets, and show that BIRCH is consistently superior. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> Data clustering is an important technique for exploratory data analysis, and has been studied for several years. It has been shown to be useful in many practical domains such as data classification and image processing. Recently, there has been a growing emphasis on exploratory analysis of very large datasets to discover useful patterns and/or correlations among attributes. This is called data mining, and data clustering is regarded as a particular branch. However existing data clustering methods do not adequately address the problem of processing large datasets with a limited amount of resources (e.g., memory and cpu cycles). So as the dataset size increases, they do not scale up well in terms of memory requirement, running time, and result quality. ::: ::: In this paper, an efficient and scalable data clustering method is proposed, based on a new in-memory data structure called CF-tree, which serves as an in-memory summary of the data distribution. We have implemented it in a system called BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies), and studied its performance extensively in terms of memory requirements, running time, clustering quality, stability and scalability; we also compare it with other available methods. Finally, BIRCH is applied to solve two real-life problems: one is building an iterative and interactive pixel classification tool, and the other is generating the initial codebook for image compression. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> Data stream clustering is an important task in data stream mining. In this paper, we propose SDStream, a new method for performing density-based data streams clustering over sliding windows. SDStream adopts CluStream clustering framework. 
In the online component, the potential core-micro-cluster and outlier micro-cluster structures are introduced to maintain the potential clusters and outliers. They are stored in the form of Exponential Histogram of Cluster Feature (EHCF) in main memory and are maintained by the maintenance of EHCFs. Outdated micro-clusters which need to be deleted are found by the value of t in Temporal Cluster Feature (TCF). In the offline component, the final clusters of arbitrary shape are generated according to all the potential core-micro-clusters maintained online by DBSCAN algorithm. Experimental results show that SDStream which can generate clusters of arbitrary shape has a much higher clustering quality than CluStream which generates spherical clusters. <s> BIB003 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> Clustering streaming data requires algorithms which are capable of updating clustering results for the incoming data. As data is constantly arriving, time for processing is limited. Clustering has to be performed in a single pass over the incoming data and within the possibly varying inter-arrival times of the stream. Likewise, memory is limited, making it impossible to store all data. For clustering, we are faced with the challenge of maintaining a current result that can be presented to the user at any given time. In this work, we propose a parameter free algorithm that automatically adapts to the speed of the data stream. It makes best use of the time available under the current constraints to provide a clustering of the objects seen up to that point. Our approach incorporates the age of the objects to reflect the greater importance of more recent data. Moreover, we are capable of detecting concept drift, novelty and outliers in the stream. For efficient and effective handling, we introduce the ClusTree, a compact and self-adaptive index structure for maintaining stream summaries. Our experiments show that our approach is capable of handling a multitude of different stream characteristics for accurate and scalable anytime stream clustering. <s> BIB004 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> Clustering of streaming sensor data aims at providing online summaries of the observed stream. This task is mostly done under limited processing and storage resources. This makes the sensed stream speed (data per time) a sensitive restriction when designing stream clustering algorithms. Additionally, the varying speed of the stream is a natural characteristic of sensor data, e.g. changing the sampling rate upon detecting an event or for a certain time. In such cases, most clustering algorithms have to heavily restrict their model size such that they can handle the minimal time allowance. Recently the first anytime stream clustering algorithm has been proposed that flexibly uses all available time and dynamically adapts its model size. However, the method was not designed to precisely cluster sensor data which are usually noisy and extremely evolving. In this paper we detail the LiarTree algorithm that provides precise stream summaries and effectively handles noise, drift and novelty. We prove that the runtime of the LiarTree is logarithmic in the size of the maintained model opposed to a linear time complexity often observed in previous approaches. 
We demonstrate in an extensive experimental evaluation using synthetic and real sensor datasets that the LiarTree outperforms competing approaches in terms of the quality of the resulting summaries and exposes only a logarithmic time complexity. <s> BIB005 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> Evolution-based stream clustering method supports the monitoring and the change detection of clustering structures. E-Stream is an evolution-based stream clustering method that supports different types of clustering structure evolution which are appearance, disappearance, self-evolution, merge and split. This paper presents HUE-Stream which extends E-Stream in order to support uncertainty in heterogeneous data. A distance function, cluster representation and histogram management are introduced for the different types of clustering structure evolution. We evaluate effectiveness of HUE-Stream on real-world dataset KDDCup 1999 Network Intrusion Detection. Experimental results show that HUE-Stream gives better cluster quality compared with UMicro. <s> BIB006 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> In this paper we propose a data stream clustering algorithm, called Self Organizing density based clustering over data Stream (SOStream). This algorithm has several novel features. Instead of using a fixed, user defined similarity threshold or a static grid, SOStream detects structure within fast evolving data streams by automatically adapting the threshold for density-based clustering. It also employs a novel cluster updating strategy which is inspired by competitive learning techniques developed for Self Organizing Maps (SOMs). In addition, SOStream has built-in online functionality to support advanced stream clustering operations including merging and fading. This makes SOStream completely online with no separate offline components. Experiments performed on KDD Cup'99 and artificial datasets indicate that SOStream is an effective and superior algorithm in creating clusters of higher purity while having lower space and time requirements compared to previous stream clustering algorithms. <s> BIB007 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> We design a data stream algorithm for the k-means problem, called BICO, that combines the data structure of the SIGMOD Test of Time award winning algorithm BIRCH [27] with the theoretical concept of coresets for clustering problems. The k-means problem asks for a set C of k centers minimizing the sum of the squared distances from every point in a set P to its nearest center in C. In a data stream, the points arrive one by one in arbitrary order and there is limited storage space. <s> BIB008 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> Clustering evolving data streams is important to be performed in a limited time with a reasonable quality. The existing micro clustering based methods do not consider the distribution of data points inside the micro cluster. We propose LeaDen-Stream (Leader Density-based clustering algorithm over evolving data Stream), a density-based clustering algorithm using leader clustering. The algorithm is based on a two-phase clustering.
The online phase selects the proper mini-micro or micro-cluster leaders based on the distribution of data points in the micro clusters. Then, the leader centers are sent to the offline phase to form final clusters. In LeaDen-Stream, by carefully choosing between two kinds of micro leaders, we decrease time complexity of the clustering while maintaining the cluster quality. A pruning strategy is also used to filter out real data from noise by introducing dense and sparse mini-micro and micro-cluster leaders. Our performance study over a number of real and synthetic data sets demonstrates the effectiveness and efficiency of our method. <s> BIB009 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> Streaming data clustering is becoming the most efficient way to cluster a very large data set. In this paper we present a new approach, called G-Stream, for topological clustering of evolving data streams. G-Stream allows one to discover clusters of arbitrary shape without any assumption on the number of clusters and by making one pass over the data. The topological structure is represented by a graph wherein each node represents a set of “close” data points and neighboring nodes are connected by edges. The use of the reservoir, to hold, temporarily, the very distant data points from the current prototypes, avoids needless movements of the nearest nodes to data points and therefore, improving the quality of clustering. The performance of the proposed algorithm is evaluated on both synthetic and real-world data sets. <s> BIB010 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> BIRCH algorithm is a clustering algorithm suitable for very large data sets. In the algorithm, a CF-tree is built whose all entries in each leaf node must satisfy a uniform threshold T, and the CF-tree is rebuilt at each stage by different threshold. But using a single threshold cause many shortcomings in the birch algorithm, in this paper to propose a solution to this shortcoming by using multiple thresholds instead of a single threshold. <s> BIB011 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> As more and more applications produce streaming data, clustering data streams has become an important technique for data and knowledge engineering. A typical approach is to summarize the data stream in real-time with an online process into a large number of so called micro-clusters. Micro-clusters represent local density estimates by aggregating the information of many data points in a defined area. On demand, a (modified) conventional clustering algorithm is used in a second offline step to recluster the micro-clusters into larger final clusters. For reclustering, the centers of the micro-clusters are used as pseudo points with the density estimates used as their weights. However, information about density in the area between micro-clusters is not preserved in the online process and reclustering is based on possibly inaccurate assumptions about the distribution of data within and between micro-clusters (e.g., uniform or Gaussian). This paper describes DBSTREAM, the first micro-cluster-based online clustering component that explicitly captures the density between micro-clusters via a shared density graph.
The density information in this graph is then exploited for reclustering based on actual density between adjacent micro-clusters. We discuss the space and time complexity of maintaining the shared density graph. Experiments on a wide range of synthetic and real data sets highlight that using shared density improves clustering quality over other popular data stream clustering methods which require the creation of a larger number of smaller micro-clusters to achieve comparable results. <s> BIB012 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> Clustering algorithms are recently regaining attention with the availability of large datasets and the rise of parallelized computing architectures. However, most clustering algorithms do not scale well with increasing dataset sizes and require proper parametrization for correct results. In this paper we present A-BIRCH, an approach for automatic threshold estimation for the BIRCH clustering algorithm using Gap Statistic. This approach renders the global clustering step of BIRCH unnecessary and does not require knowledge on the expected number of clusters beforehand. This is achieved by analyzing a small representative subset of the data to extract attributes such as the cluster radius and the minimal cluster distance. These attributes are then used to compute a threshold that results, with high probability, in the correct clustering of elements. For the analysis of the representative subset we parallelized Gap Statistic to improve performance and ensure scalability. <s> BIB013
BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies) BIB001 BIB002 is one of the earliest algorithms applicable to stream clustering. It reduces the information maintained about a cluster to only a few summary statistics stored in a so-called Clustering Feature (CF). The CF consists of three components: (n, LS, SS), where n is the number of data points in the cluster, LS is a d-dimensional vector that contains the linear sum of all data points for each dimension and SS is a scalar that contains the sum of squares for all data points over all dimensions. Some variations of this concept also store the sum of squares per dimension, i.e., as a vector SS. A CF provides sufficient information to calculate the centroid LS/n and also a radius, i.e., a measure of deviation from the centroid. In addition, a CF can be easily updated and merged with another CF by summing the individual components. To maintain the CFs, BIRCH incrementally builds a balanced tree as illustrated in Figure 6, where each node can contain a fixed number of CFs. Each new observation descends the tree by following the child of its closest CF until a leaf node is reached. The observation is either merged with its closest leaf-CF or used to create a new leaf-CF. For reclustering, all leaf-CFs can be used as input to a traditional algorithm such as k-means or hierarchical clustering.

Table 2 Overview of distance-based stream clustering algorithms

Algorithm | Year | Time Window | Offline Clustering
BIRCH BIB001 | 1996 | landmark | hierarchical clustering
ScaleKM | 1998 | landmark | -
Single-pass k-means [Farnstrom et al., 2000] | 2000 | landmark | -
SDStream BIB003 | 2009 | pyramidal | DBSCAN
ClusTree BIB004 | 2009 | damped | not specified
LiarTree BIB005 | 2011 | damped | not specified
HUE-Stream BIB006 | 2011 | damped | -
SOStream BIB007 | 2012 | damped | -
StreamKM++ | 2012 | pyramidal | k-means
FlockStream | 2013 | damped | -
BICO BIB008 | 2013 | landmark | k-means
LeaDen-Stream BIB009 | 2013 | damped | DBSCAN
G-Stream BIB010 | 2014 | damped | -
Improved BIRCH BIB011 | 2014 | landmark | hierarchical clustering
DBSTREAM BIB012 | 2016 | damped | shared density
A-BIRCH BIB013 | 2017 | landmark | hierarchical clustering
evoStream | 2018 | damped | Evolutionary Algorithm

Improved BIRCH BIB011 is an extension which uses different distance thresholds per CF which are increased based on entries close to the radius boundary. Similarly, A-BIRCH BIB013 estimates the threshold parameters by using the Gap Statistics on a sample of the stream. ScaleKM is an incremental algorithm to cluster large databases which uses the concept of CFs. The algorithm fills a buffer with initial points and initializes k clusters as with standard k-means. The algorithm then decides for every point whether to discard, summarize or retain it. First, based on a distance threshold to the cluster centers and by creating a worst case perturbation of cluster centers, the algorithm identifies points that are unlikely to ever change their cluster assignments. These points are summarised in a CF per cluster and then discarded. Second, the remaining points are used to identify a larger number of micro-clusters by applying k-means and merging clusters using agglomerative hierarchical clustering. Each cluster is again summarised using a CF. All remaining points are kept as individual points. The freed space in the buffer is then filled with new points to repeat the process. Single-pass k-means [Farnstrom et al., 2000] is a simplified version of ScaleKM where the algorithm discards all data points with every iteration and only the k CFs are maintained.
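As a concrete illustration of the Clustering Feature described above, the sketch below implements a CF as the triple (n, LS, SS) with insertion, merging, the centroid and a radius estimate. It is a minimal, hypothetical Python rendering of the summary statistics, not the BIRCH CF-tree itself.

```python
import numpy as np

class CF:
    """Clustering Feature: (n, linear sum LS, sum of squares SS)."""

    def __init__(self, d):
        self.n = 0                    # number of data points
        self.ls = np.zeros(d)         # linear sum per dimension
        self.ss = 0.0                 # scalar sum of squares over all dimensions

    def insert(self, x):
        x = np.asarray(x, dtype=float)
        self.n += 1
        self.ls += x
        self.ss += float(x @ x)

    def merge(self, other):
        # CFs are additive: merging simply sums the components.
        self.n += other.n
        self.ls += other.ls
        self.ss += other.ss

    def centroid(self):
        return self.ls / self.n

    def radius(self):
        # Root of the average squared deviation from the centroid, derived from n, LS and SS.
        c = self.centroid()
        return float(np.sqrt(max(self.ss / self.n - c @ c, 0.0)))

cf = CF(d=2)
for x in [[1.0, 1.0], [1.5, 0.5], [0.5, 1.5]]:
    cf.insert(x)
print(cf.centroid(), round(cf.radius(), 3))
```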
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Extended Clustering Feature <s> Recently, the continuously arriving and evolving data stream has become a common phenomenon in many fields, such as sensor networks, web click stream and internet traffic flow. One of the most important mining tasks is clustering. Clustering has attracted extensive research by both the community of machine learning and data mining. Many stream clustering methods have been proposed. These methods have proven to be efficient on specific problems. However, most of these methods are on continuous clustering and few of them are about to solve the heterogeneous clustering problems. In this paper, we propose a novel approach based on the CluStream framework for clustering data stream with heterogeneous features. The centroid of continuous attributes and the histogram of the discrete attributes are used to represent the Micro clusters, and k-prototype clustering algorithm is used to create the Micro clusters and Macro clusters. Experimental results on both synthetic and real data sets show its efficiency. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Extended Clustering Feature <s> Mining data streams poses great challenges due to the limited memory availability and real-time query response requirement. Clustering an evolving data stream is especially interesting because it captures not only the changing distribution of clusters but also the evolving behaviors of individual clusters. In this paper, we present a novel method for tracking the evolution of clusters over sliding windows. In our SWClustering algorithm, we combine the exponential histogram with the temporal cluster features, propose a novel data structure, the Exponential Histogram of Cluster Features (EHCF). The exponential histogram is used to handle the in-cluster evolution, and the temporal cluster features represent the change of the cluster distribution. Our approach has several advantages over existing methods: (1) the quality of the clusters is improved because the EHCF captures the distribution of recent records precisely; (2) compared with previous methods, the mechanism employed to adaptively maintain the in-cluster synopsis can track the cluster evolution better, while consuming much less memory; (3) the EHCF provides a flexible framework for analyzing the cluster evolution and tracking a specific cluster efficiently without interfering with other clusters, thus reducing the consumption of computing resources for data stream clustering. Both the theoretical analysis and extensive experiments show the effectiveness and efficiency of the proposed method. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Extended Clustering Feature <s> Data stream clustering is an important task in data stream mining. In this paper, we propose SDStream, a new method for performing density-based data streams clustering over sliding windows. SDStream adopts CluStream clustering framework. In the online component, the potential core-micro-cluster and outlier micro-cluster structures are introduced to maintain the potential clusters and outliers. They are stored in the form of Exponential Histogram of Cluster Feature (EHCF) in main memory and are maintained by the maintenance of EHCFs. Outdated micro-clusters which need to be deleted are found by the value of t in Temporal Cluster Feature (TCF). 
In the offline component, the final clusters of arbitrary shape are generated according to all the potential core-micro-clusters maintained online by DBSCAN algorithm. Experimental results show that SDStream which can generate clusters of arbitrary shape has a much higher clustering quality than CluStream which generates spherical clusters. <s> BIB003
CluStream extends the CF from BIRCH in a way that allows clustering over different time-horizons rather than the entire data stream. The extended CF is defined as (LS, SS, LS^(t), SS^(t), n), where LS^(t) and SS^(t) are the linear and squared sums of all timestamps of a cluster. The online algorithm is initialized by collecting a chunk of data and using the k-means algorithm to create q clusters. When a new data point arrives, it is absorbed by its closest micro-cluster if it lies within an adaptive radius threshold. Otherwise, it is used to create a new cluster. In order to keep the number of micro-clusters constant, outdated clusters are removed based on a threshold on their average time stamp. If this is not possible, the two closest micro-clusters are merged. To support different time-horizons, the algorithm regularly stores snapshots of the current CFs following a pyramidal scheme. While some snapshots are regularly updated, others are less frequently updated to maintain information about historic data. A desired portion of the stream can be approximated by subtracting a stored snapshot of previous CFs from the current CFs. The extracted micro-clusters are then used to run a variant of k-means to generate the macro-clusters. HCluStream BIB001 extends CluStream for categorical data by storing the frequency of attribute-levels for all categorical features. Based on this, it defines a separate categorical distance measure which is combined with the traditional distance measure for continuous attributes. SWClustering BIB002 uses the extended CF and pyramidal time window from CluStream. The algorithm maintains CFs in an Exponential Histogram of Cluster Features (EHCF) which stores data in different levels of granularity, depending on their recency. While the most recent observation is always stored individually, older observations are grouped and summarized. In particular, this step is organized in granularity levels. Once more than 1/ε + 1 CFs of a granularity level exist, the next CF contains twice as many entries (cf. Figure 7). A new observation is either inserted into its closest CF or used to initialize a new one based on a radius threshold, similar to BIRCH. If the initialization creates too many individual CFs, the oldest two individual CFs are merged and this process cascades down the different granularity levels. An old CF is removed if its time-stamp is older than the last N observed time stamps. To generate the final clustering, all CFs are used for reclustering, similar to BIRCH.

Figure 7 Granularity levels in an EHCF with ε = 1. Recent observations are stored individually, whereas older data points are iteratively summarized.

SDStream BIB003 combines the EHCF from SWClustering to represent the potential core and outlier micro-clusters from DenStream. The algorithm also enforces an upper limit on the number of micro-clusters by either merging the two most similar micro-clusters or deleting outlier micro-clusters. The offline component applies DBSCAN to the centers of the potential core micro-clusters.
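The additivity and subtractivity of the extended CF is what makes the horizon queries above possible. The following minimal Python sketch stores the temporal CF (LS, SS, LS^(t), SS^(t), n) and approximates the recent portion of the stream by subtracting an older snapshot from the current summary. It is an illustrative simplification using a single micro-cluster and a per-dimension SS, not the full CluStream snapshot scheme.

```python
import copy
import numpy as np

class TemporalCF:
    """Extended CF: (LS, SS, LS_t, SS_t, n) with timestamp statistics."""

    def __init__(self, d):
        self.n = 0
        self.ls = np.zeros(d)   # linear sum of the data points
        self.ss = np.zeros(d)   # squared sum of the data points (per dimension)
        self.ls_t = 0.0         # linear sum of the timestamps
        self.ss_t = 0.0         # squared sum of the timestamps

    def insert(self, x, t):
        x = np.asarray(x, dtype=float)
        self.n += 1
        self.ls += x
        self.ss += x * x
        self.ls_t += t
        self.ss_t += t * t

    def subtract(self, snapshot):
        # Approximate the stream portion observed since 'snapshot' (current minus snapshot).
        diff = copy.deepcopy(self)
        diff.n -= snapshot.n
        diff.ls -= snapshot.ls
        diff.ss -= snapshot.ss
        diff.ls_t -= snapshot.ls_t
        diff.ss_t -= snapshot.ss_t
        return diff

cf = TemporalCF(d=2)
for t in range(100):
    cf.insert([t * 0.1, 1.0], t)
snapshot = copy.deepcopy(cf)            # snapshot taken at t = 99
for t in range(100, 150):
    cf.insert([t * 0.1, 1.0], t)
recent = cf.subtract(snapshot)          # summary of the last 50 observations only
print(recent.n, recent.ls / recent.n)   # count and centroid of the recent horizon
```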
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Time-Faded Clustering Feature <s> Clustering algorithms are attractive for the task of class identification in spatial databases. However, the application to large spatial databases rises the following requirements for clustering algorithms: minimal requirements of domain knowledge to determine the input parameters, discovery of clusters with arbitrary shape and good efficiency on large databases. The well-known clustering algorithms offer no solution to the combination of these requirements. In this paper, we present the new clustering algorithm DBSCAN relying on a density-based notion of clusters which is designed to discover clusters of arbitrary shape. DBSCAN requires only one input parameter and supports the user in determining an appropriate value for it. We performed an experimental evaluation of the effectiveness and efficiency of DBSCAN using synthetic data and real data of the SEQUOIA 2000 benchmark. The results of our experiments demonstrate that (1) DBSCAN is significantly more effective in discovering clusters of arbitrary shape than the well-known algorithm CLAR-ANS, and that (2) DBSCAN outperforms CLARANS by a factor of more than 100 in terms of efficiency. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Time-Faded Clustering Feature <s> Data streams have recently attracted attention for their applicability to numerous domains including credit fraud detection, network intrusion detection, and click streams. Stream clustering is a technique that performs cluster analysis of data streams that is able to monitor the results in real time. A data stream is continuously generated sequences of data for which the characteristics of the data evolve over time. A good stream clustering algorithm should recognize such evolution and yield a cluster model that conforms to the current data. In this paper, we propose a new technique for stream clustering which supports five evolutions that are appearance, disappearance, self-evolution, merge and split. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Time-Faded Clustering Feature <s> For mining new pattern from evolving data streams, most algorithms are inherited from DenStream framework which is realized via a sliding window. So at the early stage of a pattern emerges, its knowledge points can be easily mistaken as outliers and dropped. In most cases, these points can be ignored, but in some special applications which need to quickly and precisely master the emergence rule of some patterns, these points must play their rules. Based on DenStream, this paper proposes a three-step clustering algorithm, rDenStream, which presents the concept of outlier retrospect. In rDenStream clustering, dropped micro-clusters are stored on outside memory temporarily, and will be given new chance to attend clustering to improve the clustering accuracy. Experiments modeled the arrival of data stream in Poisson process, and the results over standard data set showed its advantage over other methods in the early phase of new pattern discovery. <s> BIB003 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Time-Faded Clustering Feature <s> Data stream clustering is an importance issue in data stream mining. In most of the existing algorithms, only the continuous features are used for clustering. 
In this paper, we introduce an algorithm HDenStream for clustering data stream with heterogeneous features. The HDenstream is also a density-based algorithm, so it is capable enough to cluster arbitrary shapes and handle outliers. Theoretic analysis and experimental results show that HDenStream is effective and efficient. <s> BIB004
DenStream presents a temporal extension of the CFs from BIRCH. It maintains two types of clusters: Potential core micro-clusters are stable structures that are denoted using a time-faded CF (LS^(ω), SS^(ω), n^(ω)). The superscript (ω) denotes that each entry of the CF is decayed over time using a decay function ω(∆t) = β^(−λ∆t). In addition, their weight n^(ω) is required to be greater than a threshold value. Outlier micro-clusters are unstable structures whose weight is less than the threshold; they additionally maintain their creation time. At first, DBSCAN BIB001 is used to initialize a set of potential core micro-clusters. Similar to BIRCH, a new observation is assigned to its closest potential core micro-cluster if the addition does not increase the radius beyond a threshold. If it does, the same attempt is made for the closest outlier micro-cluster, which is promoted to a potential core micro-cluster if it satisfies the weight threshold. If neither can absorb the point, a new outlier micro-cluster is initialized. In regular intervals, the weight of all micro-clusters is evaluated. Potential core micro-clusters that no longer have enough weight are degraded to outlier micro-clusters, and outlier micro-clusters that decayed below a threshold based on their creation time are removed. Macro-clusters are generated by applying a variant of DBSCAN BIB001 to the potential core micro-clusters. C-DenStream is an extension of DenStream which incorporates domain knowledge in the form of instance-level constraints into the clustering process. Instance-level constraints describe observations that must or cannot belong to the same cluster. Another extension is rDenStream BIB003. Instead of discarding outlier micro-clusters which cannot be converted into potential core micro-clusters, the algorithm temporarily stores them in an outlier buffer. After the offline component, the algorithm attempts to relearn the data points that have been cached in the buffer in order to refine the clustering. HDenStream BIB004 combines DenStream with the categorical distance measure of HCluStream to make it applicable to categorical features. E-Stream BIB002 uses the time-faded CF from DenStream in combination with a histogram which bins the data points. New observations are either added to their closest cluster or used to initialize a new one. Existing clusters are split if one of the dimensions shows a significant valley in its histogram. When a cluster is split along a dimension, the other dimensions are weighted by the size of the split. Additionally, clusters can be merged if they move into close proximity.
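To make the idea of a time-faded CF more concrete, the following sketch maintains the decayed linear sum, squared sum and weight of a single cluster in the spirit of DenStream; the class name, the parameter defaults and the radius definition are illustrative assumptions rather than the original implementation.

```python
import numpy as np

class TimeFadedCF:
    """Minimal sketch of a time-faded Clustering Feature (CF).

    Stores the linear sum LS, squared sum SS and weight n of the absorbed
    points, all decayed by beta^(-lambda * dt) between updates. Names and
    default parameters are illustrative, not taken from DenStream.
    """

    def __init__(self, x, t, beta=2.0, lam=0.01):
        self.ls = np.asarray(x, dtype=float)   # linear sum LS
        self.ss = self.ls ** 2                 # squared sum SS
        self.n = 1.0                           # (faded) weight
        self.t = t                             # time of last update
        self.beta, self.lam = beta, lam

    def fade(self, t):
        """Decay all entries of the CF to the current time t."""
        decay = self.beta ** (-self.lam * (t - self.t))
        self.ls *= decay
        self.ss *= decay
        self.n *= decay
        self.t = t

    def insert(self, x, t):
        """Fade the CF and absorb a new observation x."""
        self.fade(t)
        x = np.asarray(x, dtype=float)
        self.ls += x
        self.ss += x ** 2
        self.n += 1.0

    def centre(self):
        return self.ls / self.n

    def radius(self):
        # root-mean deviation around the centre, one common radius definition
        var = self.ss / self.n - (self.ls / self.n) ** 2
        return float(np.sqrt(np.maximum(var, 0.0).sum()))
```

An insertion rule in the style described above would then merge a point into its closest CF only if the resulting radius stays below a threshold, and otherwise open a new micro-cluster.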
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> E-Stream <s> Clustering streaming data requires algorithms which are capable of updating clustering results for the incoming data. As data is constantly arriving, time for processing is limited. Clustering has to be performed in a single pass over the incoming data and within the possibly varying inter-arrival times of the stream. Likewise, memory is limited, making it impossible to store all data. For clustering, we are faced with the challenge of maintaining a current result that can be presented to the user at any given time. In this work, we propose a parameter free algorithm that automatically adapts to the speed of the data stream. It makes best use of the time available under the current constraints to provide a clustering of the objects seen up to that point. Our approach incorporates the age of the objects to reflect the greater importance of more recent data. Moreover, we are capable of detecting concept drift, novelty and outliers in the stream. For efficient and effective handling, we introduce the ClusTree, a compact and self-adaptive index structure for maintaining stream summaries. Our experiments show that our approach is capable of handling a multitude of different stream characteristics for accurate and scalable anytime stream clustering. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> E-Stream <s> Clustering streaming data requires algorithms that are capable of updating clustering results for the incoming data. As data is constantly arriving, time for processing is limited. Clustering has to be performed in a single pass over the incoming data and within the possibly varying inter-arrival times of the stream. Likewise, memory is limited, making it impossible to store all data. For clustering, we are faced with the challenge of maintaining a current result that can be presented to the user at any given time. In this work, we propose a parameter-free algorithm that automatically adapts to the speed of the data stream. It makes best use of the time available under the current constraints to provide a clustering of the objects seen up to that point. Our approach incorporates the age of the objects to reflect the greater importance of more recent data. For efficient and effective handling, we introduce the ClusTree, a compact and self-adaptive index structure for maintaining stream summaries. Additionally we present solutions to handle very fast streams through aggregation mechanisms and propose novel descent strategies that improve the clustering result on slower streams as long as time permits. Our experiments show that our approach is capable of handling a multitude of different stream characteristics for accurate and scalable anytime stream clustering. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> E-Stream <s> Evolution-based stream clustering method supports the monitoring and the change detection of clustering structures. E-Stream is an evolution-based stream clustering method that supports different types of clustering structure evolution which are appearance, disappearance, self-evolution, merge and split. This paper presents HUE-Stream which extends E-Stream in order to support uncertainty in heterogeneous data. 
A distance function, cluster representation and histogram management are introduced for the different types of clustering structure evolution. We evaluate the effectiveness of HUE-Stream on the real-world dataset KDDCup 1999 Network Intrusion Detection. Experimental results show that HUE-Stream gives better cluster quality compared with UMicro. <s> BIB003 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> E-Stream <s> In stream data mining, stream clustering algorithms provide summaries of the relevant data objects that arrived in the stream. The model size of the clustering, i.e. the granularity, is usually determined by the speed (data per time) of the data stream. For varying streams, e.g. daytime or seasonal changes in the amount of data, most algorithms have to heavily restrict their model size such that they can handle the minimal time allowance. Recently the first anytime stream clustering algorithm has been proposed that flexibly uses all available time and dynamically adapts its model size. However, the method exhibits several drawbacks, as no noise detection is performed, since every point is treated equally, and new concepts can only emerge within existing ones. In this paper we propose the LiarTree algorithm, which is capable of anytime clustering and at the same time robust against noise and novelty to deal with arbitrary data streams. <s> BIB004 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> E-Stream <s> Clustering of streaming sensor data aims at providing online summaries of the observed stream. This task is mostly done under limited processing and storage resources. This makes the sensed stream speed (data per time) a sensitive restriction when designing stream clustering algorithms. Additionally, the varying speed of the stream is a natural characteristic of sensor data, e.g. changing the sampling rate upon detecting an event or for a certain time. In such cases, most clustering algorithms have to heavily restrict their model size such that they can handle the minimal time allowance. Recently the first anytime stream clustering algorithm has been proposed that flexibly uses all available time and dynamically adapts its model size. However, the method was not designed to precisely cluster sensor data which are usually noisy and extremely evolving. In this paper we detail the LiarTree algorithm that provides precise stream summaries and effectively handles noise, drift and novelty. We prove that the runtime of the LiarTree is logarithmic in the size of the maintained model as opposed to a linear time complexity often observed in previous approaches. We demonstrate in an extensive experimental evaluation using synthetic and real sensor datasets that the LiarTree outperforms competing approaches in terms of the quality of the resulting summaries and exposes only a logarithmic time complexity. <s> BIB005 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> E-Stream <s> Clustering evolving data streams needs to be performed in a limited time with a reasonable quality. The existing micro clustering based methods do not consider the distribution of data points inside the micro cluster. We propose LeaDen-Stream (Leader Density-based clustering algorithm over evolving data Stream), a density-based clustering algorithm using leader clustering. The algorithm is based on a two-phase clustering.
The online phase selects the proper mini-micro or micro-cluster leaders based on the distribution of data points in the micro clusters. Then, the leader centers are sent to the offline phase to form final clusters. In LeaDen-Stream, by carefully choosing between two kinds of micro leaders, we decrease time complexity of the clustering while maintaining the cluster quality. A pruning strategy is also used to filter out real data from noise by introducing dense and sparse mini-micro and micro-cluster leaders. Our performance study over a number of real and synthetic data sets demonstrates the effectiveness and efficiency of our method. <s> BIB006
HUE-Stream BIB003 is an extension of E-Stream which also supports categorical data and can handle uncertain data streams. To model uncertainty, each observation is assumed to follow a probability distribution. In this case, the vectors of linear and squared sum become the sum of expectation, faded over time. ClusTree BIB001 BIB002 uses the time-faded CF and applies it to the tree structure of BIRCH. Additionally, it can handle data streams where entries arrive faster than they can be processed. A new entry descends into its closest leaf where it is inserted as a new CF. Whenever a node is full, it is split and its entries are combined into two groups such that the intra-cluster distance is minimized. However, if a new observation arrives before a node could be split, the new entry is merged with its closest CFs instead. If a new observation arrives while an entry descends the tree, that entry is temporarily stored in a buffer at its current location. It remains there until another entry descends into the same branch and is then carried further down the tree as a 'hitchhiker'. Again, the leaves can be used as an input to a traditional algorithm to generate the macro-clusters. LiarTree BIB004 BIB005 is an extension of ClusTree with better noise and novelty handling. It does so by adding a time-weighted CF to each node of the tree which serves as a buffer for noise. Data points are considered noise with respect to a node based on a threshold on their distance to the node's mean, relative to the node's standard deviation. The noise buffer is promoted to a regular cluster when its density is comparable to other CFs in the node. FlockStream employs a flocking behavior inspired by nature to identify emerging flocks and swarms of objects. Similar to DenStream, the algorithm distinguishes between potential core and outlier micro-clusters and uses a time-faded CF. It projects a batch of data onto a two-dimensional grid where each data point is represented by a basic agent. Each agent then makes movement decisions solely based on other agents in close proximity. The movement of agents is similar to the behavior of a flock of birds in flight: (1) Agents steer in the same direction as their neighbors; (2) Agents steer towards the location of their neighbors; (3) Agents avoid collisions with neighbors. When agents meet, they can be merged depending on a distance or radius threshold. After a number of flocking steps, the next batch of data is used to fill the grid with new agents in order to repeat the process. LeaDen-Stream BIB006 (Leader Density-based clustering algorithm over evolving data Stream) can choose multiple representatives per cluster to increase accuracy when clusters are not uniformly distributed. To do so, the algorithm maintains two different granularity levels. First, Micro Leader Clusters (MLC) correspond to the concept of traditional micro-clusters. However, they maintain a list of more fine-grained information in the form of Mini Micro Leader Clusters (MMLC). These mini micro-clusters contain more detailed information and are represented by a time-faded CF. For new observations, the algorithm finds the closest MLC using the Mahalanobis distance. If the distance is within a threshold, the closest MMLC within the MLC is identified. If it is also within a distance threshold, the point is added to the MMLC. If one of the thresholds is violated, a new MLC or MMLC is created respectively. For reclustering, all selected leaders are used to run DBSCAN.
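The two-level insertion logic of LeaDen-Stream could be sketched roughly as follows; the Euclidean distance is used here instead of the Mahalanobis distance, and the threshold values, field names and the simple running-mean update are simplifying assumptions, not the original procedure.

```python
import numpy as np

def insert(point, micro_leaders, t_micro=2.0, t_mini=1.0):
    """Two-level insertion sketch in the spirit of LeaDen-Stream.

    point:         1-d numpy array of floats.
    micro_leaders: list of MLCs, each a dict with a centre and a list of
                   mini-leaders (MMLCs) holding a centre and a weight.
    Euclidean distances and the two thresholds are illustrative stand-ins
    for the Mahalanobis distance and the original time-faded statistics.
    """
    def new_mini(p):
        return {"centre": np.asarray(p, dtype=float), "weight": 1.0}

    if not micro_leaders:
        micro_leaders.append({"centre": np.asarray(point, float), "minis": [new_mini(point)]})
        return

    # closest micro-leader (MLC)
    mlc = min(micro_leaders, key=lambda c: np.linalg.norm(point - c["centre"]))
    if np.linalg.norm(point - mlc["centre"]) > t_micro:
        # too far from every MLC: open a new one
        micro_leaders.append({"centre": np.asarray(point, float), "minis": [new_mini(point)]})
        return

    # closest mini-micro leader (MMLC) inside the chosen MLC
    mmlc = min(mlc["minis"], key=lambda m: np.linalg.norm(point - m["centre"]))
    if np.linalg.norm(point - mmlc["centre"]) > t_mini:
        mlc["minis"].append(new_mini(point))     # open a new MMLC inside the MLC
    else:
        mmlc["weight"] += 1.0                    # absorb: update the running mean
        mmlc["centre"] += (point - mmlc["centre"]) / mmlc["weight"]
```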
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Medoids <s> The k-means method is a widely used clustering technique that seeks to minimize the average squared distance between points in the same cluster. Although it offers no accuracy guarantees, its simplicity and speed are very appealing in practice. By augmenting k-means with a very simple, randomized seeding technique, we obtain an algorithm that is Θ(logk)-competitive with the optimal clustering. Preliminary experiments show that our augmentation improves both the speed and the accuracy of k-means, often quite dramatically. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Medoids <s> Advances in data acquisition have allowed large data collections of millions of time varying records in the form of data streams. The challenge is to effectively process the stream data with limited resources while maintaining sufficient historical information to define the changes and patterns over time. This paper describes an evidence-based approach that uses representative points to incrementally process stream data by using a graph based method to cluster points based on connectivity and density. Critical cluster features are archived in repositories to allow the algorithm to cope with recurrent information and to provide a rich history of relevant cluster changes if analysis of past data is required. We demonstrate our work with both synthetic and real world data sets. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Medoids <s> We present an incremental graph-based clustering algorithm whose design was motivated by a need to extract and retain meaningful information from data streams produced by applications such as large scale surveillance, network packet inspection and financial transaction monitoring. To this end, the method we propose utilises representative points to both incrementally cluster new data and to selectively retain important cluster information within a knowledge repository. The repository can then be subsequently used to assist in the processing of new data, the archival of critical features for off-line analysis, and in the identification of recurrent patterns. <s> BIB003 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Medoids <s> We design a data stream algorithm for the k-means problem, called BICO, that combines the data structure of the SIGMOD Test of Time award winning algorithm BIRCH [27] with the theoretical concept of coresets for clustering problems. The k-means problem asks for a set C of k centers minimizing the sum of the squared distances from every point in a set P to its nearest center in C. In a data stream, the points arrive one by one in arbitrary order and there is limited storage space. <s> BIB004
An alternative to storing Clustering Features is to maintain medoids of clusters, i.e., representatives. RepStream BIB002 BIB003, for example, incrementally updates a graph of nearest neighbors to identify suitable cluster representatives. New observations are inserted as a new node in the graph and edges are inserted between the node and its nearest neighbors. The point is assigned to an existing cluster if it is mutually connected to a representative of that cluster. Otherwise it is used as a representative to initialize a new cluster. Representatives are also inserted in a separate representative graph which maintains the nearest neighbors only between representatives. To split and merge existing clusters, the distance between them is compared to the average distance to their nearest neighbors in the representative graph. In order to reduce space requirements, non-representative points are discarded using a sliding time window. In addition, if a new representative is found but space limitations prevent it from being added to the representative graph, it can replace an existing representative depending on its age and number of nearest neighbors. streamKM++ is a variant of k-means++ BIB001 which computes a small weighted sample that represents the data, called a coreset. The coreset is constructed in a binary tree by using a divisive clustering approach. The tree is initialized by selecting a random representative point from the data. To split an existing cluster, the algorithm starts at the root node and iteratively chooses a child node relative to the children's weights until a leaf is reached. From the selected leaf, a data point is chosen as a second centre based on its distance to the initial centre of the cluster. Finally, the cluster is split by assigning each data point to the closest of the two centers. To handle data streams, the algorithm uses a similar approach as SWClustering (see Section 4.2). First, new observations are inserted into a coreset tree. Once the tree is full, all its points are moved to the next tree. If the next tree already contains points, the coreset between the points in both trees is computed. This cascades further until an empty tree is found. For reclustering, the union of all points is used to compute a coreset and the representatives are used to apply the k-means++ algorithm BIB001. BICO BIB004 combines the data structure of BIRCH (see Section 4.1) with the coreset of streamKM++. BICO maintains the coreset in a tree structure where each node represents one CF. The algorithm is initialized by using the first data point in the stream to open a CF on the first level of the empty tree and the data point is kept as a representative for the CF. For every consecutive point, the algorithm attempts to insert it into an existing CF, starting on the first level. The insertion fails if the distance of the new point to the representative of its closest CF is larger than a threshold. In this case, a new CF is opened on the same level, using the new point as the reference point. Additionally, the insertion fails if the cluster's deviation from the mean would grow beyond a threshold. In this case the algorithm attempts to insert the point into the children of the closest CF. The final clustering is generated by applying k-means++ to the representatives of the leaves.
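The cascading coreset buffer shared in spirit by streamKM++ (and the merge-and-reduce idea behind BICO) might look roughly like the sketch below; the reduce_to_coreset stand-in uses a simple k-means++-style distance-weighted sampling instead of the original coreset tree, and all names, sizes and defaults are illustrative assumptions.

```python
import numpy as np

def reduce_to_coreset(points, m, rng=None):
    """Stand-in for the coreset construction: pick m representatives with a
    k-means++-style, distance-weighted sampling. The real streamKM++ builds
    the coreset with a coreset tree and keeps point weights instead."""
    rng = rng or np.random.default_rng(0)
    pts = np.asarray(points, dtype=float)
    chosen = [pts[rng.integers(len(pts))]]
    while len(chosen) < min(m, len(pts)):
        d2 = np.min([np.sum((pts - c) ** 2, axis=1) for c in chosen], axis=0)
        probs = d2 / d2.sum() if d2.sum() > 0 else None
        chosen.append(pts[rng.choice(len(pts), p=probs)])
    return np.array(chosen)

class MergeAndReduce:
    """Bucket cascade used by streamKM++-style algorithms (sketch).

    Each bucket holds at most m points. A full level-0 buffer is pushed
    upwards; whenever two buckets meet on the same level, their union is
    reduced to a coreset of size m and pushed one level further up.
    """

    def __init__(self, m=20):
        self.m = m
        self.buckets = []      # buckets[i] is None or an array of <= m points
        self.buffer = []       # incoming points for level 0

    def insert(self, x):
        self.buffer.append(np.asarray(x, dtype=float))
        if len(self.buffer) < self.m:
            return
        carry = np.array(self.buffer)
        self.buffer = []
        level = 0
        while True:                              # cascade upwards
            if level == len(self.buckets):
                self.buckets.append(None)
            if self.buckets[level] is None:
                self.buckets[level] = carry
                break
            merged = np.vstack([self.buckets[level], carry])
            self.buckets[level] = None
            carry = reduce_to_coreset(merged, self.m)
            level += 1
```

For a final clustering, the union of all bucket contents would be reduced once more and handed to a conventional algorithm such as k-means++.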
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Centroids <s> Streaming data analysis has recently attracted attention in numerous applications including telephone records, Web documents and click streams. For such analysis, single-pass algorithms that consume a small amount of memory are critical. We describe such a streaming algorithm that effectively clusters large data streams. We also provide empirical evidence of the algorithm's performance on synthetic and real data streams. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Centroids <s> The data stream model has recently attracted attention for its applicability to numerous types of data, including telephone records, Web documents, and clickstreams. For analysis of such data, the ability to process the data in a single pass, or a small number of passes, while using little memory, is crucial. We describe such a streaming algorithm that effectively clusters large data streams. We also provide empirical evidence of the algorithm's performance on synthetic and real data streams. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Centroids <s> A machine learning approach that is capable of treating data streams presents new challenges and enables the analysis of a variety of real problems in which concepts change over time. In this scenario, the ability to identify novel concepts as well as to deal with concept drift are two important attributes. This paper presents a technique based on the k-means clustering algorithm aimed at considering those two situations in a single learning strategy. Experimental results performed with data from various domains provide insight into how clustering algorithms can be used for the discovery of new concepts in streams of data. <s> BIB003
A simpler approach to maintain clusters is to store their centroids directly. However, this makes it generally more difficult to update clusters over time. As an example, STREAM BIB001 BIB002 only stores the centroids of k clusters. Its core idea is to treat the k-Median clustering problem as a facility planning problem. To do so, distances from data points to their closest cluster have associated costs. This reduces the clustering task to a cost minimization problem in order to find the number and position of facilities that yield the lowest costs. In order to generate a certain number of clusters, the algorithm adjusts the facility costs in each iteration by using a binary search for the costs that yield the desired number of centers k. To deal with streaming data, the algorithm processes the stream in chunks and solves the k-Median problem for each chunk individually. Assuming n different chunks, a total of nk clusters are created. To generate the final clustering or if available storage is exceeded, these intermediate clusters are again clustered using the same approach. OLINDDA BIB003 (Online Novelty and Drift Detection Algorithm) relies on cluster centroids to identify new and drifting clusters in a data stream. Initially, k-means is used to generate a set of clusters. For each cluster the distance from its centre to its furthest observation is considered a boundary. Points that do not fall into the boundary of any cluster are considered an unknown concept and kept in a buffer. This buffer is regularly scanned for emerging structures using k-means. If an emerging cluster is of similar variance as the existing clusters, it is considered valid. To distinguish a new cluster from a drifting cluster, the algorithm assumes that drifts occur close to the existing clusters whereas new clusters form further away from the existing model.
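A rough sketch of OLINDDA's boundary-based novelty detection is given below, using scikit-learn's KMeans; the buffer size, the number of clusters and the function names are illustrative choices, and the validation of emerging clusters (variance check, drift versus novelty) is only indicated in a comment.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_model(X, k=3):
    """Fit k-means and record, per cluster, the centre and its boundary,
    i.e. the distance to the furthest assigned observation (OLINDDA-style)."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    model = []
    for j, c in enumerate(km.cluster_centers_):
        d = np.linalg.norm(X[km.labels_ == j] - c, axis=1)
        model.append({"centre": c, "boundary": d.max()})
    return model

def process(x, model, unknown_buffer, min_buffer=50):
    """Keep x if it falls inside any cluster boundary, otherwise buffer it
    as an unknown concept; scan the buffer once it is large enough."""
    inside = any(np.linalg.norm(x - c["centre"]) <= c["boundary"] for c in model)
    if not inside:
        unknown_buffer.append(np.asarray(x, dtype=float))
    if len(unknown_buffer) >= min_buffer:
        emerging = build_model(np.array(unknown_buffer), k=3)
        # a full implementation would now validate the emerging clusters
        # (compare their variance to the known clusters) and label them as
        # drift or novelty depending on their distance to the existing model
        unknown_buffer.clear()
        return emerging
    return []
```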
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Competitive Learning <s> Clustering algorithms are attractive for the task of class identification in spatial databases. However, the application to large spatial databases raises the following requirements for clustering algorithms: minimal requirements of domain knowledge to determine the input parameters, discovery of clusters with arbitrary shape and good efficiency on large databases. The well-known clustering algorithms offer no solution to the combination of these requirements. In this paper, we present the new clustering algorithm DBSCAN relying on a density-based notion of clusters which is designed to discover clusters of arbitrary shape. DBSCAN requires only one input parameter and supports the user in determining an appropriate value for it. We performed an experimental evaluation of the effectiveness and efficiency of DBSCAN using synthetic data and real data of the SEQUOIA 2000 benchmark. The results of our experiments demonstrate that (1) DBSCAN is significantly more effective in discovering clusters of arbitrary shape than the well-known algorithm CLARANS, and that (2) DBSCAN outperforms CLARANS by a factor of more than 100 in terms of efficiency. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Competitive Learning <s> This work contains a theoretical study and computer simulations of a new self-organizing process. The principal discovery is that in a simple network of adaptive physical elements which receives signals from a primary event space, the signal representations are automatically mapped onto a set of output responses in such a way that the responses acquire the same topological order as that of the primary events. In other words, a principle has been discovered which facilitates the automatic formation of topologically correct maps of features of observable events. The basic self-organizing system is a one- or two-dimensional array of processing units resembling a network of threshold-logic units, and characterized by short-range lateral feedback between neighbouring units. Several types of computer simulations are used to demonstrate the ordering process as well as the conditions under which it fails. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Competitive Learning <s> In this paper we propose a data stream clustering algorithm, called Self Organizing density based clustering over data Stream (SOStream). This algorithm has several novel features. Instead of using a fixed, user defined similarity threshold or a static grid, SOStream detects structure within fast evolving data streams by automatically adapting the threshold for density-based clustering. It also employs a novel cluster updating strategy which is inspired by competitive learning techniques developed for Self Organizing Maps (SOMs). In addition, SOStream has built-in online functionality to support advanced stream clustering operations including merging and fading. This makes SOStream completely online with no separate offline components. Experiments performed on KDD Cup'99 and artificial datasets indicate that SOStream is an effective and superior algorithm in creating clusters of higher purity while having lower space and time requirements compared to previous stream clustering algorithms.
<s> BIB003 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Competitive Learning <s> Streaming data clustering is becoming the most efficient way to cluster a very large data set. In this paper we present a new approach, called G-Stream, for topological clustering of evolving data streams. G-Stream allows one to discover clusters of arbitrary shape without any assumption on the number of clusters and by making one pass over the data. The topological structure is represented by a graph wherein each node represents a set of “close” data points and neighboring nodes are connected by edges. The use of the reservoir, to hold, temporarily, the very distant data points from the current prototypes, avoids needless movements of the nearest nodes to data points and therefore, improving the quality of clustering. The performance of the proposed algorithm is evaluated on both synthetic and real-world data sets. <s> BIB004 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Competitive Learning <s> As more and more applications produce streaming data, clustering data streams has become an important technique for data and knowledge engineering. A typical approach is to summarize the data stream in real-time with an online process into a large number of so called micro-clusters. Micro-clusters represent local density estimates by aggregating the information of many data points in a defined area. On demand, a (modified) conventional clustering algorithm is used in a second offline step to recluster the micro-clusters into larger final clusters. For reclustering, the centers of the micro-clusters are used as pseudo points with the density estimates used as their weights. However, information about density in the area between micro-clusters is not preserved in the online process and reclustering is based on possibly inaccurate assumptions about the distribution of data within and between micro-clusters (e.g., uniform or Gaussian). This paper describes DBSTREAM, the first micro-cluster-based online clustering component that explicitly captures the density between micro-clusters via a shared density graph. The density information in this graph is then exploited for reclustering based on actual density between adjacent micro-clusters. We discuss the space and time complexity of maintaining the shared density graph. Experiments on a wide range of synthetic and real data sets highlight that using shared density improves clustering quality over other popular data stream clustering methods which require the creation of a larger number of smaller micro-clusters to achieve comparable results. <s> BIB005
More recently, algorithms also use competitive learning strategies in order to adapt the centroids of clusters over time. This is inspired by Self-Organizing Maps (SOMs) BIB002 where clusters compete to represent an observation, typically by moving cluster centers towards new observations based on their proximity. SOStream BIB003 (Self Organizing density based clustering over data Stream) combines DBSCAN BIB001 with Self-Organizing Maps (SOMs) BIB002 for stream clustering. It stores a time-faded weight, radius and centre for each cluster directly. A new observation is merged into its closest cluster if it lies within its radius. Following the idea of competitive learning, the algorithm also moves the k-nearest neighbors of the absorbing cluster in its direction. If clusters move into close proximity during this step, they are also merged. DBSTREAM BIB005 (Density-based Stream Clustering) is based on SOStream (see Section 4.6) but uses the shared density between two micro-clusters in order to decide whether micro-clusters belong to the same macro-cluster. A new observation x is merged into micro-clusters if it falls within the radius from their centre. Subsequently, the centers of all clusters that absorb the observation are updated by moving the centre towards x. If no cluster absorbs the point, it is used to initialize a new micro-cluster. Additionally, the algorithm maintains the shared density between two micro-clusters as the density of points in the intersection of their radii, relative to the size of the intersection area. In regular intervals, it removes micro-clusters and shared densities whose weight decayed below a respective threshold. In the offline component, micro-clusters with high shared density are merged into the same cluster. evoStream (Evolutionary Stream Clustering) makes use of an evolutionary algorithm in order to bridge the gap between the online and offline component. Evolutionary algorithms are inspired by natural evolution where promising solutions are combined and slightly modified to create offspring which can yield an improved solution. By iteratively selecting the best solutions, an evolutionary pressure is created which improves the result over time. evoStream uses this concept in order to iteratively improve the macro-clusters through recombinations and small variations. Since macro-clusters are created incrementally, the evolutionary steps can be performed while the online component waits for new observations, i.e., when the algorithm would usually idle. As a result, the computational overhead of the offline component is removed and clusters are available at any time. The online component is similar to DBSTREAM but does not maintain a shared density since it is not necessary for reclustering. G-Stream BIB004 (Growing Neural Gas over Data Streams) utilizes the concept of Neural Gas for data streams. The algorithm maintains a graph where each node represents a cluster. Nodes that share similar data points are connected by edges. Each edge has an associated age and nodes maintain an error term denoting the cluster's deviation. For a new observation x the two nearest clusters C1 and C2 are identified. If x does not fit into the radius of its closest cluster C1, it is temporarily stored away and later re-inserted. Otherwise, it is inserted into C1. The centre of C1 and all its connected neighbors are then moved in the direction of x.
Next, the ages of all outgoing edges of C1 are incremented and an edge from C1 to C2 is either inserted or its weight is reset to zero. The age of the edges serves a similar purpose as a fading function: edges that have grown too old are removed as they contain outdated information. In regular intervals, the algorithm inserts new nodes between the node with the largest deviation and its neighbor with the largest deviation.
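The competitive-learning update that SOStream, DBSTREAM and G-Stream share in spirit, i.e., pulling the winning centre and its neighbors towards a new observation, can be sketched as follows; the learning rates and the data structures are illustrative assumptions rather than the parameters of any of these algorithms.

```python
import numpy as np

def competitive_update(x, centres, adjacency, alpha=0.1, alpha_neigh=0.01):
    """Competitive-learning step: move the winning centre (and, more weakly,
    its neighbours) towards the new observation x.

    x:         1-d numpy array of floats.
    centres:   list of 1-d numpy float arrays (cluster centres).
    adjacency: dict mapping a centre index to the indices of its neighbours.
    """
    winner = min(range(len(centres)), key=lambda i: np.linalg.norm(x - centres[i]))
    centres[winner] += alpha * (x - centres[winner])
    for j in adjacency.get(winner, []):
        centres[j] += alpha_neigh * (x - centres[j])
    return winner
```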
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Summary <s> This work contains a theoretical study and computer simulations of a new self-organizing process. The principal discovery is that in a simple network of adaptive physical elements which receives signals from a primary event space, the signal representations are automatically mapped onto a set of output responses in such a way that the responses acquire the same topological order as that of the primary events. In other words, a principle has been discovered which facilitates the automatic formation of topologically correct maps of features of observable events. The basic self-organizing system is a one- or two-dimensional array of processing units resembling a network of threshold-logic units, and characterized by short-range lateral feedback between neighbouring units. Several types of computer simulations are used to demonstrate the ordering process as well as the conditions under which it fails. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Summary <s> Conceptual clustering is an important way of summarizing and explaining data. However, the recent formulation of this paradigm has allowed little exploration of conceptual clustering as a means of improving performance. Furthermore, previous work in conceptual clustering has not explicitly dealt with constraints imposed by real world environments. This article presents COBWEB, a conceptual clustering system that organizes data so as to maximize inference ability. Additionally, COBWEB is incremental and computationally economical, and thus can be flexibly applied in a variety of domains. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Summary <s> In data clustering, many approaches have been proposed such as K-means method and hierarchical method. One of the problems is that the results depend heavily on initial values and criterion to combine clusters. In this investigation, we propose a new method to cluster stream data while avoiding this deficiency. Here we assume there exists aspects of local regression in data. Then we develop our theory to combine clusters using F values by regression analysis as criterion and to adapt to stream data. We examine experiments and show how well the theory works. <s> BIB003 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Summary <s> Tools for automatically clustering streaming data are becoming increasingly important as data acquisition technology continues to advance. In this paper we present an extension of conventional kernel density clustering to a spatio-temporal setting, and also develop a novel algorithmic scheme for clustering data streams. Experimental results demonstrate both the high efficiency and other benefits of this new approach. <s> BIB004 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Summary <s> Clustering data streams has been attracting a lot of research efforts recently. However, this problem has not received enough consideration when the data streams are generated in a distributed fashion, whereas such a scenario is very common in real life applications. 
There exist constraining factors in clustering the data streams in the distributed environment: the data records generated are noisy or incomplete due to the unreliable distributed system; the system needs to on-line process a huge volume of data; the communication is potentially a bottleneck of the system. All these factors pose great challenge for clustering the distributed data streams. In this paper, we proposed an EM-based (Expectation Maximization) framework to effectively cluster the distributed data streams, with the above fundamental challenges in mind. In the presence of noisy or incomplete data records, our algorithms learn the distribution of underlying data streams by maximizing the likelihood of the data clusters. A test-and-cluster strategy is proposed to reduce the average processing cost, which is especially effective for online clustering over large data streams. Our extensive experimental studies show that the proposed algorithms can achieve a high accuracy with less communication cost, memory consumption and CPU time. <s> BIB005 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Summary <s> In this paper, we propose a novel data stream clustering algorithm, termed SVStream, which is based on support vector domain description and support vector clustering. In the proposed algorithm, the data elements of a stream are mapped into a kernel space, and the support vectors are used as the summary information of the historical elements to construct cluster boundaries of arbitrary shape. To adapt to both dramatic and gradual changes, multiple spheres are dynamically maintained, each describing the corresponding data domain presented in the data stream. By allowing for bounded support vectors (BSVs), the proposed SVStream algorithm is capable of identifying overlapping clusters. A BSV decaying mechanism is designed to automatically detect and remove outliers (noise). We perform experiments over synthetic and real data streams, with the overlapping, evolving, and noise situations taken into consideration. Comparison results with state-of-the-art data stream clustering methods demonstrate the effectiveness and efficiency of the proposed method. <s> BIB006
Distance-based algorithms are by far the most common and popular approaches in stream clustering. They make it possible to create accurate summaries of the entire stream with rather simple insertion rules. Since it is infeasible to store all observations within the clusters, distance-based algorithms usually summarize the observations associated with a cluster. Popular examples of this are Clustering Features, which only store the information required to calculate the location and radius of a cluster. Alternatively, some algorithms maintain medoids, i.e., representatives of clusters, or store the cluster centroids directly. In order to update cluster centroids over time, some algorithms also make use of competitive learning strategies, similar to Self-Organizing Maps (SOM) BIB001. Generally, distance-based algorithms are computationally inexpensive and will suit the majority of stream clustering scenarios well. However, they often rely on many parameters such as distance and weight thresholds, radii or cleanup intervals. This makes it more difficult to apply them in practice and requires either expert knowledge or extensive parameter configuration. Another common issue is that distance-based algorithms can often only find spherical clusters. However, this is usually due to the choice of offline component which can be easily replaced by other approaches that can detect arbitrary clusters such as DBSCAN or hierarchical clustering with single linkage. While the popular algorithms BIRCH, CluStream and DenStream face many problems, either due to lack of fading or complicated maintenance steps, we find newer algorithms such as DBSTREAM particularly interesting due to their simpler design. Grid-based approaches are a popular alternative to distance-based algorithms due to their simple design and support for arbitrarily shaped clusters. While many distance-based algorithms are only able to detect spherical clusters, almost all grid-based algorithms can identify clusters of arbitrary shape. This is mostly because the grid structure allows an easy design of an offline component where dense cells with common faces form clusters. The majority of grid-based algorithms partition the data space once into cells of fixed size. However, some algorithms do this recursively to create a more adaptive grid. Less common are algorithms where the size of cells is determined dynamically, mostly because of the increased computational costs. Lastly, some algorithms employ a hybrid strategy where a grid is used to establish distance-based approaches. Generally, the grid structure is less efficient than distance-based approaches due to its inflexible structure. For this reason, grid-based approaches often have higher memory requirements and need more micro-clusters to achieve the same quality as distance-based approaches. Empirical evidence has also shown this to be true for the most popular grid-based algorithm D-Stream.

Table 4 Overview of model-based stream clustering algorithms:

Algorithm          | Year | Window model
COBWEB BIB002      | 1987 | landmark
ICFR BIB003        | 2004 | damped
WStream BIB004     | 2006 | damped
CluDistream BIB005 | 2007 | landmark
SWEM               | 2009 | sliding
SVStream BIB006    | 2013 | damped

Model-based stream clustering algorithms summarize the data stream by fitting a statistical model, typically a mixture of distributions that is updated using variants of the Expectation Maximization (EM) algorithm. Table 4 gives an overview of 6 model-based algorithms. This class of algorithms is highly diverse and few interdependencies exist between the presented algorithms. CluDistream BIB005 uses the EM algorithm to process distributed data streams.
At each location, it maintains a number of Gaussian mixture distributions and a coordinator node is used to combine the distributions. For each location, the stream is processed in chunks and the first chunk is used to initialize a new clustering using EM. For subsequent chunks, the algorithm checks whether the current models can represent the chunk sufficiently well. This is done by calculating the difference between the average log-likelihood of the existing model and the average log-likelihood of the chunk under the existing model. If the difference is less than a threshold, the weight of the model is incremented. Else, the current model is stored and a new model is initialized by applying EM to the current chunk. Whenever the weight of a model is updated or a new model is initialized, the coordinator receives the update and incorporates the new information into a global model by merging or splitting the Gaussian distributions. SWEM (Sliding Window with Expectation Maximization) applies EM to chunks of data. Starting with random initial parameters, a set of m distributions is calculated for the first chunk and points are assigned to their most likely cluster. Each cluster is then summarized using a CF and k macro-clusters are generated by applying EM again. For a new chunk, the algorithm sets the initial values to the converged values of the previous chunk and incrementally applies EM to generate m new distributions. If a cluster grows too large or too small during this phase, the corresponding distributions can be split or merged. Finally, the m new clusters are summarized in CFs and used with the existing k clusters to apply EM again. COBWEB BIB002 maintains a classification tree where each node describes a cluster. The tree is built incrementally by descending a new entry x from the root to a leaf. On each level the algorithm makes the one of four clustering decisions that yields the highest clustering quality: (1) Insert x into the most fitting child, (2) Create a new cluster for x, (3) Combine the two nodes that can best absorb x and add the existing nodes as children of the new node, (4) Split the node that can best absorb x and move its children up one level. The quality of each decision is evaluated using a measure called Category Utility (CU) which defines a trade-off between intra-class similarity and inter-class distance. ICFR BIB003 (Incremental Clustering using F-value by Regression analysis) uses concepts from linear regression in order to cluster data streams. The algorithm assigns points to existing clusters based on their cosine similarity. To merge clusters, the algorithm finds the two closest clusters based on the Mahalanobis distance. If the merged clusters yield a greater F-value than the sum of the individual F-values, the clusters are merged. The F-value is a measure of model validity in linear regressions. If the clusters cannot be merged, the next closest two clusters are evaluated until the closest pair exceeds a distance threshold. WStream BIB004 uses multivariate kernel density estimates to maintain a number of rectangular windows in the data space. The idea is to use local maxima of a density estimate as cluster centers and the local minima as cluster boundaries. WStream transfers this approach to data streams. A new data point is either assigned to an existing window, whose centre is then moved towards the point, or it is used to initialize a new window of default size.
Windows can enlarge or contract depending on the ratio of points close to their centre and close to their border. SVStream BIB006 (Support Vector based Stream Clustering) is based on Support Vector Clustering (SVC). SVC transforms the data into a higher dimensional space and identifies the smallest sphere that encloses most points. When mapping the sphere back to the input space, the sphere forms a number of contour lines that represent clusters. SVStream iteratively maintains a number of spheres. The stream is processed in chunks and the first chunk is used to run SVC. For each subsequent chunk, the algorithm evaluates what portion of the chunk does not fall into the radius of existing spheres. If too many do not fit the current spheres, these values are used to initialize a new sphere. The remaining values are used to update the existing spheres. Model-based stream clustering algorithms are far less common than distance-based and grid-based approaches. Typically, strategies try to find a mixture of distributions that fits the data stream, e.g. CluDistream or SWEM. Unfortunately, no implementation of model-based algorithms is readily available which limits their usefulness in practice. In addition, they are often more computationally complex than comparable algorithms from the other categories. Projected stream clustering algorithms serve a niche for high dimensional data streams where it is not possible to perform prior feature selection in order to reduce the dimensionality. In general, these algorithms have added complexity associated with the selection of subspaces for each cluster. In return, they can identify clusters in very high dimensional space and can gracefully handle the curse of dimensionality. The most influential and popular algorithm of this category has been HPStream.
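To make the EM-based test-and-cluster idea of CluDistream (and, in a similar spirit, SWEM) more concrete, the following sketch fits and re-fits a Gaussian mixture per chunk using scikit-learn; the threshold, the number of components and the class design are illustrative assumptions, not the original distributed implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

class ChunkClusterer:
    """Test-and-cluster sketch in the spirit of CluDistream: a new chunk only
    triggers a refit when the current mixture no longer explains it well.
    k and the log-likelihood threshold are illustrative choices."""

    def __init__(self, k=3, threshold=1.0):
        self.k, self.threshold = k, threshold
        self.model, self.ref_loglik, self.weight = None, None, 0

    def process_chunk(self, chunk):
        chunk = np.asarray(chunk, dtype=float)
        if self.model is None:
            self._refit(chunk)
            return
        # average per-sample log-likelihood of the chunk under the old model
        chunk_loglik = self.model.score(chunk)
        if self.ref_loglik - chunk_loglik < self.threshold:
            self.weight += 1        # model still fits: only increase its weight
        else:
            self._refit(chunk)      # concept changed: store/replace the model

    def _refit(self, chunk):
        self.model = GaussianMixture(n_components=self.k, random_state=0).fit(chunk)
        self.ref_loglik = self.model.score(chunk)
        self.weight = 1
```

In the distributed setting described above, the weight updates and refitted mixtures would additionally be sent to a coordinator that merges or splits the Gaussian components into a global model.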
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> D-Stream <s> In the field of data stream analysis, conventional methods seem not quite useful, because they can neither adapt to the dynamic environment of data streams nor produce mining models and results that meet users' needs. A grid and density based clustering method is proposed to effectively address the problem. With this method, the mining procedure is divided into an online and an offline part, and the grid and density based clustering method is used to get final clusters for the data stream. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> D-Stream <s> Clustering real-time stream data is an important and challenging problem. Existing algorithms such as CluStream are based on the k-means algorithm. These clustering algorithms have difficulties finding clusters of arbitrary shapes and handling outliers. Further, they require the knowledge of k and a user-specified time window. To address these issues, this article proposes D-Stream, a framework for clustering stream data using a density-based approach. Our algorithm uses an online component that maps each input data record into a grid and an offline component that computes the grid density and clusters the grids based on the density. The algorithm adopts a density decaying technique to capture the dynamic changes of a data stream and an attraction-based mechanism to accurately generate cluster boundaries. Exploiting the intricate relationships among the decay factor, attraction, data density, and cluster structure, our algorithm can efficiently and effectively generate and adjust the clusters in real time. Further, a theoretically sound technique is developed to detect and remove sporadic grids mapped by outliers in order to dramatically improve the space and time efficiency of the system. The technique makes high-speed data stream clustering feasible without degrading the clustering quality. The experimental results show that our algorithm has superior quality and efficiency, can find clusters of arbitrary shapes, and can accurately recognize the evolving behaviors of real-time data streams. <s> BIB002
In BIB002, the authors extended their concept with a measure of attraction that incorporates positional information of data within a grid-cell. This variant only merges neighboring cells if they share many points at the cell border. BIB001 is a small extension on how to handle points that lie exactly on the grid boundaries. For such a point, the distance to the adjacent cell centers is computed and the point is assigned to its closest cell. If the observation has the same distance to multiple cells, it is assigned to the one with the higher density. If this also does not break the tie, it is inserted into the cell that has been updated more recently.
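The described tie-breaking for boundary points could be sketched as follows; the cell representation and the field names are illustrative assumptions, not taken from the original papers.

```python
import numpy as np

def assign_boundary_point(x, candidate_cells):
    """Tie-breaking sketch for points that fall exactly on a grid boundary.

    candidate_cells: list of dicts {"centre": array, "density": float,
    "last_update": int} describing the adjacent cells.
    """
    dists = [np.linalg.norm(np.asarray(x, float) - c["centre"]) for c in candidate_cells]
    best = min(dists)
    tied = [c for c, d in zip(candidate_cells, dists) if np.isclose(d, best)]
    if len(tied) > 1:                       # same distance: prefer the denser cell
        top_density = max(c["density"] for c in tied)
        tied = [c for c in tied if c["density"] == top_density]
    if len(tied) > 1:                       # still tied: most recently updated cell
        tied = [max(tied, key=lambda c: c["last_update"])]
    return tied[0]
```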
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> DD-Stream <s> Clustering is a widely used knowledge discovery technique. It helps uncover structures in data that were not previously known. The clustering of large data sets has received a lot of attention in recent years, however, clustering is still a challenging task since many published algorithms fail to do well in scaling with the size of the data set and the number of dimensions that describe the points, or in finding arbitrary shapes of clusters, or dealing effectively with the presence of noise. In this paper, we present a new clustering algorithm, based on self-similarity properties of the data sets. Self-similarity is the property of being invariant with respect to the scale used to look at the data set. While fractals are self-similar at every scale used to look at them, many data sets exhibit self-similarity over a range of scales. Self-similarity can be measured using the fractal dimension. The new algorithm which we call Fractal Clustering (FC) places points incrementally in the cluster for which the change in the fractal dimension after adding the point is the least. This is a very natural way of clustering points, since points in the same cluster have a great degree of self-similarity among them (and much less self-similarity with respect to points in other clusters). FC requires one scan of the data, is suspendable at will, providing the best answer possible at that point, and is incremental. We show via experiments that FC effectively deals with large data sets, high-dimensionality and noise and is capable of recognizing clusters of arbitrary shape. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> DD-Stream <s> Clustering for evolving data streams demands that the algorithm should be capable of adapting the discovered clustering model to the changes in data characteristics. In this paper we propose an algorithm for exclusive and complete clustering of data streams. We explain the concept of completeness of a stream clustering algorithm and show that the proposed algorithm guarantees detection of a cluster if one exists. The algorithm has an on-line component with constant order time complexity and hence delivers predictable performance for stream processing. The algorithm is capable of detecting outliers and change in data distribution. Clustering is done by growing dense regions in the data space, honouring the recency constraint. The algorithm delivers a complete description of clusters facilitating semantic interpretation. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> DD-Stream <s> In recent years, uncertain data streams, which arise in many real applications, attract more and more attention of researchers. As one aspect of uncertain character, existence-uncertainty can affect the clustering process and results significantly. The lately reported clustering algorithms are all based on the K-Means algorithm with its inherent shortcomings. The DCUStream algorithm, a density-based clustering algorithm over uncertain data streams, is proposed in this paper. It can find arbitrarily shaped clusters with less time cost in high-dimensional data streams. In the meantime, a dynamic density threshold is designed to accommodate the changing density of grids with time in the data stream.
The experimental results show that the DCUStream algorithm can acquire more accurate clustering results and execute the clustering process more efficiently on a progressing uncertain data stream. <s> BIB003
ExCC BIB002 (Exclusive and Complete Clustering) constructs a grid where the number of cells and grid boundaries are chosen by the user. This makes it possible to handle categorical data, where the number of cells is chosen to be equal to the number of attribute levels. Clusters are identified as neighboring dense cells. Cells of numeric variables are considered neighbors if they share a common vertex. Cells of categorical variables employ a threshold on a similarity function between the attribute levels. To form macro-clusters, the algorithm iteratively chooses an unvisited dense cell and initializes a new cluster. Each neighboring grid-cell is then placed in the same cluster. This is repeated until all cells have been visited. DCUStream BIB003 (Density-based Clustering algorithm of Uncertain data Stream) aims to handle uncertain data streams, similar to HUE-Stream (see Section 4.3), where each observation is assumed to have an existence probability. The algorithm is initialized by collecting a batch of data and mapping it to a grid of fixed size. The density of a cell is defined as the sum of all existence probabilities, faded over time. A cell is considered dense when its density is above a dynamic threshold. To generate a clustering, the algorithm selects the dense cell with the highest density and assigns all its neighboring cells to the same cluster. Neighboring sparse cells are considered the boundary of a cluster. This is repeated for all dense cells. DENGRIS-Stream (Density Grid-based algorithm for clustering data streams over Sliding window) is a grid-based algorithm that uses a sliding window model. New observations are mapped into a grid of fixed size and the cells' densities are maintained. Densities are implicitly decayed by considering them relative to the total number of observations in the stream. In regular intervals, cells whose density decayed below a threshold or cells that are no longer inside the sliding window are removed. Macro-clusters are formed by grouping neighboring dense cells into the same cluster. Fractal Clustering BIB001 follows an unusual grid-based approach. It uses the concept of fractal dimensions as a measure of size for a set of points. A common way to calculate the fractal dimension is box counting: the space is divided into grid-cells of size r and the number of cells that are occupied by points of the data, N(r), is counted. Then, the fractal dimension can be calculated as D = lim_{r→0} log N(r) / log(1/r). Fractal Clustering is first initialized with a sample by recursively placing close points into the same cluster (similar to DBSCAN). For a new observation, the algorithm then evaluates how much the addition of the point would change the fractal dimension of each cluster. It then inserts the point into the cluster whose fractal dimension changes the least. However, if the change in fractal dimension is too large, the observation is considered noise instead.
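A minimal sketch of the box-counting estimate behind Fractal Clustering is shown below; the chosen cell sizes and the least-squares fit over a handful of resolutions are illustrative simplifications of the limit formula above.

```python
import numpy as np

def fractal_dimension(points, sizes=(1.0, 0.5, 0.25, 0.125)):
    """Box-counting estimate of the fractal dimension (sketch): count the
    occupied grid cells N(r) for several cell sizes r and fit the slope of
    log N(r) against log(1/r)."""
    points = np.asarray(points, dtype=float)
    log_n, log_inv_r = [], []
    for r in sizes:
        occupied = {tuple(cell) for cell in np.floor(points / r).astype(int)}
        log_n.append(np.log(len(occupied)))
        log_inv_r.append(np.log(1.0 / r))
    slope, _ = np.polyfit(log_inv_r, log_n, 1)
    return float(slope)

# Fractal Clustering would insert a point x into the cluster whose fractal
# dimension changes the least, e.g. (cluster being a list of points):
# change = abs(fractal_dimension(cluster + [x]) - fractal_dimension(cluster))
```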
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Recursive Partitioning <s> A data stream is a massive unbounded sequence of data elements continuously generated at a rapid rate. Due to this reason, most algorithms for data streams sacrifice the correctness of their results for fast processing time. The processing time is greatly influenced by the amount of information that should be maintained. This paper proposes a statistical grid-based approach to clustering data elements of a data stream. Initially, the multidimensional data space of a data stream is partitioned into a set of mutually exclusive equal-size initial cells. When the support of a cell becomes high enough, the cell is dynamically divided into two mutually exclusive intermediate cells based on its distribution statistics. Three different ways of partitioning a dense cell are introduced. Eventually, a dense region of each initial cell is recursively partitioned until it becomes the smallest cell called a unit cell. A cluster of a data stream is a group of adjacent dense unit cells. In order to minimize the number of cells, a sparse intermediate or unit cell is pruned if its support becomes much less than a minimum support. Furthermore, in order to confine the usage of memory space, the size of a unit cell is dynamically minimized such that the result of clustering becomes as accurate as possible. The proposed algorithm is analyzed by a series of experiments to identify its various characteristics. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Recursive Partitioning <s> To effectively trace the clusters of recently generated data elements in an on-line data stream, a sibling list and a cell tree are proposed in this paper. Initially, the multi-dimensional data space of a data stream is partitioned into mutually exclusive equal-sized grid-cells. Each grid-cell monitors the recent distribution statistics of data elements within its range. The old distribution statistics of each grid-cell are diminished by a predefined decay rate as time goes by, so that the effect of the obsolete information on the current result of clustering can be eliminated without maintaining any data element physically. Given a partitioning factor h, a dense grid-cell is partitioned into h equal-size smaller grid-cells. Such partitioning is continued until a grid-cell becomes the smallest one called a unit grid-cell. Conversely, a set of consecutive sparse grid-cells can be merged into a single grid-cell. A sibling list is a structure to manage the set of all grid-cells in a one-dimensional data space and it acts as an index for locating a specific grid-cell. Upon creating a dense unit grid-cell on a one-dimensional data space, a new sibling list for another dimension is created as a child of the grid-cell. In such a way, a cell tree is created. By repeating this process, a multi-dimensional dense unit grid-cell is identified by a path of a cell tree. Furthermore, in order to confine the usage of memory space, the size of a unit grid-cell is adaptively minimized such that the result of clustering becomes as accurate as possible at all times. The proposed method is comparatively analyzed by a series of experiments to identify its various characteristics. 
<s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Recursive Partitioning <s> In data stream clustering, it is desirable to have algorithms that are able to detect clusters of arbitrary shape, clusters that evolve over time, and clusters with noise. Existing stream data clustering algorithms are generally based on an online-offline approach: The online component captures synopsis information from the data stream (thus, overcoming real-time and memory constraints) and the offline component generates clusters using the stored synopsis. The online-offline approach affects the overall performance of stream data clustering in various ways: the ease of deriving synopsis from streaming data; the complexity of data structure for storing and managing synopsis; and the frequency at which the offline component is used to generate clusters. In this article, we propose an algorithm that (1) computes and updates synopsis information in constant time; (2) allows users to discover clusters at multiple resolutions; (3) determines the right time for users to generate clusters from the synopsis information; (4) generates clusters of higher purity than existing algorithms; and (5) determines the right threshold function for density-based clustering based on the fading model of stream data. To the best of our knowledge, no existing data stream algorithms has all of these features. Experimental results show that our algorithm is able to detect arbitrarily shaped, evolving clusters with high quality. <s> BIB003
Stats-Grid BIB001 is an early algorithm which recursively partitions grid-cells. The algorithm begins by splitting the data into grid-cells of fixed size. Each cell maintains its density, mean and standard deviation. The algorithm then recursively partitions grid-cells until cells become sufficiently small unit cells. The aim is to find adjacent unit cells with large density which can be used to form macro-clusters. The algorithm splits a cell into two subcells whenever it has reached sufficient density. The size of the subcells is dynamically adapted based on the distribution of data within the cell. The authors propose three separate splitting strategies, for example choosing the dimension in which the cell's standard deviation is largest and splitting at the mean. Since the weight of cells is calculated relative to the total number of observations, outdated cells can be removed and their statistics returned to the parent cell. Cell-Tree BIB002 is an extension of Stats-Grid which also tries to find adjacent unit cells of sufficient density. In contrast to Stats-Grid, subcells are not dynamically sized based on the distribution of the cell. Instead, they are split into a pre-defined number of evenly sized subcells. The summary statistics of the subcells are initialized by distributing the statistics of the parent cell following the normal distribution. To efficiently maintain the cells, the authors propose a siblings list. The siblings list is a linear list where each node contains a number of grid-cells along one dimension as well as a link to the next node. Whenever a cell is split, the created subcells replace their parent cell in its node. To maintain a siblings list over multiple dimensions, a first-child / next-sibling tree can be used where subsequent dimensions are added as children of the list-nodes. The splitting strategy of MR-Stream BIB003 is similar but splits each dimension in half, effectively creating a tree of cells as shown in Figure 9 (tree structure in MR-Stream). New observations start at the root cell and are recursively assigned to the appropriate child-cell. If a child does not exist yet, it is created until a maximum depth is reached. If the insertion causes a parent to only contain children of high density, the children are discarded since the parent node is able to represent this information already. Additionally, the tree is regularly pruned by removing leaves with insufficient weight and removing children of nodes that only contain dense or only sparse children. To generate the macro-clusters, the user can choose a desired height of the tree. For every unclustered cell, the algorithm initializes a new macro-cluster and adds all neighboring dense cells. If the size and weight of the cluster are too low, it is considered noise. PKSStream is similar to MR-Stream but does not require a subcell on all heights of the tree. It only maintains intermediate nodes when there are more than K − 1 non-empty children. Each observation is iteratively descended down the tree until either a leaf is reached or the child does not exist. In the latter case a new cell is initialized. In regular intervals, the algorithm evaluates all leaf nodes and removes those with insufficient weight. The offline component is the same as in MR-Stream for the leaves of the tree.
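A minimal sketch of the recursive partitioning idea shared by Stats-Grid, Cell-Tree and MR-Stream is given below: each cell keeps a density and, once it becomes dense enough, is split into children (here, by halving every dimension, similar to MR-Stream). The depth limit, the thresholds and the absence of any redistribution of statistics to the children are simplifying assumptions, not the exact published procedures.

```python
import itertools

class CellNode:
    """One cell of a recursive grid tree (halving every dimension, simplified)."""

    def __init__(self, lower, upper, depth=0):
        self.lower, self.upper = lower, upper   # bounding box of the cell
        self.depth = depth
        self.density = 0.0
        self.children = None                    # created lazily on split

    def insert(self, point, max_depth=4, split_threshold=10.0):
        self.density += 1.0
        if self.depth >= max_depth:
            return
        if self.children is None and self.density >= split_threshold:
            self._split()
        if self.children is not None:
            self._child_for(point).insert(point, max_depth, split_threshold)

    def _split(self):
        mid = [(l + u) / 2.0 for l, u in zip(self.lower, self.upper)]
        self.children = {}
        # one child per corner of the halved hyper-rectangle (2^d children)
        for corner in itertools.product((0, 1), repeat=len(self.lower)):
            lo = [self.lower[i] if c == 0 else mid[i] for i, c in enumerate(corner)]
            up = [mid[i] if c == 0 else self.upper[i] for i, c in enumerate(corner)]
            self.children[corner] = CellNode(lo, up, self.depth + 1)

    def _child_for(self, point):
        mid = [(l + u) / 2.0 for l, u in zip(self.lower, self.upper)]
        corner = tuple(0 if x < m else 1 for x, m in zip(point, mid))
        return self.children[corner]

    def dense_leaves(self, min_weight=5.0):
        """Collect dense leaf cells, the input of the offline clustering step."""
        if self.children is None:
            return [self] if self.density >= min_weight else []
        return [leaf for child in self.children.values()
                for leaf in child.dense_leaves(min_weight)]
```

The dense leaves collected at a chosen depth would then be merged into macro-clusters by grouping neighboring cells, analogous to the grid sketch shown earlier.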
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Hybrid Grid-Approaches <s> Clustering algorithms are attractive for the task of class identification in spatial databases. However, the application to large spatial databases rises the following requirements for clustering algorithms: minimal requirements of domain knowledge to determine the input parameters, discovery of clusters with arbitrary shape and good efficiency on large databases. The well-known clustering algorithms offer no solution to the combination of these requirements. In this paper, we present the new clustering algorithm DBSCAN relying on a density-based notion of clusters which is designed to discover clusters of arbitrary shape. DBSCAN requires only one input parameter and supports the user in determining an appropriate value for it. We performed an experimental evaluation of the effectiveness and efficiency of DBSCAN using synthetic data and real data of the SEQUOIA 2000 benchmark. The results of our experiments demonstrate that (1) DBSCAN is significantly more effective in discovering clusters of arbitrary shape than the well-known algorithm CLAR-ANS, and that (2) DBSCAN outperforms CLARANS by a factor of more than 100 in terms of efficiency. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Hybrid Grid-Approaches <s> Density-based method has emerged as a worthwhile class for clustering data streams. Recently, a number of density-based algorithms have been developed for clustering data streams. However, existing density-based data stream clustering algorithms are not without problem. There is a dramatic decrease in the quality of clustering when there is a range in density of data. In this paper, a new method, called the MuDi-Stream, is developed. It is an online-offline algorithm with four main components. In the online phase, it keeps summary information about evolving multi-density data stream in the form of core mini-clusters. The offline phase generates the final clusters using an adapted density-based clustering algorithm. The grid-based method is used as an outlier buffer to handle both noises and multi-density data and yet is used to reduce the merging time of clustering. The algorithm is evaluated on various synthetic and real-world datasets using different quality metrics and further, scalability results are compared. The experimental results show that the proposed method in this study improves clustering quality in multi-density environments. <s> BIB002
HDCStream (hybrid density-based clustering for data stream) first combined grid-based algorithms with the concept of distance-based algorithms. In particular, it maintains a grid where dense cells can be promoted to become micro-clusters as known from distance-based algorithms (see Section 4). Each observation in the stream is assigned to its closest micro-cluster if it lies within a radius threshold. Otherwise, it is inserted into the grid instead. Once a grid-cell has accumulated sufficient density, its points are used to initialize a new micro-cluster. Finally, the cell is no longer maintained, as its information has been transferred to the micro-cluster. In regular intervals, all micro-clusters and cells are evaluated and removed if their density has decayed below a respective threshold. Whenever a clustering request arrives, the micro-clusters are considered virtual points in order to apply DBSCAN. MuDi-Stream BIB002 (Multi Density Data Stream) is an extension of HDCStream that can handle varying degrees of density within the same data stream. It uses the same insertion strategy as HDCStream with both grid-cells and micro-clusters. However, the offline component applies a variant of DBSCAN BIB001 called M-DBSCAN to all micro-clusters. M-DBSCAN only requires a MinPts parameter and then estimates the ε parameter from the mean and standard deviation around the centre.
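The hybrid idea of HDCStream and MuDi-Stream, using the grid only as an outlier buffer and promoting a cell to a micro-cluster once it becomes dense, can be sketched as follows; the distance threshold, the promotion threshold and the micro-cluster summary are illustrative assumptions rather than the published parameterization.

```python
import math
from collections import defaultdict

class HybridStreamClusterer:
    """Sketch of a grid/micro-cluster hybrid: the grid buffers potential outliers."""

    def __init__(self, radius=1.0, cell_size=1.0, promote_at=5):
        self.radius = radius             # assignment threshold for micro-clusters
        self.cell_size = cell_size
        self.promote_at = promote_at     # density needed to promote a cell
        self.micro = []                  # list of (center, weight)
        self.buffer = defaultdict(list)  # cell index -> buffered points

    def insert(self, point):
        # 1) try to absorb the point into the closest micro-cluster
        if self.micro:
            i = min(range(len(self.micro)),
                    key=lambda j: self._dist(self.micro[j][0], point))
            center, weight = self.micro[i]
            if self._dist(center, point) <= self.radius:
                new_center = [(c * weight + x) / (weight + 1)
                              for c, x in zip(center, point)]
                self.micro[i] = (new_center, weight + 1)
                return
        # 2) otherwise buffer it in the grid
        idx = tuple(int(math.floor(x / self.cell_size)) for x in point)
        self.buffer[idx].append(point)
        # 3) promote the cell to a micro-cluster once it is dense enough
        if len(self.buffer[idx]) >= self.promote_at:
            pts = self.buffer.pop(idx)
            center = [sum(xs) / len(pts) for xs in zip(*pts)]
            self.micro.append((center, float(len(pts))))

    @staticmethod
    def _dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

On a clustering request, the micro-cluster centers would then be handed to an offline algorithm such as DBSCAN, as described above.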
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Projected Approaches <s> Many clustering algorithms tend to break down in high-dimensional feature spaces, because the clusters often exist only in specific subspaces (attribute subsets) of the original feature space. Therefore, the task of projected clustering (or subspace clustering) has been defined recently. As a solution to tackle this problem, we propose the concept of local subspace preferences, which captures the main directions of high point density. Using this concept, we adopt density-based clustering to cope with high-dimensional data. In particular, we achieve the following advantages over existing approaches: Our proposed method has a determinate result, does not depend on the order of processing, is robust against noise, performs only one single scan over the database, and is linear in the number of dimensions. A broad experimental evaluation shows that our approach yields results of significantly better quality than recent work on clustering high-dimensional data. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Projected Approaches <s> A real-life data stream usually contains many dimensions and some dimensional values of its data elements may be missing. In order to effectively extract the on-going change of a data stream with respect to all the subsets of the dimensions of the data stream, a grid-based subspace clustering algorithm is proposed in this paper. Given an n-dimensional data stream, the on-going distribution statistics of data elements in each one-dimension data space is firstly monitored by a list of grid-cells called a sibling list. Once a dense grid-cell of a first-level sibling list becomes a dense unit grid-cell, new second-level sibling lists are created as its child nodes in order to trace any cluster in all possible two-dimensional rectangular subspaces. In such a way, a sibling tree grows up to the nth level at most and a k-dimensional subcluster can be found in the kth level of the sibling tree. The proposed method is comparatively analyzed by a series of experiments to identify its various characteristics. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Projected Approaches <s> To effectively trace the clusters of recently generated data elements in an on-line data stream, a sibling list and a cell tree are proposed in this paper. Initially, the multi-dimensional data space of a data stream is partitioned into mutually exclusive equal-sized grid-cells. Each grid-cell monitors the recent distribution statistics of data elements within its range. The old distribution statistics of each grid-cell are diminished by a predefined decay rate as time goes by, so that the effect of the obsolete information on the current result of clustering can be eliminated without maintaining any data element physically. Given a partitioning factor h, a dense grid-cell is partitioned into h equal-size smaller grid-cells. Such partitioning is continued until a grid-cell becomes the smallest one called a unit grid-cell. Conversely, a set of consecutive sparse grid-cells can be merged into a single grid-cell. A sibling list is a structure to manage the set of all grid-cells in a one-dimensional data space and it acts as an index for locating a specific grid-cell. 
Upon creating a dense unit grid-cell on a one-dimensional data space, a new sibling list for another dimension is created as a child of the grid-cell. In such a way, a cell tree is created. By repeating this process, a multi-dimensional dense unit grid-cell is identified by a path of a cell tree. Furthermore, in order to confine the usage of memory space, the size of a unit grid-cell is adaptively minimized such that the result of clustering becomes as accurate as possible at all times. The proposed method is comparatively analyzed by a series of experiments to identify its various characteristics. <s> BIB003 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Projected Approaches <s> In this paper, we have proposed, developed and experimentally validated our novel subspace data stream clustering, termed PreDeConStream. The technique is based on the two phase mode of mining streaming data, in which the first phase represents the process of the online maintenance of a data structure, that is then passed to an offline phase of generating the final clustering model. The technique works on incrementally updating the output of the online phase stored in a micro-cluster structure, taking into consideration those micro-clusters that are fading out over time, speeding up the process of assigning new data points to existing clusters. A density based projected clustering model in developing PreDeConStream was used. With many important applications that can benefit from such technique, we have proved experimentally the superiority of the proposed methods over state-of-the-art techniques. <s> BIB004
A special category of stream clustering algorithms deals with high dimensional data streams. These types of algorithms address the curse of dimensionality, i.e., the problem that almost all points have roughly equal distances in very high dimensional space. In such scenarios, clusters are defined according to a subset of dimensions where each cluster has an associated set of dimensions in which it exists. Even though these algorithms often use concepts from distance and grid-based algorithms, their application scenarios and strategies are unique and deserve their own category. Table 5 summarizes 4 projected clustering algorithms and Figure 10 shows the relationship between the algorithms. Table 5 (algorithm, year, window model, offline clustering): HPStream, 2004, damped, k-means; SiblingTree BIB002, 2007, damped, none; HDDStream, 2012, damped, PreDeCon BIB001; PreDeConStream BIB004, 2012, damped, PreDeCon BIB001. Despite their similarity, HDDStream and PreDeConStream have been developed independently. HPStream (High-dimensional Projected Stream clustering) is an extension of CluStream (see Section 4.2) for high dimensional data. The algorithm uses a time-faded CF with an additional bit vector that denotes the associated dimensions of a cluster. The algorithm normalizes each dimension by regularly sampling the current standard deviation and adjusting the existing clusters accordingly. The algorithm initializes with k-means and associates each cluster with the l dimensions in which it has the smallest radius. The cluster assignment is then updated by only considering the associated dimensions for each cluster. Finally, the process is repeated until the clusters and their associated dimensions converge. A new data point is tentatively added to each cluster to update the dimension association and is added to its closest cluster if it does not increase the cluster radius above a threshold. SiblingTree BIB002 is an extension of CellTree BIB003 (see Section 5.2). It uses the same tree structure but allows for subspace clusters. To do so, the algorithm creates a siblings list for each dimension as children of the root. New data points are recursively assigned to the grid-cells using a depth-first approach. If a cell's density increases beyond a threshold, it is split as in CellTree. If a unit cell's density increases beyond a threshold, new sibling lists for each remaining dimension are created as children of the cell. Additionally, if a cell's density decays below a density threshold, its children are removed and it is merged with consecutive sparse cells. Clusters in the tree are defined as adjacent unit grid-cells with enough density. HDDStream (Density-based Projected Clustering over High Dimensional Data Streams) is initialized by collecting a batch of observations and applying PreDeCon BIB001. PreDeCon can be considered a subspace version of DBSCAN. The update procedure is similar to DenStream (see Section 4.3): a new observation is assigned to its closest potential core micro-cluster if its projected radius does not increase beyond a threshold. Otherwise, the same attempt is made for the closest outlier-cluster. If both cannot absorb the observation, a new cluster is initialized. Periodically, the algorithm downgrades potential core micro-clusters if their weight is too low or if the number of associated dimensions is too large. Outlier-clusters are removed as in DenStream. To generate the macro-clusters, a variant of PreDeCon BIB001 is used.
PreDeConStream BIB004 (Subspace Preference weighted Density Connected clustering of Streaming data) was developed in parallel with HDDStream (see Section 7) and both share many concepts. The algorithm is also initialized using the PreDeCon BIB001 algorithm and the insertion strategy is the same as in DenStream (see Section 4.3). Additionally, the algorithm adjusts the clustering in regular intervals using a modified part of the PreDeCon algorithm on the micro-clusters that were changed during the online phase.
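To make the idea of projected assignment more concrete, the sketch below associates each cluster with the l dimensions in which its radius is smallest and measures distances only in those dimensions, loosely following the HPStream description above; the summary statistics, the radius definition and the update rule are simplified assumptions rather than the exact published algorithm.

```python
import math

class ProjectedCluster:
    """Cluster summary that only 'lives' in a subset of dimensions (sketch)."""

    def __init__(self, point, l):
        self.n = 1
        self.ls = list(point)                    # linear sums per dimension
        self.ss = [x * x for x in point]         # squared sums per dimension
        self.l = l                               # number of associated dimensions
        self.dims = list(range(len(point)))      # associated dimensions

    def radii(self):
        # per-dimension radius (standard deviation) from the summary statistics
        return [math.sqrt(max(self.ss[i] / self.n - (self.ls[i] / self.n) ** 2, 0.0))
                for i in range(len(self.ls))]

    def update_dims(self):
        # keep the l dimensions with the smallest radius
        r = self.radii()
        self.dims = sorted(range(len(r)), key=lambda i: r[i])[: self.l]

    def projected_distance(self, point):
        center = [s / self.n for s in self.ls]
        return math.sqrt(sum((point[i] - center[i]) ** 2 for i in self.dims))

    def absorb(self, point):
        self.n += 1
        for i, x in enumerate(point):
            self.ls[i] += x
            self.ss[i] += x * x
        self.update_dims()

def assign(point, clusters, threshold):
    """Assign a point to the closest cluster in its projected subspace, if close enough."""
    if not clusters:
        return None
    best = min(clusters, key=lambda c: c.projected_distance(point))
    if best.projected_distance(point) <= threshold:
        best.absorb(point)
        return best
    return None  # caller may start a new cluster instead
```

The important point is that the distance is computed only over each cluster's own associated dimensions, so different clusters can exist in different subspaces of the full feature space.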
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Application and Software <s> The data stream model has recently attracted attention for its applicability to numerous types of data, including telephone records, Web documents, and clickstreams. For analysis of such data, the ability to process the data in a single pass, or a small number of passes, while using little memory, is crucial. We describe such a streaming algorithm that effectively clusters large data streams. We also provide empirical evidence of the algorithm's performance on synthetic and real data streams. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Application and Software <s> In data clustering, many approaches have been proposed such as K-means method and hierarchical method. One of the problems is that the results depend heavily on initial values and criterion to combine clusters. In this investigation, we propose a new method to cluster stream data while avoiding this deficiency. Here we assume there exists aspects of local regression in data. Then we develop our theory to combine clusters using F values by regression analysis as criterion and to adapt to stream data. We examine experiments and show how well the theory works. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Application and Software <s> Existing data-stream clustering algorithms such as CluStream are based on k-means. These clustering algorithms are incompetent to find clusters of arbitrary shapes and cannot handle outliers. Further, they require the knowledge of k and user-specified time window. To address these issues, this paper proposes D-Stream, a framework for clustering stream data using a density-based approach. The algorithm uses an online component which maps each input data record into a grid and an offline component which computes the grid density and clusters the grids based on the density. The algorithm adopts a density decaying technique to capture the dynamic changes of a data stream. Exploiting the intricate relationships between the decay factor, data density and cluster structure, our algorithm can efficiently and effectively generate and adjust the clusters in real time. Further, a theoretically sound technique is developed to detect and remove sporadic grids mapped to by outliers in order to dramatically improve the space and time efficiency of the system. The technique makes high-speed data stream clustering feasible without degrading the clustering quality. The experimental results show that our algorithm has superior quality and efficiency, can find clusters of arbitrary shapes, and can accurately recognize the evolving behaviors of real-time data streams. <s> BIB003 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Application and Software <s> Clustering streaming data requires algorithms which are capable of updating clustering results for the incoming data. As data is constantly arriving, time for processing is limited. Clustering has to be performed in a single pass over the incoming data and within the possibly varying inter-arrival times of the stream. Likewise, memory is limited, making it impossible to store all data. For clustering, we are faced with the challenge of maintaining a current result that can be presented to the user at any given time.
In this work, we propose a parameter free algorithm that automatically adapts to the speed of the data stream. It makes best use of the time available under the current constraints to provide a clustering of the objects seen up to that point. Our approach incorporates the age of the objects to reflect the greater importance of more recent data. Moreover, we are capable of detecting concept drift, novelty and outliers in the stream. For efficient and effective handling, we introduce the ClusTree, a compact and self-adaptive index structure for maintaining stream summaries. Our experiments show that our approach is capable of handling a multitude of different stream characteristics for accurate and scalable anytime stream clustering. <s> BIB004 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Application and Software <s> Clustering real-time stream data is an important and challenging problem. Existing algorithms such as CluStream are based on the k-means algorithm. These clustering algorithms have difficulties finding clusters of arbitrary shapes and handling outliers. Further, they require the knowledge of k and user-specified time window. To address these issues, this article proposes D-Stream, a framework for clustering stream data using a density-based approach. Our algorithm uses an online component that maps each input data record into a grid and an offline component that computes the grid density and clusters the grids based on the density. The algorithm adopts a density decaying technique to capture the dynamic changes of a data stream and a attraction-based mechanism to accurately generate cluster boundaries. Exploiting the intricate relationships among the decay factor, attraction, data density, and cluster structure, our algorithm can efficiently and effectively generate and adjust the clusters in real time. Further, a theoretically sound technique is developed to detect and remove sporadic grids mapped by outliers in order to dramatically improve the space and time efficiency of the system. The technique makes high-speed data stream clustering feasible without degrading the clustering quality. The experimental results show that our algorithm has superior quality and efficiency, can find clusters of arbitrary shapes, and can accurately recognize the evolving behaviors of real-time data streams. <s> BIB005 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Application and Software <s> In the recent past, telecommunication industry has gone through tremendous growth. It has resulted in huge production of data on daily basis for the telecom companies. Now it is a challenge for the telecom operators to manage huge amount of data and then use this data for decision and policy making processes. The data generated in telecom industry is in the form of data stream as it is continuously being generated. So we need such a clustering algorithm which can perform well with streams or continuous data. At the same time we also need to detect outliers and erroneous data. Clusters are not necessarily of globular shape as we don't have prior knowledge of number of clusters and their shape. To address all the above mentioned problems we have used D-Stream clustering algorithm in our implementation to get desired results. This paper, discusses the algorithm and implementation of DStream algorithm along with its experimental results on synthetically generated telecommunication data. 
<s> BIB006 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Application and Software <s> Unsupervised identification of groups in large data sets is important for many machine learning and knowledge discovery applications. Conventional clustering approaches (k-means, hierarchical clustering, etc.) typically do not scale well for very large data sets. In recent years, data stream clustering algorithms have been proposed which can deal efficiently with potentially unbounded streams of data. This paper is the first to investigate the use of data stream clustering algorithms as light-weight alternatives to conventional algorithms on large non-streaming data. We will discuss important issues including order dependence and report the results of an initial study using several synthetic and real-world data sets. <s> BIB007 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Application and Software <s> Clustering evolving data streams is important to be performed in a limited time with a reasonable quality. The existing micro clustering based methods do not consider the distribution of data points inside the micro cluster. We propose LeaDen-Stream (Leader Density-based clustering algorithm over evolving data Stream), a density-based clustering algorithm using leader clustering. The algorithm is based on a two-phase clustering. The online phase selects the proper mini-micro or micro-cluster leaders based on the distribution of data points in the micro clusters. Then, the leader centers are sent to the offline phase to form final clusters. In LeaDen-Stream, by carefully choosing between two kinds of micro leaders, we decrease time complexity of the clustering while maintaining the cluster quality. A pruning strategy is also used to filter out real data from noise by introducing dense and sparse mini-micro and micro-cluster leaders. Our performance study over a number of real and synthetic data sets demonstrates the effectiveness and efficiency of our method. <s> BIB008 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Application and Software <s> We design a data stream algorithm for the k-means problem, called BICO, that combines the data structure of the SIGMOD Test of Time award winning algorithm BIRCH [27] with the theoretical concept of coresets for clustering problems. The k-means problem asks for a set C of k centers minimizing the sum of the squared distances from every point in a set P to its nearest center in C. In a data stream, the points arrive one by one in arbitrary order and there is limited storage space. <s> BIB009 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Application and Software <s> We introduce Cloud DIKW (Data, Information, Knowledge, Wisdom) as an analysis environment supporting scientific discovery through integrated parallel batch and streaming processing, and apply it to one representative domain application: social media data stream clustering. In this context, recent work demonstrated that high-quality clusters can be generated by representing the data points using high-dimensional vectors that reflect textual content and social network information. However, due to the high cost of similarity computation, sequential implementations of even single-pass algorithms cannot keep up with the speed of real-world streams.
This paper presents our efforts in meeting the constraints of real-time social media stream clustering through parallelization in Cloud DIKW. Specifically, we focus on two system-level issues. Firstly, most stream processing engines such as Apache Storm organize distributed workers in the form of a directed acyclic graph (DAG), which makes it difficult to dynamically synchronize the state of parallel clustering workers. We tackle this challenge by creating a separate synchronization channel using a pub-sub messaging system (ActiveMQ in our case). Secondly, due to the sparsity of the high-dimensional vectors, the size of centroids grows quickly as new data points are assigned to the clusters. As a result, traditional synchronization that directly broadcasts cluster centroids becomes too expensive and limits the scalability of the parallel algorithm. We address this problem by communicating only dynamic changes of the clusters rather than the whole centroid vectors. Our algorithm under Cloud DIKW can process the Twitter 10% data stream ("gardenhose") in real time with 96-way parallelism. By natural improvements to Cloud DIKW, including advanced collective communication techniques developed in our Harp project, we will be able to process the full Twitter data stream in real-time with 1000-way parallelism. Our use of powerful general software subsystems will enable many other applications that need integration of streaming and batch data analytics. <s> BIB010 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Application and Software <s> Online services are increasingly dependent on user participation. Whether it's online social networks or crowdsourcing services, understanding user behavior is important yet challenging. In this paper, we build an unsupervised system to capture dominating user behaviors from clickstream data (traces of users' click events), and visualize the detected behaviors in an intuitive manner. Our system identifies "clusters" of similar users by partitioning a similarity graph (nodes are users; edges are weighted by clickstream similarity). The partitioning process leverages iterative feature pruning to capture the natural hierarchy within user clusters and produce intuitive features for visualizing and understanding captured user behaviors. For evaluation, we present case studies on two large-scale clickstream traces (142 million events) from real social networks. Our system effectively identifies previously unknown behaviors, e.g., dormant users, hostile chatters. Also, our user study shows people can easily interpret identified behaviors using our visualization tool. <s> BIB011 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Application and Software <s> As more and more applications produce streaming data, clustering data streams has become an important technique for data and knowledge engineering. A typical approach is to summarize the data stream in real-time with an online process into a large number of so called micro-clusters. Micro-clusters represent local density estimates by aggregating the information of many data points in a defined area. On demand, a (modified) conventional clustering algorithm is used in a second offline step to recluster the micro-clusters into larger final clusters. For reclustering, the centers of the micro-clusters are used as pseudo points with the density estimates used as their weights.
However, information about density in the area between micro-clusters is not preserved in the online process and reclustering is based on possibly inaccurate assumptions about the distribution of data within and between micro-clusters (e.g., uniform or Gaussian). This paper describes DBSTREAM, the first micro-cluster-based online clustering component that explicitly captures the density between micro-clusters via a shared density graph. The density information in this graph is then exploited for reclustering based on actual density between adjacent micro-clusters. We discuss the space and time complexity of maintaining the shared density graph. Experiments on a wide range of synthetic and real data sets highlight that using shared density improves clustering quality over other popular data stream clustering methods which require the creation of a larger number of smaller micro-clusters to achieve comparable results. <s> BIB012 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Application and Software <s> Density-based method has emerged as a worthwhile class for clustering data streams. Recently, a number of density-based algorithms have been developed for clustering data streams. However, existing density-based data stream clustering algorithms are not without problem. There is a dramatic decrease in the quality of clustering when there is a range in density of data. In this paper, a new method, called the MuDi-Stream, is developed. It is an online-offline algorithm with four main components. In the online phase, it keeps summary information about evolving multi-density data stream in the form of core mini-clusters. The offline phase generates the final clusters using an adapted density-based clustering algorithm. The grid-based method is used as an outlier buffer to handle both noises and multi-density data and yet is used to reduce the merging time of clustering. The algorithm is evaluated on various synthetic and real-world datasets using different quality metrics and further, scalability results are compared. The experimental results show that the proposed method in this study improves clustering quality in multi-density environments. <s> BIB013
An increasing number of physical devices are interconnected these days. This trend is generally described as the Internet of Things (IoT), where everyday devices are collecting and exchanging data. Popular examples of this are Smart Refrigerators that remind you to restock or Smart Home devices such as thermostats, locks or speakers which allow you to control your home remotely. Due to this, many modern applications produce large and fast amounts of data. A typical example is predictive maintenance, where necessary maintenance tasks are predicted from the sensors of the devices. Clustering can help to find a cluster of devices that are likely to fail next. This can help to prevent machine failures but also reduce unnecessary maintenance tasks. Additionally, stream clustering could be applied for market or customer segmentation where customers that have similar preferences or behavior are identified from a stream of transactions. These segments can be engaged differently using appropriate marketing strategies. Stream clustering has also been successfully applied to mine conversational topics from chat data or analyze user behavior based on web clickstreams BIB011. In addition, it was used to analyze transactional data in order to detect fraudulent plastic card transactions and to detect malicious network connections from computer network data BIB012 BIB013 BIB001. Further, it was used to analyze sensor readings BIB012, social network data BIB010, weather monitoring BIB002, telecommunication data BIB006, stock prices or the monitoring of automated grid computing, e.g. for anomaly detection. Other application scenarios include social media analysis or the analysis of eye tracking data as in our initial example. Unfortunately, there is no single solution that fits all application scenarios and problems. For this reason, the choice of algorithm depends on the characteristics and requirements of the stream. An important characteristic is the speed of the stream. For very fast streams, more efficient algorithms are required. In particular, anytime algorithms such as ClusTree BIB004 or evoStream are able to output a clustering result at any time during the stream and therefore handle faster streams better. On the other hand, some algorithms store additional positional information alongside micro-clusters. While this often helps to achieve better clustering results, it makes algorithms such as LeaDen-Stream BIB008, D-Stream with attraction BIB005 and DBSTREAM less suitable for faster streams. Another important characteristic is the desired or expected shape of clusters. For example, many algorithms can only recognize compact clusters, as shown in Figure 11(a). This type of cluster often corresponds to our natural understanding of a cluster and is usually well recognized by distance-based approaches such as BICO BIB009 or ClusTree BIB004. Some streams, however, consist of mostly long and straggly clusters as shown in Figure 11(b). These clusters are generally easier to detect for grid-based approaches where clusters of arbitrary shape are formed by dense neighboring cells. Nevertheless, distance-based approaches can also detect these clusters when using a reclustering algorithm that can identify arbitrary shapes, e.g., as used by DBSTREAM BIB012. In addition, clusters may be of different density as shown in Figure 11(c). This is a niche problem and only MuDi-Stream BIB013 currently addresses it. Furthermore, the dimensionality of the problem plays an important role.
Generally, faster algorithms are desirable as the dimensionality increases. However, for very high-dimensional data, projected approaches such as HPStream are necessary in order to find meaningful clusters. Finally, the expected amount of concept-shift in the stream is important. If the structure of clusters changes regularly, an algorithm that applies a damped time-window model should be used. This includes algorithms such as DenStream, D-Stream BIB003, ClusTree BIB004 and many more. For streams without concept-shift, most algorithms are applicable. For example, algorithms using a damped time window model can set the fading factor λ = 0. Note, however, that algorithms such as DenStream rely on the fading mechanism in order to remove noise BIB007.
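The damped time-window model mentioned here typically weights an observation of age Δt with an exponentially decaying weight; a common choice, for example in DenStream, is w(Δt) = 2^(-λ·Δt), so that λ = 0 keeps every observation at full weight. The exact decay function varies between algorithms, so the snippet below is only an illustration of this general mechanism.

```python
def decay_weight(age, lam):
    """Weight of an observation of the given age under exponential fading."""
    return 2 ** (-lam * age)

# lam = 0 disables fading entirely: every observation keeps weight 1
assert decay_weight(age=100, lam=0.0) == 1.0
# with lam > 0, older observations lose influence quickly
print(decay_weight(age=10, lam=0.25))   # 2^-2.5 ≈ 0.177
```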
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Software <s> The data stream model has recently attracted attention for its applicability to numerous types of data, including telephone records, Web documents, and clickstreams. For analysis of such data, the ability to process the data in a single pass, or a small number of passes, while using little memory, is crucial. We describe such a streaming algorithm that effectively clusters large data streams. We also provide empirical evidence of the algorithm's performance on synthetic and real data streams. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Software <s> Data clustering is an important technique for exploratory data analysis, and has been studied for several years. It has been shown to be useful in many practical domains such as data classification and image processing. Recently, there has been a growing emphasis on exploratory analysis of very large datasets to discover useful patterns and/or correlations among attributes. This is called data mining, and data clustering is regarded as a particular branch. However existing data clustering methods do not adequately address the problem of processing large datasets with a limited amount of resources (e.g., memory and cpu cycles). So as the dataset size increases, they do not scale up well in terms of memory requirement, running time, and result quality. ::: ::: In this paper, an efficient and scalable data clustering method is proposed, based on a new in-memory data structure called CF-tree, which serves as an in-memory summary of the data distribution. We have implemented it in a system called BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies), and studied its performance extensively in terms of memory requirements, running time, clustering quality, stability and scalability; we also compare it with other available methods. Finally, BIRCH is applied to solve two real-life problems: one is building an iterative and interactive pixel classification tool, and the other is generating the initial codebook for image compression. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Software <s> We present an incremental graph-based clustering algorithm whose design was motivated by a need to extract and retain meaningful information from data streams produced by applications such as large scale surveillance, network packet inspection and financial transaction monitoring. To this end, the method we propose utilises representative points to both incrementally cluster new data and to selectively retain important cluster information within a knowledge repository. The repository can then be subsequently used to assist in the processing of new data, the archival of critical features for off-line analysis, and in the identification of recurrent patterns. <s> BIB003 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Software <s> We design a data stream algorithm for the k-means problem, called BICO, that combines the data structure of the SIGMOD Test of Time award winning algorithm BIRCH [27] with the theoretical concept of coresets for clustering problems. The k-means problem asks for a set C of k centers minimizing the sum of the squared distances from every point in a set P to its nearest center in C. 
In a data stream, the points arrive one by one in arbitrary order and there is limited storage space. <s> BIB004 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Software <s> Most available static data are becoming more and more high-dimensional. Therefore, subspace clustering, which aims at finding clusters not only within the full dimension but also within subgroups of dimensions, has gained a significant importance. Recently, OpenSubspace framework was proposed to evaluate and explorate subspace clustering algorithms in WEKA with a rich body of most state of the art subspace clustering algorithms and measures. Parallel to it, MOA (Massive Online Analysis) framework was developed also above WEKA to provide algorithms and evaluation methods for mining tasks on evolving data streams over the full space only. <s> BIB005
An important aspect of applying stream clustering in practice is the availability of software and tools. In general, implementations of stream clustering algorithms are rather scarce and only the most prominent algorithms are available. Only a few authors provide reference implementations for their algorithms. As an example, C, C++ or R implementations are available for BIRCH BIB002, STREAM BIB001, streamKM++, BICO BIB004 and evoStream. Previously, an implementation of RepStream BIB003 was also available. More recently, several projects aim to create unified frameworks for stream data mining, including implementations for stream clustering. The most popular framework for data stream mining is the Massive Online Analysis (MOA) framework. It is implemented in Java and provides the stream clustering algorithms CobWeb, D-Stream, DenStream, ClusTree, CluStream, streamKM++ and BICO. For faster prototyping, there is also the stream package for the statistical programming language R. It contains general methods for working with data streams and also implements the D-Stream, DBSTREAM, BICO, BIRCH and evoStream algorithms. There is also an extension package streamMOA which interfaces the MOA implementations of DenStream, ClusTree and CluStream. For working with data in high-dimensional space, the Subspace MOA framework BIB005 provides Java implementations of HDDStream and PreDeConStream. Again, the R package subspaceMOA interfaces both methods to make them accessible with the stream package. Alternatively, the streamDM [Huawei Noah's Ark Lab, 2015] project provides methods for data mining with Spark Streaming, which is an extension of the Spark engine. Currently, it implements the CluStream and streamKM++ algorithms, with plans to extend the project with more stream clustering algorithms.
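None of the toolkits listed above targets Python directly, but a similar incremental workflow is available there as well; for instance, scikit-learn (not covered by the survey) ships a BIRCH implementation whose partial_fit method allows processing a stream chunk by chunk. The following sketch is only an illustration of such chunk-wise usage; the parameter values and the synthetic data are arbitrary assumptions.

```python
import numpy as np
from sklearn.cluster import Birch

# CF-tree online component; n_clusters=3 triggers an offline reclustering step.
model = Birch(threshold=0.5, branching_factor=50, n_clusters=3)

rng = np.random.default_rng(0)
for _ in range(100):                      # consume the stream chunk by chunk
    chunk = rng.normal(size=(50, 2)) + rng.integers(0, 3) * 5
    model.partial_fit(chunk)              # online maintenance of the CF-tree

labels = model.predict(rng.normal(size=(10, 2)))  # assign new points to macro-clusters
print(labels)
```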
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Algorithm Configuration <s> Data streams have recently attracted attention for their applicability to numerous domains including credit fraud detection, network intrusion detection, and click streams. Stream clustering is a technique that performs cluster analysis of data streams that is able to monitor the results in real time. A data stream is continuously generated sequences of data for which the characteristics of the data evolve over time. A good stream clustering algorithm should recognize such evolution and yield a cluster model that conforms to the current data. In this paper, we propose a new technique for stream clustering which supports five evolutions that are appearance, disappearance, self-evolution, merge and split. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Algorithm Configuration <s> The identification of performance-optimizing parameter settings is an important part of the development and application of algorithms. We describe an automatic framework for this algorithm configuration problem. More formally, we provide methods for optimizing a target algorithm's performance on a given class of problem instances by varying a set of ordinal and/or categorical parameters. We review a family of local-search-based algorithm configuration procedures and present novel techniques for accelerating them by adaptively limiting the time spent for evaluating individual configurations. We describe the results of a comprehensive experimental evaluation of our methods, based on the configuration of prominent complete and incomplete algorithms for SAT. We also present what is, to our knowledge, the first published work on automatically configuring the CPLEX mixed integer programming solver. All the algorithms we considered had default parameter settings that were manually identified with considerable effort. Nevertheless, using our automated algorithm configuration procedures, we achieved substantial and consistent performance improvements. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Algorithm Configuration <s> Ensembles of classifiers are among the best performing classifiers available in many data mining applications, including the mining of data streams. Rather than training one classifier, multiple classifiers are trained, and their predictions are combined according to a given voting schedule. An important prerequisite for ensembles to be successful is that the individual models are diverse. One way to vastly increase the diversity among the models is to build an heterogeneous ensemble, comprised of fundamentally different model types. However, most ensembles developed specifically for the dynamic data stream setting rely on only one type of base-level classifier, most often Hoeffding Trees. We study the use of heterogeneous ensembles for data streams. We introduce the Online Performance Estimation framework, which dynamically weights the votes of individual classifiers in an ensemble. Using an internal evaluation on recent training data, it measures how well ensemble members performed on this and dynamically updates their weights. Experiments over a wide range of data streams show performance that is competitive with state of the art ensemble techniques, including Online Bagging and Leveraging Bagging, while being significantly faster. 
All experimental results from this work are easily reproducible and publicly available online. <s> BIB003
Streaming data in general pose considerable challenges for the respective algorithms, especially due to the requirement of real-time capability, the high probability of non-stationary data and the lack of availability of the original data over time. Moreover, many clustering approaches in general require standardized data. In order to standardize a data stream which evolves over time, one could estimate the values for centering and scaling from an initial portion of the stream. Alternatively, in a more sophisticated manner, CF-based approaches can incrementally adapt the values for scaling and update the existing micro-clusters accordingly. Specifically, as we have seen throughout the discussion of available stream clustering algorithms, most of them require a multitude of parameters to be set by the user a priori. These settings control the behavior and performance of the algorithm over time. Usually, density-based algorithms require at least a distance or radius threshold and grid-based algorithms need the grid size. The same applies to their extensions for projected stream clustering, and model-based algorithms mostly make use of a similarity threshold. In practice, such parameters are often unintuitive to choose appropriately, even with expert knowledge. As an example, it might be possible to find appropriate distance thresholds for a given scenario, but choosing appropriate weight thresholds or cleanup intervals tends to be very difficult for users, especially considering possible drift of the stream. A notable exception from this problem is the ClusTree (see Section 4.3) algorithm, which at least makes an effort to be parameter-free. Therefore, a systematic online approach for automated parameter configuration is required. However, state-of-the-art automated parameter configuration approaches such as irace, ParamILS BIB002 or SMAC are not perfectly suited for the streaming data scenario. First of all, they are mostly set-based, thus not focused on online learning on single, specific data. Moreover, they require static and stationary data, so that they can only be applied in a prequential manner, i.e., in regular intervals or on an initial sample of the stream, in order to determine and adjust appropriate settings over time, which does not really meet the efficiency requirement of real-time capability. However, an initial approach to configuring and benchmarking stream clustering algorithms based on irace has already been presented. Very promising are ensemble-based approaches, both for algorithm selection and configuration on data streams, which have already been applied successfully in the context of classification algorithms BIB001 BIB003.
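The incremental standardization mentioned above can be implemented with running summary statistics, for instance Welford's online algorithm for mean and variance. The sketch below illustrates this general idea and is not the specific procedure of any of the surveyed algorithms; in a CF-based algorithm, a change of the scaling values would additionally require rescaling the stored micro-cluster statistics, as described in the text.

```python
import math

class OnlineStandardizer:
    """Maintains running mean and variance per dimension (Welford's algorithm)."""

    def __init__(self, n_dims):
        self.n = 0
        self.mean = [0.0] * n_dims
        self.m2 = [0.0] * n_dims     # sum of squared deviations from the mean

    def update(self, point):
        self.n += 1
        for i, x in enumerate(point):
            delta = x - self.mean[i]
            self.mean[i] += delta / self.n
            self.m2[i] += delta * (x - self.mean[i])

    def transform(self, point):
        """Center and scale a point with the statistics seen so far."""
        std = [math.sqrt(m / self.n) if self.n > 0 and m > 0 else 1.0 for m in self.m2]
        return [(x - m) / s for x, m, s in zip(point, self.mean, std)]

# typical usage: update with each arriving observation, then standardize it
scaler = OnlineStandardizer(n_dims=2)
for point in [(1.0, 10.0), (2.0, 14.0), (3.0, 12.0)]:
    scaler.update(point)
    print(scaler.transform(point))
```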
Computer Science and Game Theory: A Brief Survey <s> Introduction <s> Over the past fifty years, researchers in Theoretical Computer Science have sought and achieved a productive foundational understanding of the von Neumann computer and its software, employing the mathematical tools of Logic and Combinatorics. The next half century appears now much more confusing (half- centuries tend to look like that in the beginning). What computational artifact will be the object of the next great modeling adventure of our field? And what mathematical tools will be handy in this endeavor? <s> BIB001 </s> Computer Science and Game Theory: A Brief Survey <s> Introduction <s> I consider issues in distributed computation that should be of relevance to game theory. In particular, I focus on (a) representing knowledge and uncertainty, (b) dealing with failures, and (c) specification of mechanisms. <s> BIB002 </s> Computer Science and Game Theory: A Brief Survey <s> Introduction <s> A new era of theoretical computer science addresses fundamental problems about auctions, networks, and human behavior. <s> BIB003
There has been a remarkable increase in work at the interface of computer science and game theory in the past decade. Game theory forms a significant component of some major computer science conferences (see, for example, [Kearns and Reiter 2005; Sandholm and Yokoo 2003]); leading computer scientists are often invited to speak at major game theory conferences, such as the World Congress on Game Theory 2000 and 2004. In this article I survey some of the main themes of work in the area, with a focus on the work in computer science. Given the length constraints, I make no attempt at being comprehensive, especially since other surveys are also available, including BIB002 BIB001, and a comprehensive survey book will appear shortly BIB003. The survey is organized as follows. I look at the various roles of computational complexity in game theory in Section 2, including its use in modeling bounded rationality, its role in mechanism design, and the problem of computing Nash equilibria. In Section 3, I consider a game-theoretic problem that originated in the computer science literature, but should be of interest to the game theory community: computing the price of anarchy, that is, the cost of using a decentralized solution to a problem. In Section 4 I consider interactions between distributed computing and game theory. I conclude in Section 6 with a discussion of a few other topics of interest.
Computer Science and Game Theory: A Brief Survey <s> Complexity Considerations <s> One may define a concept of an n -person game in which each player has a finite set of pure strategies and in which a definite set of payments to the n players corresponds to each n -tuple of pure strategies, one strategy being taken for each player. For mixed strategies, which are probability distributions over the pure strategies, the pay-off functions are the expectations of the players, thus becoming polylinear forms … <s> BIB001 </s> Computer Science and Game Theory: A Brief Survey <s> Complexity Considerations <s> This book is a rigorous exposition of formal languages and models of computation, with an introduction to computational complexity. The authors present the theory in a concise and straightforward manner, with an eye out for the practical applications. Exercises at the end of each chapter, including some that have been solved, help readers confirm and enhance their understanding of the material. This book is appropriate for upper-level computer science undergraduates who are comfortable with mathematical arguments. <s> BIB002 </s> Computer Science and Game Theory: A Brief Survey <s> Complexity Considerations <s> If it is common knowledge that the players in a game are Bayesian utility maximizers who treat uncertainty about other players' actions like any other uncertainty, then the outcome is necessarily a correlated equilibrium. Random strategies appear as an expression of each player's uncertainty about what the others will do, not as the result of willful randomization. Use is made of the common prior assumption, according to which differences in probability assessments by different individuals are due to the different information that they have (where "information" may be interpreted broadly, to include experience, upbringing, and genetic makeup). Copyright 1987 by The Econometric Society. <s> BIB003 </s> Computer Science and Game Theory: A Brief Survey <s> Complexity Considerations <s> Once we have developed an algorithm (q.v.) for solving a computational problem and analyzed its worst-case time requirements as a function of the size of its input (most usefully, in terms of the O-notation; see ALGORITHMS, ANALYSIS OF), it is inevitable to ask the question: "Can we do better?" In a typical problem, we may be able to devise new algorithms for the problem that are more and more efficient. But eventually, this line of research often seems to hit an invisible barrier, a level beyond whch improvements are very difficult, seemingly impossible, to come by. After many unsuccessful attempts, algorithm designers inevitably start to wonder if there is something inherent in the problem that makes it impossible to devise algorithms that are faster than the current one. They may try to develop mathematical techniques for proving formally that there can be no algorithm for the given problem which runs faster than the current one. Such a proof would be valuable, as it would suggest that it is futile to keep working on improved algorithms for this problem, that further improvements are certainly impossible. The realm of mathematical models and techniques for establishing such impossibility proofs is called computational complexity. <s> BIB004 </s> Computer Science and Game Theory: A Brief Survey <s> Complexity Considerations <s> We define several new complexity classes of search problems, ''between'' the classes FP and FNP. 
These new classes are contained, along with factoring, and the class PLS, in the class TFNP of search problems in FNP that always have a witness. A problem in each of these new classes is defined in terms of an implicitly given, exponentially large graph. The existence of the solution sought is established via a simple graph-theoretic argument with an inefficiently constructive proof; for example, PLS can be thought of as corresponding to the lemma ''every dag has a sink.'' The new classes, are based on lemmata such as ''every graph has an even number of odd-degree nodes.'' They contain several important problems for which no polynomial time algorithm is presently known, including the computational versions of Sperner's lemma, Brouwer's fixpoint theorem, Chevalley's theorem, and the Borsuk-Ulam theorem, the linear complementarity problem for P-matrices, finding a mixed equilibrium in a non-zero sum game, finding a second Hamilton circuit in a Hamiltonian cubic graph, a second Hamiltonian decomposition in a quartic graph, and others. Some of these problems are shown to be complete. <s> BIB005 </s> Computer Science and Game Theory: A Brief Survey <s> Complexity Considerations <s> We argue that the tools of decision theory should be taken more seriously in the specification and analysis of systems. We illustrate this by considering a simple problem involving reliable communication, showing how considerations of utility and probability can be used to decide when it is worth sending heartbeat messages and, if they are sent, how often they should be sent. <s> BIB006 </s> Computer Science and Game Theory: A Brief Survey <s> Complexity Considerations <s> Abstract A new algorithm is presented for computing Nash equilibria of finite games. Using Kohlberg and Mertens’ structure theorem we show that a homotopy method can be represented as a dynamical system and implemented by Smale's global Newton method. The algorithm is outlined and computational experience is reported. <s> BIB007 </s> Computer Science and Game Theory: A Brief Survey <s> Complexity Considerations <s> We present two simple search methods for computing a sample Nash equilibrium in a normal-form game: one for 2- player games and one for n-player games. We test these algorithms on many classes of games, and show that they perform well against the state of the art- the Lemke-Howson algorithm for 2-player games, and Simplicial Subdivision and Govindan-Wilson for n-player games. <s> BIB008 </s> Computer Science and Game Theory: A Brief Survey <s> Complexity Considerations <s> The notion of bounded rationality was initiated in the 1950s by Herbert Simon; only recently has it influenced mainstream economics. In this book, Ariel Rubinstein defines models of bounded rationality as those in which elements of the process of choice are explicitly embedded. The book focuses on the challenges of modeling bounded rationality, rather than on substantial economic implications. In the first part of the book, the author considers the modeling of choice. After discussing some psychological findings, he proceeds to the modeling of procedural rationality, knowledge, memory, the choice of what to know, and group decisions. In the second part, he discusses the fundamental difficulties of modeling bounded rationality in games. He begins with the modeling of a game with procedural rational players and then surveys repeated games with complexity considerations. He ends with a discussion of computability constraints in games. 
The final chapter includes a critique by Herbert Simon of the author's methodology and the author's response. (This abstract was borrowed from another version of this item.) <s> BIB009 </s> Computer Science and Game Theory: A Brief Survey <s> Complexity Considerations <s> We initiate the systematic study of algorithmic issues involved in finding equilibria (Nash and correlated) in games with a large number of players; such games, in order to be computationally meaningful, must be presented in some succinct, game-specific way. We develop a general framework for obtaining polynomial-time algorithms for optimizing over correlated equilibria in such settings, and show how it can be applied successfully to symmetric games (for which we actually find an exact polytopal characterization), graphical games, and congestion games, among others. We also present complexity results implying that such algorithms are not possible in certain other such games. Finally, we present a polynomial-time algorithm, based on quantifier elimination, for finding a Nash equilibrium in symmetric games when the number of strategies is relatively small. <s> BIB010
The influence of computer science in game theory has perhaps been most strongly felt through complexity theory. I consider some of the strands of this research here. There are numerous basic texts on complexity theory that the reader can consult for more background on notions like NP-completeness and finite automata, including BIB002 BIB004 . Bounded Rationality: One early line of work in this direction studies the n-round repeated prisoner's dilemma (with the standard payoffs, under which mutual cooperation gives each player 3 per round) when players are restricted to strategies implementable by finite automata; bounding the number of states of the automata can make cooperation sustainable in equilibrium. Later work sharpens this type of result by showing that if at least one of the players has fewer than 2^(c_ε n) states, where c_ε = ε/(12(1+ε)), then for sufficiently large n, there is an equilibrium where each player's average payoff per round is greater than 3 − ε. Thus, computational limitations can lead to cooperation in the prisoner's dilemma. There have been a number of other attempts to use complexity-theoretic ideas from computer science to model bounded rationality; see BIB009 for some examples. However, it seems that there is much more work to be done here. Computing Nash Equilibrium: BIB001 showed that every finite game has a Nash equilibrium in mixed strategies. But how hard is it to actually find that equilibrium? On the positive side, there are well-known algorithms for computing Nash equilibrium, going back to the classic Lemke-Howson algorithm, with a spate of recent improvements (see, for example, BIB007 BIB008 ). Moreover, for certain classes of games (for example, symmetric games BIB010 ), there are known to be polynomial-time algorithms. On the negative side, many questions about Nash equilibrium are known to be NP-hard. For example, Gilboa and Zemel showed that, for a game presented in normal form, deciding whether there exists a Nash equilibrium where each player gets a payoff of at least r is NP-complete. (Interestingly, Gilboa and Zemel also show that deciding whether there exists a correlated equilibrium BIB003 where each player gets a payoff of at least r can be done in polynomial time. In general, questions regarding correlated equilibrium seem easier than the analogous questions for Nash equilibrium; see also BIB010 for further examples.) BIB006 prove similar NP-completeness results if the game is represented in extensive form, even if all players have the same payoffs (a situation which arises frequently in computer science applications, where we can view the players as agents of some designer, and take the payoffs to be the designer's payoffs). A compendium of hardness results for various questions one can ask about Nash equilibria is also available. Nevertheless, there is a sense in which the problem of finding a Nash equilibrium seems easier than typical NP-complete problems, because every game is guaranteed to have a Nash equilibrium; by way of contrast, for a typical NP-complete problem like propositional satisfiability, a solution (a satisfying assignment) is not guaranteed to exist. Using this observation, it can be shown that if finding a Nash equilibrium is NP-complete, then NP = coNP. Recent work has, in a sense, completely characterized the complexity of finding a Nash equilibrium in normal-form games: it is a PPAD-complete problem. PPAD stands for "polynomial parity argument (directed case)"; see BIB005 for a formal definition and examples of other PPAD problems. It is believed that PPAD-complete problems are not solvable in polynomial time, but are simpler than NP-complete problems, although this remains an open problem. An overview of this line of work is available in the literature.
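The observation that correlated equilibrium questions are computationally easier has a simple explanation: a correlated equilibrium is a joint distribution over action profiles constrained only by linear incentive inequalities, so optimizing a linear objective over correlated equilibria (for example, the sum of payoffs, or checking whether every player can get at least r) is a linear program. The following is a minimal sketch for a two-player game; the particular game (a version of chicken) and the use of scipy are illustrative assumptions, not part of the survey.

```python
# Sketch: maximizing total payoff over correlated equilibria as a linear program.
import numpy as np
from scipy.optimize import linprog

U1 = np.array([[6, 2], [7, 0]])   # row player's payoffs (illustrative "chicken")
U2 = np.array([[6, 7], [2, 0]])   # column player's payoffs
n1, n2 = U1.shape

# Decision variable: p[a1, a2], flattened row-major into a vector of length n1*n2.
A_ub, b_ub = [], []
for a in range(n1):               # row player: recommended action a, deviation b
    for b in range(n1):
        if a == b:
            continue
        row = np.zeros((n1, n2))
        row[a, :] = U1[b, :] - U1[a, :]   # <= 0 means deviating to b does not help
        A_ub.append(row.flatten()); b_ub.append(0.0)
for a in range(n2):               # column player: recommended a, deviation b
    for b in range(n2):
        if a == b:
            continue
        row = np.zeros((n1, n2))
        row[:, a] = U2[:, b] - U2[:, a]
        A_ub.append(row.flatten()); b_ub.append(0.0)

A_eq = [np.ones(n1 * n2)]; b_eq = [1.0]   # probabilities sum to 1
c = -(U1 + U2).flatten()                   # maximize total payoff

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n1 * n2), method="highs")
print(res.x.reshape(n1, n2), -res.fun)
```

By contrast, the set of Nash equilibria is not described by linear constraints over a single joint distribution, which is one way to see why the analogous questions about Nash equilibrium are harder.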
Computer Science and Game Theory: A Brief Survey <s> Algorithmic Mechanism Design: <s> This paper analyzes the problem of inducing the members of an organization to behave as if they formed a team. Considered is a conglomerate-type organization consisting of a set of semi-autonomous subunits that are coordinated by the organization's head. The head's incentive problem is to choose a set of employee compensation rules that will induce his subunit managers to communicate accurate information and take optimal decisions. The main result exhibits a particular set of compensation rules, an optimal incentive structure, that leads to team behavior. Particular attention is directed to the informational aspects of the problem. An extended example of a resource allocation model is discussed and the optimal incentive structure is interpreted in terms of prices charged by the head for resources allocated to the subunits. <s> BIB001 </s> Computer Science and Game Theory: A Brief Survey <s> Algorithmic Mechanism Design: <s> Consider a committee which must select one alternative from a set of three or more alternatives. Committee members each cast a ballot which the voting procedure counts. The voting procedure is strategy-proof if it always induces every committee member to cast a ballot revealing his preference. I prove three theorems. First, every strategy-proof voting procedure is dictatorial. Second, this paper’s strategy-proofness condition for voting procedures corresponds to Arrow’s rationality, independence of irrelevant alternatives, nonnegative response, and citizens’ sovereignty conditions for social welfare functions. Third, Arrow’s general possibility theorem is proven in a new manner. 1. INTR~OUOTI~N <s> BIB002 </s> Computer Science and Game Theory: A Brief Survey <s> Algorithmic Mechanism Design: <s> The authors show how to design truthful (dominant strategy) mechanisms for several combinatorial problems where each agent's secret data is naturally expressed by a single positive real number. The goal of the mechanisms we consider is to allocate loads placed on the agents, and an agent's secret data is the cost she incurs per unit load. We give an exact characterization for the algorithms that can be used to design truthful mechanisms for such load balancing problems using appropriate side payments. We use our characterization to design polynomial time truthful mechanisms for several problems in combinatorial optimization to which the celebrated VCG mechanism does not apply. For scheduling related parallel machines (Q/spl par/C/sub max/), we give a 3-approximation mechanism based on randomized rounding of the optimal fractional solution. This problem is NP-complete, and the standard approximation algorithms (greedy load-balancing or the PTAS) cannot be used in truthful mechanisms. We show our mechanism to be frugal, in that the total payment needed is only a logarithmic factor more than the actual costs incurred by the machines, unless one machine dominates the total processing power. We also give truthful mechanisms for maximum flow, Q/spl par//spl Sigma/C/sub j/ (scheduling related machines to minimize the sum of completion times), optimizing an affine function over a fixed set, and special cases of uncapacitated facility location. In addition, for Q/spl par//spl Sigma/w/sub j/C/sub j/ (minimizing the weighted sum of completion times), we prove a lower bound of 2//spl radic/3 for the best approximation ratio achievable by truthful mechanism. 
<s> BIB003 </s> Computer Science and Game Theory: A Brief Survey <s> Algorithmic Mechanism Design: <s> We consider algorithmic problems in a distributed setting where the participants cannot be assumed to follow the algorithm but rather their own self-interest. As such participants, termed agents, are capable of manipulating the algorithm, the algorithm designer should ensure in advance that the agents' interests are best served by behaving correctly. Following notions from the field of mechanism design, we suggest a framework for studying such algorithms. Our main technical contribution concerns the study of a representative task scheduling problem for which the standard mechanism design tools do not suffice. Journal of Economic Literature Classification Numbers: C60, C72, D61, D70, D80. <s> BIB004 </s> Computer Science and Game Theory: A Brief Survey <s> Algorithmic Mechanism Design: <s> In the problem of finding an efficient allocation when agents' utilities are privately known, we examine the effect of restricting attention to mechanisms using "demand queries," which ask agents to report an optimal allocation given a price list. We construct a combinatorial allocation problem with m items and two agents whose valuations lie in a certain class, such that (i) efficiency can be obtained with a mechanism using O (m) bits, but (ii) any demand-query mechanism guaranteeing a higher efficiency than giving all items to one agent uses a number of queries that is exponential in m. The same is proven for any demand-query mechanism achieving an improvement in expected efficiency, for a constructed joint probability distribution over agents' valuations from the class. These results cast doubt on the usefulness of such common combinatorial allocation mechanisms as "iterative auctions" and other "preference elicitation" mechanisms using demand queries, as well as "value queries" and "order queries" (which are easily replicated with demand queries in our setting). <s> BIB005 </s> Computer Science and Game Theory: A Brief Survey <s> Algorithmic Mechanism Design: <s> We consider the problem of selecting a low cost s --- t path in a graph, where the edge costs are a secret known only to the various economic agents who own them. To solve this problem, Nisan and Ronen applied the celebrated Vickrey-Clarke-Groves (VCG) mechanism, which pays a premium to induce edges to reveal their costs truthfully. We observe that this premium can be unacceptably high. There are simple instances where the mechanism pays Θ(k) times the actual cost of the path, even if there is alternate path available that costs only (1 + ε) times as much. This inspires the frugal path problem, which is to design a mechanism that selects a path and induces truthful cost revelation without paying such a high premium.This paper contributes negative results on the frugal path problem. On two large classes of graphs, including ones having three node-disjoint s - t paths, we prove that no reasonable mechanism can always avoid paying a high premium to induce truthtelling. In particular, we introduce a general class of min function mechanisms, and show that all min function mechanisms can be forced to overpay just as badly VCG. On the other hand, we prove that (on two large classes of graphs) every truthful mechanism satisfying some reasonable properties is a min function mechanism. <s> BIB006
The problem of mechanism design is to design a game such that the agents playing the game, motivated only by self-interest, achieve the designer's goals. This problem has much in common with the standard computer science problem of designing protocols that satisfy certain specifications (for example, designing a distributed protocol that achieves Byzantine agreement; see Section 4). Work on mechanism design has traditionally ignored computational concerns. But Kfir-Dahav et al. show that, even in simple settings, optimizing social welfare is NP-hard, so that perhaps the most common approach to designing mechanisms, applying the Vickrey-Groves-Clarke (VCG) procedure BIB001 , is not going to work in large systems. We might hope that, even if we cannot compute an optimal mechanism, we might be able to compute a reasonable approximation to it. However, as Nisan and Ronen [2000, 2001] show, in general, replacing a VCG mechanism by an approximation does not preserve truthfulness. That is, even though truthfully revealing one's type is an optimal strategy in a VCG mechanism, it may no longer be optimal in an approximation. Following Nisan and Ronen's work, there has been a spate of papers either describing computationally tractable mechanisms or showing that no computationally tractable mechanism exists for a number of problems, ranging from task allocation BIB003 BIB004 to cost-sharing for multicast trees (where the problem is to share the cost of sending, for example, a movie over a network among the agents who actually want the movie) to finding low-cost paths between nodes in a network BIB006 . The problem that has attracted perhaps the most attention is combinatorial auctions, where bidders can bid on bundles of items. This becomes of particular interest in situations where the value to a bidder of a bundle of goods cannot be determined by simply summing the value of each good in isolation. To take a simple example, the value of a pair of shoes is much higher than that of the individual shoes; perhaps more interestingly, an owner of radio stations may value having a license in two adjacent cities more than the sum of the individual licenses. Combinatorial auctions are of great interest in a variety of settings including spectrum auctions, airport time slots (i.e., takeoff and landing slots), and industrial procurement. There are many complexity-theoretic issues related to combinatorial auctions; a detailed discussion and references are available in the literature, so I briefly discuss only a few of the issues involved here. Suppose that there are n items being auctioned. Simply for a bidder to communicate her bids to the auctioneer can take, in general, exponential time, since there are 2^n bundles. In many cases, we can identify a bid on a bundle with the bidder's valuation of the bundle. Thus, we can try to carefully design a bidding language in which a bidder can communicate her valuations succinctly. Simple information-theoretic arguments can be used to show that, for every bidding language, there will be valuations that will require length at least 2^n to express in that language. Thus, the best we can hope for is to design a language that can represent the "interesting" bids succinctly. See the literature for an overview of various bidding languages and their expressive power. Given bids from each of the bidders in a combinatorial auction, the auctioneer would then like to determine the winners. More precisely, the auctioneer would like to allocate the n items in an auction so as to maximize his revenue.
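To make this allocation problem concrete, the following is a minimal brute-force sketch of winner determination; the bids and the single-minded-bidder assumption (each bidder wants exactly one bundle at one price) are illustrative and not taken from the survey. The search enumerates subsets of bids and keeps the best revenue among those whose bundles are pairwise disjoint, which is exponential in the number of bids and thus only feasible for tiny instances.

```python
# Sketch: brute-force winner determination for a small combinatorial auction.
from itertools import combinations

bids = [                       # hypothetical bids: (bidder, bundle, price)
    ("A", {"shoe_L", "shoe_R"}, 10),
    ("B", {"shoe_L"}, 6),
    ("C", {"shoe_R"}, 5),
    ("D", {"license_1", "license_2"}, 9),
]

def winner_determination(bids):
    best_revenue, best_choice = 0, ()
    for r in range(1, len(bids) + 1):
        for choice in combinations(bids, r):
            bundles = [b for _, b, _ in choice]
            # feasible only if no item is sold twice (bundles pairwise disjoint)
            if sum(len(b) for b in bundles) == len(set().union(*bundles)):
                revenue = sum(p for _, _, p in choice)
                if revenue > best_revenue:
                    best_revenue, best_choice = revenue, choice
    return best_revenue, best_choice

revenue, winners = winner_determination(bids)
print(revenue, [name for name, _, _ in winners])
```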
This allocation problem, called the winner determination problem, is NP-complete in general, even in relatively simple classes of combinatorial auctions with only two bidders making rather restricted bids. Moreover, it is not even polynomial-time approximable, in the sense that there is no constant d and polynomial-time algorithm such that the algorithm produces an allocation that gives revenue that is at least 1/d of optimal. On the other hand, there are algorithms that provably find a good solution, seem to work well in practice, and, if they seem to be taking too long, can be terminated early, usually with a good feasible solution in hand. An overview of the results in this area is available in the literature. In most mechanism design problems, computational complexity is seen as the enemy. There is one class of problems in which it may be a friend: voting. One problem with voting mechanisms is that of manipulation by voters. That is, voters may be tempted to vote strategically rather than ranking the candidates according to their true preferences, in the hope that the final outcome will be more favorable. This situation arises frequently in practice; in the 2000 election, American voters who preferred Nader to Gore to Bush were encouraged to vote for Gore, rather than "wasting" a vote on Nader. The classic Gibbard-Satterthwaite theorem BIB002 shows that, if there are at least three alternatives, then in any nondictatorial voting scheme (i.e., one where it is not the case that one particular voter dictates the final outcome, irrespective of how the others vote), there are preferences under which an agent is better off voting strategically. The hope is that, by constructing the voting mechanism appropriately, it may be computationally intractable to find a manipulation that will be beneficial. While finding manipulations for majority voting (the candidate with the most votes wins) is easy, there are well-known voting protocols for which manipulation is hard in the presence of three or more candidates; a summary of results and further pointers to the literature are available elsewhere. (A toy brute-force version of such a manipulation search is sketched at the end of this section.) Communication complexity studies how much communication is needed for a set of n agents to compute the value of a function f : Θ_1 × ... × Θ_n → X, where each agent i knows θ_i ∈ Θ_i. To see the relevance of this to economics, consider, for example, the problem of mechanism design. Most mechanisms in the economics literature are designed so that agents truthfully reveal their preferences (think of θ_i as characterizing agent i's preferences here). However, in some settings, revealing one's full preferences can require a prohibitive amount of communication. For example, in a combinatorial auction of m items, revealing one's full preferences may require revealing what one would be willing to pay for each of the 2^m − 1 possible bundles of items. Even if m = 30, this requires revealing more than one billion numbers. This leads to an obvious question: how much communication is required by various mechanisms? BIB005 show that a standard approach for conducting combinatorial auctions, where prices are listed, agents are expected to make demands based on these prices, and then prices are adjusted (according to some pre-specified rule) based on demand, requires an exponential amount of communication for a certain class of valuations. These are among the first preliminary steps toward understanding the communication complexity of mechanisms; the general problem remains wide open.
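The following is the toy manipulation search referred to above: a brute-force check of whether a single voter can benefit from an insincere ballot under the Borda rule. With m candidates the manipulator has only m! possible ballots, so for tiny profiles the search is trivial; the hardness results discussed above concern how such searches scale for suitably chosen rules. The specific profile is purely illustrative (it mirrors the Nader/Gore/Bush example).

```python
# Sketch: brute-force test of whether one voter can manipulate the Borda rule.
from itertools import permutations

candidates = ["Bush", "Gore", "Nader"]
others = [("Gore", "Bush", "Nader"), ("Bush", "Nader", "Gore")]  # fixed ballots
truthful = ("Nader", "Gore", "Bush")                             # manipulator's true ranking

def borda_winner(ballots):
    scores = {c: 0 for c in candidates}
    for ballot in ballots:
        for pos, c in enumerate(ballot):
            scores[c] += len(candidates) - 1 - pos
    return max(sorted(scores), key=lambda c: scores[c])   # deterministic tie-break

def prefers(ranking, a, b):
    return ranking.index(a) < ranking.index(b)

honest_winner = borda_winner(others + [truthful])
for ballot in permutations(candidates):
    w = borda_winner(others + [ballot])
    if prefers(truthful, w, honest_winner):
        print("beneficial manipulation:", ballot, "->", w)
```

Running this finds an insincere ballot that moves the (tie-broken) Borda winner from Bush to Gore, an outcome the manipulator prefers to the honest one.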
Computer Science and Game Theory: A Brief Survey <s> The Price of Anarchy <s> In a system in which noncooperative agents share a common resource, we propose the ratio between the worst possible Nash equilibrium and the social optimum as a measure of the effectiveness of the system. Deriving upper and lower bounds for this ratio in a model in which several agents share a very simple network leads to some interesting mathematics, results, and open problems. <s> BIB001 </s> Computer Science and Game Theory: A Brief Survey <s> The Price of Anarchy <s> We consider the problem of routing traffic to optimize the performance of a congested network. We are given a network, a rate of traffic between each pair of nodes, and a latency function for each edge specifying the time needed to traverse the edge given its congestion; the objective is to route traffic such that the sum of all travel times---the total latency---is minimized.In many settings, it may be expensive or impossible to regulate network traffic so as to implement an optimal assignment of routes. In the absence of regulation by some central authority, we assume that each network user routes its traffic on the minimum-latency path available to it, given the network congestion caused by the other users. In general such a "selfishly motivated" assignment of traffic to paths will not minimize the total latency; hence, this lack of regulation carries the cost of decreased network performance.In this article, we quantify the degradation in network performance due to unregulated traffic. We prove that if the latency of each edge is a linear function of its congestion, then the total latency of the routes chosen by selfish network users is at most 4/3 times the minimum possible total latency (subject to the condition that all traffic must be routed). We also consider the more general setting in which edge latency functions are assumed only to be continuous and nondecreasing in the edge congestion. Here, the total latency of the routes chosen by unregulated selfish network users may be arbitrarily larger than the minimum possible total latency; however, we prove that it is no more than the total latency incurred by optimally routing twice as much traffic. <s> BIB002 </s> Computer Science and Game Theory: A Brief Survey <s> The Price of Anarchy <s> We consider the following class of problems. The value of an outcome to a society is measured via a submodular utility function (submodularity has a natural economic interpretation: decreasing marginal utility). Decisions, however, are controlled by non-cooperative agents who seek to maximise their own private utility. We present, under basic assumptions, guarantees on the social performance of Nash equilibria. For submodular utility functions, any Nash equilibrium gives an expected social utility within a factor 2 of optimal, subject to a function-dependent additive term. For non-decreasing, submodular utility functions, any Nash equilibrium gives an expected social utility within a factor 1+/spl delta/ of optimal, where 0/spl les//spl delta//spl les/1 is a number based upon discrete curvature of the function. A condition under which all sets of social and private utility functions induce pure strategy Nash equilibria is presented. The case in which agents themselves make use of approximation algorithms in decision making is discussed and performance guarantees given. Finally we present specific problems that fall into our framework. 
These include competitive versions of the facility location problem and k-median problem, a maximisation version of the traffic routing problem studied by Roughgarden and Tardos (2000), and multiple-item auctions. <s> BIB003 </s> Computer Science and Game Theory: A Brief Survey <s> The Price of Anarchy <s> Efficient spectrum-sharing mechanisms are crucial to alleviate the bandwidth limitation in wireless networks. In this paper, we consider the following question: can free spectrum be shared efficiently? We study this problem in the context of 802.11 or WiFi networks. Each access point (AP) in a WiFi network must be assigned a channel for it to service users. There are only finitely many possible channels that can be assigned. Moreover, neighboring access points must use different channels so as to avoid interference. Currently these channels are assigned by administrators who carefully consider channel conflicts and network loads. Channel conflicts among APs operated by different entities are currently resolved in an ad hoc manner (i.e., not in a coordinated way) or not resolved at all. We view the channel assignment problem as a game, where the players are the service providers and APs are acquired sequentially. We consider the price of anarchy of this game, which is the ratio between the total coverage of the APs in the worst Nash equilibrium of the game and what the total coverage of the APs would be if the channel assignment were done optimally by a central authority. We provide bounds on the price of anarchy depending on assumptions on the underlying network and the type of bargaining allowed between service providers. The key tool in the analysis is the identification of the Nash equilibria with the solutions to a maximal coloring problem in an appropriate graph. We relate the price of anarchy of these games to the approximation factor of local optimization algorithms for the maximum k-colorable subgraph problem. We also study the speed of convergence in these games. <s> BIB004 </s> Computer Science and Game Theory: A Brief Survey <s> The Price of Anarchy <s> Much work in AI deals with the selection of proper actions in a given (known or unknown) environment. However, the way to select a proper action when facing other agents is quite unclear. Most work in AI adopts classical game-theoretic equilibrium analysis to predict agent behavior in such settings. This approach however does not provide us with any guarantee for the agent. In this paper we introduce competitive safety analysis. This approach bridges the gap between the desired normative AI approach, where a strategy should be selected in order to guarantee a desired payoff, and equilibrium analysis. We show that a safety level strategy is able to guarantee the value obtained in a Nash equilibrium, in several classical computer science settings. Then, we discuss the concept of competitive safety strategies, and illustrate its use in a decentralized load balancing setting, typical to network problems. In particular, we show that when we have many agents, it is possible to guarantee an expected payoff which is a factor of 8/9 of the payoff obtained in a Nash equilibrium. Our discussion of competitive safety analysis for decentralized load balancing is further developed to deal with many communication links and arbitrary speeds. Finally, we discuss the extension of the above concepts to Bayesian games, and illustrate their use in a basic auctions setup. <s> BIB005
In a computer system, there are situations where we may have a choice between invoking a centralized solution to a problem or a decentralized solution. By "centralized" here, I mean that each agent in the system is told exactly what to do and must do so; in the decentralized solution, each agent tries to optimize his own selfish interests. Of course, centralization comes at a cost. For one thing, there is a problem of enforcement. For another, centralized solutions tend to be more vulnerable to failure. On the other hand, a centralized solution may be more socially beneficial. How much more beneficial can it be? BIB001 formalized this question by considering the ratio of the social welfare of the centralized solution to the social welfare of the Nash equilibrium with the worst social welfare (assuming that the social welfare function is always positive). They called this ratio the price of anarchy, and proved a number of results regarding the price of anarchy for a scheduling problem on parallel machines. Since the original paper, the price of anarchy has been studied in many settings, including traffic routing BIB002 , facility location games (e.g., where is the best place to put a factory) BIB003 , and spectrum sharing (how should channels in a WiFi network be assigned) BIB004 . To give a sense of the results, consider the traffic-routing context of BIB002 . Suppose that the travel time on a road increases in a known way with the congestion on the road. The goal is to minimize the average travel time for all drivers. Given a road network and a given traffic load, a centralized solution would tell each driver which road to take. For example, there could be a rule that cars with odd-numbered license plates take road 1, while those with even-numbered plates take road 2, to minimize congestion on either road. Roughgarden and Tardos show that the price of anarchy is unbounded if the travel time can be a nonlinear function of the congestion. On the other hand, if it is linear, they show that the price of anarchy is at most 4/3. The price of anarchy is but one way of computing the "cost" of using a Nash equilibrium. Others have been considered in the computer science literature. For example, BIB005 compares the safety level of a game (the optimal amount that an agent can guarantee himself, independent of what the other agents do) to what the agent gets in a Nash equilibrium, and shows that, for interesting classes of games, including load-balancing games and first-price auctions, the ratio between the safety level and the Nash equilibrium payoff is bounded. For example, in the case of first-price auctions, it is bounded by the constant e.
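The 4/3 bound for linear latencies is in fact tight, and the standard illustration (not taken from the survey) is Pigou's two-road example: one unit of traffic travels from s to t either over a road with constant latency 1 or over a road whose latency equals its congestion. The sketch below computes the selfish and centralized average latencies for this example.

```python
# Sketch: the price of anarchy in Pigou's two-road example (standard illustration).
def average_latency(x):
    """x = fraction of traffic on the congestible road; its latency is x, the other road's is 1."""
    return x * x + (1 - x) * 1.0

# Selfish (Nash) routing: the congestible road is never worse than the fixed road, so x = 1.
nash_cost = average_latency(1.0)                                    # = 1.0

# Centralized optimum: minimize x^2 + (1 - x) over x in [0, 1]  ->  x = 1/2.
opt_cost = min(average_latency(i / 1000.0) for i in range(1001))    # ~ 0.75

print("price of anarchy ~", nash_cost / opt_cost)                   # ~ 4/3
```

Selfish drivers all take the congestible road, giving average latency 1, while splitting the traffic evenly gives 3/4, so the ratio is 4/3.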
Computer Science and Game Theory: A Brief Survey <s> Game Theory and Distributed Computing <s> The problem addressed here concerns a set of isolated processors, some unknown subset of which may be faulty, that communicate only by means of two-party messages. Each nonfaulty processor has a private value of information that must be communicated to each other nonfaulty processor. Nonfaulty processors always communicate honestly, whereas faulty processors may lie. The problem is to devise an algorithm in which processors communicate their own values and relay values received from others that allows each nonfaulty processor to infer a value for each other processor. The value inferred for a nonfaulty processor must be that processor's private value, and the value inferred for a faulty one must be consistent with the corresponding value inferred by each other nonfaulty processor. It is shown that the problem is solvable for, and only for, n ≥ 3 m + 1, where m is the number of faulty processors and n is the total number. It is also shown that if faulty processors can refuse to pass on information but cannot falsely relay information, the problem is solvable for arbitrary n ≥ m ≥ 0. This weaker assumption can be approximated in practice using cryptographic methods. <s> BIB001 </s> Computer Science and Game Theory: A Brief Survey <s> Game Theory and Distributed Computing <s> The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. We show that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the "Byzantine Generals" problem. <s> BIB002 </s> Computer Science and Game Theory: A Brief Survey <s> Game Theory and Distributed Computing <s> The designer of a fault-tolerant distributed system faces numerous alternatives. Using a stochastic model of processor failure times, we investigate design choices such as replication level, protocol running time, randomized versus deterministic protocols, fault detection, and authentication. We use the probability with which a system produces the correct output as our evaluation criterion. This contrasts with previous fault-tolerance results that guarantee correctness only if the percentage of faulty processors in the system can be bounded. Our results reveal some subtle and counterintuitive interactions between the design parameters and system reliability. <s> BIB003 </s> Computer Science and Game Theory: A Brief Survey <s> Game Theory and Distributed Computing <s> The Internet exhibits forms of interactions which are not captured by existing models in economics, artificial intelligence and game theory. New models are needed to deal with these multi-agent interactions. In this paper we present a new model -- distributed games. In such a model each player controls a number of agents which participate in asynchronous parallel multi-agent interactions (games). The agents jointly and strategically (partially) control the level of information monitoring and the level of recall by broadcasting messages. As an application, we show that the cooperative outcome of the Prisoner's Dilemma game can be obtained in equilibrium in such a setting. 
<s> BIB004 </s> Computer Science and Game Theory: A Brief Survey <s> Game Theory and Distributed Computing <s> The theory of mechanism design in economics/game theory deals with a center who wishes to maximize an objective function which depends on a vector of information variables. The value of each variable is known only to a selfish agent, which is not controlled by the center. In order to obtain its objective the center constructs a game, in which every agent participates and reveals its information, because these actions maximize its utility. However, several crucial new issues arise when one tries to transform existing economic mechanisms into protocols to be used in computational environments. In this paper we deal with two such issues: 1. The communication structure, and 2. the representation (syntax) of the agents' information. The existing literature on mechanism design implicitly assumes that these two features are not relevant. In particular, it assumes a communication structure in which every agent is directly connected to the center. We present new protocols that can be implemented in a large variety of communication structures, and discuss the sensitivity of these protocols to the way in which information is presented. <s> BIB005 </s> Computer Science and Game Theory: A Brief Survey <s> Game Theory and Distributed Computing <s> In this paper we investigate the implementation problem arising when some of the players are “faulty” in the sense that they fail to act optimally. The planner and the non-faulty players only know that there can be at most k faulty players in the population. However, they know neither the identity of the faulty players, their exact number nor how faulty players behave. We define a solution concept which requires a player to optimally respond to the non-faulty players regardless of the identity and actions of the faulty players. We introduce a notion of fault tolerant implementation, which unlike standard notions of full implementation, also requires robustness to deviations from the equilibrium. The main result of this paper establishes that under symmetric information any choice rule that satisfies two properties—k-monotonicity and no veto power—can be implemented by a strategic game form if there are at least three players and the number of faulty players is less than 1\2n−1. As an application of our result we present examples of simple mechanisms that implement the constrained Walrasian function and a choice rule for the efficient allocation of an indivisible good. Copyright 2002, Wiley-Blackwell. <s> BIB006 </s> Computer Science and Game Theory: A Brief Survey <s> Game Theory and Distributed Computing <s> We consider the problems of secret sharing and multiparty computation, assuming that agents prefer to get the secret (resp., function value) to not getting it, and secondarily, prefer that as few as possible of the other agents get it. We show that, under these assumptions, neither secret sharing nor multiparty function computation is possible using a mechanism that has a fixed running time. However, we show that both are possible using randomized mechanisms with constant expected running time. <s> BIB007
Distributed computing and game theory are interested in much the same problems: dealing with systems where there are many agents, facing uncertainty, and having possibly different goals. In practice, however, there has been a significant difference in emphasis in the two areas. In distributed computing, the focus has been on problems such as fault tolerance, asynchrony, scalability, and proving correctness of algorithms; in game theory, the focus has been on strategic concerns. I discuss here some issues of common interest. To understand the relevance of fault tolerance and asynchrony, consider the Byzantine agreement problem, a paradigmatic problem in the distributed systems literature. In this problem, there are assumed to be n soldiers, up to t of which may be faulty (the t stands for traitor); n and t are assumed to be common knowledge. Each soldier starts with an initial preference, to either attack or retreat. (More precisely, there are two types of nonfaulty agents: those that prefer to attack, and those that prefer to retreat.) We want a protocol that guarantees that (1) all nonfaulty soldiers reach the same decision, and (2) if all the soldiers are nonfaulty and have the same initial preference, then the final decision is that common preference (this second condition rules out trivial protocols that, say, always decide to attack). The problem was introduced by BIB001 , and has been studied in detail since then; several surveys provide overviews. Whether the Byzantine agreement problem is solvable depends in part on what types of failures are considered, on whether the system is synchronous or asynchronous, and on the ratio of n to t. Roughly speaking, a system is synchronous if there is a global clock and agents move in lockstep; a "step" in the system corresponds to a tick of the clock. In an asynchronous system, there is no global clock. The agents in the system can run at arbitrary rates relative to each other. One step for agent 1 can correspond to an arbitrary number of steps for agent 2 and vice versa. Synchrony is an implicit assumption in essentially all games. Although it is certainly possible to model games where player 2 has no idea how many moves player 1 has taken when player 2 is called upon to move, it is not typical to focus on the effects of synchrony (and its lack) in games. On the other hand, in distributed systems, it is typically a major focus. Suppose for now that we restrict to crash failures, where a faulty agent behaves according to the protocol, except that it might crash at some point, after which it sends no messages. In the round in which an agent fails, the agent may send only a subset of the messages that it is supposed to send according to its protocol. Further suppose that the system is synchronous. In this case, the following rather simple protocol achieves Byzantine agreement (a small simulation sketch appears after the correctness argument below):
• In the first round, each agent tells every other agent its initial preference.
• For rounds 2 to t + 1, each agent tells every other agent everything it has heard in the previous round. (Thus, for example, in round 3, agent 1 may tell agent 2 that it heard from agent 3 that its initial preference was to attack, and that it (agent 3) heard from agent 2 that its initial preference was to attack, and it heard from agent 4 that its initial preference was to retreat, and so on. This means that messages get exponentially long, but it is not difficult to represent this information in a compact way so that the total communication is polynomial in n, the number of agents.)
• At the end of round t + 1, if an agent has heard from any other agent (including itself) that its initial preference was to attack, it decides to attack; otherwise, it decides to retreat.
Why is this correct?
Clearly, if all agents are correct and want to retreat (resp., attack), then the final decision will be to retreat (resp., attack), since that is the only preference that agents hear about (recall that for now we are considering only crash failures). It remains to show that if some agents prefer to attack and others to retreat, then all the nonfaulty agents reach the same final decision. So suppose that i and j are nonfaulty and i decides to attack. That means that i heard that some agent's initial preference was to attack. If it heard this first at some round t' < t + 1, then i will forward this message to j, who will receive it and thus also attack. On the other hand, suppose that i heard it first at round t + 1 in a message from i_{t+1}. Thus, this message must be of the form "i_t said at round t that . . . that i_2 said at round 2 that i_1 said at round 1 that its initial preference was to attack." Moreover, the agents i_1, . . . , i_{t+1} must all be distinct. Indeed, it is easy to see that i_k must crash in round k before sending its message to i (but after sending its message to i_{k+1}), for k = 1, . . . , t, for otherwise i must have gotten the message from i_k, contradicting the assumption that i first heard at round t + 1 that some agent's initial preference was to attack. Since at most t agents can crash, it follows that i_{t+1}, the agent that sent the message to i, is not faulty, and thus sends the message to j. Thus, j also decides to attack. A symmetric argument shows that if j decides to attack, then so does i. It should be clear that the correctness of this protocol depends on both the assumptions made: crash failures and synchrony. Suppose instead that Byzantine failures are allowed, so that faulty agents can deviate in arbitrary ways from the protocol; they may "lie", send deceiving messages, and collude to fool the nonfaulty agents in the most malicious ways. In this case, the protocol will not work at all. In fact, it is known that agreement can be reached in the presence of Byzantine failures iff t < n/3, that is, iff fewer than a third of the agents can be faulty BIB001 . The effect of asynchrony is even more devastating: in an asynchronous system, it is impossible to reach agreement using a deterministic protocol even if t = 1 (so that there is at most one failure) and only crash failures are allowed BIB002 . The problem in the asynchronous setting is that if none of the agents have heard from, say, agent 1, they have no way of knowing whether agent 1 is faulty or just slow. Interestingly, there are randomized algorithms (i.e., behavior strategies) that achieve agreement with arbitrarily high probability in an asynchronous setting.
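To make the synchronous crash-failure protocol above concrete, the following is a toy simulation sketch. The crash schedule, the random choice of which messages a crashing agent still delivers, and the particular preferences are illustrative assumptions; the protocol logic (flood everything heard for t + 1 rounds, then decide to attack iff some initial preference to attack was heard) follows the description above.

```python
# Sketch: simulating the synchronous agreement protocol under crash failures.
import random

def byzantine_agreement_crash(prefs, crash_round, t):
    """prefs: initial preference per agent; crash_round[i] = round in which
    agent i crashes (None if nonfaulty). Returns the nonfaulty agents' decisions."""
    n = len(prefs)
    heard = [{i: prefs[i]} for i in range(n)]          # what each agent knows
    for rnd in range(1, t + 2):                        # rounds 1 .. t+1
        inboxes = [dict() for _ in range(n)]
        for i in range(n):
            if crash_round[i] is not None and crash_round[i] < rnd:
                continue                               # already crashed: silent
            receivers = range(n)
            if crash_round[i] == rnd:                  # crashing this round:
                receivers = random.sample(range(n), random.randrange(n))
            for j in receivers:                        # forward everything heard
                inboxes[j].update(heard[i])
        for j in range(n):
            heard[j].update(inboxes[j])
    return {i: ("attack" if "attack" in heard[i].values() else "retreat")
            for i in range(n) if crash_round[i] is None}

random.seed(0)
prefs = ["attack", "retreat", "retreat", "attack", "retreat"]
print(byzantine_agreement_crash(prefs, [2, None, None, 1, None], t=2))
```

For any crash schedule with at most t crashes, the nonfaulty decisions printed at the end agree, as the argument above guarantees.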
In the case of Byzantine failures, the adversary essentially gets to make the moves for agents that have been corrupted; in particular, it can send arbitrary messages. Why has the distributed systems literature not considered strategic behavior in this game? Crash failures are used to model hardware and software failures; Byzantine failures are used to model random behavior on the part of a system (for example, messages getting garbled in transit), software errors, and malicious adversaries (for example, hackers). With crash failures, it does not make sense to view the adversary's behavior as strategic, since the adversary is not really viewed as having strategic interests. While it would certainly make sense, at least in principle, to consider the probability of failure (i.e., the probability that the adversary corrupts an agent), this approach has by and large been avoided in the literature because it has proved difficult to characterize the probability distribution of failures over time. Computer components can perhaps be characterized as failing according to an exponential distribution (see BIB003 for an analysis of Byzantine agreement in such a setting), but crash failures can be caused by things other than component failures (faulty software, for example); these can be extremely difficult to characterize probabilistically. The problems are even worse when it comes to modeling random Byzantine behavior. With malicious Byzantine behavior, it may well be reasonable to impute strategic behavior to agents (or to an adversary controlling them). However, it is often difficult to characterize the payoffs of a malicious agent. The goals of the agents may vary from that of simply trying to delay a decision to that of causing disagreement. It is not clear what the appropriate payoffs should be for attaining these goals. Thus, the distributed systems literature has chosen to focus instead on algorithms that are guaranteed to satisfy the specification without making assumptions about the adversary's payoffs (or nature's probabilities, in the case of crash failures). Recently, there has been some work on adding strategic concerns to standard problems in distributed computing (see, for example, BIB007 ) as well as on adding concerns of fault tolerance and asynchrony to standard problems in game theory (see, for example, BIB006 BIB004 BIB005 and the definitions in the next section). This seems to be an area that is ripe for further developments. One such development is the subject of the next section.
Computer Science and Game Theory: A Brief Survey <s> Implementing Mediators <s> We characterize the set of agreements that the players of a non-cooperative game may reach when they have the opportunity to communicate prior to play. We show that communication allows the players to correlate their actions. Therefore, we take the set of correlated strategies as the space of agreements. Since we consider situations where agreements are non-binding, they must not be subject to profitable self-enforcing deviations by coalitions of players. A coalition-proof equilibrium is a correlated strategy from which no coalition has an improving and self-enforcing deviation. A coalition-proof equilibrium exists when there is a correlated strategy which (i) has a support contained in the set of actions that survive the iterated elimination of strictly dominated strategies, and (ii) weakly Pareto dominates every other correlated strategy whose support is contained in that set. Consequently, the unique equilibrium of a dominance solvable game is coalition-proof. <s> BIB001 </s> Computer Science and Game Theory: A Brief Survey <s> Implementing Mediators <s> The main contribution of this paper is the development and application of cryptographic techniques to the design of strategic communication mechanisms. One of the main assumptions in cryptography is the limitation of the computational power available to agents. We introduce the concept of limited computational complexity, and by borrowing results from cryptography, we construct a communication protocol to establish that every correlated equilibrium of a two-person game with rational payoffs can be achieved by means of computationally restricted unmediated communication. This result provides an example in game theory where limitations of computational abilities of players are helpful in solving implementation problems. More specifically, it is possible to construct mechanisms with the property that profitable deviations are too complicated to compute. Copyright The Econometric Society 2002. <s> BIB002 </s> Computer Science and Game Theory: A Brief Survey <s> Implementing Mediators <s> Abstract The paper studies Bayesian games which are extended by adding pre-play communication. Let Γ be a Bayesian game with full support and with three or more players. The main result is that if players can send private messages to each other and make public announcements then every communication equilibrium outcome, q , that is rational (i.e., involves probabilities that are rational numbers) can be implemented in a sequential equilibrium of a cheap talk extension of Γ , provided that the following condition is satisfied: There exists a Bayesian Nash equilibrium s in Γ such that for each type t i of each player i the expected payoff of t i in q is larger than the expected payoff of t i in s . <s> BIB003 </s> Computer Science and Game Theory: A Brief Survey <s> Implementing Mediators <s> We show the role of unmediated talk with computational complexity bounds as both an information transmission and a coordination device for the class of two-player games with incomplete information and rational parameters. We prove that any communication equilibrium payoff of such games can be reached as a Bayesian-Nash equilibrium payoff of the game extended by a two phase universal mechanism of interim computationally restricted pre-play communication. The communication protocols are designed with the help of modern cryptographic tools. 
A familiar context in which our results could be applied is bilateral trading with incomplete information. Copyright Springer-Verlag Berlin/Heidelberg 2004 <s> BIB004 </s> Computer Science and Game Theory: A Brief Survey <s> Implementing Mediators <s> We study k-resilient Nash equilibria, joint strategies where no member of a coalition C of size up to k can do better, even if the whole coalition defects. We show that such k-resilient Nash equilibria exist for secret sharing and multiparty computation, provided that players prefer to get the information than not to get it. Our results hold even if there are only 2 players, so we can do multiparty computation with only two rational agents. We extend our results so that they hold even in the presence of up to t players with "unexpected" utilities. Finally, we show that our techniques can be used to simulate games with mediators by games without mediators. <s> BIB005 </s> Computer Science and Game Theory: A Brief Survey <s> Implementing Mediators <s> We provide new and tight lower bounds on the ability of players to implement equilibria using cheap talk, that is, just allowing communication among the players. One of our main results is that, in general, it is impossible to implement three-player Nash equilibria in a bounded number of rounds. We also give the first rigorous connection between Byzantine agreement lower bounds and lower bounds on implementation. To this end we consider a number of variants of Byzantine agreement and introduce reduction arguments. We also give lower bounds on the running time of two player implementations. All our results extended to lower bounds on (k, t)-robust equilibria, a solution concept that tolerates deviations by coalitions of size up to k and deviations by up to t players with unknown utilities (who may be malicious). <s> BIB006 </s> Computer Science and Game Theory: A Brief Survey <s> Implementing Mediators <s> This paper analyzes the implementation of correlated equilibria that are immune to joint deviations of coalitions by cheap-talk protocols. We construct a cheap-talk protocol that is resistant to deviations of fewer than half the players, and using it, we show that a large set of correlated equilibria can be implemented as Nash equilibria in the extended game with cheap-talk. Furthermore, we demonstrate that in general there is no cheap-talk protocol that is resistant for deviations of half the players. <s> BIB007
The question of whether a problem in a multiagent system that can be solved with a trusted mediator can be solved by just the agents in the system, without the mediator, has attracted a great deal of attention in both computer science (particularly in the cryptography community) and game theory. In cryptography, the focus has been on secure multiparty computation. Here it is assumed that each agent i has some private information x i . Fix functions f 1 , . . . , f n . The goal is to have agent i learn f i (x 1 , . . . , x n ) without learning anything about x j for j ≠ i beyond what is revealed by the value of f i (x 1 , . . . , x n ). With a trusted mediator, this is trivial: each agent i just gives the mediator its private value x i ; the mediator then sends each agent i the value f i (x 1 , . . . , x n ). Work on multiparty computation provides conditions under which this can be done. In game theory, the focus has been on whether an equilibrium in a game with a mediator can be implemented using what is called cheap talk, that is, just by players communicating among themselves (cf. BIB003 BIB007 BIB002 BIB004 ). As suggested in the previous section, the focus in the computer science literature has been on doing multiparty computation in the presence of possibly malicious adversaries, who do everything they can to subvert the computation, while in the game theory literature, the focus has been on strategic agents. In recent work, BIB005 BIB006 considered deviations by both rational players, who have preferences and try to maximize them, and players who can be viewed as malicious, although it is perhaps better to think of them as rational players whose utilities are not known by the other players or mechanism designer. I briefly sketch their results here. The idea of tolerating deviations by coalitions of players goes back to ; more recent refinements have been considered by BIB001 . Aumann's definition is essentially the following. Definition 1 σ is a k-resilient ′ equilibrium if, for all sets C of players with |C| ≤ k, it is not the case that there exists a joint strategy τ such that u i ( τ C , σ −C ) > u i (σ) for all i ∈ C. As usual, the strategy ( τ C , σ −C ) is the one where each player i ∈ C plays τ i and each player i ∉ C plays σ i . As the prime notation suggests, this is not quite the definition we want to work with. The trouble with this definition is that it suggests that coalition members cannot communicate with each other beyond agreeing on what strategy to use. Perhaps surprisingly, allowing communication can prevent certain equilibria (see BIB006 for an example). Since we should expect coalition members to communicate, the following definition seems to capture a more reasonable notion of resilient equilibrium. Let the cheap-talk extension of a game Γ be, roughly speaking, the game where players are allowed to communicate among themselves in addition to performing the actions of Γ and the payoffs are just as in Γ. Definition 2 σ is a k-resilient equilibrium in a game Γ if σ is a k-resilient ′ equilibrium in the cheap-talk extension of Γ (where we identify the strategy σ i in the game Γ with the strategy in the cheap-talk game where player i never sends any messages beyond those sent according to σ i ). A standard assumption in game theory is that utilities are (commonly) known; when we are given a game we are also given each player's utility. When players make decisions, they can take other players' utilities into account.
However, in large systems, it seems almost invariably the case that there will be some fraction of users who do not respond to incentives the way we expect. For example, in a peer-to-peer network like Kazaa or Gnutella, it would seem that no rational agent should share files. Whether or not you can get a file depends only on whether other people share files; on the other hand, it seems that there are disincentives for sharing (the possibility of lawsuits, use of bandwidth, etc.). Nevertheless, people do share files. However, studies of the Gnutella network have shown that almost 70 percent of users share no files and nearly 50 percent of responses are from the top 1 percent of sharing hosts [Adar and Huberman 2000] . One reason that people might not respond as we expect is that they have utilities that are different from those we expect. Alternatively, the players may be irrational, or (if moves are made using a computer) they may be playing using a faulty computer and thus not be able to make the move they would like, or they may not understand how to get the computer to make the move they would like. Whatever the reason, it seems important to design strategies that tolerate such unanticipated behaviors, so that the payoffs of the users with "standard" utilities do not get affected by the nonstandard players using different strategies. This can be viewed as a way of adding fault tolerance to equilibrium notions.
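To make Definitions 1 and 2 concrete, here is a minimal, illustrative sketch (in Python; not taken from the survey or from BIB005 BIB006 ) that brute-force checks the k-resilience′ condition for a pure strategy profile of a small finite game. It ignores mixed strategies and cheap talk, and the three-player majority game, the payoff function, and all identifiers are assumptions made purely for illustration.

```python
from itertools import combinations, product

def is_k_resilient(actions, payoff, sigma, k):
    """Brute-force check of the k-resilience condition for a *pure* profile sigma.

    actions: list of action lists, actions[i] = available actions of player i
    payoff:  function mapping a full action profile (tuple) to a tuple of payoffs
    sigma:   the candidate equilibrium profile (tuple of actions)
    Returns False if some coalition C with |C| <= k has a joint deviation that
    makes every member of C strictly better off; True otherwise.
    """
    n = len(actions)
    base = payoff(sigma)
    for size in range(1, k + 1):
        for C in combinations(range(n), size):
            # try every joint deviation of the coalition C
            for deviation in product(*(actions[i] for i in C)):
                profile = list(sigma)
                for i, a in zip(C, deviation):
                    profile[i] = a
                new = payoff(tuple(profile))
                if all(new[i] > base[i] for i in C):
                    return False
    return True

# Illustrative 3-player game: everyone wants to play the majority action.
acts = [["a", "b"]] * 3
def payoff(profile):
    majority = "a" if profile.count("a") >= 2 else "b"
    return tuple(1 if p == majority else 0 for p in profile)

print(is_k_resilient(acts, payoff, ("a", "a", "a"), k=2))  # True: no coalition of size <= 2 gains
print(is_k_resilient(acts, payoff, ("a", "a", "b"), k=1))  # False: the third player alone gains by switching
```

The same exhaustive style extends directly to the t-immunity check sketched after the discussion of the mediator results below.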
Computer Science and Game Theory: A Brief Survey <s> Definition 3 A joint strategy σ is t-immune if, for all T ⊆ N with |T | ≤ t, all joint strategies τ , and all <s> Secure function evaluation (SFE) enables a group of players, by themselves, to evaluate a function on private inputs as securely as if a trusted third party had done it for them. A completely fair SFE is a protocol in which, conceptually, the function values are learned atomically.We provide a completely fair SFE protocol which is secure for any number of malicious players, using a novel combination of computational and physical channel assumptions.We also show how completely fair SFE has striking applications togame theory. In particular, it enables "cheap-talk" protocol that (a) achieve correlated-equilibrium payoffs in any game, (b) are the first protocols which provably give no additional power to any coalition of players, and (c) are exponentially more efficient than prior counterparts. <s> BIB001 </s> Computer Science and Game Theory: A Brief Survey <s> Definition 3 A joint strategy σ is t-immune if, for all T ⊆ N with |T | ≤ t, all joint strategies τ , and all <s> Secure computation essentially guarantees that whatever computation n players can do with the help of a trusted party, they can also do by themselves. Fundamentally, however, this notion depends on the honesty of at least some players. We put forward and implement a stronger notion, rational secure computation, that does not depend on player honesty, but solely on player rationality. The key to our implementation is showing that the ballot-box - the venerable device used throughout the world to tally secret votes securely - can actually be used to securely compute any function. Our work bridges the fields of game theory and cryptography, and has broad implications for mechanism design. <s> BIB002 </s> Computer Science and Game Theory: A Brief Survey <s> Definition 3 A joint strategy σ is t-immune if, for all T ⊆ N with |T | ≤ t, all joint strategies τ , and all <s> We study k-resilient Nash equilibria, joint strategies where no member of a coalition C of size up to k can do better, even if the whole coalition defects. We show that such k-resilient Nash equilibria exist for secret sharing and multiparty computation, provided that players prefer to get the information than not to get it. Our results hold even if there are only 2 players, so we can do multiparty computation with only two rational agents. We extend our results so that they hold even in the presence of up to t players with "unexpected" utilities. Finally, we show that our techniques can be used to simulate games with mediators by games without mediators. <s> BIB003 </s> Computer Science and Game Theory: A Brief Survey <s> Definition 3 A joint strategy σ is t-immune if, for all T ⊆ N with |T | ≤ t, all joint strategies τ , and all <s> We provide new and tight lower bounds on the ability of players to implement equilibria using cheap talk, that is, just allowing communication among the players. One of our main results is that, in general, it is impossible to implement three-player Nash equilibria in a bounded number of rounds. We also give the first rigorous connection between Byzantine agreement lower bounds and lower bounds on implementation. To this end we consider a number of variants of Byzantine agreement and introduce reduction arguments. We also give lower bounds on the running time of two player implementations. 
All our results extended to lower bounds on (k, t)-robust equilibria, a solution concept that tolerates deviations by coalitions of size up to k and deviations by up to t players with unknown utilities (who may be malicious). <s> BIB004
A joint strategy σ is t-immune if, for all T ⊆ N with |T | ≤ t, all joint strategies τ , and all i ∉ T, u i ( τ T , σ −T ) ≥ u i (σ); that is, no group of at most t players can, by deviating, lower the payoff of a player outside the group. The notions of t-immunity and k-resilience address different concerns: for t-immunity, we consider the payoffs of the players not in the deviating set T; for k-resilience, we consider the payoffs of the players in the deviating coalition C. It is natural to combine both notions. Given a game Γ, let Γ τ T be the game that is identical to Γ except that the players in T are fixed to playing strategy τ . Definition 4 σ is a (k, t)-robust equilibrium if σ is t-immune and, for all T ⊆ N such that |T | ≤ t and all joint strategies τ , σ −T is a k-resilient strategy of Γ τ T . To state the results of BIB003 BIB004 on implementing mediators, three games need to be considered: an underlying game Γ, an extension Γ d of Γ with a mediator, and a cheap-talk extension Γ CT of Γ. Assume that Γ is a normal-form Bayesian game: each player has a type from some type space with a known distribution over types, and the utilities of the agents depend on the types and actions taken. Roughly speaking, a cheap talk game implements a game with a mediator if it induces the same distribution over actions in the underlying game, for each type vector of the players. With this background, I can summarize the results of BIB003 BIB004 . • If n > 3k + 3t, a (k, t)-robust strategy σ with a mediator can be implemented using cheap talk (that is, there is a (k, t)-robust strategy σ ′ in a cheap talk game such that σ and σ ′ induce the same distribution over actions in the underlying game). Moreover, the implementation requires no knowledge of other agents' utilities, and the cheap talk protocol has bounded running time that does not depend on the utilities. • If n ≤ 3k + 3t then, in general, mediators cannot be implemented using cheap talk without knowledge of other agents' utilities. Moreover, even if other agents' utilities are known, mediators cannot, in general, be implemented without having a (k + t)-punishment strategy (that is, a strategy that, if used by all but at most k + t players, guarantees that every player gets a worse outcome than they do with the equilibrium strategy) nor with bounded running time. • If n > 2k + 3t, then mediators can be implemented using cheap talk if there is a punishment strategy (and utilities are known) in finite expected running time that does not depend on the utilities. • If n ≤ 2k + 3t then mediators cannot, in general, be implemented, even if there is a punishment strategy and utilities are known. • If n > 2k + 2t and there are broadcast channels then, for all ε, mediators can be ε-implemented (intuitively, there is an implementation where players get utility within ε of what they could get by deviating) using cheap talk, with bounded expected running time that does not depend on the utilities. • If n ≤ 2k + 2t then mediators cannot, in general, be ε-implemented, even with broadcast channels. Moreover, even assuming cryptography and polynomially-bounded players, the expected running time of an implementation must depend on the utility functions of the players and ε. • If n > k + 3t then, assuming cryptography and polynomially-bounded players, mediators can be ε-implemented using cheap talk, but if n ≤ 2k + 2t, then the running time depends on the utilities in the game and ε. • If n ≤ k + 3t, then even assuming cryptography, polynomially-bounded players, and a (k + t)-punishment strategy, mediators cannot, in general, be ε-implemented using cheap talk. • If n > k + t then, assuming cryptography, polynomially-bounded players, and a public-key infrastructure (PKI), we can ε-implement a mediator.
The proof of these results makes heavy use of techniques from computer science. All the possibility results showing that mediators can be implemented use techniques from secure multiparty computation. The results showing that if n ≤ 3k + 3t, then we cannot implement a mediator without knowing utilities, even if there is a punishment strategy, use the fact that Byzantine agreement cannot be reached if t ≥ n/3; the impossibility result for n ≤ 2k + 3t also uses a variant of Byzantine agreement. A related line of work considers implementing mediators assuming stronger primitives (which cannot be implemented in computer networks); see BIB002 BIB001 for details.
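In the same illustrative style as the earlier k-resilience sketch, the t-immunity side of Definitions 3 and 4 can be checked by brute force for pure profiles; again, this is only an assumption-laden sketch, not the construction used in BIB003 BIB004 .

```python
from itertools import combinations, product

def is_t_immune(actions, payoff, sigma, t):
    """Brute-force check of t-immunity for a *pure* profile sigma (illustrative only).

    sigma is t-immune if no group T with |T| <= t can, by deviating arbitrarily,
    make some player *outside* T strictly worse off than under sigma.
    """
    n = len(actions)
    base = payoff(sigma)
    for size in range(1, t + 1):
        for T in combinations(range(n), size):
            for deviation in product(*(actions[i] for i in T)):
                profile = list(sigma)
                for i, a in zip(T, deviation):
                    profile[i] = a
                new = payoff(tuple(profile))
                if any(new[i] < base[i] for i in range(n) if i not in T):
                    return False
    return True

# Per Definition 4, a (k, t)-robust profile must pass this immunity check and must
# remain k-resilient for the non-deviating players no matter how the players in T
# deviate; this sketch checks only the immunity part for pure strategies.
```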
Computer Science and Game Theory: A Brief Survey <s> Other Topics <s> Several new logics for belief and knowledge are introduced and studied, all of which have the property that agents are not logically omniscient. In particular, in these logics, the set of beliefs of an agent does not necessarily contain all valid formulas. Thus, these logics are more suitable than traditional logics for modelling beliefs of humans (or machines) with limited reasoning capabilities. Our first logic is essentially an extension of Levesque's logic of implicit and explicit belief, where we extend to allow multiple agents and higher-level belief (i.e., beliefs about beliefs). Our second logic deals explicitly with "awareness," where, roughly speaking, it is necessary to be aware of a concept before one can have beliefs about it. Our third logic gives a model of "local reasoning," where an agent is viewed as a "society of minds," each with its own cluster of beliefs, which may contradict each other. <s> BIB001 </s> Computer Science and Game Theory: A Brief Survey <s> Other Topics <s> This is the first of two papers where we present a formal model of unawareness. We contrast unawareness with certainty and uncertainty. A subject is certain of something when he knows that thing; he is uncertain when he does not know it, but he knows he does not: he is consciously uncertain. On the other hand, he is unaware of something when he does not know it, and he does not know he does not know, and so on ad infinitum: he does not perceive, does not have in mind, the object of knowledge. The opposite of unawareness is awareness, which includes certainty and uncertainty. <s> BIB002 </s> Computer Science and Game Theory: A Brief Survey <s> Other Topics <s> Reasoning about knowledge—particularly the knowledge of agents who reason about the world and each other's knowledge—was once the exclusive province of philosophers and puzzle solvers. More recently, this type of reasoning has been shown to play a key role in a surprising number of contexts, from understanding conversations to the analysis of distributed computer algorithms. Reasoning About Knowledge is the first book to provide a general discussion of approaches to reasoning about knowledge and its applications to distributed systems, artificial intelligence, and game theory. It brings eight years of work by the authors into a cohesive framework for understanding and analyzing reasoning about knowledge that is intuitive, mathematically well founded, useful in practice, and widely applicable. The book is almost completely self-contained and should be accessible to readers in a variety of disciplines, including computer science, artificial intelligence, linguistics, philosophy, cognitive science, and game theory. Each chapter includes exercises and bibliographic notes. <s> BIB003 </s> Computer Science and Game Theory: A Brief Survey <s> Other Topics <s> We show that a very broad class of models, including possibility correspondences, necessarily fail to capture very simple and intuitive implications of unawareness. We explain why standard state–space formulations suffer from this problem, illustrating the point with an example of a nonstandard state–space model which avoids the difficulty. <s> BIB004 </s> Computer Science and Game Theory: A Brief Survey <s> Other Topics <s> In economics, most noncooperative game theory has focused on equilibrium in games, especially Nash equilibrium and its refinements.
The traditional explanation for when and why equilibrium arises is that it results from analysis and introspection by the players in a situation where the rules of the game, the rationality of the players, and the players' payoff functions are all common knowledge. Both conceptually and empirically, this theory has many problems. In The Theory of Learning in Games Drew Fudenberg and David Levine develop an alternative explanation that equilibrium arises as the long-run outcome of a process in which less than fully rational players grope for optimality over time. The models they explore provide a foundation for equilibrium theory and suggest useful ways for economists to evaluate and modify traditional equilibrium concepts. <s> BIB005 </s> Computer Science and Game Theory: A Brief Survey <s> Other Topics <s> We provide a self-contained, selective overview of the literature on the role of knowledge and beliefs in game theory. We focus on recent results on the epistemic foundations of solution concepts, including correlated equilibrium, rationalizability in dynamic games, forward and backward induction. <s> BIB006 </s> Computer Science and Game Theory: A Brief Survey <s> Other Topics <s> Abstract We claim first that simple uncertainty is not an adequate model of a subject's ignorance, because a major component of it is the inability to give a complete description of the states of the world, and we provide a formal model of unawareness. In Modica and Rustichini (1994) we showed a difficulty in the project, namely that without weakening of the inference rules of the logic one would face the unpleasant alternative between full awareness and full unawareness. In this paper we study a logical system where non full awareness is possible, and prove that a satisfactory solution to the problem can be found by introducing limited reasoning ability of the subject. A determination theorem for this system is proved, and the appearance of partitional informational structures with unawareness is analysed. Journal of Economic Literature Classification Numbers : D80, D83. <s> BIB007 </s> Computer Science and Game Theory: A Brief Survey <s> Other Topics <s> Modica and Rustichini [Theory and Decision 37, 1994] provided a logic for reasoning about knowledge where agents may be unaware of certain propositions. However, their original approach had the unpleasant property that nontrivial unawareness was incompatible with partitional information structures. More recently, Modica and Rustichini [Games and Economic Behavior 27:2, 1999] have provided an approach that allows for nontrivial unawareness in partitional information structures. Here it is shown that their approach can be viewed as a special case of a general approach to unawareness considered by Fagin and Halpern [Artificial Intelligence 34, 1988]. <s> BIB008 </s> Computer Science and Game Theory: A Brief Survey <s> Other Topics <s> The traditional representations of games using the extensive form or the strategic (normal) form obscure much of the structure that is present in real-world games. In this paper, we propose a new representation language for general multi-player noncooperative games --- multi-agent influence diagrams (MAIDs). This representation extends graphical models for probability distributions to a multi-agent decision-making context. The basic elements in the MAID representation are variables rather than strategies (as in the normal form) or events (as in the extensive form). 
They can thus explicitly encode structure involving the dependence relationships among variables. As a consequence, we can define a notion of strategic relevance of one decision variable to another. D' is strategically relevant to D if, to optimize the decision rule at D, the decision maker needs to take into consideration the decision rule at D'. We provide a sound and complete graphical criterion for determining strategic relevance. We then show how strategic relevance can be used to detect structure in games, allowing large games to be broken up into a set of interacting smaller games, which can be solved in sequence. We show that this decomposition can lead to substantial savings in the computational cost of finding Nash equilibria in these games. <s> BIB009 </s> Computer Science and Game Theory: A Brief Survey <s> Other Topics <s> We introduce a novel game that models the creation of Internet-like networks by selfish node-agents without central design or coordination. Nodes pay for the links that they establish, and benefit from short paths to all destinations. We study the Nash equilibria of this game, and prove results suggesting that the "price of anarchy" [4] in this context (the relative cost of the lack of coordination) may be modest. Several interesting extensions are suggested. <s> BIB010 </s> Computer Science and Game Theory: A Brief Survey <s> Other Topics <s> Models for the processes by which ideas and influence propagate through a social network have been studied in a number of domains, including the diffusion of medical and technological innovations, the sudden and widespread adoption of various strategies in game-theoretic settings, and the effects of "word of mouth" in the promotion of new products. Recently, motivated by the design of viral marketing strategies, Domingos and Richardson posed a fundamental algorithmic problem for such social network processes: if we can try to convince a subset of individuals to adopt a new product or innovation, and the goal is to trigger a large cascade of further adoptions, which set of individuals should we target? We consider this problem in several of the most widely studied models in social network analysis. The optimization problem of selecting the most influential nodes is NP-hard here, and we provide the first provable approximation guarantees for efficient algorithms. Using an analysis framework based on submodular functions, we show that a natural greedy strategy obtains a solution that is provably within 63% of optimal for several classes of models; our framework suggests a general approach for reasoning about the performance guarantees of algorithms for these types of influence problems in social networks. We also provide computational experiments on large collaboration networks, showing that in addition to their provable guarantees, our approximation algorithms significantly out-perform node-selection heuristics based on the well-studied notions of degree centrality and distance centrality from the field of social networks. <s> BIB011 </s> Computer Science and Game Theory: A Brief Survey <s> Other Topics <s> We present new algorithms for reinforcement learning and prove that they have polynomial bounds on the resources required to achieve near-optimal return in general Markov decision processes.
After observing that the number of actions required to approach the optimal return is lower bounded by the mixing time T of the optimal policy (in the undiscounted case) or by the horizon time T (in the discounted case), we then give algorithms requiring a number of actions and total computation time that are only polynomial in T and the number of states and actions, for both the undiscounted and discounted cases. An interesting aspect of our algorithms is their explicit handling of the Exploration-Exploitation trade-off. <s> BIB012 </s> Computer Science and Game Theory: A Brief Survey <s> Other Topics <s> Two people, 1 and 2, are said to have common knowledge of an event E if both know it, 1 knows that 2 knows it, 2 knows that 1 knows it, 1 knows that 2 knows that 1 knows it, and so on. <s> BIB013 </s> Computer Science and Game Theory: A Brief Survey <s> Other Topics <s> Awareness has been shown to be a useful addition to standard epistemic logic for many applications. However, standard propositional logics for knowledge and awareness cannot express the fact that an agent knows that there are facts of which he is unaware without there being an explicit fact that the agent knows he is unaware of. We propose a logic for reasoning about knowledge of unawareness, by extending Fagin and Halpern's Logic of General Awareness. The logic allows quantification over variables, so that there is a formula in the language that can express the fact that "an agent explicitly knows that there exists a fact of which he is unaware". Moreover, that formula can be true without the agent explicitly knowing that he is unaware of any particular formula. We provide a sound and complete axiomatization of the logic, using standard axioms from the literature to capture the quantification operator. Finally, we show that the validity problem for the logic is recursively enumerable, but not decidable. <s> BIB014 </s> Computer Science and Game Theory: A Brief Survey <s> Other Topics <s> We introduce Game networks (G nets), a novel representation for multi-agent decision problems. Compared to other game-theoretic representations, such as strategic or extensive forms, G nets are more structured and more compact; more fundamentally, G nets constitute a computationally advantageous framework for strategic inference, as both probability and utility independencies are captured in the structure of the network and can be exploited in order to simplify the inference process. An important aspect of multi-agent reasoning is the identification of some or all of the strategic equilibria in a game; we present original convergence methods for strategic equilibrium which can take advantage of strategic separabilities in the G net structure in order to simplify the computations. Specifically, we describe a method which identifies a unique equilibrium as a function of the game payoffs, and one which identifies all equilibria. <s> BIB015 </s> Computer Science and Game Theory: A Brief Survey <s> Other Topics <s> We introduce a compact graph-theoretic representation for multi-party game theory. Our main result is a provably correct and efficient algorithm for computing approximate Nash equilibria in one-stage games represented by trees or sparse graphs. <s> BIB016
There are many more areas of interaction between computer science and game theory than I have indicated in this brief survey. I briefly mention a few others here: • Interactive epistemology: Since the publication of Aumann's seminal paper BIB013 , there has been a great deal of activity in trying to understand the role of knowledge in games, and providing epistemic analyses of solution concepts (see BIB006 for a survey). In computer science, there has been a parallel literature applying epistemic logic to reason about distributed computation. One focus of this work has been on characterizing the level of knowledge needed to solve certain problems. For example, to achieve Byzantine agreement, common knowledge among the nonfaulty agents of an initial value is necessary and sufficient. More generally, in a precise sense, common knowledge is necessary and sufficient for coordination. Another focus has been on defining logics that capture the reasoning of resource-bounded agents. This work has ranged from logics for reasoning about awareness, a topic that has been explored in both computer science and game theory (see, for example, BIB004 BIB001 BIB008 BIB014 BIB002 BIB007 ), to logics for capturing algorithmic knowledge, an approach that takes seriously the assumption that agents must explicitly compute what they know. See BIB003 for an overview of the work in epistemic logic in computer science. • Network growth: If we view networks as being built by selfish players (who decide whether or not to build links), what will the resulting network look like? How does the growth of the network affect its functionality? For example, how easily will influence spread through the network? How easy is it to route traffic? See BIB010 BIB011 for some recent computer science work in this burgeoning area. • Efficient representation of games: Game theory has typically focused on "small" games, often 2- or 3-player games that are easy to describe, such as prisoner's dilemma, in order to understand subtleties regarding basic issues such as rationality. To the extent that game theory is used to tackle larger, more practical problems, it will become important to find efficient techniques for describing and analyzing games. By way of analogy, 2^n − 1 numbers are needed to describe a probability distribution on a space characterized by n binary random variables. For n = 100 (not an unreasonable number in practical situations), it is impossible to write down the probability distribution in the obvious way, let alone do computations with it. The same issues will surely arise in large games. Computer scientists use graphical approaches, such as Bayesian networks and Markov networks, for representing and manipulating probability measures on large spaces. Similar techniques seem applicable to games; see, for example, BIB009 BIB015 BIB016 . Note that representation is also an issue when we consider the complexity of problems such as computing Nash or correlated equilibria. The complexity of a problem is a function of the size of the input, and the size of the input (which in this case is a description of the game) depends on how the input is represented. • Learning in games: There has been a great deal of work in both computer science and game theory on learning to play well in different settings (see BIB005 for an overview of the work in game theory).
One line of research in computer science has involved learning to play optimally in a reinforcement learning setting, where an agent interacts with an unknown (but fixed) environment. The agent then faces a fundamental tradeoff between exploration and exploitation. The question is how long it takes to learn to play well (i.e., to get a reward within some fixed ε of optimal); see BIB012 for the current state of the art. A related question is that of efficiently finding a strategy that minimizes regret, that is, finding a strategy that is guaranteed to do not much worse than the best strategy would have done in hindsight (that is, even knowing what the opponent would have done). See for a recent overview of work on this problem.
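As an illustration of the regret-minimization idea just mentioned, here is a minimal sketch of the standard multiplicative-weights (Hedge) update in the full-information setting; the learning rate, the loss model, and the toy loss stream are assumptions made for illustration, and this is not the specific algorithm surveyed in BIB012 .

```python
import math
import random

def hedge(n_actions, loss_stream, eta=0.1):
    """Minimal multiplicative-weights (Hedge) sketch for regret minimization.

    loss_stream yields, at each round, a list of losses in [0, 1], one per action
    (the loss every action *would* have suffered that round). The player picks an
    action from a distribution proportional to the weights, then reweights, so its
    total loss stays close to that of the best fixed action in hindsight.
    """
    weights = [1.0] * n_actions
    total_loss = 0.0
    cumulative = [0.0] * n_actions  # per-action cumulative loss, used to report regret
    for losses in loss_stream:
        s = sum(weights)
        probs = [w / s for w in weights]
        action = random.choices(range(n_actions), weights=probs)[0]
        total_loss += losses[action]
        for i, l in enumerate(losses):
            cumulative[i] += l
            weights[i] *= math.exp(-eta * l)
    best = min(cumulative)
    return total_loss, best, total_loss - best  # realized loss, best fixed action, regret

# Illustrative run: two actions, the second is usually better.
rounds = [[1.0, 0.0] if random.random() < 0.8 else [0.0, 1.0] for _ in range(1000)]
print(hedge(2, rounds))
```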
Conflict-free Replicated Data Types: An Overview <s> Introduction <s> This invention pertains to the performance of a microroutine in accordance with a program being run on a processor having user's instructions in main memory. External devices having discrete device numbers cause interrupt signals to be transmitted to the microroutine which acknowledges the interrupt signal and obtains the device number associated with the signal. A service pointer is fetched from the service pointer table in the main memory corresponding to the device number, which in turn defines a service block function to be fetched from the main memory. Input - output service is then performed for the external devices in accordance with the interrupt service block function without interrupting the program that is being currently run. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> Introduction <s> Reliability at massive scale is one of the biggest challenges we face at Amazon.com, one of the largest e-commerce operations in the world; even the slightest outage has significant financial consequences and impacts customer trust. The Amazon.com platform, which provides services for many web sites worldwide, is implemented on top of an infrastructure of tens of thousands of servers and network components located in many datacenters around the world. At this scale, small and large components fail continuously and the way persistent state is managed in the face of these failures drives the reliability and scalability of the software systems. This paper presents the design and implementation of Dynamo, a highly available key-value storage system that some of Amazon's core services use to provide an "always-on" experience. To achieve this level of availability, Dynamo sacrifices consistency under certain failure scenarios. It makes extensive use of object versioning and application-assisted conflict resolution in a manner that provides a novel interface for developers to use. <s> BIB002 </s> Conflict-free Replicated Data Types: An Overview <s> Introduction <s> A Commutative Replicated Data Type (CRDT) is one where all concurrent operations commute. The replicas of a CRDT converge automatically, without complex concurrency control. This paper describes Treedoc, a novel CRDT design for cooperative text editing. An essential property is that the identifiers of Treedoc atoms are selected from a dense space. We discuss practical alternatives for implementing the identifier space based on an extended binary tree. We also discuss storage alternatives for data and meta-data, and mechanisms for compacting the tree. In the best case, Treedoc incurs no overhead with respect to a linear text buffer. We validate the results with traces from existing edit histories. <s> BIB003 </s> Conflict-free Replicated Data Types: An Overview <s> Introduction <s> Cassandra is a distributed storage system for managing very large amounts of structured data spread out across many commodity servers, while providing highly available service with no single point of failure. Cassandra aims to run on top of an infrastructure of hundreds of nodes (possibly spread across different data centers). At this scale, small and large components fail continuously. The way Cassandra manages the persistent state in the face of these failures drives the reliability and scalability of the software systems relying on this service. 
While in many ways Cassandra resembles a database and shares many design and implementation strategies therewith, Cassandra does not support a full relational data model; instead, it provides clients with a simple data model that supports dynamic control over data layout and format. Cassandra system was designed to run on cheap commodity hardware and handle high write throughput while not sacrificing read efficiency. <s> BIB004 </s> Conflict-free Replicated Data Types: An Overview <s> Introduction <s> Geo-replicated, distributed data stores that support complex online applications, such as social networks, must provide an "always-on" experience where operations always complete with low latency. Today's systems often sacrifice strong consistency to achieve these goals, exposing inconsistencies to their clients and necessitating complex application logic. In this paper, we identify and define a consistency model---causal consistency with convergent conflict handling, or causal+---that is the strongest achieved under these constraints. We present the design and implementation of COPS, a key-value store that delivers this consistency model across the wide-area. A key contribution of COPS is its scalability, which can enforce causal dependencies between keys stored across an entire cluster, rather than a single server like previous systems. The central approach in COPS is tracking and explicitly checking whether causal dependencies between keys are satisfied in the local cluster before exposing writes. Further, in COPS-GT, we introduce get transactions in order to obtain a consistent view of multiple keys without locking or blocking. Our evaluation shows that COPS completes operations in less than a millisecond, provides throughput similar to previous systems when using one server per cluster, and scales well as we increase the number of servers in each cluster. It also shows that COPS-GT provides similar latency, throughput, and scaling to COPS for common workloads. <s> BIB005 </s> Conflict-free Replicated Data Types: An Overview <s> Introduction <s> Replicating data under Eventual Consistency (EC) allows any replica to accept updates without remote synchronisation. This ensures performance and scalability in large-scale distributed systems (e.g., clouds). However, published EC approaches are ad-hoc and error-prone. Under a formal Strong Eventual Consistency (SEC) model, we study sufficient conditions for convergence. A data type that satisfies these conditions is called a Conflict-free Replicated Data Type (CRDT). Replicas of any CRDT are guaranteed to converge in a self-stabilising manner, despite any number of failures. This paper formalises two popular approaches (state- and operation-based) and their relevant sufficient conditions. We study a number of useful CRDTs, such as sets with clean semantics, supporting both add and remove operations, and consider in depth the more complex Graph data type. CRDT types can be composed to develop large-scale distributed applications, and have interesting theoretical properties. <s> BIB006 </s> Conflict-free Replicated Data Types: An Overview <s> Introduction <s> Spanner is Google's scalable, multi-version, globally-distributed, and synchronously-replicated database. It provides strong transactional semantics, consistent replication, and high performance reads and writes for a variety of Google's applications. I'll discuss the design and implementation of Spanner, as well as some of the lessons we have learned along the way. 
I'll also discuss some open challenges that we still see in building scalable distributed storage systems. <s> BIB007 </s> Conflict-free Replicated Data Types: An Overview <s> Introduction <s> Web service providers have been using NoSQL datastores to provide scalability and availability for globally distributed data at the cost of sacrificing transactional guarantees. Recently, major web service providers like Google have moved towards building storage systems that provide ACID transactional guarantees for globally distributed data. For example, the newly published system, Spanner, uses Two-Phase Commit and Two-Phase Locking to provide atomicity and isolation for globally distributed data, running on top of Paxos to provide fault-tolerant log replication. We show in this paper that it is possible to provide the same ACID transactional guarantees for multi-datacenter databases with fewer cross-datacenter communication trips, compared to replicated logging. Instead of replicating the transactional log, we replicate the commit operation itself, by running Two-Phase Commit multiple times in different datacenters and using Paxos to reach consensus among datacenters as to whether the transaction should commit. Doing so not only replaces several inter-datacenter communication trips with intra-datacenter communication trips, but also allows us to integrate atomic commitment and isolation protocols with consistent replication protocols to further reduce the number of cross-datacenter communication trips needed for consistent replication; for example, by eliminating the need for an election phase in Paxos. We analyze our approach in terms of communication trips to compare it against the log replication approach, then we conduct an extensive experimental study to compare the performance and scalability of both approaches under various multi-datacenter setups. <s> BIB008 </s> Conflict-free Replicated Data Types: An Overview <s> Introduction <s> National Science Foundation (U.S.) (Directorate for Computer and Information Science and Engineering (CISE Expeditions award CCF-1139158)) <s> BIB009 </s> Conflict-free Replicated Data Types: An Overview <s> Introduction <s> This paper proposes a Geo-distributed key-value datastore, named ChainReaction, that offers causal+ consistency, with high performance, fault-tolerance, and scalability. ChainReaction enforces causal+ consistency which is stronger than eventual consistency by leveraging on a new variant of chain replication. We have experimentally evaluated the benefits of our approach by running the Yahoo! Cloud Serving Benchmark. Experimental results show that ChainReaction has better performance in read intensive workloads while offering competitive performance for other workloads. Also we show that our solution requires less metadata when compared with previous work. <s> BIB010 </s> Conflict-free Replicated Data Types: An Overview <s> Introduction <s> Client-side logic and storage are increasingly used in web and mobile applications to improve response time and availability. Current approaches tend to be ad-hoc and poorly integrated with the server-side logic. We present a principled approach to integrate client-and server-side storage. We support both mergeable and strongly consistent transactions that target either client or server replicas and provide access to causally-consistent snapshots efficiently. 
In the presence of infrastructure faults, a client-assisted failover solution allows client execution to resume immediately and seamlessly access consistent snapshots without waiting. We implement this approach in SwiftCloud, the first transactional system to bring geo-replication all the way to the client machine. Example applications show that our programming model is useful across a range of application areas. Our experimental evaluation shows that SwiftCloud provides better fault tolerance and at the same time can improve both latency and throughput by up to an order of magnitude, compared to classical geo-replication techniques. <s> BIB011 </s> Conflict-free Replicated Data Types: An Overview <s> Introduction <s> Transactions with strong consistency and high availability simplify building and reasoning about distributed systems. However, previous implementations performed poorly. This forced system designers to avoid transactions completely, to weaken consistency guarantees, or to provide single-machine transactions that require programmers to partition their data. In this paper, we show that there is no need to compromise in modern data centers. We show that a main memory distributed computing platform called FaRM can provide distributed transactions with strict serializability, high performance, durability, and high availability. FaRM achieves a peak throughput of 140 million TATP transactions per second on 90 machines with a 4.9 TB database, and it recovers from a failure in less than 50 ms. Key to achieving these results was the design of new transaction, replication, and recovery protocols from first principles to leverage commodity networks with RDMA and a new, inexpensive approach to providing non-volatile DRAM. <s> BIB012 </s> Conflict-free Replicated Data Types: An Overview <s> Introduction <s> Most geo-replicated storage systems use weak consistency to avoid the performance penalty of coordinating replicas in different data centers. This departure from strong semantics poses problems to application programmers, who need to address the anomalies enabled by weak consistency. In this paper we use a recently proposed isolation level, called Non-Monotonic Snapshot Isolation, to achieve ACID transactions with low latency. To this end, we present Blotter, a geo-replicated system that leverages these semantics in the design of a new concurrency control protocol that leaves a small amount of local state during reads to make commits more efficient, which is combined with a configuration of Paxos that is tailored for good performance in wide area settings. Read operations always run on the local data center, and update transactions complete in a small number of message steps to a subset of the replicas. We implemented Blotter as an extension to Cassandra. Our experimental evaluation shows that Blotter has a small overhead at the data center scale, and performs better across data centers when compared with our implementations of the core Spanner protocol and of Snapshot Isolation on the same codebase. <s> BIB013
Internet-scale distributed systems often replicate data at multiple geographic locations to provide low latency and high availability, despite node and network failures. Some systems BIB007 BIB008 BIB012 BIB009 BIB013 adopt strong consistency models, where the execution of an operation needs to involve coordination of a quorum of replicas. Although it is possible to improve the throughput of these systems (e.g., through batching), the required coordination leads to high latency for executing one operation, which depends on the round-trip time among replicas and the protocol used. Additionally, in the presence of a network partition or other faults, some nodes might be unable to contact the necessary quorum for executing their operations. An alternative approach is to rely on weaker consistency models, such as eventual consistency BIB002 BIB001 BIB004 or causal consistency BIB005 BIB010 BIB011 , where any replica can accept updates, which are propagated asynchronously to other replicas. These models are also interesting for supporting applications running on mobile devices, to mask the high latency of mobile communications and periods of disconnection or poor connectivity. Systems that adopt a weak consistency model allow replicas to temporarily diverge, requiring a mechanism for merging concurrent updates into a common state. Conflict-free Replicated Data Types (CRDTs) provide a principled approach to address this problem. A CRDT is an abstract data type, with a well-defined interface, designed to be replicated at multiple nodes and exhibiting the following properties: (i) any replica can be modified without coordinating with any other replicas; (ii) when any two replicas have received the same set of updates, they reach the same state, deterministically, by adopting mathematically sound rules to guarantee state convergence. Since our first works proposing a CRDT for concurrent editing BIB003 and later laying the theoretical foundations of CRDTs BIB006 , CRDTs have become mainstream and are used in a large number of systems serving millions of users worldwide. Currently, an application can use CRDTs either by using a storage system that offers CRDTs in its interface, by embedding an existing CRDT library, or by implementing its own support. This document presents an overview of Conflict-free Replicated Data Types research and practice, organized as follows. Section 2 discusses the aspects that are important for an application developer who uses CRDTs to maintain the state of her application. Like any abstract data type, a CRDT implements some given functionality, and important aspects that must be considered include the time complexity for operation execution and the space complexity for storing data and for synchronizing replicas. However, as CRDTs are designed to be replicated and to allow uncoordinated updates, a key aspect of a CRDT is its semantics in the presence of concurrency; this section focuses on this aspect. Section 3 discusses the aspects that are important for the system developer who needs to create a system that includes CRDTs. These developers need to focus on another key aspect of CRDTs: the synchronization model. The synchronization model defines the requirements that the system must meet so that CRDTs work correctly. Finally, Section 4 discusses the aspects that are important for the CRDT developer, focusing on the key ideas and techniques used in the design of existing CRDTs.
Conflict-free Replicated Data Types: An Overview <s> Concurrency semantics <s> The concept of one event happening before another in a distributed system is examined, and is shown to define a partial ordering of the events. A distributed algorithm is given for synchronizing a system of logical clocks which can be used to totally order the events. The use of the total ordering is illustrated with a method for solving synchronization problems. The algorithm is then specialized for synchronizing physical clocks, and a bound is derived on how far out of synchrony the clocks can become. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> Concurrency semantics <s> There is a gap between the theory and practice of distributed systems in terms of the use of time. The theory of distributed systems shunned the notion of time, and introduced “causality tracking” as a clean abstraction to reason about concurrency. The practical systems employed physical time (NTP) information but in a best effort manner due to the difficulty of achieving tight clock synchronization. In an effort to bridge this gap and reconcile the theory and practice of distributed systems on the topic of time, we propose a hybrid logical clock, HLC, that combines the best of logical clocks and physical clocks. HLC captures the causality relationship like logical clocks, and enables easy identification of consistent snapshots in distributed systems. Dually, HLC can be used in lieu of physical/NTP clocks since it maintains its logical clock to be always close to the NTP clock. Moreover HLC fits in to 64 bits NTP timestamp format, and is masking tolerant to NTP kinks and uncertainties.We show that HLC has many benefits for wait-free transaction ordering and performing snapshot reads in multiversion globally distributed databases. <s> BIB002 </s> Conflict-free Replicated Data Types: An Overview <s> Concurrency semantics <s> Geographically distributed systems often rely on replicated eventually consistent data stores to achieve availability and performance. To resolve conflicting updates at different replicas, researchers and practitioners have proposed specialized consistency protocols, called replicated data types, that implement objects such as registers, counters, sets or lists. Reasoning about replicated data types has however not been on par with comparable work on abstract data types and concurrent data types, lacking specifications, correctness proofs, and optimality results. To fill in this gap, we propose a framework for specifying replicated data types using relations over events and verifying their implementations using replication-aware simulations. We apply it to 7 existing implementations of 4 data types with nontrivial conflict-resolution strategies and optimizations (last-writer-wins register, counter, multi-value register and observed-remove set). We also present a novel technique for obtaining lower bounds on the worst-case space overhead of data type implementations and use it to prove optimality of 4 implementations. Finally, we show how to specify consistency of replicated stores with multiple objects axiomatically, in analogy to prior work on weak memory models. Overall, our work provides foundational reasoning tools to support research on replicated eventually consistent stores. <s> BIB003
An abstract data type (or simply data type) defines a set of operations that can be classified into queries, which have no influence on the result of subsequent operations, and updates, whose execution may influence the result of subsequent operations. In an implementation of a data type, a query will not modify the internal state of the implementation, while an update might modify the internal state. For the replication of an object, we consider a system with n nodes. Each node keeps a replica of the object. Applications interact with an object by executing operations on a replica of the object. Updates execute initially in a replica and are propagated asynchronously to all other replicas. In Section 3, we discuss how updates are propagated. The updates defined in a data type may intrinsically commute or not. Consider for instance a Counter, a shared integer that supports increments and decrements. As these updates commute (i.e., executing them in any order yields the same result), the Counter naturally converges towards the same expected result independently of the order in which updates are applied. In this case, it is natural that the state of a CRDT object reflects all executed updates. Unfortunately, for most data types, this is not the case and several concurrency semantics are reasonable, with different semantics being suitable for different applications. For instance, consider a shared set object supporting add and remove updates. There is no correct outcome when concurrently adding and removing the same element. Happens-before relation: When defining the concurrency semantics, an important concept is that of the happens-before relation BIB001 . In a distributed system, an event e 1 happened-before an event e 2 , e 1 ≺ e 2 , iff: (i) e 1 occurred before e 2 in the same process; or (ii) e 1 is the event of sending message m, and e 2 is the event of receiving that message; or (iii) there exists an event e such that e 1 ≺ e and e ≺ e 2 . When applied to CRDTs, we can say that an update u 1 happened-before an update u 2 , u 1 ≺ u 2 , iff the effects of u 1 had been applied in the replica where u 2 was executed initially. As an example, if an event is "Alice reserved the meeting room", it is relevant to know whether that was known when "Bob reserved the meeting room". If that is the case, one reasonable semantics is to give priority to Alice's prior reservation. Otherwise, the events were concurrent and the concurrency semantics must define some arbitration rule to give priority to one update over the other. As discussed later, many CRDTs implement concurrency semantics that give priority to one update over the other concurrent updates. Total order among updates: Another relation that can be useful for defining the concurrency semantics is that of a total order among updates, particularly a total order that approximates wall-clock time. In distributed systems, it is common to maintain nodes with their physical clocks loosely synchronized. When combining the clock time with a site identifier, we have unique timestamps that are totally ordered. Due to the clock skew among multiple nodes, although these timestamps approximate an ideal global physical time, they do not necessarily respect the happens-before relation. Respecting the happens-before relation can be achieved by combining physical and logical clocks, as shown by Hybrid Logical Clocks BIB002 .
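Returning to the Counter example above, the following is a minimal, illustrative sketch (in Python, with assumed names) of why commuting updates make convergence trivial: an operation-based counter applies increments and decrements in any delivery order, and a state-based PN-Counter variant keeps per-replica totals that are merged by pointwise maximum. Both follow standard constructions from the CRDT literature, but omit the delivery and metadata concerns discussed in Section 3.

```python
class OpBasedCounter:
    """Operation-based counter: updates are integers to add, and addition commutes,
    so replicas that apply the same set of updates, in any order, reach the same value."""
    def __init__(self):
        self.value = 0

    def apply(self, delta):
        self.value += delta


class PNCounter:
    """State-based PN-Counter: each replica records how much it has incremented and
    decremented; merging takes the pointwise maximum of these totals, so any two
    replicas that exchange states converge. Replica ids are assumed unique."""
    def __init__(self, replica_id):
        self.rid = replica_id
        self.inc = {}  # replica id -> total increments issued at that replica
        self.dec = {}  # replica id -> total decrements issued at that replica

    def increment(self, n=1):
        self.inc[self.rid] = self.inc.get(self.rid, 0) + n

    def decrement(self, n=1):
        self.dec[self.rid] = self.dec.get(self.rid, 0) + n

    def value(self):
        return sum(self.inc.values()) - sum(self.dec.values())

    def merge(self, other):
        for rid, v in other.inc.items():
            self.inc[rid] = max(self.inc.get(rid, 0), v)
        for rid, v in other.dec.items():
            self.dec[rid] = max(self.dec.get(rid, 0), v)


# Same updates, different delivery orders: both op-based replicas converge.
ops = [+1, +1, -1, +1]
a, b = OpBasedCounter(), OpBasedCounter()
for op in ops:
    a.apply(op)
for op in reversed(ops):
    b.apply(op)
assert a.value == b.value == 2

# Concurrent updates at two PN-Counter replicas converge after merging.
x, y = PNCounter("x"), PNCounter("y")
x.increment(3)
y.decrement(1)
x.merge(y); y.merge(x)
assert x.value() == y.value() == 2
```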
Such a total order makes it possible to define the last-writer-wins semantics, where the value written by the last writer wins over the values written previously, according to the defined total order. We now show how these relations can be used to define sensible concurrency semantics for CRDTs. During our presentation, when defining the value of a CRDT and following Burckhardt et al. BIB003 , we consider the value defined as a function of the set of updates O known at a given replica, the happens-before relation, ≺, established among updates and, for some data types, of the total order, <, defined among updates.
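The last-writer-wins rule can be illustrated by a minimal register sketch (illustrative names; timestamps are supplied by the caller rather than by physical or hybrid logical clocks): each write carries a (timestamp, replica id) tag, and the lexicographically largest tag wins.

```python
class LWWRegister:
    """Minimal last-writer-wins register sketch. Each write is tagged with a
    (timestamp, replica_id) pair; the pair plays the role of the total order '<'
    above, with replica_id breaking ties between equal timestamps."""
    def __init__(self):
        self.value = None
        self.tag = (0, "")          # (timestamp, replica_id) of the winning write

    def write(self, value, timestamp, replica_id):
        self._apply(value, (timestamp, replica_id))

    def _apply(self, value, tag):
        if tag > self.tag:          # lexicographic comparison: timestamp, then replica id
            self.value, self.tag = value, tag

    def merge(self, other):
        self._apply(other.value, other.tag)

# Two replicas write concurrently; after merging in either order both converge
# to the write carrying the largest (timestamp, replica_id) tag.
a, b = LWWRegister(), LWWRegister()
a.write("meeting at 10h", timestamp=5, replica_id="alice")
b.write("meeting at 11h", timestamp=5, replica_id="bob")
a.merge(b); b.merge(a)
assert a.value == b.value == "meeting at 11h"   # "bob" > "alice" breaks the tie
```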
Conflict-free Replicated Data Types: An Overview <s> List / sequence <s> Real-time group editors allow a group of users to view and edit the same document at the same time from geographically dispersed sites connected by communication networks. Consistency maintenance is one of the most significant challenges in the design and implementation of these types of systems. Research on real-time group editors in the past decade has invented an innovative technique for consistency maintenance, called operational transformation. This paper presents an integrative review of the evolution of operational transformation techniques, with the goal of identifying the major issues, algorithms, achievements, and remaining challenges. In addition, this paper contributes a new optimized generic operational transformation control algorithm. Keywords: Consistency maintenance, operational transformation, convergence, causality preservation, intention preservation, group editors, groupware, distributed computing. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> List / sequence <s> A Commutative Replicated Data Type (CRDT) is one where all concurrent operations commute. The replicas of a CRDT converge automatically, without complex concurrency control. This paper describes Treedoc, a novel CRDT design for cooperative text editing. An essential property is that the identifiers of Treedoc atoms are selected from a dense space. We discuss practical alternatives for implementing the identifier space based on an extended binary tree. We also discuss storage alternatives for data and meta-data, and mechanisms for compacting the tree. In the best case, Treedoc incurs no overhead with respect to a linear text buffer. We validate the results with traces from existing edit histories. <s> BIB002 </s> Conflict-free Replicated Data Types: An Overview <s> List / sequence <s> Massive collaborative editing becomes a reality through leading projects such as Wikipedia. This massive collaboration is currently supported with a costly central service. In order to avoid such costs, we aim to provide a peer-to-peer collaborative editing system. Existing approaches to build distributed collaborative editing systems either do not scale in terms of number of users or in terms of number of edits. We present the Logoot approach that scales in these both dimensions while ensuring causality, consistency and intention preservation criteria. We evaluate the Logoot approach and compare it to others using a corpus of all the edits applied on a set of the most edited and the biggest pages of Wikipedia. <s> BIB003 </s> Conflict-free Replicated Data Types: An Overview <s> List / sequence <s> For distributed applications requiring collaboration, responsive and transparent interactivity is highly desired. Though such interactivity can be achieved with optimistic replication, maintaining replica consistency is difficult. To support efficient implementations of collaborative applications, this paper extends a few representative abstract data types (ADTs), such as arrays, hash tables, and growable arrays (or linked lists), into replicated abstract data types (RADTs). In RADTs, a shared ADT is replicated and modified with optimistic operations. Operation commutativity and precedence transitivity are two principles enabling RADTs to maintain consistency despite different execution orders. Especially, replicated growable arrays (RGAs) support insertion/deletion/update operations.
Over previous approaches to the optimistic insertion and deletion, RGAs show significant improvement in performance, scalability, and reliability. <s> BIB004
A list (or sequence) data type maintains an ordered collection of elements, and exports two updates: (i) ins(i, e), for inserting element e in position i, shifting the element in position i, if any, and all subsequent elements to the right; and (ii) rmv(i), for removing the element in position i, if any, and shifting subsequent elements to the left. The problem in implementing a list CRDT BIB002 BIB003 BIB004 is that the position of elements is shifted when inserting and deleting an element. Consider the example of Figure 2, where both replicas start with the list of six characters "012345". In replica A, element "A" is inserted in position 2 (considering the first position of the list as position 0). In replica B, element "B" is inserted in position 4. When synchronizing, if updates were replayed by executing the original operations, in replica A, "B" would be inserted before the "3", leading to "01A2B345", as "3" is the element in position 4 after inserting element "A". The concurrency semantics of lists have been studied extensively in the context of collaborative editing systems BIB001 , with the following being the generally accepted correct semantics. A rmv(i) update should remove the element present in position i in the replica where the update was initially executed. An ins(i, e) update inserts element e after the elements that precede the element in position i in the replica where the update was initially executed, and before the subsequent elements. In the presence of concurrent updates, if the previous rule does not define a total order on the elements, the order of the elements that could occupy the same position is arbitrated using some deterministic rule (that must guarantee that the relative order of elements remains constant over time). Returning to our example, the final result in both replicas should be "01A23B45" because when ins(4, B) was executed in replica B, it inserted "B" between elements "3" and "4". Thus, when the update is applied in replica A, it should also insert "B" between elements "3" and "4".
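A common way to realize this semantics, used by the cited list CRDTs, is to give each element an immutable position identifier drawn from a dense, totally ordered space, so that inserting or removing an element never shifts the identifiers of other elements. The sketch below is our own simplification, inspired by Logoot/Treedoc-style identifiers but not reproducing either algorithm; ties between concurrent inserts are broken by the replica identifier.

    # Sketch of a list CRDT based on dense position identifiers (a simplification
    # inspired by Logoot/Treedoc-style designs, not their actual algorithms).
    # Each element gets an immutable identifier (position, replica_id); removing
    # or inserting elements never changes the identifiers of other elements.
    from fractions import Fraction

    class ListReplica:
        def __init__(self, replica_id):
            self.replica_id = replica_id
            self.elems = []  # list of ((position, replica_id), element)

        def value(self):
            return [e for _, e in sorted(self.elems)]

        def insert(self, index, element):
            ids = [i for i, _ in sorted(self.elems)]
            left = ids[index - 1][0] if index > 0 else Fraction(0)
            right = ids[index][0] if index < len(ids) else Fraction(1)
            ident = ((left + right) / 2, self.replica_id)  # dense space: always room in between
            self.elems.append((ident, element))
            return ident                    # the identifier is shipped to other replicas

        def apply_insert(self, ident, element):
            if all(i != ident for i, _ in self.elems):
                self.elems.append((ident, element))

        def remove(self, index):
            ident = sorted(self.elems)[index][0]
            self.apply_remove(ident)
            return ident                    # ship the identifier, not the index

        def apply_remove(self, ident):
            self.elems = [(i, e) for i, e in self.elems if i != ident]

    # The example of Figure 2: "A" inserted at position 2 and "B" at position 4.
    a, b = ListReplica("A"), ListReplica("B")
    for i, c in enumerate("012345"):
        b.apply_insert(a.insert(i, c), c)
    ia, ib = a.insert(2, "A"), b.insert(4, "B")
    a.apply_insert(ib, "B"); b.apply_insert(ia, "A")
    assert "".join(a.value()) == "".join(b.value()) == "01A23B45"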
Conflict-free Replicated Data Types: An Overview <s> Map of CRDTs: <s> Conflict-Free Replicated Data-Types (CRDTs) [6] provide greater safety properties to eventually-consistent distributed systems without requiring synchronization. CRDTs ensure that concurrent, uncoordinated updates have deterministic outcomes via the properties of bounded join-semilattices. We discuss the design of a new convergent (state-based) replicated data-type, the Map, as implemented by the Riak DT library [4] and the Riak data store [3]. Like traditional dictionary data structures, the Map associates keys with values, and provides operations to add, remove, and mutate entries. Unlike traditional dictionaries, all values in the Map data structure are also state-based CRDTs and updates to embedded values preserve their convergence semantics via lattice inflations [1] that propagate upward to the top-level. Updates to the Map and its embedded values can also be applied atomically in batches. Metadata required for ensuring convergence is minimized in a manner similar to the optimized OR-set [5]. This design allows greater flexibility to application developers working with semi-structured data, while removing the need for the developer to design custom conflict-resolution routines for each class of application data. We also discuss the experimental validation of the data-type using stateful property-based tests with QuickCheck [2]. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> Map of CRDTs: <s> Abstract Conflict-free Replicated Data Types (CRDTs) are distributed data types that make eventual consistency of a distributed object possible and non ad-hoc. Specifically, state-based CRDTs ensure convergence through disseminating the entire state, that may be large, and merging it to other replicas. We introduce Delta State Conflict-Free Replicated Data Types ( δ -CRDT) that can achieve the best of both operation-based and state-based CRDTs: small messages with an incremental nature, as in operation-based CRDTs, disseminated over unreliable communication channels, as in traditional state-based CRDTs. This is achieved by defining δ -mutators to return a delta-state, typically with a much smaller size than the full state, that to be joined with both local and remote states. We introduce the δ -CRDT framework, and we explain it through establishing a correspondence to current state-based CRDTs. In addition, we present an anti-entropy algorithm for eventual convergence, and another one that ensures causal consistency. Finally, we introduce several δ -CRDT specifications of both well-known replicated datatypes and novel datatypes, including a generic map composition. <s> BIB002
A more interesting case is to allow a key to be associated with a CRDT (we call such a CRDT an embedded CRDT 4 ). In this case, besides the put and remove updates, we must consider the fact that some updates modify the state of the embedded CRDT - we refer to these updates generically as upd(k, op). Using this formulation, the put update, which associates a key with an object, can be encoded as an update upd(k, init(o)) that sets the initial value of the object associated with k. The map allows embedding another map, leading to a recursive data type. For simplicity of presentation, we consider that the key is simple, although when a map embeds another map, the key will be composed of multiple parts, one for each of the maps. For defining the concurrency semantics of a map, it is necessary to consider the following cases.

First, the case of concurrent updates performed on the same embedded object. In this case, with the objects being CRDTs, a natural choice is to rely on the semantics of the embedded CRDT to combine the concurrent updates. An aspect that must be considered is that updates executed for a given key should be compatible with the type of the object associated with the key. A special case arises when first associating an object with a key: concurrent updates may associate objects of different types with the same key. A pragmatic solution to address this issue, proposed by Riak developers, is to have, for each key, one object for each CRDT type BIB001 . In this case, when accessing the value of a key, it is necessary to specify which type should be accessed - this can be seen as if the key of the map were the pair (key, data type).

Second, the case of a concurrent remove of the key and update of the object associated with the key (or of an object embedded in the object associated with the key). To address this case, several concurrency semantics can be proposed.

Remove-as-recursive-reset map: A first possible concurrency semantics is remove-as-recursive-reset, where a remove of a key k is transformed into the execution of a reset update on the object o associated with k, and recursively on all objects embedded in o. A reset update is a type-specific operation that sets the value of the object to a bottom value. Concurrent updates to the same object, including reset updates, are then solved by relying on the concurrency semantics defined for the CRDT. This approach requires every object that can be embedded to define a reset update. Additionally, in the concrete implementations of the designs proposed in the literature BIB001 BIB002 , for some data types it might be difficult to have a bottom value that differs from a value of the domain - e.g., for a counter, 0 is often used as the bottom value. In this case, it might become impossible to distinguish between a removed object and an object that was assigned the bottom value. Consider the example of Figure 3, where the map keeps a shared shopping list in which each product is associated with a counter. Now suppose that in replica A the entry "flour" is incremented (e.g., because one of the users of the shopping list realized that he needs more flour to bake a cake). Concurrently, another user has done a checkout, which, as a side effect, removed all entries in the shopping list. After synchronizing, the state of the map will show the value 1 associated with "flour", as the reset update for counters just sets the value to 0 and the concurrent increment is applied on top of it. This seems a sensible semantics for this use-case.
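To illustrate the remove-as-recursive-reset semantics on the shopping-list example, the following is a deliberately simplified sketch (our own model, not the Riak map implementation): a remove of a key is treated as a reset of the embedded counter, and the counter value is computed from the increments that did not happen before a reset.

    # Simplified sketch of the shopping-list example under remove-as-recursive-reset
    # semantics (our own model, not the actual Riak data type).  Updates carry an
    # id; 'prec' holds the happens-before pairs (a, b) meaning a happened before b.
    def flour_value(updates, prec):
        resets = [uid for uid, op in updates if op == ("reset", "flour")]
        total = 0
        for uid, op in updates:
            if op[0] == "inc" and op[1] == "flour":
                # an increment survives unless it happened before some reset
                if not any((uid, r) in prec for r in resets):
                    total += op[2]
        return total

    # Replica A incremented "flour" before and concurrently with the checkout
    # executed at replica B (which resets every entry of the shopping list).
    updates = {("a1", ("inc", "flour", 1)),   # happened before the checkout
               ("a2", ("inc", "flour", 1)),   # concurrent with the checkout
               ("b1", ("reset", "flour"))}
    prec = {("a1", "b1")}
    assert flour_value(updates, prec) == 1    # only the concurrent increment remains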
Remove-wins map: A second possible concurrency semantics is the remove-wins semantics, which gives priority to removes over updates. In this case, intuitively, a remove of key k cancels the effects of all updates to k (or any descendant of k) that either happened-before or are concurrent with the remove. More formally, given the full set of updates O, the set of updates that must be considered for determining the final state of the map is {upd(k, op) ∈ O | ∀ rmv(k′) ∈ O, k′ a prefix of k ⇒ rmv(k′) ≺ upd(k, op)}, i.e., all updates to a key k such that, for every rmv(k′) with k′ a prefix of k, the update happened after the remove of k′. Consider the example of Figure 4, where a map is used to store the state of a game. Player Alice has 10 coins and she has collected a hammer. Now suppose that the system processes concurrently the following operations. In replica A, the state is updated to reflect that Alice has collected a nail. In replica B, Alice is removed from the game, by removing the contents of her state. Using the remove-wins semantics, when combining both updates, the state of the map will include no information for Alice, as the remove wins over concurrent updates. This is a sensible semantics, assuming that removing a player is a definite action and we do not want to keep any state about the removed player. We note that if we had used the remove-as-recursive-reset semantics, the final state would include for Alice only the concurrently inserted object, the nail. This seems unreasonable in this use-case.

Update-wins map: A third possible semantics is the update-wins semantics, which gives priority to updates over removes. Intuitively, we want an update to cancel the effects of a concurrent remove. In the previous example, we want the final state to reflect all updates that modified Alice's state, as adding a nail to her Objects cancels the effect of the concurrent remove - the expected run is presented in Figure 5. This could be used, for example, in a situation where a player is removed because she does not execute updates for some period of time. In such a case, if there is an update concurrent with the remove, it seems sensible to restore the player's state and reflect all updates executed on that state. This would not be the case if any of the previous semantics was used.

Precisely defining the concurrency semantics of an update-wins map is a bit more challenging. Consider the example of Figure 6. In this case, if the semantics was simply defined as an update canceling the effects of concurrent removes, the final value of the map would include the complete information for Alice, as the removes in both replicas would have no effect due to the concurrent updates. We propose a different concurrency semantics for update-wins. Intuitively, the idea is that if at any moment a key is removed in all replicas, the updates on that key that happened before that state, in all replicas, will have no side effects. In our example, this means that the final state would have 5 Coins for Alice, as this is the value written in the state of Alice after she had been removed. A little bit more formally, consider the transitive reduction of the happens-before graph of updates, which includes the edges that target a given update iff all source updates are concurrent among them.
In the reduced graph, for deciding which updates are relevant for computing the state of key k, find the latest vertex-cut 5 that includes only rmv(k) updates -the relevant updates for computing the value associated with k are the ones that happened-after any of the rmv(k) updates in the vertex-cut. If there is no such cut, the effect of removes is canceled by concurrent updates, and all updates (except removes) should be considered for determining the value of the map.
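Returning to the remove-wins formulation above, the filter over the set of updates can be written directly as code. The sketch below is our own illustration (names and representation are hypothetical): an update on key k is kept only if every remove of a prefix of k happened before it.

    # Sketch of the remove-wins filter: an update on key k is relevant only if
    # every remove of a prefix of k happened before it; otherwise the remove wins.
    def is_prefix(k1, k2):
        """Keys are tuples of path components; True if k1 is a prefix of k2."""
        return k2[:len(k1)] == k1

    def relevant_updates(updates, removes, happened_before):
        """updates: iterable of (uid, key, op); removes: iterable of (rid, key);
        happened_before(a, b): True if update a happened before update b."""
        return [(uid, key, op) for uid, key, op in updates
                if all(happened_before(rid, uid)
                       for rid, rkey in removes if is_prefix(rkey, key))]

    # Figure 4: the update of Alice's objects is concurrent with rmv(Alice),
    # so it is not relevant and the remove wins.
    ups = [("u1", ("Alice", "Objects"), "add nail")]
    rms = [("r1", ("Alice",))]
    assert relevant_updates(ups, rms, lambda a, b: False) == []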
Conflict-free Replicated Data Types: An Overview <s> Other CRDTs <s> Replicating data under Eventual Consistency (EC) allows any replica to accept updates without remote synchronisation. This ensures performance and scalability in large-scale distributed systems (e.g., clouds). However, published EC approaches are ad-hoc and error-prone. Under a formal Strong Eventual Consistency (SEC) model, we study sufficient conditions for convergence. A data type that satisfies these conditions is called a Conflict-free Replicated Data Type (CRDT). Replicas of any CRDT are guaranteed to converge in a self-stabilising manner, despite any number of failures. This paper formalises two popular approaches (state- and operation-based) and their relevant sufficient conditions. We study a number of useful CRDTs, such as sets with clean semantics, supporting both add and remove operations, and consider in depth the more complex Graph data type. CRDT types can be composed to develop large-scale distributed applications, and have interesting theoretical properties. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> Other CRDTs <s> Many applications model their data in a general-purpose storage format such as JSON. This data structure is modified by the application as a result of user input. Such modifications are well understood if performed sequentially on a single copy of the data, but if the data is replicated and modified concurrently on multiple devices, it is unclear what the semantics should be. In this paper we present an algorithm and formal semantics for a JSON data structure that automatically resolves concurrent modifications such that no updates are lost, and such that all replicas converge towards the same state (a conflict-free replicated datatype or CRDT). It supports arbitrarily nested list and map types, which can be modified by insertion, deletion and assignment. The algorithm performs all merging client-side and does not depend on ordering guarantees from the network, making it suitable for deployment on mobile devices with poor network connectivity, in peer-to-peer networks, and in messaging systems with end-to-end encryption. <s> BIB002
A number of other CRDTs have been proposed in the literature, including CRDTs for elementary data structures, such as graphs BIB001 , and for more complex structures, such as JSON documents BIB002 . For each of these CRDTs, the developers have defined and implemented a type-specific concurrency semantics.
Conflict-free Replicated Data Types: An Overview <s> Discussion <s> A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> Discussion <s> Replicating data under Eventual Consistency (EC) allows any replica to accept updates without remote synchronisation. This ensures performance and scalability in large-scale distributed systems (e.g., clouds). However, published EC approaches are ad-hoc and error-prone. Under a formal Strong Eventual Consistency (SEC) model, we study sufficient conditions for convergence. A data type that satisfies these conditions is called a Conflict-free Replicated Data Type (CRDT). Replicas of any CRDT are guaranteed to converge in a self-stabilising manner, despite any number of failures. This paper formalises two popular approaches (state- and operation-based) and their relevant sufficient conditions. We study a number of useful CRDTs, such as sets with clean semantics, supporting both add and remove operations, and consider in depth the more complex Graph data type. CRDT types can be composed to develop large-scale distributed applications, and have interesting theoretical properties. <s> BIB002 </s> Conflict-free Replicated Data Types: An Overview <s> Discussion <s> This paper studies the semantics of sets under eventual consistency. The set is a pervasive data type, used either directly or as a component of more complex data types, such as maps or graphs. Eventual consistency of replicated data supports concurrent updates, reduces latency and improves fault tolerance, but forgoes strong consistency (e.g., linearisability). Accordingly, several cloud computing platforms implement eventually-consistent replicated sets [2,4]. <s> BIB003
We now discuss several aspects related to the properties of concurrency semantics, and how they relate to the semantics under sequential execution.

Preservation of sequential semantics: When modeling an abstract data type that has an established semantics under sequential execution, CRDTs should preserve that semantics under a sequential execution. For instance, CRDT sets should ensure that if the last update in a sequence of updates to a set added a given element, then a query immediately after that one will show the element to be present in the set. Conversely, if the last update removed an element, then a subsequent query should not show its presence. Sequential execution can occur even in distributed settings if synchronization is frequent. Replica A can be updated, merged into another replica B and updated there, and merged back into replica A before being updated again in replica A. In this case we have a sequential execution, even though updates have been executed in different replicas. Historically, not all CRDT designs have met this property, typically by restricting the functionality. For example, the two-phase set CRDT BIB002 does not allow re-adding an element that was removed, and thus it breaks the common sequential semantics.

Principle of permutation equivalence: When defining the concurrency semantics for a given abstract data type, if all sequential permutations of updates lead to the same state, then the final state of a CRDT under concurrent execution should also be that state (principle of permutation equivalence BIB003 ). As far as we know, all CRDTs proposed in the literature that preserve the sequential semantics follow this principle 6 .

Equivalence to a sequential execution: For some concurrency semantics, the state of a CRDT, as observed by executing a query, can be explained by a single sequential execution of the updates (that might have been executed concurrently initially). For example, the state of a LWW register (or LWW set) can be explained by the sequential execution of all updates according to the total order defined among updates. We note that this property differs from linearizability BIB001 , as the property we are defining refers to a single replica and to the updates known at that replica, while linearizability concerns the complete system.

Extended behavior under concurrency: Not all CRDTs need or can be explained by sequential executions. The add-wins set is an example of a CRDT where there might be no sequential execution of updates that respects the happens-before relation and explains the state observed, as Figure 7 shows. In this example, the state of the set after all updates propagate to all replicas includes a and b, but in any sequential extension of the causal order a remove update would always be the last update, and consequently the removed element could not belong to the set. Some other CRDTs can exhibit states that are only attained when concurrency does occur. An example is the multi-value register, a register that supports a simple write and read interface (see Section 2.1.1). If used sequentially, the sequential semantics is preserved, and a read will show the outcome of the most recent write in the sequence. However, if two or more values are written concurrently, a subsequent read will show all those values (as the multi-value name implies), and there is no sequential execution that can explain this result. We also note that a follow-up write can overwrite both a single value and multiple values.
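As an illustration of this extended behavior, the following sketch shows a multi-value register that uses version vectors to detect concurrent writes; it is our own simplification for illustration, not a specific published design.

    # Minimal sketch of a multi-value register, using version vectors to detect
    # concurrency (a simplification for illustration, not a published design).
    class MVRegister:
        def __init__(self, replica_id):
            self.replica_id = replica_id
            self.values = []                 # list of (version_vector: dict, value)

        def read(self):
            return {v for _, v in self.values}

        def write(self, value):
            # a new write dominates every value currently visible at this replica
            vv = {}
            for old_vv, _ in self.values:
                for r, c in old_vv.items():
                    vv[r] = max(vv.get(r, 0), c)
            vv[self.replica_id] = vv.get(self.replica_id, 0) + 1
            self.values = [(vv, value)]

        def merge(self, other):
            def dominated(vv, entries):
                return any(vv != o and all(vv.get(r, 0) <= o.get(r, 0) for r in vv)
                           for o, _ in entries)
            combined = self.values + other.values
            self.values = [(vv, v) for vv, v in combined if not dominated(vv, combined)]

    a, b = MVRegister("A"), MVRegister("B")
    a.write("x"); b.write("y")               # concurrent writes
    a.merge(b)
    assert a.read() == {"x", "y"}            # both concurrent values are visible
    a.write("z")                             # a follow-up write overwrites them all
    assert a.read() == {"z"}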
In this section we identify a number of situations and discuss how to address them in each of the synchronization models.
Conflict-free Replicated Data Types: An Overview <s> Stability of arbitration: <s> Collaborative text editing systems allow users to concurrently edit a shared document, inserting and deleting elements (e.g., characters or lines). There are a number of protocols for collaborative text editing, but so far there has been no precise specification of their desired behavior, and several of these protocols have been shown not to satisfy even basic expectations. This paper provides a precise specification of a replicated list object, which models the core functionality of replicated systems for collaborative text editing. We define a strong list specification, which we prove is implemented by an existing protocol, as well as a weak list specification, which admits additional protocol behaviors. A major factor determining the efficiency and practical feasibility of a collaborative text editing protocol is the space overhead of the metadata that the protocol must maintain to ensure correctness. We show that for a large class of list protocols, implementing either the strong or the weak list specification requires a metadata overhead that is at least linear in the number of elements deleted from the list. The class of protocols to which this lower bound applies includes all list protocols that we are aware of, and we show that one of these protocols almost matches the bound. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> Stability of arbitration: <s> In order to converge in the presence of concurrent updates, modern eventually consistent replication systems rely on causality information and operation semantics. It is relatively easy to use semantics of highlevel operations on replicated data structures, such as sets, lists, etc. However, it is difficult to exploit semantics of operations on registers, which store opaque data. In existing register designs, concurrent writes are resolved either by the application, or by arbitrating them according to their timestamps. The former is complex and may require user intervention, whereas the latter causes arbitrary updates to be lost. In this work, we identify a register construction that generalizes existing ones by combining runtime causality ordering, to identify concurrent writes, with static data semantics, to resolve them. We propose a simple conflict resolution template based on an application-predefined order on the domain of values. It eliminates or reduces the number of conflicts that need to be resolved by the user or by an explicit application logic. We illustrate some variants of our approach with use cases, and how it generalizes existing designs. <s> BIB002
The concurrency semantics often involves arbitrating between concurrent updates, in which one update is given priority over another, which then has no influence in determining the state of the object - we call the latter updates cast-off updates. For example, in the add-wins set, an add update is given priority over concurrent remove updates. A property that might influence both the implementation of the system and the way users reason about CRDTs is the stability of arbitration. Intuitively, we say that the arbitration is stable iff an update that was cast-off at some point will remain cast-off in the future. For defining this property more formally, we start by defining that two object states are observably equivalent if the result of executing any query in both states is the same. An update c is a cast-off update for a set of updates O iff the states that result from applying the set of updates O and O \ {c} are observably equivalent BIB001 . We say that the concurrency semantics defined for a CRDT is arbitration stable iff, for any cast-off update c in some set of updates O, when we consider an additional set of updates O′ that are concurrent with or happened after the updates in O, c remains a cast-off update for O ∪ O′.

Figure 8: Example of instability of arbitration in a register with data-driven conflict resolution.

As an example, consider a register CRDT for which, in the presence of concurrent write updates, the value of the register is that of the largest written value BIB002 . More precisely, for a set of updates O, let Omax be the causal frontier, defined as the set of updates for which there is no subsequent update in the happens-before order, i.e., Omax = {u ∈ O | ∄ u′ ∈ O : u ≺ u′}. The value of the register is max({v | wr(v) ∈ Omax}). In the run presented in Figure 8, replica A, after receiving the update wr(5) from replica B, arbitrates that wr(5) should be prioritized over wr(4) because the written value is larger (and wr(4) becomes a cast-off update). However, after later receiving wr(2), wr(5) no longer belongs to the Omax set of O and thus wr(4) should be given priority. The instability of arbitration raises two issues. First, the state in replica A evolves in a somewhat unexpected way for an observer, as the integration of a remote update makes the object move to a prior value. Second, for an implementation it may have implications on when the information about an update can be discarded - the information of a cast-off update cannot be discarded if the update may later become relevant (not cast-off). Although we used a simple register for exemplifying the instability of arbitration, we conjecture that this issue will occur for any CRDT in which the value is decided by arbitrating over the updates in the causal frontier and there are more than two possible values. We further conjecture that in CRDTs that exhibit arbitration instability, a cast-off update cannot be discarded in a replica before the replica has received all concurrent updates.

In this section we discuss two topics that are relevant for the application programmer and that have not been addressed extensively in the literature.
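The largest-value-wins register just described can be sketched as a function of the set of writes and the happens-before relation; the example below (our own illustration) reproduces the run of Figure 8 and shows how receiving wr(2) makes the previously cast-off wr(4) relevant again.

    # Sketch of the largest-value-wins register: its value is the maximum of the
    # writes in the causal frontier (writes with no later write in happens-before).
    def causal_frontier(writes, happened_before):
        return {(w, v) for (w, v) in writes
                if not any(happened_before(w, w2) for (w2, _) in writes if w2 != w)}

    def register_value(writes, happened_before):
        return max(v for _, v in causal_frontier(writes, happened_before))

    # Replica A wrote 4; replica B wrote 5 and afterwards 2 (wr(5) happened before wr(2)).
    hb = {("b5", "b2")}
    before = lambda x, y: (x, y) in hb

    # When A has received only wr(5), 5 wins and wr(4) is cast-off.
    assert register_value({("a4", 4), ("b5", 5)}, before) == 5
    # After also receiving wr(2), wr(5) leaves the frontier and wr(4) wins again.
    assert register_value({("a4", 4), ("b5", 5), ("b2", 2)}, before) == 4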
Conflict-free Replicated Data Types: An Overview <s> Encapsulating invariant preservation <s> A method is presented for permitting record updates by long-lived transactions without forbidding simultaneous access by other users to records modified. Earlier methods presented separately by Gawlick and Reuter are comparable but concentrate on “hot-spot” situations, where even short transactions cannot lock frequently accessed fields without causing bottlenecks. The Escrow Method offered here is designed to support nonblocking record updates by transactions that are “long lived” and thus require long periods to complete. Recoverability of intermediate results prior to commit thus becomes a design goal, so that updates as of a given time can be guaranteed against memory or media failure while still retaining the prerogative to abort. This guarantee basically completes phase one of a two-phase commit, and several advantages result: (1) As with Gawlick's and Reuter's methods, high-concurrency items in the database will not act as a bottleneck; (2) transaction commit of different updates can be performed asynchronously, allowing natural distributed transactions; indeed, distributed transactions in the presence of delayed messages or occasional line disconnection become feasible in a way that we argue will tie up minimal resources for the purpose intended; and (3) it becomes natural to allow for human interaction in the middle of a transaction without loss of concurrent access or any special difficulty for the application programmer. The Escrow Method, like Gawlick's Fast Path and Reuter's Method, requires the database system to be an “expert” about the type of transactional updates performed, most commonly updates involving incremental changes to aggregate quantities. However, the Escrow Method is extendable to other types of updates. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> Encapsulating invariant preservation <s> Advances in computer and telecommunication technologies have made mobile computing a reality. However, greater mobility implies a more tenuous network connection and a higher rate of disconnection. In order to tolerate disconnections as well as to reduce the delays and cost of wireless communication, it is necessary to support autonomous mobile operations on data shared by stationary hosts. This would allow the part of a computation executing on a mobile host to continue executing while the mobile host is not connected to the network. In this paper, we examine whether object semantics can be exploited to facilitate autonomous and disconnected operation in mobile database applications. We define the class of fragmentable objects which may be split among a number of sites, operated upon independently at each site, and then recombined in a semantically consistent fashion. A number of objects with such characteristics are presented and an implementation of fragmentable stacks is shown and discussed. <s> BIB002 </s> Conflict-free Replicated Data Types: An Overview <s> Encapsulating invariant preservation <s> Traditional protocols for distributed database management have a high message overhead; restrain or lock access to resources during protocol execution; and may become impractical for some scenarios like real-time systems and very large distributed databases. In this article, we present the demarcation protocol; it overcomes these problems by using explicit consistency constraints as the correctness criteria. 
The method establishes safe limits as "lines drawn in the sand" for updates, and makes it possible to change these limits dynamically, enforcing the constraints at all times. We show how this technique can be applied to linear arithmetic, existential, key, and approximate copy constraints. <s> BIB003 </s> Conflict-free Replicated Data Types: An Overview <s> Encapsulating invariant preservation <s> Minimizing coordination, or blocking communication between concurrently executing operations, is key to maximizing scalability, availability, and high performance in database systems. However, uninhibited coordination-free execution can compromise application correctness, or consistency. When is coordination necessary for correctness? The classic use of serializable transactions is sufficient to maintain correctness but is not necessary for all applications, sacrificing potential scalability. In this paper, we develop a formal framework, invariant confluence, that determines whether an application requires coordination for correct execution. By operating on application-level invariants over database states (e.g., integrity constraints), invariant confluence analysis provides a necessary and sufficient condition for safe, coordination-free execution. When programmers specify their application invariants, this analysis allows databases to coordinate only when anomalies that might violate invariants are possible. We analyze the invariant confluence of common invariants and operations from real-world database systems (i.e., integrity constraints) and applications and show that many are invariant confluent and therefore achievable without coordination. We apply these results to a proof-of-concept coordination-avoiding database prototype and demonstrate sizable performance gains compared to serializable execution, notably a 25-fold improvement over prior TPC-C New-Order performance on a 200 server cluster. <s> BIB004 </s> Conflict-free Replicated Data Types: An Overview <s> Encapsulating invariant preservation <s> Geo-replicated databases often operate under the principle of eventual consistency to offer high-availability with low latency on a simple key/value store abstraction. Recently, some have adopted commutative data types to provide seamless reconciliation for special purpose data types, such as counters. Despite this, the inability to enforce numeric invariants across all replicas still remains a key shortcoming of relying on the limited guarantees of eventual consistency storage. We present a new replicated data type, called bounded counter, which adds support for numeric invariants to eventually consistent geo-replicated databases. We describe how this can be implemented on top of existing cloud stores without modifying them, using Riak as an example. Our approach adapts ideas from escrow transactions to devise a solution that is decentralized, fault-tolerant and fast. Our evaluation shows much lower latency and better scalability than the traditional approach of using strong consistency to enforce numeric invariants, thus alleviating the tension between consistency and availability. <s> BIB005 </s> Conflict-free Replicated Data Types: An Overview <s> Encapsulating invariant preservation <s> Geo-replicated storage systems are at the core of current Internet services. The designers of the replication protocols used by these systems must choose between either supporting low-latency, eventually-consistent operations, or ensuring strong consistency to ease application correctness. 
We propose an alternative consistency model, Explicit Consistency, that strengthens eventual consistency with a guarantee to preserve specific invariants defined by the applications. Given these application-specific invariants, a system that supports Explicit Consistency identifies which operations would be unsafe under concurrent execution, and allows programmers to select either violation-avoidance or invariant-repair techniques. We show how to achieve the former, while allowing operations to complete locally in the common case, by relying on a reservation system that moves coordination off the critical path of operation execution. The latter, in turn, allows operations to execute without restriction, and restore invariants by applying a repair operation to the database state. We present the design and evaluation of Indigo, a middleware that provides Explicit Consistency on top of a causally-consistent data store. Indigo guarantees strong application invariants while providing similar latency to an eventually-consistent system in the common case. <s> BIB006
CRDTs encapsulate the merge of concurrent updates, making complex concurrency semantics available to any application programmer. As mentioned in the introduction, some applications require global invariants to be maintained for correctness. The CRDTs discussed in Section 2.1 are unable to enforce such global invariants. We now discuss how some global invariants can be enforced by encapsulating the necessary algorithms inside a CRDT, thus empowering application developers with much more powerful abstractions. It is known that some global invariants can be enforced under weak consistency BIB004 . Other invariants, such as numeric invariants that are usually enforced with global coordination, can be enforced in a conflict-free manner by using escrow techniques BIB001 that split the available resources among the different replicas. The Bounded Counter CRDT BIB005 defines a counter that never goes negative, by encapsulating an implementation of escrow techniques that runs under the same system model as CRDTs. The Bounded Counter assigns to each replica a number of allowed decrements, under the condition that the sum of all allowed decrements does not exceed the value of the counter. While its assigned decrements are not exhausted, a replica can accept decrements without coordinating with other replicas. After a replica exhausts its allowed decrements, a new decrement will either fail or require synchronizing with some replica that can still decrement. It is possible to generalize this approach to enforce other system-wide invariants, including invariants that enforce conditions over the state of multiple CRDTs BIB006 . An interesting research question is which other invariants can be enforced (e.g., by adapting other algorithms proposed in the past BIB003 BIB002 ) and what other functionality can be encapsulated in objects that are updated using an eventual consistency model.
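The escrow idea behind the Bounded Counter can be illustrated with the following simplified sketch (our own illustration, not the actual design of the cited paper): each replica holds a local allowance of decrements, and a decrement succeeds without coordination only while that allowance lasts.

    # Simplified sketch of the escrow idea behind the Bounded Counter (not the
    # actual cited design): each replica holds a local allowance of decrements,
    # so the global value can never go below zero.
    class EscrowCounter:
        def __init__(self, replica_id, initial_allowance):
            self.replica_id = replica_id
            self.allowance = initial_allowance   # decrements this replica may apply locally

        def decrement(self, n=1):
            if n > self.allowance:
                return False                     # exhausted: fail or ask another replica
            self.allowance -= n
            return True

        def transfer_to(self, other, n):
            """Move n decrement rights to another replica (requires communication)."""
            if n > self.allowance:
                return False
            self.allowance -= n
            other.allowance += n
            return True

    # A counter with value 10, with the decrement rights split between two replicas.
    a, b = EscrowCounter("A", 6), EscrowCounter("B", 4)
    assert a.decrement(5)                        # succeeds without coordination
    assert not a.decrement(3)                    # would exceed A's remaining allowance
    assert b.transfer_to(a, 2) and a.decrement(2)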
Conflict-free Replicated Data Types: An Overview <s> Transactions and other programming models <s> We describe the design and implementation of Walter, a key-value store that supports transactions and replicates data across distant sites. A key feature behind Walter is a new property called Parallel Snapshot Isolation (PSI). PSI allows Walter to replicate data asynchronously, while providing strong guarantees within each site. PSI precludes write-write conflicts, so that developers need not worry about conflict-resolution logic. To prevent write-write conflicts and implement PSI, Walter uses two new and simple techniques: preferred sites and counting sets. We use Walter to build a social networking application and port a Twitter-like application. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> Transactions and other programming models <s> Client-side logic and storage are increasingly used in web and mobile applications to improve response time and availability. Current approaches tend to be ad-hoc and poorly integrated with the server-side logic. We present a principled approach to integrate client-and server-side storage. We support both mergeable and strongly consistent transactions that target either client or server replicas and provide access to causally-consistent snapshots efficiently. In the presence of infrastructure faults, a client-assisted failover solution allows client execution to resume immediately and seamlessly access consistent snapshots without waiting. We implement this approach in SwiftCloud, the first transactional system to bring geo-replication all the way to the client machine. Example applications show that our programming model is useful across a range of application areas. Our experimental evaluation shows that SwiftCloud provides better fault tolerance and at the same time can improve both latency and throughput by up to an order of magnitude, compared to classical geo-replication techniques. <s> BIB002 </s> Conflict-free Replicated Data Types: An Overview <s> Transactions and other programming models <s> Online services often use replication for improving the performance of user-facing services. However, using replication for performance comes at a price of weakening the consistency levels of the replicated service. To address this tension, recent proposals from academia and industry allow operations to run at different consistency levels. In these systems, the programmer has to decide which level to use for each operation. We present SIEVE, a tool that relieves Java programmers from this errorprone decision process, allowing applications to automatically extract good performance when possible, while resorting to strong consistency whenever required by the target semantics. Taking as input a set of application-specific invariants and a few annotations about merge semantics, SIEVE performs a combination of static and dynamic analysis, offline and at runtime, to determine when it is necessary to use strong consistency to preserve these invariants and when it is safe to use causally consistent commutative replicated data types (CRDTs). We evaluate SIEVE on two web applications and show that the automatic classification overhead is low. <s> BIB003 </s> Conflict-free Replicated Data Types: An Overview <s> Transactions and other programming models <s> We propose Lasp, a new programming model designed to simplify large-scale distributed programming. 
Lasp combines ideas from deterministic dataflow programming together with conflict-free replicated data types (CRDTs). This provides support for computations where not all participants are online together at a given moment. The initial design presented here provides powerful primitives for composing CRDTs, which lets us write long-lived fault-tolerant distributed applications with nonmonotonic behavior in a monotonic framework. Given reasonable models of node-to-node communications and node failures, we prove formally that a Lasp program can be considered as a functional program that supports functional reasoning and programming techniques. We have implemented Lasp as an Erlang library built on top of the Riak Core distributed systems framework. We have developed one nontrivial large-scale application, the advertisement counter scenario from the SyncFree research project. We plan to extend our current prototype into a general-purpose language in which synchronization is used as little as possible. <s> BIB004 </s> Conflict-free Replicated Data Types: An Overview <s> Transactions and other programming models <s> Developers of cloud-scale applications face a difficult decision of which kind of storage to use, summarised by the CAP theorem. Currently the choice is between classical CP databases, which provide strong guarantees but are slow, expensive, and unavailable under partition, and NoSQL-style AP databases, which are fast and available, but too hard to program against. We present an alternative: Cure provides the highest level of guarantees that remains compatible with availability. These guarantees include: causal consistency (no ordering anomalies), atomicity (consistent multi-key updates), and support for high-level data types (developer friendly API) with safe resolution of concurrent updates (guaranteeing convergence). These guarantees minimise the anomalies caused by parallelism and distribution, thus facilitating the development of applications. This paper presents the protocols for highly available transactions, and an experimental evaluation showing that Cure is able to achieve scalability similar to eventually-consistent NoSQL databases, while providing stronger guarantees. <s> BIB005
CRDTs have been used in several storage systems that provide different forms of transactions. SwiftCloud BIB002 and Antidote BIB005 provide a weak form of transactions that is highly available and never aborts, with reads observing a snapshot of the database and concurrent writes being merged using the CRDTs' defined concurrency semantics. Walter BIB001 and Sieve BIB003 provide support for both weak and strong forms of transactions. While weak transactions can execute concurrently, with concurrent updates being merged using CRDTs, strong transactions (that may access non-CRDT objects) are executed according to a serial order. Lasp BIB004 proposes a dataflow programming model where the state of a CRDT is the result of executing a computation over the state of some other CRDTs. Such derived CRDTs are similar to views in database systems, and the Lasp runtime executes an algorithm whose goal is similar to that of incremental materialized view maintenance in database systems.
Conflict-free Replicated Data Types: An Overview <s> State-based synchronization <s> This invention pertains to the performance of a microroutine in accordance with a program being run on a processor having user's instructions in main memory. External devices having discrete device numbers cause interrupt signals to be transmitted to the microroutine which acknowledges the interrupt signal and obtains the device number associated with the signal. A service pointer is fetched from the service pointer table in the main memory corresponding to the device number, which in turn defines a service block function to be fetched from the main memory. Input - output service is then performed for the external devices in accordance with the interrupt service block function without interrupting the program that is being currently run. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> State-based synchronization <s> Many mobile environments require optimistic replication for improved performance and reliability. Peer-to-peer replication strategies provide advantages over traditional client-server models by enabling any-to-any communication. These advantages are especially useful in mobile environments, when communicating with close peers can be cheaper than communicating with a distant server. However, most peer solutions require that all replicas store the entire replication unit. Such strategies are inefficient and expensive, forcing users to store unneeded data and to spend scarce resources maintaining consistency on that data. ::: ::: We have developed a set of algorithms and controls that implement selective replication, the ability to independently replicate individual portions of the large replication unit. We present a description of the algorithms and their implementation, as well as a performance analysis. We argue that these methods permit the practical use of peer optimistic replication. <s> BIB002 </s> Conflict-free Replicated Data Types: An Overview <s> State-based synchronization <s> Reliability at massive scale is one of the biggest challenges we face at Amazon.com, one of the largest e-commerce operations in the world; even the slightest outage has significant financial consequences and impacts customer trust. The Amazon.com platform, which provides services for many web sites worldwide, is implemented on top of an infrastructure of tens of thousands of servers and network components located in many datacenters around the world. At this scale, small and large components fail continuously and the way persistent state is managed in the face of these failures drives the reliability and scalability of the software systems. This paper presents the design and implementation of Dynamo, a highly available key-value storage system that some of Amazon's core services use to provide an "always-on" experience. To achieve this level of availability, Dynamo sacrifices consistency under certain failure scenarios. It makes extensive use of object versioning and application-assisted conflict resolution in a manner that provides a novel interface for developers to use. <s> BIB003 </s> Conflict-free Replicated Data Types: An Overview <s> State-based synchronization <s> Replicating data under Eventual Consistency (EC) allows any replica to accept updates without remote synchronisation. This ensures performance and scalability in large-scale distributed systems (e.g., clouds). However, published EC approaches are ad-hoc and error-prone. 
Under a formal Strong Eventual Consistency (SEC) model, we study sufficient conditions for convergence. A data type that satisfies these conditions is called a Conflict-free Replicated Data Type (CRDT). Replicas of any CRDT are guaranteed to converge in a self-stabilising manner, despite any number of failures. This paper formalises two popular approaches (state- and operation-based) and their relevant sufficient conditions. We study a number of useful CRDTs, such as sets with clean semantics, supporting both add and remove operations, and consider in depth the more complex Graph data type. CRDT types can be composed to develop large-scale distributed applications, and have interesting theoretical properties. <s> BIB004
In state-based synchronization, replicas synchronize by establishing bi-directional (or unidirectional) synchronization sessions, in which both (respectively, one) replicas send their state to a peer replica. When a replica receives the state of a peer, it merges the received state with its local state. CRDTs designed for state-based replication define a merge function to integrate the state of a remote replica. It has been shown BIB004 that all replicas of a CRDT converge if: (i) the possible states of the CRDT are partially ordered according to ≤, forming a join semilattice; (ii) an update modifies the state s of a replica by an inflation, producing a new state that is larger than or equal to the original state according to ≤, i.e., for any update u, s ≤ u(s); (iii) the merge function produces the join (least upper bound) of two states, i.e., for states s1 and s2 it derives s1 ⊔ s2. For guaranteeing that all updates eventually reach all replicas, it is only necessary to guarantee that the synchronization graph is connected. A number of replicated systems BIB001 BIB002 BIB003 used this approach, adopting different synchronization schedules, topologies and mechanisms for deciding if and what data to exchange.
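A grow-only counter is a canonical example that satisfies conditions (i)-(iii); the sketch below (our own illustration) represents the state as a map from replica identifier to a count, uses increments as inflations, and merges by taking the pointwise maximum, which is the join of the two states.

    # Sketch of a state-based grow-only counter: states are maps from replica id
    # to a count, ordered pointwise; increment is an inflation; merge is the join.
    class GCounter:
        def __init__(self, replica_id):
            self.replica_id = replica_id
            self.counts = {}                  # replica id -> increments seen from it

        def value(self):
            return sum(self.counts.values())

        def increment(self, n=1):             # an inflation: the state only grows
            self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

        def merge(self, other):               # join: pointwise maximum of both states
            for r, c in other.counts.items():
                self.counts[r] = max(self.counts.get(r, 0), c)

    a, b = GCounter("A"), GCounter("B")
    a.increment(2); b.increment(3)
    a.merge(b); b.merge(a)
    assert a.value() == b.value() == 5        # replicas converge after merging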
Conflict-free Replicated Data Types: An Overview <s> Operation-based synchronization <s> A broadcast protocol system which uses a plurality of distributed computers which are electrically connected by a broadcast medium. Each message is broadcast to all computers with a header in which there is a message identifier containing the identity of the broadcasting processor and a message sequence number. A retransmission number is also included in the identifier to distinguish retransmissions. Each processor maintains an acknowledgement list of message identifiers with positive and negative acknowledgements. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> Operation-based synchronization <s> The design and correctness of a communication facility for a distributed computer system are reported on. The facility provides support for fault-tolerant process groups in the form of a family of reliable multicast protocols that can be used in both local- and wide-area networks. These protocols attain high levels of concurrency, while respecting application-specific delivery ordering constraints, and have varying cost and performance that depend on the degree of ordering desired. In particular, a protocol that enforces causal delivery orderings is introduced and shown to be a valuable alternative to conventional asynchronous communication protocols. The facility also ensures that the processes belonging to a fault-tolerant process group will observe consistent orderings of events affecting the group as a whole, including process failures, recoveries, migration, and dynamic changes to group properties like member rankings. A review of several uses for the protocols in the ISIS system, which supports fault-tolerant resilient objects and bulletin boards, illustrates the significant simplification of higher level algorithms made possible by our approach. <s> BIB002 </s> Conflict-free Replicated Data Types: An Overview <s> Operation-based synchronization <s> When a database is replicated at many sites, maintaining mutual consistency among the sites in the face of updates is a significant problem. This paper describes several randomized algorithms for distributing updates and driving the replicas toward consistency. The algorithms are very simple and require few guarantees from the underlying communication system, yet they ensure that the effect of every update is eventually reflected in all replicas. The cost and performance of the algorithms are tuned by choosing appropriate distributions in the randomization step. The algorithms are closely analogous to epidemics, and the epidemiology literature aids in understanding their behavior. One of the algorithms has been implemented in the Clearinghouse servers of the Xerox Corporate Internet, solving long-standing problems of high traffic and database inconsistency. <s> BIB003 </s> Conflict-free Replicated Data Types: An Overview <s> Operation-based synchronization <s> Replicating data under Eventual Consistency (EC) allows any replica to accept updates without remote synchronisation. This ensures performance and scalability in large-scale distributed systems (e.g., clouds). However, published EC approaches are ad-hoc and error-prone. Under a formal Strong Eventual Consistency (SEC) model, we study sufficient conditions for convergence. A data type that satisfies these conditions is called a Conflict-free Replicated Data Type (CRDT).
Replicas of any CRDT are guaranteed to converge in a self-stabilising manner, despite any number of failures. This paper formalises two popular approaches (state- and operation-based) and their relevant sufficient conditions. We study a number of useful CRDTs, such as sets with clean semantics, supporting both add and remove operations, and consider in depth the more complex Graph data type. CRDT types can be composed to develop large-scale distributed applications, and have interesting theoretical properties. <s> BIB004
In operation-based synchronization, replicas converge by propagating updates to every other replica. When an update is received at a replica, it is applied to the local replica state. Besides requiring that every update operation is reliably delivered to all replicas, some CRDT designs require updates to be delivered according to some specific order, with causal order being the most common. CRDTs designed for operation-based replication must define, for each update, a generator and an effector function. The generator function executes at the replica where the update is submitted; it has no side effects and generates an effector that encodes the side effects of the update. In other words, the effector is a closure created by the generator that depends on the state of the origin replica. The effector must be reliably executed in all replicas, where it updates the replica state. It has been shown BIB004 that if effector operations are delivered in causal order, replicas will converge to the same state if concurrent effector operations commute. If effector operations may be delivered without respecting causal order, then all effector operations must commute. If effector operations may be delivered more than once, then all effector operations must be idempotent. Most operation-based CRDT designs require exactly-once and causal delivery. To guarantee that all updates are reliably delivered to all replicas, it is possible to rely on any reliable multicast communication subsystem. A large number of protocols have been proposed for achieving this goal BIB001 BIB002 BIB003 . In practice, it is also possible to rely on a reliable publish-subscribe system, such as Apache Kafka 10 , for this purpose.
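The generator/effector split can be illustrated with an operation-based add-wins (observed-remove) set, sketched below under the assumption of causal delivery of effectors (our own simplified rendering of this well-known design): the generator for a remove reads the local state to capture the element's currently observed tags, so that a concurrent add, which carries a fresh tag, wins.

    # Sketch of an operation-based add-wins (observed-remove) set, illustrating
    # the generator/effector split; causal delivery of effectors is assumed.
    import uuid

    class ORSet:
        def __init__(self):
            self.tags = {}                    # element -> set of unique tags

        def value(self):
            return {e for e, ts in self.tags.items() if ts}

        # Generators: run only at the source replica and produce an effector; no side effects.
        def prepare_add(self, e):
            return ("add", e, {uuid.uuid4().hex})            # a fresh tag makes the add unique

        def prepare_remove(self, e):
            return ("rmv", e, set(self.tags.get(e, set())))  # only the tags observed locally

        # Effector: applied at every replica, including the source.
        def apply(self, effector):
            kind, e, tags = effector
            current = self.tags.setdefault(e, set())
            if kind == "add":
                current |= tags
            else:
                current -= tags               # a concurrent add has a new tag and survives

    a, b = ORSet(), ORSet()
    add1 = a.prepare_add("x"); a.apply(add1); b.apply(add1)
    rmv = a.prepare_remove("x")               # concurrent with a new add at replica B
    add2 = b.prepare_add("x")
    for op in (rmv, add2):
        a.apply(op); b.apply(op)
    assert a.value() == b.value() == {"x"}    # the concurrent add wins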
Conflict-free Replicated Data Types: An Overview <s> Alternative synchronization models <s> Disconnected operation is a mode of operation that enables a client to continue accessing critical data during temporary failures of a shared data repository. An important, though not exclusive, application of disconnected operation is in supporting portable computers. In this paper, we show that disconnected operation is feasible, efficient and usable by describing its design and implementation in the Coda File System. The central idea behind our work is that caching of data, now widely used for performance, can also be exploited to improve availability. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> Alternative synchronization models <s> The Rover toolkit combines relocatable dynamic objects and queued remote procedure calls to provide unique services for roving mobile applications. A relocatable dynamic object is an object with a well-defined interface that can be dynamically loaded into a client computer from a server computer (or vice versa) to reduce client-server communication requirements. Queued remote procedure call is a communication system that permits applications to continue to make non-blocking remote procedure call requests even when a host is disconnected, with requests and responses being exchanged upon network reconnection. The challenges of mobile environments include intermittent connectivity, limited bandwidth, and channel-use optimization. Experimental results from a Rover-based mail reader, calendar program, and two non-blocking versions of World-Wide Web browsers show that Rover's services are a good match to these challenges. The Rover toolkit also offers advantages for workstation applications by providing a uniform distributed object architecture for code shipping, object caching, and asynchronous object invocation. <s> BIB002 </s> Conflict-free Replicated Data Types: An Overview <s> Alternative synchronization models <s> In asynchronous collaborative applications, users usually collaborate accessing and modifying shared information independently. We have designed and implemented a replicated object store to support such applications in distributed environments that include mobile computers. Unlike most data management systems, awareness support is integrated in the system. To improve the chance for new contributions, the system provides high data availability. The development of applications is supported by an object framework that decomposes objects in several components, each one managing a different aspect of object "execution". New data types may be created relying on pre-defined components to handle concurrent updates, awareness information, etc. <s> BIB003 </s> Conflict-free Replicated Data Types: An Overview <s> Alternative synchronization models <s> Abstract Conflict-free Replicated Data Types (CRDTs) are distributed data types that make eventual consistency of a distributed object possible and non ad-hoc. Specifically, state-based CRDTs ensure convergence through disseminating the entire state, that may be large, and merging it to other replicas. We introduce Delta State Conflict-Free Replicated Data Types ( δ -CRDT) that can achieve the best of both operation-based and state-based CRDTs: small messages with an incremental nature, as in operation-based CRDTs, disseminated over unreliable communication channels, as in traditional state-based CRDTs. 
This is achieved by defining δ -mutators to return a delta-state, typically with a much smaller size than the full state, that to be joined with both local and remote states. We introduce the δ -CRDT framework, and we explain it through establishing a correspondence to current state-based CRDTs. In addition, we present an anti-entropy algorithm for eventual convergence, and another one that ensures causal consistency. Finally, we introduce several δ -CRDT specifications of both well-known replicated datatypes and novel datatypes, including a generic map composition. <s> BIB004 </s> Conflict-free Replicated Data Types: An Overview <s> Alternative synchronization models <s> Replication is a key technique for providing both fault tolerance and availability in distributed systems. However, managing replicated state, and ensuring that these replicas remain consistent, is a non trivial task, in particular in scenarios where replicas can reside on the client-side, as clients might have unreliable communication channels and hence, exhibit highly dynamic communication patterns. One way to simplify this task is to resort to CRDTs, which are data types that enable replication and operation over replicas with no coordination, ensuring eventual state convergence when these replicas are synchronized. However, when the communication patters, and therefore synchronization patterns, are highly dynamic, existing designs of CRDTs might incur in excessive communication overhead. To address those scenarios, in this paper we propose a new design for CRDTs which we call Δ-CRDT, and experimentally show that under dynamic communication patters, this novel design achieves better network utilization than existing alternatives. <s> BIB005 </s> Conflict-free Replicated Data Types: An Overview <s> Alternative synchronization models <s> Replication is a key technique in the design of efficient and reliable distributed systems. As information grows, it becomes difficult or even impossible to store all information at every replica. A common approach to deal with this problem is to rely on partial replication, where each replica maintains only a part of the total system information. As a consequence, a remote replica might need to be contacted for computing the reply to some given query, which leads to high latency costs particularly in geo-replicated settings. In this work, we introduce the concept of non-uniform replication, where each replica stores only part of the information, but where all replicas store enough information to answer every query. We apply this concept to eventual consistency and conflict-free replicated data types. We show that this model can address useful problems and present two data types that solve such problems. Our evaluation shows that non-uniform replication is more efficient than traditional replication, using less storage space and network bandwidth. <s> BIB006
Delta-based synchronization: When comparing state-based and operation-based synchronization, a simple observation is that if an update only modifies part of the state, propagating the complete state for synchronization to a remote replica is inefficient, as the remote replica already knows most of the state. On the other hand, if a large number of updates modify the same state, e.g. increments in a counter, propagating the state once is much more efficient than propagating all update operations. To address this issue, it is possible to design CRDTs that allow being synchronized using both state-based and operation-based approaches. Delta-state CRDTs BIB004 address this issue in a principled way by propagating delta-mutators, which encode the changes that have been made to a replica since the last communication. The first time a replica communicates with some other replica, the full state needs to be propagated. Big delta state CRDTs BIB005 improve the cost of this first synchronization by being able to compute delta-mutators from a summary of the remote state. Join decompositions are an alternative approach to achieve the same goal by computing digests that help determine which parts of a remote state are needed. A minimal sketch of a delta-state counter is given after this paragraph. In the context of operation-based replication, effector operations should be applied immediately at the source replica, which executed the generator. However, propagation to other replicas can be deferred for some period, with effectors stored in an outbound log, presenting an opportunity to compress the log by rewriting some operations - e.g. two add(1) operations in a counter can be converted into an add(2) operation. Several systems BIB002 BIB001 BIB003 BIB006 have included mechanisms for compressing the log of operations used for replication. We note that delta-mutators can also be seen as a compressed representation of a log of operations. Pure operation-based synchronization: Operation-based CRDTs require executing a generator function against the replica state to compute an effector operation. In some scenarios, this may introduce an unacceptable delay for propagating an update. Pure operation-based CRDTs address this issue by allowing the original update operations to be propagated to all replicas, typically at the cost of a more complex operation representation and of having to store more metadata in the CRDT state.
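As a concrete illustration (our sketch, not code from the cited papers), the fragment below shows a delta-state grow-only counter in Python: a delta-mutator returns only the map entry it changed, and the same join function is used to merge deltas and full states. Increments are assumed non-negative, so the entry-wise maximum is a correct join.

    # Illustrative delta-state grow-only counter: each replica owns one entry
    # of a map; deltas carry only the entry that changed.

    def inc_delta(state, replica_id, n=1):
        # Delta-mutator: returns a small state containing just the updated entry.
        # n must be non-negative for the max-based join below to be correct.
        return {replica_id: state.get(replica_id, 0) + n}

    def join(a, b):
        # Join (least upper bound): entry-wise maximum; idempotent,
        # commutative and associative, so it works for deltas and full states.
        return {r: max(a.get(r, 0), b.get(r, 0)) for r in set(a) | set(b)}

    def value(state):
        return sum(state.values())

    if __name__ == "__main__":
        a, b = {}, {}
        d = inc_delta(a, "A", 5)          # update at replica A
        a = join(a, d)                    # apply the delta locally
        b = join(b, d)                    # ship only the delta to replica B
        b = join(b, inc_delta(b, "B", 2))
        a = join(a, b)                    # a later exchange converges the replicas
        assert value(a) == 7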
Conflict-free Replicated Data Types: An Overview <s> Applications <s> We describe the design and implementation of Walter, a key-value store that supports transactions and replicates data across distant sites. A key feature behind Walter is a new property called Parallel Snapshot Isolation (PSI). PSI allows Walter to replicate data asynchronously, while providing strong guarantees within each site. PSI precludes write-write conflicts, so that developers need not worry about conflict-resolution logic. To prevent write-write conflicts and implement PSI, Walter uses two new and simple techniques: preferred sites and counting sets. We use Walter to build a social networking application and port a Twitter-like application. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> Applications <s> Client-side logic and storage are increasingly used in web and mobile applications to improve response time and availability. Current approaches tend to be ad-hoc and poorly integrated with the server-side logic. We present a principled approach to integrate client-and server-side storage. We support both mergeable and strongly consistent transactions that target either client or server replicas and provide access to causally-consistent snapshots efficiently. In the presence of infrastructure faults, a client-assisted failover solution allows client execution to resume immediately and seamlessly access consistent snapshots without waiting. We implement this approach in SwiftCloud, the first transactional system to bring geo-replication all the way to the client machine. Example applications show that our programming model is useful across a range of application areas. Our experimental evaluation shows that SwiftCloud provides better fault tolerance and at the same time can improve both latency and throughput by up to an order of magnitude, compared to classical geo-replication techniques. <s> BIB002 </s> Conflict-free Replicated Data Types: An Overview <s> Applications <s> Developers of cloud-scale applications face a difficult decision of which kind of storage to use, summarised by the CAP theorem. Currently the choice is between classical CP databases, which provide strong guarantees but are slow, expensive, and unavailable under partition, and NoSQL-style AP databases, which are fast and available, but too hard to program against. We present an alternative: Cure provides the highest level of guarantees that remains compatible with availability. These guarantees include: causal consistency (no ordering anomalies), atomicity (consistent multi-key updates), and support for high-level data types (developer friendly API) with safe resolution of concurrent updates (guaranteeing convergence). These guarantees minimise the anomalies caused by parallelism and distribution, thus facilitating the development of applications. This paper presents the protocols for highly available transactions, and an experimental evaluation showing that Cure is able to achieve scalability similar to eventually-consistent NoSQL databases, while providing stronger guarantees. <s> BIB003
CRDTs have been used in a large number of distributed systems and applications that adopt weak consistency models. The adoption of CRDTs simplifies the development of these systems and applications, as CRDTs guarantee that replicas converge to the same state when all updates are propagated to all replicas. We can group the systems and applications that use CRDTs into two groups: storage systems that provide CRDTs as their data model; and applications that embed CRDTs to maintain their internal data. CRDTs have been integrated in several storage systems that make them available to applications. An application uses these CRDTs to store its data, with the storage system being responsible for synchronizing the multiple replicas. The following commercial systems use CRDTs: Riak 11 , Redis and Akka 12 . A number of research prototypes have also used CRDTs, including Walter BIB001 , SwiftCloud BIB002 and AntidoteDB 13 BIB003 . CRDTs have also been embedded in multiple applications. In this case, developers either used one of the available CRDT libraries, implemented some previously proposed design themselves, or designed new CRDTs to meet their specific requirements. An example of this latter use is Roshi 14 , a LWW-element-set CRDT used for maintaining an index in the SoundCloud stream.
Conflict-free Replicated Data Types: An Overview <s> CRDTs and transactions <s> We describe the design and implementation of Walter, a key-value store that supports transactions and replicates data across distant sites. A key feature behind Walter is a new property called Parallel Snapshot Isolation (PSI). PSI allows Walter to replicate data asynchronously, while providing strong guarantees within each site. PSI precludes write-write conflicts, so that developers need not worry about conflict-resolution logic. To prevent write-write conflicts and implement PSI, Walter uses two new and simple techniques: preferred sites and counting sets. We use Walter to build a social networking application and port a Twitter-like application. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> CRDTs and transactions <s> Client-side logic and storage are increasingly used in web and mobile applications to improve response time and availability. Current approaches tend to be ad-hoc and poorly integrated with the server-side logic. We present a principled approach to integrate client-and server-side storage. We support both mergeable and strongly consistent transactions that target either client or server replicas and provide access to causally-consistent snapshots efficiently. In the presence of infrastructure faults, a client-assisted failover solution allows client execution to resume immediately and seamlessly access consistent snapshots without waiting. We implement this approach in SwiftCloud, the first transactional system to bring geo-replication all the way to the client machine. Example applications show that our programming model is useful across a range of application areas. Our experimental evaluation shows that SwiftCloud provides better fault tolerance and at the same time can improve both latency and throughput by up to an order of magnitude, compared to classical geo-replication techniques. <s> BIB002 </s> Conflict-free Replicated Data Types: An Overview <s> CRDTs and transactions <s> Online services often use replication for improving the performance of user-facing services. However, using replication for performance comes at a price of weakening the consistency levels of the replicated service. To address this tension, recent proposals from academia and industry allow operations to run at different consistency levels. In these systems, the programmer has to decide which level to use for each operation. We present SIEVE, a tool that relieves Java programmers from this errorprone decision process, allowing applications to automatically extract good performance when possible, while resorting to strong consistency whenever required by the target semantics. Taking as input a set of application-specific invariants and a few annotations about merge semantics, SIEVE performs a combination of static and dynamic analysis, offline and at runtime, to determine when it is necessary to use strong consistency to preserve these invariants and when it is safe to use causally consistent commutative replicated data types (CRDTs). We evaluate SIEVE on two web applications and show that the automatic classification overhead is low. <s> BIB003 </s> Conflict-free Replicated Data Types: An Overview <s> CRDTs and transactions <s> Multicore main-memory database performance can collapse when many transactions contend on the same data. Contending transactions are executed serially--either by locks or by optimistic concurrency control aborts--in order to ensure that they have serializable effects. 
This leaves many cores idle and performance poor. We introduce a new concurrency control technique, phase reconciliation, that solves this problem for many important workloads. Doppel, our phase reconciliation database, repeatedly cycles through joined, split, and reconciliation phases. Joined phases use traditional concurrency control and allow any transaction to execute. When workload contention causes unnecessary serial execution, Doppel switches to a split phase. There, updates to contended items modify per-core state, and thus proceed in parallel on different cores. Not all transactions can execute in a split phase; for example, all modifications to a contended item must commute. A reconciliation phase merges these per-core states into the global store, producing a complete database ready for joined-phase transactions. A key aspect of this design is determining which items to split, and which operations to allow on split items. ::: ::: Phase reconciliation helps most when there are many updates to a few popular database records. Its throughput is up to 38x higher than conventional concurrency control protocols on microbenchmarks, and up to 3x on a larger application, at the cost of increased latency for some transactions. <s> BIB004 </s> Conflict-free Replicated Data Types: An Overview <s> CRDTs and transactions <s> Developers of cloud-scale applications face a difficult decision of which kind of storage to use, summarised by the CAP theorem. Currently the choice is between classical CP databases, which provide strong guarantees but are slow, expensive, and unavailable under partition, and NoSQL-style AP databases, which are fast and available, but too hard to program against. We present an alternative: Cure provides the highest level of guarantees that remains compatible with availability. These guarantees include: causal consistency (no ordering anomalies), atomicity (consistent multi-key updates), and support for high-level data types (developer friendly API) with safe resolution of concurrent updates (guaranteeing convergence). These guarantees minimise the anomalies caused by parallelism and distribution, thus facilitating the development of applications. This paper presents the protocols for highly available transactions, and an experimental evaluation showing that Cure is able to achieve scalability similar to eventually-consistent NoSQL databases, while providing stronger guarantees. <s> BIB005
As mentioned in Section 2.3.3, some database systems include support for weak forms of transactions that execute queries and updates on CRDTs. A number of transactional protocols have been proposed in the literature BIB002 BIB005 BIB001 . The use of CRDTs in these transactional protocols raises two main challenges. First, all updates executed by concurrent transactions need to be considered to define the final state of each CRDT. The protocols implemented in most systems BIB002 BIB005 BIB001 BIB003 use operation-based CRDTs, with all updates being replayed in all replicas. Second, in some geo-replicated systems BIB002 BIB005 , it is possible that new updates are received from remote replicas while a local transaction is running. To allow transactions to access a consistent snapshot, the system must be able to maintain multiple versions of each object, as sketched below. Some of these database systems BIB001 BIB003 also provide support for strong forms of transactions by executing protocols that guarantee that these transactions execute in a serial order. Doppel BIB004 provides serializability by combining a multi-phase algorithm, in which some operations are only allowed in specific phases, with constructs similar to CRDTs that allow multiple transactions to access the same objects concurrently.
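The following Python fragment is an illustrative sketch (not any specific system's protocol) of the multi-versioning idea mentioned above: each object keeps a list of timestamped versions so that a transaction reads against a fixed snapshot timestamp even while newer remote updates are installed. Timestamps are simplified to integers.

    # Illustrative multi-version object for snapshot reads under concurrent updates.

    class MultiVersionObject:
        def __init__(self, initial, ts=0):
            self.versions = [(ts, initial)]      # list of (commit_ts, value)

        def commit(self, value, ts):
            # Install a new version produced by a local or remote update.
            self.versions.append((ts, value))
            self.versions.sort(key=lambda v: v[0])

        def read(self, snapshot_ts):
            # Return the newest version not after the transaction's snapshot.
            visible = [v for t, v in self.versions if t <= snapshot_ts]
            return visible[-1]

    if __name__ == "__main__":
        obj = MultiVersionObject(initial=0)
        tx_snapshot = 1
        obj.commit(10, ts=1)
        obj.commit(25, ts=2)                     # remote update arriving later
        assert obj.read(tx_snapshot) == 10       # the transaction keeps its snapshot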
Conflict-free Replicated Data Types: An Overview <s> Support for large objects <s> Geo-replicated, distributed data stores that support complex online applications, such as social networks, must provide an "always-on" experience where operations always complete with low latency. Today's systems often sacrifice strong consistency to achieve these goals, exposing inconsistencies to their clients and necessitating complex application logic. In this paper, we identify and define a consistency model---causal consistency with convergent conflict handling, or causal+---that is the strongest achieved under these constraints. We present the design and implementation of COPS, a key-value store that delivers this consistency model across the wide-area. A key contribution of COPS is its scalability, which can enforce causal dependencies between keys stored across an entire cluster, rather than a single server like previous systems. The central approach in COPS is tracking and explicitly checking whether causal dependencies between keys are satisfied in the local cluster before exposing writes. Further, in COPS-GT, we introduce get transactions in order to obtain a consistent view of multiple keys without locking or blocking. Our evaluation shows that COPS completes operations in less than a millisecond, provides throughput similar to previous systems when using one server per cluster, and scales well as we increase the number of servers in each cluster. It also shows that COPS-GT provides similar latency, throughput, and scaling to COPS for common workloads. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> Support for large objects <s> This paper proposes a Geo-distributed key-value datastore, named ChainReaction, that offers causal+ consistency, with high performance, fault-tolerance, and scalability. ChainReaction enforces causal+ consistency which is stronger than eventual consistency by leveraging on a new variant of chain replication. We have experimentally evaluated the benefits of our approach by running the Yahoo! Cloud Serving Benchmark. Experimental results show that ChainReaction has better performance in read intensive workloads while offering competitive performance for other workloads. Also we show that our solution requires less metadata when compared with previous work. <s> BIB002 </s> Conflict-free Replicated Data Types: An Overview <s> Support for large objects <s> Client-side logic and storage are increasingly used in web and mobile applications to improve response time and availability. Current approaches tend to be ad-hoc and poorly integrated with the server-side logic. We present a principled approach to integrate client-and server-side storage. We support both mergeable and strongly consistent transactions that target either client or server replicas and provide access to causally-consistent snapshots efficiently. In the presence of infrastructure faults, a client-assisted failover solution allows client execution to resume immediately and seamlessly access consistent snapshots without waiting. We implement this approach in SwiftCloud, the first transactional system to bring geo-replication all the way to the client machine. Example applications show that our programming model is useful across a range of application areas. Our experimental evaluation shows that SwiftCloud provides better fault tolerance and at the same time can improve both latency and throughput by up to an order of magnitude, compared to classical geo-replication techniques. 
<s> BIB003 </s> Conflict-free Replicated Data Types: An Overview <s> Support for large objects <s> The Riak DT library [2] provides a composable, convergent replicated dictionary called the Riak DT map, designed for use in the Riak [1] replicated data store. This data type provides the ability for the composition of conflict-free replicated data types (CRDT) [7] through embedding. Composition by embedding works well when the total object size of the composed CRDTs is small, however suffers a performance penalty as object size increases. The root of this problem is based in how replication is achieved in the Riak data store using Erlang distribution. [4] We propose a solution for providing an alternative composition mechanism, composition by reference, which provides support for arbitrarily large objects while ensuring predictable performance and high availability. We explore the use of this new composition mechanism by examining a common use case for the Riak data store. <s> BIB004 </s> Conflict-free Replicated Data Types: An Overview <s> Support for large objects <s> Designers of large user-oriented distributed applications, such as social networks and mobile applications, have adopted measures to improve the responsiveness of their applications. Latency is a major concern as people are very sensitive to it. Geo-replication is a commonly used mechanism to bring the data closer to clients. Nevertheless, reaching the closest datacenter can still be considerably slow. Thus, in order to further reduce the access latency, mobile and web applications may be forced to replicate data at the client-side. Unfortunately, fully replicating large data structures may still be a waste of resources, specially for thin-clients. We propose a replication mechanism built upon conflict-free replicated data types (CRDT) to seamlessly replicate parts of large data structures. The mechanism is transparent to developers and gives improvements without increasing application complexity. We define partial replication and give an approach to keep the strong eventual consistency properties of CRDTs with partial replicas. We integrate our mechanism into SwiftCloud, a transactional system that brings geo-replication to clients. We evaluate the solution with a content-sharing application. Our results show improvements in bandwidth, memory, and latency over both classical geo-replication and the existing SwiftCloud solution. <s> BIB005
Creating CRDTs that can be used for storing complex application data may simplify application development, but it can lead to performance problems as the system needs to handle these potentially large data objects. This problem occurs both in the servers, as a small update to a large object may lead to loading and storing large amounts of data from disk, and when transferring CRDTs to clients, as large objects may be transferred when only a part of the data is necessary. To address this problem, for objects that are compositions of other objects, instead of having a single object that embeds all other objects, it is possible to maintain each object separately and use references to link the objects BIB004 - e.g. in a map, an object associated with a key would be stored separately and the map would have only a reference to it. To guarantee the correctness of replication for application updates that span multiple objects, the system may need to guarantee that updates are executed according to some specific ordering or atomically, depending on the operations. For example, when associating a new object with a key, it is necessary that the new object is created in all replicas before adding the reference to it. This can be achieved by resorting to causal consistency BIB001 BIB002 BIB003 or to atomic updates (as discussed in Section 3.3.1). Sometimes objects are large because they hold large amounts of information - e.g. a set with a very large number of elements. In this case, the same performance problems may arise. A possible solution to address this problem is to split the object into multiple sub-objects. For example, a set can be implemented by maintaining multiple subsets with disjoint information and a simple root object that only keeps references to these subsets. When adding, removing or querying for an element, the operation should be forwarded to the subset that holds the given element, as the sketch below illustrates. This approach has been adopted in the Riak database for efficiently supporting sets and maps with large amounts of information. Briquemont et al. BIB005 have proposed a principled approach to define partial replicas adopting this idea.
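A minimal, illustrative Python sketch of the splitting approach (ours, not the Riak implementation): a root object routes each element to one of several disjoint sub-sets by hashing, so an operation only touches, and only needs to synchronize, a single small sub-object. Plain Python sets stand in for the sub-set CRDTs.

    import hashlib

    class ShardedSet:
        def __init__(self, num_shards=16):
            # In a real store each shard would be a separate set CRDT object,
            # referenced by key; here plain sets stand in for them.
            self.shards = [set() for _ in range(num_shards)]

        def _shard_for(self, element):
            h = int(hashlib.sha1(element.encode()).hexdigest(), 16)
            return self.shards[h % len(self.shards)]

        def add(self, element):
            self._shard_for(element).add(element)

        def remove(self, element):
            self._shard_for(element).discard(element)

        def lookup(self, element):
            return element in self._shard_for(element)

    if __name__ == "__main__":
        s = ShardedSet()
        s.add("alice")
        assert s.lookup("alice") and not s.lookup("bob")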
Conflict-free Replicated Data Types: An Overview <s> Techniques for designing CRDTs <s> Eventual consistency aims to ensure that replicas of some mutable shared object converge without foreground synchronisation. Previous approaches to eventual consistency are ad-hoc and error-prone. We study a principled approach: to base the design of shared data types on some simple formal conditions that are sufficient to guarantee eventual consistency. We call these types Convergent or Commutative Replicated Data Types (CRDTs). This paper formalises asynchronous object replication, either state based or operation based, and provides a sufficient condition appropriate for each case. It describes several useful CRDTs, including container data types supporting both \add and \remove operations with clean semantics, and more complex types such as graphs, montonic DAGs, and sequences. It discusses some properties needed to implement non-trivial CRDTs. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> Techniques for designing CRDTs <s> Geographically distributed systems often rely on replicated eventually consistent data stores to achieve availability and performance. To resolve conflicting updates at different replicas, researchers and practitioners have proposed specialized consistency protocols, called replicated data types, that implement objects such as registers, counters, sets or lists. Reasoning about replicated data types has however not been on par with comparable work on abstract data types and concurrent data types, lacking specifications, correctness proofs, and optimality results. To fill in this gap, we propose a framework for specifying replicated data types using relations over events and verifying their implementations using replication-aware simulations. We apply it to 7 existing implementations of 4 data types with nontrivial conflict-resolution strategies and optimizations (last-writer-wins register, counter, multi-value register and observed-remove set). We also present a novel technique for obtaining lower bounds on the worst-case space overhead of data type implementations and use it to prove optimality of 4 implementations. Finally, we show how to specify consistency of replicated stores with multiple objects axiomatically, in analogy to prior work on weak memory models. Overall, our work provides foundational reasoning tools to support research on replicated eventually consistent stores. <s> BIB002
As shown by Burckhardt et al. BIB002 , a CRDT can be specified by relying on: (i) the full history of updates executed; (ii) the happens-before relation among updates; and (iii) an arbitration relation among updates (when necessary). A query can be specified as a function that uses this information and the values of the parameters to compute the result. Given this, it is possible to implement a CRDT by just storing this information and synchronizing replicas using either a state-based or an operation-based approach. However, this implementation would be rather inefficient. The goal of an actual implementation is to minimize the data that needs to be stored in each replica and the data that needs to be transferred for synchronizing replicas. The goal of this section is not to provide an exhaustive catalog of CRDT implementations, but only to introduce and explain the main techniques that have been used to design CRDTs, giving examples of their use. Commutativity and associativity: For some CRDTs, the defined updates are intrinsically commutative and associative. In this case, defining an operation-based CRDT is straightforward, as each replica only needs to maintain the value of the CRDT, and updates simply modify the value of the replica. Algorithm 1 exemplifies the case with an operation-based counter. The state is just an integer with the current value of the counter. The increment update receives as parameter the value to increment (which can be negative) and adds it to the value of the counter in the effector operation.

Algorithm 1 Operation-based Counter CRDT (adapted from BIB001 )
    ⊲ val: integer with the current value of the counter
    update inc (int n)
        generator (int n)
            return (inc, [n])    ⊲ Operation name and parameters for effector
        effector (int n)
            val := val + n

A state-based implementation can rely on the commutativity and associativity properties to compute, for each replica, a partial value, and aggregate the partial values into the global value. Algorithm 2 presents a state-based counter CRDT. The state of the CRDT consists of two associative arrays, one with the sum of positive values added and the other with the sum of negative values added in each replica. For each replica, these arrays maintain a partial result that reflects the updates submitted in that replica - this is only possible because the function being computed is associative. Unlike the operation-based implementation, it is necessary to keep the positive and negative values added in separate arrays to guarantee that the counter respects the properties for being a state-based CRDT. This is necessary because, when merging, it is necessary to keep, for each entry in the array, the most recent value. By keeping two arrays, it is known that the most recent value for a given entry is the one with the largest absolute value, as the absolute value for an entry never decreases. This would not be the case if a single array was used, as the value of an entry could increase or decrease as the result of executing an inc with a positive or negative value.

Algorithm 2 State-based Counter CRDT (adapted from BIB001 )
    update inc (int n)
        ...
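To make the state-based design concrete, here is an illustrative Python sketch that follows the prose description above; it is our rendering, not the original Algorithm 2 listing. Positive and negative contributions are kept in separate per-replica maps and merged entry-wise by maximum.

    # Illustrative state-based counter: per-replica positive and negative sums,
    # merged entry-wise by maximum (each entry only grows in absolute value).

    class StateBasedCounter:
        def __init__(self, replica_id):
            self.rid = replica_id
            self.pos = {}    # per-replica sum of positive increments
            self.neg = {}    # per-replica sum of (absolute) negative increments

        def value(self):
            return sum(self.pos.values()) - sum(self.neg.values())

        def inc(self, n):
            if n >= 0:
                self.pos[self.rid] = self.pos.get(self.rid, 0) + n
            else:
                self.neg[self.rid] = self.neg.get(self.rid, 0) - n

        def merge(self, other):
            # Entry-wise maximum is correct because each entry never decreases.
            for r, v in other.pos.items():
                self.pos[r] = max(self.pos.get(r, 0), v)
            for r, v in other.neg.items():
                self.neg[r] = max(self.neg.get(r, 0), v)

    if __name__ == "__main__":
        a, b = StateBasedCounter("A"), StateBasedCounter("B")
        a.inc(5); b.inc(-2)
        a.merge(b); b.merge(a)
        assert a.value() == b.value() == 3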
Conflict-free Replicated Data Types: An Overview <s> 12: <s> The design and correctness of a communication facility for a distributed computer system are reported on. The facility provides support for fault-tolerant process groups in the form of a family of reliable multicast protocols that can be used in both local- and wide-area networks. These protocols attain high levels of concurrency, while respecting application-specific delivery ordering constraints, and have varying cost and performance that depend on the degree of ordering desired. In particular, a protocol that enforces causal delivery orderings is introduced and shown to be a valuable alternative to conventional asynchronous communication protocols. The facility also ensures that the processes belonging to a fault-tolerant process group will observe consistent orderings of events affecting the group as a whole, including process failures, recoveries, migration, and dynamic changes to group properties like member rankings. A review of several uses for the protocols in the ISIS system, which supports fault-tolerant resilient objects and bulletin boards, illustrates the significant simplification of higher level algorithms made possible by our approach. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> 12: <s> Traditional protocols for distributed database management have a high message overhead; restrain or lock access to resources during protocol execution; and may become impractical for some scenarios like real-time systems and very large distributed databases. In this article, we present the demarcation protocol; it overcomes these problems by using explicit consistency constraints as the correctness criteria. The method establishes safe limits as "lines drawn in the sand" for updates, and makes it possible to change these limits dynamically, enforcing the constraints at all times. We show how this technique can be applied to linear arithmetic, existential, key, and approximate copy constraints. <s> BIB002
Summaries of unique identifiers: For the design of state-based CRDTs, the life-cycle of unique identifiers poses a problem: it is impossible to distinguish between the not-yet-created and the deleted states. Thus, it is not clear what to do when merging two replicas that differ only by the fact that one includes information associated with a unique identifier and the other does not include such information. If the unique identifier was already deleted, the latter is the most recent version; if not, the most recent version would be the former. To address this issue, some initial state-based CRDT designs recorded deleted unique identifiers as tombstones. A more ingenious approach was proposed by Bieniusa et al. . The key idea is to keep an efficient summary of the unique identifiers observed in each replica. With this summary, when a replica does not maintain information for a unique identifier but that identifier is in the summary of known identifiers, then it is because the information for that unique identifier has been deleted. This leaves the question of how to create unique identifiers that can be efficiently summarized. An easy solution is to use a pair (timestamp, replica identifier) and summarize these pairs in a vector clock BIB001 . Algorithm 5 presents the design of a state-based add-wins set that adopts this approach. The summary of unique identifiers is stored in the vector clock vv, where each entry has the largest timestamp known for the given replica (a value of n_r for replica r means that the unique identifiers (1, r), ..., (n_r, r) have been observed). In a given replica, a new unique identifier is created by incrementing the last generated timestamp and using the new value in combination with the replica identifier. The add operation generates a new unique identifier and records a pair that associates it with the added element. The remove operation removes all pairs for the removed element. The merge function computes the merged state by taking the union of the pairs in each of the replicas that have not been deleted in the other replica - a pair is considered deleted if it is present in one replica but absent from the other, while its unique identifier is already reflected in the vv of the other replica.
Conflict-free Replicated Data Types: An Overview <s> Algorithm 5 <s> A Commutative Replicated Data Type (CRDT) is one where all concurrent operations commute. The replicas of a CRDT converge automatically, without complex concurrency control. This paper describes Treedoc, a novel CRDT design for cooperative text editing. An essential property is that the identifiers of Treedoc atoms are selected from a dense space. We discuss practical alternatives for implementing the identifier space based on an extended binary tree. We also discuss storage alternatives for data and meta-data, and mechanisms for compacting the tree. In the best case, Treedoc incurs no overhead with respect to a linear text buffer. We validate the results with traces from existing edit histories. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> Algorithm 5 <s> Massive collaborative editing becomes a reality through leading projects such as Wikipedia. This massive collaboration is currently supported with a costly central service. In order to avoid such costs, we aim to provide a peer-to-peer collaborative editing system. Existing approaches to build distributed collaborative editing systems either do not scale in terms of number of users or in terms of number of edits. We present the Logoot approach that scales in these both dimensions while ensuring causality, consistency and intention preservation criteria. We evaluate the Logoot approach and compare it to others using a corpus of all the edits applied on a set of the most edited and the biggest pages of Wikipedia. <s> BIB002
Algorithm 5 State-based Add-wins Set CRDT (adapted from )
    ⊲ S: set of pairs (element e, (timestamp t, rep_id r)); initial value: ∅
    ⊲ vv: summary of observed unique ids; for any id, the initial value is 0
    query lookup (element e) : boolean
        return (∃(e, u) ∈ S)
    update add (element e)
        let r = getReplicaID ()
        ...
    merge (X, Y)
        ⊲ Elements that are in X, but that have been deleted in Y
        let RX := {(e, (t, r)) ∈ X.S | (e, (t, r)) ∉ Y.S ∧ Y.vv[r] ≥ t}
        ⊲ Elements that are in Y, but that have been deleted in X
        let RY := {(e, (t, r)) ∈ Y.S | (e, (t, r)) ∉ X.S ∧ X.vv[r] ≥ t}
        ...

Dense space for unique identifiers: In the list CRDT discussed in Section 2.1, an element is inserted between two other elements. Some list CRDT designs BIB001 BIB002 rely on unique identifiers that encode the relative position among elements. Thus, to add a new element between elements prv and nxt, it is necessary to generate a unique identifier that will be ordered between the unique identifiers of the elements prv and nxt. This requires being able to always create a unique identifier between two existing unique identifiers, i.e., to have a dense space of unique identifiers. To this end, for example, Treedoc BIB001 uses a tree, where it is always possible to insert an element between two other elements when considering an infix traversal - the unique identifier is constructed by combining the path in the tree with a pair (timestamp, replica identifier) that makes it unique. We note that the designs presented for the add-wins set can be used as the basis for creating a list CRDT. For example, to use Treedoc unique identifiers, only the following changes would be necessary. First, replace the function that creates a new unique identifier with the Treedoc function that creates a new unique identifier. We note that in the state-based design that includes a summary of the unique identifiers observed, the pair (timestamp, replica identifier) is what needs to be recorded. Second, create a function that returns the data ordered by the unique identifier.
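The following Python sketch illustrates the idea of a dense identifier space. It is only illustrative: it uses rational midpoints rather than the actual Treedoc or Logoot encodings, and appends a (counter, replica_id) pair to make concurrently generated identifiers unique.

    # Illustrative dense identifier space: positions are rationals, so a new
    # position can always be generated strictly between two existing ones.

    from fractions import Fraction

    def new_position(prv, nxt):
        # Midpoint between the previous and next positions.
        return (prv + nxt) / 2

    def new_identifier(prv_pos, nxt_pos, counter, replica_id):
        # Real designs (Treedoc, Logoot) use tree paths or digit sequences
        # instead of rationals, but the dense-space idea is the same.
        return (new_position(prv_pos, nxt_pos), counter, replica_id)

    if __name__ == "__main__":
        first, last = Fraction(0), Fraction(1)
        a = new_identifier(first, last, 1, "A")      # insert between the sentinels
        b = new_identifier(first, a[0], 2, "A")      # insert before a
        assert first < b[0] < a[0] < last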
Conflict-free Replicated Data Types: An Overview <s> Space complexity of CRDTs <s> Many distributed systems for wide-area networks can be built conveniently, and operate efficiently and correctly, using a weak consistency group communication mechanism. This mechanism organizes a set of principals into a single logical entity, and provides methods to multicast messages to the members. A weak consistency distributed system allows the principals in the group to differ on the value of shared state at any given instant, as long as they will eventually converge to a single, consistent value. A group containing many principals and using weak consistency can provide the reliability, performance, and scalability necessary for wide-area systems. ::: I have developed a framework for constructing group communication systems, for classifying existing distributed system tools, and for constructing and reasoning about a particular group communication model. It has four components: message delivery, message ordering, group membership, and the application. Each component may have a different implementation, so that the group mechanism can be tailored to application requirements. ::: The framework supports a new message delivery protocol, called timestamped anti-entropy, which provides reliable, eventual message delivery; is efficient; and tolerates most transient processor and network failures. It can be combined with message ordering implementations that provide ordering guarantees ranging from unordered to total, causal delivery. A new group membership protocol completes the set, providing temporarily inconsistent membership views resilient to up to k simultaneous principal failures. ::: The Refdbms distributed bibliographic database system, which has been constructed using this framework, is used as an example. Refdbms databases can be replicated on many different sites, using the group communication system described here. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> Space complexity of CRDTs <s> Conflicts naturally arise in optimistically replicated systems. The common way to detect update conflicts is via version vectors, whose storage and communication overhead are number of replicas × number of objects. These costs may be prohibitive for large systems. This paper presents predecessor vectors with exceptions (PVEs), a novel optimistic replication technique developed for Microsoft’s WinFS system. The paper contains a systematic study of PVE’s performance gains over traditional schemes. The results demonstrate a dramatic reduction of storage and communication overhead in normal scenarios, during which communication disruptions are infrequent. Moreover, they identify a cross-over threshold in communication failure-rate, beyond which PVEs loses efficiency compared with traditional schemes. <s> BIB002 </s> Conflict-free Replicated Data Types: An Overview <s> Space complexity of CRDTs <s> Client-side logic and storage are increasingly used in web and mobile applications to improve response time and availability. Current approaches tend to be ad-hoc and poorly integrated with the server-side logic. We present a principled approach to integrate client-and server-side storage. We support both mergeable and strongly consistent transactions that target either client or server replicas and provide access to causally-consistent snapshots efficiently. 
In the presence of infrastructure faults, a client-assisted failover solution allows client execution to resume immediately and seamlessly access consistent snapshots without waiting. We implement this approach in SwiftCloud, the first transactional system to bring geo-replication all the way to the client machine. Example applications show that our programming model is useful across a range of application areas. Our experimental evaluation shows that SwiftCloud provides better fault tolerance and at the same time can improve both latency and throughput by up to an order of magnitude, compared to classical geo-replication techniques. <s> BIB003 </s> Conflict-free Replicated Data Types: An Overview <s> Space complexity of CRDTs <s> To achieve high availability in the face of network partitions, many distributed databases adopt eventual consistency, allow temporary conflicts due to concurrent writes, and use some form of per-key logical clock to detect and resolve such conflicts. Furthermore, nodes synchronize periodically to ensure replica convergence in a process called anti-entropy, normally using Merkle Trees. We present the design of DottedDB, a Dynamo-like key-value store, which uses a novel node-wide logical clock framework, overcoming three fundamental limitations of the state of the art: (1) minimize the metadata per key necessary to track causality, avoiding its growth even in the face of node churn; (2) correctly and durably delete keys, with no need for tombstones; (3) offer a lightweight anti-entropy mechanism to converge replicated data, avoiding the need for Merkle Trees. We evaluate DottedDB against MerkleDB, an otherwise identical database, but using per-key logical clocks and Merkle Trees for anti-entropy, to precisely measure the impact of the novel approach. Results show that: causality metadata per object always converges rapidly to only one id-counter pair; distributed deletes are correctly achieved without global coordination and with constant metadata; divergent nodes are synchronized faster, with less memory-footprint and with less communication overhead than using Merkle Trees. <s> BIB004
The space complexity of a CRDT depends on the data stored at each moment, on the data structures used and on the metadata stored. Some old CRDT designs (mostly for state-based synchronization) stored tombstones for removed data, which could make the amount of data stored larger (and sometimes much larger) than the actual data. Modern designs avoid this overhead, but still typically store metadata that grows linearly with the number of replicas for tracking concurrency and causal predecessors. For example, Algorithm 5 shows a state-based add-wins set CRDT that includes a vector clock with complexity O(n), with n the number of replicas. In this example, for each element in the set, at least one unique identifier is present. Although these unique identifiers do not influence the space complexity of the CRDT (as they are of constant size), they represent an important overhead for the state of the object (storage and communication). To reduce the size of this metadata, possible solutions include more compact causality representations when multiple replicas are synchronized among the same nodes BIB002 BIB003 BIB004 . A complementary approach proposed by Baquero et al. is to discard metadata after the relevant updates become stable. Although the proposal has only been used in the context of pure operation-based synchronization and required close integration with the underlying synchronization subsystem, we believe the idea could be adapted to other synchronization models by relying on mechanisms for establishing the stability of updates in weakly consistent systems BIB001 .
Conflict-free Replicated Data Types: An Overview <s> Reversible computation <s> Peer-to-peer systems provide scalable content distribution for cheap and resist to censorship attempts. However, P2P networks mainly distribute immutable content and provide poor support for highly dynamic content such as produced by collaborative systems. A new class of algorithms called CRDT (Commutative Replicated Data Type), which ensures consistency of highly dynamic content on P2P networks, is emerging. However, if existing CRDT algorithms support the "edit anywhere, anytime” feature, they do not support the "undo anywhere, anytime” feature. In this paper, we present the Logoot-Undo CRDT algorithm, which integrates the "undo anywhere, anytime” feature. We compare the performance of the proposed algorithm with related algorithms and measure the impact of the undo feature on the global performance of the algorithm. We prove that the cost of the undo feature remains low on a corpus of data extracted from Wikipedia. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> Reversible computation <s> Client-side logic and storage are increasingly used in web and mobile applications to improve response time and availability. Current approaches tend to be ad-hoc and poorly integrated with the server-side logic. We present a principled approach to integrate client-and server-side storage. We support both mergeable and strongly consistent transactions that target either client or server replicas and provide access to causally-consistent snapshots efficiently. In the presence of infrastructure faults, a client-assisted failover solution allows client execution to resume immediately and seamlessly access consistent snapshots without waiting. We implement this approach in SwiftCloud, the first transactional system to bring geo-replication all the way to the client machine. Example applications show that our programming model is useful across a range of application areas. Our experimental evaluation shows that SwiftCloud provides better fault tolerance and at the same time can improve both latency and throughput by up to an order of magnitude, compared to classical geo-replication techniques. <s> BIB002
Non-trivial Internet services require the composition of multiple sub-systems that provide storage, data dissemination, event notification, monitoring and other needed components. When composing sub-systems that can fail independently or simply reject some operations, it is useful to provide a CRDT interface that undoes previously accepted operations. Another scenario that would benefit from undo is collaborative editing of shared documents, where undo is typically a feature available to users. Undoing an increment on a counter CRDT can be achieved by a decrement, as sketched below. Logoot-Undo BIB001 proposes a solution for undoing (and redoing) operations for a sequence CRDT used for collaborative editing. The implementation of SwiftCloud BIB002 includes CRDT designs that allow accessing an old value of the CRDT. However, providing a uniform approach to undoing (reversing) operations over the whole CRDT catalog is still an open research direction. The support of undo is also likely to limit the level of compression that can be applied to CRDT metadata.
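As an illustration of the simple case mentioned above (undoing an increment with a decrement), the sketch below shows an operation-based counter that remembers the effectors it issued so that it can later emit the inverse effector. This is a hypothetical example of ours, not the Logoot-Undo or SwiftCloud mechanism.

    # Illustrative undoable operation-based counter: the inverse of inc(n) is
    # inc(-n), propagated like any other effector.

    import uuid

    class UndoableCounter:
        def __init__(self):
            self.value = 0
            self.issued = {}                      # op_id -> amount, for local undo

        def generate_inc(self, n):
            op_id = str(uuid.uuid4())
            self.issued[op_id] = n                # local bookkeeping at the origin
            return (op_id, n)                     # effector: (id, amount)

        def generate_undo(self, op_id):
            # Emit the inverse effector for a previously issued increment.
            return (str(uuid.uuid4()), -self.issued[op_id])

        def apply(self, op):
            _, n = op
            self.value += n

    if __name__ == "__main__":
        a, b = UndoableCounter(), UndoableCounter()
        op = a.generate_inc(4)
        for r in (a, b): r.apply(op)
        undo = a.generate_undo(op[0])
        for r in (a, b): r.apply(undo)
        assert a.value == b.value == 0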
Conflict-free Replicated Data Types: An Overview <s> Verification <s> Geographically distributed systems often rely on replicated eventually consistent data stores to achieve availability and performance. To resolve conflicting updates at different replicas, researchers and practitioners have proposed specialized consistency protocols, called replicated data types, that implement objects such as registers, counters, sets or lists. Reasoning about replicated data types has however not been on par with comparable work on abstract data types and concurrent data types, lacking specifications, correctness proofs, and optimality results. To fill in this gap, we propose a framework for specifying replicated data types using relations over events and verifying their implementations using replication-aware simulations. We apply it to 7 existing implementations of 4 data types with nontrivial conflict-resolution strategies and optimizations (last-writer-wins register, counter, multi-value register and observed-remove set). We also present a novel technique for obtaining lower bounds on the worst-case space overhead of data type implementations and use it to prove optimality of 4 implementations. Finally, we show how to specify consistency of replicated stores with multiple objects axiomatically, in analogy to prior work on weak memory models. Overall, our work provides foundational reasoning tools to support research on replicated eventually consistent stores. <s> BIB001 </s> Conflict-free Replicated Data Types: An Overview <s> Verification <s> Online services often use replication for improving the performance of user-facing services. However, using replication for performance comes at a price of weakening the consistency levels of the replicated service. To address this tension, recent proposals from academia and industry allow operations to run at different consistency levels. In these systems, the programmer has to decide which level to use for each operation. We present SIEVE, a tool that relieves Java programmers from this errorprone decision process, allowing applications to automatically extract good performance when possible, while resorting to strong consistency whenever required by the target semantics. Taking as input a set of application-specific invariants and a few annotations about merge semantics, SIEVE performs a combination of static and dynamic analysis, offline and at runtime, to determine when it is necessary to use strong consistency to preserve these invariants and when it is safe to use causally consistent commutative replicated data types (CRDTs). We evaluate SIEVE on two web applications and show that the automatic classification overhead is low. <s> BIB002 </s> Conflict-free Replicated Data Types: An Overview <s> Verification <s> Geo-replicated storage systems are at the core of current Internet services. The designers of the replication protocols used by these systems must choose between either supporting low-latency, eventually-consistent operations, or ensuring strong consistency to ease application correctness. We propose an alternative consistency model, Explicit Consistency, that strengthens eventual consistency with a guarantee to preserve specific invariants defined by the applications. Given these application-specific invariants, a system that supports Explicit Consistency identifies which operations would be unsafe under concurrent execution, and allows programmers to select either violation-avoidance or invariant-repair techniques. 
We show how to achieve the former, while allowing operations to complete locally in the common case, by relying on a reservation system that moves coordination off the critical path of operation execution. The latter, in turn, allows operations to execute without restriction, and restore invariants by applying a repair operation to the database state. We present the design and evaluation of Indigo, a middleware that provides Explicit Consistency on top of a causally-consistent data store. Indigo guarantees strong application invariants while providing similar latency to an eventually-consistent system in the common case. <s> BIB003 </s> Conflict-free Replicated Data Types: An Overview <s> Verification <s> Datastores today rely on distribution and replication to achieve improved performance and fault-tolerance. But correctness of many applications depends on strong consistency properties--something that can impose substantial overheads, since it requires coordinating the behavior of multiple nodes. This paper describes a new approach to achieving strong consistency in distributed systems while minimizing communication between nodes. The key insight is to allow the state of the system to be inconsistent during execution, as long as this inconsistency is bounded and does not affect transaction correctness. In contrast to previous work, our approach uses program analysis to extract semantic information about permissible levels of inconsistency and is fully automated. We then employ a novel homeostasis protocol to allow sites to operate independently, without communicating, as long as any inconsistency is governed by appropriate treaties between the nodes. We discuss mechanisms for optimizing treaties based on workload characteristics to minimize communication, as well as a prototype implementation and experiments that demonstrate the benefits of our approach on common transactional benchmarks. <s> BIB004 </s> Conflict-free Replicated Data Types: An Overview <s> Verification <s> Large-scale distributed systems often rely on replicated databases that allow a programmer to request different data consistency guarantees for different operations, and thereby control their performance. Using such databases is far from trivial: requesting stronger consistency in too many places may hurt performance, and requesting it in too few places may violate correctness. To help programmers in this task, we propose the first proof rule for establishing that a particular choice of consistency guarantees for various operations on a replicated database is enough to ensure the preservation of a given data integrity invariant. Our rule is modular: it allows reasoning about the behaviour of every operation separately under some assumption on the behaviour of other operations. This leads to simple reasoning, which we have automated in an SMT-based tool. We present a nontrivial proof of soundness of our rule and illustrate its use on several examples. <s> BIB005 </s> Conflict-free Replicated Data Types: An Overview <s> Verification <s> Data replication is used in distributed systems to maintain up-to-date copies of shared data across multiple computers in a network. However, despite decades of research, algorithms for achieving consistency in replicated systems are still poorly understood. Indeed, many published algorithms have later been shown to be incorrect, even some that were accompanied by supposed mechanised proofs of correctness. 
In this work, we focus on the correctness of Conflict-free Replicated Data Types (CRDTs), a class of algorithm that provides strong eventual consistency guarantees for replicated data. We develop a modular and reusable framework in the Isabelle/HOL interactive proof assistant for verifying the correctness of CRDT algorithms. We avoid correctness issues that have dogged previous mechanised proofs in this area by including a network model in our formalisation, and proving that our theorems hold in all possible network behaviours. Our axiomatic network model is a standard abstraction that accurately reflects the behaviour of real-world computer networks. Moreover, we identify an abstract convergence theorem, a property of order relations, which provides a formal definition of strong eventual consistency. We then obtain the first machine-checked correctness theorems for three concrete CRDTs: the Replicated Growable Array, the Observed-Remove Set, and an Increment-Decrement Counter. We find that our framework is highly reusable, developing proofs of correctness for the latter two CRDTs in a few hours and with relatively little CRDT-specific code. <s> BIB006 </s> Conflict-free Replicated Data Types: An Overview <s> Verification <s> Repliss is a tool for the verification of programs which are built on top of weakly consistent databases. As one part of Repliss, we have built an automated, property based testing engine. It explores executions of a given application with randomized invocations and scheduling while checking for invariant violations. When an invariant is broken, Repliss minimizes the execution and displays a visualization of the minimized failing execution. Our contributions are 1. heuristics used to quickly find invariant violations, 2. a strategy to shrink executions, and 3. integrating a testing approach with the overall Repliss tool. <s> BIB007 </s> Conflict-free Replicated Data Types: An Overview <s> Verification <s> Distributed consistency is perhaps the most-discussed topic in distributed systems today. Coordination protocols can ensure consistency, but in practice they cause undesirable performance unless used judiciously. Scalable distributed architectures avoid coordination whenever possible, but under-coordinated systems can exhibit behavioral anomalies under fault, which are often extremely difficult to debug. This raises significant challenges for distributed system architects and developers. In this article, we present B lazes , a cross-platform program analysis framework that (a) identifies program locations that require coordination to ensure consistent executions, and (b) automatically synthesizes application-specific coordination code that can significantly outperform general-purpose techniques. We present two case studies, one using annotated programs in the Twitter Storm system and another using the Bloom declarative language. <s> BIB008
An important aspect related to the development of distributed systems that use CRDTs is the verification of the correctness of the system. This involves not only verifying the correctness of CRDT designs, but also the correctness of the system that uses them. A number of works have addressed these issues. Regarding the verification of the correctness of CRDTs, several approaches have been taken. The most common approach is to provide proofs when designs are proposed, or to use verification tools such as TLA or Isabelle for the specific data type. There have also been works that propose general techniques for the verification of CRDTs BIB001 BIB006 , which CRDT developers can use to verify the correctness of their designs. Some of these works BIB006 include specific frameworks that help the developer in the verification process. A number of other works have proposed techniques to verify the correctness of distributed systems that run under weak consistency, identifying when coordination is necessary BIB002 BIB003 BIB004 BIB005 BIB007 BIB008 . Some of these works focus on systems that use CRDTs. Sieve BIB002 computes, for each operation, the weakest precondition that guarantees that application invariants hold. At runtime, depending on the operation parameters, the system runs an operation under weak or strong consistency. Other works BIB003 BIB005 BIB007 require the developer to specify the properties that the distributed system must maintain, together with a specification of the operations in the system (often independent of the actual code of the system). As a result, they state which operations cannot execute concurrently. Despite these works, the verification of the correctness of CRDT designs and of systems that use CRDTs, how these verification techniques can be made available to programmers, and how to verify the correctness of implementations, remain open research problems.
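To make concrete the kind of properties these verification efforts establish, the following is a minimal illustrative sketch (in Python, not taken from any of the cited works) of a state-based grow-only counter CRDT: strong eventual consistency follows from the merge function being commutative, associative and idempotent, which is exactly the kind of property that mechanised proofs such as the Isabelle/HOL framework mentioned above check for each design.

```python
# Illustrative sketch only (not from the cited works): a state-based
# grow-only counter (G-Counter) CRDT. Verification frameworks prove that
# merge is commutative, associative and idempotent, which yields
# strong eventual consistency under any message ordering or duplication.

class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}          # replica id -> number of local increments

    def increment(self):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + 1

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Pointwise maximum of the two states: commutative, associative
        # and idempotent, so replicas converge after exchanging state.
        merged = GCounter(self.replica_id)
        for rid in set(self.counts) | set(other.counts):
            merged.counts[rid] = max(self.counts.get(rid, 0),
                                     other.counts.get(rid, 0))
        return merged


# Tiny convergence check: two replicas increment concurrently and end up
# with the same value after merging in either order.
a, b = GCounter("a"), GCounter("b")
a.increment(); a.increment()
b.increment()
assert a.merge(b).value() == b.merge(a).value() == 3
```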
Ordering based decision making - A survey <s> Introduction <s> Abstract This survey points out recent advances in multiple attribute decision making methods dealing with fuzzy or ill-defined information. Fuzzy MAUT as well as fuzzy outranking methods are reviewed. Aggregation procedures, choice problems and treatment of interactive attributes are covered. Trends in research and open problems are indicated. <s> BIB001 </s> Ordering based decision making - A survey <s> Introduction <s> A multiperson decision-making problem, where the information about the alternatives provided by the experts can be presented by means of diAerent preference representation structures (preference orderings, utility functions and multiplicative preference relations) is studied. Assuming the multiplicative preference relation as the uniform element of the preference representation, a multiplicative decision model based on fuzzy majority is presented to choose the best alternatives. In this decision model, several transformation functions are obtained to relate preference orderings and utility functions with multiplicative preference relations. The decision model uses the ordered weighted geometric operator to aggregate information and two choice degrees to rank the alternatives, quantifier guided dominance degree and quantifier guided non-dominance degree. The consistency of the model is analysed to prove that it acts coherently. ” 2001 Elsevier Science B.V. All rights reserved. <s> BIB002 </s> Ordering based decision making - A survey <s> Introduction <s> This paper consists of three parts: 1) some theories and an efficient algorithm for ranking and screening multicriteria alternatives when there exists partial information on the decision maker's preferences; 2) generation of partial information using variety of methods; and 3) the existence of ordinal and cardinal functions based on and strengths of preferences. We demonstrate that strengths of preference concept can be very effectively used to generate the partial information on preferences. We propose axioms for ordinal and cardinal (measurable) value functions. An algorithm is developed for ranking and screening alternatives when there exists partial information about the preferences and the ordering of alternatives. The proposed algorithm obtains the same information very efficiently while by solving one mathematical programming problem many alternatives can be ranked and screened. Several examples are discussed and results of some computational experiments are reported. <s> BIB003 </s> Ordering based decision making - A survey <s> Introduction <s> In many applications, the reliability relation associated with available information is only partially defined, while most of existing uncertainty frameworks deal with totally ordered pieces of knowledge. Partial pre-orders offer more flexibility than total pre-orders to represent incomplete knowledge. Possibilistic logic, which is an extension of classical logic, deals with totally ordered information. It offers a natural qualitative framework for handling uncertain information. Priorities are encoded by means of weighted formulas, where weights are lower bounds of necessity measures. This paper proposes an extension of possibilistic logic for dealing with partially ordered pieces of knowledge. We show that there are two different ways to define a possibilistic logic machinery which both extend the standard one. 
<s> BIB004 </s> Ordering based decision making - A survey <s> Introduction <s> This paper addresses the problem of merging uncertain information in the framework of possibilistic logic. It presents several syntactic combination rules to merge possibilistic knowledge bases, provided by different sources, into a new possibilistic knowledge base. These combination rules are first described at the meta-level outside the language of possibilistic logic. Next, an extension of possibilistic logic, where the combination rules are inside the language, is proposed. A proof system in a sequent form is given, sound and complete with respect to the possibilistic logic semantics. Possibilistic fusion modes are illustrated on an example inspired from an application of a localization of a robot in an uncertain environment. <s> BIB005 </s> Ordering based decision making - A survey <s> Introduction <s> This paper presents a comprehensive overview of currently known applications of computing with words (CWW) in risk assessment. It is largely grouped into the following 5 categories: (1) fuzzy number based risk assessment; (2) fuzzy rule-based risk assessment; (3) fuzzy extension of typical probabilistic risk assessment; (4) ordinal linguistic approach for risk assessment; and (5) miscellaneous applications. In addition, the role of CWW within the broad area of risk assessment is briefly characterized. <s> BIB006 </s> Ordering based decision making - A survey <s> Introduction <s> This book focuses on systematic and intelligent information processing theories and methods based on linguistic values. The main contents include topics such as the 2-tuple model of linguistic values, 2-tuple linguistic aggregation operator hedge algebras, complete hedge algebras and linguistic value reasoning, reasoning with linguistic quantifiers, linguistic reasoning based on a random set, the fuzzy number model of linguistic values, linguistic value aggregation based on the fuzzy number model, extraction of linguistic proposition from a database and handling of linguistic information on the Internet, linguistic truth value lattice implication algebras, linguistic truth value resolution automatic reasoning, linguistic truth value reasoning based on lattice implication algebra, and related applications in decision-making and forecast. <s> BIB007 </s> Ordering based decision making - A survey <s> Introduction <s> Abstract Although the analytic hierarchy process (AHP) and the extent analysis method (EAM) of fuzzy AHP are extensively adopted in diverse fields, inconsistency increases as hierarchies of criteria or alternatives increase because AHP and EAM require rather complicated pairwise comparisons amongst elements (attributes or alternatives). Additionally, decision makers normally find that assigning linguistic variables to judgments is simpler and more intuitive than to fixed value judgments. Hence, Wang and Chen proposed fuzzy linguistic preference relations (Fuzzy LinPreRa) to address the above problem. This study adopts Fuzzy LinPreRa to re-examine three numerical examples. The re-examination is intended to compare our results with those obtained in earlier works and to demonstrate the advantages of Fuzzy LinPreRa. This study demonstrates that, in addition to reducing the number of pairwise comparisons, Fuzzy LinPreRa also increases decision making efficiency and accuracy. 
<s> BIB008 </s> Ordering based decision making - A survey <s> Introduction <s> Information about user preferences plays a key role in automated decision making. In many domains it is desirable to assess such preferences in a qualitative rather than quantitative way. In this paper, we propose a qualitative graphical representation of preferences that reflects conditional dependence and independence of preference statements under a ceteris paribus (all else being equal) interpretation. Such a representation is often compact and arguably quite natural in many circumstances. We provide a formal semantics for this model, and describe how the structure of the network can be exploited in several inference tasks, such as determining whether one outcome dominates (is preferred to) another, ordering a set outcomes according to the preference relation, and constructing the best outcome subject to available evidence. <s> BIB009
Decision making is the final and crucial step in many real applications such as organization management, financial planning, product evaluation, risk assessment and recommendation; in many cases it can be seen as the process of choosing the most appropriate option from a set of alternatives under the provided criteria, objectives, or preferences BIB006 . With social and economic development, it has become more and more difficult to make decisions based on simple personal judgement. Various decision making models and methods have therefore been developed to support humans in making decisions under complex situations, but it is still a hard task to make a good decision, especially in a complex, dynamic and uncertain socio-economic environment. Orderings provide a very natural and effective way of representing and reasoning with indeterminate situations, which are pervasive in the commonsense reasoning involved in real-life decision problems. How to handle different ordering relationships in decision making is therefore an essential research problem BIB007 . In general, an order is an arrangement of elements according to some defined standard or natural relationship, such as alphabetical order, numerical order, or the power set order whose ordering relation is the inclusion relation between subsets. Ordinal information in real life, especially in decision making, includes ordinal attributes, preference relations and so on. For example, you might use A, B, C, D, and F to grade a student with the assumption that A>B>C>D>F; or you may "prefer beef to lamb" when ordering a meal, i.e., you put beef before lamb in a preferential order. Ordering based decision making is therefore concerned with finding suitable methods for evaluating candidates or ranking alternatives based on the provided ordinal information and criteria, which in many cases means ranking alternatives based on preferential ordering information. In ordering based decision making, it is often more natural and reasonable for decision makers to express a qualitative preferential ordering among alternatives than to provide quantitative preference degrees. Additionally, there are usually many conditions or criteria in real decision making problems, not all of which can be satisfied simultaneously due to the uncertainty and complexity involved BIB008 . Sometimes, it may not be feasible or realistic to acquire an exact judgment on each attribute due to time restrictions, lack of knowledge or data, limited expertise in the problem domain, and so on. For instance, in expressing preferences about movies, it is much easier for most people to express their preferences over two movies they have seen than to describe preferences over attributes like the director or the main actors. Some people may not know the names of directors or actors well, even though they may have a preference for movies with good directors. To conduct ordering based decision making, the first stage is to find suitable structures for representing the ordinal information involved, which is known as information representation. We then need to choose a suitable aggregation algorithm or inference mechanism in order to aggregate or rank the alternatives according to the provided ordinal information.
The final step is to choose the "best" alternative, which is normally made up of two phases: (a) the aggregation of ordering relations to obtain a collective performance value for the alternatives, and (b) the exploitation of the collective performance value in order to establish a rank ordering among the alternatives and choose the best one BIB001 BIB002 . To represent ordinal information, many kinds of ordered structures have been defined, owing to the diversity of orders in real problems, such as totally ordered structures, partially ordered structures and lattice structures [14] . Totally ordered sets are widely used for representing ordering information in decision making because of their simplicity, but we are often forced to make simplifying assumptions about reality when using only totally ordered sets. We use totally ordered sets to represent all ordering relations when we lack the ability or tools to handle nonlinear ones. In fact, most relations in the real world are nonlinear, because human intelligent activities, especially decision making, are always associated with many uncertainties. Incomparability is one such kind of uncertainty, mainly caused by ambiguity, conflicting opinions or missing information. For example, we often find it difficult to make a decision in real life when the decision is based on multiple criteria among which conflicting opinions exist. Partially ordered sets or lattice structures are more suitable and flexible for information representation in these situations BIB004 . Partially ordered pieces of knowledge appear in many applications because of the dynamics of knowledge and because we merge information from multiple sources BIB005 BIB003 . For example, one situation may be better in one dimension but worse in another. Partial orders offer more flexibility than total orders for representing incomplete knowledge. Moreover, they avoid comparing unrelated pieces of information. The following two examples BIB009 show that partial orders are ubiquitous in our daily decision making problems.
Ordering based decision making - A survey <s> Example 1. <s> Formal theories and rational choice methods have become increasingly prominent in most social sciences in the past few decades. Proponents of formal theoretical approaches argue that these methods are more scientific and sophisticated than other approaches, and that formal methods have already generated significant theoretical progress. As more and more social scientists adopt formal theoretical approaches, critics have argued that these methods are flawed and that they should not become dominant in most social-science disciplines.Rational Choice and Security Studies presents opposing views on the merits of formal rational choice approaches as they have been applied in the subfield of international security studies. This volume includes Stephen Walt's article "Rigor or Rigor Mortis? Rational Choice and Security Studies," critical replies from prominent political scientists, and Walt's rejoinder to his critics.Walt argues that formal approaches have not led to creative new theoretical explanations, that they lack empirical support, and that they have contributed little to the analysis of important contemporary security problems. In their replies, proponents of rational choice approaches emphasize that formal methods are essential for achieving theoretical consistency and precision. <s> BIB001 </s> Ordering based decision making - A survey <s> Example 1. <s> The subject of this work is to establish a mathematical framework that provides the basis and tool for synthesis and evaluation analysis in decision making, especially from the logic point of view. This paper focuses on a flexible and realistic approach, i.e., the use of linguistic assessment in decision making, specially, the symbolic approach acts by direct computation on linguistic values. A lattice-valued linguistic algebra model, which is based on a logical algebraic structure, i.e., lattice implication algebra, is applied to represent imprecise information and deal with both comparable and incomparable linguistic values (i.e., non-ordered linguistic values). Within this framework, some known weighted aggregation functions are analyzed and extended to deal with these kinds of lattice-value linguistic information. <s> BIB002 </s> Ordering based decision making - A survey <s> Example 1. <s> In this paper, we investigate group decision making problems with multiple types of linguistic preference relations. The paper has two parts with similar structures. In the first part, we transform the uncertain additive linguistic preference relations into the expected additive linguistic preference relations, and present a procedure for group decision making based on multiple types of additive linguistic preference relations. By using the deviation measures between additive linguistic preference relations, we give some straightforward formulas to determine the weights of decision makers, and propose a method to reach consensus among the individual preferences and the group's opinion. In the second part, we extend the above results to group decision making based on multiple types of multiplicative linguistic preference relations, and finally, a practical example is given to illustrate the application of the results. <s> BIB003 </s> Ordering based decision making - A survey <s> Example 1. <s> When addressing decision-making problems, decision-makers typically express their opinions using fuzzy preference relations. 
In some instances, decision-makers may have to deal with the problems in which only partial information is available. Consequently, decision-makers embody their preferences as incomplete fuzzy preference relations. The values of incomplete fuzzy preference relations have been considered crisp in recent studies. To allow decision-makers to provide vague or imprecise responses, this study proposes a novel method, called incomplete fuzzy linguistic preference relations, that uses fuzzy linguistic assessment variables instead of crisp values of incomplete fuzzy preference relations to ensure comparison consistency. The proposed method reflects an environment in which some uncertainty or vagueness exists. Examples are included that illustrate the effectiveness of the proposed method. <s> BIB004 </s> Ordering based decision making - A survey <s> Example 1. <s> Abstract Although the analytic hierarchy process (AHP) and the extent analysis method (EAM) of fuzzy AHP are extensively adopted in diverse fields, inconsistency increases as hierarchies of criteria or alternatives increase because AHP and EAM require rather complicated pairwise comparisons amongst elements (attributes or alternatives). Additionally, decision makers normally find that assigning linguistic variables to judgments is simpler and more intuitive than to fixed value judgments. Hence, Wang and Chen proposed fuzzy linguistic preference relations (Fuzzy LinPreRa) to address the above problem. This study adopts Fuzzy LinPreRa to re-examine three numerical examples. The re-examination is intended to compare our results with those obtained in earlier works and to demonstrate the advantages of Fuzzy LinPreRa. This study demonstrates that, in addition to reducing the number of pairwise comparisons, Fuzzy LinPreRa also increases decision making efficiency and accuracy. <s> BIB005 </s> Ordering based decision making - A survey <s> Example 1. <s> We investigate the computational complexity of testing dominance and consistency in CP-nets. Previously, the complexity of dominance has been determined for restricted classes in which the dependency graph of the CP-net is acyclic. However, there are preferences of interest that define cyclic dependency graphs; these are modeled with general CP-nets. In our main results, we show here that both dominance and consistency for general CP-nets are PSPACE-complete. We then consider the concept of strong dominance, dominance equivalence and dominance incomparability, and several notions of optimality, and identify the complexity of the corresponding decision problems. The reductions used in the proofs are from STRIPS planning, and thus reinforce the earlier established connections between both areas. <s> BIB006
(My dinner I) Consider the simple example shown in Fig. 1, which expresses my preference over dinner configurations, where S and W stand for the soup and the wine respectively. An arrow going from alternative x_i to x_j indicates that x_i is preferred to x_j. The figure shows that I strictly prefer fish soup (S_f) to vegetable soup (S_v), while my preference between red (W_r) and white (W_w) wine is conditioned on the soup to be served: I prefer white wine if served a fish soup, and red wine if served a vegetable soup. The preferential relation shown in Example 1 is a totally ordered set, but it becomes a partial order when the main course M is added as another variable, as shown in Example 2. This is a partially ordered set in which M_sc ∧ S_f ∧ W_r and M_sc ∧ S_v ∧ W_w are incomparable, and such situations can easily be elicited from real-life decision making problems. As a special kind of partially ordered structure, the lattice has been shown to be a suitable and efficient structure for representing ordering relationships in the real world, owing to its better properties and additional operations [14] . Although these additional properties and operations in turn restrict its application areas, there are still many real problems which can be modelled by a lattice structure. For example, when evaluating the quality of some products, one may express one's opinions as "high", "very high", "low", and so on, which can be illustrated as the lattice structure in Fig. 3 BIB002 . Furthermore, the lattice also plays an important role as the truth-value field of logic; that is, it has a close relation to logic and can serve as a bridge between real problems and their logical foundation. Fig. 3. A kind of lattice ordering structure: I=more high, b=high, c=less high, a=less low, d=low, O=more low. After the representation of the ordinal information involved, the next step is to use an aggregation algorithm or inference mechanism to obtain the final decision, usually an ordering of alternatives. To obtain the final decision result, current ordering based decision making methods mainly take the information aggregation point of view, i.e., how to combine all the decision makers' opinions to obtain a final ordering or evaluation. For multi-criteria decision making, the judgments provided by decision makers for different criteria are usually assumed to be preference relations in the same form BIB005 BIB004 or similar forms BIB006 BIB003 . However, in many real cases the provided preference relations are in different forms, which can be illustrated using the following example. Example 3. In a movie ranking problem, customers are asked to provide qualitative preferential orderings over the movies they have enjoyed. It is very common that a customer can only provide ratings for some movies, but not for others that he/she has not yet had the chance to watch. The ratings given by each customer can therefore be transferred into a preference relation among the movies, which, for example, might be illustrated as in Fig. 4, where five customers are asked to express their preferences among five different movies a, b, c, d, e, and an arrow directed from alternative x to y indicates that x is preferred to y. Taking the upper right one as an example, this customer cannot express his/her preference between movies b and c, but this does not mean that he/she has no opinion about movie b or c individually, because he/she prefers a to b, and a to c as well.
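To make the notions of dominance and incomparability in such preference graphs concrete, the following minimal sketch (in Python, with hypothetical edges loosely following Example 3) stores one customer's qualitative preferences as directed edges, computes the transitive closure, and reports two alternatives as incomparable when neither dominates the other.

```python
# Illustrative sketch (hypothetical data, loosely following Example 3):
# a qualitative preference relation as a directed graph, with dominance
# tested via transitive closure and incomparability as mutual non-dominance.

def transitive_closure(edges, alternatives):
    reach = {x: set() for x in alternatives}
    for x, y in edges:                     # x is preferred to y
        reach[x].add(y)
    changed = True
    while changed:                         # propagate preference by transitivity
        changed = False
        for x in alternatives:
            for y in set(reach[x]):
                new = reach[y] - reach[x]
                if new:
                    reach[x] |= new
                    changed = True
    return reach

def relation(reach, x, y):
    if y in reach[x]:
        return "x preferred to y"
    if x in reach[y]:
        return "y preferred to x"
    return "incomparable"

# One customer's partial preferences over movies a..e:
alternatives = ["a", "b", "c", "d", "e"]
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "e")]
reach = transitive_closure(edges, alternatives)

print(relation(reach, "a", "d"))   # "x preferred to y" (via b)
print(relation(reach, "b", "c"))   # "incomparable"
```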
Taking into account the fact that most existing ordering based decision making approaches focus mainly on totally ordered information, while the information involved in real decision making problems is mostly partially ordered, this paper has discussed the importance and necessity of decision making with partially ordered information throughout the Introduction. Although there are some decision making approaches that can deal with partially ordered information, they usually transform qualitative information into a quantitative scale, which causes loss of information and is time consuming. It is more natural and reasonable to represent and reason about qualitative ordering information in its original form, i.e., in a symbolic way. From the viewpoint of symbolism, it is important and necessary to establish a logical foundation for decision making approaches. As put by Zagare BIB001 : "Without a logically consistent theoretical structure to explain them, empirical observations are impossible to evaluate; without a logically consistent theoretical structure to constrain them, original and creative theories are of limited utility; and without a logically consistent argument to support them, even entirely laudable conclusions … lose much of their intellectual force." That is, logic serves as the most important foundation and standard for justifying or evaluating the soundness and consistency of our methods. In summary, this survey highlights decision making with partially ordered information from the algebraic and logical points of view. Because decision making is a very general research domain that touches almost all human activities, we are not able to cover all the related topics in this paper; this survey therefore mainly reviews existing ordering based decision making approaches, especially those which consider partially ordered qualitative information or are logic-oriented. Furthermore, this survey is written mainly from the methodology point of view; that is, we mainly review the methods and the underlying theories, not the applications. Based on the over 100 articles and books collected (searched via Google Scholar, IEEE Xplore, ScienceDirect, SpringerLink, Wiley Online Library, Web of Knowledge, and so on), three issues are examined: (i) Which approaches can handle partially ordered information? (ii) Which approaches are designed for treating qualitative information? (iii) Which approaches are developed from the algebraic and logic points of view? The rest of this paper is organized as follows: Section 2 reviews the existing methods for representing ordinal qualitative information in decision making, which serve as the basis of most decision making approaches reviewed in this paper. Representation and aggregation methods for preference based decision making approaches are reviewed in Section 3, and decision making methods from the logic point of view are reviewed in Section 4. Section 5 summarizes the reviewed methods in a table and draws some concluding remarks, with some thoughts on future research directions.
Ordering based decision making - A survey <s> Representation of Ordinal Qualitative Information <s> This paper continues our investigation on hedge albebras (6). We extend hedge algebras by two additional operations corresponding to infimum and supremum of the so-called concept category of an element x, i.e. the set which is generated from x by means of the hedge operations. It is shown that every extended hedge algebra with a lattice of the primary generators is a lattice. In the symmetrical extended hedge algebras we are able to define negation and implication, called concept-negation and concept- implication. Furthermore, it is proved that there exists an isomorphism from a subaigebra of a symmetrical extended hedge algebra of a linguistic truth variable into the closed unit interval (0, 1), under which the concept-negation and the concept-implication correspond to the negation and a kind of implication in multiple-valued logic based on the unit- interval (0, 1). <s> BIB001 </s> Ordering based decision making - A survey <s> Representation of Ordinal Qualitative Information <s> This article is devoted to defining some aggregation operations between linguistic labels. First, from some remarks about the meaning of label addition, a formal and general definition of a label space is introduced. After, addition, difference, and product by a positive real number are formally defined on that space. the more important properties of these operations are studied, paying special attention to the convex combination labels. the article concludes with some numerical examples. © 1993 John Wiley Sons, Inc. <s> BIB002 </s> Ordering based decision making - A survey <s> Representation of Ordinal Qualitative Information <s> As its name suggests, computing with words (CW) is a methodology in which words are used in place of numbers for computing and reasoning. The point of this note is that fuzzy logic plays a pivotal role in CW and vice-versa. Thus, as an approximation, fuzzy logic may be equated to CW. There are two major imperatives for computing with words. First, computing with words is a necessity when the available information is too imprecise to justify the use of numbers, and second, when there is a tolerance for imprecision which can be exploited to achieve tractability, robustness, low solution cost, and better rapport with reality. Exploitation of the tolerance for imprecision is an issue of central importance in CW. In CW, a word is viewed as a label of a granule; that is, a fuzzy set of points drawn together by similarity, with the fuzzy set playing the role of a fuzzy constraint on a variable. The premises are assumed to be expressed as propositions in a natural language. In coming years, computing with words is likely to evolve into a basic methodology in its own right with wide-ranging ramifications on both basic and applied levels. <s> BIB003 </s> Ordering based decision making - A survey <s> Representation of Ordinal Qualitative Information <s> Abstract This survey points out recent advances in multiple attribute decision making methods dealing with fuzzy or ill-defined information. Fuzzy MAUT as well as fuzzy outranking methods are reviewed. Aggregation procedures, choice problems and treatment of interactive attributes are covered. Trends in research and open problems are indicated. 
<s> BIB004 </s> Ordering based decision making - A survey <s> Representation of Ordinal Qualitative Information <s> Discusses a methodology for reasoning and computing with perceptions rather than measurements. An outline of such a methodology-referred to as a computational theory of perceptions is presented in this paper. The computational theory of perceptions, or CTP for short, is based on the methodology of CW. In CTP, words play the role of labels of perceptions and, more generally, perceptions are expressed as propositions in a natural language. CW-based techniques are employed to translate propositions expressed in a natural language into what is called the Generalized Constraint Language (GCL). In this language, the meaning of a proposition is expressed as a generalized constraint, N is R, where N is the constrained variable, R is the constraining relation and isr is a variable copula in which r is a variable whose value defines the way in which R constrains S. Among the basic types of constraints are: possibilistic, veristic, probabilistic, random set, Pawlak set, fuzzy graph and usuality. The wide variety of constraints in GCL makes GCL a much more expressive language than the language of predicate logic. In CW, the initial and terminal data sets, IDS and TDS, are assumed to consist of propositions expressed in a natural language. These propositions are translated, respectively, into antecedent and consequent constraints. Consequent constraints are derived from antecedent constraints through the use of rules of constraint propagation. The principal constraint propagation rule is the generalized extension principle. The derived constraints are retranslated into a natural language, yielding the terminal data set (TDS). The rules of constraint propagation in CW coincide with the rules of inference in fuzzy logic. A basic problem in CW is that of explicitation of N, R, and r in a generalized constraint, X is R, which represents the meaning of a proposition, p, in a natural language. <s> BIB005 </s> Ordering based decision making - A survey <s> Representation of Ordinal Qualitative Information <s> The Fuzzy Linguistic Approach has been applied successfully to different areas. The use of linguistic information for modelling expert preferences implies the use of processes of Computing with Words. To accomplish these processes different approaches has been proposed in the literature: (i) Computational model based on the Extension Principle, (ii) the symbolic one(also called ordinal approach), and (iii) the 2-tuple linguistic computational model. The main problem of the classical approaches, (i) and (ii), is the loss of information and lack of precision during the computational processes. In this paper, we want to compare the linguistic description, accuracy and consistency of the results obtained using each model over the rest ones. To do so, we shall solve a Multiexpert Multicriteria Decision-Making problem defined in a multigranularity linguistic context using the different computational approaches. This comparison helps us to decide what model is more adequated for computing with words. <s> BIB006 </s> Ordering based decision making - A survey <s> Representation of Ordinal Qualitative Information <s> In the paper, we shall examine the fuzziness measure (FM) of terms or of complete and linear hedge algebras of a linguistic variable. 
The notion of semantically quantifying mappings (SQMs) previously examined by the first author will be redefined more generally and a closed relation between the FM of linguistic terms and a family of SQMs with the parameters to be the FM of primary terms and linguistic hedges will be established. A semantics-based topology of hedge algebras and a closed and interesting relation between this topology, the FM and the above family of SQMs will be discovered and examined. An applicability of the FM and SQMs will be shown by an examination of some application examples. <s> BIB007 </s> Ordering based decision making - A survey <s> Representation of Ordinal Qualitative Information <s> Decision making is inherent to mankind, as human beings daily face situations in which they should choose among different alternatives by means of reasoning and mental processes. Many of these decision problems are under uncertain environments with vague and imprecise information. This type of information is usually modelled by linguistic information because of the common use of language by the experts involved in the given decision situations, originating linguistic decision making. The use of linguistic information in decision making demands processes of Computing with Words to solve the related decision problems. Different methodologies and approaches have been proposed to accomplish such processes in an accurate and interpretable way. The good performance of linguistic computing dealing with uncertainty has caused a spread use of it in different types of decision based applications. This paper overviews the more significant and extended linguistic computing models due to its key role in linguistic decision making and a wide range of the most recent applications of linguistic decision support models. <s> BIB008 </s> Ordering based decision making - A survey <s> Representation of Ordinal Qualitative Information <s> In multi-expert decision making (MEDM) problems the experts provide their preferences about the alternatives according to their knowledge. Because they can have different knowledge, educational backgrounds, or experiences, it seems logical that they might use different evaluation scales to express their opinions. In the present article, we focus on decision problems defined in uncertain contexts where such uncertainty is modeled by means of linguistic information, therefore the decision makers would use different linguistic scales to express their evaluations on the alternatives, i.e., multigranular linguistic scales. Several computational approaches have been presented to manage multigranular linguistic scales in decision problems. Although they provide good results in some cases, still present limitations. A new approach, so-called extended linguistic hierarchies, is presented here for managing multigranular linguistic scales to overcome those limitations, an MEDM case study is given to illustrate the proposed method. <s> BIB009 </s> Ordering based decision making - A survey <s> Representation of Ordinal Qualitative Information <s> Many real world problems need to deal with uncertainty, therefore the management of such uncertainty is usually a big challenge. Hence, different proposals to tackle and manage the uncertainty have been developed. Probabilistic models are quite common, but when the uncertainty is not probabilistic in nature other models have arisen such as fuzzy logic and the fuzzy linguistic approach. 
The use of linguistic information to model and manage uncertainty has given good results and implies the accomplishment of processes of computing with words. A bird's eye view in the recent specialized literature about linguistic decision making, computing with words, linguistic computing models and their applications shows that the 2-tuple linguistic representation model [44] has been widely-used in the topic during the last decade. This use is because of reasons such as, its accuracy, its usefulness for improving linguistic solving processes in different applications, its interpretability, its ease managing of complex frameworks in which linguistic information is included and so forth. Therefore, after a decade of extensive and intensive successful use of this model in computing with words for different fields, it is the right moment to overview the model, its extensions, specific methodologies, applications and discuss challenges in the topic. <s> BIB010 </s> Ordering based decision making - A survey <s> Representation of Ordinal Qualitative Information <s> Dealing with uncertainty is always a challenging problem, and different tools have been proposed to deal with it. Recently, a new model that is based on hesitant fuzzy sets has been presented to manage situations in which experts hesitate between several values to assess an indicator, alternative, variable, etc. Hesitant fuzzy sets suit the modeling of quantitative settings; however, similar situations may occur in qualitative settings so that experts think of several possible linguistic values or richer expressions than a single term for an indicator, alternative, variable, etc. In this paper, the concept of a hesitant fuzzy linguistic term set is introduced to provide a linguistic and computational basis to increase the richness of linguistic elicitation based on the fuzzy linguistic approach and the use of context-free grammars by using comparative terms. Then, a multicriteria linguistic decision-making model is presented in which experts provide their assessments by eliciting linguistic expressions. This decision model manages such linguistic expressions by means of its representation using hesitant fuzzy linguistic term sets. <s> BIB011
Qualitative information, such as judgments/opinions from experts, is frequently used in the area of decision making. Human beings usually give their judgments/opinions about things using natural language (linguistic terms). Linguistic terms, unlike numerical ones whose values are crisp numbers, are always vague and imprecise. Sometimes it is difficult to clarify the boundary of some linguistic terms or words, yet one can understand their common meaning well. Linguistic information involved in decision making problems usually carries some ordering relation. For example, when we are evaluating the quality of a computer, the evaluations may be "bad," "acceptable," "good," "very good," and these evaluations are ordered according to their semantic meanings. There are generally two kinds of approaches for decision making with linguistic information: the fuzzy set based method and the symbolic method. The conventional fuzzy set based method BIB004 BIB003 BIB005 uses membership functions or fuzzy numbers to represent linguistic information and needs a linguistic approximation of the final computed result, which is time consuming and computationally complex BIB006 . Symbolic approaches BIB002 BIB008 use symbols (usually in a structure) to represent linguistic information directly, without the numerical approximation, and aggregate or reason about these symbols to obtain the final result. One representative linguistic valued information processing approach is the fuzzy ordinal linguistic approach BIB002 BIB008 . This method uses an ordered structure, linguistic labels with indexes, to represent the set of linguistic terms, under the assumption that the terms under discussion are totally ordered . The 2-tuple linguistic representation (or computational) model BIB006 BIB010 , one of the most popular extended fuzzy ordinal linguistic approaches, is a continuous linguistic representation model. In this model, linguistic information is represented by a linguistic 2-tuple, a pair of values (L_i, α_i), where L_i ∈ S is a linguistic label and α_i ∈ [-0.5, 0.5) is a number, called the symbolic translation, which captures the "difference of information" between the result obtained after aggregation and the closest label in the set of linguistic terms. Take the simple linguistic term set S={s_-4=extremely low, s_-3=very low, s_-2=low, s_-1=slightly low, s_0=fair, s_1=slightly high, s_2=high, s_3=very high, s_4=extremely high} as an example: (s_2, 0) is "high", while (s_2, -0.2) expresses an evaluation that differs from "high" by -0.2, that is, it is 0.2 lower than "high". These representative symbolic approaches use linguistic labels with indexes to represent linguistic terms and operate on these indexes, so the representation and manipulation of linguistic terms are explored in a qualitative setting , avoiding the underlying numerical approximation needed by the fuzzy set based method. Suppose that S={s_0, s_1, …, s_g} is the set of linguistic terms under consideration, which is an ordered structure, i.e., s_i < s_j iff i < j. In the linguistic 2-tuple method, the 2-tuple representation (L_i, α_i) is transformed into a number by combining the index i and the number α_i, which is used for aggregation; the result obtained after aggregation is then retransformed into a 2-tuple.
The transformation between a 2-tuple and its equivalent numerical value β∈[0, g], where g+1 is the cardinality of the linguistic term set S, is defined as Δ(β) = (s_i, α), with i = round(β) and α = β − i, where round is the usual rounding operation, i is the index of the linguistic term s_i closest to β, and α ∈ [-0.5, 0.5) is the value of the symbolic translation; conversely, Δ⁻¹(s_i, α) = i + α = β. Based on the idea of hesitant fuzzy sets , Rodríguez et al. BIB011 introduced the concept of hesitant fuzzy linguistic term sets, which allow the decision makers to express their opinions using several linguistic terms instead of a single term. For example, the judgment of one decision maker on some alternative may be {slightly low, fair}. Hesitant fuzzy linguistic term sets are more appropriate than the traditional fuzzy linguistic approach, which does not allow the decision makers to use more than one linguistic term to assess each alternative, for situations where decision makers hesitate among several linguistic values when assessing the alternatives. Although the fuzzy ordinal linguistic approach has the advantages of computational simplicity, with no need for membership functions, and of avoiding loss of information BIB006 BIB009 , it requires the ordering relation under discussion to be a total order that can be represented by indexes. This limits the application of the fuzzy ordinal linguistic approach in more general situations where partially ordered information is often involved. The hedge algebra , a linguistic information representation method from the algebraic point of view, was proposed as an ordered algebraic structure for modelling linguistic terms. In later years, Ho et al. proposed extended hedge algebras BIB001 , refined hedge algebras , and complete hedge algebras BIB007 . Generally speaking, linguistic hedges can be seen as linguistic modifiers, such as "very," "little," "possibly," etc., applied to prime terms such as "high and low" or "true and false." These linguistic hedges can strengthen or weaken the meaning of the prime terms. A hedge algebra is then constructed by applying the set of hedges to the prime terms (also called generators); it is essentially a partially ordered set according to the natural meanings of its elements, and generally a lattice, as shown in Fig. 3. Hedge algebras are based on the idea that there exists a semantic ordering relation among these linguistic terms, and that linguistic hedges, which can strengthen or weaken the meaning of the prime terms, play a vital role in generating the algebraic structure . Hedge algebras are logical algebras, so logic systems and the corresponding approximate reasoning methods can be built intuitively on top of hedge algebras, but no such work has been done.
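The following is a small illustrative sketch (in Python, using the nine-term example set from above re-indexed from 0 to g) of the 2-tuple transformation and its inverse; the aggregation step shown is a plain arithmetic mean and is only meant to illustrate the round trip between labels and numbers.

```python
# Illustrative sketch of the 2-tuple linguistic transformation described
# above, using a 0..g re-indexing of the example term set (g = 8 here).

TERMS = ["extremely low", "very low", "low", "slightly low", "fair",
         "slightly high", "high", "very high", "extremely high"]
G = len(TERMS) - 1                 # g, so beta ranges over [0, g]

def delta(beta):
    """Transform a numerical value beta in [0, g] into a 2-tuple (s_i, alpha)."""
    i = int(round(beta))           # index of the closest linguistic term
    alpha = round(beta - i, 2)     # symbolic translation (difference to s_i)
    return TERMS[i], alpha

def delta_inverse(term, alpha):
    """Transform a 2-tuple back into its equivalent numerical value beta."""
    return TERMS.index(term) + alpha

# Aggregating three assessments ("high", "fair", "very high") by averaging
# their numerical equivalents, then retranslating to a 2-tuple:
beta = sum(delta_inverse(t, 0.0) for t in ["high", "fair", "very high"]) / 3
print(delta(beta))                 # ('high', -0.33), i.e. a bit below "high"
```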
Ordering based decision making - A survey <s> Preference Based Decision Making <s> A multiperson decision-making problem, where the information about the alternatives provided by the experts can be presented by means of different preference representation structures (preference orderings, utility functions and multiplicative preference relations) is studied. Assuming the multiplicative preference relation as the uniform element of the preference representation, a multiplicative decision model based on fuzzy majority is presented to choose the best alternatives. In this decision model, several transformation functions are obtained to relate preference orderings and utility functions with multiplicative preference relations. The decision model uses the ordered weighted geometric operator to aggregate information and two choice degrees to rank the alternatives, quantifier guided dominance degree and quantifier guided non-dominance degree. The consistency of the model is analysed to prove that it acts coherently. © 2001 Elsevier Science B.V. All rights reserved. <s> BIB001 </s> Ordering based decision making - A survey <s> Preference Based Decision Making <s> Abstract The purpose of this paper is to study a fuzzy multipurpose decision making problem, where the information about the alternatives provided by the experts can be of a diverse nature. the information can be represented by means of preference orderings, utility functions and fuzzy preference relations, and our objective is to establish a general model which cover all possible representations. Firstly, we must make the information uniform, using fuzzy preference relations as uniform preference context. Secondly, we present some selection processes for multiple preference relations based on the concept of fuzzy majority. Fuzzy majority is represented by a fuzzy quantifier, and applied in the aggregation, by means of an OWA operator whose weights are calculated by the fuzzy quantifier. We use two quantifier guided choice degrees of alternatives, a dominance degree used to quantify the dominance that one alternative has over all the others, in a fuzzy majority sense, and a non dominance degree, that generalises Orlovski's non dominated alternative concept. The application of the two above choice degrees can be carried out according to two different selection processes, a sequential selection process and a conjunction selection process. <s> BIB002 </s> Ordering based decision making - A survey <s> Preference Based Decision Making <s> The aim of this paper is to study the integration of multiplicative preference relation as a preference representation structure in fuzzy multipurpose decision-making problems. Assuming fuzzy multipurpose decision-making problems under different preference representation structures (ordering, utilities and fuzzy preference relations) and using the fuzzy preference relations as uniform representation elements, the multiplicative preference relations are incorporated in the decision problem by means of a transformation function between multiplicative and fuzzy preference relations. A consistency study of this transformation function, which demonstrates that it does not change the informative content of multiplicative preference relation, is shown. As a consequence, a selection process based on fuzzy majority for multipurpose decision-making problems under multiplicative preference relations is presented.
To design it, an aggregation operator of information, called ordered weighted geometric operator, is introduced, and two choice degrees, the quantifier-guided dominance degree and the quantifier-guided non-dominance degree, are defined for multiplicative preference relations. © 2001 Elsevier Science B.V. All rights reserved. <s> BIB003 </s> Ordering based decision making - A survey <s> Preference Based Decision Making <s> An alternative voting system, referred to as probabilistic Borda rule, is developed and analyzed. The winning alternative under this system is chosen by lottery where the weights are determined from each alternative's Borda score relative to all Borda points possible. Advantages of the lottery include the elimination of strategic voting on the set of alternatives under consideration and breaking the tyranny of majority coalitions. Disadvantages include an increased incentive for strategic introduction of new alternatives to alter the lottery weights, and the possible selection of a Condorcet loser. Normative axiomatic properties of the system are also considered. It is shown this system satisfies the axiomatic properties of the standard Borda procedure in a probabilistic fashion. <s> BIB004 </s> Ordering based decision making - A survey <s> Preference Based Decision Making <s> In this article, we introduce the induced ordered weighted geometric (IOWG) operator and its properties. This is a more general type of OWG operator, which is based on the induced ordered weighted averaging (IOWA) operator. We provide some IOWG operators to aggregate multiplicative preference relations in group decision-making (GDM) problems. In particular, we present the importance IOWG (I-IOWG) operator, which induces the ordering of the argument values based on the importance of the information sources; the consistency IOWG (C-IOWG) operator, which induces the ordering of the argument values based on the consistency of the information sources; and the preference IOWG (P-IOWG) operator, which induces the ordering of the argument values based on the relative preference values associated with each one of them. We also provide a procedure to deal with "ties" regarding the ordering induced by the application of one of these IOWG operators. This procedure consists of a sequential application of the aforementioned IOWG operators. Finally, we analyze the reciprocity and consistency properties of the collective multiplicative preference relations obtained using IOWG operators. © 2004 Wiley Periodicals, Inc. <s> BIB005 </s> Ordering based decision making - A survey <s> Preference Based Decision Making <s> We develop the ordinal theory of (semi)continuous multi-utility representation for incomplete preference relations. We investigate the cases in which the representing sets of utility functions are either arbitrary or finite, and those cases in which the maps contained in these sets are required to be (semi)continuous. With the exception of the case where the representing set is required to be finite, we find that the requirements of such representations are surprisingly weak, pointing to a wide range of applicability of the representation theorems reported here. Some applications to decision theory under uncertainty and consumer theory are also considered.
<s> BIB006 </s> Ordering based decision making - A survey <s> Preference Based Decision Making <s> When the functional form of utility is unknown, conventional measures of risk aversion are often approximated by applying a Taylor series expansion to expected utility. This is shown to produce counterintuitive rank-orderings of risk preferences for individuals who are willing to pay equal reservation prices in lotteries with different prizes. Moreover, individuals who are unwilling to participate in favorable lotteries may be incorrectly identified as having a finite aversion to risk. Correct orderings are obtained by applying a discrete measure of relative risk aversion. The contrast between the conventional and discrete measures is illustrated with data from three Dutch surveys. <s> BIB007 </s> Ordering based decision making - A survey <s> Preference Based Decision Making <s> In this paper, we present a new approach for fuzzy multiple attributes group decision-making based on fuzzy preference relations. First, we construct fuzzy importance matrices for decision-makers with respect to attributes and construct fuzzy evaluating matrices for decision-makers with respect to the attributes of the alternatives. Based on the fuzzy importance matrices and the fuzzy evaluating matrices, we construct fuzzy rating vectors for decision-makers with respect to the alternatives. Then, we defuzzify the trapezoidal fuzzy numbers in the constructed fuzzy rating vectors to get the rating vectors for the decision-makers. Based on the rating vectors, we construct fuzzy preference relations for the decision-makers with respect to the alternatives. Based on the fuzzy preference relations, we calculate the average rating value of each decision-maker with respect to the alternatives. Then, we sort these average rating values in a descending sequence and assign them different scores. Then, we calculate the summation values of the scores of the alternatives with respect to each decision-maker, respectively. The larger the summation values of the scores, the better the choice of the alternative. The proposed method is simpler than Chen's method (2000) and Li's method (2007) for handling fuzzy multiple attributes group decision-making problems. It provides us with a useful way to handle fuzzy multiple attributes group decision-making problems. <s> BIB008
Preference from experts is the most commonly used ordinal information in real world decision making problems. Examples include risk aversion in economics and finance BIB007 , quality assessment of services, food or textile products , dance competition adjudication, meta-search engines whose goal is to combine the preference relations of several WWW search engines , and so on. This kind of preference often appears as a partially ordered structure . For decision making in this situation, a preference aggregation procedure is usually applied to combine these partial orders into an overall preference ordering, which may again be a partial order. Normally, the preference representation structures can be categorized into three types according to the decision making objective and the provided information BIB001 BIB002 BIB003 BIB005 BIB006 BIB008 : (1) Utility function. A utility function is a function given by each decision maker which assigns each alternative a real number, the utility value, indicating the decision maker's evaluation of that alternative. The utility values are mainly given by the decision makers subjectively according to some requirements. (2) Preference relation. This preference representation structure asks the decision maker to express his preferences on the set of alternatives based on pairwise comparison. This is usually described as a preference relation matrix whose elements depict the degree, usually numerical, to which the decision maker prefers one alternative to another. (3) Preference ordering. In the case of preference ordering, each decision maker expresses his preferences on the set of alternatives as a preference ordering, which can be a total order that ranks the alternatives from best to worst, or a partial order in which some alternatives are incomparable. Although utility functions, preference relations and preference orderings are the three major representation models of preference, and they are applicable to different situations, there are also some overlaps among them. For example, the Borda count BIB004 , a widely used preference based ordering method, first asks decision makers to rank the alternatives, and then allocates absolute scores to the alternatives according to the ranking, which are similar to utility values. There are also transformation methods from one representation structure to another, such as the methods presented by Chiclana and Herrera et al. BIB001 BIB002 BIB003 for transforming utility values and preference orderings into preference relations. A simple method for obtaining a preference relation from a set of utility values given by an expert is to compare the utility values of each pair of alternatives; one such transformation, commonly used in the literature, is sketched below.
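As an illustration, the sketch below (in Python, with hypothetical utility values) builds a reciprocal fuzzy preference relation from an expert's utilities using the quadratic transformation p_ij = u_i²/(u_i² + u_j²), one of the transformation functions discussed in the literature cited above; other transformation functions could be substituted without changing the structure of the code.

```python
# Illustrative sketch: building a fuzzy preference relation matrix from an
# expert's utility values. The quadratic transformation used here,
# p_ij = u_i**2 / (u_i**2 + u_j**2), is one choice discussed in the
# literature (e.g., by Chiclana et al.); it is not the only possibility.

def utilities_to_preference_relation(utilities):
    n = len(utilities)
    p = [[0.5] * n for _ in range(n)]      # p_ii is conventionally 0.5
    for i in range(n):
        for j in range(n):
            if i != j:
                ui, uj = utilities[i], utilities[j]
                p[i][j] = ui**2 / (ui**2 + uj**2)
    return p

# Hypothetical utility values for four alternatives:
u = [0.9, 0.6, 0.6, 0.3]
for row in utilities_to_preference_relation(u):
    print([round(x, 2) for x in row])
# Each entry p[i][j] > 0.5 means alternative i is preferred to alternative j,
# and p[i][j] + p[j][i] = 1, so the result is a reciprocal fuzzy
# preference relation.
```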
Ordering based decision making - A survey <s> Decision making based on preference relations <s> Abstract : The book presents a concise yet mathematically complete treatment of modern utility theories that covers nonprobabilistic preference theory, the von Neumann-Morgenstern expected-utility theory and its extensions, and the joint axiomatization of utility and subjective probability. <s> BIB001 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> Abstract In most decisio-making problems a preference relation in the set of alternatives is of a fuzzy nature, reflecting for instance on the fuzziness of experts estimates of the preferences. In this paper, the corresponding fuzzy equivalence and strict preference relations are defined for a given fuzzy non-strict preference relation in an unfuzzy set of alternatives which are used to introduce in a natural way the fuzzy set of nondominated alternatives. Two types of linearity of a fuzzy relation are introduced and the equivalence of the unfuzzy nondominated alternatives is studied. It is shown that unfuzzy nondominated solutions to the decision-making problem exist, provided the original fuzzy relation satisfies some topological requirements. A simple method of calculating these solutions is indicated. <s> BIB002 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> This chapter provides an overview of Analytic Hierarchy Process (AHP), which is a systematic procedure for representing the elements of any problem hierarchically. It organizes the basic rationality by breaking down a problem into its smaller constituent parts and then guides decision makers through a series of pair-wise comparison judgments to express the relative strength or intensity of impact of the elements in the hierarchy. These judgments are then translated to numbers. The AHP includes procedures and principles used to synthesize the many judgments to derive priorities among criteria and subsequently for alternative solutions. It is useful to note that the numbers thus obtained are ratio scale estimates and correspond to so-called hard numbers. Problem solving is a process of setting priorities in steps. One step decides on the most important elements of a problem, another on how best to repair, replace, test, and evaluate the elements, and another on how to implement the solution and measure performance. <s> BIB003 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> Abstract The Analytic Hierarchy Process uses paired comparisons to derive a scale of relative importance for alternatives. We investigate the effect of uncertainty in judgment on the stability of the rank order of alternatives. The uncertainty experienced by decision makers in making comparisons is measured by associating with each judgment an interval of numerical values. The approach leads to estimating the probability that an alternative or project exchanges rank with other projects. These probabilities are then used to calculate the probability that project would change rank at all. We combine the priority of importance of each project with the probability that it does not change rank, to obtain the final ranking. 
<s> BIB004 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> Abstract This paper serves as an introduction to the Analytic Hierarchy Process — A multicriteria decision making approach in which factors are arranged in a hierarchic structure. The principles and the philosophy of the theory are summarized giving general background information of the type of measurement utilized, its properties and applications. <s> BIB005 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> This paper deals with preference modelling in the context of Decision Aid. In this framework, conflicting systems of logic, uncertain knowledge, ambiguous positions are always present. In order to tackle this problem, a multiple criteria methodology is proposed, mainly based on fuzzy outranking relations introduced both at one-dimensional and multi-dimensional levels. Some properties of outranking relations are investigated. Such relations are then combined using fuzzy logical connectives to generate relational systems of fuzzy preferences that are shown to be very useful to reflect the vagueness of information in the various preference situations that may be considered in the modelling process. <s> BIB006 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> We introduce the ordered weighted averaging (OWA) operators. We look at some semantics and applications associated with these operators. We discuss the problem of obtaining the associated weighting parameters. We discuss the connection between OWA operators and linguistic quantifiers. We introduce a number of parametrized families of OWA operators; maximum entropy, S-OWA, step and window are among the most important of these families. We study the evaluation of quantified propositions using these operators. We introduce the idea of aggregate dependent weights. <s> BIB007 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> Abstract An approach to the axiomatics of valued preference modelling is suggested. In the framework of this approach, general definitions of strict preference, indifference and incomparability relations associated with a valued preference relation are established via solving a system of functional equations. Any solution of this system satisfies most of the classical properties. Characterizations of important particular solutions are also proved. <s> BIB008 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> Note: Index. Bibliogr. : p. 239-251 Reference Record created on 2004-09-07, modified on 2016-08-08 <s> BIB009 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> In a linguistic framework, several group decision making processes by direct approach are presented. These processes are designed using the linguistic ordered weighted averaging (LOWA) operator. To do so, first a study is made of the properties and the axiomatic of LOWA operator, showing the rationality of its aggregation way. And secondly, we present the use of LOWA operator to solve group decision making problems from individuals linguistic preference relations. <s> BIB010 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> One of the properties that the OWA operator satisfies is commutativity. 
This condition, that is not satisfied by the weighted mean, stands for equal reliability of all the information sources that supply the data. In this article we define a new combination function, the WOWA (Weighted OWA), that combines the advantages of the OWA operator and the ones of the weighted mean. We study some of its properties and show how it can be extended to deal with linguistic labels. © 1997 John Wiley & Sons, Inc. <s> BIB011 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> In this paper, we study the existence, construction and reconstruction of fuzzy preferencestructures. Starting from the definition of a classical preference structure, we propose anatural definition of a fuzzy preference structure, merely requiring the fuzzification of theset operations involved. Upon evaluating the existence of these structures, we discover thatthe idea of fuzzy preferences is best captured when fuzzy preference structures are definedusing a L U ukasiewicz triplet. We then proceed to investigate the role of the completenesscondition in these structures. This rather extensive investigation leads to the proposal of astrongest completeness condition, and results in the definition of a one-parameter class offuzzy preference structures. Invoking earlier results by Fodor and Roubens, the constructionof these structures from a reflexive binary fuzzy relation is then easily obtained. Thereconstruction of such a structure from its fuzzy large preference relation inevitable toobtain a full characterization of these structures in analogy to the classical case is morecumbersome. The main result of this paper is the discovery of a non-trivial characterizingcondition that enables us to fully characterize the members of a two-parameter class offuzzy preference structures in terms of their fuzzy large preference relation. As a remarkableside-result, we discover three limit classes of characterizable fuzzy preference structures,traces of which are found throughout the preference modelling literature. Copyright Kluwer Academic Publishers 1998 <s> BIB012 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> A multiperson decision-making problem, where the information about the alternatives provided by the experts can be presented by means of diAerent preference representation structures (preference orderings, utility functions and multiplicative preference relations) is studied. Assuming the multiplicative preference relation as the uniform element of the preference representation, a multiplicative decision model based on fuzzy majority is presented to choose the best alternatives. In this decision model, several transformation functions are obtained to relate preference orderings and utility functions with multiplicative preference relations. The decision model uses the ordered weighted geometric operator to aggregate information and two choice degrees to rank the alternatives, quantifier guided dominance degree and quantifier guided non-dominance degree. The consistency of the model is analysed to prove that it acts coherently. ” 2001 Elsevier Science B.V. All rights reserved. <s> BIB013 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> Abstract The purpose of this paper is to study a fuzzy multipurpose decision making problem, where the information about the alternatives provided by the experts can be of a diverse nature. 
the information can be represented by means of preference orderings, utility functions and fuzzy preference relations, and our objective is to establish a general model which cover all possible representations. Firstly, we must make the information uniform, using fuzzy preference relations as uniform preference context. Secondly, we present some selection processes for multiple preference relations based on the concept of fuzzy majority. Fuzzy majority is represented by a fuzzy quantifier, and applied in the aggregation, by means of an OWA operator whose weights are calculated by the fuzzy quantifier. We use two quantifier guided choice degrees of alternatives, a dominance degree used to quantify the dominance that one alternative has over all the others, in a fuzzy majority sense, and a non dominance degree, that generalises Orlovski's non dominated alternative concept. The application of the two above choice degrees can be carried out according to two different selection processes, a sequential selection process and a conjunction selection process. <s> BIB014 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> The ordered weighted averaging (OWA) operator was introduced by Yager.1 The fundamental aspect of the OWA operator is a reordering step in which the input arguments are rearranged in descending order. In this article, we propose two new classes of aggregation operators called ordered weighted geometric averaging (OWGA) operators and study some desired properties of these operators. Some methods for obtaining the associated weighting parameters are discussed, and the relationship between the OWA and DOWGA operators is also investigated. © 2002 Wiley Periodicals, Inc. <s> BIB015 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> An alternative voting system, referred to as probabilistic Borda rule, is developed and analyzed. The winning alternative under this system is chosen by lottery where the weights are determined from each alternative’s Borda score relative to all Borda points possible. Advantages of the lottery include the elimination of strategic voting on the set of alternatives under consideration and breaking the tyranny of majority coalitions. Disadvantages include an increased incentive for strategic introduction of new alternatives to alter the lottery weights, and the possible selection of a Condorcet loser. Normative axiomatic properties of the system are also considered. It is shown this system satisfies the axiomatic properties of the standard Borda procedure in a probabilistic fashion. <s> BIB016 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> In this article, we introduce the induced ordered weighted geometric (IOWG) operator and its properties. This is a more general type of OWG operator, which is based on the induced ordered weighted averaging (IOWA) operator. We provide some IOWG operators to aggregate multiplicative preference relations in group decision-making (GDM) problems. 
In particular, we present the importance IOWG (I-IOWG) operator, which induces the ordering of the argument values based on the importance of the information sources; the consistency IOWG (C-IOWG) operator, which induces the ordering of the argument values based on the consistency of the information sources; and the preference IOWG (P-IOWG) operator, which induces the ordering of the argument values based on the relative preference values associated with each one of them. We also provide a procedure to deal with “ties” regarding the ordering induced by the application of one of these IOWG operators. This procedure consists of a sequential application of the aforementioned IOWG operators. Finally, we analyze the reciprocity and consistency properties of the collective multiplicative preference relations obtained using IOWG operators. © 2004 Wiley Periodicals, Inc. <s> BIB017 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> The group decision-making framework with linguistic preference relations is studied. In this context, we assume that there exist several experts who may have different background and knowledge to solve a particular problem and, therefore, different linguistic term sets (multigranular linguistic information) could be used to express their opinions. The aim of this paper is to present a model of consensus support system to assist the experts in all phases of the consensus reaching process of group decision-making problems with multigranular linguistic preference relations. This consensus support system model is based on i) a multigranular linguistic methodology, ii) two consensus criteria, consensus degrees and proximity measures, and iii) a guidance advice system. The multigranular linguistic methodology permits the unification of the different linguistic domains to facilitate the calculus of consensus degrees and proximity measures on the basis of experts' opinions. The consensus degrees assess the agreement amongst all the experts' opinions, while the proximity measures are used to find out how far the individual opinions are from the group opinion. The guidance advice system integrated in the consensus support system model acts as a feedback mechanism, and it is based on a set of advice rules to help the experts change their opinions and to find out which direction that change should follow in order to obtain the highest degree of consensus possible. There are two main advantages provided by this model of consensus support system. Firstly, its ability to cope with group decision-making problems with multigranular linguistic preference relations, and, secondly, the figure of the moderator, traditionally presents in the consensus reaching process, is replaced by the guidance advice system, and in such a way, the whole group decision-making process is automated <s> BIB018 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> Various linguistic preference relations, including incomplete linguistic preference relation, consistent incomplete linguistic preference relation and acceptable incomplete linguistic preference relation, are introduced. Some desirable properties of the incomplete linguistic preference relation are studied. 
Based on the operational laws of the linguistic evaluation scale, and the acceptable incomplete linguistic preference relation with the least judgments, we develop a simple and practical method for constructing a consistent complete linguistic preference relation by using the additive transitivity property. The method not only relieves the decision maker of time pressure and makes sufficiently using of the provided preference information, but also maintains the decision maker's consistency level and avoids checking the consistency of linguistic preference relation. Furthermore, an approach to multi-person decision making based on incomplete linguistic preference relations is developed. The approach fuses all the consistent complete linguistic preference relations, constructed by using the individual acceptable incomplete linguistic preference relations with the least judgments, into a collective complete linguistic preference relation, and then the overall information corresponding to each decision alternative is aggregated. Finally, an illustrative example is given. <s> BIB019 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> The ordered weighted averaging (OWA) operator was developed by Yager [IEEE Trans. Syst., Man, Cybernet. 18 (1998) 183]. Later, Yager and Filev [IEEE Trans. Syst., Man, Cybernet.--Part B 29 (1999) 141] introduced a more general class of OWA operators called the induced ordered weighted averaging (IOWA) operators, which take as their argument pairs, called OWA pairs, in which one component is used to induce an ordering over the second components which are exact numerical values and then aggregated. The aim of this paper is to develop some induced uncertain linguistic OWA (IULOWA) operators, in which the second components are uncertain linguistic variables. Some desirable properties of the IULOWA operators are studied, and then, the IULOWA operators are applied to group decision making with uncertain linguistic information. <s> BIB020 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> The aim of this paper is to develop a continuous ordered weighted geometric (C-OWG) operator, which is based on the continuous ordered weighted averaging (C-OWA) operator recently introduced by the author and the geometric mean. We study some desirable properties of the C-OWG operator, and present its application to decision making with interval multiplicative preference relation, and finally, an illustrative example is pointed out. <s> BIB021 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> In decision-making problems there may be cases in which experts do not have an in-depth knowledge of the problem to be solved. In such cases, experts may not put their opinion forward about certain aspects of the problem, and as a result they may present incomplete preferences, i.e., some preference values may not be given or may be missing. In this paper, we present a new model for group decision making in which experts' preferences can be expressed as incomplete fuzzy preference relations. As part of this decision model, we propose an iterative procedure to estimate the missing information in an expert's incomplete fuzzy preference relation. This procedure is guided by the additive-consistency (AC) property and only uses the preference values the expert provides. 
The AC property is also used to measure the level of consistency of the information provided by the experts and also to propose a new induced ordered weighted averaging (IOWA) operator, the AC-IOWA operator, which permits the aggregation of the experts' preferences in such a way that more importance is given to the most consistent ones. Finally, the selection of the solution set of alternatives according to the fuzzy majority of the experts is based on two quantifier-guided choice degrees: the dominance and the nondominance degree <s> BIB022 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> Aggregation operators have been used and studied for a long time. More than ten different types of means (including the arithmetic, geometric, and harmonic ones) were already studied 2000 years ago. Nevertheless, this topic has gained relevance in recent years. This is partly due to the increasing need of methods and operators for fusing information within computer programs. In this paper, we will present a personal view of the field, focusing on the application issues and on averaging aggregation operators. We will point out some research topics and open lines for future research. <s> BIB023 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> This paper introduces a new approach to classification which combines pairwise decomposition techniques with ideas and tools from fuzzy preference modeling. More specifically, our approach first decomposes a polychotomous classification problem involving m classes into an ensemble of binary problems, one for each ordered pair of classes. The corresponding classifiers are trained on the relevant subsets of the (transformed) original training data. In the classification phase, a new query is submitted to every binary learner. The output of each classifier is interpreted as a fuzzy degree of preference for the first in comparison with the second class. By combining the outputs of all classifiers, one thus obtains a fuzzy preference relation which is taken as a point of departure for the final classification decision. This way, the problem of classification is effectively reduced to a problem of decision making based on a fuzzy preference relation. Corresponding techniques, which have been investigated quite intensively in the field of fuzzy set theory, hence become amenable to the task of classification. In particular, by decomposing a preference relation into a strict preference, an indifference, and an incomparability relation, this approach allows one to quantify different types of uncertainty in classification and thereby supports sophisticated classification and postprocessing strategies. <s> BIB024 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> In this paper, we investigate group decision making problems with multiple types of linguistic preference relations. The paper has two parts with similar structures. In the first part, we transform the uncertain additive linguistic preference relations into the expected additive linguistic preference relations, and present a procedure for group decision making based on multiple types of additive linguistic preference relations. 
By using the deviation measures between additive linguistic preference relations, we give some straightforward formulas to determine the weights of decision makers, and propose a method to reach consensus among the individual preferences and the group's opinion. In the second part, we extend the above results to group decision making based on multiple types of multiplicative linguistic preference relations, and finally, a practical example is given to illustrate the application of the results. <s> BIB025 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> The lack of consistency in decision making can lead to inconsistent conclusions. In fuzzy analytic hierarchy process (fuzzy AHP) method, it is difficult to ensure a consistent pairwise comparison. Furthermore, establishing a pairwise comparison matrix requires nx(n-1)2 judgments for a level with n criteria (alternatives). The number of comparisons increases as the number of criteria increases. Therefore, the decision makers judgments will most likely be inconsistent. To alleviate inconsistencies, this study applies fuzzy linguistic preference relations (Fuzzy LinPreRa) to construct a pairwise comparison matrix with additive reciprocal property and consistency. In this study, the fuzzy AHP method is reviewed, and then the Fuzzy LinPreRa method is proposed. Finally, the presented method is applied to the example addressed by Kahraman et al. [C. Kahraman, D. Ruan, I. Dogan, Fuzzy group decision making for facility location selection, Information Sciences 157 (2003) 135-153]. This study reveals that the proposed method yields consistent decision rankings from only n-1 pairwise comparisons, which is the same result as in Kahraman et al. research. The presented fuzzy linguistic preference relations method is an easy and practical way to provide a mechanism for improving consistency in fuzzy AHP method. <s> BIB026 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> Consistency of preferences is related to rationality, which is associated with the transitivity property. Many properties suggested to model transitivity of preferences are inappropriate for reciprocal preference relations. In this paper, a functional equation is put forward to model the ldquocardinal consistency in the strength of preferencesrdquo of reciprocal preference relations. We show that under the assumptions of continuity and monotonicity properties, the set of representable uninorm operators is characterized as the solution to this functional equation. Cardinal consistency with the conjunctive representable cross ratio uninorm is equivalent to Tanino's multiplicative transitivity property. Because any two representable uninorms are order isomorphic, we conclude that multiplicative transitivity is the most appropriate property for modeling cardinal consistency of reciprocal preference relations. Results toward the characterization of this uninorm consistency property based on a restricted set of (n-1) preference values, which can be used in practical cases to construct perfect consistent preference relations, are also presented. <s> BIB027 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> Let us consider a preferential information of type preference-indifference-incomparability (P, I, J), with additional information about differences in attractiveness between pairs of alternatives. 
The present paper offers a theoretical framework for the study of the "level of constraint" of this kind of partial preferential information. It suggests a number of structures as potential models being less demanding than the classical one in which differences in utilities can be used to represent the comparison of differences in attractiveness. The models are characterized in the more general context of families of non-complete preference structures, according to two different perspectives (called "semantico-numerical" and "matrix"). Both perspectives open the door to further practical applications connected with elicitation of the preferences of a decision maker. <s> BIB028 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> In this paper, the concept of multiplicative transitivity of a fuzzy preference relation, as defined by Tanino T. Tanino, Fuzzy preference orderings in group decision-making, Fuzzy Sets and Systems 12 (1984) 117-131, is extended to discover whether an interval fuzzy preference relation is consistent or not, and to derive the priority vector of a consistent interval fuzzy preference relation. We achieve this by introducing the concept of interval multiplicative transitivity of an interval fuzzy preference relation and show that, by solving numerical examples, the test of consistency and the weights derived by the simple formulas based on the interval multiplicative transitivity produce the same results as those of linear programming models proposed by Xu and Chen Z.S. Xu, J. Chen, Some models for deriving the priority weights from interval fuzzy preference relations, European Journal of Operational Research 184 (2008) 266-280. In addition, by taking advantage of interval multiplicative transitivity of an interval fuzzy preference relation, we put forward two approaches to estimate missing value(s) of an incomplete interval fuzzy preference relation, and present numerical examples to illustrate these two approaches. <s> BIB029 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> When addressing decision-making problems, decision-makers typically express their opinions using fuzzy preference relations. In some instances, decision-makers may have to deal with the problems in which only partial information is available. Consequently, decision-makers embody their preferences as incomplete fuzzy preference relations. The values of incomplete fuzzy preference relations have been considered crisp in recent studies. To allow decision-makers to provide vague or imprecise responses, this study proposes a novel method, called incomplete fuzzy linguistic preference relations, that uses fuzzy linguistic assessment variables instead of crisp values of incomplete fuzzy preference relations to ensure comparison consistency. The proposed method reflects an environment in which some uncertainty or vagueness exists. Examples are included that illustrate the effectiveness of the proposed method. <s> BIB030 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> In this paper, we present a new approach for fuzzy multiple attributes group decision-making based on fuzzy preference relations. First, we construct fuzzy importance matrices for decision-makers with respect to attributes and construct fuzzy evaluating matrices for decision-makers with respect to the attributes of the alternatives. 
Based on the fuzzy importance matrices and the fuzzy evaluating matrices, we construct fuzzy rating vectors for decision-makers with respect to the alternatives. Then, we defuzzify the trapezoidal fuzzy numbers in the constructed fuzzy rating vectors to get the rating vectors for the decision-makers. Based on the rating vectors, we construct fuzzy preference relations for the decision-makers with respect to the alternatives. Based on the fuzzy preference relations, we calculate the average rating value of each decision-maker with respect to the alternatives. Then, we sort these average rating values in a descending sequence and assign them different scores. Then, we calculate the summation values of the scores of the alternatives with respect to each decision-maker, respectively. The larger the summation values of the scores, the better the choice of the alternative. The proposed method is simpler than Chen's method (2000) and Li's method (2007) for handling fuzzy multiple attributes group decision-making problems. It provides us with a useful way to handle fuzzy multiple attributes group decision-making problems. <s> BIB031 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> Linguistic preference relation is a useful tool for expressing preferences of decision makers in group decision making according to linguistic scales. But in the real decision problems, there usually exist interactive phenomena among the preference of decision makers, which makes it difficult to aggregate preference information by conventional additive aggregation operators. Thus, to approximate the human subjective preference evaluation process, it would be more suitable to apply non-additive measures tool without assuming additivity and independence. In this paper, based on @l-fuzzy measure, we consider dependence among subjective preference of decision makers to develop some new linguistic aggregation operators such as linguistic ordered geometric averaging operator and extended linguistic Choquet integral operator to aggregate the multiplicative linguistic preference relations and additive linguistic preference relations, respectively. Further, the procedure and algorithm of group decision making based on these new linguistic aggregation operators and linguistic preference relations are given. Finally, a supplier selection example is provided to illustrate the developed approaches. <s> BIB032 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> Abstract Although the analytic hierarchy process (AHP) and the extent analysis method (EAM) of fuzzy AHP are extensively adopted in diverse fields, inconsistency increases as hierarchies of criteria or alternatives increase because AHP and EAM require rather complicated pairwise comparisons amongst elements (attributes or alternatives). Additionally, decision makers normally find that assigning linguistic variables to judgments is simpler and more intuitive than to fixed value judgments. Hence, Wang and Chen proposed fuzzy linguistic preference relations (Fuzzy LinPreRa) to address the above problem. This study adopts Fuzzy LinPreRa to re-examine three numerical examples. The re-examination is intended to compare our results with those obtained in earlier works and to demonstrate the advantages of Fuzzy LinPreRa. This study demonstrates that, in addition to reducing the number of pairwise comparisons, Fuzzy LinPreRa also increases decision making efficiency and accuracy. 
<s> BIB033 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> When there are n criteria or alternatives in a decision matrix, a pairwise comparison methodology of analytic hierarchy process (AHP) with the time of n(n-1)/2 is frequently used to select, evaluate or rank the neighboring alternatives. But while the number of criteria or comparison level increase, the efficiency and consistency of a decision matrix decrease. To solve such problems, this study therefore uses horizontal, vertical and oblique pairwise comparisons algorithm to construct multi-criteria decision making with incomplete linguistic preference relations model (InLinPreRa). The use of pairwise comparisons will not produce the inconsistency, even allows every decision maker to choose an explicit criterion or alternative for index unrestrictedly. When there are n criteria, only n-1 pairwise comparisons need to be carried out, then one can rest on incomplete linguistic preference relations to obtain the priority value of alternative for the decision maker's reference. The decision making assessment model that constructed by this study can be extensively applied to every field of decision science and serves as the reference basis for the future research. <s> BIB034 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> In group decision making (GDM) with multiplicative preference relations (also known as pairwise comparison matrices in the Analytical Hierarchy Process), to come to a meaningful and reliable solution, it is preferable to consider individual consistency and group consensus in the decision process. This paper provides a decision support model to aid the group consensus process while keeping an acceptable individual consistency for each decision maker. The concept of an individual consistency index and a group consensus index is introduced based on the Hadamard product of two matrices. Two algorithms are presented in the designed support model. The first algorithm is utilized to convert an unacceptable preference relation to an acceptable one. The second algorithm is designed to assist the group in achieving a predefined consensus level. The main characteristics of our model are that: (1) it is independent of the prioritization method used in the consensus process; (2) it ensures that each individual multiplicative preference relation is of acceptable consistency when the predefined consensus level is achieved. Finally, some numerical examples are given to verify the effectiveness of our model. <s> BIB035 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> In analyzing a multiple criteria decision-making problem, the decision maker may express her/his opinions as an interval fuzzy or multiplicative preference relation. Then it is an interesting and important issue to investigate the consistency of the preference relations and obtain the reliable priority weights. In this paper, a new consistent interval fuzzy preference relation is defined, and the corresponding properties are derived. The transformation formulae between interval fuzzy and multiplicative preference relations are further given, which show that two preference relations, consistent interval fuzzy and multiplicative preference relations, can be transformed into each other. Based on the transformation formula, the definition of acceptably consistent interval fuzzy preference relation is given. 
Furthermore a new algorithm for obtaining the priority weights from consistent or inconsistent interval fuzzy preference relations is presented. Finally, three numerical examples are carried out to compare the results using the proposed method with those using other existing procedures. The numerical results show that the given procedure is feasible, effective and not requisite to solve any mathematical programing. <s> BIB036 </s> Ordering based decision making - A survey <s> Decision making based on preference relations <s> Two new hybrid weighted averaging operators for aggregating crisp and fuzzy information are proposed, some of which desirable properties are studied. These operators helps us to overcome the drawback in the existed reference. With respect to the proposed operators, three special types of preferred centroid of triangular fuzzy number are defined. On the base of these preferred centroid, we develop two algorithms to deal with decision making problems. Two numerical examples are provided to illustrate the practicality and validity of the proposed methods. <s> BIB037
Preference relations, generated from pairwise comparison of alternatives, are widely used to model experts' preferences in real decision-making problems. A preference relation R is usually modelled by a preference structure, a triplet (P, I, J) of three binary relations, strict preference, indifference and incomparability, on a finite set of alternatives X, which satisfy BIB027 : 1) P is irreflexive and asymmetrical; 2) I is reflexive and symmetrical; 3) J is irreflexive and symmetrical. Taking Fig. 4 as an example, the relation can be expressed as a preference structure (P, I, J). Since preference structures are restricted to classical (crisp) relations, preference degrees cannot be expressed, which is seen as an important drawback from the practical point of view BIB012 . There are generally three forms of preference relations that extend the classical case: (1) Multiplicative preference relations BIB013 BIB017 BIB003 BIB005 BIB035 : In a multiplicative preference relation, the decision maker's preferences on a set of alternatives X are represented by a positive matrix A = (a_ij) on X × X, whose element a_ij is the intensity of preference of alternative x_i over x_j, usually measured on a numerical ratio scale. The most popular ratio scale is the 1-9 scale suggested by Saaty BIB003 BIB005 . It is usually supposed that the multiplicative preference relation is multiplicatively reciprocal, i.e., a_ij · a_ji = 1 for all i, j ∈ {1, …, n}. (2) Fuzzy (valued) preference relations BIB014 BIB031 BIB008 BIB002 BIB022 : In this case, the decision maker's preferences on the set of alternatives X are described by a fuzzy relation R (sometimes called a weak preference relation) on X × X, with an associated membership function μ_R : X × X → [0, 1]. This is usually represented by an n × n fuzzy relation matrix R = (r_ij), where r_ij = μ_R(x_i, x_j) denotes the degree of preference of alternative x_i over x_j: r_ij = 1 means that x_i is totally preferred to x_j; r_ij = 0.5 indicates the decision maker's indifference between x_i and x_j; and r_ij > 0.5 means that the decision maker prefers x_i to x_j. Similarly, it is usually supposed that the preference matrix R is additive reciprocal, i.e., r_ij + r_ji = 1 for all i, j ∈ {1, …, n}, which implies r_ii = 0.5. (3) Linguistic preference relations BIB010 BIB018 BIB032 BIB019 : Unlike multiplicative and fuzzy preference relations, which use crisp numerical values to express the intensity of preference of one alternative over another, a linguistic preference relation uses linguistic terms to indicate this preference level when numerical preferences are not available or are difficult to obtain. Similarly, a linguistic preference relation on X is denoted by a matrix B = (b_ij), whose element b_ij ∈ L, where L is the set of all linguistic terms considered, indicates the linguistic degree of preference of alternative x_i over x_j and satisfies the reciprocity condition b_ij ⊕ b_ji = s_0, where ⊕ is the operation on linguistic terms defined as s_α ⊕ s_β = s_{α+β}. A good number of studies have been made on fuzzy preference modelling and the axiomatic construction of fuzzy preference structures (see e.g. BIB012 BIB002 BIB009 BIB006 BIB024 BIB028 ), which serve as the theoretical foundation of preference based decision making. The key issue here is how to decompose a weak preference relation R into a strict preference relation P, an indifference relation I, and an incomparability relation J such that (P, I, J) is a fuzzy preference structure; the axiomatic construction is mainly based on a De Morgan triplet (T, S, N), which consists of a t-norm T, its dual t-conorm S, and a strong negation N.
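As a concrete illustration of this decomposition, the sketch below uses the Łukasiewicz t-norm, one common De Morgan triplet choice in the fuzzy preference literature (e.g., in the Fodor–Roubens tradition). The formulas and function name are illustrative assumptions, not the unique construction endorsed by the survey.

```python
import numpy as np

def decompose_weak_preference(R):
    """Decompose a weak (fuzzy) preference relation R into strict preference P,
    indifference I, and incomparability J using the Lukasiewicz t-norm
    T_L(x, y) = max(x + y - 1, 0).  This is only one of the constructions
    discussed in the axiomatic literature."""
    R = np.asarray(R, dtype=float)
    Rt = R.T                           # converse relation R^t
    P = np.maximum(R - Rt, 0.0)        # T_L(R, 1 - R^t): strict preference
    I = np.maximum(R + Rt - 1.0, 0.0)  # T_L(R, R^t):     indifference
    J = np.maximum(1.0 - R - Rt, 0.0)  # T_L(1-R, 1-R^t): incomparability
    return P, I, J

# A small weak preference relation on three alternatives
R = np.array([[0.5, 0.7, 0.6],
              [0.3, 0.5, 0.5],
              [0.4, 0.5, 0.5]])
P, I, J = decompose_weak_preference(R)
print("P =\n", P, "\nI =\n", I, "\nJ =\n", J)
```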
A fuzzy preference structure on X is a triplet (P, I, J) of fuzzy relations satisfying 1) P and J are irreflexive, I is reflexive; 2) P is T-asymmetrical (P ∩_T P^t = ∅), I and J are symmetrical; 3) P, I and J are pairwise disjoint with respect to the T-intersection, i.e., P ∩_T I = ∅, P ∩_T J = ∅ and I ∩_T J = ∅; 4) a completeness condition holds, typically expressed via the S-union, e.g., P ∪_S P^t ∪_S I ∪_S J = X × X; where P ∩_T I is the T-intersection of the two fuzzy sets P and I, and P ∪_S I is their S-union. Condition 4) is one of the numerous completeness conditions for fuzzy preference structures, and it is where the differences between crisp and fuzzy preference structures, and among different fuzzy preference structures, mainly come from (please refer to BIB012 for more details). In order to illustrate the preference relation based decision making approaches, we use the following simple but representative example, which is adapted from BIB025 , throughout this section: Example 4. Suppose the information management steering committee of a company, which comprises (1) E_1: the Chief Executive Officer, (2) E_2: the Chief Information Officer, and (3) E_3: the Chief Operating Officer, must prioritize for development and implementation a set of six information technology improvement projects x_j (j = 1, 2, …, 6): x_1: Quality Management Information, x_2: Inventory Control, x_3: Customer Order Tracking, x_4: Materials Purchasing Management, x_5: Fleet Management, x_6: Design Change Management. The committee is concerned that the projects are prioritized from highest to lowest potential contribution to the firm's strategic goal of gaining competitive advantage in the industry. In assessing the potential contribution of each project, one main factor considered is productivity. This is a typical ordering based decision making problem in which the projects are ranked according to some criteria. All three representations can be used to express the preference relations provided by the committee in Example 4 (project evaluation), and all the preference relations are expressed by 6 × 6 matrices. The only difference is that the elements of the preference relation matrices take different forms depending on the information the committee provides: numbers ranging from 1 to 9 for a multiplicative preference relation, numbers ranging from 0 to 1 for a fuzzy preference relation, and linguistic terms for a linguistic preference relation. Taking the linguistic preference relation as an example, the preference relation about the six projects provided by the Chief Executive Officer could be a 6 × 6 matrix B_1 whose elements are drawn from the linguistic term set S = {s_{-4} = extremely low, s_{-3} = very low, s_{-2} = low, s_{-1} = slightly low, s_0 = fair, s_1 = slightly high, s_2 = high, s_3 = very high, s_4 = extremely high}. The preference relations provided by the other committee members can be obtained similarly as B_2 and B_3. In order to avoid the difficulty of providing accurate numerical values for the decision maker's preference degrees, interval multiplicative preference relations BIB004 and interval fuzzy preference relations BIB029 BIB036 were proposed, which use interval numbers as the judgements of the decision maker's preference. Wang et al. BIB033 BIB030 BIB026 BIB034 proposed a hybrid method, called fuzzy linguistic preference relations, for representing preference relations under linguistic environments by expressing linguistic terms as fuzzy numbers (P_ij^L, P_ij^M, P_ij^R), which can be seen as a special case of linguistic preference relation. Decision making approaches with preference relations usually proceed from the aggregation point of view.
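To make the linguistic representation of Example 4 concrete, the sketch below encodes the term set S as indexed labels s_{-4}, …, s_4 and builds a small additively reciprocal linguistic preference matrix. The entries of the sample matrix are invented for illustration only, since the original matrices are not reproduced here.

```python
# Linguistic term set S of Example 4, indexed from -4 (extremely low) to 4 (extremely high)
TERMS = {-4: "extremely low", -3: "very low", -2: "low", -1: "slightly low",
          0: "fair", 1: "slightly high", 2: "high", 3: "very high", 4: "extremely high"}

def make_linguistic_relation(upper):
    """Build a complete linguistic preference matrix (as term indexes) from
    strictly upper-triangular judgements, enforcing the reciprocity
    b_ij (+) b_ji = s_0, i.e. index(b_ji) = -index(b_ij), and b_ii = s_0."""
    n = len(upper) + 1
    B = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            B[i][j] = upper[i][j - i - 1]
            B[j][i] = -B[i][j]
    return B

# Hypothetical judgements of expert E1 over projects x1..x4 (entries invented for illustration)
B1 = make_linguistic_relation([[2, 1, 3],   # x1 vs x2, x3, x4
                               [-1, 2],     # x2 vs x3, x4
                               [1]])        # x3 vs x4
for row in B1:
    print([TERMS[v] for v in row])
```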
Once the information is expressed uniformly by preference relations, aggregation algorithms are applied to obtain the collective preference relation from all the individual preference relations. An exploitation phase over the collective preference values is then carried out to establish a rank ordering among the alternatives and to choose the best alternatives by using the principle of fuzzy majority or consensus BIB013 . The methods for aggregating preference relations are mainly based on the OWA (Ordered Weighted Averaging) operator proposed by Yager and its further developments, such as the Linguistic OWA BIB010 , Weighted OWA and Linguistic Weighted OWA BIB011 , Ordered Weighted Geometric Averaging Operator BIB015 , Induced Ordered Weighted Geometric Operators BIB017 , Induced Linguistic OWA BIB020 , continuous Ordered Weighted Geometric Operator BIB021 , Lattice-based Linguistic-Valued Weighted Aggregation Operator , and some hybrid Weighted Averaging Operators BIB037 . Overviews of these aggregation operators can be found in BIB037 BIB007 BIB023 . The original and widely used OWA operator aggregates a collection of values by always assigning the i-th weighting factor to the i-th largest value, which is why it is called the Ordered Weighted Averaging aggregation operator. For aggregating qualitative labels, the corresponding aggregation operators use linguistic labels with indexes to represent linguistic terms and operate on these indexes. Consider Example 4 and suppose that the preference relation matrices take the form of Eq. (2). Then linguistic aggregation operators, e.g., the linguistic weighted arithmetic averaging (LWAA) operator, can be used to aggregate the preference relations B_1, B_2 and B_3 into the collective preference relation B, whose element is b_ij = w_1 b_ij^(1) ⊕ w_2 b_ij^(2) ⊕ w_3 b_ij^(3), where w = (w_1, w_2, w_3)^T is the weighting vector of B_1, B_2 and B_3 and b_ij^(k) is the ij-th element of B_k. The detailed procedure can be found in BIB025 and other related articles. Generally, decision making approaches with preference relations require that the preference between each pair of alternatives is known, which in effect imposes a total order on the alternatives. In fact, the incomparability relation in a crisp preference structure is always treated as a special case of the indifference relation BIB016 BIB001 , while in a fuzzy preference structure the relation matrix cannot express incomparability, since r_ij = 0.5 indicates indifference between two alternatives. However, in many real problems decision makers may only be able to provide their preferences on a subset of all the alternatives, due to incompleteness of information, unclear evaluations, and so on. This kind of preference appears exactly as a partial order.
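A minimal sketch of the two aggregation steps mentioned above is given below: the classical OWA operator on numerical arguments, and an element-wise LWAA-style aggregation of linguistic preference matrices represented by term indexes. The weighting vectors and the convention of rounding back to the nearest term index are illustrative assumptions.

```python
import numpy as np

def owa(values, weights):
    """Ordered Weighted Averaging: the i-th weight is applied to the
    i-th largest argument, regardless of which source supplied it."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]   # descending order
    return float(np.dot(np.asarray(weights, dtype=float), v))

def lwaa(matrices, weights):
    """Element-wise linguistic weighted arithmetic averaging of preference
    matrices given as arrays of term indexes; the result is rounded back
    to the nearest term index (one common convention, assumed here)."""
    agg = sum(w * np.asarray(B, dtype=float) for w, B in zip(weights, matrices))
    return np.round(agg).astype(int)

print(owa([0.6, 0.9, 0.3], [0.5, 0.3, 0.2]))   # 0.5*0.9 + 0.3*0.6 + 0.2*0.3 = 0.69

# Three experts' linguistic preference matrices over three alternatives (term indexes)
B1 = [[0, 2, 1], [-2, 0, 1], [-1, -1, 0]]
B2 = [[0, 1, 2], [-1, 0, 0], [-2, 0, 0]]
B3 = [[0, 2, 2], [-2, 0, 1], [-2, -1, 0]]
print(lwaa([B1, B2, B3], [0.4, 0.3, 0.3]))
```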
Ordering based decision making - A survey <s> Decision making with Preference ordering <s> Originally published in 1951, Social Choice and Individual Values introduced "Arrow's Impossibility Theorem" and founded the field of social choice theory in economics and political science. This new edition, including a new foreword by Nobel laureate Eric Maskin, reintroduces Arrow's seminal book to a new generation of students and researchers. "Far beyond a classic, this small book unleashed the ongoing explosion of interest in social choice and voting theory. A half-century later, the book remains full of profound insight: its central message, 'Arrow's Theorem,' has changed the way we think."-Donald G. Saari, author of Decisions and Elections: Explaining the Unexpected <s> BIB001 </s> Ordering based decision making - A survey <s> Decision making with Preference ordering <s> The problem of aggregating n fuzzy sets F1, F2,.., Fn on a set ω is viewed as one of merging the opinions of n individuals (e.g. experts) that rate objects belonging to ω. This approach contrasts with the pure set-theoretic point of view, and leads to interpreting already known axioms underlying fuzzy connectives in a way different from that of multiple criteria aggregation. Various natural properties of a voting procedure, including the ones proposed by Arrow are expressed in the fuzzy set setting. A number of conditions limiting the choice of fuzzy set operations are proposed and classified according to whether they are imperative, mainly technical, or facultative. Families of solutions are obtained that include those proposed in earlier works. The case of non-homogeneous groups is briefly examined. Lastly the application of the voting paradigm to the management of antagonistic local decision rules in knowledge-based systems is outlined. <s> BIB002 </s> Ordering based decision making - A survey <s> Decision making with Preference ordering <s> We introduce a general framework for constraint satisfaction and optimization where classical CSPs, fuzzy CSPs, weighted CSPs, partial constraint satisfaction, and others can be easily cast. The framework is based on a semiring structure, where the set of the semiring specifies the values to be associated with each tuple of values of the variable domain, and the two semiring operations (+ and X) model constraint projection and combination respectively. Local consistency algorithms, as usually used for classical CSPs, can be exploited in this general framework as well, provided that certain conditions on the semiring operations are satisfied. We then show how this framework can be used to model both old and new constraint solving and optimization schemes, thus allowing one to both formally justify many informally taken choices in existing schemes, and to prove that local consistency techniques can be used also in newly defined schemes. <s> BIB003 </s> Ordering based decision making - A survey <s> Decision making with Preference ordering <s> We extend the Constraint Logic Programming (CLP) formalism in order to handle semiring-based constraints. This allows us to perform in the same language both constraint solving and optimization. In fact, constraints based on semirings are able to model both classical constraint solving and more sophisticated features like uncertainty, probability, fuzziness, and optimization. We then provide this class of languages with three equivalent semantics: model-theoretic, fix-point, and proof-theoretic, in the style of classical CLP programs. 
<s> BIB004 </s> Ordering based decision making - A survey <s> Decision making with Preference ordering <s> An alternative voting system, referred to as probabilistic Borda rule, is developed and analyzed. The winning alternative under this system is chosen by lottery where the weights are determined from each alternative’s Borda score relative to all Borda points possible. Advantages of the lottery include the elimination of strategic voting on the set of alternatives under consideration and breaking the tyranny of majority coalitions. Disadvantages include an increased incentive for strategic introduction of new alternatives to alter the lottery weights, and the possible selection of a Condorcet loser. Normative axiomatic properties of the system are also considered. It is shown this system satisfies the axiomatic properties of the standard Borda procedure in a probabilistic fashion. <s> BIB005 </s> Ordering based decision making - A survey <s> Decision making with Preference ordering <s> We introduce the notion of combinatorial vote, where a group of agents (or voters) is supposed to express preferences and come to a common decision concerning a set of non-independent variables to assign. We study two key issues pertaining to combinatorial vote, namely preference representation and the automated choice of an optimal decision. For each of these issues, we briefly review the state of the art, we try to define the main problems to be solved and identify their computational complexity. <s> BIB006 </s> Ordering based decision making - A survey <s> Decision making with Preference ordering <s> This paper addresses the problem of merging uncertain information in the framework of possibilistic logic. It presents several syntactic combination rules to merge possibilistic knowledge bases, provided by different sources, into a new possibilistic knowledge base. These combination rules are first described at the meta-level outside the language of possibilistic logic. Next, an extension of possibilistic logic, where the combination rules are inside the language, is proposed. A proof system in a sequent form is given, sound and complete with respect to the possibilistic logic semantics. Possibilistic fusion modes are illustrated on an example inspired from an application of a localization of a robot in an uncertain environment. <s> BIB007 </s> Ordering based decision making - A survey <s> Decision making with Preference ordering <s> Representing and reasoning with an agent's preferences is important in many applications of constraints formalisms. Such preferences are often only partially ordered. One class of soft constraints formalisms, semiring-based CSPs, allows a partially ordered set of preference degrees, but this set must form a distributive lattice; whilst this is convenient computationally, it considerably restricts the representational power. This paper constructs a logic of soft constraints where it is only assumed that the set of preference degrees is a partially ordered set, with a maximum element 1 and a minimum element 0. When the partially ordered set is a distributive lattice, this reduces to the idempotent semiring-based CSP approach, and the lattice operations can be used to define a sound and complete proof theory. A generalised possibilistic logic, based on partially ordered values of possibility, is also constructed, and shown to be formally very strongly related to the logic of soft constraints. 
It is shown how the machinery that exists for the distributive lattice case can be used to perform sound and complete deduction, using a compact embedding of the partially ordered set in a distributive lattice. <s> BIB008 </s> Ordering based decision making - A survey <s> Decision making with Preference ordering <s> Many real life optimization problems are defined in terms of both hard and soft constraints, and qualitative conditional preferences. However, there is as yet no single framework for combined reasoning about these three kinds of information. In this paper we study how to exploit classical and soft constraint solvers for handling qualitative preference statements such as those captured by the CP-nets model. In particular, we show how hard constraints are sufficient to model the optimal outcomes of a possibly cyclic CP-net, and how soft constraints can faithfully approximate the semantics of acyclic conditional preference statements whilst improving the computational efficiency of reasoning about these statements. <s> BIB009 </s> Ordering based decision making - A survey <s> Decision making with Preference ordering <s> We consider how to combine the preferences of multiple agents despite the presence of incompleteness and incomparability in their preference orderings. An agent's preference ordering may be incomplete because, for example, there is an ongoing preference elicitation process. It may also contain incomparability as this is useful, for example, in multi-criteria scenarios. We focus on the problem of computing the possible and necessary winners, that is, those outcomes which can be or always are the most preferred for the agents. Possible and necessary winners are useful in many scenarios including preference elicitation. First we show that computing the sets of possible and necessary winners is in general a difficult problem as is providing a good approximation of such sets. Then we identify general properties of the preference aggregation function which are sufficient for such sets to be computed in polynomial time. Finally, we show how possible and necessary winners can be used to focus preference elicitation. <s> BIB010 </s> Ordering based decision making - A survey <s> Decision making with Preference ordering <s> We present a framework for decision-making in relation to disaster management with a focus on situation assessment during disaster management monitoring. The use of causality reasoning based on the temporal evolution of a scenario provides a natural way to chain meaningful events and possible states of the system. There are usually different ways to analyse a problem and different strategies to follow as a solution and it is also often the case that information originating in different sources can be inconsistent or unreliable. Therefore we allow the specification of possibly conflicting situations as they are typical elements in disaster management. A decision procedure to decide on those conflicting situations is presented which not only provides a framework for the assistance of one decision-maker but also how to handle opinions from a hierarchy of decision-makers. <s> BIB011 </s> Ordering based decision making - A survey <s> Decision making with Preference ordering <s> This paper addresses the problem of expressing preferences among non- functional properties of services in a Web service architecture. 
In such a context, semantic and non-functional annotations are required on service declarations and business process calls to services in order to select the best available service for each invocation. To cope with these multi-criteria decision problems, conditional and unconditional preferences are managed using a new variant of conditional preference networks (CP-nets), taking into account uncertainty related to the preferences to achieve a better satisfaction rate. This variant, called LCP-nets, uses fuzzy linguistic information inside the whole process, from preference elicitation to outcome query computation, a qualitative approach that is more suitable to business process programmers. Indeed, in LCP-nets, preference variables and utilities take linguistic values while conditional preference tables are considered as fuzzy rules which interdependencies may be complex. The expressiveness of the graphical model underlying CP-nets provides for solutions to gather all the preferences under uncertainty and to tackle interdependency problems. LCP-nets are applied to the problem of selecting the best service among a set of offers, given their dynamic non-functional properties. The implementation of LCP-nets is presented step-by-step through a real world example. <s> BIB012 </s> Ordering based decision making - A survey <s> Decision making with Preference ordering <s> Information about user preferences plays a key role in automated decision making. In many domains it is desirable to assess such preferences in a qualitative rather than quantitative way. In this paper, we propose a qualitative graphical representation of preferences that reflects conditional dependence and independence of preference statements under a ceteris paribus (all else being equal) interpretation. Such a representation is often compact and arguably quite natural in many circumstances. We provide a formal semantics for this model, and describe how the structure of the network can be exploited in several inference tasks, such as determining whether one outcome dominates (is preferred to) another, ordering a set of outcomes according to the preference relation, and constructing the best outcome subject to available evidence. <s> BIB013 </s> Ordering based decision making - A survey <s> Decision making with Preference ordering <s> A CP-nets-based model for consumer-centric information service composition is proposed. Moreover, an algorithm was provided to construct the model. The model can explicitly express the diversity and individuality of users' needs, and the combinational logic between services graphically, and verify the correctness of services composition by using the reachability, safeness, boundedness, liveness and fairness of CP-nets. Finally, there is an example, a concrete consumer-centric information service composition for a system is modeled and validated based on CP-nets method. It shows that this modeling approach has enough capability of expressing and verifying consumer-centric information service composition. <s> BIB014 </s> Ordering based decision making - A survey <s> Decision making with Preference ordering <s> It is shown how simple geometry can be used to analyze and discover new properties about pairwise and positional voting rules as well as for those rules (e.g., runoffs and Approval Voting) that rely on these methods. The description starts by providing a geometric way to depict profiles, which simplifies the computation of the election outcomes.
This geometry is then used to motivate the development of a “profile coordinate system,” which evolves into a tool to analyze voting rules. This tool, for instance, completely explains various longstanding “paradoxes,” such as why a Condorcet winner need not be elected with certain voting rules. A different geometry is developed to indicate whether certain voting “oddities” can be dismissed or must be taken seriously, and to explain why other mysteries, such as strategic voting and the no-show paradox (where a voter is rewarded by not voting), arise. Still another use of geometry extends McGarvey's Theorem about possible pairwise election rankings to identify the actual tallies that can arise (a result that is needed to analyze supermajority voting). Geometry is also developed to identify all possible positional and Approval Voting election outcomes that are admitted by a given profile; the converse becomes a geometric tool that can be used to discover new election relationships. Finally, it is shown how lessons learned in social choice, such as the seminal Arrow's and Sen's Theorems and the expanding literature about the properties of positional rules, provide insights into difficulties that are experienced by other disciplines. <s> BIB015 </s> Ordering based decision making - A survey <s> Decision making with Preference ordering <s> In Majority Judgment, Michel Balinski and Rida Laraki argue that the traditional theory of social choice offers no acceptable solution to the problems of how to elect, to judge, or to rank. They find that the traditional model--transforming the "preference lists" of individuals into a "preference list" of society--is fundamentally flawed in both theory and practice. Balinski and Laraki propose a more realistic model. It leads to an entirely new theory and method--majority judgment--proven superior to all known methods. It is at once meaningful, resists strategic manipulation, elicits honesty, and is not subject to the classical paradoxes encountered in practice, notably Condorcet's and Arrow's. They offer theoretical, practical, and experimental evidence--from national elections to figure skating competitions--to support their arguments. Drawing on insights from wine, sports, music, and other competitions, Balinski and Laraki argue that the question should not be how to transform many individual rankings into a single collective ranking, but rather, after defining a common language of grades to measure merit, how to transform the many individual evaluations of each competitor into a single collective evaluation of all competitors. The crux of the matter is a new model in which the traditional paradigm--to compare--is replaced by a new paradigm--to evaluate. <s> BIB016 </s> Ordering based decision making - A survey <s> Decision making with Preference ordering <s> We propose a directed graphical representation of utility functions, called UCP-networks, that combines aspects of two existing preference models: generalized additive models and CP-networks. The network decomposes a utility function into a number of additive factors, with the directionality of the arcs reflecting conditional dependence in the underlying (qualitative) preference ordering under a ceteris paribus interpretation. The CP-semantics ensures that computing optimization and dominance queries is very efficient. We also demonstrate the value of this representation in decision making. 
Finally, we describe an interactive elicitation procedure that takes advantage of the linear nature of the constraints on "tradeoff weights" imposed by a UCP-network. <s> BIB017 </s> Ordering based decision making - A survey <s> Decision making with Preference ordering <s> We investigate the computational complexity of testing dominance and consistency in CP-nets. Previously, the complexity of dominance has been determined for restricted classes in which the dependency graph of the CP-net is acyclic. However, there are preferences of interest that define cyclic dependency graphs; these are modeled with general CP-nets. In our main results, we show here that both dominance and consistency for general CP-nets are PSPACE-complete. We then consider the concept of strong dominance, dominance equivalence and dominance incomparability, and several notions of optimality, and identify the complexity of the corresponding decision problems. The reductions used in the proofs are from STRIPS planning, and thus reinforce the earlier established connections between both areas. <s> BIB018
Many decision making applications, especially in socio-economic areas, aim to order alternatives based on preference, that is, to rank alternatives for a group of people based on each member's preferences over subsets of the alternatives. Many of these approaches have been applied in the Social Choice area, which blends elements of welfare economics and voting theory. In most real cases, preferences appear as partially ordered structures due to incompleteness of knowledge, ambiguity of opinions, and so on BIB008, which makes the aggregation process much more complex and challenging. For decision making under this situation, a preference aggregation or inference procedure needs to be applied to combine these partial orders to produce an overall preference ordering, and this again can be a partial order. Soft constraints BIB003 BIB004 BIB009 are one of the popular methods for representing and aggregating quantitative preferences. Soft constraints were originally proposed to overcome the limitations of classical methods for constraint satisfaction problems (CSPs) under fuzzy or incomplete situations. Here, each soft constraint associates a value from a partially ordered set with a set of variables. These values can be interpreted as degrees of preference, levels of consistency, or probabilities. The set of preference values is modelled by a semiring structure, a domain with two operations: an additive operation "+" for ordering alternatives and a multiplicative operation "×" for combining preferences to obtain the result. A relation "≤" is also defined over the domain by a ≤ b iff a + b = b, which makes the set a partially ordered set. Soft constraints are the main tool for representing and reasoning about preferences in constraint satisfaction problems due to their expressive power. However, the requirement of specifying a numeric semiring value for each variable assignment in each constraint diminishes their applicability to many situations which are qualitative in nature. In many applications, it is more natural for users to express preferences via generic qualitative (usually partial) preference relations over variable assignments BIB009. For example, it is more natural to express a preference about a car as "I prefer a silver car to a black car", rather than "a silver car has preference 0.8 and a black car has preference 0.4", which is what soft constraints require when assigning numeric preference values to variables. In most real cases, decision makers are asked to express their preferences over the decision alternatives via qualitative statements, such as "If the main course is beef or lamb, I prefer red wine to white wine", or "I prefer a seat near the aisle to one near the window". Among methods for representing and reasoning with qualitative preferences, CP-nets BIB013 BIB018 BIB014 are among the most popular, where CP is the abbreviation of Conditional Preference, or ceteris paribus (all other conditions being equal, i.e., conditional preferential independence). A CP-net represents a preference in a straightforward form such as p: x > y, which indicates that x is strictly preferred to y under condition p; all the conditional preferences about a certain feature (denoted as a node in the CP-net) are associated with that node in a table form. For instance, the CP-net for Example 2 is shown in Fig.
5, where the left-hand cells of the tables represent the provided conditions, i.e., my preference between vegetable and fish soup is conditioned on the main course, and my preference between red wine and white wine depends on the soup to be served. The condition for a certain feature depends on the preferences of other features, and these nodes (features), which are mutually dependent according to the conditional preferences, are placed in a partial order. However, although the representation of preference ordering is succinct, the main problems of CP-nets are that computing an optimal assignment of preference values to all the features is complex (an NP-hard problem under some assumptions), and that accurate utility values are difficult for non-specialist users to decide BIB018 BIB009 BIB006. Some extensions of CP-nets have been developed to overcome the above mentioned defects. Utility CP-nets, or UCP-nets BIB017, are one such extension, using numerical utility factors to replace the binary relationship between node values in CP-nets. In doing so, UCP-nets allow the node values to retain their qualitative form, while only the preferences are quantified with utility values. LCP-nets (Linguistic Conditional Preference networks) BIB012 are another extension, which combines the linguistic approach with CP-nets and allows the modelling of more qualitative preference statements such as "I prefer a Dell laptop a bit more than a Sony laptop if their CPU speeds are approximately the same and their RAM sizes are more or less the same". One widely used solution for preference based ordering, especially in Social Choice problems, is the Borda count, or Borda's Rule BIB005. Borda's Rule asks decision makers to rank the alternatives, and then allocates absolute scores to the alternatives. The higher an alternative is ranked, the more points it receives. A simple solution is to assign one point to an alternative for each competitor ranked below it in the ranking. The alternative with the most total points is declared the winner. For instance, in Example 3 (movie ranking), Borda's Rule will ask each user to rank the movies he/she has enjoyed, usually in a total order, and then allocate absolute scores to the movies, e.g., 5 to the most favourite, 4 to the second, …, and 1 to the last one. All the scores associated with the same movie are then added up, and the movie with the highest total points is declared the most popular. The primary advantage of this procedure is its ability to find a "fair" compromise, since it includes more information from the decision makers than either plurality or majority rule BIB015, while the main drawback is that absolute scores are usually difficult to decide and the same score cannot be expected to mean the same thing to every decision maker. Approval voting is another popular Social Choice system, often used for elections. Under this mechanism, each voter is allowed to vote for as many of the alternatives as he/she wishes; a voter may vote for any combination of alternatives but may give each alternative at most one vote. The alternative that receives the most votes is declared the winner. Similar to Approval voting, Majority Judgment is also a single-winner voting system BIB016. This voting system asks voters to freely grade each alternative in one of several named ranks, for instance from "excellent" to "reject", and the alternative with the highest median grade is the winner.
If more than one alternative has the same highest median grade, all other alternatives are eliminated. Then, one copy of that median grade is removed from each remaining alternative's list of grades, and the new median is found; this is repeated until there is an unambiguous winner. There are also many other voting systems, such as Plurality voting and Preferential voting. Most of them share the same drawback as the Borda count, namely that it is usually not easy to allocate absolute scores to the alternatives in a consistent and fair way. In order to overcome the difficulty of allocating absolute scores, Cohen et al. developed an algorithm for generating an approximately optimal total order over all the alternatives from pair-wise preferences. The proposed methodology is a two-stage approach, where the first stage learns a preference function PREF(u, v), which is a numerical measure of the certainty that u should be ranked above v. The preference function is a weighted combination of primitive preference functions obtained from ordering functions. An ordering function is a function f: X→S, from the set of all alternatives X to a totally ordered set S, given by experts, and the rank ordering R_f, a special kind of preference function, is defined by R_f(u, v) = 1 if f(u) > f(v), R_f(u, v) = 0 if f(u) < f(v), and R_f(u, v) = 1/2 if u and v are not ordered by f. The final preference function is obtained in the form PREF(u, v) = w_1·R_{f_1}(u, v) + … + w_N·R_{f_N}(u, v) (i = 1, …, N), where w_i is the weight given to the i-th ordering function. The second stage uses a greedy algorithm, SCC-GREEDY-ORDER, which assigns each alternative v a potential value. The algorithm then picks the alternatives one by one according to their potential values, from high to low, so that an approximately optimal total order over all the alternatives is obtained. This method provides a novel way of ordering alternatives which can obtain an approximately optimal total order. It is, however, essentially a more indirect and complex way of assigning a score (the potential value) to each alternative than Borda's rule. It is also somewhat unreasonable to assign 1/2 to the preference relation whenever there is no ordering relation between two elements, without considering the different possible causes. Wang et al. developed a new method for calculating the pair-wise preferences from the preference relations given by decision makers, which was applied by Augusto et al. BIB011 to situation assessment during disaster management. The main idea is to calculate the probability that each pair of alternatives should be placed in a given order, which is then used as the preference function PREF(u, v) in Cohen's method to generate the potential value of each alternative. This probability is obtained by considering each preference as a sequence and counting all the common subsequences of the considered pair and every sequence in the set of preferences given by the decision makers. All the probabilities are then fed to PREF(u, v) and the corresponding algorithm SCC-GREEDY-ORDER to generate the final approximately optimal total order. Let us consider the lower two preferences in Example 3, omitting the equal preferences, which can be rewritten as a set of sequences: I = {eac, dab; ab, ac, ad, ae}. Assuming equal weighting, we can calculate the probability G that each pair of movies should be placed in a given order BIB007, e.g., G(ab) = m(ab)/(nK), where m(ab) is the number of common subsequences of the pair ab with the sequences in I, K is a normalization factor, and n = 6 is the number of sequences in I.
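To make the common-subsequence calculation above concrete, the following minimal sketch computes such pair-wise weights for the movie sequences in I. The exact counting rule, the equal weighting and the normalization constant K used here are illustrative assumptions and may differ from the precise definitions of Wang et al.

```python
# Illustrative sketch of the common-subsequence idea described above.
# Assumptions (not taken from the original method): only non-empty common
# subsequences of the two-element pair are counted, all sequences are weighted
# equally, and K simply normalises a pair and its reverse so that
# G(ab) + G(ba) = 1 whenever either count is non-zero.

def common_subsequence_count(pair, seq):
    """Count the non-empty common subsequences of a 2-element pair and seq."""
    a, b = pair
    count = 0
    if a in seq:
        count += 1                      # subsequence "a"
    if b in seq:
        count += 1                      # subsequence "b"
    if a in seq and b in seq and seq.index(a) < seq.index(b):
        count += 1                      # subsequence "ab" (a before b)
    return count

def pairwise_weight(a, b, sequences):
    """G(ab): normalised evidence that a should be ranked before b."""
    m_ab = sum(common_subsequence_count((a, b), s) for s in sequences)
    m_ba = sum(common_subsequence_count((b, a), s) for s in sequences)
    n = len(sequences)
    k = (m_ab + m_ba) / n or 1          # assumed normalisation factor
    return m_ab / (n * k)

# The lower two preferences of Example 3, rewritten as sequences.
I = ["eac", "dab", "ab", "ac", "ad", "ae"]
print(pairwise_weight("a", "b", I))     # evidence for a before b, ~0.56
print(pairwise_weight("b", "a", I))     # evidence for b before a, ~0.44
```

With this (assumed) normalisation, the evidence for ranking a before b and for ranking b before a always sums to one, so the result can be fed directly into the preference function PREF(u, v).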
The method of Wang et al. described above provides a new way of calculating preference values from a different point of view, by taking each preference given by the decision makers as a sequence; however, it is not expressive in situations where some alternatives can be equally ranked, as in Example 3, and the method for calculating the probability that a pair of alternatives should be placed in an order could be extended accordingly. An interesting result given by Rossi et al. is that "aggregating preferences cannot be fair", which is a generalization of Arrow's impossibility theorem for aggregating total orders BIB001. This result is of course disappointing to some extent. The question is: what does a "fair" aggregation mean? The requirements given there for a preference aggregation system to be "fair" are freeness, independence of irrelevant alternatives, monotonicity, and non-dictatorship, and the statement is: "If there are at least two agents and three outcomes to order, a preference aggregation system cannot be fair if agents use partial orders with a unique top and unique bottom, and the result is a partial order with a unique top or bottom." Fortunately, this is not a big problem in a fuzzy or qualitative context, which can provide more flexibility BIB002. In fact, the aggregation result we need is one that takes all the opinions into account and reflects the opinions of most decision makers; that is, we are looking for a sound and acceptable consensus/collective result, not the best result. We can also try to look for an approximation of the optimal result, despite the incompleteness and incomparability existing in preference aggregation BIB010.
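Returning to the computational side, both Cohen's method and Wang's method ultimately feed a pair-wise preference function into a greedy ordering stage. The sketch below illustrates one common way such a stage can be realised; the potential function used (total outgoing minus incoming preference weight) and the example preference values are assumptions for illustration and do not reproduce the exact SCC-GREEDY-ORDER algorithm.

```python
# Greedy construction of an approximately optimal total order from a
# pair-wise preference function PREF, sketched under the assumption that the
# potential of an alternative is its total outgoing minus incoming preference
# weight with respect to the alternatives that remain to be placed.

def greedy_order(alternatives, pref):
    """Return a total order (best first) from a pair-wise preference function."""
    remaining = set(alternatives)
    order = []
    while remaining:
        # potential(v): how strongly v is preferred over the remaining items
        def potential(v):
            return sum(pref(v, u) - pref(u, v) for u in remaining if u != v)
        best = max(remaining, key=potential)
        order.append(best)
        remaining.remove(best)
    return order

# Hypothetical pair-wise preferences over three movies a, b and c.
weights = {("a", "b"): 0.8, ("b", "a"): 0.2,
           ("a", "c"): 0.6, ("c", "a"): 0.4,
           ("b", "c"): 0.7, ("c", "b"): 0.3}

print(greedy_order(["a", "b", "c"], lambda u, v: weights.get((u, v), 0.5)))
# -> ['a', 'b', 'c']
```

In this toy profile the greedy stage returns the order a, b, c, mirroring the rank-by-potential behaviour described above for SCC-GREEDY-ORDER.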
Ordering based decision making - A survey <s> Logic Based Decision Making <s> In a decision making context, multiple choices of actions are usually offered to solve a particular problem. Consequently, the question of preferences among the actions will occur. The ordering of recommended actions by preference is made by taking into account the states of the universe of discourse. We develop here a logic Lp for reasoning about preferences in such circumstances. The language of the logic is propositional extended with a special binary relation of preference among formulae. The model theory of the logic is studied and the soundness and completeness theorem is established. <s> BIB001 </s> Ordering based decision making - A survey <s> Logic Based Decision Making <s> As its name suggests, computing with words (CW) is a methodology in which words are used in place of numbers for computing and reasoning. The point of this note is that fuzzy logic plays a pivotal role in CW and vice-versa. Thus, as an approximation, fuzzy logic may be equated to CW. There are two major imperatives for computing with words. First, computing with words is a necessity when the available information is too imprecise to justify the use of numbers, and second, when there is a tolerance for imprecision which can be exploited to achieve tractability, robustness, low solution cost, and better rapport with reality. Exploitation of the tolerance for imprecision is an issue of central importance in CW. In CW, a word is viewed as a label of a granule; that is, a fuzzy set of points drawn together by similarity, with the fuzzy set playing the role of a fuzzy constraint on a variable. The premises are assumed to be expressed as propositions in a natural language. In coming years, computing with words is likely to evolve into a basic methodology in its own right with wide-ranging ramifications on both basic and applied levels. <s> BIB002 </s> Ordering based decision making - A survey <s> Logic Based Decision Making <s> This paper proposes syntactic combination rules for merging uncertain propositional knowledge bases provided by different sources of information, in the framework of possibilistic logic. These rules are the counterparts of combination rules which can be applied to the possibility distributions (defined on the set of possible worlds), which represent the semantics of each propositional knowledge base. Combination modes taking into account the levels of conflict, the relative reliability of the sources, or having reinforcement effects are considered. <s> BIB003 </s> Ordering based decision making - A survey <s> Logic Based Decision Making <s> Formal theories and rational choice methods have become increasingly prominent in most social sciences in the past few decades. Proponents of formal theoretical approaches argue that these methods are more scientific and sophisticated than other approaches, and that formal methods have already generated significant theoretical progress. As more and more social scientists adopt formal theoretical approaches, critics have argued that these methods are flawed and that they should not become dominant in most social-science disciplines.Rational Choice and Security Studies presents opposing views on the merits of formal rational choice approaches as they have been applied in the subfield of international security studies. This volume includes Stephen Walt's article "Rigor or Rigor Mortis? 
Rational Choice and Security Studies," critical replies from prominent political scientists, and Walt's rejoinder to his critics.Walt argues that formal approaches have not led to creative new theoretical explanations, that they lack empirical support, and that they have contributed little to the analysis of important contemporary security problems. In their replies, proponents of rational choice approaches emphasize that formal methods are essential for achieving theoretical consistency and precision. <s> BIB004 </s> Ordering based decision making - A survey <s> Logic Based Decision Making <s> Discusses a methodology for reasoning and computing with perceptions rather than measurements. An outline of such a methodology-referred to as a computational theory of perceptions is presented in this paper. The computational theory of perceptions, or CTP for short, is based on the methodology of CW. In CTP, words play the role of labels of perceptions and, more generally, perceptions are expressed as propositions in a natural language. CW-based techniques are employed to translate propositions expressed in a natural language into what is called the Generalized Constraint Language (GCL). In this language, the meaning of a proposition is expressed as a generalized constraint, N is R, where N is the constrained variable, R is the constraining relation and isr is a variable copula in which r is a variable whose value defines the way in which R constrains S. Among the basic types of constraints are: possibilistic, veristic, probabilistic, random set, Pawlak set, fuzzy graph and usuality. The wide variety of constraints in GCL makes GCL a much more expressive language than the language of predicate logic. In CW, the initial and terminal data sets, IDS and TDS, are assumed to consist of propositions expressed in a natural language. These propositions are translated, respectively, into antecedent and consequent constraints. Consequent constraints are derived from antecedent constraints through the use of rules of constraint propagation. The principal constraint propagation rule is the generalized extension principle. The derived constraints are retranslated into a natural language, yielding the terminal data set (TDS). The rules of constraint propagation in CW coincide with the rules of inference in fuzzy logic. A basic problem in CW is that of explicitation of N, R, and r in a generalized constraint, X is R, which represents the meaning of a proposition, p, in a natural language. <s> BIB005 </s> Ordering based decision making - A survey <s> Logic Based Decision Making <s> In many applications, the reliability relation associated with available information is only partially defined, while most of existing uncertainty frameworks deal with totally ordered pieces of knowledge. Partial pre-orders offer more flexibility than total pre-orders to represent incomplete knowledge. Possibilistic logic, which is an extension of classical logic, deals with totally ordered information. It offers a natural qualitative framework for handling uncertain information. Priorities are encoded by means of weighted formulas, where weights are lower bounds of necessity measures. This paper proposes an extension of possibilistic logic for dealing with partially ordered pieces of knowledge. We show that there are two different ways to define a possibilistic logic machinery which both extend the standard one. 
<s> BIB006 </s> Ordering based decision making - A survey <s> Logic Based Decision Making <s> We introduce the notion of combinatorial vote, where a group of agents (or voters) is supposed to express preferences and come to a common decision concerning a set of non-independent variables to assign. We study two key issues pertaining to combinatorial vote, namely preference representation and the automated choice of an optimal decision. For each of these issues, we briefly review the state of the art, we try to define the main problems to be solved and identify their computational complexity. <s> BIB007 </s> Ordering based decision making - A survey <s> Logic Based Decision Making <s> This paper addresses the problem of merging uncertain information in the framework of possibilistic logic. It presents several syntactic combination rules to merge possibilistic knowledge bases, provided by different sources, into a new possibilistic knowledge base. These combination rules are first described at the meta-level outside the language of possibilistic logic. Next, an extension of possibilistic logic, where the combination rules are inside the language, is proposed. A proof system in a sequent form is given, sound and complete with respect to the possibilistic logic semantics. Possibilistic fusion modes are illustrated on an example inspired from an application of a localization of a robot in an uncertain environment. <s> BIB008 </s> Ordering based decision making - A survey <s> Logic Based Decision Making <s> The subject of this work is to establish a mathematical framework that provide the basis and tool for uncertainty reasoning based on linguistic information. This paper focuses on a flexible and realistic approach, i.e., the use of linguistic terms, specially, the symbolic approach acts by direct computation on linguistic terms. An algebra model with linguistic terms, which is based on a logical algebraic structure, i.e., lattice implication algebra, is applied to represent imprecise information and deals with both comparable and incomparable linguistic terms (i.e., non-ordered linguistic terms). Within this framework, some inferential rules are analyzed and extended to deal with these kinds of lattice-valued linguistic information. <s> BIB009 </s> Ordering based decision making - A survey <s> Logic Based Decision Making <s> The subject of this work is to establish a mathematical framework that provides the basis and tool for synthesis and evaluation analysis in decision making, especially from the logic point of view. This paper focuses on a flexible and realistic approach, i.e., the use of linguistic assessment in decision making, specially, the symbolic approach acts by direct computation on linguistic values. A lattice-valued linguistic algebra model, which is based on a logical algebraic structure, i.e., lattice implication algebra, is applied to represent imprecise information and deal with both comparable and incomparable linguistic values (i.e., non-ordered linguistic values). Within this framework, some known weighted aggregation functions are analyzed and extended to deal with these kinds of lattice-value linguistic information. <s> BIB010 </s> Ordering based decision making - A survey <s> Logic Based Decision Making <s> Representing and reasoning with an agent's preferences is important in many applications of constraints formalisms. Such preferences are often only partially ordered. 
One class of soft constraints formalisms, semiring-based CSPs, allows a partially ordered set of preference degrees, but this set must form a distributive lattice; whilst this is convenient computationally, it considerably restricts the representational power. This paper constructs a logic of soft constraints where it is only assumed that the set of preference degrees is a partially ordered set, with a maximum element 1 and a minimum element 0. When the partially ordered set is a distributive lattice, this reduces to the idempotent semiring-based CSP approach, and the lattice operations can be used to define a sound and complete proof theory. A generalised possibilistic logic, based on partially ordered values of possibility, is also constructed, and shown to be formally very strongly related to the logic of soft constraints. It is shown how the machinery that exists for the distributive lattice case can be used to perform sound and complete deduction, using a compact embedding of the partially ordered set in a distributive lattice. <s> BIB011 </s> Ordering based decision making - A survey <s> Logic Based Decision Making <s> The subject of this work is to establish a mathematical framework that provides the basis and tool for automated reasoning and uncertainty reasoning based on linguistic information. This paper focuses on a flexible and realistic approach, i.e., the use of linguistic terms, specially, the symbolic approach acts by direct computation on linguistic terms. An algebra model with linguistic terms, which is based on a logical algebraic structure, i.e., lattice implication algebra, is constructed and applied to represent imprecise information and deal with both comparable and incomparable linguistic terms (i.e., non-ordered linguistic terms). Some properties and its substructures of this algebraic model are discussed. <s> BIB012 </s> Ordering based decision making - A survey <s> Logic Based Decision Making <s> The consistency of a rule base is an essential issue for rule-based intelligent information processing. Due to the uncertainty inevitably included in the rule base, it is necessary to verify the consistency of the rule base while investigating, designing, and applying a rule-based intelligent system. In the framework of the lattice-valued first-order logic system LF(X), which attempts to handle fuzziness and incomparability, this article focuses on how to verify and increase the consistency degree of the rule base in the intelligent information processing system. First, the representations of eight kinds of rule bases in LF(X) as the generalized clause set forms based on these rule bases' nonredundant generalized Skolem standard forms are presented. Then an α-automated reasoning algorithm in LF(X), also used as an automated simplification algorithm, is proposed. Furthermore, the α-consistency and the α-simplification theories of the rule base in LF(X) are formulated, and especially the coherence between these two theories is proved. Therefore, the verification of the α-consistency of the rule base, often an infinity problem that is difficult to solve, can be transformed into a finite and achievable α-simplification problem. Finally, an α-simplification stepwise search algorithm for verifying the consistency of the rule base as well as a kind of filtering algorithm for increasing the consistency level of the rule base are proposed. © 2006 Wiley Periodicals, Inc. Int J Int Syst 21: 399–424, 2006. 
<s> BIB013 </s> Ordering based decision making - A survey <s> Logic Based Decision Making <s> In the present paper, the weak completeness of alpha-resolution principle for a latticevalued logic (L(n)xL(2))P(X) with truth value in a logical algebra - lattice implication algebra L(n)xL(2), is established. Accordingly, the weak completeness of (Exactly, True)-resolution principle for a linguistic truth-valued propositional logic l based on the linguistic truth-valued lattice implication algebra L-LIA is derived. <s> BIB014 </s> Ordering based decision making - A survey <s> Logic Based Decision Making <s> Decision making under uncertainty is a key issue in information fusion and logic based reasoning approaches. The aim of this paper is to show noteworthy theoretical and applicational issues in the area of decision making under uncertainty that have been already done and raise new open research related to these topics pointing out promising and challenging research gaps that should be addressed in the coming future in order to improve the resolution of decision making problems under uncertainty. Keyword: decision making, uncertainty, information fusion, logics, uncertain information processing, computing with words Categories: I.1, I.2, M.4, F.4.1 <s> BIB015 </s> Ordering based decision making - A survey <s> Logic Based Decision Making <s> In the semantics of natural language, quantification may have received more attention than any other subject, and syllogistic reasoning is one of the main topics in many-valued logic studies on inference. Particularly, lattice-valued logic, a kind of important non-classical logic, can be applied to describe and treat incomparability by the incomparable elements in its truth-valued set. In this paper, we first focus on some properties of linguistic truth-valued lattice implication algebra. Secondly, we introduce some concepts of linguistic truth-valued lattice-valued propositional logic system @?P(X), whose truth-valued domain is a linguistic truth-valued lattice implication algebra. Then we investigate the semantic problem of @?P(X). Finally, we further probe into the syntax of linguistic truth-valued lattice-valued propositional logic system @?P(X), and prove the soundness theorem, deduction theorem and consistency theorem. <s> BIB016 </s> Ordering based decision making - A survey <s> Logic Based Decision Making <s> This book focuses on systematic and intelligent information processing theories and methods based on linguistic values. The main contents include topics such as the 2-tuple model of linguistic values, 2-tuple linguistic aggregation operator hedge algebras, complete hedge algebras and linguistic value reasoning, reasoning with linguistic quantifiers, linguistic reasoning based on a random set, the fuzzy number model of linguistic values, linguistic value aggregation based on the fuzzy number model, extraction of linguistic proposition from a database and handling of linguistic information on the Internet, linguistic truth value lattice implication algebras, linguistic truth value resolution automatic reasoning, linguistic truth value reasoning based on lattice implication algebra, and related applications in decision-making and forecast. <s> BIB017
Logic is the foundation and standard for justifying or evaluating the soundness and consistency of methods, including decision making methods BIB004. In order to establish rational reasoning approaches and intelligent support systems that deal with both totally ordered and non-totally ordered information, it is important and necessary to study a logical foundation with this feature, which should be some kind of non-classical logical system BIB015. Generally, logic can be used for modelling decision making problems in two different ways: syntactic and semantic. From the syntactic point of view, logic uses formulas and propositions to represent judgments from decision makers. For example, consider a set of decision makers E = {e1, e2, …, en} whose judgments on a set of alternatives can be represented by the propositions of a logic system, p1, p2, etc. For instance, p1 may mean that alternative 1 performs well on some specified property according to decision maker 2. Composite propositions, which are built from the primitive propositions p1, p2, etc. with the logical connectives ¬ (not), ∧ (and), ∨ (or), → (if-then) and ↔ (if and only if), can be used for modelling more complex judgments. Then different logical reasoning methods, such as the MP (Modus Ponens) rule and the fuzzy CRI (Compositional Rule of Inference) BIB002 BIB005, can be applied to reach the collective judgment. From the semantic side, the truth-value field of the logic system, such as {0, 1} for classical logic or [0, 1] for fuzzy logic, is used for modelling the set of evaluations of the alternatives. Taking Fig. 3 as an example of a truth-value field, the truth-value of p1 being b means that the judgment of decision maker 2 on alternative 1 is highly true. These truth-values can be used for modelling the uncertainties involved in the decision making process, and they change accordingly along with the syntactic inference process. Mainly from the syntactic representation point of view, Das BIB001 developed a formal logic for reasoning about preferences by representing preference through a binary relation R among propositional formulae which represent the considered alternatives or actions. For example, R(a, b) is interpreted as meaning that alternative a is preferred to b. Wilson BIB011 developed a logic of soft constraints where the set of preferences is only assumed to be a partially ordered set, with a minimum element and a maximum element. This means that the set of preferences is not required to form a lattice, avoiding the additional restrictions and operations that would limit the representational power. There are also some attempts to model decision making problems in a logic framework by combining the syntactic and semantic parts. Among them, Benferhat et al. BIB008 proposed reasoning methods for partial information by using extended possibilistic distributions in the framework of possibilistic logic. Namely, elements from a partially ordered set are associated with formulas or interpretations in the logic instead of numbers in [0, 1], and two definitions of possibilistic inference are presented by extending the one used in possibilistic logic. They BIB006 BIB003 also extended possibilistic logic by defining new combination rules to aggregate multiple-source information, which provides a coherent way to represent and reason about uncertain information from different sources.
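To make the two modelling sides above more concrete, the following small sketch represents decision makers' judgments as propositions whose truth degrees lie in [0, 1] and combines them with the listed connectives; the particular judgments, the min/max/1 − x operators, the Kleene-Dienes implication and the graded Modus Ponens step are common fuzzy-logic choices assumed here purely for illustration.

```python
# Toy illustration of the two modelling sides described above: propositions
# carry the decision makers' judgments (syntactic side), and truth degrees in
# [0, 1] carry the evaluations (semantic side).

# Assumed truth degrees of primitive propositions:
# p1: "alternative 1 performs well on cost, according to decision maker 1"
# p2: "alternative 1 performs well on cost, according to decision maker 2"
truth = {"p1": 0.8, "p2": 0.6}

# The five connectives mentioned in the text, in their usual fuzzy readings.
def NOT(a):        return 1.0 - a
def AND(a, b):     return min(a, b)
def OR(a, b):      return max(a, b)
def IMPLIES(a, b): return max(1.0 - a, b)            # Kleene-Dienes implication
def IFF(a, b):     return min(IMPLIES(a, b), IMPLIES(b, a))

# Composite judgments built with the connectives.
both_agree   = AND(truth["p1"], truth["p2"])         # 0.6
at_least_one = OR(truth["p1"], truth["p2"])          # 0.8

# A graded Modus Ponens step: given the rule "(p1 and p2) -> q" assumed to
# hold with degree 0.9, the degree of q is taken as the min (t-norm) of the
# premise degree and the rule degree.
rule_degree = 0.9
q_degree = AND(both_agree, rule_degree)              # 0.6

print(both_agree, at_least_one, q_degree)
```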
Inspired by hedge algebra and by analyzing the semantic heredity of linguistic hedges, and based on the extensive work on lattice implication algebras and the corresponding logic systems, Xu et al. proposed the linguistic truth-valued lattice implication algebra BIB012 BIB016, as simply shown in Fig. 3, for modelling ordinal linguistic information, and discussed the corresponding logic system BIB016 BIB014 and the approximate reasoning approaches based on it BIB009 BIB013. Liu et al. BIB010 laid out some basic ideas on lattice-valued decision making, especially with linguistic information, along with some lattice structures for representing the ordinal linguistic information involved in the decision making procedure. Lu et al. incorporated several simple temporal predicates into a linguistic-valued logic based reasoning system for dynamically modelling and aggregating information in uncertain qualitative situations, and applied this reasoning mechanism to some smart home applications. Although these logic based methods are promising due to their strict theoretical foundations, much effort is still needed to make them more applicable to real decision making problems, especially in socio-economic areas, because of their complex structures and high computational complexity BIB007. The book BIB017 gives a detailed introduction to and analysis of some existing popular ordinal linguistic information processing approaches, including the above-mentioned fuzzy ordinal linguistic approaches and the algebra-based and logic-based methods.
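As a simple illustration of computing directly over a partially ordered set of ordinal evaluations, in the spirit of the lattice-valued approaches above, the sketch below uses a small product-of-chains lattice in which incomparable evaluations, meets and joins can be manipulated symbolically. The concrete lattice and the reading of meet/join as pessimistic/optimistic aggregation are illustrative assumptions, not the actual structure of the linguistic truth-valued lattice implication algebra.

```python
# Minimal sketch of direct symbolic computation over a partially ordered set
# of ordinal evaluations.  The lattice used here (pairs of grades on two
# aspects, ordered componentwise) is an illustrative assumption only.

GRADES = ["low", "medium", "high"]          # a totally ordered grade chain
RANK = {g: i for i, g in enumerate(GRADES)}

# An evaluation is a pair of grades, e.g. (quality, price-worthiness).
def leq(x, y):
    """Componentwise order: x <= y iff x is no better than y on every aspect."""
    return all(RANK[a] <= RANK[b] for a, b in zip(x, y))

def meet(x, y):
    """Greatest lower bound: the grade guaranteed by both evaluations."""
    return tuple(GRADES[min(RANK[a], RANK[b])] for a, b in zip(x, y))

def join(x, y):
    """Least upper bound: the most optimistic combination of the two."""
    return tuple(GRADES[max(RANK[a], RANK[b])] for a, b in zip(x, y))

e1 = ("high", "medium")    # decision maker 1's evaluation of an alternative
e2 = ("medium", "high")    # decision maker 2's evaluation of the same one

print(leq(e1, e2), leq(e2, e1))   # False False: the two are incomparable
print(meet(e1, e2))               # ('medium', 'medium'): consensus lower bound
print(join(e1, e2))               # ('high', 'high'): optimistic upper bound
```

The point of the sketch is simply that partially ordered evaluations can be combined without first forcing them into numbers or into a total order, which is the motivation behind the lattice-valued and logic based approaches discussed in this section.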
Ordering based decision making - A survey <s> Conclusions and Perspectives <s> We extend the Constraint Logic Programming (CLP) formalism in order to handle semiring-based constraints. This allows us to perform in the same language both constraint solving and optimization. In fact, constraints based on semirings are able to model both classical constraint solving and more sophisticated features like uncertainty, probability, fuzziness, and optimization. We then provide this class of languages with three equivalent semantics: model-theoretic, fix-point, and proof-theoretic, in the style of classical CLP programs. <s> BIB001 </s> Ordering based decision making - A survey <s> Conclusions and Perspectives <s> An alternative voting system, referred to as probabilistic Borda rule, is developed and analyzed. The winning alternative under this system is chosen by lottery where the weights are determined from each alternative’s Borda score relative to all Borda points possible. Advantages of the lottery include the elimination of strategic voting on the set of alternatives under consideration and breaking the tyranny of majority coalitions. Disadvantages include an increased incentive for strategic introduction of new alternatives to alter the lottery weights, and the possible selection of a Condorcet loser. Normative axiomatic properties of the system are also considered. It is shown this system satisfies the axiomatic properties of the standard Borda procedure in a probabilistic fashion. <s> BIB002 </s> Ordering based decision making - A survey <s> Conclusions and Perspectives <s> We introduce the notion of combinatorial vote, where a group of agents (or voters) is supposed to express preferences and come to a common decision concerning a set of non-independent variables to assign. We study two key issues pertaining to combinatorial vote, namely preference representation and the automated choice of an optimal decision. For each of these issues, we briefly review the state of the art, we try to define the main problems to be solved and identify their computational complexity. <s> BIB003 </s> Ordering based decision making - A survey <s> Conclusions and Perspectives <s> Many real life optimization problems are defined in terms of both hard and soft constraints, and qualitative conditional preferences. However, there is as yet no single framework for combined reasoning about these three kinds of information. In this paper we study how to exploit classical and soft constraint solvers for handling qualitative preference statements such as those captured by the CP-nets model. In particular, we show how hard constraints are sufficient to model the optimal outcomes of a possibly cyclic CP-net, and how soft constraints can faithfully approximate the semantics of acyclic conditional preference statements whilst improving the computational efficiency of reasoning about these statements. <s> BIB004 </s> Ordering based decision making - A survey <s> Conclusions and Perspectives <s> We present a framework for decision-making in relation to disaster management with a focus on situation assessment during disaster management monitoring. The use of causality reasoning based on the temporal evolution of a scenario provides a natural way to chain meaningful events and possible states of the system. There are usually different ways to analyse a problem and different strategies to follow as a solution and it is also often the case that information originating in different sources can be inconsistent or unreliable. 
Therefore we allow the specification of possibly conflicting situations as they are typical elements in disaster management. A decision procedure to decide on those conflicting situations is presented which not only provides a framework for the assistance of one decision-maker but also how to handle opinions from a hierarchy of decision-makers. <s> BIB005 </s> Ordering based decision making - A survey <s> Conclusions and Perspectives <s> This book focuses on systematic and intelligent information processing theories and methods based on linguistic values. The main contents include topics such as the 2-tuple model of linguistic values, 2-tuple linguistic aggregation operator hedge algebras, complete hedge algebras and linguistic value reasoning, reasoning with linguistic quantifiers, linguistic reasoning based on a random set, the fuzzy number model of linguistic values, linguistic value aggregation based on the fuzzy number model, extraction of linguistic proposition from a database and handling of linguistic information on the Internet, linguistic truth value lattice implication algebras, linguistic truth value resolution automatic reasoning, linguistic truth value reasoning based on lattice implication algebra, and related applications in decision-making and forecast. <s> BIB006 </s> Ordering based decision making - A survey <s> Conclusions and Perspectives <s> Decision making under uncertainty is a key issue in information fusion and logic based reasoning approaches. The aim of this paper is to show noteworthy theoretical and applicational issues in the area of decision making under uncertainty that have been already done and raise new open research related to these topics pointing out promising and challenging research gaps that should be addressed in the coming future in order to improve the resolution of decision making problems under uncertainty. Keyword: decision making, uncertainty, information fusion, logics, uncertain information processing, computing with words Categories: I.1, I.2, M.4, F.4.1 <s> BIB007
Ordinal information, such as ordinal attributes and preference relations, is usually involved in the process of decision making, and it often appears qualitative and partially ordered. Ordering based decision making, usually concerned with how to rank alternatives based on given ordering information, has been drawing much attention recently. Table 1 summarises the strengths and weaknesses of some typical approaches related to ordering based decision making reviewed in this paper. The first five are preference based alternative ordering methods, and the remaining three are mainly for qualitative information processing.

Table 1. Strengths and weaknesses of typical ordering based decision making approaches.
- Soft constraints. Strengths: expressive and powerful for representing and aggregating quantitative preferences BIB001. Weaknesses: difficult for the global combination of quantitative preferences BIB004.
- CP-nets. Strengths: elicitation of CP-nets from users is very intuitive. Weaknesses: the complexity of reasoning with them BIB002 BIB004.
- Borda count and other Social Choice approaches. Strengths: take all the opinions into account, intuitive and easy to implement BIB002. Weaknesses: can only be used for total orders, and absolute scores are usually difficult to decide.
- Cohen's method. Strengths: can obtain an approximately optimal total order. Weaknesses: can only deal with preferences in total ordering.
- Wang's method. Strengths: intuitive and easy to use. Weaknesses: not expressive in situations where some alternatives can be equally ranked BIB005.
- Fuzzy ordinal linguistic approach. Strengths: easy to use, manipulating linguistic labels directly through their indexes without an underlying numerical approximation. Weaknesses: can only deal with totally ordered information BIB006.
- Hedge algebra. Strengths: expressive for modelling the semantic ordering relation among linguistic terms. Weaknesses: no further steps made into logic and information processing methods BIB006.
- Logic based method. Strengths: strict theoretical foundation and direct reasoning about information without numerical approximation BIB007. Weaknesses: the complexity of the theoretical results makes it somewhat difficult for non-specialist users BIB003.

All the methods in Table 1 can deal with qualitative information except soft constraints, but Borda's Rule, Cohen's method and Wang's method deal with qualitative information by assigning numerical scores or transforming it into quantitative values. The fuzzy ordinal linguistic approach, Borda's rule and Cohen's method can only deal with totally ordered information as shown in Example 1, and hedge algebra mainly focuses on the algebraic representation of ordinal linguistic information. As discussed above, ordinal information in decision making usually appears in qualitative form and partially ordered, while most ordering based decision making methods can only deal with totally ordered information as shown in Examples 1 and 4. Although there are already some methods which can deal with partially ordered qualitative information, most of them simply transform partially ordered information into a totally ordered structure, and qualitative information into a quantitative scale, which is time consuming and causes loss of information. How to aggregate the ordinal evaluations provided by different decision makers, which usually take different forms (mainly partially ordered) as shown in Example 3, to reach a "fair" final decision result is still an ongoing and open research direction. There are some potential solutions or research directions for this problem, such as: 1) A new algebra-oriented structure should be developed to represent ordinal information in decision making. This kind of structure should be able to model partially ordered information, along with additional operations such that direct computation/reasoning between ordinal terms is possible.
This computation/reasoning process should have much lower computational complexity than that of CP-nets. 2) Ordering based decision making should be formalized in a logic framework. The process of decision making, which is to draw some conclusion based on given information, can essentially be interpreted as a reasoning process. Logic serves as the most important foundation and standard for justifying or evaluating the soundness and consistency of reasoning methods. Therefore, it is necessary to develop ordering based decision making approaches based on approximate reasoning from the logical point of view. 3) New aggregation approaches should be developed for dealing with the different kinds of uncertainties accompanying the process of ordering based decision making. These uncertainties include the degrees of credibility or belief associated with the preferences given by different decision makers, the consistency level of different opinions, and so on, which are very common in real decision problems, especially in complex and dynamic socio-economic environments. The complexity and dynamics of real world decision making problems require more advanced tools to develop more appropriate decision making approaches which can successfully deal with partial orders, adaptability and uncertainty, on a more solid theoretical foundation. It is clear from the research reported here that, due to the challenging problems that remain to be solved, there is still substantial work to be done on ordering based decision making before it prospers in both theoretical research and practical applications. However, given its importance for a number of areas and real world applications, we are optimistic that these challenges will attract considerable attention and eventually be overcome.
Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> INTRODUCTION <s> The main purpose of the research reported here is to show that a new and more powerful type of computer-assisted instruction (CAI), based on extensive application of artificial-intelligence (AI) techniques, is feasible, and to demonstrate some of its major capabilities. A set of computer programs was written and given the name SCHOLAR. Due to its complexity, only the conception and educational aspects of this system (including an actual on-line protocol) are presented in this paper. In what may be called conventional ad hoc-frame-oriented (AFO) CAI, the data base consists of many "frames" of specific pieces of text, questions, and anticipated answers entered in advance by the teacher. By contrast, an information-structure-oriented (ISO) CAI system is based on the utilization of an information network of facts, concepts, and procedures; it can generate text, questions, and corresponding answers. Because an ISO CAI system can also utilize its information network to answer questions formulated by the student, a mixed-initiative dialogue between student and computer is possible with questions and answers from both sides. <s> BIB001 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> INTRODUCTION <s> Two University of Chicago doctoral students in education, Anania (1982, 1983) and Burke (1984), completed dissertations in which they compared student learning under the following three conditions of instruction: 1. Conventional. Students learn the subject matter in a class with about 30 students per teacher. Tests are given periodically for marking the students. 2. Mastery Learning. Students learn the subject matter in a class with about 30 students per teacher. The instruction is the same as in the conventional class (usually with the same teacher). Formative tests (the same tests used with the conventional group) are given for feedback followed by corrective procedures and parallel formative tests to determine the extent to which the students have mastered the subject matter. 3. Tutoring. Students learn the subject matter with a good tutor for each student (or for two or three students simultaneously). This tutoring instruction is followed periodically by formative tests, feedback-corrective procedures, and parallel formative tests as in the mastery learning classes. It should be pointed out that the need for corrective work under tutoring is very small. <s> BIB002 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> INTRODUCTION <s> This paper considers the case for formalising aspects of intelligent tutoring systems in order to derive more reliable implementations, as opposed to the present use of informal theories to build experimental systems which are then studied empirically. Some recent work in theoretical AI is suggested as a possible source for the elements of a 'theory of ITS'. The engineering of any complex device (such as an ITS) gradually relies less on empirical experimentation and more on mathematical or scientific theory. As yet, there is no significant 'theory of ITS': all of the recent ITS texts (e.g. Wenger, 1987; Mandl and Lesgold, 1988; Polson and Richardson, 1988) are entirely discursive and attempt no kind of formalisation of their content.
The aim of this paper is to suggest that it is not premature for ITS research to begin an attempt to complement a short-term emphasis on pragmatic aspects (Kearsley, 1989) by seeking theoretical foundations for its implementations. Most AI researchers regard ITSs as peripheral applications of AI, an understandable opinion in view of the virtual absence of ITS papers from the major AI journals and conferences. But Clancey (1986) has argued that work on ITSs is not a "mere matter of putting well-known AI methods into practice" but is (or should be) "broadening the meaning of AI research". Historically, ITS research began within AI, but AI researchers have retreated from the ITS arena as they have come to appreciate the need for more fundamental work on mental models, language understanding, knowledge representation, etc., leaving others to move into an intrinsically multi-disciplinary field. However, if there is ever to be a formal theory of (aspects of) ITS then it will be derived from elements of AI. Moreover, recent AI research begins to indicate what those elements might be. <s> BIB003 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> INTRODUCTION <s> Tutoring systems are described as having two loops. The outer loop executes once for each task, where a task usually consists of solving a complex, multi-step problem. The inner loop executes once for each step taken by the student in the solution of a task. The inner loop can give feedback and hints on each step. The inner loop can also assess the student's evolving competence and update a student model, which is used by the outer loop to select a next task that is appropriate for the student. For those who know little about tutoring systems, this description is meant as a demystifying introduction. For tutoring system experts, this description illustrates that although tutoring systems differ widely in their task domains, user interfaces, software structures, knowledge bases, etc., their behaviors are in fact quite similar. <s> BIB004 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> INTRODUCTION <s> Acquiring and representing a domain knowledge model is a challenging problem that has been the subject of much research in the fields of both AI and AIED. This part of the book provides an overview of possible methods and techniques that are used for that purpose. This introductory chapter first presents and discusses the epistemological issue associated with domain knowledge engineering. Second, it briefly presents several knowledge representation languages while considering their expressivity, inferential power, cognitive plausibility and pedagogical emphasis. Lastly, the chapter ends with a presentation of the subsequent chapters in this part of the book. <s> BIB005
From the earliest days, computers have been employed in a variety of areas to help society, including education. The computer was first introduced to the field of education in the 1970s under the aegis of Computer-Assisted Instruction (CAI). Efforts at using computers in education were presented by Carbonell in the 1970s. He claimed that CAI could be endowed with enhanced capabilities by incorporating Artificial Intelligence (AI) techniques to overcome its limitations at the time BIB001 . In 1984, a study conducted by Bloom BIB002 showed that learners who studied a topic under the guidance of a human tutor, combined with traditional assessment and corrective instruction, performed two standard deviations (sigma) better than those who received traditional group teaching. Researchers in the field of AI saw a solid opportunity to create intelligent systems that provide effective tutoring tailored to the needs of individual students and enhance learning BIB005 . Researchers found a new and inspiring goal: they studied the human tutor and attempted to absorb and adapt what they learned into Intelligent Computer-Assisted Instruction (ICAI) or Intelligent Tutoring Systems (ITS) . Self, in a paper published in 1990, claimed that ITSs should be viewed as an engineering design field and that ITS design should therefore be guided by methods and techniques appropriate for design BIB005 BIB003 . Twenty years after Self's claim, ITSs had become a growing field with signs of vitality and self-confidence BIB005 . Intelligent tutoring systems motivate students to perform challenging reasoning tasks by capitalizing on multimedia capabilities to present information. ITSs have successfully been used in all educational and training markets, including homes, schools, universities, businesses, and governments. One of the goals of ITSs is to better understand student behaviors through interaction with students . ITSs are computer programs that use AI techniques to provide intelligent tutors that know what they teach, whom they teach, and how to teach. AI helps simulate human tutors in order to produce intelligent tutors. ITSs differ from other educational systems such as Computer-Aided Instruction (CAI): a CAI system generally lacks the ability to monitor the learner's solution steps and provide instant help BIB004 . For historical reasons, much of the research in the domain of educational software involving AI has been conducted under the name of Intelligent Computer-Aided Instruction (ICAI). In recent decades, the term ITS has often been used as a replacement for ICAI. The field of ITS is a combination of computer science, cognitive psychology, and educational research ( Figure 1 ). The fact that ITS research draws on three different disciplines warrants careful consideration of the major differences in research goals, terminologies, theoretical frameworks, and emphases among ITS researchers. Consequently, ITS researchers are required to have a good understanding of all three disciplines, resulting in competing demands. Fortunately, many researchers have stood up to meet this challenge [8] .
Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> RELATED SURVEY PAPERS <s> Abstract : In this paper, we address many aspects of Intelligent Tutoring Systems (ITS) in our search for answers to the following main questions; (a) What are the precursors of ITS? (b) What does the term mean? (c) What are some important milestones and issues across the 20+ year history of ITS? (d) What is the status of ITS evaluations? and (e) What is the future of ITS? We start with an historical perspective. <s> BIB001 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> RELATED SURVEY PAPERS <s> This is a non-expert overview of Intelligent Tutoring Systems (ITSs), a way in which Artificial Intelligence (AI) techniques are being applied to education. It introduces ITSs and the motivation for them. It looks at its history: its evolution from Computer-Assisted Instruction (CAI). After looking at the structure of a ‘typical’ ITS, the paper further examines and discusses some other architectures. Several classic ITSs are reviewed, mainly due to their historical significance or because they best demonstrate some of the principles of intelligent tutoring. A reasonably representative list of ITSs is also provided in order to provide a better appreciation of this vibrant field as well as reveal the scope of existing tutors. The paper concludes, perhaps more appropriately, with some of the author's viewpoints on a couple of controversial issues in the intelligent tutoring domain. <s> BIB002 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> RELATED SURVEY PAPERS <s> Tutoring systems are described as having two loops. The outer loop executes once for each task, where a task usually consists of solving a complex, multi-step problem. The inner loop executes once for each step taken by the student in the solution of a task. The inner loop can give feedback and hints on each step. The inner loop can also assess the student's evolving competence and update a student model, which is used by the outer loop to select a next task that is appropriate for the student. For those who know little about tutoring systems, this description is meant as a demystifying introduction. For tutoring system experts, this description illustrates that although tutoring systems differ widely in their task domains, user interfaces, software structures, knowledge bases, etc., their behaviors are in fact quite similar. <s> BIB003 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> RELATED SURVEY PAPERS <s> Fifteen years ago, research started on SQL-Tutor, the first constraint-based tutor. The initial efforts were focused on evaluating Constraint-Based Modeling (CBM), its effectiveness and applicability to various instructional domains. Since then, we extended CBM in a number of ways, and developed many constraint-based tutors. Our tutors teach both well- and ill-defined domains and tasks, and deal with domain- and meta-level skills. We have supported mainly individual learning, but also the acquisition of collaborative skills. Authoring support for constraint-based tutors is now available, as well as mature, well-tested deployment environments. Our current research focuses on building affect-sensitive and motivational tutors. 
Over the period of fifteen years, CBM has progressed from a theoretical idea to a mature, reliable and effective methodology for developing effective tutors. <s> BIB004
The field of ITS has a long history of productive research and continues to grow. There have been a number of well-known surveys to keep researchers, new and old, updated. In this section, we list these surveys with a few key points about each. These surveys can be divided into two main categories: surveys that present a general discussion of ITSs, and surveys that specialize in a specific dimension of ITSs. A well-known survey belonging to the first category was published in 1990 by Nwana BIB002 . The survey identifies the components of ITSs and describes the evolution of ITSs from Computer-Assisted Instruction, as well as some of the popular ITSs of that era. Another survey on ITS was published in 1994 by Shute et al. BIB001 . This is a more in-depth survey regarding the history of ITS, ITS evaluation, and the future of ITSs as seen at the time. Finally, in-depth case studies were published by Woolf et al. in 2001 [11] for the purpose of presenting the intelligent capabilities of ITSs when interacting with students. Four tutors were used to exhibit these abilities. The authors ended by discussing evaluations and some critical development issues for ITSs of the time. The other survey category is more concerned with reviewing a specific dimension of ITSs. Authoring tools in ITSs were reviewed by Murray in 2003 . The paper is an in-depth summary and analysis of authoring tools in ITSs along with a characterization of each authoring tool. Another example of a specific topic-based survey paper concerns conversational ITSs . The history of constraint-based tutors was reviewed in 2012 by Mitrovic BIB004 . The paper concentrated on the history and advanced features that have been implemented in these tutoring systems. Other survey papers in this category cover dimensions such as the behavior of ITSs BIB003 and the behavior of ITSs in ill-defined domains .
Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> HUMAN TUTORS VS. COMPUTER TUTORS <s> T w o University of Chicago doctoral students in education, Anania (1982, 1983) and Burke (1984), completed dissertations in which they compared student learning under the following three conditions of instruction: 1. Conventional. Students learn the subject matter in a class with about 30 students per teacher. Tests are given periodically for marking the students. 2. Mastery Learning. Students learn the subject matter in a class with about 30 students per teacher. The instruction is the same as in the conventional class (usually with the same teacher). Formative tests (the same tests used with the conventional group) are given for feedback followed by corrective procedures and parallel formative tests to determine the extent to which the students have mastered the subject matter. 3. Tutoring. Students learn the subject matter with a good tutor for each student (or for two or three students simultaneously). This tutoring instruction is followed periodically by formative tests, feedback-corrective procedures, and parallel formative tests as in the mastery learning classes. It should be pointed out that the need for corrective work under tutoring is very small. <s> BIB001 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> HUMAN TUTORS VS. COMPUTER TUTORS <s> There has been much debate about instructional strategies for computerized learning environments. Many of the arguments designed to choose between the various philosophies have appealed, at least implicitly, to the behavior of effective human teachers. In this article, we compare the guidance and support offered by human tutors with that offered by intelligent tutoring systems. First, we review research on human tutoring strategies in various domains. Then we investigate the capabilities of a widely used technique for providing feedback, model tracing. Finally, we contrast the types of guidance and support provided by human tutors with those in intelligent tutoring systems, by examining the process of recovering from impasses encountered during problem solving. In general, the support offered by human tutors is more flexible and more subtle than that offered by model tracing tutors, but the two are more similar than sometimes argued. <s> BIB002
A number of studies have shown the effectiveness of one-on-one human tutors [18] BIB001 . When students struggle with difficulties in understanding concepts or exercises, the most effective choice is to seek a one-on-one tutor. Human tutors are able to provide students with a variety of forms of support. Good human tutors allow students to do as much work as possible while guiding them to keep them on track towards solutions BIB002 . Of course, students learning on their own can also increase their knowledge and reasoning skills; however, this may consume much time and effort. A one-on-one tutor allows the student to work around difficulties by guiding them to a strategy that works and helping them understand what does not. In addition, tutors usually promote a sense of challenge, provoke curiosity, and maintain a student's feeling of being in control. Human tutors give hints and suggestions to students rather than giving them explicit solutions, which motivates students to overcome challenges. Furthermore, human tutors are highly interactive in that they give constant feedback to students while the students are solving problems. In order to enable an ITS to give feedback similar to that of a human tutor, we must ensure that it interacts with students as human tutors do. This leads to the question of how to make an ITS deal with students as effectively as human tutors. When modeling ITSs, a student's problem-solving process must be monitored step by step. By keeping track of the steps incrementally, it is possible to detect when a student has made a mistake so that the system can intervene to help the student recover. Feedback can be provided when mistakes are made, and hints can be given if students are unsure of how to proceed. One technique used for tracing a student's problem solving is to match the steps a student takes against a rule-based domain expert. In the model tracing technique, the system monitors and follows a student's progress step by step. If the student makes an error or a wrong assumption, the system intervenes to give explanatory feedback, a hint, or a suggestion that allows the student to diagnose the error; otherwise, the system silently follows the student's progress. Many experiments have shown how model tracing facilitates learning performance in many educational areas, for example in the Geometry Tutor and in the Graphical Instruction in LISP tutor (GIL) BIB002 . Indeed, model tracing tutoring systems support students' learning of the target domain BIB002 [20].
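To make the model tracing cycle concrete, the following minimal sketch (written in Python for illustration; the rule names, steps, and feedback messages are invented and do not come from any of the tutors cited above) classifies a single student step against correct and buggy production rules and decides whether the tutor should intervene.

    # Hypothetical rules for one algebra step; real tutors use far richer rule sets.
    CORRECT_RULES = {
        "isolate-variable": lambda state, step: step == state["expected_next"],
    }
    BUGGY_RULES = {
        "sign-error": (lambda state, step: step == "-" + state["expected_next"],
                       "Check the sign when you move a term across the equals sign."),
    }

    def trace_step(state, student_step):
        # Correct rule matched: follow the student silently.
        for name, matches in CORRECT_RULES.items():
            if matches(state, student_step):
                return "correct", None
        # Buggy rule matched: give targeted feedback for that misconception.
        for name, (matches, feedback) in BUGGY_RULES.items():
            if matches(state, student_step):
                return "buggy:" + name, feedback
        # No rule matched: the step is hypothesized to be an error; offer a hint.
        return "unrecognized", "Hint: try isolating the variable first."

For example, trace_step({"expected_next": "2x"}, "-2x") would report the hypothetical sign-error bug and return its feedback message, while a step matching the correct rule would be followed silently.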
Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> EFFECTIVENESS OF ITS <s> T w o University of Chicago doctoral students in education, Anania (1982, 1983) and Burke (1984), completed dissertations in which they compared student learning under the following three conditions of instruction: 1. Conventional. Students learn the subject matter in a class with about 30 students per teacher. Tests are given periodically for marking the students. 2. Mastery Learning. Students learn the subject matter in a class with about 30 students per teacher. The instruction is the same as in the conventional class (usually with the same teacher). Formative tests (the same tests used with the conventional group) are given for feedback followed by corrective procedures and parallel formative tests to determine the extent to which the students have mastered the subject matter. 3. Tutoring. Students learn the subject matter with a good tutor for each student (or for two or three students simultaneously). This tutoring instruction is followed periodically by formative tests, feedback-corrective procedures, and parallel formative tests as in the mastery learning classes. It should be pointed out that the need for corrective work under tutoring is very small. <s> BIB001 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> EFFECTIVENESS OF ITS <s> This meta-analysis synthesizes research on the effectiveness of intelligent tutoring systems (ITS) for college students. Thirty-five reports were found containing 39 studies assessing the effectiveness of 22 types of ITS in higher education settings. Most frequently studied were AutoTutor, Assessment and Learning in Knowledge Spaces, eXtended Tutor-Expert System, and Web Interface for Statistics Education. Major findings include (a) Overall, ITS had a moderate positive effect on college students’ academic learning (g = .32 to g = .37); (b) ITS were less effective than human tutoring, but they outperformed all other instruction methods and learning activities, including traditional classroom instruction, reading printed text or computerized materials, computer-assisted instruction, laboratory or homework assignments, and no-treatment control; (c) ITS’s effectiveness did not significantly differ by different ITS, subject domain, or the manner or degree of their involvement in instruction and learning; and (d) effectiveness in earlier studies appeared to be significantly greater than that in more recent studies. In addition, there is some evidence suggesting the importance of teachers and pedagogy in ITS-assisted learning. <s> BIB002 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> EFFECTIVENESS OF ITS <s> Intelligent Tutoring Systems (ITS) are computer programs that model learners’ psychological states to provide individualized instruction. They have been developed for diverse subject areas (e.g., algebra, medicine, law, reading) to help learners acquire domain-specific, cognitive and metacognitive knowledge. A meta-analysis was conducted on research that compared the outcomes from students learning from ITS to those learning from non-ITS learning environments. The meta-analysis examined how effect sizes varied with type of ITS, type of comparison treatment received by learners, type of learning outcome, whether knowledge to be learned was procedural or declarative, and other factors. 
After a search of major bibliographic databases, 107 effect sizes involving 14,321 participants were extracted and analyzed. The use of ITS was associated with greater achievement in comparison with teacher-led, large-group instruction (g .42), non-ITS computer-based instruction (g .57), and textbooks or workbooks (g .35). There was no significant difference between learning from ITS and learning from individualized human tutoring (g –.11) or small-group instruction (g .05). Significant, positive mean effect sizes were found regardless of whether the ITS was used as the principal means of instruction, a supplement to teacher-led instruction, an integral component of teacher-led instruction, or an aid to homework. Significant, positive effect sizes were found at all levels of education, in almost all subject domains evaluated, and whether or not the ITS provided feedback or modeled student misconceptions. The claim that ITS are relatively effective tools for learning is consistent with our analysis of potential publication bias. <s> BIB003
An important question is whether or not ITSs are really effective in producing the learning outcomes they claim to provide. There have been a number of meta-analysis efforts to investigate the effectiveness of ITSs. The following presents a few recent efforts with their findings. A meta-analysis was conducted by VanLehn in 2011 for the purpose of comparing the effectiveness of computer tutoring, human tutoring, and no tutoring . In this analysis, computer tutors were characterized based on the granularity of the user interface interactions, including answer-based, step-based, and substep-based tutoring systems. The analysis included studies published between 1975 and 2010, with ten comparisons drawn from 28 evaluation studies. The study found that human tutoring raised test scores by an effect size of 0.79 compared to no tutoring, well below the effect size of 2.0 reported earlier by Bloom BIB001 . Moreover, it was found that step-based tutoring (0.76) was almost as effective as human tutoring, whereas substep-based tutoring yielded an effect size of only 0.40 compared to no tutoring. VanLehn's findings suggest that tutoring researchers should focus on ways to improve computer tutoring so that it approaches the two-standard-deviation advantage that Bloom reported for human tutoring over conventional instruction. The meta-analysis conducted by Steenbergen-Hu and Cooper in 2013 analyzed the effectiveness of ITSs on K-12 students' math learning BIB002 . This empirical research examined 26 reports comparing the effectiveness of ITSs with that of regular classroom instruction. Their finding was that ITSs did not have a significant effect on student learning outcomes when used for a short period; however, the effectiveness appeared to be greater when an ITS was used for one full school year or longer. In addition, the effects appeared to be greater on general students than on low achievers. The meta-analysis by Ma et al. BIB003 was conducted in 2014 for the purpose of comparing the learning outcomes of those who learn by using ITSs and those who learn in non-ITS learning environments. Their goal was to verify the effect sizes of ITSs taking into account factors such as the type of ITS, the type of instruction (individual, small-group, or large-group human instruction, etc.), the subject domain (chemistry, physics, mathematics, etc.), and other factors. Ma et al. analyzed 107 effect size findings from 73 separate studies. The ITS environment was associated with greater learning achievement compared to teacher-led and large-group instruction with an effect size of 0.42, 0.57 for non-ITS computer-based instruction, and 0.35 for textbooks or workbooks. On the other hand, there was no considerable difference between learning outcomes from ITSs and from individualized human tutoring (-0.11) or small-group instruction (0.05). Ma et al. reported that ITSs achieved higher learning outcomes than all other forms of instruction except individualized human tutoring and small-group instruction. In addition, the ITS effect varied as features and characteristics of ITSs, student attributes, domain knowledge, and other factors varied. Finally, the meta-analysis produced by Kulik and Fletcher in 2015 [23] compared the learning effectiveness of ITSs with conventional classes across 50 studies. In 92% of the studies, students who interacted with ITSs outperformed those who received traditional class instruction. In 39 of the 50 studies, the performance improvement reached a median effect size of 0.66, which is considered moderate to strong.
However, the effect on standardized tests was weak, with an effect size of 0.13. Because there is no general agreement on the effectiveness of ITSs, several questions remain for researchers to answer: How effective are ITSs really? What are the critical factors that affect learning in ITSs? What changes can be made to improve ITSs?
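The effect sizes quoted above are standardized mean differences (Cohen's d, or Hedges' g when a small-sample correction is applied). As a reminder of what such a number means, the following short sketch (Python; the input values are purely illustrative and not taken from any of the cited meta-analyses) computes Hedges' g from the summary statistics of a treatment group and a control group.

    import math

    def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
        # Pooled standard deviation of the two groups.
        pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2))
        d = (mean_t - mean_c) / pooled_sd           # Cohen's d
        correction = 1 - 3 / (4 * (n_t + n_c) - 9)  # small-sample bias correction
        return d * correction

    # Illustrative numbers: a 6-point gain on a test with SD 10 and 30 students per group
    # gives g of roughly 0.59, i.e., in the range the meta-analyses above call moderate.
    print(hedges_g(78, 10, 30, 72, 10, 30))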
Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> ARCHITECTURE OF ITS <s> Tutoring systems are described as having two loops. The outer loop executes once for each task, where a task usually consists of solving a complex, multi-step problem. The inner loop executes once for each step taken by the student in the solution of a task. The inner loop can give feedback and hints on each step. The inner loop can also assess the student's evolving competence and update a student model, which is used by the outer loop to select a next task that is appropriate for the student. For those who know little about tutoring systems, this description is meant as a demystifying introduction. For tutoring system experts, this description illustrates that although tutoring systems differ widely in their task domains, user interfaces, software structures, knowledge bases, etc., their behaviors are in fact quite similar. <s> BIB001 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> ARCHITECTURE OF ITS <s> Intelligent Tutoring Systems (ITS) is the interdisciplinary field that investigates how to devise educational systems that provide instruction tailored to the needs of individual learners, as many good teachers do. Research in this field has successfully delivered techniques and systems that provide adaptive support for student problem solving in a variety of domains. There are, however, other educational activities that can benefit from individualized computer-based support, such as studying examples, exploring interactive simulations and playing educational games. Providing individualized support for these activities poses unique challenges, because it requires an ITS that can model and adapt to student behaviors, skills and mental states often not as structured and welldefined as those involved in traditional problem solving. This paper presents a variety of projects that illustrate some of these challenges, our proposed solutions, and future opportunities. <s> BIB002 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> ARCHITECTURE OF ITS <s> Acquiring and representing a domain knowledge model is a challenging problem that has been the subject of much research in the fields of both AI and AIED. This part of the book provides an overview of possible methods and techniques that are used for that purpose. This introductory chapter first presents and discusses the epistemological issue associated with domain knowledge engineering. Second, it briefly presents several knowledge representation languages while considering their expressivity, inferential power, cognitive plausibility and pedagogical emphasis. Lastly, the chapter ends with a presentation of the subsequent chapters in this part of the book. <s> BIB003
ITSs vary greatly in architecture; it is rare to find two ITSs based on the same architecture. There are three types of knowledge that ITSs possess: knowledge about the content that will be taught, knowledge about the student, and knowledge about teaching strategies. Additionally, an ITS needs communication knowledge in order to present the desired information to students. Consequently, the traditional 'typical' ITS has four basic components: the domain model, which stores domain knowledge; the student model, which stores the current state of an individual student in order to choose a suitable new problem for that student; the tutor model, which stores pedagogical knowledge and makes decisions about when and how to intervene (interventions can take different forms of interaction: Socratic dialogs, hints, feedback from the system, etc.); and finally the user interface model, which gives access to the domain knowledge elements. Figure 2 shows the traditional architecture of ITSs BIB003 [8] BIB002 . In addition, even though ITSs differ greatly in their internal structures and components and contain a wide variety of features, their behaviors are similar in some ways, as stated by VanLehn BIB001 . According to VanLehn, ITSs behave similarly in that they involve two loops, an outer loop and an inner loop. The outer loop decides which task the student should practice next, based on the student's knowledge history and background. The inner loop is responsible for monitoring the student's solution steps within a task and providing appropriate pedagogical intervention, such as feedback on a step, hints on the next step, assessment of knowledge, and review of the solution. The goal of this section is to describe these components along with their functions.
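Before describing the individual components, VanLehn's two-loop behavior can be summarized in pseudocode. The sketch below is illustrative only; the class and method names are hypothetical (written in Python) and do not correspond to any particular system.

    def run_tutor(student_model, task_pool, domain_model, tutor_model):
        # Outer loop: one iteration per task, chosen from the student's current state.
        while not student_model.curriculum_mastered():
            task = tutor_model.select_task(task_pool, student_model)
            # Inner loop: one iteration per student step within the task.
            while not task.solved():
                step = task.next_student_step()
                diagnosis = domain_model.evaluate(step, task)   # correct, buggy, or unknown
                tutor_model.give_feedback(diagnosis)            # feedback or hint on the step
                student_model.update(diagnosis)                 # assess evolving competence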
Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Domain Model <s> Numerous approaches to student modeling have been proposed since the inception of the field more than three decades ago. What the field is lacking completely is comparative analyses of different student modeling approaches. In this paper we compare Cognitive Tutoring to Constraint-Based Modeling (CBM). We present our experiences in implementing a database design tutor using both methodologies and highlight their strengths and weaknesses. We compare their characteristics and argue the differences are often more apparent than real: for specific domains one approach may be favoured over the other, making them viable complementary methods for supporting learning. <s> BIB001 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Domain Model <s> This is a non-expert overview of Intelligent Tutoring Systems (ITSs), a way in which Artificial Intelligence (AI) techniques are being applied to education. It introduces ITSs and the motivation for them. It looks at its history: its evolution from Computer-Assisted Instruction (CAI). After looking at the structure of a ‘typical’ ITS, the paper further examines and discusses some other architectures. Several classic ITSs are reviewed, mainly due to their historical significance or because they best demonstrate some of the principles of intelligent tutoring. A reasonably representative list of ITSs is also provided in order to provide a better appreciation of this vibrant field as well as reveal the scope of existing tutors. The paper concludes, perhaps more appropriately, with some of the author's viewpoints on a couple of controversial issues in the intelligent tutoring domain. <s> BIB002 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Domain Model <s> In this paper, we make a first effort to define requirements for knowledge representation (KR) in an ITS. The requirements concern all stages of an ITS's life cycle (construction, operation and maintenance), all types of users (experts, engineers, learners) and all its modules (domain knowledge, user model, pedagogical model). We also briefly present and compare various KR formalisms used (or that could be used) in ITSs as far as the specified KR requirements are concerned. It appears that various hybrid approaches to knowledge representation can satisfy the requirements in a greater degree than that of single representations. Another finding is that there is not a hybrid formalism that can satisfy the requirements of all of the modules of an ITS, but each one individually. So, a multi-paradigm representation environment could provide a solution to requirements satisfaction. <s> BIB003 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Domain Model <s> Model tracing and constraint-based modeling are two prominent paradigms on which intelligent tutoring systems (ITSs) have been based. We Kodaganallur, Weitz and Rosenthal (2005), have written a paper comparing the two paradigms, and offering guidance based on this comparison to prospective authors of ITSs. In a detailed critique of our paper, Mitrovic and Ohlsson (2006) have taken issue with many of our observations and conclusions. In this work we refute their critique and provide a more general, critical assessment of constraint-based modeling. 
<s> BIB004 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Domain Model <s> This paper describes an effort to model students' changing knowledge state during skill acquisition. Students in this research are learning to write short programs with the ACT Programming Tutor (APT). APT is constructed around a production rule cognitive model of programming knowledge, called theideal student model. This model allows the tutor to solve exercises along with the student and provide assistance as necessary. As the student works, the tutor also maintains an estimate of the probability that the student has learned each of the rules in the ideal model, in a process calledknowledge tracing. The tutor presents an individualized sequence of exercises to the student based on these probability estimates until the student has ‘mastered’ each rule. The programming tutor, cognitive model and learning and performance assumptions are described. A series of studies is reviewed that examine the empirical validity of knowledge tracing and has led to modifications in the process. Currently the model is quite successful in predicting test performance. Further modifications in the modeling process are discussed that may improve performance levels. <s> BIB005 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Domain Model <s> A cognitive model is a set of production rules or skills encoded in intelligent tutors to model how students solve problems. It is usually generated by brainstorming and iterative refinement between subject experts, cognitive scientists and programmers. In this paper we propose a semi-automated method for improving a cognitive model called Learning Factors Analysis that combines a statistical model, human expertise and a combinatorial search. We use this method to evaluate an existing cognitive model and to generate and evaluate alternative models. We present improved cognitive models and make suggestions for improving the intelligent tutor based on those models. <s> BIB006 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Domain Model <s> Knowledge tracing (KT)[1] has been used in various forms for adaptive computerized instruction for more than 40 years. However, despite its long history of application, it is difficult to use in domain model search procedures, has not been used to capture learning where multiple skills are needed to perform a single action, and has not been used to compute latencies of actions. On the other hand, existing models used for educational data mining (e.g. Learning Factors Analysis (LFA)[2]) and model search do not tend to allow the creation of a “model overlay” that traces predictions for individual students with individual skills so as to allow the adaptive instruction to automatically remediate performance. Because these limitations make the transition from model search to model application in adaptive instruction more difficult, this paper describes our work to modify an existing data mining model so that it can also be used to select practice adaptively. We compare this new adaptive data mining model (PFA, Performance Factors Analysis) with two versions of LFA and then compare PFA with standard KT. 
<s> BIB007 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Domain Model <s> Intelligent Tutoring Systems (ITSs) that employ a model-tracing methodology have consistently shown their effectiveness. However, what evidently makes these tutors effective, the cognitive model embedded within them, has traditionally been difficult to create, requiring great expertise and time, both of which come at a cost. Furthermore, an interface has to be constructed that communicates with the cognitive model. Together these constitute a high bar that needs to be crossed in order to create such a tutor. We outline a system that lowers this bar on both accounts and that has been used to produce commercial-quality tutors. First, we discuss and evaluate a tool that allows authors who are not cognitive scientists or programmers to create a cognitive model. Second, we detail a way for this cognitive model to communicate with third-party interfaces. <s> BIB008 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Domain Model <s> Acquiring and representing a domain knowledge model is a challenging problem that has been the subject of much research in the fields of both AI and AIED. This part of the book provides an overview of possible methods and techniques that are used for that purpose. This introductory chapter first presents and discusses the epistemological issue associated with domain knowledge engineering. Second, it briefly presents several knowledge representation languages while considering their expressivity, inferential power, cognitive plausibility and pedagogical emphasis. Lastly, the chapter ends with a presentation of the subsequent chapters in this part of the book. <s> BIB009 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Domain Model <s> We present an inference algorithm for perturbation models based on Poisson regression. The algorithm is designed to handle unclassified input with multiple errors described by independent mal-rules. This knowledge representation provides an intelligent tutoring system with local and global information about a student, such as error classification (local) and prediction of further performance (global). The inference algorithm has been employed in a student model for spelling with a detailed set of letter and phoneme based mal-rules. The local and global information about the student allows for appropriate remediation actions to adapt to their needs. The error classification, student model prediction and the efficacy of the adapted remediation actions have been validated on the data of two large-scale user studies. The enhancement of the spelling training based on the novel student model resulted a significant increase in the student learning performance. <s> BIB010 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Domain Model <s> This chapter addresses the challenge of building or authoring an Intelligent Tutoring System (ITS), along with the problems that have arisen and been dealt with, and the solutions that have been tested. We begin by clarifying what building an ITS entails, and then position today’s systems in the overall historical context of ITS research. The chapter concludes with a series of open questions and an introduction to the other chapters in this part of the book. 
<s> BIB011 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Domain Model <s> Nowadays, beside computer has come into our life, learning, independent from time and place, is implemented in an effective structure. Since many studies are consummated education is implemented in a structure which takes into account. Benefits of the qualities include being more effective, qualified and independent from time and place. In order to develop the software's that present students effective instruction methods and provide education with being adapted to students, studies are carried out. Intelligent Tutoring Systems (ITSs) are tutoring systems which form with using artificial intelligence techniques in computer programs to facilitate instruction. These systems are based on cognitive learning theory which is a learning theory interested in how information organizes in human's memory. ITSs are intelligent programs which know what, how and whom they will teach so computers play an important part in education and instruction aims are performed and suggested in this work. In this paper we describe ITSs in educational application and demonstrate used modules in ITSs. Otherwise, these have been compared with computer-aided learning systems. The results indicate that these systems formed with artificial intelligence techniques omit this incompetence with vast rate and countenance students and teachers to learning in a better manner. Nowadays, with the 21st century's, computers play an important part in that education-instruction aims are implemented. Beside computer has come into our life, learning, independent from time and place, is performed in an effective structure. Also, software that present students effective instruction methods and provide education with being adapted to students begins to be developed. The most important software category which is developed with this aim is Intelligent Tutoring System (ITS) which is formed by using computer Technologies and Artificial Intelligence. ITSs are tutoring systems which form with using artificial intelligence techniques in computer programs to facilitate instruction (1, 2). These systems are based on cognitive learning theory which is a learning theory interested in how information organizes in human's memory (3, 4). ITSs are intelligent programs which know what, how and whom they will teach (5, 6, 7, and 8). <s> BIB012 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Domain Model <s> Fifteen years ago, research started on SQL-Tutor, the first constraint-based tutor. The initial efforts were focused on evaluating Constraint-Based Modeling (CBM), its effectiveness and applicability to various instructional domains. Since then, we extended CBM in a number of ways, and developed many constraint-based tutors. Our tutors teach both well- and ill-defined domains and tasks, and deal with domain- and meta-level skills. We have supported mainly individual learning, but also the acquisition of collaborative skills. Authoring support for constraint-based tutors is now available, as well as mature, well-tested deployment environments. Our current research focuses on building affect-sensitive and motivational tutors. Over the period of fifteen years, CBM has progressed from a theoretical idea to a mature, reliable and effective methodology for developing effective tutors. 
<s> BIB013 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Domain Model <s> In many domains, problem solving involves the application of general domain principles to specific problem representations. In 3 classroom studies with an intelligent tutoring system, we examined the impact of (learner-generated) interactions and (tutor-provided) visual cues designed to facilitate rule–diagram mapping (where students connect domain knowledge to problem diagrams), with the goal of promoting students’ understanding of domain principles. Understanding was not supported when students failed to form a visual representation of rule–diagram mappings (Experiment 1); student interactions with diagrams promoted understanding of domain principles, but providing visual representations of rule–diagram mappings negated the benefits of interaction (Experiment 2). However, scaffolding student generation of rule–diagram mappings via diagram highlighting supported better understanding of domain rules that manifested at delayed testing, even when students already interacted with problem diagrams (Experiment 3). This work extends the literature on learning technologies, generative processing, and desirable difficulties by demonstrating the potential of visually based interaction techniques implemented during problem solving to have long-term impact on the type of knowledge that students develop during intelligent tutoring. <s> BIB014 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Domain Model <s> In order for an Intelligent Tutoring System (ITS) to correct students’ exercises, it must know how to solve the same type of problems that students do and the related knowledge components. It can, thereby, compare the desirable solution with the student’s answer. This task can be accomplished by an expert system. However, it has some drawbacks, such as an exponential complexity time, which impairs the desirable real-time response. In this paper we describe the expert system (ES) module of an Algebra ITS, called PAT2Math. The ES is responsible for correcting student steps and modeling student knowledge components during equations problem solving. Another important function of this module is to demonstrate to students how to solve a problem. In this paper, we focus mainly on the implementation of this module as a rule-based expert system. We also describe how we reduced the complexity of this module from O(nd) to O(d), where n is the number of rules in the knowledge base, by implementing some meta-rules that aim at inferring the operations students applied in order to produce a step. We evaluated our approach through a user study with forty-three seventh grade students. The students who interacted with our tool showed statistically higher scores on equation solving tests, after solving algebra exercises with PAT2Math during an approximately two-hour session, than students who solved the same exercises using only paper and pencil. <s> BIB015
The expert knowledge, the domain expert, or the expert model represents the facts, concepts, problem-solving strategies, and rules of the particular domain to be taught, and provides ITSs with the knowledge of what they are teaching. The material and detailed knowledge are usually derived from experts who have years of experience in the domain to be taught. It is important to note that the goal of the domain model is to determine what to teach; it is separate from the control information (how to teach), which is represented by the tutoring model . The domain expert fulfills a double function. Firstly, it acts as the source of the knowledge to be presented to students through explanations, responses, and questions. Secondly, it evaluates the student's performance. In order to accomplish these tasks, the system must be able to produce correct solutions to problems so that the student's answers can be compared to those of the system. When the ITS is required to guide the student in solving problems, the expert model must be able to generate multiple sensible solution paths to help fill the gaps in the student's knowledge. The expert model can also provide an overall progress assessment of students by establishing specific criteria against which to compare knowledge BIB002 [26][27] BIB012 . An ITS must have a knowledge base that contains the information that will be taught to learners. The need for suitable knowledge representation (KR) languages must be considered in representing and using this knowledge. The principles that need to be considered when choosing KR languages are the expressivity, inference capacity, cognitive plausibility, and pedagogical orientation of the language BIB009 . Hatzilygeroudis and Prentzas made the first efforts to define and analyze the requirements for knowledge representation in ITSs BIB003 . Various knowledge representation and reasoning schemes have been used in ITSs. These include symbolic rules, fuzzy logic, Bayesian networks, and case-based reasoning, as well as hybrid representations such as neuro-symbolic and neuro-fuzzy approaches. More details on examples of ITS systems along with the knowledge representation languages used can be found in BIB009 . The following explains three traditional types of ITS approaches for representing and reasoning with domain knowledge. Two types of domain knowledge models used frequently in ITSs are the cognitive model and the constraint-based model; the third approach incorporates an expert system in the ITS BIB013 . 6.1.1 Cognitive Model. Cognitive tutors BIB014 have been fielded in a variety of scientific domains such as algebra, physics, geometry, and computer programming BIB004 . Cognitive tutors use a cognitive model to provide students with immediate feedback. The goal of this approach is to provide a detailed and precise description of the relevant knowledge in a task domain, including principles and strategies for problem solving. A rule-based model generates a step-by-step solution in order to support students in a rich problem-solving environment, giving students feedback on the correctness of each step in the solution and keeping track of many approaches (strategies) to the final correct answer. Not only are the correct solutions represented, but also the common mistakes that students usually make (bug libraries), as shown in Figure 3 [33] BIB001 .
Cognitive tutors have been built based on the ACT-R theory of cognition and learning BIB009 . The underlying principle of ACT-R is the distinction between explicit and implicit knowledge: procedural knowledge is considered implicit, whereas declarative knowledge is explicit. Declarative knowledge consists of facts and concepts in the form of a semantic net, or a similar network of concepts, linking what are called chunks. In contrast, procedural memory represents knowledge of how we do things in the form of production rules written in IF-THEN format. Thus, chunks and productions are the basic building blocks of an ACT-R model BIB010 . In order to use cognitive models to facilitate tutoring, an algorithm called model tracing is used. The tutor assesses the student's solution by comparing the student's solution steps against what the model would do for the same task. If the student action is the same as the model action, it is deemed correct; otherwise it is not. An error is hypothesized when a student step does not match any rule or when it matches one or more of the buggy rules. Each production rule that generates a matching action can be interpreted as a skill possessed by the student, so over time the model is able to evaluate the skills that have been mastered by the student (knowledge tracing). Thus, knowledge tracing is used to monitor the skills that students have acquired from solving a problem BIB010 BIB015 . The knowledge tracing model called Cognitive Mastery Learning is one of the most popular methods for estimating the probability that a student knows each skill BIB005 . The model continuously assesses the probability that a student has acquired each skill, taking into account four parameters for each skill (a minimal sketch of such an update is given at the end of this subsection). Cognitive Mastery Learning is known to produce a significant improvement in learning and has a long history of application. Educational data mining approaches such as Learning Factors Analysis (LFA) BIB006 and Performance Factors Analysis (PFA) BIB007 have been used to further improve ITSs using this model. Despite the fact that cognitive tutors have led to impressive student learning gains in a variety of domains, these model tracing tutors have not been widely adopted in educational or other settings such as corporate training. The reason is that building complete and optimal cognitive tutors requires software components such as an interface, a curriculum, a learner-interaction management system, and a teacher reporting package. Additionally, the process needs a team of professionals working together, resulting in high cost and long development time. These two requirements have limited the practical use of such tutors BIB008 . To reduce the cost of building model tracing tutors, authoring tools that provide some of these capabilities have been built. An example is the Cognitive Tutor Authoring Tools (CTAT) BIB011 . CTAT has been used to create full ITSs without programming, which has led to a new ITS paradigm called Example-Tracing Tutors BIB011 .
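As noted above, knowledge tracing estimates the probability that a student has mastered each skill. Although the exact parameterization varies across systems, it is commonly implemented as Bayesian Knowledge Tracing, whose four per-skill parameters are the initial probability of knowing the skill, the probability of learning it at a step, and the slip and guess probabilities. The following sketch (Python; the parameter values are illustrative and not those of any cited tutor) shows one update of the mastery estimate after observing a step.

    def bkt_update(p_know, correct, p_transit=0.1, p_slip=0.1, p_guess=0.2):
        # Bayesian update of P(skill known) given one observed step.
        if correct:
            posterior = p_know * (1 - p_slip) / (p_know * (1 - p_slip) + (1 - p_know) * p_guess)
        else:
            posterior = p_know * p_slip / (p_know * p_slip + (1 - p_know) * (1 - p_guess))
        # Learning may also occur on this step, with probability p_transit.
        return posterior + (1 - posterior) * p_transit

    # Example: starting from an initial estimate of 0.3, three correct steps in a row
    # raise the mastery estimate; a tutor typically stops drilling the skill once the
    # estimate exceeds a mastery threshold (0.95 is a common choice).
    p = 0.3
    for observed_correct in (True, True, True):
        p = bkt_update(p, observed_correct)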
Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Constraint based Model (CBM) <s> Model tracing and constraint-based modeling are two prominent paradigms on which intelligent tutoring systems (ITSs) have been based. We Kodaganallur, Weitz and Rosenthal (2005), have written a paper comparing the two paradigms, and offering guidance based on this comparison to prospective authors of ITSs. In a detailed critique of our paper, Mitrovic and Ohlsson (2006) have taken issue with many of our observations and conclusions. In this work we refute their critique and provide a more general, critical assessment of constraint-based modeling. <s> BIB001 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Constraint based Model (CBM) <s> Fifteen years ago, research started on SQL-Tutor, the first constraint-based tutor. The initial efforts were focused on evaluating Constraint-Based Modeling (CBM), its effectiveness and applicability to various instructional domains. Since then, we extended CBM in a number of ways, and developed many constraint-based tutors. Our tutors teach both well- and ill-defined domains and tasks, and deal with domain- and meta-level skills. We have supported mainly individual learning, but also the acquisition of collaborative skills. Authoring support for constraint-based tutors is now available, as well as mature, well-tested deployment environments. Our current research focuses on building affect-sensitive and motivational tutors. Over the period of fifteen years, CBM has progressed from a theoretical idea to a mature, reliable and effective methodology for developing effective tutors. <s> BIB002 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Constraint based Model (CBM) <s> AutoTutor is a natural language tutoring system that has produced learning gains across multiple domains (e.g., computer literacy, physics, critical thinking). In this paper, we review the development, key research findings, and systems that have evolved from AutoTutor. First, the rationale for developing AutoTutor is outlined and the advantages of natural language tutoring are presented. Next, we review three central themes in AutoTutor’s development: human-inspired tutoring strategies, pedagogical agents, and technologies that support natural-language tutoring. Research on early versions of AutoTutor documented the impact on deep learning by co-constructed explanations, feedback, conversational scaffolding, and subject matter content. Systems that evolved from AutoTutor added additional components that have been evaluated with respect to learning and motivation. The latter findings include the effectiveness of deep reasoning questions for tutoring multiple domains, of adapting to the affect of low-knowledge learners, of content over surface features such as voices and persona of animated agents, and of alternative tutoring strategies such as collaborative lecturing and vicarious tutoring demonstrations. The paper also considers advances in pedagogical agent roles (such as trialogs) and in tutoring technologies, such semantic processing and tutoring delivery platforms. This paper summarizes and integrates significant findings produced by studies using AutoTutor and related systems. <s> BIB003 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Constraint based Model (CBM) <s> SQL-Tutor is the first constraint-based tutor. 
The initial conference papers about the system were published in 1998 (Mitrovic 1998a, 1998b, 1998c), with an IJAIED paper published in 1999 (Mitrovic and Ohlsson, International Journal Artificial Intelligence in Education, 10(3–4), 238–256, 1999). We published another IJAIED paper in 2003, focussed on the Web-enabled version of the same system (Mitrovic, Artificial Intelligence in Education, 13(2–4), 173–197, 2003). In this paper, we discuss the reasons for developing the system, our experiences with the early versions, and also provide a history of later projects involving SQL-Tutor. <s> BIB004
6.1.2 Constraint-Based Model (CBM). Constraint-based modeling was proposed by Ohlsson in 1992 to overcome difficulties in building the student model BIB002 . Since then, CBM has been used widely in numerous ITSs to represent instructional domains, students' knowledge, and higher-level skills. CBM is based on Ohlsson's theory of learning from performance errors, which resulted in a new methodology for representing knowledge as constraints that must not be violated during problem solving. It is different from model tracing, which generates all possible solution paths using production rules. CBM can be used to represent both domain and student knowledge, and it has been used to design and implement efficient and effective learning environments BIB002 BIB004 . The fundamental idea behind CBM is that constraints represent the basic principles and facts of the underlying domain which a correct solution must follow . The observation here is that all correct solutions to a problem are similar in that they do not violate any domain principles, or "constraints". Instead of representing both the correct and incorrect solution spaces as in model tracing, it is sufficient to capture only the domain principles BIB002 . A constraint is an ordered pair (Cr, Cs), where Cr is the relevance condition and Cs is the satisfaction condition, so the constraint follows the form: If <relevance condition> is true, Then <satisfaction condition> had better also be true. The relevance condition may contain simple or compound tests that specify features of student solutions, whereas the satisfaction condition is an additional test that has to be met in order for the student's solution to be correct BIB001 . The CBM approach was proposed to avoid some limitations of model tracing tutors. First, the nature of CBM's knowledge representation as constraints allows for creativity: the system accepts a student's solution, even if that solution is not explicitly represented in the system, as long as it does not violate any constraints. In contrast, model tracing limits students' solutions to the ones stored in the model. Thus, the idea that different students might use different strategies or beliefs to reach their results is accommodated. Second, creating a bug library as used in model tracing requires a lot of time, since the variety of possible mistakes is vast. Consequently, CBM concentrates on the domain principles that every correct solution must meet. CBM's hypothesis in this regard is that all correct solutions share the same features, so it is enough to represent the correct space by capturing domain principles. In the case of student errors, the system can advise the student on the mistake without having to represent it explicitly. Finally, for some instructional tasks categorized as ill-defined (for details see ), it may even be impossible to follow the steps of the correct solutions, because runnable models expressed as sets of production rules are required for both the expert and the student. CBM avoids this limitation and can handle some ill-defined tasks BIB004 (a minimal constraint-checking sketch is given at the end of this subsection, after the discussion of the expert approach). 6.1.3 Expert Approach. The third approach for representing and reasoning with domain knowledge consists of integrating an expert system into an ITS. This is considered a broad approach in ITS, since several expert system formalisms can be used, such as rule-based systems, neural networks, decision trees, and case-based reasoning. An expert system mimics an expert's decision-making and modeling skills in order to solve a problem .
6.1.3 Expert Approach. The third approach for representing and reasoning with domain knowledge consists of integrating an expert system into an ITS. This is considered a broad approach in ITS research, since several expert system formalisms can be used, such as rule-based systems, neural networks, decision trees and case-based reasoning. An expert system mimics the ability of an expert in terms of decision making and modeling skills, and solves problems . The advantage of the expert system approach is its ability to represent and reason with a broader domain, unlike constraint-based and cognitive models, which work with limited domains . Fournier-Viger et al. showed that an expert system approach should provide for two modalities. First, the expert system should be able to generate expert solutions and then compare these solutions with the learner's solutions. GUIDON is an example of an ITS that uses this modality. The second modality for using an expert system approach in ITSs is to compare ideal solutions with the learners' solutions. Some examples of systems that are able to meet the second modality are AutoTutor BIB003 , and DesignFirst-ITS . Despite the fact that the expert system approach is powerful, it faces some limitations as noted by : "(1) developing or adapting an expert system can be costly and difficult, especially for ill-defined domains; and (2) some expert systems cannot justify their inferences, or provide explanations that are appropriate for learning".
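As a rough illustration of the first modality, the sketch below generates an expert solution with a toy rule-based solver and compares it, step by step, with the learner's submitted steps. The domain (solving a linear equation), the function names and the step format are hypothetical; they are not drawn from GUIDON or any of the systems mentioned above.

def expert_solve(problem):
    """Toy rule-based expert solver: returns an ordered list of solution steps for a*x + b = c."""
    a, b, c = problem["a"], problem["b"], problem["c"]
    return [f"subtract {b} from both sides",
            f"divide both sides by {a}",
            f"x = {(c - b) / a}"]

def compare(expert_steps, learner_steps):
    """Report the first point where the learner's solution diverges from the expert solution."""
    for i, (expected, given) in enumerate(zip(expert_steps, learner_steps)):
        if expected != given:
            return f"Step {i + 1} differs: expected '{expected}', got '{given}'"
    if len(learner_steps) < len(expert_steps):
        return f"Solution incomplete: next expected step is '{expert_steps[len(learner_steps)]}'"
    return "Solution matches the expert solution."

problem = {"a": 2, "b": 3, "c": 11}
learner_steps = ["subtract 3 from both sides", "divide both sides by 2"]
print(compare(expert_solve(problem), learner_steps))
# -> "Solution incomplete: next expected step is 'x = 4.0'"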
Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Student Model <s> This is a non-expert overview of Intelligent Tutoring Systems (ITSs), a way in which Artificial Intelligence (AI) techniques are being applied to education. It introduces ITSs and the motivation for them. It looks at its history: its evolution from Computer-Assisted Instruction (CAI). After looking at the structure of a ‘typical’ ITS, the paper further examines and discusses some other architectures. Several classic ITSs are reviewed, mainly due to their historical significance or because they best demonstrate some of the principles of intelligent tutoring. A reasonably representative list of ITSs is also provided in order to provide a better appreciation of this vibrant field as well as reveal the scope of existing tutors. The paper concludes, perhaps more appropriately, with some of the author's viewpoints on a couple of controversial issues in the intelligent tutoring domain. <s> BIB001 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Student Model <s> A learner model must store all the relevant information about a student, including knowledge and attitude. This paper proposes a domain independent learner model based in the classical overlay approach that can be used in a distributed environment. The model has two sub-models: the learner attitude model, where the static information about the user is stored (user’s personal and technical characteristics, user’s preferences, etc.) and the learner knowledge model, where the user’s knowledge and performance is stored. The knowledge model has four layers: estimated, assessed, inferred by prerequisite and inferred by granularity. The learner model is used as a part of the MEDEA system, so the first and second layers are updated directly by the components of MEDEA and the third and fourth are updated by Bayesian inference. <s> BIB002 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Student Model <s> The work described here pertains to ICICLE, an intelligent tutoring system for which we have designed a user model to supply data for intelligent natural language parse disambiguation. This model attempts to capture the user's mastery of various grammatical units and thus can be used to predict the grammar rules he or she is most likely using when producing language. Because ICICLE's user modeling component must infer the user's language mastery on the basis of limited writing samples, it makes use of an inferencing mechanism that will require knowledge of stereotypic acquisition sequences in the user population. We discuss in this paper the methodology of how we have applied an empirical investigation into user performance in order to derive the sequence of stereotypes that forms the basis of our modeling component's reasoning capabilities. <s> BIB003 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Student Model <s> This paper presents a new model for simulating procedural knowledge in the problem solving process with our ontological system, InfoMap. The method divides procedural knowledge into two parts: process control and action performer. By adopting InfoMap, we hope to help teachers construct curricula (declarative knowledge) and teaching strategies by capturing students’ problem-solving processes (procedural knowledge) dynamically. 
Using the concept of declarative and procedural knowledge in intelligent tutoring systems, we can accumulate and duplicate the knowledge of the curriculum manager and student. <s> BIB004 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Student Model <s> We have been using the concept map of the domain, enhanced with pedagogical concepts called learning objectives, as the overlay student model in our intelligent tutors for programming. The resulting student model is finegrained, and has several advantages: (1) it facilitates better adaptation during problem generation; (2) it makes it possible for the tutor to automatically vary the level of locality during problem generation to meet the needs of the learner; (3) it clarifies to the learner the relationship among domain concepts when opened to scrutiny; (4) the tutor can estimate the level of understanding of a student in any higher-level concept, not just the concepts for which it presents problems; and (5) two tutors in a domain can affect each other’s adaptation of problems. Prior evaluations have shown that tutors that use enhanced concept maps help improve learning. <s> BIB005 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Student Model <s> The research reported in this paper focuses on the hypothesis that an intelligent tutoring system that provides guidance with respect to students' meta-cognitive abilities can help them to become better learners. Our strategy is to extend a Cognitive Tutor (Anderson, Corbett, Koedinger, & Pelletier, 1995) so that it not only helps students acquire domain-specific skills, but also develop better general help-seeking strategies. In developing the Help Tutor, we used the same Cognitive Tutor technology at the meta-cognitive level that has been proven to be very effective at the cognitive level. A key challenge is to develop a model of how students should use a Cognitive Tutor's help facilities. We created a preliminary model, implemented by 57 production rules that capture both effective and ineffective help-seeking behavior. As a first test of the model's efficacy, we used it off-line to evaluate students' help-seeking behavior in an existing data set of student-tutor interactions. We then refined the model based on the results of this analysis. Finally, we conducted a pilot study with the Help Tutor involving four students. During one session, we saw a statistically significant reduction in students' meta-cognitive error rate, as determined by the Help Tutor's model. These preliminary results inspire confidence as we gear up for a larger-scale controlled experiment to evaluate whether tutoring on help seeking has a positive effect on students' learning outcomes. <s> BIB006 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Student Model <s> The paper presents a conceptual model and some ideas for realization of the e-learning system in the secondary school that is adaptive to the user knowledge. We propose the algorithm for evaluation of student's cognitions at three levels -- as general mark, as level of subject domain knowledge and as mark of each concept. According to this valuation and the stereotype information, the LMS starts and manages an appropriate training process. 
<s> BIB007 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Student Model <s> LS-Plan is a framework for personalization and adaptation in e-learning. In such framework an Adaptation Engine plays a main role, managing the generation of personalized courses from suitable repositories of learning nodes and ensuring the maintenance of such courses, for continuous adaptation of the learning material proposed to the learner. Adaptation is meant, in this case, with respect to the knowledge possessed by the learner and her learning styles, both evaluated prior to the course and maintained while attending the course. Knowledge and Learning styles are the components of the student model managed by the framework. Both the static, precourse, and dynamic, in-course, generation of personalized learning paths are managed through an adaptation algorithm and performed by a planner, based on Linear Temporal Logic. A first Learning Objects Sequence is produced based on the initial learner's Cognitive State and Learning Styles, as assessed through prenavigation tests. During the student's navigation, and on the basis of learning assessments, the adaptation algorithm can output a new Learning Objects Sequence to respond to changes in the student model. We report here on an extensive experimental evaluation, performed by integrating LS-Plan in an educational hypermedia, the LecompS web application, and using it to produce and deliver several personalized courses in an educational environment dedicated to Italian Neorealist Cinema. The evaluation is performed by mainly following two standard procedures: the As a Whole and the Layered approaches. The results are encouraging both for the system on the whole and for the adaptive components. <s> BIB008 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Student Model <s> Bayesian networks are graphical modeling tools that have been proven very powerful in a variety of application contexts. The purpose of this paper is to provide education practitioners with the background and examples needed to understand Bayesian networks and use them to design and implement student models. The student model is the key component of any adaptive tutoring system, as it stores all the information about the student (for example, knowledge, interest, learning styles, etc.) so the tutoring system can use this information to provide personalized instruction. Basic and advanced concepts and techniques are introduced and applied in the context of typical student modeling problems. A repertoire of models of varying complexity is discussed. To illustrate the proposed methodology a Bayesian Student Model for the Simplex algorithm is developed. <s> BIB009 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Student Model <s> Adaptive educational systems (AESs) guide students through the course materials in order to improve the effectiveness of the learning process. However, AES cannot replace the teacher. Instead, teachers can also benefit from the use of adaptive educational systems enabling them to detect situations in which students experience problems (when working with the AES). To this end the teacher needs to monitor, understand and evaluate the students' activity within the AES. In fact, these systems can be enhanced if tools for supporting teachers in this task are provided. 
In this paper, we present the experiences with predictive models that have been undertaken to assist the teacher in PDinamet, a web-based adaptive educational system for teaching Physics in secondary education. Although the obtained models are still very simple, our findings suggest the feasibility of predictive modeling in the area of supporting teachers in adaptive educational systems. <s> BIB010 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Student Model <s> This paper constitutes a literature review on student modeling for the last decade. The review aims at answering three basic questions on student modeling: what to model, how and why. The prevailing student modeling approaches that have been used in the past 10years are described, the aspects of students' characteristics that were taken into consideration are presented and how a student model can be used in order to provide adaptivity and personalisation in computer-based educational software is highlighted. This paper aims to provide important information to researchers, educators and software developers of computer-based educational software ranging from e-learning and mobile learning systems to educational games including stand alone educational applications and intelligent tutoring systems. In addition, this paper can be used as a guide for making decisions about the techniques that should be adopted when designing a student model for an adaptive tutoring system. One significant conclusion is that the most preferred technique for representing the student's mastery of knowledge is the overlay approach. Also, stereotyping seems to be ideal for modeling students' learning styles and preferences. Furthermore, affective student modeling has had a rapid growth over the past years, while it has been noticed an increase in the adoption of fuzzy techniques and Bayesian networks in order to deal the uncertainty of student modeling. <s> BIB011
It would be difficult for an ITS to succeed without some understanding of the user. The student model represents the knowledge and skills of the student dynamically. Just as domain knowledge must be explicitly represented so that it can be communicated, the student model must be represented likewise. Ideally, the student model should store aspects of the student's behavior and skills in such a way that the ITS can infer the student's performance and skills. According to Nwana, the uses of the student model can be classified into six different types BIB001 . The first type is corrective in that it enables removing bugs in a student's knowledge. The second type is elaborative in that it fills in the student's incomplete knowledge. The third type is strategic in that it assists in adapting the tutorial strategy based on the student's actions and performance. The fourth type is diagnostic in that it assists in identifying errors in the student's knowledge. The fifth type is predictive in that it assists in understanding the response of the student to the system's actions. The sixth and final type is evaluative in that it assists in evaluating the student's overall progress. The student model acts as a source of information about the student. The system should be able to infer unobservable aspects of the student's behavior from the model. It should reconstruct misconceptions in the student's knowledge by interpreting the student's actions. The representation of the student model is likely to be based on the representation of domain knowledge. The knowledge can be separated into elements, with evaluations of mastery incorporated into the student model. This allows the system to compare the state of the student's knowledge with that of the expert. As a result, instruction can be adapted to exercise the weaknesses in the student's skills. It should be noted that incomplete knowledge is not necessarily the source of incorrect behavior. The knowledge to be taught can evolve, which presents a challenge to the tutoring system. It is for this reason that explicit representations of a student's supposed incorrect knowledge must be included in a student model, so that remediation can be performed as necessary. An important feature of the student model is that it is executable or runnable. This allows for prediction of a particular student's behavior in a particular context. This ultimately allows this important architectural component of an ITS to interact appropriately with the student. These interactions may include correction of misconceptions, provision of personalized feedback, suggestions for learning a particular item, etc. [8] . Designing a student model is not an easy task. It should be based on responses to certain questions. What does the student know? What types of knowledge will the student need to solve a problem? It is from such questions that the methodology for designing a student model should be derived. It is first necessary to identify the knowledge that the student has gained in terms of the components that are integrated with the mechanism. It is secondly necessary to identify the understanding level of the student vis-a-vis the functionality of the mechanism. It is finally necessary to identify the pedagogical strategies used by the student to arrive at a problem's solution. These must be taken into consideration in the development of the student model . There are several kinds of student characteristics that should be taken into consideration.
In order to build an efficient student model, the system needs to consider both static and dynamic characteristics of students. Static characteristics include information such as email, age, and mother tongue, and are set before the learning process starts, whereas dynamic characteristics come from the behavior of students during their interaction with the system BIB011 BIB009 . According to BIB011 , the challenge is to find the relevant dynamic characteristics of an individual student in order to adapt the system to each student. The dynamic characteristics include knowledge and skills, errors and misconceptions, learning styles and preferences, affective and cognitive factors, and meta-cognitive factors. The term knowledge here refers to the knowledge that has been acquired by the student previously, while learning style or preferences refer to how the student prefers to perceive the learning material (e.g., graphical representation, audio materials and text representation). Affective factors include the emotional characteristics of the students such as being angry, happy, sad, or frustrated. Cognitive factors refer to the cognitive features of students, for instance, attention, ability to learn, and ability to solve problems and make decisions. Meta-cognitive aspects involve attitude and ability for help-seeking, self-regulation, and self-assessment BIB011 BIB006 . Several approaches have been used to build the student model. The following subsections discuss some approaches that have been found in the literature. 6.2.1 Overlay Model. The overlay model was invented in 1976 by Stansfield, Carr and Goldstein. It is one of the most popular student models, and it has been used by many tutoring systems. It assumes that student knowledge is a subset of domain knowledge. If the student's behavior differs from that of the domain model, the difference is considered a gap in the student's knowledge. As a result, the goal is to eliminate that gap as much as possible BIB011 BIB002 . Consequently, the domain contains a set of elements and the overlay model indicates a level of mastery over each of these elements. A simple overlay model uses a Boolean value to indicate whether or not an individual student knows an element. In the modern overlay model, a qualitative measure is used to indicate the level of student knowledge (good, average or poor). The advantage of using this model is that it allows the representation of student knowledge to be as detailed as necessary. However, the disadvantage of using this model is that the student may take a different approach to solving a problem. The student may also have different beliefs in the form of 'misconceptions' that are not stored in the domain knowledge BIB011 . Carmona and Conejo in 2004 proposed a learner model in their MEDEA system, which is a framework to build open ITSs . The classical overlay model was used to represent the knowledge and attitude of the students in MEDEA. The learner model was divided into two sub-models: the attitude model and the learner knowledge model. The attitude model contains static information about the students (users' personal and technical characteristics, users' preferences, etc.). These features were collected directly from the student before the learning processes take place. The learner knowledge model was responsible for the student's knowledge and performance. These features were updated during the learning processes.
For each domain concept, the learner model stores an estimation of the knowledge level of the student on this concept [51] . InfoMap is designed to facilitate both human browsing and computer processing of the domain ontology in a system. It uses the overlay model combined with a buggy model to identify deficient knowledge BIB004 . Another ITS that applies the overlay model for the student model is ICICLE (Interactive Computer Identification and Correction of Language Errors). The system's goal is to employ natural language processing to tutor students on grammatical components of written English. ICICLE uses a student overlay model to capture the user's mastery of various grammatical units, which can then be used to predict the grammar rules he or she is most likely using when producing language BIB003 . Kumar in 2006 used an overlay student model, based on a concept map of the domain enhanced with learning objectives, in intelligent tutors for computer programming BIB005 . DeLC (Distributed eLearning Center), a system for distance and electronic teaching, also used an overlay student model to capture the level of understanding of the user. In addition, it used another modeling approach, the stereotype approach, to model the learner's manner of access to training resources, preferences, habits and behaviors during the learning process BIB007 . LS-Plan is a framework for personalization and adaptation in e-learning systems. It uses a qualitative overlay model based on Bloom's Taxonomy. LS-Plan also uses a bug model to detect misconceptions of the users BIB008 . PDinamet, a web-based adaptive learning system for the teaching of physics in secondary school, uses an overlay student model to store the concepts that the student has already learned or has not learned yet. Consequently, the tutor can recommend a certain topic to an individual student by taking into account the student's skill level and the learning activities the student has already participated in BIB010 .
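A minimal sketch of a modern overlay model is given below. The concept names, mastery levels and the naive update rule are illustrative assumptions only; actual systems such as MEDEA use considerably more sophisticated update and inference mechanisms.

from enum import Enum

class Mastery(Enum):
    POOR = 0
    AVERAGE = 1
    GOOD = 2

# Hypothetical domain model: a set of concepts to be learned.
DOMAIN_CONCEPTS = {"variables", "for_loops", "while_loops", "nested_loops"}

class OverlayModel:
    """Modern overlay model: each observed concept carries a qualitative mastery level.
    Concepts missing from the overlay are treated as gaps in the student's knowledge."""
    def __init__(self):
        self.mastery = {}  # concept -> Mastery

    def update(self, concept, correct, total):
        # Naive, illustrative update rule: grade mastery by the success ratio.
        ratio = correct / total
        if ratio >= 0.8:
            self.mastery[concept] = Mastery.GOOD
        elif ratio >= 0.5:
            self.mastery[concept] = Mastery.AVERAGE
        else:
            self.mastery[concept] = Mastery.POOR

    def gaps(self):
        """Domain concepts the student has not yet mastered."""
        return {c for c in DOMAIN_CONCEPTS if self.mastery.get(c, Mastery.POOR) != Mastery.GOOD}

student = OverlayModel()
student.update("variables", 9, 10)
student.update("for_loops", 4, 10)
print(student.gaps())  # -> {'for_loops', 'while_loops', 'nested_loops'}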
Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> Decision makers are often faced with several conflicting alternatives [1]. How do they evaluate trade-offs when there are more than three criteria? To help people make optimal decisions, scholars in the discipline of multiple criteria decision making (MCDM) continue to develop new methods for structuring preferences and determining the correct relative weights for criteria. A compilation of modern decision-making techniques, Multiple Attribute Decision Making: Methods and Applications focuses on the fuzzy set approach to multiple attribute decision making (MADM). Drawing on their experience, the authors bring together current methods and real-life applications of MADM techniques for decision analysis. They also propose a novel hybrid MADM model that combines DEMATEL and analytic network process (ANP) with VIKOR procedures. <s> BIB001 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> Abstract : This paper describes the current state of implementation of a cognitive computer model of human plausible reasoning, based on the theory of plausible reasoning described by Collins and Michalski. Our goal is to use the simulation as a means of testing and refining the theory. This requires developing appropriate memory organization and search techniques to support of this style of inference, finding ways to estimate similarity in specific contexts and investigating ways of combining the sometimes contradictory conclusions reached when inferences of different types are used to answer questions. Keywords: Science Track: Cognitive Modelling, ArtificiaL intelligence, Similarity, Analogy. <s> BIB002 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> A stereotype represents a collection of attributes that often co-occur in people. Stereotypes can play an important role in a user modeling system because they enable the system to make a large number of plausible inferences on the basis of a substantially smaller number of observations. These inferences must, however, be treated as defaults, which can be overridden by specific observations. Thus any stereotype-based user-modeling system must include techniques for nonmonotonic reasoning. This chapter will discuss the role that stereotypes can play in a user modeling system and it will outline specific techniques that can be used to implement stereotype-based reasoning in such systems. <s> BIB003 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> One of the most important problems for an intelligent tutoring system is deciding how to respond when a student asks for help. Responding cooperatively requires an understanding of both what solution path the student is pursuing, and the student's current level of domain knowledge. Andes, an intelligent tutoring system for Newtonian physics, refers to a probabilistic student model to make decisions about responding to help requests. Andes' student model uses a Bayesian network that computes a probabilistic assessment of three kinds of information: (I) the student's general knowledge about physics, (2) the student's specific knowledge about the current problem, and (3) the abstract plans that the student may be pursuing to solve the problem. 
Using this model, Andes provides feedback and hints tailored to the student's knowledge and goals. <s> BIB004 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> The paper presents SQLT-Web, a Web-enabled intelligent tutoring system for the SQL database language. SQLT-Web is a Web-enabled version of an earlier, standalone ITS. In this paper we describe how the components of the standalone system were reused to develop the Web-enabled system. The system observes students' actions and adapts to their knowledge and learning abilities. We describe the system's architecture in comparison to the architectures of other existing Web-enabled tutors. All tutoring functions are performed on the server side, and we explain how SQLT-Web deals with multiple students. The system has been open to outside users since March 2000. SQLT-Web has been evaluated in the context of genuine teaching activities. We present the results of three evaluation studies with the University of Canterbury students taking database courses, which show that SQLT-Web is an effective system. The students have found the system a valuable asset to their learning. <s> BIB005 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> In this paper we propose a method that implements student diagnosis in the context of the Adaptive Hypermedia Educational System INSPIRE - INtelligent System for Personalized Instruction in a Remote Environment. The method explores ideas from the fields of fuzzy logic and multicriteria decision-making in order to deal with uncertainty and incorporate in the system a more complete and accurate description of the expert's knowledge as well as flexibility in student's assessment. To be more precise, an inference system, using fuzzy logic and the Analytic Hierarchy Process to represent the knowledge of the teacher-expert on student's diagnosis, analyzes student's answers to questions of varying difficulty and importance, and estimates the student's knowledge level. Preliminary experiments with real students indicate that the method is characterized by effectiveness in handling the uncertainty of student diagnosis, and is found to be closer to the assessment performed by a human teacher, when compared to a more traditional method of assessment. <s> BIB006 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> Nowadays there is an increasing interest in the development of computational systems that provide alternative (to the traditional classroom) forms of education, such as Distance Learning (DL) and Intelligent Tutoring Systems (ITS). Adaptation in the process of interaction with the student is a key feature of ITS that is particularly critical in web-based DL, where the system should provide real-time support to a learner that most times does not rely on other kinds of synchronous feedback. This paper presents the LeCo-EAD approach of student modelling. LeCo-EAD is a Learning Companion System for web-based DL that includes three kinds of learning companions – collaborator, learner, and trouble maker – that are always available to interact with and support the remote students. The student modelling approach of LeCo-EAD is appropriate to the DL context as it allows updating the student model in order to provide feedback and support to the distant students in real-time. 
<s> BIB007 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> This paper is about providing intelligent help to users interacting with an operating system. Its main focus is an investigation of Human Plausible Reasoning Theory (Collins & Michalski, 1989) to infer the commands the user should have typed, given what they did type. The theory has been adapted and incorporated into a prototype Intelligent Help System (IHS) for UNIX users, called RESCUER, and has been used for the generation and evaluation of hypotheses about users' beliefs underlying the observed users' actions on the UNIX file store. The hypotheses generated by RESCUER were compared to those made by human experts on the sample scripts from UNIX user sessions. The potential for Human Plausible Reasoning as a mechanism to reason about slips and misconceptions is discussed. <s> BIB008 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> This paper describes a multi-agent, intelligent learning environment. The system is called F-SMILE and is meant to help novice users learn how to manipulate the file store of their computer. F-SMILE consists of a Learner Modelling (LM) Agent, an Advising Agent, a Tutoring Agent and a Speech-driven Agent. The LM Agent constantly observes the learner and in case it suspects that the user is involved in a problematic situation it tries to find out what the cause of the problem has been. This is done by searching for similar actions to the one issued by the learner which would have been more appropriate with respect to the hypothesised learner’s goals. These alternative actions are sent to the Advising Agent, which is responsible for selecting the most appropriate action to be suggested to the particular learner. The selection of the best alternative action is based on the information about the learner that the LM Agent has collected and a cognitive theory. In case the problem of the user was due to lack of knowledge, the Tutoring Agent is activated in order to generate adaptively a lesson appropriate for the particular user. When the advice and the corresponding lesson are ready, they are sent to the Speech-driven Agent, which is responsible for rendering the interaction with the user more human-like and user-friendly. <s> BIB009 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> This paper presents the details of a student model that enables an open learning environment to provide tailored feedback on a learner's exploration. Open learning environments have been shown to be beneficial for learners with appropriate learning styles and characteristics, but problematic for those who are not able to explore effectively. To address this problem, we have built a student model capable of detecting when the learner is having difficulty exploring and of providing the types of assessments that the environment needs to guide and improve the learner's exploration of the available material. The model, which uses Bayesian Networks, was built using an iterative design and evaluation process. We describe the details of this process, as it was used to both define the structure of the model and to provide its initial validation. <s> BIB010 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. 
<s> This paper presents a new model for simulating procedural knowledge in the problem solving process with our ontological system, InfoMap. The method divides procedural knowledge into two parts: process control and action performer. By adopting InfoMap, we hope to help teachers construct curricula (declarative knowledge) and teaching strategies by capturing students’ problem-solving processes (procedural knowledge) dynamically. Using the concept of declarative and procedural knowledge in intelligent tutoring systems, we can accumulate and duplicate the knowledge of the curriculum manager and student. <s> BIB011 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> A universal trouble light includes a longitudinally extending handle and a first apertured ball and a socket therefor carried at one end of the handle. A longitudinally extending bulb carrying section also carries a ball and socket therefor at one of its ends. The balls are rigidly connected by a tube to thereby position the handle and barrel member in spaced proximity to each other. The handle and barrel member are capable of relative rotative and angular movement. A bulb socket is positioned within the bulb carrying section and an electrical conductor connects the socket with a source in the handle, the conductor extending through the balls. The rotation of each ball is limited to a single plane and for a limited arc within that plane. <s> BIB012 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> In this paper, we describe the user modelling mechanism of AUTO-COLLEAGUE, which is an adaptive Computer Supported Collaborative Learning system. AUTO-COLLEAGUE provides personalised and adaptive environment for users to learn UML. Users are organized into working groups under the supervision of a human coacher/trainer. The system constantly traces the performance of the learners and makes inferences about user characteristics, such as the performance type and the personality. All of these attributes form the individual learner models, which are built using the stereotype theory. User modelling is applied in order to offer adaptive help to learners and adaptive advice to trainers aiming to support them mainly in forming the most effective groups of learners. <s> BIB013 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> This document is a survey in the research area of User Modeling (UM) for the specific field of Adaptive Learning. The aims of this document are: To define what it is a User Model; To present existing and well known User Models; To analyze the existent standards related with UM; To compare existing systems. In the scientific area of User Modeling (UM), numerous research and developed systems already seem to promise good results, but some experimentation and implementation are still necessary to conclude about the utility of the UM. That is, the experimentation and implementation of these systems are still very scarce to determine the utility of some of the referred applications. At present, the Student Modeling research goes in the direction to make possible reuse a student model in different systems. 
The standards are more and more relevant for this effect, allowing systems communicate and to share data, components and structures, at syntax and semantic level, even if most of them still only allow syntax integration. <s> BIB014 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> One important field where mobile technology can make significant contributions is education. However one criticism in mobile education is that students receive impersonal teaching. Affective computing may give a solution to this problem. In this paper we describe an affective bi-modal educational system for mobile devices. In our research we describe a novel approach of combining information from two modalities namely the keyboard and the microphone through a multi-criteria decision making theory. <s> BIB015 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> A complete diagnostic Bayesian network model cannot be achieved and the result of the constructed model cannot be guaranteed unless correct and reliable data are provided to the model. In this paper, we propose a technique to dynamically feed data into a diagnostic Bayesian network model in the first part of this paper. In the second part of the paper, a case study of several factors that have an impact on students for making a decision in enrollment is transformed into a Bayesian network model. The last part of the paper discusses a Web user interface for the model in terms of its design and diagnosis. The user is allowed to perform a diagnosis of the model through the SMILE Web application interface. <s> BIB016 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> In this paper we present eTeacher, an intelligent agent that provides personalized assistance to e-learning students. eTeacher observes a student's behavior while he/she is taking online courses and automatically builds the student's profile. This profile comprises the student's learning style and information about the student's performance, such as exercises done, topics studied, exam results. In our approach, a student's learning style is automatically detected from the student's actions in an e-learning system using Bayesian networks. Then, eTeacher uses the information contained in the student profile to proactively assist the student by suggesting him/her personalized courses of action that will help him/her during the learning process. eTeacher has been evaluated when assisting System Engineering students and the results obtained thus far are promising. <s> BIB017 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> We present J-LATTE, a constraint-based intelligent tutoring system that teaches a subset of the Java programming language. J-LATTE supports two modes: concept mode, in which the student designs the program without having to specify contents of statements, and coding mode, in which the student completes the code. We present the style of interaction with J-LATTE, its interface, domain model and the student modeling approach. We also report the results of a study we conducted in an introductory programming course. Although we did not have enough participants to obtain statistical significance, the results show very promising trends indicating that students learned the constraints. 
<s> BIB018 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> In this paper, we introduce logic programming as a domain that exhibits some characteristics of being ill-defined. In order to diagnose student errors in such a domain, we need a means to hypothesise the student's intention, that is the strategy underlying her solution. This is achieved by weighting constraints, so that hypotheses about solution strategies, programming patterns and error diagnoses can be ranked and selected. Since diagnostic accuracy becomes an increasingly important issue, we present an evaluation methodology that measures diagnostic accuracy in terms of (1) the ability to identify the actual solution strategy, and (2) the reliability of error diagnoses. The evaluation results confirm that the system is able to analyse a major share of real student solutions, providing highly informative and precise feedback. <s> BIB019 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> We present a probabilistic model of user affect designed to allow an intelligent agent to recognise multiple user emotions during the interaction with an educational computer game. Our model is based on a probabilistic framework that deals with the high level of uncertainty involved in recognizing a variety of user emotions by combining in a Dynamic Bayesian Network information on both the causes and effects of emotional reactions. The part of the framework that reasons from causes to emotions (diagnostic model) implements a theoretical model of affect, the OCC model, which accounts for how emotions are caused by one's appraisal of the current context in terms of one's goals and preferences. The advantage of using the OCC model is that it provides an affective agent with explicit information not only on which emotions a user feels but also why, thus increasing the agent's capability to effectively respond to the users' emotions. The challenge is that building the model requires having mechanisms to assess user goals and how the environment fits them, a form of plan recognition. In this paper, we illustrate how we built the predictive part of the affective model by combining general theories with empirical studies to adapt the theories to our target application domain. We then present results on the model's accuracy, showing that the model achieves good accuracy on several of the target emotions. We also discuss the model's limitations, to open the ground for the next stage of the work, i.e., complementing the model with diagnostic information. <s> BIB020 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> Bayesian networks are graphical modeling tools that have been proven very powerful in a variety of application contexts. The purpose of this paper is to provide education practitioners with the background and examples needed to understand Bayesian networks and use them to design and implement student models. The student model is the key component of any adaptive tutoring system, as it stores all the information about the student (for example, knowledge, interest, learning styles, etc.) so the tutoring system can use this information to provide personalized instruction. Basic and advanced concepts and techniques are introduced and applied in the context of typical student modeling problems. 
A repertoire of models of varying complexity is discussed. To illustrate the proposed methodology a Bayesian Student Model for the Simplex algorithm is developed. <s> BIB021 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> Cognitive approaches have been used for student modeling in intelligent tutoring systems (ITSs). Many of those systems have tackled fundamental subjects such as mathematics, physics, and computer programming. The change of the student's cognitive behavior over time, however, has not been considered and modeled systematically. Furthermore, the nature of domain knowledge in specific subjects such as orthopedic surgery, in which pragmatic knowledge could play an important role, has also not been taken into account deliberately. We believe that the temporal dimension in modeling the student's knowledge state and cognitive behavior is critical, especially in such domains. In this paper, we propose an approach for student modeling and diagnosis, which is based on a symbiosis between temporal Bayesian networks and fine-grained didactic analysis. The latter may help building a powerful domain knowledge model and the former may help modeling the learner's complex cognitive behavior, so as to be able to provide him or her with relevant feedback during a problem-solving process. To illustrate the application of the approach, we designed and developed several key components of an intelligent learning environment for teaching the concept of sacro-iliac screw fixation in orthopedic surgery, for which we videotaped and analyzed six surgical interventions in a French hospital. A preliminary gold-standard validation suggests that our diagnosis component is able to produce coherent diagnosis with acceptable response time. <s> BIB022 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> Fifteen years ago, research started on SQL-Tutor, the first constraint-based tutor. The initial efforts were focused on evaluating Constraint-Based Modeling (CBM), its effectiveness and applicability to various instructional domains. Since then, we extended CBM in a number of ways, and developed many constraint-based tutors. Our tutors teach both well- and ill-defined domains and tasks, and deal with domain- and meta-level skills. We have supported mainly individual learning, but also the acquisition of collaborative skills. Authoring support for constraint-based tutors is now available, as well as mature, well-tested deployment environments. Our current research focuses on building affect-sensitive and motivational tutors. Over the period of fifteen years, CBM has progressed from a theoretical idea to a mature, reliable and effective methodology for developing effective tutors. <s> BIB023 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> In this paper, we evaluate the effectiveness and accuracy of the student model of a web-based educational environment for teaching computer programming. Our student model represents the learner's knowledge through an overlay model and uses a fuzzy logic technique in order to define and update the student's knowledge level of each domain concept, each time that s/he interacts with the e-learning system. Evaluation of the student model of an Intelligent Tutoring System (ITS) is an aspect for which there are not clear guidelines to be provided by literature. 
Therefore, we choose to use two well-known evaluation methods for the evaluation of our fuzzy student model, in order to design an accurate and correct evaluation methodology. These evaluation models are: the Kirkpatrick's model and the layered evaluation method. Our system was used by the students of a postgraduate program in the field of Informatics in the University of Piraeus, in order to learn how to program in the programming language C. The results of the evaluation were very encouraging. <s> BIB024 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> This paper constitutes a literature review on student modeling for the last decade. The review aims at answering three basic questions on student modeling: what to model, how and why. The prevailing student modeling approaches that have been used in the past 10years are described, the aspects of students' characteristics that were taken into consideration are presented and how a student model can be used in order to provide adaptivity and personalisation in computer-based educational software is highlighted. This paper aims to provide important information to researchers, educators and software developers of computer-based educational software ranging from e-learning and mobile learning systems to educational games including stand alone educational applications and intelligent tutoring systems. In addition, this paper can be used as a guide for making decisions about the techniques that should be adopted when designing a student model for an adaptive tutoring system. One significant conclusion is that the most preferred technique for representing the student's mastery of knowledge is the overlay approach. Also, stereotyping seems to be ideal for modeling students' learning styles and preferences. Furthermore, affective student modeling has had a rapid growth over the past years, while it has been noticed an increase in the adoption of fuzzy techniques and Bayesian networks in order to deal the uncertainty of student modeling. <s> BIB025 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> In this paper, we present a Web-based system aimed at learning basic mathematics. The Web-based system includes different components like a social network for learning, an intelligent tutoring system and an emotion recognizer. We have developed the system with the goal of being accessed from any kind of computer platform and Android-based mobile device. We have also built a neural-fuzzy system for the identification of student emotions and a fuzzy system for tracking student´s pedagogical states. We carried out different experiments with the emotion recognizer where we obtained a success rate of 96%. Furthermore, the system (including the social network and the intelligent tutoring system) was tested with real students and the obtained results were very satisfying. <s> BIB026 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Stereotypes Model. <s> The use of computer has widely used as a tool to help student in learning, one of the computer application to help student in learning is in the form of Intelligent Tutoring System. Intelligent Tutoring System used to diagnose student knowledge state and provide adaptive assistance to student. However, diagnosing student knowledge level is a difficult task due to rife with uncertainty. 
Student Model is the key component in Intelligent Tutoring System to deal with uncertainty. Bayesian Network and Fuzzy Logic is the most widely used to develop student model. In this paper we will compare the accuracy of student model developed with Bayesian Network and Fuzzy Logic in predicting student knowledge level. <s> BIB027
Another widely used approach for student modeling is in terms of stereotypes. The stereotype approach in student modeling began with a system called GRUNDY by Rich BIB003 . According to Rich, "A stereotype represents a collection of attributes that often co-occur in people. They enable the system to make a large number of plausible inferences on the basis of a substantially smaller number of observations. These inferences must, however, be treated as defaults, which can be overridden by specific observation" BIB003 . The main assumption behind stereotypes is that it is possible to group all possible users based upon certain features they typically share. Such groups are called stereotypes. A new user is assigned to a specific stereotype if his/her features match this stereotype. Most ITSs give students freedom to choose, meaning that the student chooses his/her own learning path in the courseware. As a consequence, students may study material that is too hard or too easy for them, or skip learning certain courseware elements. Besides generating, selecting and sequencing material for the students, an ITS should take into consideration the current knowledge of the students. As they reduce cognitive overload as well as provide individualized guidance for the learning and teaching process, stereotypes are particularly important for overcoming the problem of initializing a student model by assigning the student to a certain group of students. The system might ask the user some questions to initialize its student model BIB025 . For example, let us consider a system that teaches the Python programming language. The system might start interactions with students by asking questions in order to discover the stereotype this student belongs to. A related question that could be asked is whether the student is an expert in C++ programming. If the student is an expert in C++, the system would infer that this student knows basic concepts in programming such as loops, while loops and nested loops. Consequently, the system will assign this particular student to a stereotype whose members know these basic programming concepts (see the sketch below). Many adaptive tutoring systems have used the stereotype approach to student modeling and often combine it with other student modeling approaches . INSPIRE is an ITS for personalized instruction. The stereotype approach is used to classify knowledge on a topic into one of four levels of proficiency: Insufficient, Rather Insufficient, Rather Sufficient, Sufficient. Besides stereotypes, a fuzzy logic technique is used to deal with student diagnosis BIB006 . Another ITS using stereotypes is Web-PVT (Web Passive Voice Tutor), which teaches the passive voice in English. Machine learning and stereotypes were used to tailor instruction and feedback to each individual student. The initialization of the model for a new student is performed using a novel combination of stereotypes and a distance-weighted k-nearest neighbor algorithm . AUTO-COLLEAGUE, an adaptive computer-supported collaborative learning system, aims to provide a personalized and adaptive environment for users to learn UML BIB013 . AUTO-COLLEAGUE uses a hybrid student model that combines stereotypes and the perturbation approach, to be discussed next. The stereotypes are concerned with three aspects of the user (the level of expertise, the performance type and the personality). Another ITS that uses stereotypes for student modeling is CLT, which teaches C++ iterative constructs (while, do-while, and for loops). The triggers for the stereotypes used in CLT are verbal ability, numerical ability, and spatial ability, each of which can be rated low, medium or high .
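The sketch below illustrates this kind of stereotype-based default inference. The stereotype names, trigger question and default concept sets are hypothetical and echo the Python/C++ example above; they are not taken from GRUNDY, INSPIRE or any other system mentioned in this section.

# Hypothetical stereotypes for a Python tutor: each stereotype bundles default
# assumptions about what its members already know. Defaults may later be
# overridden by direct observations of the individual student.
STEREOTYPES = {
    "experienced_programmer": {"loops", "while_loops", "nested_loops", "functions"},
    "novice": set(),
}

def trigger_stereotype(questionnaire):
    """Assign a stereotype from a short entry questionnaire (trigger condition is illustrative)."""
    if questionnaire.get("expert_in_cpp", False):
        return "experienced_programmer"
    return "novice"

def initial_student_model(questionnaire):
    """Initialize the student model from the defaults of the triggered stereotype."""
    stereotype = trigger_stereotype(questionnaire)
    return {"stereotype": stereotype, "known_concepts": set(STEREOTYPES[stereotype])}

model = initial_student_model({"expert_in_cpp": True})
print(model["known_concepts"])  # defaults inferred without testing each concept individually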
According to Chrysafiadi et al., the advantages of the stereotype technique are that knowledge about a particular user is inferred from the related stereotype(s) as far as possible, without explicitly going through the knowledge elicitation process with each individual user BIB025 . In addition, maintaining information about stereotypes can be done efficiently with low redundancy. On the other hand, the disadvantages of stereotypes are that they are constructed based upon external characteristics of users and on subjective human judgment, usually of a number of users/experts. It is common that some stereotypes do not represent their members accurately. Therefore, many researchers have pointed out the issue of inaccuracy in stereotypes. Stereotypes suffer from two additional problems. First, the users must be divided into classes before the interactions with the system begin, and as a result, some classes might not exist. Second, even if a class exists, the designer must build the stereotype, which is time consuming and error-prone. 6.2.3 Perturbation Model. The perturbation student model is an extension of the overlay model. Besides representing the knowledge of the students as a subset of the expert's knowledge (overlay model), it also includes possible misconceptions, which allows for better remediation of student mistakes BIB014 . The perturbation model incorporates misconceptions or lack of knowledge, which may be considered mal-knowledge or incorrect beliefs . According to Martins et al., the perturbation model can be obtained by replacing correct rules with wrong rules BIB014 . When applied, these wrong rules reproduce the answers given by the student. Since there can be several explanations for a student's wrong answer (several different wrong rules in the student's knowledge could produce the same answer), the system proceeds to generate discriminating problems and presents them to the student in order to identify the wrong rules that this particular student holds. The mistakes that students make are usually stored in what is termed the bug library. The bug library is built either by collecting the mistakes that students make during interaction with the system (enumeration) or by listing the common misconceptions that students usually have (generative technique). A minimal sketch of this idea appears at the end of this subsection. This model gives better explanations of student behavior than the overlay model. However, it is costly to build and maintain BIB021 . Many adaptive tutoring systems have used the perturbation technique for their student model. Hung and his colleagues in 2005 used the perturbation model (also called the buggy model), with 31 types of addition errors and 51 types of subtraction errors, to help the system analyze and reason about students' mistakes BIB011 . LeCo-EAD uses the perturbation model to represent students' incorrect knowledge in order to provide personalized feedback and support to distant students in real time BIB007 . The perturbation model was also used by Surjono and Maltby in combination with stereotypes and the overlay model to perform better remediation of student mistakes . Baschera and Gross also used a perturbation student model in 2010, for the purpose of spelling training, to better diagnose students' errors .
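The following sketch illustrates the bug-library idea behind the perturbation model. The buggy rule shown (the classic "subtract the smaller digit from the larger" error in column subtraction) and the function names are our own illustrative choices, not the error catalogue of any of the systems cited above.

# A tiny, hypothetical bug library for two-digit subtraction. Each buggy rule is a
# runnable perturbation of the correct procedure; if applying a buggy rule reproduces
# the student's answer, that rule becomes a candidate diagnosis of the misconception.
def correct_subtract(a, b):
    return a - b

def bug_smaller_from_larger(a, b):
    """Buggy rule: in each column, subtract the smaller digit from the larger one."""
    result = 0
    for place in (10, 1):
        da, db = (a // place) % 10, (b // place) % 10
        result += abs(da - db) * place
    return result

BUG_LIBRARY = {"smaller-from-larger": bug_smaller_from_larger}

def diagnose(a, b, student_answer):
    """Return 'correct', or the names of buggy rules that reproduce the student's answer."""
    if student_answer == correct_subtract(a, b):
        return "correct"
    return [name for name, rule in BUG_LIBRARY.items() if rule(a, b) == student_answer]

# 52 - 37: the correct answer is 15; the smaller-from-larger bug yields 25.
print(diagnose(52, 37, 25))  # -> ['smaller-from-larger']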
6.2.4 Constraint Based Model. The constraint based model (CBM) was first proposed for short-term student modeling and the diagnosis of the current solution state. CBM uses constraints to represent both domain and student knowledge BIB025. The student's solution is diagnosed by matching the relevance conditions of all constraints against the solution; for every constraint whose relevance condition matches, its satisfaction condition is checked as well. The system checks each step taken by the student, diagnoses any problem, and provides feedback to the student when there is an error. The feedback informs the student that the solution is wrong, indicates the part of the solution that is wrong, and then specifies the domain principle that is violated BIB023. According to Mitrovic et al., important advantages of CBM are that it does not require a runnable expert module, leading to computational simplicity; it does not require extensive studies of students' bugs; and it does not require complex reasoning about possible origins of student errors. These advantages have led researchers to apply the CBM approach to their tutoring systems in a variety of domains. SQLT-Web is a web-enabled ITS for the SQL database language. It diagnoses the student's solution and adapts feedback to his/her knowledge and learning abilities BIB005. J-LATTE, an ITS that teaches a subset of the Java programming language, uses the CBM approach in its student model. When the student submits his/her solution, the student modeler evaluates it and produces a list of relevant, satisfied and (possibly) violated constraints BIB026 BIB018. INCOM is a system which helps students in a logic programming course at the University of Hamburg; weighted constraints are used to achieve accuracy in diagnosing students' solutions BIB019. EER-Tutor is another system that teaches database concepts and uses a CBM student model to represent the student's level of knowledge BIB023. 6.2.5 Cognitive Theories. The use of cognitive theories for the purpose of student modeling and error diagnosis leads to effective tutoring systems, as many researchers have pointed out. A cognitive theory helps interpret human behavior during the learning process by trying to understand human processes of thinking and understanding. The Human Plausible Reasoning (HPR) Theory BIB002 and the Multiple Attribute Decision Making (MADM) Theory BIB001 are two cognitive theories that have been used in student modeling BIB025. Human Plausible Reasoning (HPR) is a theory which categorizes plausible human inferences in terms of a set of frequently recurring inference patterns and a set of transformations on these patterns. In particular, it is a domain-independent theory, originally based on a corpus of people's answers to everyday questions BIB002. A system that uses HPR in student modeling is RESCUER, an intelligent help system for UNIX users. The HPR transformations are applied to statements to generate different possible interpretations of how a user may have come to the conclusion that the command s/he typed is acceptable to UNIX BIB008. Another system that uses HPR to model the student is F-SMILE, which stands for File-Store Manipulation Intelligent Learning Environment and aims to teach novice learners how to use file-store manipulation programs. The student model in F-SMILE captures the cognitive state as well as the characteristics of the learner, and identifies possible misconceptions.
The LM Agent in F-SMILE uses a novel combination of HPR with a stereotype-based mechanism to generate default assumptions about learners until it is able to acquire sufficient information about each individual learner BIB009. Another cognitive theory which has been used to build student models is Multiple Attribute Decision Making (MADM) BIB001. MADM makes preference decisions (e.g., evaluation, prioritization, or selection) among available alternatives that are usually characterized by multiple, often conflicting, attributes. Web-IT is a Web-based intelligent learning environment for novice adult users of a Graphical User Interface (GUI) that manipulates files, such as the Windows 98/NT Explorer. Web-IT uses MADM in combination with an age stereotype to dynamically provide personalized tutoring. A novel mobile educational system has been developed by Alepis and Kabassi, incorporating bi-modal emotion recognition based on two modes of interaction, the mobile device's microphone and keyboard, through a multi-criteria decision making theory to improve the system's accuracy in recognizing emotions BIB015. 6.2.6 Bayesian Networks. Another well-known and established approach for representing and reasoning about uncertainty in student models is Bayesian networks BIB025. A Bayesian network (BN) is a directed acyclic graph containing random variables, which are represented as nodes in the network; the probabilistic relationships between the variables are represented as arcs. The BN reasons about the situation it models, analyzing action sequences, observations, consequences and expected utilities BIB021. Regarding the student model, components of the student such as knowledge, misconceptions, emotions, learning styles, motivation and goals can be represented as nodes in a BN BIB025. BNs have been shown to be powerful and versatile when modeling problems that involve knowledge. Bayesian networks have attracted attention from theoreticians and system designers not only because of their sound mathematical foundation, but also because they are a natural way to represent uncertainty using probabilities. Therefore, BNs have been used in many different domains such as medical diagnosis, information retrieval, bioinformatics, and marketing, and for many different purposes such as troubleshooting, diagnosis, prediction, and classification BIB021. Those interested in using Bayesian networks can use tools such as GeNIe BIB016 and SMILE for the easy creation of efficient BNs. Andes is an ITS providing help in the domain of Newtonian physics BIB004 [84]. The student model in Andes uses Bayesian networks to carry out long-term knowledge assessment, plan recognition, and prediction of the students' actions during problem solving. Another student model that uses a BN is found in Adaptive Coach Exploration (ACE), an intelligent exploratory learning environment for the domain of mathematical functions. The student model is capable of providing tailored feedback on a learner's exploration process, and of detecting when the learner is having difficulty exploring BIB010. A Bayesian student model has also been implemented in the context of an Assessment-Based Learning Environment for English grammar, where it is used by pedagogical agents to provide adaptive feedback and adaptive sequencing of tasks BIB012. A Bayesian student model is also used in E-teacher to provide personalized assistance to e-learning students, with the goal of automatically detecting a student's learning style BIB017.
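As a concrete illustration of how a Bayesian student model can update its belief about a single knowledge component from observed answers, the following minimal two-node sketch (a hidden "knows the concept" variable with an observed "answer correct" child) applies Bayes' rule directly. All probabilities are invented for illustration; real student models such as the one in Andes use much larger networks and dedicated inference engines.

```python
# Minimal sketch of Bayesian knowledge update: a hidden "knows concept" node
# with an observed "answer correct" child. All probabilities are illustrative.

P_KNOWS = 0.5              # prior belief that the student knows the concept
P_CORRECT_IF_KNOWS = 0.9   # 1 - slip probability
P_CORRECT_IF_NOT = 0.2     # guess probability

def update_belief(p_knows, answer_correct):
    """Posterior P(knows | observed answer) via Bayes' rule."""
    if answer_correct:
        num = P_CORRECT_IF_KNOWS * p_knows
        den = P_CORRECT_IF_KNOWS * p_knows + P_CORRECT_IF_NOT * (1 - p_knows)
    else:
        num = (1 - P_CORRECT_IF_KNOWS) * p_knows
        den = (1 - P_CORRECT_IF_KNOWS) * p_knows + (1 - P_CORRECT_IF_NOT) * (1 - p_knows)
    return num / den

belief = P_KNOWS
for observed in [True, True, False]:   # a short sequence of graded answers
    belief = update_belief(belief, observed)
    print(f"observed correct={observed}, P(knows)={belief:.2f}")
```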
A Dynamic Bayesian network was used by Conati and Maclaren to recognize multiple user emotions under a high level of uncertainty by combining information on both the causes and effects of emotional behavior BIB020. Similarly, a Dynamic Bayesian network was implemented in PlayPhysics to reason about the learner's emotional state from cognitive and motivational variables using observable behavior. TELEOS (Technology Enhanced Learning Environment for Orthopedic Surgery) used a Bayesian student model to diagnose the students' knowledge states and cognitive behaviors BIB022. A Bayesian student model was also applied in Crystal Island, a game-based learning environment in the domain of microbiology, to predict student affect by modeling students' emotions. 6.2.7 Fuzzy student modeling. In general, learning and determining the student's state of knowledge are not straightforward tasks, since they are mostly affected by factors which cannot be directly observed and measured, especially in ITSs where there is a lack of real-life interaction between a teacher and students. One possible approach to deal with uncertainty is fuzzy logic, introduced by Zadeh in 1965 as a methodology for computing and reasoning with subjective words instead of numbers BIB025. Fuzzy logic is used to deal with uncertainty in real-world problems caused by imprecise and incomplete data as well as human subjectivity BIB027. Fuzzy logic uses fuzzy sets that involve variables with uncertain values. A fuzzy set is described by variables and values such as "excellent", "good" and "bad" rather than a Boolean value "yes/no" or "true/false". A fuzzy set is determined by a membership function, expressed as U(x). The value of the membership function U(x) is called the degree of membership or membership value, and lies between 0 and 1. The use of fuzzy logic can improve the learning environment by allowing intelligent decisions about the learning content to be delivered to the learner as well as the tailored feedback that should be given to each individual learner BIB025. Fuzzy logic can also diagnose the level of knowledge of the learner for a concept, and predict the level of knowledge for other concepts that are related to that concept BIB027. Chrysafiadi and Virvou, in 2012, performed an empirical evaluation of the use of fuzzy logic in student modeling in a web-based educational environment for teaching computer programming. The results of the evaluation showed that the integration of fuzzy logic into the student model increases the learner's satisfaction and performance, improves the system's adaptivity and helps the system make more reliable decisions BIB024. The use of fuzzy logic in student modeling is becoming popular since it overcomes computational complexity issues and mimics human-like reasoning BIB025 [93].
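A minimal sketch of how a fuzzy student model can map a numeric performance score onto linguistic knowledge levels through membership functions is shown below. The level names, breakpoints and the simple rule at the end are illustrative assumptions rather than values from any particular system.

```python
# Sketch of fuzzy knowledge-level modeling: triangular membership functions map a
# raw score (0-100) to degrees of membership in linguistic levels. Values are illustrative.

def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def knowledge_levels(score):
    """Degree of membership of a score in each fuzzy knowledge level."""
    return {
        "bad":       triangular(score, -1, 0, 50),
        "good":      triangular(score, 30, 55, 80),
        "excellent": triangular(score, 60, 100, 101),
    }

levels = knowledge_levels(70)
print(levels)  # the score is partially 'good' and partially 'excellent'

# A simple fuzzy rule: deliver remedial material if membership in 'bad' exceeds 0.5.
if levels["bad"] > 0.5:
    print("present remedial material")
```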
Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Tutor Model <s> We present an evaluation of various kinds of feedback in SQL-Tutor. Our initial hypothesis was that low-level feedback, containing all the details of a correct solution would be contra-productive, and that high-level feedback referring to the general principles of the domain that the student's solution violates would be highly effective. The evaluation study performed in 1999 confirmed our hypothesis. <s> BIB001 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Tutor Model <s> Following Computer Aided Instruction systems, 2nd generation tutors are Model-Tracing Tutors (MTTs) (Anderson & Pelletier, 1991) which are intelligent tutoring systems that have been very successful at aiding student learning, but have not reached the level of performance of experienced human tutors (Anderson et al., 1995). To that end, this paper presents a new architecture called ATM ("Adding a Tutorial Model"), which is an extension to model-tracing, that allows these tutors to engage in a dialog that is more like those in which experienced human tutors engage. Specifically, while MTTs provide hints toward doing the next problemsolving step, this 3rd generation of tutors, the ATM architecture, adds the capability to ask questions towards thinking about the knowledge behind the next problem-solving step. We present a new tutor built in ATM, called Ms. Lindquist, which is designed to carry on a tutorial dialog about algebra symbolization. The difference between ATM and MTT is the separate tutorial model that encodes pedagogical content knowledge in the form of different tutorial strategies, which were partially developed by observing an experienced human tutor. Ms. Lindquist has tutored thousands of students at www.AlgebraTutor.org. Future work will reveal if Ms. Lindquist is a better tutor because of the addition of the tutorial model. <s> BIB002 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Tutor Model <s> In addition to knowledge, in various domains skills are equally important. Active learning and training are effective forms of education. We present an automated skills training system for a database programming environment that promotes procedural knowledge acquisition and skills training. The system provides support features such as correction of solutions, feedback and personalised guidance, similar to interactions with a human tutor. Specifically, we address synchronous feedback and guidance based on personalised assessment. Each of these features is automated and includes a level of personalisation and adaptation. At the core of the system is a pattern-based error classification and correction component that analyses student input. <s> BIB003 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Tutor Model <s> In this paper, we introduce logic programming as a domain that exhibits some characteristics of being ill-defined. In order to diagnose student errors in such a domain, we need a means to hypothesise the student's intention, that is the strategy underlying her solution. This is achieved by weighting constraints, so that hypotheses about solution strategies, programming patterns and error diagnoses can be ranked and selected. 
Since diagnostic accuracy becomes an increasingly important issue, we present an evaluation methodology that measures diagnostic accuracy in terms of (1) the ability to identify the actual solution strategy, and (2) the reliability of error diagnoses. The evaluation results confirm that the system is able to analyse a major share of real student solutions, providing highly informative and precise feedback. <s> BIB004 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Tutor Model <s> AbstractThis study compared learning for fifth grade students in two math homework conditions. The paper-and-pencil condition represented traditional homework, with review of problems in class the following day. The Web-based homework condition provided immediate feedback in the form of hints on demand and step-by-step scaffolding. We analyzed the results for students who completed both the paper-and-pencil and the Web-based conditions. In this group of 28 students, students learned significantly more when given computer feedback than when doing traditional paper-and-pencil homework, with an effect size of .61. The implications of this study are that, given the large effect size, it may be worth the cost and effort to give Web-based homework when students have access to the needed equipment, such as in schools that have implemented one-to-one computing programs. <s> BIB005 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Tutor Model <s> This chapter introduces Part II on modeling tutoring knowledge in ITS research. Starting with its origin and with a characterization of tutoring, it proposes a general definition of tutoring, and a description of tutoring functions, variables, and interactions. The Interaction Hypothesis is presented and discussed, followed by the development of the tutorial component of ITSs, and their evaluation. New challenges are described, such as integrating the emotional states of the learner. Perspectives of opening the Tutoring Model and of equipping it with social intelligence are also presented. <s> BIB006 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Tutor Model <s> Rule-based cognitive models serve many roles in intelligent tutoring systems (ITS) development. They help understand student thinking and problem solving, help guide many aspects of the design of a tutor, and can function as the “smarts” of a system. Cognitive Tutors using rule-based cognitive models have been proven to be successful in improving student learning in a range of learning domain. The chapter focuses on key practical aspects of model development for this type of tutors and describes two models in significant detail. First, a simple rule-based model built for fraction addition, created with the Cognitive Tutor Authoring Tools, illustrates the importance of a model’s flexibility and its cognitive fidelity. It also illustrates the model-tracing algorithm in greater detail than many previous publications. Second, a rule-based model used in the Geometry Cognitive Tutor illustrates how ease of engineering is a second important concern shaping a model used in a large-scale tutor. Although cognitive fidelity and ease of engineering are sometimes at odds, overall the model used in the Geometry Cognitive Tutor meets both concerns to a significant degree. 
On-going work in educational data mining may lead to novel techniques for improving the cognitive fidelity of models and thereby the effectiveness of tutors. <s> BIB007 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Tutor Model <s> This paper presents the design, implementation, and evaluation of a student model in DEPTHS (Design Pattern Teaching Help System), an intelligent tutoring system for learning software design patterns. There are many approaches and technologies for student modeling, but choosing the right one depends on intended functionalities of an intelligent system that the student model is going to be used in. Those functionalities often determine the kinds of information that the student model should contain. The student model used in DEPTHS is a result of combining two widely known modeling approaches, namely, stereotype and overlay modeling. The model is domain independent and can be easily applied in other learning domains as well. To keep student model update during the learning process, DEPTHS makes use of a knowledge assessment method based on fuzzy rules (i.e., a combination of production rules and fuzzy logics). The evaluation of DEPTHS performed with the aim of assessing the system's overall effectiveness and the accuracy of its student model, indicated several advantages of the DEPTHS system over the traditional approach to learning design patterns, and encouraged us to move on further with this research. <s> BIB008
An ITS provides personalized feedback to an individual student based upon the traits that are stored in the student model. The tutor model, or the pedagogical module as it is alternatively called, is the driving engine of the whole system. This model performs several tasks in order to behave like a human tutor that can decide how to teach and what to teach next. The role of the tutor model is not only to provide guidance like a tutor but also to make the interaction of the ITS with the learner smooth and natural BIB006. The pedagogical module should be able to answer questions such as whether the student should next be presented a concept, a lesson, or a test. Other questions include how to present the teaching material to the student, how to evaluate student performance, and how to provide feedback to the student BIB006. Indeed, the pedagogical module communicates with the other components of the system, the expert model and the student model, and acts as an intermediary between them BIB008. When a student makes a mistake, the pedagogical module is responsible for providing feedback to explain the type of error, re-explain the usage of the relevant rule and provide help whenever the student needs it. The tutor must also decide what to present next to the student, such as the next topic or the problem to work on. To do so, the pedagogical model must consult the student model to determine the topics on which the student needs to focus. The decisions this model makes are based on the information about the student stored in the student model and the information about the learned content stored in the expert model BIB004. The pedagogical module is responsible for the interaction between the student and the system whenever the student needs help at a given step and for the remediation of student errors. It does so by giving a sequence of feedback messages (e.g., hints), or by suggesting that the student study a certain topic to increase learning performance. All ITSs embed a pedagogical module to control interaction with the students. The following presents some pedagogical techniques which have been used for delivering content and making interventions when needed in ITSs. Model tracing tutors BIB002 give three types of feedback to students: flag feedback, buggy messages, and a chain of hints. Flag feedback informs the student whether the answer is correct or wrong by using a color (e.g., green = correct, red = wrong). A buggy message is attached to a specific incorrect answer the student has provided, to inform the student of the type of error s/he has made. In case the student needs help, s/he can ask for a hint to receive the first hint from a chain of hints, which includes suggestions intended to make the student think. The student can request further hints to get more specific advice on what to do, and when the chain of hints has all been delivered, the system eventually tells the student exactly what to type BIB007. CBM tutors such as KERMIT, an ITS that teaches conceptual database design, and SQL-Tutor provide six levels of feedback to the student: correct, error flag, hint, detailed hint, all errors and solution. The correct level simply indicates to the student whether the answer is correct or incorrect. The error flag indicates the type of construct (e.g., entity or relationship) that contains the error. Hint and detailed hint messages are generated from the violated constraint. The complete solution is displayed at the solution level BIB001.
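Putting the constraint-based diagnosis of Section 6.2.4 together with the graduated feedback levels just described, a single feedback step of a CBM-style tutor could be sketched as follows. The constraint, the messages, and the example solution are hypothetical; only the six level names are taken from the text above.

```python
# Sketch of CBM-style diagnosis plus graduated feedback. The single constraint and
# the feedback texts are illustrative; real tutors hold hundreds of constraints.

CONSTRAINTS = [
    {
        "id": "join-needs-condition",
        "relevance":    lambda sol: len(sol["tables"]) > 1,     # applies to multi-table queries
        "satisfaction": lambda sol: sol["has_join_condition"],  # must then have a join condition
        "hint": "When you use more than one table, the tables must be joined.",
        "detailed_hint": "Add a join condition in the WHERE clause (or an explicit JOIN ... ON).",
    },
]

FEEDBACK_LEVELS = ["correct", "error flag", "hint", "detailed hint", "all errors", "solution"]

def diagnose(solution):
    """Return the constraints that are relevant to the solution but violated."""
    return [c for c in CONSTRAINTS
            if c["relevance"](solution) and not c["satisfaction"](solution)]

def feedback(solution, level, ideal_solution_text):
    violated = diagnose(solution)
    if not violated:
        return "Your solution is correct."
    if level == "correct":
        return "Your solution is not correct."
    if level == "error flag":
        return "There is an error in the FROM/WHERE part of your query."
    if level == "hint":
        return violated[0]["hint"]
    if level == "detailed hint":
        return violated[0]["detailed_hint"]
    if level == "all errors":
        return " ".join(c["hint"] for c in violated)
    if level == "solution":
        return ideal_solution_text
    return "Your solution is not correct yet."

student_solution = {"tables": ["student", "course"], "has_join_condition": False}
print(feedback(student_solution, "hint", "SELECT ... FROM student JOIN course ON ..."))
```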
Besides providing feedback to remedy students' errors, personalized guidance can also be given to help students, as Kenny and Pahl have done in SQL-Tutor BIB003. They offer the student advice and recommendations about subject areas that a particular student needs to focus on. The decision about the particular areas recommended by the system is made by collecting data from the student model: the pedagogical model retrieves the information on all errors made by the student from the student model in order to make the decision. Model Tracing Tutors have developed teaching strategies and interactions between the system and the student in an effort to reach the level of performance of experienced human tutors. However, many researchers have criticized model tracing tutors, arguing that an MTT needs a strategic tutorial model BIB002. According to them, an MTT should encourage students to construct their own knowledge instead of telling it to them. In other words, students can learn better if they are engaged in a dialog that helps them construct their knowledge themselves instead of being hinted toward inducing the knowledge from problem-solving experiences. A third-generation model tracing tutor named Ms. Lindquist, built using what is called the ATM ("Adding a Tutorial Model") architecture, was the first model tracing tutor designed to be more human-like and caring when participating in a conversation. Ms. Lindquist could produce probing questions, positive and negative feedback, follow-up questions in embedded subdialogs, and requests for explanation as to why something is correct BIB002 BIB005. DEPTHS, an ITS for learning software design patterns, implements a curriculum planning model for selecting appropriate learning materials (e.g., concepts, content units, fragments and test questions) that best fit the student's characteristics BIB008. DEPTHS is able to decide on the concepts that should be added to the concept plan of a particular student, along with a detailed lesson and a test plan for each such concept. Each time the student's performance changes significantly, the concept plan is created from scratch. The decision to add a new concept to the concept plan is made according to the curriculum sequence stored in the expert model and the student's performance and current knowledge stored in the student model BIB008.
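The DEPTHS-style decision of which concept to add to a student's concept plan, based on the curriculum sequence in the expert model and the mastery estimates in the student model, can be sketched as follows. The curriculum entries, prerequisite links and the 0.7 mastery threshold are illustrative assumptions, not values taken from DEPTHS.

```python
# Sketch of curriculum-sequencing logic: pick the next concept whose prerequisites
# the student has mastered but which the student does not yet know well.
# Curriculum content and the 0.7 mastery threshold are illustrative.

CURRICULUM = [
    {"concept": "classes",          "prerequisites": []},
    {"concept": "inheritance",      "prerequisites": ["classes"]},
    {"concept": "observer pattern", "prerequisites": ["classes", "inheritance"]},
]

MASTERY_THRESHOLD = 0.7

def next_concept(student_knowledge):
    """student_knowledge maps concept name -> estimated mastery in [0, 1]."""
    for entry in CURRICULUM:                      # curriculum order from the expert model
        mastered = student_knowledge.get(entry["concept"], 0.0) >= MASTERY_THRESHOLD
        prereqs_ok = all(student_knowledge.get(p, 0.0) >= MASTERY_THRESHOLD
                         for p in entry["prerequisites"])
        if not mastered and prereqs_ok:
            return entry["concept"]
    return None  # nothing suitable: everything mastered, or prerequisites still missing

print(next_concept({"classes": 0.9, "inheritance": 0.4}))  # -> 'inheritance'
```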
Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Tutorial Dialog in Natural <s> CIRCSIM-Tutor version 2, a dialogue-based intelligent tutoring system (ITS), is nearly five years old. It conducts a conversation with a student to help the student learn to solve a class of problems in cardiovascular physiology dealing with the regulation of blood pressure. It uses natural language for both input and output, and can handle a variety of syntactic constructions and lexical items, including sentence fragments and misspelled words. <s> BIB001 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Tutorial Dialog in Natural <s> AutoTutor is a computer tutor that simulates the discourse patterns and pedagogical strategies of a typical human tutor. AutoTutor is designed to assist college students in learning the fundamentals of hardware, operating systems, and the Internet in an introductory computer literacy course. Most tutors in school systems are not highly trained in tutoring techniques and have only a modest expertise on the tutoring topic, but they are surprisingly effective in producing learning gains in students. We have dissected the discourse and pedagogical strategies these unskilled tutors exhibit by analyzing approximately 100 hours of naturalistic tutoring sessions. These mechanisms are implemented in AutoTutor. AutoTutor presents questions and problems from a curriculum script, attempts to comprehend learner contributions that are entered by keyboard, formulates dialog moves that are sensitive to the learner's contributions (such as short feedback, pumps, prompts, elaborations, corrections, and hints), and delivers the dialog moves with a talking head. AutoTutor has seven modules: a curriculum script, language extraction, speech act classification, latent semantic analysis, topic selection, dialog move generation, and a talking head. <s> BIB002 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Tutorial Dialog in Natural <s> This paper describes an application of APE (the Atlas Planning Engine), an integrated planning and execution system at the heart of the Atlas dialogue management system. APE controls a mixed-initiative dialogue between a human user and a host system, where turns in the 'conversation' may include graphical actions and/or written text. APE has full unification and can handle arbitrarily nested discourse constructs, making it more powerful than dialogue managers based on finitestate machines. We illustrate this work by describing Atlas-Andes, an intelligent tutoring system built using APE with the Andes physics tutor as the host. <s> BIB003 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Tutorial Dialog in Natural <s> Atlas-Andes is a dialogue enhanced model tracing tutor (MTT) integrating the Andes Physics tutoring system (Gertner ~ VanLelm 2000) with the Atlas tutorial dialogue system (Freedman et al. 2000). Andes is a MTT that presents quantitative physics problems to students. Each problem solving action entered by students is highlighted either red or green to indicate whether it. was correct or not. This basic feedback is terlned red-greeu feedback. Furthermore, when students get stuck in the midst of problem solving and request help, Andes provides hint sequences designed to help them achieve the goal of soh’ing the problem as quickly as possible. 
Atlas provides Andes with the capability of leading students through directed lines of reasoning that teach basic physics conceptual knowledge, such as Newton’s Laws. The purpose of these directed lines of reasoning is to provide a solid foundation in conceptual physics to promote deep learning and to enable students to develop meaningful problem solving strategies. While students in elementary mechanics courses have demonstrated an ability to master the skills required to solve quantitative physics problems, a nmnber of studies have revealed that the same students perform very poorly when faced with qualitative physics problems (Halloun & Hestenes 1985b; 1985a; Hake 1998). Furthermore, the naive conceptions of physics that they bring with them when they begin a formal study of physics do not change significantly by the time they finish their classes (Halloun & Hestenes 1985b). Similarly, MTTs in a wide range of domains have commonly been criticized for failing to encourage deep learning (VanLehn et al. 2000). If students do not reflect upon the hints they are given, but instead simply continue guessing until they perform an action that receives positive feedback, they tend to learn the right actions for the wrong reasons (Aleveu, Koedinger, K~ Cross 1999; <s> BIB004 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Tutorial Dialog in Natural <s> Foreword, Robert J. Sternberg Chapter 1. A Growing Sense of "Agency," Douglas J. Hacker, John Dunlosky, and Arthur C. Graesser Part I: Comprehension Strategies Chapter 2. The Role of Metacognition in Understanding and Supporting Reading Comprehension, Margaret G. McKeown and Isabel L. Beck Chapter 3. The Role of Metacognition in Teaching Reading Comprehension to Primary Students, Joanna P. Williams and J. Grant Atkins Part II: Metacognitive Strategies Chapter 4. Question Generation and Anomaly Detection in Texts, Jose Otero Chapter 5. Self-Explanation and Metacognition: The Dynamics of Reading, Danielle S. McNamara and Joseph P. Magliano Part III: Metacomprehension Chapter 6. Metacognitive Monitoring During and After Reading, Keith W. Thiede, Thomas D. Griffin, Jennifer Wiley, and Joshua Redford Chapter 7. The Importance of Knowing What You Know: A Knowledge Monitoring Framework for Studying Metacognition in Education, Sigmund Tobias and Howard T. Everson Part IV: Writing Chapter 8. Metacognition and Children's Writing, Karen R. Harris, Steve Graham, Mary Brindle, and Karin Sandmel Chapter 9. Writing is Applied Metacognition , Douglas J. Hacker, Matt C. Keener, and John C. Kircher Part V: Science and Mathematics Chapter 10. The Interplay of Scientific Inquiry and Metacognition: More than a Marriage of Convenience, Barbara White, John Frederiksen, and Allan Collins Chapter 11. The Enigma of Mathematical Learning Disabilities: Metacognition or STICORDI, That's the Question, Annemie Desoete Part VI: Individual Differences Chapter 12. Context Matters: Gender and Cross-Cultural Differences in Confidence, Mary Lundeberg and Lindsey Mohan Chapter 13. Teachers as Metacognitive Professionals, Gerald G. Duffy, Samuel Miller, Seth Parsons, and Michael Meloth Part VII: Self-Regulated Learning Chapter 14. Supporting Self-Regulated Learning with Cognitive Tools, Philip H. Winne and John C. Nesbit Chapter 15. Effective Implementation of Metacognition, Michael J. Serra and Janet Metcalfe Chapter 16. Self-Regulation: Where Metacognition and Motivation Intersect, Barry J. Zimmerman and Adam R. 
Moylan Part VIII: Technology Chapter 17. Self-Regulated Learning with Hypermedia, Roger Azevedo and Amy M. Witherspoon Chapter 18. Interactive Metacognition: Monitoring and Regulating a Teachable Agent, Daniel L. Schwartz, Catherine Chase, Doris B. Chin, Marily Oppezzo, Henry Kwong, Sandra Okita, Gautam Biswas, Rod Roscoe, Hogyeong Jeong, and John Wagster Part IX: Tutoring Chapter 19. Meta-Knowledge in Tutoring, Arthur C. Graesser, Sidney D'Mello, and Natalie Person Chapter 20. In Vivo Experiments on Whether Supporting Metacognition in Intelligent Tutoring Systems Yields Robust Learning, Ken Koedinger, Vincent Aleven, Ido Roll, and Ryan Baker Part X: Measurement Chapter 21. Measuring Metacognitive Judgments, Gregory Schraw Chapter 22. Sins Committed in the Name of Ecological Validity: A Call for Representative Design in Education Science, John Dunlosky, Sara Bottiroli, and Marissa Hartwig <s> BIB005 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Tutorial Dialog in Natural <s> AutoTutor is a natural language tutoring system that has produced learning gains across multiple domains (e.g., computer literacy, physics, critical thinking). In this paper, we review the development, key research findings, and systems that have evolved from AutoTutor. First, the rationale for developing AutoTutor is outlined and the advantages of natural language tutoring are presented. Next, we review three central themes in AutoTutor’s development: human-inspired tutoring strategies, pedagogical agents, and technologies that support natural-language tutoring. Research on early versions of AutoTutor documented the impact on deep learning by co-constructed explanations, feedback, conversational scaffolding, and subject matter content. Systems that evolved from AutoTutor added additional components that have been evaluated with respect to learning and motivation. The latter findings include the effectiveness of deep reasoning questions for tutoring multiple domains, of adapting to the affect of low-knowledge learners, of content over surface features such as voices and persona of animated agents, and of alternative tutoring strategies such as collaborative lecturing and vicarious tutoring demonstrations. The paper also considers advances in pedagogical agent roles (such as trialogs) and in tutoring technologies, such semantic processing and tutoring delivery platforms. This paper summarizes and integrates significant findings produced by studies using AutoTutor and related systems. <s> BIB006
Tutorial Dialog in Natural Language. Human tutors use conversational dialogs during tutoring to deliver instruction. Early ITSs were not able to provide natural language, discourse, or dialog-based instruction; however, many modern ITSs use natural language. The aim of this sub-section is to present how tutorial dialog techniques can be used to build interaction environments in ITSs, along with some well-known dialog-based ITSs found in the literature. AutoTutor is a natural language tutoring system that has been developed for multiple domains such as computer literacy, physics, and critical thinking BIB006. AutoTutor is a family of systems with a long history. AutoTutor uses strategies of human tutors such as comprehension strategies, meta-cognitive strategies, self-regulated learning and meta-comprehension BIB005 BIB006. In addition, AutoTutor incorporates learning strategies derived from learning research such as Socratic tutoring, scaffolding-fading, and frontier learning. Benjamin et al. claim that the use of discourse in ITSs can facilitate new learning activities such as self-reflection, answering deep questions, generating questions and resolving conflicting statements BIB006. In AutoTutor, the pedagogical interventions that occur between the system and students are categorized as positive feedback, neutral feedback, negative feedback, pump, prompt, hint, elaboration and splice/correction BIB002. Latent Semantic Analysis (LSA) is used in AutoTutor as the backbone to represent computer literacy knowledge. The modules of AutoTutor are different from the traditional modules that have been identified and used in cognitive and constraint-based tutors; the fact that AutoTutor uses language and discourse has led to the use of novel architectures (for more details on the architecture of AutoTutor see BIB002). AutoTutor incorporates a variety of computational architectures and learning methodologies, and has been shown to be very effective as a learning technology BIB006. Atlas is an ITS that uses natural language dialogs to increase opportunities for students to construct their own knowledge BIB004. The two main components of Atlas are APE BIB003, the Atlas Planning Engine, and CARMEL, the natural language understanding component. APE is responsible for constructing and generating coherent dialogues, while CARMEL understands and analyzes the student's answers.
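Dialog-based tutors like AutoTutor evaluate a student's natural-language contribution by comparing it with expected answers in a semantic space (LSA in AutoTutor's case) and then choosing a dialog move. A minimal sketch of this kind of match, using a simple bag-of-words cosine similarity in place of a trained LSA space, is shown below; the expectation text and the thresholds are illustrative.

```python
# Sketch of expectation matching in a dialog tutor: compare the student's utterance
# with an expected answer using cosine similarity over word counts. Real systems use
# a trained semantic space (e.g., LSA) instead of raw word counts.
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    common = set(va) & set(vb)
    dot = sum(va[w] * vb[w] for w in common)
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

EXPECTATION = "the net force on the object is zero so its acceleration is zero"
COVERAGE_THRESHOLD = 0.6   # illustrative

def select_dialog_move(student_utterance):
    """Choose a dialog move based on how well the utterance covers the expectation."""
    score = cosine_similarity(student_utterance, EXPECTATION)
    if score >= COVERAGE_THRESHOLD:
        return "positive feedback"
    elif score > 0.2:
        return "hint"        # partially covered: scaffold toward the expectation
    return "pump"            # little coverage: encourage the student to say more

print(select_dialog_move("acceleration is zero because the net force is zero"))
```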
Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Hint <s> CIRCSIM-Tutor version 2, a dialogue-based intelligent tutoring system (ITS), is nearly five years old. It conducts a conversation with a student to help the student learn to solve a class of problems in cardiovascular physiology dealing with the regulation of blood pressure. It uses natural language for both input and output, and can handle a variety of syntactic constructions and lexical items, including sentence fragments and misspelled words. <s> BIB001 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Hint <s> This chapter provides an overview of dialog-based intelligent tutoring systems (ITSs), which are learning technologies that help learners develop mastery of difficult subject matter by holding conversations in natural language. The first section discusses some of the basic issues in the design of dialog-based ITSs, while the second section highlights recent advances in this area. The first section begins with an analysis of human–human tutorial dialogs followed with a discussion of the six major components of most dialog-based ITSs: input transformation, speech-act classification, learner modeling, dialog management, output rendering, and domain modeling. These abstract components are concretized within the context of one of the first dialog-based ITSs, AutoTutor. The second section discusses recent advances in the area with an emphasis on systems that model learners’ emotional states in addition to their cognitive states. These include a system that automatically adapts its dialogs based on whether the learner is bored, confused, or frustrated, a system with unique mechanisms to monitor and correct learners’ disengagement behaviors by tracking eye gaze, and a system that strategically plants confusion in the minds of learners to engender deeper modes of thinking. We conclude the chapter by discussing some of the open issues in dialog-based ITSs, such as identifying benefits of spoken versus typed input, understanding when imperfect natural-language understanding is sufficient, contrasting the importance of the message vs. the medium in influencing learning, and identifying conditions in which dialog-based tutoring is effective. <s> BIB002 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Hint <s> We present in this paper an innovative solution to the challenge of building effective educational technologies that offer tailored instruction to each individual learner. The proposed solution in the form of a conversational intelligent tutoring system, called DeepTutor, has been developed as a web application that is accessible 24/7 through a browser from any device connected to the Internet. The success of several large scale experiments with high-school students using DeepTutor is a solid proof that conversational intelligent tutoring at scale over the web is possible. <s> BIB003
Another conversational ITS is DeepTutor, an ITS developed for the domain of Newtonian physics BIB003. A framework called learning progressions (LPs), used by the science education research community, is integrated as a way to better model students' cognition and learning. The system implements conversational goals to accurately understand the student at each turn by analyzing the interaction that occurs between the system and the student. These conversational goals include coaching students to articulate expectations, correcting students' misconceptions, answering students' questions, giving feedback on students' contributions, and error handling. In order to understand a student's contributions while interacting with DeepTutor, a semantic similarity task is computed by modeling it as a quadratic assignment problem (QAP); an efficient branch-and-bound algorithm was developed for the QAP to reduce the space explored in the search for the optimal solution. CIRCSIM-Tutor is a tutoring system in the area of cardiovascular physiology that incorporates natural language dialogue with the learner by using a collection of tutoring tactics that mimic expert human tutors BIB001. It can handle different syntactic constructions and lexical items such as sentence fragments and misspelled words. Tutoring tactics in CIRCSIM-Tutor are categorized into four major types, as illustrated in Table 1. Table 1 (Tutoring Tactics in CIRCSIM-Tutor, adopted from BIB001) lists the tutoring plans and their tactics: tutor the student by asking a series of questions; give the answer; hint, for example by asking the student to explain their answer or reminding the student ("Remember that ..."); and acknowledge the student's answer, with 4 possible cases. These evolved from the major types of tactics used in repeated pedagogical interventions: ask the next question, evaluate the user's response, recognize the user's answer, and, if the answer is incorrect, either provide a hint or the correct answer. The architecture of CIRCSIM-Tutor contains the following: a planner, a text generator, an input understander, a student model, a knowledge base, a problem solver and a screen manager BIB001. CIRCSIM-Tutor showed significant improvement in students from pre-test to post-test, and the input understander mechanism of the system was able to recognize and respond to over 95% of students' inputs. Evens et al. suggest the use of APE for planning in tutoring sessions. Dialog-based ITSs have the same main goal as traditional ITSs, which is to increase the level of engagement and learning gains. However, dialog-based ITSs can use different dimensions of evaluation in classifying learners' responses, comprehending learners' contributions, modeling knowledge, and generating conversationally smooth tutorial dialogs. D'Mello and Graesser BIB002 conducted a study describing how dialog-based ITSs can be evaluated along these dimensions using AutoTutor as a case study.
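A simplified version of the tactic-selection logic implied by Table 1, mapping an assessment of the student's answer to a tutoring plan, might look as follows. The answer categories and the escalation policy are illustrative; CIRCSIM-Tutor's actual planner is considerably richer.

```python
# Sketch of plan selection in a dialog tutor, loosely following the CIRCSIM-Tutor
# tactics in Table 1. The categories and policy below are illustrative only.

def select_plan(answer_category, hints_already_given):
    """Map an assessed answer category to a tutoring plan."""
    if answer_category == "correct":
        return "acknowledge"                 # recognize the correct answer
    if answer_category in ("partially correct", "near miss"):
        return "hint"                        # e.g., ask the student to explain, or remind them
    if answer_category == "incorrect" and hints_already_given < 2:
        return "tutor"                       # ask a series of questions (directed reasoning)
    return "give answer"                     # fall back to giving the correct answer

print(select_plan("incorrect", hints_already_given=0))  # -> 'tutor'
print(select_plan("incorrect", hints_already_given=2))  # -> 'give answer'
```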
Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Spoken Dialogue. <s> Abstract This paper describes an empirical study of man-computer speech interaction. The goals of the experiment were to find out how people would communicate with a real-time, speaker-independent continuous speech understanding system. The experimental design compared three communication modes: natural language typing, speaking directly to a computer and speaking to a computer through a human interpreter. The results show that speech to a computer is not as ill-formed as one would expect. People speaking to a computer are more disciplined than when speaking to each other. There are significant differences in the usage of spoken language compared to typed language, and several phenomena which are unique to spoken or typed input respectively. Usefulness for work in speech understanding systems for the future is considered. <s> BIB001 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Spoken Dialogue. <s> In order for speech recognizers to deal with increased task perplexity, speaker variation, and environment variation, improved speech recognition is critical. Steady progress has been made along these three dimensions at Carnegie Mellon. In this paper, we review the SPHINX-II speech recognition system and summarize our recent efforts on improved speech recognition. <s> BIB002 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Spoken Dialogue. <s> While human tutors typically interact with students using spoken dialogue, most computer dialogue tutors are text-based. We have conducted two experiments comparing typed and spoken tutoring dialogues, one in a human-human scenario, and another in a human-computer scenario. In both experiments, we compared spoken versus typed tutoring for learning gains and time on task, and also measured the correlations of learning gains with dialogue features. Our main results are that changing the modality from text to speech caused changes in the learning gains, time and superficial dialogue characteristics of human tutoring, but for computer tutoring it made less difference. <s> BIB003 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Spoken Dialogue. <s> The ability to lead collaborative discussions and appropriately scaffold learning has been identified as one of the central advantages of human tutorial interaction [6]. In order to reproduce the effectiveness of human tutors, many developers of tutorial dialogue systems have taken the approach of identifying human tutorial tactics and then incorporating them into their systems. Equally important as understanding the tactics themselves is understanding how human tutors decide which tactics to use. We argue that these decisions are made based not only on student actions and the content of student utterances, but also on the meta-communicative information conveyed through spoken utterances (e.g. pauses, disfluencies, intonation). Since this information is less frequent or unavailable in typed input, tutorial dialogue systems with speech interfaces have the potential to be more effective than those without. This paper gives an overview of the Spoken Conversational Tutor (SCoT) that we have built and describes how we are beginning to make use of spoken language information in SCoT. 
<s> BIB004 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Spoken Dialogue. <s> ITSPOKE is a spoken dialogue system that uses the Why2-Atlas text-based tutoring system as its "back-end". A student first types a natural language answer to a qualitative physics problem. ITSPOKE then engages the student in a spoken dialogue to provide feedback and correct misconceptions, and to elicit more complete explanations. We are using ITSPOKE to generate an empirically-based understanding of the ramifications of adding spoken language capabilities to text-based dialogue tutors. <s> BIB005
It is well-known that the best human tutors are more effective than the best computer tutors BIB003. The main difference between human and computer tutors is the fact that human tutors predominantly use spoken natural language when interacting with learners. This raises the question of whether making the interaction more natural, such as by changing the modality of the computer tutor to spoken natural language dialogue, would decrease the advantage of human tutoring over computer tutoring. In fact, the majority of dialogue-based ITSs use typed student input. However, many potential advantages of using speech-to-speech interaction in the domain of ITSs have been noted in the literature BIB003 BIB004. One advantage is in terms of self-explanation, which gives the student a better opportunity to construct his/her knowledge BIB004; for instance, Hauptmann et al. showed that self-explanation happens more often in speech than in typed interaction BIB001. Another advantage is that speech interaction provides a more accurate student model: students use meta-communication strategies such as hedges, pauses, and disfluencies, which allow the tutor to infer more information regarding student understanding. The following discusses some computer tutors that implement spoken dialogue BIB003 BIB004. ITSPOKE is an ITS which uses spoken dialogue for the purpose of providing spoken feedback and correcting misconceptions BIB005. The student and the system interact with each other in English to discuss the student's answers. ITSPOKE uses a microphone as the input device for the student's speech and sends the signal to the Sphinx2 recognizer BIB002. Litman et al. compared ITSPOKE with typed dialogue and found no evidence that the spoken modality increases student learning BIB003; in addition, it was clear that speech recognition errors did not decrease learning. Another spoken ITS is SCoT (Spoken Conversational Tutor). SCoT's domain is shipboard damage control, which refers to fires, floods and other critical situations that happen aboard Navy vessels BIB004. Pon-Barry et al. suggested several challenges that ITS developers should be aware of when developing spoken language ITSs. First, repeated critical feedback from the tutor, such as "You made this mistake more than once" and "We discussed this same mistake earlier", causes a negative effect; this suggests further work on better understanding and more tact in correcting users' misconceptions. Second, even though the accuracy of speech recognition is high, small recognition errors can make the tutor less effective.
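The basic turn cycle of a spoken tutor such as ITSPOKE (capture speech, recognize it, evaluate the content, respond with speech) can be sketched as below. Every function is a stand-in written for illustration; none of them corresponds to the API of Sphinx2 or of any other real recognizer or tutor back-end.

```python
# Sketch of one turn of a spoken dialogue tutor. All functions are illustrative
# placeholders; a real system plugs in an actual recognizer and a text-based
# tutor back-end for content analysis.

def record_audio():
    return b"...audio bytes..."            # stand-in for microphone capture

def recognize_speech(audio):
    return "the net force is zero", 0.8    # stand-in for an ASR hypothesis + confidence

def evaluate_answer(text):
    return "Right, so what does that imply about the acceleration?"  # stand-in feedback

def speak(text):
    print("TUTOR:", text)                  # stand-in for speech synthesis

def tutoring_turn():
    audio = record_audio()
    text, confidence = recognize_speech(audio)
    if confidence < 0.5:                   # illustrative rejection threshold
        speak("I'm sorry, could you say that again?")
        return
    feedback = evaluate_answer(text)       # e.g., hint, prompt, or confirmation
    speak(feedback)

tutoring_turn()
```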
Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Affective Tutoring System <s> ABSTRACT In this paper, we report on our efforts in developing affective character-based interfaces, i.e., interfaces that recognize and measure affective information of the user and address user affect by employing embodied characters. In particular, we describe the Empathic Companion, an animated interface agent that accompanies the user in the setting of a virtual job interview. This interface application takes physiological data (skin conductance and electromyography) of a user in realtime, interprets them as emotions, and addresses the user's affective states in the form of empathic feedback. The Empathic Companion is conceived as an educational agent that supports job seekers preparing for a job interview. We also present results from an exploratory study that aims to evaluate the impact of the Empathic Companion by measuring users' skin conductance and heart rate. While an overall positive effect of the Empathic Companion could not be shown, the outcome of the experiment suggests that empathic feed... <s> BIB001 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Affective Tutoring System <s> This paper describes the use of sensors in intelligent tutors to detect students' affective states and to embed emotional support. Using four sensors in two classroom experiments the tutor dynamically collected data streams of physiological activity and students' self-reports of emotions. Evidence indicates that state-based fluctuating student emotions are related to larger, longer-term affective variables such as self-concept in mathematics. Students produced self-reports of emotions and models were created to automatically infer these emotions from physiological data from the sensors. Summaries of student physiological activity, in particular data streams from facial detection software, helped to predict more than 60% of the variance of students emotional states, which is much better than predicting emotions from other contextual variables from the tutor, when these sensors are absent. This research also provides evidence that by modifying the “context” of the tutoring system we may well be able to optimize students' emotion reports and in turn improve math attitudes. <s> BIB002 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Affective Tutoring System <s> Affective Computing is a new Artificial Intelligence area that deals with the possibility of making computers able to recognize human emotions in different ways. This paper represents a study about the integration of this new area in the intelligent tutoring system. We argue that socially appropriate affective behaviors would provide a new dimension for collaborative learning systems. The main goal is to analyses learner facial expressions and show how Affective Computing could contribute for this interaction, being part of the complete student tracking (traceability) to monitor student behaviors during learning sessions. <s> BIB003 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Affective Tutoring System <s> Recent works in Computer Science, Neurosciences, Education, and Psychology have shown that emotions play an important role in learning. Learner’s cognitive ability depends on his emotions. 
We will point out the role of emotions in learning, distinguishing the different types and models of emotions which have been considered until now. We will address an important issue concerning the different means to detect emotions and introduce recent approaches to measure brain activity using Electroencephalograms (EEG). Knowing the influence of emotional events on learning it becomes important to induce specific emotions so that the learner can be in a more adequate state for better learning or memorization. To this end, we will introduce the main components of an emotionally intelligent tutoring system able to recognize, interpret and influence learner’s emotions. We will talk about specific virtual agents that can influence learner’s emotions to motivate and encourage him and involve a more cooperative work, particularly in narrative learning environments. Pushing further this paradigm, we will present the advantages and perspectives of subliminal learning which intervenes without conscious perception. Finally, we conclude with new directions to emotional learning. <s> BIB004 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Affective Tutoring System <s> We study the incidence (rate of occurrence), persistence (rate of reoccurrence immediately after occurrence), and impact (effect on behavior) of students' cognitive-affective states during their use of three different computer-based learning environments. Students' cognitive-affective states are studied using different populations (Philippines, USA), different methods (quantitative field observation, self-report), and different types of learning environments (dialogue tutor, problem-solving game, and problem-solving-based Intelligent Tutoring System). By varying the studies along these multiple factors, we can have greater confidence that findings which generalize across studies are robust. The incidence, persistence, and impact of boredom, frustration, confusion, engaged concentration, delight, and surprise were compared. We found that boredom was very persistent across learning environments and was associated with poorer learning and problem behaviors, such as gaming the system. Despite prior hypothesis to the contrary, frustration was less persistent, less associated with poorer learning, and did not appear to be an antecedent to gaming the system. Confusion and engaged concentration were the most common states within all three learning environments. Experiences of delight and surprise were rare. These findings suggest that significant effort should be put into detecting and responding to boredom and confusion, with a particular emphasis on developing pedagogical interventions to disrupt the ''vicious cycles'' which occur when a student becomes bored and remains bored for long periods of time. <s> BIB005 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Affective Tutoring System <s> This chapter describes the automatic recognition of and response to human emotion within intelligent tutors. Tutors can recognize student emotion with more than 80%accuracy compared to student self-reports, using wireless sensors that provide data about posture, movement, grip tension, facially expressed mental states and arousal. Pedagogical agents have been used that provide emotional or motivational feedback. 
Students using such agents increased their math value, self-concept and mastery orientation, with females reporting more confidence and less frustration. Low-achieving students—one third of whom have learning disabilities—report higher affective needs than their higher-achieving peers. After interacting with affective pedagogical agents, low-achieving students improved their affective outcomes and reported reduced frustration and anxiety. <s> BIB006 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Affective Tutoring System <s> In this paper, we present a Web-based system aimed at learning basic mathematics. The Web-based system includes different components like a social network for learning, an intelligent tutoring system and an emotion recognizer. We have developed the system with the goal of being accessed from any kind of computer platform and Android-based mobile device. We have also built a neural-fuzzy system for the identification of student emotions and a fuzzy system for tracking student´s pedagogical states. We carried out different experiments with the emotion recognizer where we obtained a success rate of 96%. Furthermore, the system (including the social network and the intelligent tutoring system) was tested with real students and the obtained results were very satisfying. <s> BIB007
Affective Tutoring Systems (ATS) are ITSs that can recognize human emotions (sad, happy, frustrated, motivated, etc.) in different ways BIB003 . It is important to incorporate the emotions of students in the learning process because recent learning theories have established a link between emotions and learning, with the claim that cognition, motivation and emotion are the three components of learning BIB004 . Over the last few years, there has been a great amount of interest in computing the learner's affective states in ITSs and studying how to respond to them in effective ways BIB005 . Affective tutors use various techniques to enable computers to recognize, model, understand and respond to students' emotions in an effective manner. Knowing the emotional states of the student provides information on the student's psychological states and offers the possibility of responding appropriately BIB003 . A system can embed devices to detect a student's affective or emotional states. These include PC cameras, PC microphones, special mice, and neuro-headsets, among others. These devices identify physical signals such as facial images, voice, mouse pressure, heart rate and stress level. These signals are then sent to the system to be processed, and the emotional state is obtained in real time. The ATS objective is to change a negative emotional state (e.g., confused) to a positive emotional state (e.g., committed) BIB007 . In , the learners' affective states are detected by monitoring their gross body language (body position and arousal) as they interact with the system. An automated body pressure measurement system is also used to capture the learner's pressure. The system detects six affective states of the learner: confusion, flow, delight, surprise, boredom and neutral. If the system realizes that the student is bored, the tutor stimulates the student by presenting engaging tasks. If frustration is detected, the tutor offers encouraging statements or corrects information that the learner is experiencing difficulty with. Experiments suggest that boredom and flow might best be detected from body language, although the face plays a significant role in conveying confusion and delight. Jraidi et al. present an ITS that acts differently when the student is frustrated . For example, it may provide problems similar to ones in which the student has previously been successful. In case of boredom, the system provides an easier problem to motivate the student again, or a more difficult problem if the current problem seems too easy. Another approach used in the system to respond to student emotions integrates a virtual pedagogical agent called a learning companion to allow affective real-time interaction with the learners. This agent can communicate with the learner as a study partner when solving problems, or provide encouragement and congratulatory messages, appearing to care about the learner. In other words, these agents can provide empathic responses which mirror the learner's emotional states BIB007 . Woolf and colleagues also implement an empathetic learning companion that reflects the last expressed emotion of the learner as long as the emotion is not negative, such as frustration or boredom BIB006 BIB002 . The companion responds in full sentences, providing feedback with voice and emotion. The presence of someone who appears to care can be motivating to learners.
Studies show that students who used the learning companion increased their math understanding and level of interest, and showed reduced boredom. Another affective tutoring system that uses an empathetic companion to respond to learner emotion is a system that practices interview questions with users BIB001 . The system perceives the user's emotion by measuring skin conductance and then takes appropriate actions. For instance, the agent displays concern for a user who is aroused and has a negatively valenced emotion, e.g., by saying "I am sorry that you seem to feel a bit bad about that question". The study shows that users receiving empathetic feedback are less stressed when asked interview questions.
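Although the systems above differ in the sensors and pedagogy they use, they share a common control pattern: a detector estimates the learner's affective state, and a policy maps that state to a tutorial action. The following minimal sketch illustrates this pattern only; the state names, the success-rate heuristic and the intervention labels are illustrative assumptions, not the actual rules of any system surveyed here.

```python
# Minimal sketch of an affect-aware intervention policy. The affective states,
# the success-rate threshold and the intervention names are illustrative
# assumptions, not the rules of any specific system described in this survey.
from enum import Enum, auto


class Affect(Enum):
    BOREDOM = auto()
    FRUSTRATION = auto()
    CONFUSION = auto()
    FLOW = auto()


def choose_intervention(state: Affect, recent_success_rate: float) -> str:
    """Map a detected affective state to a tutorial action."""
    if state is Affect.BOREDOM:
        # Re-engage: a harder problem if the student is succeeding, an easier one otherwise.
        return "harder_problem" if recent_success_rate > 0.8 else "easier_problem"
    if state is Affect.FRUSTRATION:
        # Encourage the student and revisit material already mastered.
        return "encouragement_and_review"
    if state is Affect.CONFUSION:
        # Offer a targeted hint on the current step.
        return "hint_on_current_step"
    return "continue"  # FLOW: do not interrupt a productively engaged student


if __name__ == "__main__":
    print(choose_intervention(Affect.BOREDOM, recent_success_rate=0.9))  # harder_problem
```

In deployed systems the detector feeding such a policy is far richer (cameras, posture sensors, or log-based models), but the separation between detection and intervention shown here is the common architectural idea.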
Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Cultural Awareness in Education <s> As information and communication technology access expands in the developing world, learning technologies have the opportunity to play a growing role to enhance and supplement strained educational systems. Intelligent tutoring systems (ITS) offer strong learning gains, but are a class of technology traditionally designed for most-developed countries. Recently, closer consideration has been made to ITS targeting the developing world and to culturally-adapted ITS. This paper presents findings from a systematic literature review that focused on barriers to ITS adoption in the developing world. While ITS were the primary focus of the review, the implications likely apply to a broader range of educational technology as well. The geographical and economic landscape of tutoring publications is mapped out, to determine where tutoring systems research occurs. Next, the paper discusses challenges and promising solutions for barriers to ITS within both formal and informal settings. These barriers include student basic computing skills, hardware sharing, mobile-dominant computing, data costs, electrical reliability, internet infrastructure, language, and culture. Differences and similarities between externally-developed and locally-developed tutoring system research for the developing world are then considered. Finally, this paper concludes with some potential future directions and opportunities for research on tutoring systems and other educational technologies on the global stage. <s> BIB001 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Cultural Awareness in Education <s> In recent years, there has been increasing interest in automatically assessing help seeking, the process of referring to resources outside of oneself to accomplish a task or solve a problem. Research in the United States has shown that specific help-seeking behaviors led to better learning within intelligent tutoring systems. However, intelligent tutors are used differently by students in different countries, raising the question of whether the same help-seeking behaviors are effective and desirable in different cultural settings. To investigate this question, models connecting help-seeking behaviors with learning were generated from datasets from students in three countries – Costa Rica, the Philippines, and the United States, as well as a combined dataset from all three sites. Each model was tested on data from the other countries. This study found that models of effective help seeking transfer to some degree between the United States and Philippines, but not between those countries and Costa Rica. Differences may be explained by variations in classroom practices between the sites; for example, greater collaboration observed in the Costa Rican site indicates that much help seeking occurred outside of the technology. Findings indicate that greater care should be taken when assuming that the models underlying AIED systems generalize across cultures and contexts. <s> BIB002 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Cultural Awareness in Education <s> Cultural awareness, when applied to Intelligent Learning Environments (ILEs), contours the overall appearance, behaviour, and content used in these systems through the use of culturally-relevant student data and information. 
In most cases, these adaptations are system-initiated with little to no consideration given to student-initiated control over the extent of cultural-awareness being used. This paper examines some of the issues relevant to these challenges through the development of the ICON (Instructional Cultural cONtextualisation) system. The paper explores computational approaches for modelling the diversity of students within subcultures, and the necessary semantic formalisms for representing and reasoning about cultural backgrounds at an appropriate level of granularity for ILEs. The paper investigates how student-initiated control of dynamic cultural adaptation of educational content can be achieved in ILEs, and examines the effects of cultural variations of language formality and contextualisation on student preferences for different types of educational content. Evaluations revealed preliminary insight into quantifiable thresholds at which student perception for specific types of culturally-contextualised content vary. The findings further support the notion put forth in the paper that student-initiated control of cultural contextualisation should be featured in ILEs aiming to cater for diverse groups of students. <s> BIB003 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Cultural Awareness in Education <s> One of the major concerns and hopes for the 21st century has been the education of a global society. The increase in social connectivity through expanding Internet capabilities, the proliferation of computing technologies, and the growing sophistication of educational tools and systems has provided significant promise for achieving this goal. However, technical and conceptual barriers currently stand in the way of realizing this promise. Lingering discrepancies in the utilization of information and communication technology (ICT) around the world impede the worldwide diffusion of advanced educational technologies. Moreover, research in education has shown that teaching methodologies and instructional design often differ from one culture to another, and cannot always be universally applied, as their effects can vary greatly from one context to another. AIED developers therefore must be culturally aware, lest their designs incorporate implicit cultural assumptions that will impede adoption in other cultural contexts. Ideally their designs should be culturally aware as well, so that they are adaptable to a variety of cultural contexts. Researchers have additionally been interested in cultural modeling and cultural factors for another reason. In a global society, learners' cultural awareness and tolerance of cultural diversity are increasingly important. This presents a new opportunity for learning technologies, which can expose learners in immersive and supportive ways to cultural practices, perspectives, and values that they may not otherwise be able to access. The incorporation of novel technologies such as social media, mobile computing, and online communities can facilitate the sharing of experiences and provide a rich source of cultural diversity for learning interventions embedded in simulations, games, and other rich formats. However, in both of these cases instilling cultural aspects in learning technologies remains a major challenge. Cultural issues are fluid, diverse, and continuously evolve as societies change. They may involve interrelations between such complex factors as <s> BIB004
In recent years, special attention has been paid to the issues that arise in the context of delivering education in a globalized society BIB004 . Researchers in the field of ITSs and learning technologies are increasingly concerned with how learning technology systems can be adapted across a diversity of cultures. Nye in 2015 BIB001 addressed the barriers faced by ITSs entering the developing world; barriers such as limited student computing skills and the multiplicity of languages and cultures were presented along with existing solutions. An analysis of student help-seeking behaviors in ITSs across different cultures was conducted by Ogan et al. BIB002 . Models of help-seeking behaviors during learning were developed from datasets of students in three different countries: Costa Rica, the Philippines, and the United States. Ogan et al. find that models of help-seeking behavior are not substantially transferable across cultures. This finding suggests the need to replicate research in order to understand student behaviors in different contexts. Mohammed and Mohan BIB003 take a first step toward tackling this issue. Their system provides learners with some control over their cultural preferences, including problem description, feedback, and the presentation of images and hints. Deployment of such systems has provided researchers with the opportunity to experimentally investigate the social acceptability of non-dominant language use in education and its effects on learning.
Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Game-based Tutoring Systems <s> The authors investigated whether guidance and reflection would facilitate science learning in an interactive multimedia game. College students learned how to design plants to survive in different weather conditions. In Experiment 1, they learned with an agent that either guided them with corrective and explanatory feedback or corrective feedback alone. Some students were asked to reflect by giving explanations about their problem-solving answers. Guidance in the form of explanatory feedback produced higher transfer scores, fewer incorrect answers, and greater reduction of misconceptions during problem solving. Reflection in the form of having students give explanations for their answers did not affect learning. Experiments 2 and 3 showed that reflection promotes retention and far transfer in noninteractive environments but not in interactive ones unless students are asked to reflect on correct program solutions rather than on their own solutions. Results support the appropriate use of guidance and reflection for interactive multimedia games. <s> BIB001 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Game-based Tutoring Systems <s> Educational games and tutors provide conflicting approaches to the assistance dilemma, yet there is little work that directly compares them. This study tested the effects of game-based and tutor-based assistance on learning and interest. The laboratory experiment randomly assigned 105 university students to two versions of the educational game Policy World designed to teach the skills of policy argument. The game version provided minimal feedback and imposed penalties during training while the tutor version provided additional step-level, knowledge-based feedback and required immediate error correction. The study measured students' success during training, their interest in the game, and posttest performance. Tutor students were better able to analyze policy problems and reported higher level of competence which in turn affected interest. This suggests that we can improve the efficacy and interest in educational games by applying tutor-based approaches to assistance. <s> BIB002 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Game-based Tutoring Systems <s> iSTART is an intelligent tutoring system (ITS) designed to improve students’ reading comprehension. Previous studies have indicated that iSTART is successful; however, these studies have also indicated that students benefit most from long-term interactions that can become tedious and boring. A new game-based version of the system has been developed, called iSTART-ME (motivationally enhanced). Initial results from a usability study with iSTART-ME indicate that this system increases engagement and decreases boredom over time. <s> BIB003 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Game-based Tutoring Systems <s> Intelligent game-based learning environments integrate commercial game technologies with AI methods from intelligent tutoring systems and intelligent narrative technologies. This article introduces the CRYSTAL ISLAND intelligent game-based learning environment, which has been under development in the authors’ laboratory for the past seven years. 
After presenting CRYSTAL ISLAND, the principal technical problems of intelligent game-based learning environments are discussed: narrative-centered tutorial planning, student affect recognition, student knowledge modeling, and student goal recognition. Solutions to these problems are illustrated with research conducted with the CRYSTAL ISLAND learning environment. <s> BIB004
The novelty of an ITS and its interactive components is quite engaging when they are used for short periods of time (e.g., hours), but can become monotonous and even annoying when a student is required to interact with an ITS for weeks or months . The underlying idea of game-based learning is that students learn better when they are having fun and are engaged in the learning process. Game-based tutoring systems encourage learners to interact actively with the system, thereby making them more motivated to use the system for a longer time . Whereas the ITS principles maximize learning, the game technologies maximize motivation. Instead of learning a subject in a conventional and traditional way, the students play an educational game which successfully integrates game strategies with curriculum-based content. Although there is no overwhelming evidence supporting the effectiveness of educational game-based systems over computer tutors, it has been found that educational games have advantages over traditional tutoring approaches BIB002 BIB003 . Moreno and Mayer BIB001 summarize the characteristics of educational games that make them enjoyable to operate: interactivity, reflection, feedback, and guidance. To enhance both engagement and learning, Rai and Beck implemented game-like elements in their math tutor . The system provides a math-learning environment in which students engage in a narrated visual story. Students help story characters solve the problem in order to move the story forward, as shown in Figure 4 (Math-learning Environment with Game-like Elements). Students receive feedback and bug messages just as when using a traditional tutor. The study found that students are more likely to interact with the version of the math tutor that contains game-like elements; however, the authors suggest adding more tutorial features to a game-like environment for higher levels of learning. Another tutoring system that uses an educational game approach is Writing Pal (W-Pal), which is designed to help students across multiple phases of the writing process . Crystal Island is a narrative-centered learning environment in biology, where students attempt to discover the identity and source of an infectious disease on a remote island. The student (player) is involved in a scenario of meeting a patient and attempts to perform a diagnosis. The study of educational impact using a game-based system by Lester et al. BIB004 found that students answered more questions correctly on the post-test than the pre-test, and this finding was statistically significant. Additionally, there was a strong relationship between learning outcomes, in-game problem solving and increased engagement BIB004 .
Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Adaptive Intelligent Web Based Educational System (AIWBES) <s> Adaptivity is a particular functionality of hypermedia that may be applied through a variety of methods in computer-based learning environments used in educational settings, such as the flexible delivery of courses through Web-based instruction. Adaptive link annotation is a specific adaptive hypermedia technology whose aim is to help users find an appropriate path in a learning and information space by adapting link presentation to the goals, knowledge, and other characteristics of an individual user. To date, empirical studies in this area are limited but generally recognised by the Adaptive Hypertext and Hypermedia research community as critically important to validate existing approaches. The purpose of this paper is two fold: to briefly report the results of an experiment to determine the effectiveness of adaptive link annotation in educational hypermedia (fully described in Brusilovsky & Eklund, 1998), and to situate the study within a summarised survey of the literature of adaptive educational hypermedia systems and the empirical studies that have been undertaken to evaluate them. <s> BIB001 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Adaptive Intelligent Web Based Educational System (AIWBES) <s> Abstract Hypermedia applications generate comprehension and orientation problems due to their rich link structure. Adaptive hypermedia tries to alleviate these problems by ensuring that the links that are offered and the content of the information pages are adapted to each individual user. This is done by maintaining a user model. Most adaptive hypermedia systems are aimed at one specific application. They provide an engine for maintaining the user model and for adapting content and link structure. They use a fixed screen layout that may include windows (HTML frames) for an annotated table of contents, an overview of known or missing knowledge, etc. Such systems are typically closed and difficult to reuse for very different applications. We present AHA, an open Adaptive Hypermedia Architecture that is suitable for many different applications. This paper concentrates on the adaptive hypermedia engine, which maintains the user model and which filters content pages and link structures accordingly. The engine... <s> BIB002 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Adaptive Intelligent Web Based Educational System (AIWBES) <s> This paper depicts a set of integrated tools to build an intelligent Web-based education system. Our purpose is to create a Web learning environment that can be tailored to the Learners' needs. The Web learning environment is composed of Authoring Tool, Evaluation System, Interactive Voice System and a Virtual Laboratory for programming in Java. All tools use Web Services and have the characteristics of powerful adaptability for the management, authoring, delivery and monitoring of learning content. Part of the decision-making inside the intelligent Web-based education system was made with a multi-agent system. <s> BIB003 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Adaptive Intelligent Web Based Educational System (AIWBES) <s> © Cambridge University Press 2012. 
Adaptive hypermedia (AH) is an alternative to the traditional, one-size-fits-all approach in the development of hypermedia systems. AH systems build a model of the goals, preferences, and knowledge of each individual user; this model is used throughout the interaction with the user to adapt to the needs of that particular user (Brusilovsky, 1996b). For example, a student in an adaptive educational hypermedia system will be given a presentation that is adapted specifically to his or her knowledge of the subject (De Bra & Calvi, 1998; Hothi, Hall, & Sly, 2000) as well as a suggested set of the most relevant links to proceed further (Brusilovsky, Eklund, & Schwarz, 1998; Kavcic, 2004). An adaptive electronic encyclopedia will personalize the content of an article to augment the user's existing knowledge and interests (Bontcheva & Wilks, 2005; Milosavljevic, 1997). A museum guide will adapt the presentation about every visited object to the user's individual path through the museum (Oberlander et al., 1998; Stock et al., 2007). Adaptive hypermedia belongs to the class of user-adaptive systems (Schneider-Hufschmidt, Kuhme, & Malinowski, 1993). A distinctive feature of an adaptive system is an explicit user model that represents user knowledge, goals, and interests, as well as other features that enable the system to adapt to different users with their own specific set of goals. An adaptive system collects data for the user model from various sources that can include implicitly observing user interaction and explicitly requesting direct input from the user. The user model is applied to provide an adaptation effect, that is, tailor interaction to different users in the same context. In different kinds of adaptive systems, adaptation effects could vary greatly. In AH systems, it is limited to three major adaptation technologies: adaptive content selection, adaptive navigation support, and adaptive presentation. The first of these three technologies comes from the fields of adaptive information retrieval (IR) and intelligent tutoring systems (ITS). When the user searches for information, the system adaptively selects and prioritizes the most relevant items (Brajnik, Guida, & Tasso, 1987; Brusilovsky, 1992b). <s> BIB004
Adaptive Intelligent Web Based Educational Systems (AIWBES), or adaptive hypermedia, provide an alternative to the traditional, just-put-it-on-the-web approach in the development of web-based educational courseware. An AIWBES adapts to the goals, preferences, and knowledge of individual students during their interaction with the system BIB003 . The area of ITSs inspired early research on adaptive educational hypermedia, which combines ITSs and educational hypermedia. During the development of the early ITSs, the concern was to support students in solving problems and to overcome the lack of learning material. The required knowledge was acquired by developers attending lectures or reading textbooks. As computers became more powerful, ITS researchers integrated ITS features with the learning material. Many research groups have found that combining hypermedia systems with an ITS can lead to more functionality than traditional static educational hypermedia . A number of systems have been developed under the category of AIWBES. ELM-ART (ELM Adaptive Remote Tutor) is a WWW-based ITS to support learning programming in Lisp. It has been used in distance learning not only to provide course material from the textbook, but also to provide problem-solving support. Adaptive navigation through the material was implemented to support learning by individual students. The system classifies the content of a page as either ready to be learned or not ready to be learned because some prerequisite knowledge has not yet been learned . In addition, the links are sorted by relevance to the current student state, so the students know which situations are most similar or which web pages are most relevant. When the student enters a page which contains a chunk of prerequisite knowledge still to be learned, the system alerts the student about the prerequisite and suggests additional links to relevant textbook and manual pages. If the student struggles to understand some content or to solve a problem, he or she can use the help button . Empirical studies have shown that hypermedia systems in conjunction with tutoring tools can be helpful for self-learners BIB001 . Other adaptive intelligent hypermedia systems that have been used by hundreds of students include AHA! BIB002 and InterBook , which have been shown to help students learn faster and better BIB004 .
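The adaptive navigation support described above rests on a simple overlay idea: compare the concepts a page requires with the concepts the student model records as known. The sketch below illustrates this prerequisite-based page classification in the spirit of ELM-ART; the concept names and the flat prerequisite map are illustrative assumptions rather than ELM-ART's actual knowledge representation.

```python
# Sketch of prerequisite-based adaptive link annotation in the spirit of
# ELM-ART. The overlay student model (a set of learned concepts) and the
# prerequisite map below are illustrative assumptions, not the system's
# actual knowledge representation.
from typing import Dict, Set

prerequisites: Dict[str, Set[str]] = {
    "expressions": set(),
    "functions": {"expressions"},
    "recursion": {"functions"},
}


def annotate(page: str, learned: Set[str]) -> str:
    """Classify a page as learned, ready to be learned, or not ready."""
    if page in learned:
        return "learned"
    if prerequisites.get(page, set()) <= learned:
        return "ready"       # all prerequisites are already known
    return "not_ready"       # at least one prerequisite is still missing


if __name__ == "__main__":
    student_knows = {"expressions"}
    for page in prerequisites:
        print(page, "->", annotate(page, student_knows))
```

Adaptive link sorting can then be obtained by ordering links whose target pages are annotated as ready ahead of those annotated as not ready.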
Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Collaborative Learning <s> This study quantitatively synthesized the empirical research on the effects of social context (i.e., small group versus individual learning) when students learn using computer technology. In total, 486 independent findings were extracted from 122 studies involving 11,317 learners. The results indicate that, on average, small group learning had significantly more positive effects than individual learning on student individual achievement (mean ES = +0.15), group task performance (mean ES = +0.31), and several process and affective outcomes. However, findings on both individual achievement and group task performance were significantly heterogeneous. Through weighted least squares univariate and multiple regression analyses, we found that variability in each of the two cognitive outcomes could be accounted for by a few technology, task, grouping, and learner characteristics in the studies. <s> BIB001 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Collaborative Learning <s> In this paper we investigate the role of reflection in simulation based learning by manipulating two independent factors that each separately lead to significant learning effects, namely whether students worked alone or in pairs, and what type of support students were provided with. Our finding is that in our simulation based learning task, students learned significantly more when they worked in pairs than when they worked alone. Furthermore, dynamic support implemented with tutorial dialogue agents lead to significantly more learning than no support, while static support was not statistically distinguishable from either of the other two conditions. The largest effect size in comparison with the control condition of individuals working alone with no support was Pairs+Dynamic support, with an effect size of 1.24 standard deviations. <s> BIB002 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Collaborative Learning <s> Intelligent Tutoring Systems (ITS) is the interdisciplinary field that investigates how to devise educational systems that provide instruction tailored to the needs of individual learners, as many good teachers do. Research in this field has successfully delivered techniques and systems that provide adaptive support for student problem solving in a variety of domains. There are, however, other educational activities that can benefit from individualized computer-based support, such as studying examples, exploring interactive simulations and playing educational games. Providing individualized support for these activities poses unique challenges, because it requires an ITS that can model and adapt to student behaviors, skills and mental states often not as structured and welldefined as those involved in traditional problem solving. This paper presents a variety of projects that illustrate some of these challenges, our proposed solutions, and future opportunities. <s> BIB003 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Collaborative Learning <s> Tutorial Dialog Systems that employ Conversational Agents (CAs) to deliver instructional content to learners in one-on-one tutoring settings have been shown to be effective in multiple learning domains by multiple research groups. 
Our work focuses on extending this successful learning technology to collaborative learning settings involving two or more learners interacting with one or more agents. Experience from extending existing techniques for developing conversational agents into multiple-learner settings highlights two underlying assumptions from the one-learner setting that do not generalize well to the multiuser setting, and thus cause difficulties. These assumptions include what we refer to as the near-even participation assumption and the known addressee assumption. A new software architecture called Basilica that allows us to address and overcome these limitations is a major contribution of this article. The Basilica architecture adopts an object-oriented approach to represent agents as a network composed of what we refer to as behavioral components because they enable the agents to engage in rich conversational behaviors. Additionally, we describe three specific conversational agents built using Basilica in order to illustrate the desirable properties of this new architecture. <s> BIB004 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Collaborative Learning <s> An emerging trend in classrooms is the use of collaborative learning environments that promote lively exchanges between learners in order to facilitate learning. This paper explored the possibility of using discourse features to predict student and group performance during collaborative learning interactions. We investigated the linguistic patterns of group chats, within an online collaborative learning exercise, on five discourse dimensions using an automated linguistic facility, Coh-Metrix. The results indicated that students who engaged in deeper cohesive integration and generated more complicated syntactic structures performed significantly better. The overall group level results indicated collaborative groups who engaged in deeper cohesive and expository style interactions performed significantly better on posttests. Although students do not directly express knowledge construction and cognitive processes, our results indicate that these states can be monitored by analyzing language and discourse. Implications are discussed regarding computer supported collaborative learning and ITS's to facilitate productive communication in collaborative learning environments. <s> BIB005
Current educational research suggests that collaborative or group-based learning increases the learning performance of a group as well as individual learning outcomes BIB005 BIB001 . In a collaborative learning environment, students learn in groups via interactions with each other: asking questions, explaining and justifying their opinions and reasoning, and presenting their knowledge . A number of researchers have pointed out the importance of a group learning environment and how effective it is in terms of learning gain . Recently, there has been a rise in interest in implementing collaborative learning in tutoring systems to exploit the benefits of interactions among students during problem solving. Kumar and Rose, in 2011, built the intelligent interactive tutoring systems CycleTalk and WrenchTalk, which support collaborative learning environments in the engineering domain BIB003 . Teams of two or more students work on the same task when solving a problem. They conducted a number of experiments to investigate the effectiveness of collaborative learning and how to engage students more deeply in instructional conversations with the tutors, using teaching techniques such as Attention Grabbing, Ask when Ready and Social Interaction Strategies. It was found that students who worked in pairs learned better than students who worked individually BIB004 BIB002 . Another tutoring system that supports collaborative learning, for teaching mathematical fractions, is described in .
Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Data Mining in ITSs <s> This paper describes the integration of robot path-planning and spatial task modeling into a software system that teaches the operation of a robot manipulator deployed on International Space Station (ISS). The system addresses the complexity of the manipulator, the limited direct view of the ISS exterior and the unpredictability of lighting conditions in the workspace. Robot path planning is used not for controlling the manipulator, but for automatically checking errors of a student learning to operate the manipulator and for automatically producing illustrations of good and bad motions in training. <s> BIB001 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Data Mining in ITSs <s> It has been found in recent years that many students who use intelligent tutoring systems game the system, attempting to succeed in the educational environment by exploiting properties of the system rather than by learning the material and trying to use that knowledge to answer correctly. In this paper, we introduce a system which gives a gaming student supplementary exercises focused on exactly the material the student bypassed by gaming, and which also expresses negative emotion to gaming students through an animated agent. Students using this system engage in less gaming, and students who receive many supplemental exercises have considerably better learning than is associated with gaming in the control condition or prior studies. <s> BIB002 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Data Mining in ITSs <s> Domain experts should provide relevant domain knowledge to an Intelligent Tutoring System (ITS) so that it can guide a learner during problem-solving learning activities. However, for many ill-defined domains, the domain knowledge is hard to define explicitly. In previous works, we showed how sequential pattern mining can be used to extract a partial problem space from logged user interactions, and how it can support tutoring services during problem-solving exercises. This article describes an extension of this approach to extract a problem space that is richer and more adapted for supporting tutoring services. We combined sequential pattern mining with (1) dimensional pattern mining (2) time intervals, (3) the automatic clustering of valued actions and (4) closed sequences mining. Some tutoring services have been implemented and an experiment has been conducted in a tutoring system. <s> BIB003 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Data Mining in ITSs <s> We present a probabilistic model of user affect designed to allow an intelligent agent to recognise multiple user emotions during the interaction with an educational computer game. Our model is based on a probabilistic framework that deals with the high level of uncertainty involved in recognizing a variety of user emotions by combining in a Dynamic Bayesian Network information on both the causes and effects of emotional reactions. The part of the framework that reasons from causes to emotions (diagnostic model) implements a theoretical model of affect, the OCC model, which accounts for how emotions are caused by one's appraisal of the current context in terms of one's goals and preferences. 
The advantage of using the OCC model is that it provides an affective agent with explicit information not only on which emotions a user feels but also why, thus increasing the agent's capability to effectively respond to the users' emotions. The challenge is that building the model requires having mechanisms to assess user goals and how the environment fits them, a form of plan recognition. In this paper, we illustrate how we built the predictive part of the affective model by combining general theories with empirical studies to adapt the theories to our target application domain. We then present results on the model's accuracy, showing that the model achieves good accuracy on several of the target emotions. We also discuss the model's limitations, to open the ground for the next stage of the work, i.e., complementing the model with diagnostic information. <s> BIB004 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Data Mining in ITSs <s> Data mining methods have in recent years enabled the development of more sophisticated student models which represent and detect a broader range of student behaviors than was previously possible. This chapter summarizes key data mining methods that have supported student modeling efforts, discussing also the specific constructs that have been modeled with the use of educational data mining. We also discuss the relative advantages of educational data mining compared to knowledge engineering, and key upcoming directions that are needed for educational data mining research to reach its full potential. <s> BIB005 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Data Mining in ITSs <s> Data Mining: Practical Machine Learning Tools and Techniques offers a thorough grounding in machine learning concepts as well as practical advice on applying machine learning tools and techniques in real-world data mining situations. This highly anticipated third edition of the most acclaimed work on data mining and machine learning will teach you everything you need to know about preparing inputs, interpreting outputs, evaluating results, and the algorithmic methods at the heart of successful data mining. Thorough updates reflect the technical changes and modernizations that have taken place in the field since the last edition, including new material on Data Transformations, Ensemble Learning, Massive Data Sets, Multi-instance Learning, plus a new version of the popular Weka machine learning software developed by the authors. Witten, Frank, and Hall include both tried-and-true techniques of today as well as methods at the leading edge of contemporary research. *Provides a thorough grounding in machine learning concepts as well as practical advice on applying the tools and techniques to your data mining projects *Offers concrete tips and techniques for performance improvement that work by transforming the input or output in machine learning methods *Includes downloadable Weka software toolkit, a collection of machine learning algorithms for data mining tasks-in an updated, interactive interface. Algorithms in toolkit cover: data pre-processing, classification, regression, clustering, association rules, visualization <s> BIB006 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Data Mining in ITSs <s> In recent years, the usefulness of affect detection for educational software has become clear. 
Accurate detection of student affect can support a wide range of interventions with the potential to improve student affect, increase engagement, and improve learning. In addition, accurate detection of student affect could play an essential role in research attempting to understand the root causes and impacts of different forms of affect. However, current approaches to affect detection have largely relied upon sensor systems, which are expensive and typically not physically robust to classroom conditions, reducing their potential real-world impact. Work towards sensor-free affect detection has produced detectors that are better than chance, but not substantially better— especially when subject to stringent cross-validation processes. In this paper we present models which can detect student engaged concentration, confusion, frustration, and boredom solely from students' interactions within a Cognitive Tutor for Algebra. These detectors are designed to operate solely on the information available through students’ semantic actions within the interface, making these detectors applicable both for driving interventions and for labeling existing log files in the PSLC DataShop, facilitating future discovery with models analyses at scale. <s> BIB007
Data mining, or knowledge discovery in databases as it is alternatively called, is the process of analyzing large amounts of data in order to extract and discover useful information BIB006 . Data mining has been used in the field of ITSs for many different purposes. For instance, it has been used to identify learners who game the system. Gaming the system is an off-task behavior defined as "attempting to succeed in the environment by exploiting properties of the system rather than by learning the material and trying to use that knowledge to answer correctly" BIB002 . Identifying situations where the system has been gamed has been a focus for many researchers in recent years. Additional discussions on mining student datasets can be found in . Another use of data mining in ITSs is to detect student affect. Detecting students' affective states can potentially increase engagement and learning outcomes, as stated by Baker et al. . For example, classification methods have been used to build automated detectors that predict student states, including boredom, engaged concentration, frustration, and confusion BIB007 . Similarly, classification methods have been used to detect affect such as joy and distress BIB004 . Another use of data mining is the automatic discovery of a partial problem space from logged user interactions, rather than relying on traditional techniques in which domain experts have to provide the knowledge. As an example, mining methods including sequential pattern mining and association rule discovery are used in RomanTutor BIB001 to extract a partial problem space and support tutoring services BIB003 . Interested readers are referred to BIB005 for more details.
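As a concrete illustration of the classification approach to sensor-free affect detection, the sketch below trains a detector on features computed from interaction logs. The feature set and the synthetic labels are illustrative assumptions; the detectors cited above are trained on features engineered from real tutor logs together with human-coded affect observations.

```python
# Minimal sketch of a sensor-free affect detector trained on interaction-log
# features. The features (time per step, hint requests, recent errors) and the
# synthetic labels below are illustrative assumptions; published detectors use
# features engineered from real tutor logs and human-coded affect labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Each row is one observation window: [seconds_per_step, hints_requested, errors_in_window]
X = rng.random((300, 3)) * np.array([60.0, 5.0, 5.0])
# Labels: 0 = engaged concentration, 1 = boredom, 2 = confusion (synthetic)
y = rng.integers(0, 3, size=300)

detector = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(detector, X, y, cv=5)
print("cross-validated accuracy:", round(float(scores.mean()), 3))
```

In practice, cross-validation for such detectors is performed at the student level rather than at the observation level, so that reported accuracy reflects generalization to new students.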
Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Authoring Tools <s> The current generation of intelligent tutoring systems (ITS) have successfully produced learning gains without the use of natural language technology, but the goal for the next generation is to add natural language dialogue capabilities. Since it is already a tremendous effort to add domain and pedagogical knowledge to the current generation of ITSs, adding natural language dialogue capabilities can further increase the development time by requiring that language knowledge also be engineered. Rather than having natural language knowledge become an additional engineering burden, we seek to build tools that will allow us to attack the problem of pedagogical and language knowledge engineering in tandem. In this paper, we describe the authoring tool suite we are building to address this problem. We have found that our prototype tools do facilitate the rapid development of natural language dialogue interfaces for ITSs. With these tools we were able to build knowledge sources for our dialogue interface to an ITS in only 3 man months. The resulting dialogue system was able to hold natural language dialogues with students on 50 physics concepts and students showed significant learning gains over seeing only monologue text hints [8]. <s> BIB001 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Authoring Tools <s> REDEEM allows teachers and instructors with little technological knowledge to create simple Intelligent Tutoring Systems. Unlike the other authoring tools described in this book, REDEEM does not support the construction of domain material. Instead, authors import existing computer-based material as a domain model and then use the REDEEM tools to overlay their teaching expertise. The REDEEM shell uses this knowledge, together with its own default teaching knowledge, to deliver the courseware adaptively to meet the needs of different learners. In this chapter, we first explain how the REDEEM tools capture this knowledge and how the REDEEM Shell uses it. Then, we describe four different studies with REDEEM aimed at answering questions concerning the effectiveness of this approach to ITS development. We conclude by reflecting on the experiences of the last six years and the lessons that we have learned by using REDEEM in a variety of real world contexts. <s> BIB002 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Authoring Tools <s> This chapter presents an authoring model and a system for curriculum development in Intelligent Tutoring Systems (ITSs). We first present an approach for modeling knowledge of the subject matter (the curriculum) to be taught by a large-scale ITS, and we show how it serves as the framework of the authoring process. This approach, called CREAM (Curriculum REspresentation and Acquisition Model), allows creation and organization of the curriculum according to three models concerning respectively the domain, the pedagogy and the didactic aspects. The domain is supported by the capability model (CREAM-C) which represents and organizes domain knowledge through logical links. The pedagogical view allows the definition and organization of teaching objectives by modeling skills required to achieve them and evaluating the impact of this achievement on the domain knowledge (CREAM-O and pedagogical model). 
The didactic component is based on a model of resources which defines and specifies different activities that are necessary to support teaching (CREAM-R). The construction of each part of CREAM is supported by specific authoring tools and methods. The overall authoring system, called CREAM-Tools, allows Instructional Designers (IDs) to produce a complete ITS curriculum based on the CREAM approach. Although this article is limited to curriculum development, we give some guidelines on how the resulting system could support the construction of other ITS components such as the planner and the student model. <s> BIB003 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Authoring Tools <s> Model-tracing tutors have consistently been among the most effective class of intelligent learning environments. Across a number of empirical studies, these tutors have shown students can learn the tutored domain better or in a shorter amount of time than traditionally taught students (Anderson et al., 1990). Unfortunately, the creation of these tutors, particularly the production system component, is a time-intensive task, requiring knowledge that lies outside the tutored domain. This outside knowledge—knowledge of programming and cognitive science—prohibits domain experts from being able to construct effective, model-tracing tutors for their domain of expertise. This paper reports on a system, referred to as Demonstr8 (and pronounced "demonstrate"), which attempts to reduce the outside knowledge required to construct a model-tracing tutor, within the domain of arithmetic. By utilizing programming by demonstration techniques (Cypher, 1993; Myers et al., 1993) coupled with a mechanism for abstracting the underlying productions (the procedures to be used by the tutor and learned by the student), the author can interact with the interface the student will use, and the productions will be inferred by the system. In such a way, a domain expert can create in a short time a model-tracing tutor with the full capabilities implied by such a tutor—a production system that monitors the student's progress at each step in solving the problem and gives feedback when requested or necessary, in either an immediate or delayed manner. Model-tracing tutors have proven extremely effective in the classroom, with the most promising efforts demonstrating more than a standard deviation's improvement over traditional instruction (Anderson et al., 1990; Koedinger & Anderson, 1993a). They are referred to as model- tracing tutors because they contain an expert model which is used to trace the student's responses to ensure that the student's responses are part of an acceptable solution path. The creation of such tutors, particularly the expert models that underlie them, is a time-intensive task, requiring much knowledge outside of the domain being tutored. Anderson (1992) estimated <s> BIB004 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Authoring Tools <s> User modelling shells and learner modelling servers have been proposed in order to provide reusable user/student model information over different domains, common inference mechanisms, and mechanisms to handle consistency of beliefs from different sources. 
Open and inspectable student models have been investigated by several authors as a means to promote student reflection, knowledge awareness, collaborative assessment, self-assessment, interactive diagnosis, to arrange groups of students, and to support the use of students' models by the teacher. ::: ::: This paper presents SModel, a Bayesian student modelling server used in distributed multi-agent environments. SModel server includes a student model database and a Bayesian student modelling component. SModel provides several services to a group of agents in a CORBA platform. Users can use ViSMod, a Bayesian student modelling visualization tool, and SMV, a student modelling database viewer, to visualize and inspect distributed Bayesian student models maintained by SModel server. SModel has been tested in a multi-agent tutoring system for teaching basic Java programming. In addition, SModel server has been used to maintain and share student models in a study focussed on exploring the existence of student reflection and analysing student model accuracy using inspectable Bayesian student models. <s> BIB005 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Authoring Tools <s> Intelligent tutoring systems (ITSs) have potential for making computer-based instruction moreadaptive and interactive, but development in the area of manufacturing engineering has been rare.However, recent developments in the area of ITS authoring tools may make this technology moreaccessible. The objectives of this study were to: 1) evaluate the feasibility of faculty coursedevelopment using an ITS authoring tool; and 2) evaluate the instructional effectiveness of thedeveloped courseware. An ITS authoring tool called XAIDA was used to develop a tutorial on howto use a computer numerical control (CNC) machine. This paper summarizes the results of apreliminary evaluation conducted with 25 undergraduate manufacturing engineering students. Theresults suggest that instructional development in XAIDA is feasible and quick, and that studentslearned from and enjoyed the tutorial <s> BIB006 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Authoring Tools <s> Special classes of asynchronous e-learning systems are the intelligent tutoring systems which represent an advanced learning and teaching environment adaptable to individual student's characteristics. Authoring shells have an environment that enables development of the intelligent tutoring systems. In this paper we present, in entirety, for the first time, our approach to research, development and implementation related to intelligent tutoring systems and ITS authoring shells. Our research relies on the traditional intelligent tutoring system, the consideration that teaching is control of learning and principles of good human tutoring in order to develop the Tutor-Expert System model for building intelligent tutoring systems in freely chosen domain knowledge. In this way we can wrap up an ongoing process that has lasted for the previous fifteen years. Prototype tests with the implemented systems have been carried out with students from a primary education to an academic level. Results of those tests are advantageous, according to surveys, and the implemented and deployed software satisfies functionalities and actors' demands. 
<s> BIB007 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Authoring Tools <s> Over the last decade, the Intelligent Computer Tutoring Group (ICTG) has implemented many successful constraint-based Intelligent Tutoring Systems (ITSs) in a variety of instructional domains. Our tutors have proven their effectiveness not only in controlled lab studies but also in real classrooms, and some of them have been commercialized. Although constraint-based tutors seem easier to develop in comparison to other existing ITS methodologies, they still require substantial expertise in Artificial Intelligence (AI) and programming. Our initial approach to making the development easier was WETAS (Web-Enabled Tutor Authoring System), an authoring shell that provided all the necessary functionality for ITSs but still required domain models to be developed manually. This paper presents ASPIRE (Authoring Software Platform for Intelligent Resources in Education), a complete authoring and deployment environment for constraint-based ITSs. ASPIRE consists of the authoring server (ASPIRE-Author), which enables domain experts to easily develop new constraint-based tutors, and a tutoring server (ASPIRE-Tutor), which deploys the developed systems. ASPIRE-Author supports the authoring of the domain model, in which the author is required to provide a high-level description of the domain, as well as examples of problems and their solutions. From this information, ASPIRE generates the domain model automatically. We discuss the authoring process and illustrate it using the development process of CIT, an ITS that teaches capital investment decision making. We also discuss a preliminary study of ASPIRE, and some of the ITSs being developed in it. <s> BIB008 </s> Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments <s> Authoring Tools <s> This chapter addresses the challenge of building or authoring an Intelligent Tutoring System (ITS), along with the problems that have arisen and been dealt with, and the solutions that have been tested. We begin by clarifying what building an ITS entails, and then position today’s systems in the overall historical context of ITS research. The chapter concludes with a series of open questions and an introduction to the other chapters in this part of the book. <s> BIB009
ITS research teams have been interested in simplifying the building process for ITSs by making the authoring of ITSs more accessible and affordable to designers and teachers. Authoring tools in the ITS domain can be categorized along different dimensions, such as tools that require programming skills versus those that do not, pedagogy-oriented versus performance-oriented tools, or paradigm-specific tools such as model-tracing and constraint-based tutors BIB009 . SModel BIB005 and Tex-Sys BIB007 fall into the category of tools that require programming skills. SModel is a Bayesian student modeling component which provides services to a group of agents on a CORBA platform. Tex-Sys is another example in the same category; it provides a generic module for designing ITS components (domain model and student model) for any given domain. Examples of pedagogy-oriented authoring tools are REDEEM BIB002 and CREAM-Tools BIB003 . Pedagogy-oriented tools concentrate on how to deliver and sequence a package of content BIB009 . REDEEM reuses existing domain material and provides authoring tactics for teaching this material, such as the sequencing of content and learning activities. Similarly, CREAM-Tools focuses on the operations required to develop curriculum content, taking into account aspects of the domain as well as pedagogical and didactic requirements. Performance-oriented tools concentrate on the learner's performance, providing a rich environment in which learners can practice skills and receive system responses BIB009 . Examples of authoring tools in this category are Demonstr8 BIB004 , XAIDA BIB006 and Knowledge Construction Dialog (KCD) BIB001 . In recent years, there has been great interest in building authoring tools that are specific to certain paradigms and do not require programming skills, in order to allow sharing of components across ITSs and to reduce development costs BIB009 . Cognitive Tutor Authoring Tools (CTAT) provides a set of authoring tools specific to model-tracing tutors and example-tracing tutors BIB009 . CTAT provides step-by-step guidance for problem-solving activities as well as adaptive problem selection based on a Bayesian student model. The Authoring Software Platform for Intelligent Resources in Education (ASPIRE) is also a paradigm-specific authoring tool for constraint-based models BIB008 . ASPIRE supports authoring of the domain model, enabling subject experts to easily develop constraint-based tutors. Another authoring tool that falls into this category is the AutoTutor Script Authoring Tool (ASAT) . ASAT facilitates developing components of AutoTutor and integrating conversations into learning systems. Finally, the Generalized Intelligent Framework for Tutoring (GIFT) is a framework and set of tools for developing intelligent and adaptive tutoring systems . GIFT supports a variety of services that include domain knowledge representation, performance assessment, course flow, a pedagogical model, and a student model.
Fault diagnosis of electronic systems using intelligent techniques: a review <s> IV. TRADITIONAL APPROACHES <s> Abstract This paper describes the design and implementation of an expert system for personal computer repair and maintenance (ESPCRM). Based on the Personal Consultant Plus 4.0 expert system shell, ESPCRM provides consultation for the repair and maintenance of the whole series of IBM/IBM compatible PCs from the XT to 486-based machines. Troubleshooting a personal computer (PC) is a knowledge-intensive task. Depending on the experience of the technician, a simple problem could take hours or even days to solve. An expert system offers a viable solution to the problem. Presently, the knowledge base of the expert system developed consists of some 94 rules, 68 parameters and 40 graphic pages. The acquisition of knowledge is conducted through interviews with technicians in the PC repair workshop catering for some 1200 PCs of various makes, models and configurations within the Nanyang Technological University (NTU). <s> BIB001 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> IV. TRADITIONAL APPROACHES <s> Abstract A number of knowledge-based systems in the electronic engineering field have been developed in the past decade. These include those that use knowledge-based techniques to diagnose instrumentation, determine system configurations and to aid circuit and system design. This paper reviews the literature on knowledge-based systems for electronic engineering applications in these areas and reports progress made in the development of a basis for realising electronic systems. <s> BIB002 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> IV. TRADITIONAL APPROACHES <s> Abstract The aim of this work is to develop a rule-based expert system to aid an operator in the fault diagnosis of the electronics of forge press equipment. It is a menu-driven package, developed in Turbo PROLOG on an IBM PC., to help the operator fix faults up to replaceable module level. The system has been broadly categorised into eight sub-systems, and the rules, based on fault cause relations, have been developed for each of the sub-systems. This modular development reduces the access time, and also facilitates the handling of the knowledge base. <s> BIB003 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> IV. TRADITIONAL APPROACHES <s> This paper describes a tool to diagnose the cause of failure of a HP computer server. We do this by analyzing the dump of Processor Internal Memory (PIM) and maximizing the leverage of expert learning from one hardware failure situation to another. The tool is a rule-based expert system, with some nested rules which translate to decision trees. The rules were implemented using a metalanguage which was customized for the hardware failure analysis problem domain. Pimtool has been deployed to 25 users as of December 1996. We plan to expand usage to over 400 users by end of 1997. Using Pimtool, we expect to save over 15 minutes in Mean-Time-to-Repair (MTIR) per call. We have recognized that knowledge management will be a key issue in the future and are developing tools and strategies to address it. <s> BIB004 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> IV. TRADITIONAL APPROACHES <s> Artificial Intelligence: Structures and Strategies for Complex Problem Solving by George F. 
Luger 6th edition, Addison Wesley, 2008 The book serves as a good introductory textbook for artificial intelligence, particularly for undergraduate level. It covers major AI topics and makes good connection between different areas of artificial intelligence. Along with each technique and algorithm introduced in the book, is a discussion of its complexity and application domain. There is an attached website to the book that provides auxiliary materials for some chapters, sample problems with solutions, and ideas for student projects. Besides Prolog and Lisp, java and C++ are also used to implement many of the algorithms in the book. The book is organized in five parts. The first part (chapter 1) gives an overview of AI, its history and its various application areas. The second part (chapters 2–6) concerns with knowledge representation and search algorithms. Chapter 2 introduces predicate calculus as a mathematical tool for representing AI problems. The state space search as well as un-informed and heuristic search methods is introduced in chapters 3 and 4. Chapter 5 discusses the issue of uncertainty in problem solving and covers the foundation of stochastic methodology and its application. In chapter 6 the implementation of search algorithms is shown in production system and blackboard architectures. Part 3 (chapters 7–9) discusses knowledge representation and different methods of problem solving, including strong, weak and distributed problem solving. Chapter 7 begins with reviewing the history of evolution of AI representation schemes, including semantic networks, frames, scripts and conceptual graphs. This chapter ends with a brief introduction of Agent problem solving. Chapter 8 presents the production model and rule-based expert systems as well as case-based and model-based reasoning. The methods of dealing with various aspects of uncertainty are discussed in chapter 9. These methods include Dempster-Shafer theory of evidence, Bayesian and Belief networks, fuzzy logics and Markov models. Part 4 is devoted to machine learning. Chapter 10 describes algorithms for symbol-based learning, including induction, concept learning, vision-space search and ID3. The neural network methods for learning, such as back propagation, competitive, Associative memories and Hebbian Coincidence learning were presented in chapter 11. Genetic algorithms and evolutionary learning approaches are introduced in chapter 12. Chapter 13 introduces stochastic and dynamic models of learning along with Hidden Markov Models, Dynamic Baysian networks and Markov Decision Processes. Part 5 (chapters 14 and 15) examines two main application of AI: automated reasoning and natural language understanding. Chapter 14 begins with an introduction to weak methods in problem solving and continues with presenting resolution theorem proving. Chapter 15 deals with the complex issue of natural language understanding by discussing main methods of syntax and semantic analysis of natural language corpus. The chapter ends with examples of natural language application in Database query generation, text summarization and question answering systems. Finally, chapter 16 is a summary of the materials covered in the book as well current AI's limitations and future directions. One criticism about the book would be that the materials are not covered in enough depth. Because of the space limitation, many important AI algorithms and techniques are discussed briefly without providing enough details. 
As a result, some chapters (e.g., 8, 9, 11, and 13) of the book should be supported by complementary materials to make it understandable for undergraduate students and motivating for graduate students. Another issue is with the structure of the book. The order of presenting chapters introduces sequentially different challenges and techniques in problem solving. Consequently, some topics such as uncertainty and logic are not introduced separately and are distributed in different chapters of the book related to different parts. Although interesting, this makes the book hard to follow. In summary, the book gives a great insight to the readers that want to familiar themselves with artificial intelligence. It covers a broad range of topics in AI problem solving and its practical application and is a good reference for an undergraduate level introductory AI class. Elham S. Khorasani, Department of Computer Science Southern Illinois University Carbondale, IL 62901, USA <s> BIB005
A. Rule-Based Systems 1) Approach: Rule-based diagnostic systems represent the experience of skilled diagnosticians as rules that generally take the form "IF symptom(s) THEN fault(s)." Representing the knowledge for a particular problem domain may require hundreds, or even thousands, of rules. Rule-based inference involves taking information about the problem domain and invoking the rules that match this information. This generates new data, which is added to the problem information. The process is repeated iteratively until a solution to the problem is found BIB005 , [62] . Most intelligent diagnostic programs implemented in the 1970s and early 1980s were of this form. 2) Applications: A survey of a selection of applications in electronic engineering is given in BIB002 . Included are applications in the diagnosis of telephone networks, disk drives, telephone switching equipment, and avionics control systems. More recently, rule-based systems have continued to be used. The Expert System for PC Repair and Maintenance (ESPCRM) BIB001 diagnoses PC systems to the replaceable-module level. In BIB003 , a program for diagnosing electronic forge press faults is reported. In , a complex expert system employing multiple specialized rulebases for diagnosing complex PC boards is described. Finally, in BIB004 , a diagnostic tool for server computers, which uses a rulebase to analyze a dump of the processor's internal memory, is reported. 3) Issues: The primary advantage of this approach is its intuitive simplicity. Its disadvantages are the following. a) The difficulty of acquiring the knowledge to build the rulebase, known as the knowledge acquisition bottleneck. b) Its limited ability to deal with novel faults. c) System dependence, that is, a new rulebase must be generated for each new system type.
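To make the match-and-fire cycle described above concrete, the following is a minimal sketch of forward-chaining rule-based diagnosis; the symptoms, rules, and fault names are invented for illustration and are not drawn from any of the cited systems.

```python
# A minimal forward-chaining rule-based diagnosis loop. Rules take the form
# "IF symptom(s) THEN conclusion(s)"; inference repeatedly fires the rules
# whose conditions are satisfied by the known facts until nothing new can be
# derived. All symptoms, rules, and fault names are illustrative.

RULES = [
    ({"no_power_led", "fan_not_spinning"}, {"suspect_power_supply"}),
    ({"suspect_power_supply", "fuse_blown"}, {"fault_power_supply_unit"}),
    ({"power_ok", "no_video"}, {"suspect_video_card"}),
    ({"suspect_video_card", "reseating_fails"}, {"fault_video_card"}),
]


def diagnose(observed_symptoms):
    facts = set(observed_symptoms)
    changed = True
    while changed:                         # iterate to a fixed point
        changed = False
        for conditions, conclusions in RULES:
            if conditions <= facts and not conclusions <= facts:
                facts |= conclusions       # fire the rule and add new facts
                changed = True
    return {f for f in facts if f.startswith("fault_")}


if __name__ == "__main__":
    print(diagnose({"no_power_led", "fan_not_spinning", "fuse_blown"}))
    # -> {'fault_power_supply_unit'}
```

A real diagnostic rulebase uses the same loop, only with far more rules and with conflict resolution and uncertainty handling added.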
Fault diagnosis of electronic systems using intelligent techniques: a review <s> 1) Approach: <s> Abstract We describe a system that reasons from first principles, i.e., using knowledge of structure and behavior. The system has been implemented and tested on several examples in the domain of troubleshooting digital electronic circuits. We give an example of the system in operation, illustrating that this approach provides several advantages, including a significant degree of device independence, the ability to constrain the hypotheses it considers at the outset, yet deal with a progressively wider range of problems, and the ability to deal with situations that are novel in the sense that their outward manifestations may not have been encountered previously. As background we review our basic approach to describing structure and behavior, then explore some of the technologies used previously in troubleshooting. Difficulties encountered there lead us to a number of new contributions, four of which make up the central focus of this paper. • — We describe a technique we call constraint suspension that provides a powerful tool for troubleshooting. • — We point out the importance of making explicit the assumptions underlying reasoning and describe a technique that helps enumerate assumptions methodically. • — The result is an overall strategy for troubleshooting based on the progressive relaxation of underlying assumptions. The system can focus its efforts initially, yet will methodically expand its focus to include a broad range of faults. • — Finally, abstracting from our examples, we find that the concept of adjacency proves to be useful in understanding why some faults are especially difficult to diagnose and why multiple representations are useful. <s> BIB001 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> 1) Approach: <s> “Hard problems” can be hard because they are computationally intractable. or because they are underconstrained. Here we describe candidate generation for digital devrces with state, a fault localization problem that is intractable when the devices are described at low levels of abstraction, and is underconstrained when described at higher levels of abstraction. Previous v;ork [l] has shown that a fault in a combinatorial digital circuit can be localized using a constraint-based representation of structure and behavior. ln this paper we (1) extend this represerltation to model a circuit with state by choosrng a time granularity and vocabulary of signals appropriate to that circuit; (2) demonstrate that the same candidate generation procedure that works for combinatorial circuits becomes indiscriminate when applied to a state circuit modeled in that extended representationL(3) show how the common technique of singlestepping can be viewed as a divide-and-conquer approach to overcoming that lack of constraint; and (4) illustrate how using structural de?ail can help to make the candidate generator discriminating once again, but only at great cost. <s> BIB002 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> 1) Approach: <s> This paper shows how order of magnitude reasoning has been successfully used for troubleshooting complex analog circuits. The originality of this approach was to be able to remove the gap between the information required to apply a general theory of diagnosis and the limited information actually available. 
The expert's ability to detect a defect by reasoning about the significant changes in behavior it induces is extensively exploited here: as a kind of reasoning that justifies the qualitative modeling, as a heuristic that defmes a strategy and as a working hypothesis that makes clear the scope of this approach. <s> BIB003 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> 1) Approach: <s> Abstract This paper presents an expert system approach to off-line generation and optimisation of fault-trees for use in on-line fault diagnosis systems, incorporating the knowledge and experience of manufacturers and users. The size of the problem is such that explicit formulation of the fault-tree is very complicated. The knowledge, however, is implicitly available in the description of faults in terms of symptom-codes, results of performed tests and repair actions. Case-based reasoning is selected for the implementation to facilitate the automatic generation, consistency checking and maintenance of the fault-tree. Different diagnosis systems for different levels of diagnosis tasks can be generated automatically from the same problem description. Special attention is given to the processing speed needed for on-line use on modern transportation systems. <s> BIB004 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> 1) Approach: <s> After preliminary work in economics and control theory, qualitative reasoning emerged in AI at the end of the 70s and beginning of the 80s, in the form of Naive Physics and Commonsense Reasoning. This way was progressively abandoned in aid of more formalised approaches to tackle modelling problems in engineering tasks. Qualitative Reasoning became a proper subfield of AI in 1984, the year when several seminal papers developed the foundations and the main concepts that remain topical today. Since then Qualitative Reasoning has considerably broadened the scope of problems addressed, investigating new tasks and new systems, such as natural systems. This paper gives a survey of the development of Qualitative Reasoning from the 80s, focusing on the present state-of-the-art of the mathematical formalisms and modelling techniques, and presents the principal domains of application through the applied research done in France. <s> BIB005 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> 1) Approach: <s> Abstract Fault diagnosis by means of diagnostic trees is of considerable interest for industrial applications. The drawbacks of this approach are mostly related to the knowledge elicitation through laborious enumeration of the tree structure and ad hoc threshold selection for symptoms definition. These problems can be alleviated if a more profound knowledge of the process is brought into play. The main idea of the paper consists of modeling the nominal and faulty states of the plant by means of interval-like component models derived from first-principles laws, e.g. the conservation law. Such a model serves to simulate the entire system under different fault conditions, in order to obtain the representative patterns of measurable process quantities, i.e. training examples. To match these patterns by diagnostic rules, multistrategy machine learning is applied. As a result, binary decision trees that relate symptoms to faults are obtained, along with the thresholds defining the symptoms. 
This technique is applied to a laboratory test process operating in the steady state, and is shown to be suitable for handling incipient single faults. The proposed learning approach is compared with two related machine learning methods. It is found that it achieves similar classification accuracy with better transparency of the resulting diagnostic system. <s> BIB006 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> 1) Approach: <s> Diagnosing analog systems, i.e. systems for which physical quantities vary over time in a continuous range is, in itself, a difficult problem. Analog electronic circuits, especially those with feedback loops, raise new difficulties that cannot be solved by using classical techniques. This paper shows how model-based diagnosis theory can be used to diagnose analog circuits. The two main tasks for making the theory applicable to real size problems will be emphasized: the modeling of the system to be diagnosed, and the building of efficient conflict recognition engines adapted to the formalism used for the modeling. This will be illustrated through the description of two systems. The first one, DEDALE, only considers failures observable in quiescent mode. It uses qualitative modeling based on relative orders of magnitude relations, for which an axiomatics is given, thus allowing a symbolic solver for checking consistency of such relations to be developed. The second one, CATS/DIANA, deals with time variations. It uses modeling based on numeric intervals, arrays of such intervals to represent transient signals, and an ATMS-like domain-independent conflict recognition engine, CATS. This engine is able to work on such data and to achieve interval propagation through constraints in such a way as to focus on the detection of all minimal nogoods. It is thus well adapted for diagnosing continuous time-varying physical systems. Experimental results of the two systems are given through various types of circuits. <s> BIB007
Historically, the fault tree has been the most commonly used method for documenting fault diagnosis procedures. A fault tree uses symptom(s) or test results as its starting point, followed by a branching decision tree consisting of actions, decisions, and finally repair recommendations. Fig. 1 shows a simple example. 2) Applications: To assist in the navigation of large diagnostic networks, a hypermedia system for point-and-click traversal of fault trees and other types of diagnostic information has been described . To simplify the generation of fault trees for complex systems, intelligent techniques have been applied to generate them automatically. In , automatic fault tree generation is performed by using a circuit description, fault simulation to produce the electrical effects caused by failures, quantification and classification of these effects to produce a test matrix, and finally production of the test tree by recursively searching and evaluating the test matrix. In BIB004 , fault trees are generated using cases extracted from a case-based reasoning system. In BIB006 , process models, fault simulation, and machine learning techniques are applied to generate fault trees. Fault trees have also been used in various real-world intelligent applications, including a system for diagnosing automotive electronic control systems and an expert system for color TV diagnosis . 3) Issues: The primary advantages of fault trees are simplicity and ease of use; little training is needed to use these diagnostic aids. However, for more complex systems, a full fault tree can be very large. In addition, a fault tree is system dependent, and even small engineering changes can mean significant updates. Lastly, a fault tree offers no indication of the knowledge used to generate the answer. One of the primary research directions over the last 15 years has been the use of models based on structure and behavior. A dual representation of both structure and behavior is used. The structure representation lists all the components and their interconnections within the modeled system. The behavior representation describes the correct behavior pattern for each component. Behavior models can use various levels of abstraction, including mathematical, qualitative, or functional BIB005 . Both representations are often created using logical formulae such as first-order predicate calculus. If the operation of the model does not agree with observations from the real system during a particular mode of operation, then a discrepancy has occurred and a diagnosis must be performed to find the defective component(s). Fig. 3 shows an example of a simple arithmetic circuit. If the inputs A through E are stimulated as shown, the outputs should measure as shown. Failure to measure these values indicates a discrepancy between the model and the real system. Unlike fault models, this type of model is a correct model. That is, it models a working device, and theoretically it can diagnose any fault type, not just the modeled ones. Many of the basic techniques were proposed during the 1980s and involve the diagnosis of simple combinational digital circuits. The same basic principles apply to other device types. The process generally consists of three steps: generating candidate diagnoses, testing the candidates against the observations, and discriminating among the remaining candidates with further measurements. 2) Applications: Hypothesis testing (HT) was one of the seminal works in structural/behavioral models for diagnostic applications BIB001 . Its application area was combinational digital circuits.
To describe structure, it used a subset of DECmmp Parallel Language, a VLSI design language. The representations used were hierarchical, and both physical and functional descriptions were employed. Constraints were used to describe behavior, and both simulation and inference rules were used to describe the relationships between component inputs and outputs. Diagnosis was performed using candidate generation and constraint suspension. In BIB002 , HT is extended to deal with time-variant digital circuits. Its behavioral representations are extended to deal with (value, time) pairs so that the behavior of a circuit can be described over a series of time periods. However, it concluded that unless complete state visibility (i.e., measurements made at the end of different time periods) is available, diagnosis generation is inherently under-constrained and indiscriminate. Single-stepping the circuit, so that observations could be taken at different time points, was proposed as a possible solution. In , the general diagnostic engine (GDE) is introduced. GDE addressed the issue of multiple faults and became the basis for much later research in the area . It introduced the use of the assumption-based truth maintenance system (ATMS) for diagnosis. Using constraint propagation and the ATMS, it identifies minimal diagnoses but considers all supersets of each minimal set to be possible diagnoses; if a particular minimal diagnosis is exonerated, all its supersets are also exonerated. To further discriminate amongst candidate diagnoses, it uses additional circuit measurements. To make an optimum set of measurements, it uses one-step look-ahead based on minimum entropy to predict the best probing sequence. Failure probabilities of individual components are needed to guide this process. Such failure probabilities can often be difficult to obtain, so an extension to GDE proposes the use of crude probability estimates to guide diagnosis. It does this by assuming that all components fail with equal, and extremely small, probability. Lastly, some extensions to GDE exploit fault modes or models to provide additional diagnostic discrimination . In , the extended diagnostic engine (XDE) extends the GDE program described above to deal with more complex circuits, including sequential ones. It uses a structural language called BASIL to provide both a physical and a functional representation of a circuit. For example, the functional representation would describe an arithmetic circuit in terms of adders and multipliers, whereas the physical description would describe the actual components used to build the circuit. The relationship between both descriptions is defined. To describe behavior, a temporal constraint propagation language called TINT is used. TINT defines rules at multiple levels of temporal abstraction to describe the operation of the circuit, primarily at the functional level. Probability estimates are used to rank alternative diagnoses and to choose the next best measurement. These estimates are defined relative to a component's complexity. To refine diagnoses further, fault models are employed to adjust the probability estimates. XDE has been tested on complex boards, including microprocessor-based circuits. In BIB003 and BIB007 , DEDALE, an approach for analog circuit fault diagnosis, is described. DEDALE is an ATMS-like system. Component behavior is described using qualitative models based on relative orders of magnitude. Some components, such as transistors, can have a number of correct modes of operation.
Diagnosis is performed in a hierarchical fashion, starting at the device level, which is diagnosed in a functional manner, and working down to the component level, block by block. To perform inference within a defective block, the measurements at each node and the components attached to it are checked for consistency against a correct model for the observed measurements. An inconsistency in the behavior indicates that one of the components attached to that node is defective. Intersection with other inconsistent nodes can further isolate the defective component. CATS is a domain-independent diagnosis engine based on the GDE framework, but with extensions to process values which are imprecise and change with time. DIANA is an implementation of CATS for diagnosing analog circuits BIB007 . To allow for measurement imprecision, quantities are represented in CATS/DIANA using ranges or numeric intervals. Continuous signals are represented using arrays of numeric intervals, accompanied by a triplet defining the sample start instant, the sampling increment, and the number of samples. In order to repeat measurements, the sample start instant must be synchronized in some way (e.g., to a clock signal). The imprecision of component parameters is also represented using numeric intervals. Component models are qualitative approximations, not suitable for accurate simulations but adequate for troubleshooting purposes. The diagnostic engine CATS receives as input constraints from the models and measurements. Then, using an ATMS-like inference mechanism, it produces diagnostic candidates as outputs. In , a generic model-based diagnostic system for a particular area of technical diagnosis (switch-mode power supplies) is presented. A structural model based on frames and a behavioral model based on heuristic rules, which represent fault behavior in modules or components, are used. 3) Issues: Models based on structure and behavior would appear to represent an ideal solution for many diagnostic problems. Theoretically, because correct models are used, all faults can be diagnosed, and CAD data can be used to generate suitable models automatically. However, in practice, there are a number of significant limitations. 1) It is computationally intensive for complex problems . Focusing on the most probable failures first and the inclusion of fault models have been used to improve efficiency . 2) Representing the behavior of complex components, such as a Pentium microprocessor, is still a major research issue . 3) Complete and consistent models are hard to develop. Essentially, a model is only an approximate representation of a real-world system. For example, a circuit bridging fault will not be represented in the structural model . 4) Information relating to the ways the system can fail is often not present. This can lead to the isolation of nonsensical faults . 5) Unless CAD generation is possible, models can be time-consuming to develop and maintain.
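The candidate generation idea behind HT and GDE can be illustrated with a small consistency check. The sketch below uses a classic adder-and-multiplier arrangement as a stand-in for the kind of arithmetic circuit referred to as Fig. 3 (the actual figure is not reproduced here), so the wiring, input values, and observed outputs are assumptions chosen for illustration. For each single-component candidate, that component's constraint is suspended and the remaining constraints are propagated; candidates that leave the observations consistent are retained.

```python
# Consistency-based candidate generation with constraint suspension for a
# small arithmetic circuit of multipliers and adders. The wiring, inputs,
# and observed outputs are illustrative assumptions standing in for the
# kind of circuit referred to as Fig. 3.

COMPONENTS = [                 # (name, kind, input 1, input 2, output)
    ("M1", "mult", "A", "C", "X"),
    ("M2", "mult", "B", "D", "Y"),
    ("M3", "mult", "C", "E", "Z"),
    ("A1", "add",  "X", "Y", "F"),
    ("A2", "add",  "Y", "Z", "G"),
]

OBSERVED = {"A": 3, "B": 2, "C": 2, "D": 3, "E": 3, "F": 10, "G": 12}


def consistent(suspended):
    """Suspend the given components' constraints, propagate values through the
    rest, and report whether the observations remain consistent."""
    values = dict(OBSERVED)
    changed = True
    while changed:
        changed = False
        for name, kind, i1, i2, out in COMPONENTS:
            if name in suspended:
                continue
            a, b, c = values.get(i1), values.get(i2), values.get(out)
            derived = {}
            if kind == "add":
                if a is not None and b is not None: derived[out] = a + b
                if a is not None and c is not None: derived[i2] = c - a
                if b is not None and c is not None: derived[i1] = c - b
            else:  # multiplier; invert only when the known input is non-zero
                if a is not None and b is not None: derived[out] = a * b
                if a is not None and c is not None and a != 0: derived[i2] = c / a
                if b is not None and c is not None and b != 0: derived[i1] = c / b
            for wire, val in derived.items():
                if wire not in values:
                    values[wire] = val
                    changed = True
                elif values[wire] != val:
                    return False   # two different values on one wire: conflict
    return True


if __name__ == "__main__":
    candidates = [name for name, *_ in COMPONENTS if consistent({name})]
    print("single-fault candidates:", candidates)   # ['M1', 'A1'] here
```

Local value propagation of this kind is only a simplified stand-in for the ATMS-based machinery used in GDE and XDE, but it shows why, for the assumed observations, only some components survive as single-fault candidates.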
Fault diagnosis of electronic systems using intelligent techniques: a review <s> B. Causal Models 1) Approach: <s> The issue of how to effectively integrate and use symbolic causal knowledge with numeric estimates of probabilities in abductive diagnostic expert systems is examined. In particular, a formal probabilistic causal model that integrates Bayesian classification with a domain-independent artificial intelligence model of diagnostic problem solving (parsimonious covering theory) is developed. Through a careful analysis, it is shown that the causal relationships in a general diagnostic domain can be used to remove the barriers to applying Bayesian classification effectively (large number of probabilities required as part of the knowledge base, certain unrealistic independence assumptions, the explosion of diagnostic hypotheses that occurs when multiple disorders can occur simultaneously, etc.). Further, this analysis provides insight into which notions of "parsimony" may be relevant in a given application area. In a companion paper, Part Two, a computationally efficient diagnostic strategy based on the probabilistic causal model discussed in this paper is developed. <s> BIB001 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> B. Causal Models 1) Approach: <s> An important issue in diagnostic problem solving is how to generate and rank plausible hypotheses for a given set of manifestations. Since the space of possible hypotheses can be astronomically large if multiple disorders can be present simultaneously, some means is required to focus an expert system's attention on those hypotheses most likely to be valid. A domain-independent algorithm is presented that uses symbolic causal knowledge and numeric probabilistic knowledge to generate and evaluate plausible hypotheses during diagnostic problem solving. Given a set of manifestations known to be present, the algorithm uses a merit function for partially completed competing hypotheses to guide itself to the provably most probable hypothesis or hypotheses. <s> BIB002
A causal model is a directed graph in which the nodes represent the variables of the modeled system and the links represent the relationships or associations between the variables. For example, in a diagnostic model, the variables often represent the symptoms and the faults, and the links represent the symptom-fault associations. The strength of each link is often defined using a numerical weight or probability, so the fault hypotheses formed can be ranked or eliminated using Bayesian techniques BIB001 , BIB002 . Bayesian networks are a variation on this approach .
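As a toy illustration of ranking fault hypotheses in such a model, the sketch below uses a two-layer fault-symptom graph under a single-fault assumption; the faults, symptoms, priors, and link probabilities are all invented for illustration.

```python
# Ranking fault hypotheses in a small symptom-fault causal model. Nodes are
# faults and symptoms; each link carries P(symptom | fault). Under a
# single-fault assumption, Bayes' rule ranks the candidate faults. All priors
# and link probabilities are invented for illustration.

PRIOR = {"psu_failure": 0.02, "ram_failure": 0.05, "disk_failure": 0.10}

LINK = {                                  # P(symptom present | fault)
    "psu_failure":  {"no_boot": 0.9, "random_reboot": 0.6, "slow_io": 0.05},
    "ram_failure":  {"no_boot": 0.3, "random_reboot": 0.8, "slow_io": 0.10},
    "disk_failure": {"no_boot": 0.1, "random_reboot": 0.2, "slow_io": 0.90},
}


def rank_faults(observed):
    """observed: dict symptom -> True/False; returns faults ranked by posterior."""
    scores = {}
    for fault, prior in PRIOR.items():
        likelihood = 1.0
        for symptom, present in observed.items():
            p = LINK[fault][symptom]
            likelihood *= p if present else (1.0 - p)
        scores[fault] = prior * likelihood
    total = sum(scores.values())
    return sorted(((f, s / total) for f, s in scores.items()),
                  key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    ranking = rank_faults({"no_boot": False, "random_reboot": True, "slow_io": True})
    for fault, posterior in ranking:
        print(f"{fault}: {posterior:.2f}")
```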
Fault diagnosis of electronic systems using intelligent techniques: a review <s> 2) Applications: <s> Research efforts to implement a Bayesian belief-network-based expert system to solve a real-world diagnostic problem-the diagnosis of integrated circuit (IC) testing machines-are described. The development of several models of the IC tester diagnostic problem in belief networks also is described, the implementation of one of these models using symbolic probabilistic inference (SPI) is outlined, and the difficulties and advantages encountered are discussed. It was observed that modeling with interdependencies in belief networks simplifies the knowledge engineering task for the IC tester diagnosis problem, by avoiding procedural knowledge and focusing on the diagnostic component's interdependencies. Several general model frameworks evolved through knowledge engineering to capture diagnostic expertise that facilitated expanding and modifying the networks. However, model implementation was restricted to a small portion of the modeling, that of contact resistance failures, which were due to time limitations and inefficiencies in the prototype inference software we used. Further research is recommended to refine existing methods, in order to speed evaluation of the models created in this research. With this accomplished, a more complete diagnosis can be achieved <s> BIB001 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> 2) Applications: <s> I report on my experience over the past few years in introducing automated, model-based diagnostic technologies into industrial settings. In partic-ular, I discuss the competition that this technology has been receiving from handcrafted, rule-based diagnostic systems that has set some high standards that must be met by model-based systems before they can be viewed as viable alternatives. The battle between model-based and rule-based approaches to diagnosis has been over in the academic literature for many years, but the situation is different in industry where rule-based systems are dominant and appear to be attractive given the considerations of efficiency, embeddability, and cost effectiveness. My goal in this article is to provide a perspective on this competition and discuss a diagnostic tool, called DTOOL/CNETS, that I have been developing over the years as I tried to address the major challenges posed by rule-based systems. In particular, I discuss three major features of the developed tool that were either adopted, designed, or innovated to address these challenges: (1) its compositional modeling approach, (2) its structure-based computational approach, and (3) its ability to synthesize embeddable diagnostic systems for a variety of software and hardware platforms. <s> BIB002
In BIB001 , a Bayesian network is applied to the diagnosis of an integrated circuit tester. The knowledge of a domain expert regarding the probability of different tester failure modes is represented as a Bayesian network. According to BIB002 , rule-based systems are more prevalent than model-based approaches in industry because it is perceived that model-based systems are more difficult to build. To overcome this, a tool for converting a simple block diagram of a system into a causal model is presented. 3) Issues: Expert knowledge of the application area is needed to construct a causal model, so the "knowledge acquisition bottleneck" is its primary shortcoming. The primary advantage is the ability to represent complex structured knowledge about physical or abstract concepts more easily than rules, leading to greater computational efficiency. In addition, causal models are based on the firm mathematical theory of probability.
Fault diagnosis of electronic systems using intelligent techniques: a review <s> D. Diagnostic Inference Model 1) Approach: <s> In diagnosing a failed system, a smart technician would choose tests to be performed based on the context of the situation. Currently, test program sets do not fault-. isolate within the context of a situation. Instead, testing follows a rigid, predetermined, fault-isolation sequence that is based on an embedded fault tree. Current test programs do not tolerate instrument failure and cannot redirect testing by incorporating new information. However, there is a new approach to automatic testing that emulates the best features of a trained technician yet, unlike the development of rule-based expert systems, does not require a trained technician to build the knowledge base. This new approach is model-based and has evolved over the last 10 years. This evolution has led to the development of several maintenance tools and an architecture for intelligent automatic test equipment (ATE). The architecture has been implemented for testing two cards from an AV-8B power supply. <s> BIB001 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> D. Diagnostic Inference Model 1) Approach: <s> Part One: Motivation. 1. Introduction. 2. Maintainability: a Historical Perspective. 3. Field Diagnosis and Repair: the Problem. Part Two: Analysis and Application. 4. Bottom-Up Modeling for Diagnosis. 5. System Level Analysis for Diagnosis. 6. The Information Flow Model. 7. System Level Diagnosis. 8. Evaluating System Diagnosability. 9. Verification and Validation. 10. Architecture for System Diagnosis. Part Three: Advanced Topics. 11. Inexact Diagnosis. 12. Partitioning Large Problems. 13. Modeling Temporal Information. 14. Adaptive Diagnosis. 15. Diagnosis -- Art versus Science. References. Index. <s> BIB002 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> D. Diagnostic Inference Model 1) Approach: <s> Artificial Intelligence: Structures and Strategies for Complex Problem Solving by George F. Luger 6th edition, Addison Wesley, 2008 The book serves as a good introductory textbook for artificial intelligence, particularly for undergraduate level. It covers major AI topics and makes good connection between different areas of artificial intelligence. Along with each technique and algorithm introduced in the book, is a discussion of its complexity and application domain. There is an attached website to the book that provides auxiliary materials for some chapters, sample problems with solutions, and ideas for student projects. Besides Prolog and Lisp, java and C++ are also used to implement many of the algorithms in the book. The book is organized in five parts. The first part (chapter 1) gives an overview of AI, its history and its various application areas. The second part (chapters 2–6) concerns with knowledge representation and search algorithms. Chapter 2 introduces predicate calculus as a mathematical tool for representing AI problems. The state space search as well as un-informed and heuristic search methods is introduced in chapters 3 and 4. Chapter 5 discusses the issue of uncertainty in problem solving and covers the foundation of stochastic methodology and its application. In chapter 6 the implementation of search algorithms is shown in production system and blackboard architectures. 
Part 3 (chapters 7–9) discusses knowledge representation and different methods of problem solving, including strong, weak and distributed problem solving. Chapter 7 begins with reviewing the history of evolution of AI representation schemes, including semantic networks, frames, scripts and conceptual graphs. This chapter ends with a brief introduction of Agent problem solving. Chapter 8 presents the production model and rule-based expert systems as well as case-based and model-based reasoning. The methods of dealing with various aspects of uncertainty are discussed in chapter 9. These methods include Dempster-Shafer theory of evidence, Bayesian and Belief networks, fuzzy logics and Markov models. Part 4 is devoted to machine learning. Chapter 10 describes algorithms for symbol-based learning, including induction, concept learning, vision-space search and ID3. The neural network methods for learning, such as back propagation, competitive, Associative memories and Hebbian Coincidence learning were presented in chapter 11. Genetic algorithms and evolutionary learning approaches are introduced in chapter 12. Chapter 13 introduces stochastic and dynamic models of learning along with Hidden Markov Models, Dynamic Baysian networks and Markov Decision Processes. Part 5 (chapters 14 and 15) examines two main application of AI: automated reasoning and natural language understanding. Chapter 14 begins with an introduction to weak methods in problem solving and continues with presenting resolution theorem proving. Chapter 15 deals with the complex issue of natural language understanding by discussing main methods of syntax and semantic analysis of natural language corpus. The chapter ends with examples of natural language application in Database query generation, text summarization and question answering systems. Finally, chapter 16 is a summary of the materials covered in the book as well current AI's limitations and future directions. One criticism about the book would be that the materials are not covered in enough depth. Because of the space limitation, many important AI algorithms and techniques are discussed briefly without providing enough details. As a result, some chapters (e.g., 8, 9, 11, and 13) of the book should be supported by complementary materials to make it understandable for undergraduate students and motivating for graduate students. Another issue is with the structure of the book. The order of presenting chapters introduces sequentially different challenges and techniques in problem solving. Consequently, some topics such as uncertainty and logic are not introduced separately and are distributed in different chapters of the book related to different parts. Although interesting, this makes the book hard to follow. In summary, the book gives a great insight to the readers that want to familiar themselves with artificial intelligence. It covers a broad range of topics in AI problem solving and its practical application and is a good reference for an undergraduate level introductory AI class. Elham S. Khorasani, Department of Computer Science Southern Illinois University Carbondale, IL 62901, USA <s> BIB003
The diagnostic inference model BIB002 performs diagnosis by representing the problem to be solved via the flow of diagnostic information. Previously known as the information flow model, it was renamed to reflect the model's focus on the information provided by diagnostics and the inferences that can be drawn from this information. The model consists of two basic elements: tests and conclusions. Tests consist of any source of diagnostic information, including observable symptoms, logistics history, and results from diagnostic tests. Conclusions typically represent faults or units to replace. The dependency relationship between tests and conclusions is represented using a directed graph. In addition to tests and conclusions, there are three other possible elements in a diagnostic inference model: testable inputs, untestable inputs, and No-Fault. An input represents information entering the system which may affect the health of the system. A testable input can be examined for validity; an untestable input cannot. No-Fault is a special conclusion indicating that the test set found no fault. Fig. 4 shows an example of a diagnostic inference model. Test sequencing is optimized using algorithms based on maximum test information gain. Diagnostic inference combines information from multiple tests using several logical and statistical inference techniques, including a modified form of Dempster-Shafer (D-S) evidential reasoning BIB003 which incorporates a special conclusion, the unanticipated result. The unanticipated result compensates for disappearing uncertainty in the face of conflict. As with all model-based techniques, conflicting diagnoses may be derived. Conflicts are caused by test error, multiple faults, or incomplete or inaccurate models. The D-S method and certainty factors are both used as methods for reasoning with these uncertainties . 2) Applications: Various successful uses of the diagnostic inference model are summarized in . In , its application to radar system maintenance is outlined, and in BIB001 its application to the diagnosis of power supplies is described. In , an approach similar to the diagnostic inference model is proposed and deployed for the troubleshooting of complex PC boards to component level. Models of the tests, rather than of structure or behavior, are used. The test models are specified in terms of how the tests act on the device under test (e.g., whether the test accesses memory or outputs to a port), and each test is mapped to specific components. This is combined with information on the degree to which each component is exercised, to give a relative weighting to each diagnosis using a Bayesian-like probabilistic formula. The system now forms part of the Hewlett-Packard Fault Detective product. 3) Issues: Diagnostic inference models are at their most effective if considered and implemented at the design phase of the product lifecycle. Unfortunately, with many systems, design for diagnosis is still not an important consideration, so an inadequate supply of structured diagnostic information makes accurate diagnosis difficult using this approach. However, if an adequate model can be built using the available diagnostic information, diagnosis can be both accurate and computationally efficient .
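A minimal sketch of this style of inference is shown below: each test is mapped to the conclusions it depends on, failed tests implicate their conclusions, passing tests exonerate theirs, and the next test is chosen to split the remaining suspects as evenly as possible (a crude stand-in for maximum information gain). The tests, units, and dependencies are illustrative assumptions, not taken from any cited model.

```python
# A small diagnostic inference (information flow) model: each test is mapped
# to the conclusions (replaceable units) it exercises. Failed tests implicate
# their conclusions, passing tests exonerate theirs, and the next test is
# chosen to split the remaining suspects. All names and dependencies are
# illustrative assumptions.

DEPENDS_ON = {                     # test -> conclusions the test depends on
    "t_power_rail": {"psu", "mainboard"},
    "t_memory":     {"ram", "mainboard"},
    "t_disk_smart": {"disk"},
    "t_boot":       {"psu", "ram", "disk", "mainboard"},
}
CONCLUSIONS = {"psu", "ram", "disk", "mainboard"}


def suspects(results):
    """results: dict test -> True (pass) / False (fail); single-fault assumption."""
    failed = [t for t, ok in results.items() if not ok]
    if not failed:
        return {"No-Fault"}
    candidates = set(CONCLUSIONS)
    for t in failed:                         # the fault must explain every failure
        candidates &= DEPENDS_ON[t]
    for t, ok in results.items():            # a passing test exonerates its units
        if ok:
            candidates -= DEPENDS_ON[t]
    return candidates or {"unanticipated result"}   # conflicting evidence


def next_test(results, remaining):
    """Pick the unused test that splits the remaining suspects most evenly
    (a crude surrogate for choosing the test with maximum information gain)."""
    unused = [t for t in DEPENDS_ON if t not in results]
    if not unused or len(remaining) <= 1:
        return None
    return min(unused, key=lambda t: abs(len(remaining & DEPENDS_ON[t])
                                         - len(remaining) / 2))


if __name__ == "__main__":
    results = {"t_boot": False}              # the boot test has failed
    remaining = suspects(results)
    print("suspects:", sorted(remaining))
    print("next test:", next_test(results, remaining))
```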
Fault diagnosis of electronic systems using intelligent techniques: a review <s> VI. MACHINE LEARNING APPROACHES <s> Field service is now recognized as one of the most important corporate activities in order to improve customer satisfaction and to compete successfully world-wide competition. Sharing repair experience with a state-of-the-art computer technology is a key issue to improve the productivity of field service. We have developed a diagnostic expert system, named Doctor, which employs case-based reasoning (CBR) and lists the most necessary ten service parts from a product type and some symptoms acquired from a service-request call. In this paper, we describe the Doctor system and explain how accurate and reliable product-type case-bases are generated and updated from the troubleshooting experience and the generic case base, i.e., general diagnostic knowledge. We also demonstrate the effectiveness of our system with experimental results using real repair cases. > <s> BIB001 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> VI. MACHINE LEARNING APPROACHES <s> One problem with using CBR for diagnosis is that a full case description may not be available at the beginning of the diagnosis. The standard CBR methodology requires a detailed case description in order to perform case retrieval and this is often not practical in diagnosis. We describe two fault diagnosis tasks where many features may make up a case description but only a few features are required in an individual diagnosis. We evaluate an incremental CBR mechanism that can initiate case retrieval with a skeletal case description and will elicit extra discriminating information during the diagnostic process. <s> BIB002 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> VI. MACHINE LEARNING APPROACHES <s> Abstract Diagnosis and repair operations are often major bottlenecks in electronics circuit assembly operations. Increasing board density and circuit complexity have made fault diagnosis difficult. But, with shrinking product life cycles and increasing competition, quick diagnosis and feedback is critical for cost control, process improvement, and timely product introduction. This paper describes a case-based diagnosis support system to improve the effectiveness and efficiency of circuit diagnosis in electronics assembly facilities. The system stores individual diagnostic instances rather than general rules and algorithmic procedures, and prioritizes the tests during the sequential testing process. Its knowledge base grows as new faults are detected and diagnosed by the analyzers. The system provides distributed access to multiple users, and incorporates on-line updating features that make it quick to adapt to changing circumstances. Because it is easy to install and update, this method is well-suited for real manufacturing applications. We have implemented a prototype version, and tested the approach in an actual electronics assembly environment. We describe the system's underlying principles, discuss methods to improve diagnostic effectiveness through principled test selection and sequencing, and discuss managerial implications for successful implementation. <s> BIB003
The approaches discussed in the previous sections, once implemented, will have a fixed level of performance. It is not possible to improve performance by using the experience of past successes and failures. Machine learning approaches exploit knowledge of previous successful or failed diagnoses to continually improve system performance, or use available domain data to automatically generate knowledge. A. Case-Based Reasoning 1) Approach: Case-based reasoning (CBR) involves storing experiences of past solutions known as cases, retrieving a suitable case to use in a new problem situation, adapting and reusing the retrieved case to suit the new problem, revising the adapted case based on its level of success or failure, and eventually retaining any useful learned experiences in the case memory . A CBR solution generally consists of the following steps. • Knowledge or Case Representation. • Case Retrieval. • Case Reuse. • Case Revision. • Case Retainment (or learning). Case representation, often called case memory, consists of deciding what to store in a case, selecting an appropriate structure for representing the case contents, and deciding on a suitable case indexing scheme to enable efficient retrieval. Case retrieval consists of the following steps. 1) Identify features which sum up the current problem or case. 2) Use the features to find similar cases in the case memory; these are ranked in order of similarity. 3) Perform final matching by analyzing in more detail the cases selected in step 2) against the current case, and select the most similar case. Case reuse consists of finding the differences between the past and the current case, and then adapting the past case in some way to match the current case. Common forms of adaptation include substitution (substituting new values for old) and transformation (using heuristics). Case revision involves evaluating the case solution from the reuse phase and, if necessary, repairing any parts of the solution which are contributing to an inadequate result. Evaluation involves applying the solution in a real situation and measuring in some way its level of success. Errors in the solution are then detected and repaired using domain-specific knowledge. Finally, case retainment (or learning) adds useful information learned during the current problem-solving task to the case memory. This may include not only successful new cases, but also failed cases ("don't do that again!"). Retainment can be an adjustment to existing cases and their indices, or the addition of an entirely new case. 2) Applications: In BIB001 , each case is represented using an ID number, frequency, symptoms, and actions. On retrieval, a possibility metric based on similarity and frequency is used to rank cases. The problem of generating casebases for new products is discussed, and the solution of using two casebases, a generic casebase and a product-type casebase, is proposed. The generic casebase stores domain diagnostic rules based on symptom-defect causalities. The product-type casebase is generated from the generic casebase by specializing its cases and updating the frequencies. In BIB002 , an incremental case-based electronic fault diagnosis system is presented. A minimal case description can be used to perform initial case retrieval; the retrieved set is then examined to determine tests which the operator is asked to perform, and these results are used to discriminate between cases.
In BIB003 , a circuit diagnosis support system for electronic assembly operations is described. Real-time diagnosis is required, so CBR is chosen over model-based diagnosis (MBD), as the computational overhead of MBD is considered to be too high. Initial case retrieval is performed, and additional tests are then optimally selected, using dynamic programming techniques or heuristics, to refine the diagnosis. The case-base is updated after each diagnosis to reflect previously unknown faults. After five weeks of on-line use, the system could diagnose 95% of defects. 3) Issues: The effectiveness of CBR depends on the availability of suitable case data, generated from historical data or simulation, and on the selection of effective indexing, retrieval, and adaptation methods.
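The retrieval step can be sketched as a weighted nearest-neighbor match over symptom features, as below; the case memory, feature weights, and fault labels are invented for illustration, and the adaptation, revision, and retainment steps are omitted.

```python
# Minimal case retrieval for case-based diagnosis: each stored case holds
# symptom features and the fault that was eventually found; a new (possibly
# partial) problem description is matched against the case memory by a
# weighted similarity score. Cases, weights, and faults are invented.

CASE_MEMORY = [
    {"symptoms": {"no_video": 1, "beep_code": 3, "fan_ok": 1}, "fault": "video card"},
    {"symptoms": {"no_video": 1, "beep_code": 0, "fan_ok": 0}, "fault": "power supply"},
    {"symptoms": {"no_video": 0, "beep_code": 0, "fan_ok": 1}, "fault": "hard disk"},
]
WEIGHTS = {"no_video": 2.0, "beep_code": 1.0, "fan_ok": 1.0}


def similarity(query, case_symptoms):
    """Weighted fraction of query features matched by the stored case (0..1)."""
    total = sum(WEIGHTS[f] for f in query)
    matched = sum(WEIGHTS[f] for f, v in query.items() if case_symptoms.get(f) == v)
    return matched / total if total else 0.0


def retrieve(query, k=2):
    ranked = sorted(CASE_MEMORY, key=lambda c: similarity(query, c["symptoms"]),
                    reverse=True)
    return ranked[:k]


if __name__ == "__main__":
    query = {"no_video": 1, "beep_code": 3}      # skeletal case description
    for case in retrieve(query):
        print(case["fault"], round(similarity(query, case["symptoms"]), 2))
```

Because the similarity score is computed only over the features present in the query, the same loop also works with the skeletal case descriptions used in incremental CBR.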
Fault diagnosis of electronic systems using intelligent techniques: a review <s> B. Explanation-Based Learning 1) Approach: <s> The author discusses an approach to identifying and correcting errors in diagnostic models using explanation based learning. The approach uses a model of the system to be diagnosed that may have missing information about the relationships between tests and possible diagnoses. In particular, he uses a structural model or information flow model to guide diagnosis. When misdiagnosis occurs, the model is used to determine how to search for the actual fault through additional testing. When the fault is identified, an explanation is constructed from the original misdiagnosis and the model is modified to compensate for the incorrect behavior of the system. > <s> BIB001 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> B. Explanation-Based Learning 1) Approach: <s> Artificial Intelligence: Structures and Strategies for Complex Problem Solving by George F. Luger 6th edition, Addison Wesley, 2008 The book serves as a good introductory textbook for artificial intelligence, particularly for undergraduate level. It covers major AI topics and makes good connection between different areas of artificial intelligence. Along with each technique and algorithm introduced in the book, is a discussion of its complexity and application domain. There is an attached website to the book that provides auxiliary materials for some chapters, sample problems with solutions, and ideas for student projects. Besides Prolog and Lisp, java and C++ are also used to implement many of the algorithms in the book. The book is organized in five parts. The first part (chapter 1) gives an overview of AI, its history and its various application areas. The second part (chapters 2–6) concerns with knowledge representation and search algorithms. Chapter 2 introduces predicate calculus as a mathematical tool for representing AI problems. The state space search as well as un-informed and heuristic search methods is introduced in chapters 3 and 4. Chapter 5 discusses the issue of uncertainty in problem solving and covers the foundation of stochastic methodology and its application. In chapter 6 the implementation of search algorithms is shown in production system and blackboard architectures. Part 3 (chapters 7–9) discusses knowledge representation and different methods of problem solving, including strong, weak and distributed problem solving. Chapter 7 begins with reviewing the history of evolution of AI representation schemes, including semantic networks, frames, scripts and conceptual graphs. This chapter ends with a brief introduction of Agent problem solving. Chapter 8 presents the production model and rule-based expert systems as well as case-based and model-based reasoning. The methods of dealing with various aspects of uncertainty are discussed in chapter 9. These methods include Dempster-Shafer theory of evidence, Bayesian and Belief networks, fuzzy logics and Markov models. Part 4 is devoted to machine learning. Chapter 10 describes algorithms for symbol-based learning, including induction, concept learning, vision-space search and ID3. The neural network methods for learning, such as back propagation, competitive, Associative memories and Hebbian Coincidence learning were presented in chapter 11. Genetic algorithms and evolutionary learning approaches are introduced in chapter 12. 
Chapter 13 introduces stochastic and dynamic models of learning along with Hidden Markov Models, Dynamic Baysian networks and Markov Decision Processes. Part 5 (chapters 14 and 15) examines two main application of AI: automated reasoning and natural language understanding. Chapter 14 begins with an introduction to weak methods in problem solving and continues with presenting resolution theorem proving. Chapter 15 deals with the complex issue of natural language understanding by discussing main methods of syntax and semantic analysis of natural language corpus. The chapter ends with examples of natural language application in Database query generation, text summarization and question answering systems. Finally, chapter 16 is a summary of the materials covered in the book as well current AI's limitations and future directions. One criticism about the book would be that the materials are not covered in enough depth. Because of the space limitation, many important AI algorithms and techniques are discussed briefly without providing enough details. As a result, some chapters (e.g., 8, 9, 11, and 13) of the book should be supported by complementary materials to make it understandable for undergraduate students and motivating for graduate students. Another issue is with the structure of the book. The order of presenting chapters introduces sequentially different challenges and techniques in problem solving. Consequently, some topics such as uncertainty and logic are not introduced separately and are distributed in different chapters of the book related to different parts. Although interesting, this makes the book hard to follow. In summary, the book gives a great insight to the readers that want to familiar themselves with artificial intelligence. It covers a broad range of topics in AI problem solving and its practical application and is a good reference for an undergraduate level introductory AI class. Elham S. Khorasani, Department of Computer Science Southern Illinois University Carbondale, IL 62901, USA <s> BIB002
Explanation-based learning (EBL) uses domain knowledge and a single training example to learn a new concept BIB002 . For example, in diagnosis, a system model and an example of misdiagnosis can be used to derive an explanation of an appropriate diagnosis. 2) Applications: In BIB001 , a diagnostic EBL system is described which improves diagnostic inference models following learning. It operates as follows. After a misdiagnosis, further testing is performed until a correct diagnosis is made; this additional knowledge is then used to modify the model so that the correct diagnosis is consistent with the testing. 3) Issues: EBL success depends on the availability of adequate domain knowledge. Therefore, for complex domains where extensive knowledge is needed to formulate new concepts, the approach may prove to be intractable.
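The misdiagnose-test-repair loop described above can be sketched as follows. The model structure (a simple mapping from test outcomes to suspect components) and the test names are hypothetical illustrations, not the information-flow model used in BIB001.

# Minimal sketch of the explanation-based model-repair loop.
model = {
    # test outcome -> components the model currently blames
    ("T1", "fail"): {"U1"},
    ("T2", "fail"): {"U2"},
}

def diagnose(outcomes):
    # Intersect the suspect sets implied by every observed test outcome.
    suspects = None
    for outcome in outcomes:
        implied = model.get(outcome, set())
        suspects = implied if suspects is None else suspects & implied
    return suspects or set()

def repair_model(outcomes, actual_fault):
    # Explain the misdiagnosis: make every observed outcome consistent
    # with the fault that further testing eventually confirmed.
    for outcome in outcomes:
        model.setdefault(outcome, set()).add(actual_fault)

observed = [("T1", "fail"), ("T2", "fail")]
if "U3" not in diagnose(observed):       # misdiagnosis revealed by additional testing
    repair_model(observed, "U3")         # the model now covers the new fault
print(diagnose(observed))                # {'U3'}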
Fault diagnosis of electronic systems using intelligent techniques: a review <s> - <s> Two tasks of fault detection in linear dynamical systems are addressed in this paper. On one hand, to estimate residuals, a system described by a model with some deviations in parameters or unknown input disturbances is considered. In such a situation, sensor fault detection using classical methods is not very efficient. In order to solve this problem, an adaptive thresholding approach using fuzzy logic is proposed. On the other hand, to locate faults, a fuzzy logic technique is put in place of usual classical logic used with dedicated observer scheme. > <s> BIB001 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> - <s> Diagnosing analog circuits with their numerous known difficulties is a very hard problem. Digital approaches have proven to be inappropriate, and AI-based ones suffer from many problems. In this paper we present a new system, FLAMES, which uses fuzzy logic, model-based reasoning, ATMS extension, and the human expertise in an appropriate combination to go far in the treatment of this problem. <s> BIB002 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> - <s> Testing and diagnosing analog circuits is a very challenging problem. The inaccuracy of measurement and the infinite domain of possible values are the principal difficulties. AI approaches were the base of many systems which tried to overcome these problems. The first part of this paper is a state of the art of this research area. We present two fundamental approaches, model-based reasoning and qualitative reasoning, and the systems implementing them; a discussion and an evaluation of these systems are given. In the second part, we present and propose a novel approach based on fuzzy logic in order to go further in dealing with analog circuits testing and diagnosis. Tolerance is treated by means of fuzzy intervals which are more general, more efficient and of higher fidelity to represent the imprecision in its different forms than other approaches. Fuzzy intervals are also able to be semi-qualitative which is more suitable to the simulation of analog systems. We use this idea to develop a best test point finding strategy based on fuzzy probabilities and fuzzy decision-making methodology. Finally, a complete expert system which implements this approach is presented. <s> BIB003 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> - <s> Abstract This paper is intended to give a survey on the state of the art of model-based fault diagnosis for dynamic processes employing artificial intelligence approaches. Emphasis is placed upon the use of fuzzy models for residual generation and fuzzy logic for residual evaluation. By the suggestion of a knowledge-based observer-like concept for residual generation, the basic idea of a novel observer concept, the so-called “knowledge observer”, is introduced. The neural-network approach for residual generation and evaluation is outlined as well. <s> BIB004 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> - <s> The degree of vagueness of variables, process description, and automation functions is considered and is shown. Where quantitative and qualitative knowledge is available for design and information processing within automation systems. Fuzzy-rule-based systems with several levels of rules form the basis for different automation functions. 
Fuzzy control can be used in many ways, for normal and for special operating conditions. Experience with the design of fuzzy controllers in the basic level is summarized, as well as criteria for efficient applications. Different fuzzy control schemes are considered, including cascade, feedforward, variable structure, self-tuning, adaptive and quality control leading to hybrid classical/fuzzy control systems. It is then shown how fuzzy logic approaches can be applied to process supervision and to fault diagnosis with approximate reasoning on observed symptoms. Based on the properties of fuzzy logic approaches the contribution gives a review and classification of the potentials of fuzzy logic in process automation. <s> BIB005
Fuzzy logic mimics human reasoning with imprecise linguistic statements such as "The water is very hot" or "The signal on the oscilloscope is a bit noisy." It deals with approximate rather than exact measurements and is based on fuzzy set theory , , , . In traditional sets, membership is either true [1] or false [0], and there is no concept of partial membership. In fuzzy sets, partial membership is allowed, so membership is represented by a value between 0 (definitely not a member) and 1 (definitely a member). In fuzzy set theory, a series of operators is defined for manipulating sets. Many of these are analogous to those used in conventional sets, such as union (OR), intersection (AND), and complement (NOT). Fuzzy reasoning consists of manipulating a series of unconditional and conditional fuzzy propositions or rules using fuzzy rules of inference. With its concept of partial set membership, fuzzy logic provides a good alternative for reasoning with uncertain and inaccurate data. 2) Applications: Most of the research work relating to fuzzy logic and diagnosis has occurred in the area of dynamic industrial processes. In this domain, fuzzy logic has been applied primarily to the following tasks BIB004 , BIB005 , BIB001 . a) Fault Detection: Industrial processes are characterized by dynamic continuous variables (symptoms). Such variables are affected by measurement errors, noise, and changing operating conditions. Therefore, reliable measurement thresholds are difficult to define. Fuzzy logic provides a good solution to this problem by representing signal values using overlapping linguistic variables. b) Fault Diagnosis: Fault diagnosis in dynamic processes is always approximate, as measured signal values are only known to a certain degree of accuracy. A fuzzy inference system based on fuzzy IF-THEN rules can provide a solution to this problem, and has been proposed and reported by many researchers. Applications which apply fuzzy logic to the diagnosis of electronic systems are also reported. FLAMES is a program for troubleshooting analog circuits BIB002 , BIB003 . It is a GDE-like program employing an ATMS. However, continuous signals and component parameters are represented using fuzzy values, and fuzzy values can be propagated across the circuit model. In , a similar system using possibility theory (a form of fuzzy logic) to improve the accuracy of diagnosis of analog circuits is reported. In , a practical application which uses fuzzy qualitative values for sensor measurements, in the development of self-maintenance photocopiers, is presented. 3) Issues: Because of its use of linguistic variables, fuzzy logic provides a very human-like and intuitive way of representing and reasoning with incomplete and inaccurate information. It is typically combined with other approaches such as rules, models, or cases, and provides a good alternative for reasoning under uncertainty.
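As a concrete illustration of overlapping linguistic values and fuzzy rule evaluation, the sketch below grades a measured supply voltage against hypothetical "normal" and "high" membership functions and evaluates one fuzzy rule, taking min as AND and 1 - membership as NOT. The breakpoints and the rule are illustrative assumptions, not values from the cited systems.

def mu_normal(v):
    # Membership of a measured voltage in the set 'normal' (triangle centred on 5 V).
    return max(0.0, 1.0 - abs(v - 5.0) / 1.0)

def mu_high(v):
    # Membership in 'high': ramps from 0 at 5.5 V to 1 at 6.5 V.
    return min(1.0, max(0.0, (v - 5.5) / 1.0))

def fault_degree(v, noise):
    # Rule: IF voltage is high AND the reading is not noisy THEN fault.
    # AND is taken as min, NOT as 1 - membership.
    return min(mu_high(v), 1.0 - noise)

for v in (5.1, 5.8, 6.6):
    print(v, "normal:", round(mu_normal(v), 2),
          "fault degree:", round(fault_degree(v, noise=0.2), 2))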
Fault diagnosis of electronic systems using intelligent techniques: a review <s> B. Artificial Neural Networks 1) Approach: <s> British Telecommunication plc (BT) has an interest in developing fast, efficient diagnostic systems especially for high volume circuit boards as found in today's digital telephone exchanges. Previous work to produce a diagnostic system for line cards has shown that a model-based, expert system shell can be most beneficial in assisting in the diagnosis and subsequent repair of these complex, mixed-signal cards. Expert systems, however successful, can take a long time to develop in terms of knowledge acquisition, model building and rule development. The re-emergence of neural networks stimulated the authors to develop a system that would diagnose common faults found on line cards by training a network using historical test data. <s> BIB001 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> B. Artificial Neural Networks 1) Approach: <s> Neural network software can be applied to manufacturing process control as a tool for diagnosing the state of an electronic circuit board. The neural network approach significantly reduces the amount of time required to build a diagnostic system. This time reduction occurs because the ordinary combinatorial explosion in rules for identifying faulted components can be avoided. Neural networks circumvent the combinatorial explosion by taking advantage of the fact that the fault characteristics of multiple simultaneous faults frequently correlate to the fault characteristics of the individual faulted components. This article clearly demonstrates that state-of-the-art neural networks can be used in automatic test equipment for iterative diagnosis of electronic circuit board malfunctions. <s> BIB002 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> B. Artificial Neural Networks 1) Approach: <s> Very complex technical and other physical processes require sophisticated methods of fault diagnosis and online condition monitoring. Various conventional techniques have already been well investigated and presented in the literature. However, in the last few years, a lot of attention has been given to adaptive methods based on artificial neural networks, which can significantly improve the symptom interpretation and system performance in a case of malfunctioning. Such methods are especially considered in cases where no explicit algorithms or models for the problem under investigation exist. In such problems, automatic interpretation of faulty symptoms with the use of artificial neural network classifiers is recommended. Two different models of artificial neural networks, the extended backpropagation and the radial basis function, are discussed and applied with appropriate simulations for a real world applications in a chemical manufacturing plant. > <s> BIB003 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> B. Artificial Neural Networks 1) Approach: <s> Part One: Motivation. 1. Introduction. 2. Maintainability: a Historical Perspective. 3. Field Diagnosis and Repair: the Problem. Part Two: Analysis and Application. 4. Bottom-Up Modeling for Diagnosis. 5. System Level Analysis for Diagnosis. 6. The Information Flow Model. 7. System Level Diagnosis. 8. Evaluating System Diagnosability. 9. Verification and Validation. 10. Architecture for System Diagnosis. Part Three: Advanced Topics. 11. Inexact Diagnosis. 12. Partitioning Large Problems. 13. 
Modeling Temporal Information. 14. Adaptive Diagnosis. 15. Diagnosis -- Art versus Science. References. Index. <s> BIB004 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> B. Artificial Neural Networks 1) Approach: <s> It is shown, by means of an example, how multiple faults in bipolar analogue integrated circuits can be diagnosed, and their resistances determined, from the magnitudes of the Fourier harmonics in the spectrum of the circuit responses to a sinusoidal input test signal using a two-stage multilayer perceptron (MLP) artificial neural network arrangement to classify the responses to the corresponding fault. A sensitivity analysis is performed to identify those harmonic amplitudes which are most sensitive to the faults, and also to which faults the functioning of the circuit under test is most sensitive. The experimental and simulation procedures are described. The procedures adopted for data preprocessing and for training the MLPs are given. One hundred percent diagnostic accuracy was achieved, and most resistances were determined with tolerable accuracy. <s> BIB005 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> B. Artificial Neural Networks 1) Approach: <s> Abstract This paper is intended to give a survey on the state of the art of model-based fault diagnosis for dynamic processes employing artificial intelligence approaches. Emphasis is placed upon the use of fuzzy models for residual generation and fuzzy logic for residual evaluation. By the suggestion of a knowledge-based observer-like concept for residual generation, the basic idea of a novel observer concept, the so-called “knowledge observer”, is introduced. The neural-network approach for residual generation and evaluation is outlined as well. <s> BIB006 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> B. Artificial Neural Networks 1) Approach: <s> The paper describes a technique, based on the use of Artificial Neural Networks (ANNs), for the diagnosis of multiple faults in digital circuits. The technique utilises different quantities of randomly selected circuit test data derived from a fault truth table, which is constructed by inserting random single stuck-at faults in the circuit. The paper describes the diagnostic procedure using the technique, the ANN architecture and results obtained with example circuits. Our results demonstrate that when the test data selection procedure is guided by test vectors of the circuit a compact, efficient and flexible ANN architecture is achieved. <s> BIB007
The human brain is constructed of billions of interconnected cells, or miniprocessors, called neurons. Artificial neural networks (ANNs) are inspired by the brain's neural circuitry and apply this approach to complex problem solving . ANNs can be considered as weighted directed graphs, the neurons being the nodes and the connections between the nodes being weighted links. Groups of nodes are arranged in layers. 2) Applications: In BIB002 , an application of ANNs in the diagnosis of simple combinational digital circuits is described. A multilayer feedforward network is trained using back-propagation and is designed to detect single faults in a one-bit full adder. The inputs consist of circuit inputs and outputs, and internal test points. The outputs represent the defective component and fault type (none, stuck-open, stuck-closed). It operated well on single faults. The authors checked its ability to generalize by testing it with multiple faults; it did not generalize well. In BIB007 , a multilayer perceptron trained using back-propagation is used for diagnosing digital circuits. The input layer accepts the pass/fail status of a test vector set and the output layer corresponds to single faults. Trained with a fault dictionary for single faults, it showed 100% success for single-fault diagnosis and 75% success with two faults. In BIB001 , the diagnosis of telephone exchange line cards using ANNs at British Telecom is described. The authors had already explored and implemented a model-based approach to the same problem, but they wished to investigate the use of an ANN trained with historical data to achieve the same task. It was felt that an ANN solution could be implemented much more rapidly if the historical data was available. A three-layer feedforward network, with circuit measurements as the inputs and component pass/fail as the outputs, was constructed and trained using back-propagation. Comparing their experiences with model-based approaches, the authors summed up as follows. ANNs can be trained directly from data, are good with common faults, and provide rapid diagnosis. MBD can diagnose obscure faults, provides graphical support, and can explain a diagnosis. In BIB005 , an application for the diagnosis of multiple faults using multilayer perceptrons (MLPs) is presented. The circuit is a bipolar section of an analog IC. It is stimulated using a sine wave, and the magnitude of the Fourier harmonics in the spectrum of the circuit output is measured to verify and diagnose the circuit. A signature representing the output measurement is the input to the MLP, and the outputs represent the location and resistances of the faults (the fault types correspond to technological problems such as doping of the extrinsic zones or an open contact before metallization). The MLP was trained using back-propagation to detect single, dual, and triple faults using data generated via simulation. In , an ANN is designed which assists a technician in circuit diagnosis (e.g., next best node to measure). A three-layer network is used; inputs are either on or off, and represent symptom states, pins observed to be good, pins observed to be bad, and a flag indicating whether the overall circuit is good or bad; the outputs indicate the next best points to test. The input-hidden layer uses an unsupervised learning paradigm to form self-organizing feature maps containing knowledge about fault symptoms represented in topographical order.
Once the feature maps are formed, the hidden-output layer is trained using a supervised learning paradigm based on the delta rule to indicate the next best location to test. As well as equipment diagnosis, ANNs have also been applied to the diagnosis of dynamic processes. In this area, ANNs have been used to process the outputs of sensors and to perform diagnoses by using symptom-fault networks , BIB006 , BIB003 . In BIB004 , an ANN is used to determine when enough evidence has been gathered to draw a diagnostic conclusion. The approach used is to terminate when a pattern of certainty values indicates that a conclusion can be drawn. A three-layer network with three inputs and one output is used. The inputs represent the highest expected probability, the second highest expected probability, and the probability of an unanticipated result. The activation level of the output determines whether or not to terminate. Training was via back-propagation and used training data collected from experts. 3) Issues: The power of ANNs is their ability to approximate and recognize patterns. In diagnostic applications, they have shown great promise in areas where noise and error are present. The diagnosis of analog circuits is an example. However, their scalability to large systems and circuits is questionable, and they may best be used to assist other techniques in dealing with error and noise.
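A minimal sketch of the kind of three-layer feedforward network described above, trained with back-propagation on a fault-dictionary-style data set (pass/fail test results in, candidate faulty component out), is given below. The four-test, three-fault dictionary, the layer sizes, and the learning rate are hypothetical illustrations, not the configurations used in the cited systems.

import numpy as np

rng = np.random.default_rng(0)

# Inputs: pass/fail status of 4 test vectors (1 = fail), one row per fault signature.
X = np.array([[1, 0, 0, 1],    # signature of a fault in component C1
              [0, 1, 1, 0],    # fault in C2
              [1, 1, 0, 0]],   # fault in C3
             dtype=float)
Y = np.eye(3)                  # one output neuron per candidate faulty component

W1 = rng.normal(0, 0.5, (4, 6))    # input -> hidden weights
W2 = rng.normal(0, 0.5, (6, 3))    # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                    # back-propagation with a fixed step size
    H = sigmoid(X @ W1)                  # hidden activations
    O = sigmoid(H @ W2)                  # output activations
    dO = (O - Y) * O * (1 - O)           # output-layer error term
    dH = (dO @ W2.T) * H * (1 - H)       # hidden-layer error term
    W2 -= 0.5 * H.T @ dO
    W1 -= 0.5 * X.T @ dH

test_result = np.array([[1, 0, 0, 1]])   # observed pass/fail pattern on a failing board
scores = sigmoid(sigmoid(test_result @ W1) @ W2)
print("most likely faulty component: C%d" % (int(np.argmax(scores)) + 1))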
Fault diagnosis of electronic systems using intelligent techniques: a review <s> B. Model-Based Reasoning (MBR) and Fuzzy Logic <s> Diagnosing analog circuits with their numerous known difficulties is a very hard problem. Digital approaches have proven to be inappropriate, and AI-based ones suffer from many problems. In this paper we present a new system, FLAMES, which uses fuzzy logic, model-based reasoning, ATMS extension, and the human expertise in an appropriate combination to go far in the treatment of this problem. <s> BIB001 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> B. Model-Based Reasoning (MBR) and Fuzzy Logic <s> Testing and diagnosing analog circuits is a very challenging problem. The inaccuracy of measurement and the infinite domain of possible values are the principal difficulties. AI approaches were the base of many systems which tried to overcome these problems. The first part of this paper is a state of the art of this research area. We present two fundamental approaches, model-based reasoning and qualitative reasoning, and the systems implementing them; a discussion and an evaluation of these systems are given. In the second part, we present and propose a novel approach based on fuzzy logic in order to go further in dealing with analog circuits testing and diagnosis. Tolerance is treated by means of fuzzy intervals which are more general, more efficient and of higher fidelity to represent the imprecision in its different forms than other approaches. Fuzzy intervals are also able to be semi-qualitative which is more suitable to the simulation of analog systems. We use this idea to develop a best test point finding strategy based on fuzzy probabilities and fuzzy decision-making methodology. Finally, a complete expert system which implements this approach is presented. <s> BIB002 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> B. Model-Based Reasoning (MBR) and Fuzzy Logic <s> Preface. 1. Diagnostic Inaccuracies: Approaches to Mitigate W.R. Simpson. 2.Pass/Fail Limits - The Key to Effective Diagnostic Tests H. Dill. 3. Fault Hypothesis Computations Using Fuzzy Logic T.M. Bearse, M.L. Lynch. 4. Deriving a Diagnostic Inference Model from a Test Strategy T.M. Bearse. 5. Inducing Diagnostic Inference Models from Case Data J.W. Sheppard. 6. Accurate Diagnosis Through Conflict Management J.W. Sheppard, W.R. Simpson. 7. System Level Test Process Characterization and Improvement D. Farren, et al. 8. A Standard for Test and Diagnosis J. Taylor. 9. Advanced Onboard Diagnostic System for Vehicle Management K. Keller, et al. 12. Combining Model-Based and Case-Based Expert Systems M. Ben-Bassat, et al. 11. Enhanced Sequential Diagnosis A. Biasizzo, et al. Subject Index. <s> BIB003
A fuzzy extension to the diagnostic inference model is described in BIB003 . It uses fuzzy logic in its front end to deal with the uncertainty of measurements, and internally to generate membership degrees for the faults predicted by the outcomes of multiple tests. In , BIB001 , and BIB002 , extensions to the model-based ATMS architecture employing fuzzy logic are described. Essentially, fuzzy logic is used to improve the accuracy of component modeling and measurements.
Fault diagnosis of electronic systems using intelligent techniques: a review <s> C. Case-Based Reasoning (CBR), Artificial Neural Networks, and Fuzzy Logic <s> The authors describe a connectionist case-based diagnostic expert system that can learn while the system is being used. The system, called INSIDE (Inertial Navigation System Interactive Diagnostic Expert), was developed for Singapore Airlines to assist the technicians in diagnosing the inertial navigation system used by the airplanes. The system learns from past repair cases and adapts its knowledge base to newly solved cases without having to relearn all the old cases. > <s> BIB001 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> C. Case-Based Reasoning (CBR), Artificial Neural Networks, and Fuzzy Logic <s> In this paper, we describe a case-based system using fuzzy logic type neural networks for diagnosing electronic systems. We present a brief derivation of OR and AND neurons and the architecture of our system. To illustrate the effectiveness of the proposed system, we show experimental results on real data from call logs collected at the technical support centre in Ericsson Australia. <s> BIB002
In BIB001 , a connectionist case-based diagnostic expert system which learns incrementally is reported. Designed for Singapore Airlines to assist technicians in troubleshooting inertial navigation systems, it consists of two parts: a connectionist module and a flowchart module. The connectionist module is a three-layer feedforward network taking symptoms as its inputs and producing component faults as its outputs. It is trained using historical cases. The flowchart module is invoked if the ANN fails to produce a diagnosis. This consults the technician, and a new case is constructed, which is fed to the connectionist module to perform incremental learning. In BIB002 , a case-based diagnostic system using fuzzy neural networks is described. The system is used to diagnose telecommunications systems. Fuzzy rules relating symptoms to faults are encoded in the network architecture. The network is a three-layer feedforward network. The input data is fed through possibility measure nodes, then through fuzzy-AND neurons in the hidden layer, and finally through fuzzy-OR neurons in the output layer. It is trained using historical data from an existing helpdesk.
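One common formulation of the fuzzy-AND and fuzzy-OR neurons mentioned above takes min as the AND operation and max as the OR operation; the sketch below uses that convention, with hypothetical weights and symptom possibility values (the cited system may use different operators and parameters).

def or_neuron(x, w):
    # y = OR_i ( x_i AND w_i ): fires if any sufficiently weighted input is present.
    return max(min(xi, wi) for xi, wi in zip(x, w))

def and_neuron(x, w):
    # y = AND_i ( x_i OR w_i ): a weight near 1 makes that input irrelevant.
    return min(max(xi, wi) for xi, wi in zip(x, w))

# Possibility degrees of three symptoms extracted from a trouble report (hypothetical).
symptoms = [0.8, 0.1, 0.4]

# A hidden layer of fuzzy-AND neurons followed by a fuzzy-OR output neuron,
# mirroring the three-layer structure described above.
hidden = [and_neuron(symptoms, [0.0, 0.9, 0.2]),
          and_neuron(symptoms, [0.3, 0.0, 0.8])]
fault_possibility = or_neuron(hidden, [1.0, 0.6])
print(round(fault_possibility, 2))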
Fault diagnosis of electronic systems using intelligent techniques: a review <s> A. Rule-Based Approaches <s> Abstract A number of knowledge-based systems in the electronic engineering field have been developed in the past decade. These include those that use knowledge-based techniques to diagnose instrumentation, determine system configurations and to aid circuit and system design. This paper reviews the literature on knowledge-based systems for electronic engineering applications in these areas and reports progress made in the development of a basis for realising electronic systems. <s> BIB001 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> A. Rule-Based Approaches <s> This paper describes a tool to diagnose the cause of failure of a HP computer server. We do this by analyzing the dump of Processor Internal Memory (PIM) and maximizing the leverage of expert learning from one hardware failure situation to another. The tool is a rule-based expert system, with some nested rules which translate to decision trees. The rules were implemented using a metalanguage which was customized for the hardware failure analysis problem domain. Pimtool has been deployed to 25 users as of December 1996. We plan to expand usage to over 400 users by end of 1997. Using Pimtool, we expect to save over 15 minutes in Mean-Time-to-Repair (MTIR) per call. We have recognized that knowledge management will be a key issue in the future and are developing tools and strategies to address it. <s> BIB002
As rule-based systems strive to encompass the knowledge of a domain expert in the form of rules (often hundreds or thousands), development and maintenance can be complex and time-consuming. Particularly for systems with short lifecycles (many electronic systems), it may not be worth the development effort. In addition, only faults anticipated during the design phase can be diagnosed. Conversely, their intuitive simplicity makes rules easy to understand, and the inference sequence used for a particular problem can easily be traced. Additionally, the technique is well proven, with many rule-based systems having been deployed in real applications , BIB002 , BIB001 .
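For illustration, a rule-based diagnostic system of the kind discussed above can be reduced to a small forward-chaining interpreter over IF-THEN rules; the rules below are hypothetical examples rather than rules from the cited systems.

rules = [
    ({"no_power_led", "fuse_ok"},        "suspect_power_supply"),
    ({"suspect_power_supply", "no_5v"},  "replace_regulator"),
    ({"no_power_led"},                   "check_fuse"),
]

def forward_chain(facts):
    # Repeatedly fire any rule whose conditions are all satisfied by the known facts.
    facts = set(facts)
    fired = True
    while fired:
        fired = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                fired = True
    return facts

print(forward_chain({"no_power_led", "fuse_ok", "no_5v"}))
# -> includes 'suspect_power_supply' and 'replace_regulator'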
Fault diagnosis of electronic systems using intelligent techniques: a review <s> C. Case-Based Approaches <s> Field service is now recognized as one of the most important corporate activities in order to improve customer satisfaction and to compete successfully world-wide competition. Sharing repair experience with a state-of-the-art computer technology is a key issue to improve the productivity of field service. We have developed a diagnostic expert system, named Doctor, which employs case-based reasoning (CBR) and lists the most necessary ten service parts from a product type and some symptoms acquired from a service-request call. In this paper, we describe the Doctor system and explain how accurate and reliable product-type case-bases are generated and updated from the troubleshooting experience and the generic case base, i.e., general diagnostic knowledge. We also demonstrate the effectiveness of our system with experimental results using real repair cases. > <s> BIB001 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> C. Case-Based Approaches <s> One problem with using CBR for diagnosis is that a full case description may not be available at the beginning of the diagnosis. The standard CBR methodology requires a detailed case description in order to perform case retrieval and this is often not practical in diagnosis. We describe two fault diagnosis tasks where many features may make up a case description but only a few features are required in an individual diagnosis. We evaluate an incremental CBR mechanism that can initiate case retrieval with a skeletal case description and will elicit extra discriminating information during the diagnostic process. <s> BIB002 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> C. Case-Based Approaches <s> Abstract Diagnosis and repair operations are often major bottlenecks in electronics circuit assembly operations. Increasing board density and circuit complexity have made fault diagnosis difficult. But, with shrinking product life cycles and increasing competition, quick diagnosis and feedback is critical for cost control, process improvement, and timely product introduction. This paper describes a case-based diagnosis support system to improve the effectiveness and efficiency of circuit diagnosis in electronics assembly facilities. The system stores individual diagnostic instances rather than general rules and algorithmic procedures, and prioritizes the tests during the sequential testing process. Its knowledge base grows as new faults are detected and diagnosed by the analyzers. The system provides distributed access to multiple users, and incorporates on-line updating features that make it quick to adapt to changing circumstances. Because it is easy to install and update, this method is well-suited for real manufacturing applications. We have implemented a prototype version, and tested the approach in an actual electronics assembly environment. We describe the system's underlying principles, discuss methods to improve diagnostic effectiveness through principled test selection and sequencing, and discuss managerial implications for successful implementation. <s> BIB003
Case-based systems depend on past diagnostic experiences to perform new diagnoses. In practice, CBR has proved to be effective in real-world circuit diagnosis applications BIB003 , BIB002 . Issues include the following. 1) The inability to diagnose until an adequate case-base becomes available. In BIB003 , an application involving the diagnosis of consumer electronics products is described, and it is reported that the system could diagnose 90% of defects after six weeks of operation; however, the domain complexity is not apparent. Additionally, less common faults will be more difficult to diagnose due to their lack of presence in the initial case-base. 2) Compared to rule-based and model-based systems, it is not always apparent how conclusions are arrived at BIB002 , as the diagnosis is based on the overall fault pattern rather than a logical sequence of steps. 3) In BIB002 , development times and performance were reported to be better than an equivalent model-based solution previously reported in . 4) Development and maintenance are easier than for traditional solutions such as rules, as knowledge acquisition is on-line and incremental BIB003 . 5) Efficiency may be hindered by the indexing and retrieval mechanisms used, particularly as the case-base begins to grow . 6) How domain-specific are CBR solutions? A human technician can apply troubleshooting techniques learned on one product to a different product by extracting general-purpose rules or procedures from specific experiences. Can this be applied to CBR? In BIB001 , the case-base is divided into generic and specific knowledge. However, the generic case-base is defined by domain experts and is not incremental.
Fault diagnosis of electronic systems using intelligent techniques: a review <s> E. Hybrid Approaches <s> Diagnosing analog circuits with their numerous known difficulties is a very hard problem. Digital approaches have proven to be inappropriate, and AI-based ones suffer from many problems. In this paper we present a new system, FLAMES, which uses fuzzy logic, model-based reasoning, ATMS extension, and the human expertise in an appropriate combination to go far in the treatment of this problem. <s> BIB001 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> E. Hybrid Approaches <s> Testing and diagnosing analog circuits is a very challenging problem. The inaccuracy of measurement and the infinite domain of possible values are the principal difficulties. AI approaches were the base of many systems which tried to overcome these problems. The first part of this paper is a state of the art of this research area. We present two fundamental approaches, model-based reasoning and qualitative reasoning, and the systems implementing them; a discussion and an evaluation of these systems are given. In the second part, we present and propose a novel approach based on fuzzy logic in order to go further in dealing with analog circuits testing and diagnosis. Tolerance is treated by means of fuzzy intervals which are more general, more efficient and of higher fidelity to represent the imprecision in its different forms than other approaches. Fuzzy intervals are also able to be semi-qualitative which is more suitable to the simulation of analog systems. We use this idea to develop a best test point finding strategy based on fuzzy probabilities and fuzzy decision-making methodology. Finally, a complete expert system which implements this approach is presented. <s> BIB002 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> E. Hybrid Approaches <s> Preface. 1. Diagnostic Inaccuracies: Approaches to Mitigate W.R. Simpson. 2.Pass/Fail Limits - The Key to Effective Diagnostic Tests H. Dill. 3. Fault Hypothesis Computations Using Fuzzy Logic T.M. Bearse, M.L. Lynch. 4. Deriving a Diagnostic Inference Model from a Test Strategy T.M. Bearse. 5. Inducing Diagnostic Inference Models from Case Data J.W. Sheppard. 6. Accurate Diagnosis Through Conflict Management J.W. Sheppard, W.R. Simpson. 7. System Level Test Process Characterization and Improvement D. Farren, et al. 8. A Standard for Test and Diagnosis J. Taylor. 9. Advanced Onboard Diagnostic System for Vehicle Management K. Keller, et al. 12. Combining Model-Based and Case-Based Expert Systems M. Ben-Bassat, et al. 11. Enhanced Sequential Diagnosis A. Biasizzo, et al. Subject Index. <s> BIB003 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> E. Hybrid Approaches <s> Abstract Diagnosis and repair operations are often major bottlenecks in electronics circuit assembly operations. Increasing board density and circuit complexity have made fault diagnosis difficult. But, with shrinking product life cycles and increasing competition, quick diagnosis and feedback is critical for cost control, process improvement, and timely product introduction. This paper describes a case-based diagnosis support system to improve the effectiveness and efficiency of circuit diagnosis in electronics assembly facilities. The system stores individual diagnostic instances rather than general rules and algorithmic procedures, and prioritizes the tests during the sequential testing process. 
Its knowledge base grows as new faults are detected and diagnosed by the analyzers. The system provides distributed access to multiple users, and incorporates on-line updating features that make it quick to adapt to changing circumstances. Because it is easy to install and update, this method is well-suited for real manufacturing applications. We have implemented a prototype version, and tested the approach in an actual electronics assembly environment. We describe the system's underlying principles, discuss methods to improve diagnostic effectiveness through principled test selection and sequencing, and discuss managerial implications for successful implementation. <s> BIB004
A primary research direction has been the combined use of MBR and CBR in diagnostic systems. Models are often inconsistent and incomplete, resulting in inaccurate diagnoses. In addition, operators can input inaccurate information, again leading to inappropriate conclusions. When the model is supplemented by cases, irrelevant conclusions can easily be pruned from a candidate list of diagnoses. In addition, the model can be updated and improved using case data , . MBR can be too slow for real-time applications, so CBR may be a better alternative BIB004 . However, to supplement the CBR approach off-line, models can be used to verify new cases created by the adaptation process, or models can be used to initialize a case-base for a new product. And, in , models are generated from available case data. Fuzzy logic has been combined with model-based reasoning, particularly in the domain of analog circuit diagnosis. Circuit measurements are represented using fuzzy values, and inferences are propagated using fuzzy techniques BIB003 , , BIB001 , BIB002 . Finally, in , a proposed hybrid architecture employing models, cases, and fuzzy inference for diagnosing microprocessor-based boards is described.
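One simple way to realize the model/case combination discussed above is to let a (possibly incomplete) model propose candidate faults and then rank or prune those candidates by their support in past cases. The sketch below is only an illustration under that assumption; the test-coverage model and the cases are hypothetical.

def model_candidates(failed_tests):
    # A crude structural model: every component covered by a failed test is suspect.
    coverage = {"T1": {"U1", "U2"}, "T2": {"U2", "U3"}}
    suspects = set()
    for t in failed_tests:
        suspects |= coverage.get(t, set())
    return suspects

def case_support(candidate, failed_tests, case_base):
    # Fraction of past cases with the same failed-test pattern that blamed this candidate.
    matching = [c for c in case_base if c["failed"] == failed_tests]
    if not matching:
        return 0.0
    return sum(c["fault"] == candidate for c in matching) / len(matching)

case_base = [{"failed": {"T1", "T2"}, "fault": "U2"},
             {"failed": {"T1", "T2"}, "fault": "U2"},
             {"failed": {"T1", "T2"}, "fault": "U3"}]

failed = {"T1", "T2"}
ranked = sorted(model_candidates(failed),
                key=lambda c: case_support(c, failed, case_base),
                reverse=True)
print(ranked)   # candidates the past cases never supported fall to the end of the list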
Fault diagnosis of electronic systems using intelligent techniques: a review <s> XI. FUTURE DIRECTIONS <s> Abstract This paper describes a device-independent diagnostic program called dart. dart differs from previous approaches to diagnosis taken in the Artificial Intelligence community in that it works directly from design descriptions rather than mycin -like symptom-fault rules. dart differs from previous approaches to diagnosis taken in the design-automation community in that it is more general and in many cases more efficient. dart uses a device-independent language for describing devices and a device-independent inference procedure for diagnosis. The resulting generality allows it to be applied to a wide class of devices ranging from digital logic to nuclear reactors. Although this generality engenders some computational overhead on small problems, it facilitates the use of multiple design descriptions and thereby makes possible combinatoric savings that more than offsets this overhead on problems of realistic size. <s> BIB001 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> XI. FUTURE DIRECTIONS <s> This survey provides a conceptual introduction to ontologies and their role in information systems and AI. The authors also discuss how ontologies clarify the domain's structure of knowledge and enable knowledge sharing. <s> BIB002 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> XI. FUTURE DIRECTIONS <s> Abstract Diagnosis and repair operations are often major bottlenecks in electronics circuit assembly operations. Increasing board density and circuit complexity have made fault diagnosis difficult. But, with shrinking product life cycles and increasing competition, quick diagnosis and feedback is critical for cost control, process improvement, and timely product introduction. This paper describes a case-based diagnosis support system to improve the effectiveness and efficiency of circuit diagnosis in electronics assembly facilities. The system stores individual diagnostic instances rather than general rules and algorithmic procedures, and prioritizes the tests during the sequential testing process. Its knowledge base grows as new faults are detected and diagnosed by the analyzers. The system provides distributed access to multiple users, and incorporates on-line updating features that make it quick to adapt to changing circumstances. Because it is easy to install and update, this method is well-suited for real manufacturing applications. We have implemented a prototype version, and tested the approach in an actual electronics assembly environment. We describe the system's underlying principles, discuss methods to improve diagnostic effectiveness through principled test selection and sequencing, and discuss managerial implications for successful implementation. <s> BIB003 </s> Fault diagnosis of electronic systems using intelligent techniques: a review <s> XI. FUTURE DIRECTIONS <s> I report on my experience over the past few years in introducing automated, model-based diagnostic technologies into industrial settings. In partic-ular, I discuss the competition that this technology has been receiving from handcrafted, rule-based diagnostic systems that has set some high standards that must be met by model-based systems before they can be viewed as viable alternatives. 
The battle between model-based and rule-based approaches to diagnosis has been over in the academic literature for many years, but the situation is different in industry where rule-based systems are dominant and appear to be attractive given the considerations of efficiency, embeddability, and cost effectiveness. My goal in this article is to provide a perspective on this competition and discuss a diagnostic tool, called DTOOL/CNETS, that I have been developing over the years as I tried to address the major challenges posed by rule-based systems. In particular, I discuss three major features of the developed tool that were either adopted, designed, or innovated to address these challenges: (1) its compositional modeling approach, (2) its structure-based computational approach, and (3) its ability to synthesize embeddable diagnostic systems for a variety of software and hardware platforms. <s> BIB004
As electronic systems increase in complexity, the need for automated diagnostic tools has become more acute. This is exacerbated by reduced time-to-market and shorter product lifecycles, leaving little development time available for diagnostics. Although much research has been carried out in the area, much remains to be done, particularly in the deployment of useful tools which save dollars in real applications. Without a return on investment, there will be no implementation and no deployment. Some issues for future research are briefly discussed in this section. 1) Most complex electronic systems are now microprocessor or digital signal processor (DSP) driven. Most research has concentrated on hardware-only systems which consist of inputs, circuitry, and outputs. Processor-based boards involve the tight integration of hardware and software and, therefore, present additional problems, including the following. - Software test programs are generally used to test the hardware, but often these cannot be started if the board is defective. - The test programs often only provide a pass/fail result. What is an alternative test architecture which includes diagnosis without increasing the cost of test generation? On a manufacturing line, diagnosis is currently performed off-line using expensive debug technicians because diagnosis will often slow down the rate of production and, therefore, increase costs. 2) As product lifecycles reduce, fast deployment is a key issue . For example, many PC systems have a lifecycle of three months. Developing diagnostic models is time-consuming unless CAD data can be used BIB001 . Using cases suffers from the initial lack of suitable cases, and a three- to six-month lifecycle does not give enough time to overcome this. 3) In BIB004 , it is claimed that rule-based approaches are prevalent in industry and that the deployment of model-based approaches has been delayed by the perception that model-based solutions require specialized knowledge to enable implementation. To overcome this, a tool which converts simple block diagrams of a system to a causal model is presented. Clearly, tools which simplify the development of intelligent diagnostic solutions are required, and these tools must cater to the needs of engineers who may have little knowledge of AI. 4) Models based on structure and behavior have problems when scaled up to large circuits . Particularly, representing devices with complex behaviors (e.g., a Pentium microprocessor) continues to be a problem . A suitable ontology BIB002 or representation vocabulary is needed for the electronic system domain, with specific representations for particular device types. For example, in , a structure/behavior model-based solution with a representation vocabulary suited only to the domain of switch-mode power supplies is deployed successfully. In comparison, the more generic MBR solutions have not, to our knowledge, been successfully applied to complex real-world circuit diagnosis. 5) Structure/behavior models use a correctly functioning model. What about defects which change the structure of the model (e.g., a bridging fault), thus making the model incomplete ? 6) Hybrid solutions form a continuing area of investigation, particularly the combined use of models and cases. Models suffer from the complexity versus completeness issue. If too complex, diagnosis can become intractable. If incomplete, diagnosis can be rapid but inaccurate. Conversely, CBR only becomes accurate after a period of deployment.
Therefore, cases can be used to supplement and improve the diagnosis of an incomplete model, and models can be used to initialize and verify cases. However, what complexity of model, supplemented by cases, will provide fast and accurate diagnosis from initial deployment, where no cases are available, yet is simple enough to be developed within an acceptable timeframe , ? 7) Most CBR solutions only collect cases which are relevant to a specific system type. A new product means starting all over again. Is it possible to collect generic cases or experiences? For example, a human technician can carry experiences learned on old products to new products. Can cases be stored in a more generalized way? 8) Collecting diagnostic information using probing forms part of many past works on circuit diagnosis. However, with modern circuit boards, probing is becoming less of an option as packaging densities increase. More information will have to be collected via diagnostic tests . 9) Design for test (DFT) has become more prominent as system test becomes more difficult. Can DFT strategies incorporate diagnosis without compromising test cost and quality? 10) Electronic systems diagnosis is an expensive activity requiring high skill. As part of manufacturing, it is performed off-line by debug technicians. Using automated techniques, can it be performed as part of an on-line test BIB003 or can it be performed off-line by operators ? XII. SUMMARY Increasing costs, shorter product lifecycles, and rapid changes in technology are driving the need for automated diagnosis. Although research has been active over the last two decades, much remains to be done. Primarily, the developed techniques must be scaled up to deal with current and future technologies, but with improved development times and costs. Otherwise, acceptance will be difficult, particularly in cost-sensitive domains such as PCs and consumer electronics. To date, there have been some applications, but the general use of intelligent diagnostic solutions for electronic system diagnosis has yet to happen.
Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Introduction <s> Involvement in warfare can have dramatic consequences for the mental health and well-being of military personnel. During the 20th century, US military psychiatrists tried to deal with these consequences while contributing to the military goal of preserving manpower and reducing the debilitating impact of psychiatric syndromes by implementing screening programs to detect factors that predispose individuals to mental disorders, providing early intervention strategies for acute war-related syndromes, and treating long-term psychiatric disability after deployment.The success of screening has proven disappointing, the effects of treatment near the front lines are unclear, and the results of treatment for chronic postwar syndromes are mixed.After the Persian Gulf War, a number of military physicians made innovative proposals for a population-based approach, anchored in primary care instead of specialty-based care. This approach appears to hold the most promise for the future. <s> BIB001 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Introduction <s> People with mental illness are more likely to suffer physical health problems than comparable populations who do not have mental illness. There is evidence to suggest that exercise, as well has having obvious physical benefits, also has positive effects on mental health. There is a distinct paucity of research testing its effects on young people seeking help for mental health issues. Additionally, it is generally found that compliance with prescribed exercise programmes is low. As such, encouraging young people to exercise at levels recommended by national guidelines may be unrealistic considering their struggle with mental health difficulties. It is proposed that an exercise intervention tailored to young people's preferred intensity may improve mental health outcomes, overall quality of life, and reduce exercise attrition rates. A sequential mixed methods design will be utilised to assess the effectiveness of an individually tailored exercise programme on the mental health outcomes of young people with depression. The mixed methods design incorporates a Randomised Controlled Trial (RCT), focus groups and interviews and an economic evaluation. Participants: 158 young people (14-17 years) recruited from primary care and voluntary services randomly allocated to either the intervention group or control group. Intervention group: Participants will undertake a 12 week exercise programme of 12 × 60 minutes of preferred intensity aerobic exercise receiving motivational coaching and support throughout. Participants will also be invited to attend focus groups and 1-1 interviews following completion of the exercise programme to illicit potential barriers facilitators to participation. Control group: Participants will receive treatment as usual. Primary Outcome measure: Depression using the Children's Depression Inventory 2 (CDI-2). Secondary Outcome measures: Quality of Life (EQ-5D), physical fitness (Borg RPE scale, heart rate), incidents of self-harm, treatment received and compliance with treatment, and the cost effectiveness of the intervention. Outcome measures will be taken at baseline, post intervention and 6 month follow up. 
The results of this study will inform policy makers of the effectiveness of preferred intensity exercise on the mental health outcomes of young people with depression, the acceptability of such an intervention to this population and its cost effectiveness. ClinicalTrials.gov: NCT01474837 <s> BIB002 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Introduction <s> Estimates of substance use and other mental health disorders of veterans (N = 269) who returned to predominantly low-income minority New York City neighborhoods between 2009 and 2012 are presented. Although prevalences of posttraumatic stress disorder, traumatic brain injury, and depression clustered around 20%, the estimated prevalence rates of alcohol use disorder, drug use disorder, and substance use disorder were 28%, 18%, and 32%, respectively. Only about 40% of veterans with any diagnosed disorder received some form of treatment. For alcohol use disorder, the estimate of unmet treatment need was 84%, which is particularly worrisome given that excessive alcohol use was the greatest substance use problem. <s> BIB003
A review of the literature was conducted to investigate the benefits of introducing aerobic exercise to veterans in an inpatient mental health unit and the possibility of reducing symptoms of mental illness. The World Health Organization ([WHO], 2003) reports indicate that mental health is a growing issue worldwide, with more than 450 million people suffering from a mental illness. Schizophrenia and major depressive disorders are among the top 10 mental illness diagnoses (Oertel-Knochel et al., 2014) and are known to negatively affect role functioning and relationships for the individual, loved ones, and society . Exercise may help the veteran reduce symptoms of mental illness BIB002 . Aerobic exercise can be used to help reduce psychopathological or abnormal behavioral symptoms while improving cognitive ability in people with mental illness (Oertel-Knochel et al., 2014) . Physical exercise has been shown to increase neurogenesis and develop neuroplasticity, thereby increasing cognitive functioning (Oertel-Knochel et al., 2014) . Furthermore, aerobic exercise has been shown to increase the prevalence of important growth factors including glucocorticoids, brain-derived neurotrophic factor (BDNF), insulin-like growth factor-1 (IGF-1), and vascular endothelial growth factor ([VEGF] , Oertel-Knochel et al., 2014) . Growth factors such as VEGF are essential to the growth of new capillaries that increase neuroplasticity, which positively affects cognitive ability and mood and, in turn, decreases the symptoms of mental illness. Many people suffer from mental illness, and veterans are at high risk BIB001 . Veterans are prone to mental health issues through exposure to combat, conflict, death, carnage, chronic pain, and separation from loved ones BIB003 . Repeated exposure to life-threatening situations can lead to mental health problems, as seen in veterans BIB001 . Aerobic exercise could help veterans better cope with their mental health disorders and be more productive in their lives, their relationships with others, and society. Veterans on an inpatient mental health unit may benefit from regular aerobic exercise as an adjunct intervention to their therapy. On an inpatient unit, most of the veteran's day is filled with therapy, eating, and sleeping, leading to a sedentary lifestyle. Physical exercise could be used as an intervention to distract the veteran and to teach an effective coping strategy for dealing with mental illness. Therefore, the purpose of this literature review was to explore the literature on the benefits of aerobic exercise as a non-pharmacological intervention to help veterans who suffer from mental illness in an inpatient mental health setting.
Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Methods and Designs of Studies <s> This study investigated qualitatively the experiences of men who took part in a 10 week integrated exercise/psychosocial mental health promotion programme, “Back of the Net” (BTN). 15 participants who completed the BTN programme were recruited to participate in either a focus group discussion (N = 9) or individual interview (N = 6). A thematic analytic approach was employed to identify key themes in the data. Results indicated that participants felt that football was a positive means of engaging men in a mental health promotion program. Perceived benefits experienced included perceptions of mastery, social support, positive affect and changes in daily behaviour. The findings support the value of developing gender specific mental health interventions to both access and engage young men. <s> BIB001 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Methods and Designs of Studies <s> OBJECTIVE ::: To evaluate the efficacy of a home-based exercise programme added to usual medical care for the treatment of depression. ::: ::: ::: DESIGN ::: Prospective, two group parallel, randomised controlled study. ::: ::: ::: SETTING ::: Community-based. ::: ::: ::: PATIENTS ::: 200 adults aged 50 years or older deemed to be currently suffering from a clinical depressive illness and under the care of a general practitioner. ::: ::: ::: INTERVENTIONS ::: Participants were randomly allocated to either usual medical care alone (control) or usual medical care plus physical activity (intervention). The intervention consisted of a 12-week home-based programme to promote physical activity at a level that meets recently published guidelines for exercise in people aged 65 years or over. ::: ::: ::: MAIN OUTCOME MEASUREMENTS ::: Severity of depression was measured with the structured interview guide for the Montgomery-Asberg Depression Rating Scale (SIGMA), and depression status was assessed with the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I). ::: ::: ::: RESULTS ::: Remission of depressive illness was similar in both the usual care (59%) and exercise groups (63%; OR = 1.18, 95% CI 0.61 to 2.30) at the end of the 12-week intervention, and again at the 52-week follow-up (67% vs 68%) (OR=1.07, 95% CI 0.56 to 2.02). There was no change in objective measures of fitness over the 12-week intervention among the exercise group. ::: ::: ::: CONCLUSIONS ::: This home-based physical activity intervention failed to enhance fitness and did not ameliorate depressive symptoms in older adults, possibly due to a lack of ongoing supervision to ensure compliance and optimal engagement. <s> BIB002 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Methods and Designs of Studies <s> Abstract We developed a physical exercise intervention aimed at improving multiple determinants of physical performance in severe mental illness. A sample of 12 (9M, 3F) overweight or obese community-dwelling patients with schizophrenia ( n =9) and bipolar disorder ( n =3) completed an eight-week, high-velocity circuit resistance training, performed twice a week on the computerized Keiser pneumatic exercise machines, including extensive pre/post physical performance testing. 
Participants showed significant increases in strength and power in all major muscle groups. There were significant positive cognitive changes, objectively measured with the Brief Assessment of Cognition Scale: improvement in composite scores, processing speed and symbol coding. Calgary Depression Scale for Schizophrenia and Positive and Negative Syndrome Scale total scores improved significantly. There were large gains in neuromuscular performance that have functional implications. The cognitive domains that showed the greatest improvements (memory and processing speed) are most highly predictive of disability in schizophrenia. Moreover, the improvements seen in depression suggest this type of exercise intervention may be a valuable add-on therapy for bipolar depression. <s> BIB003 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Methods and Designs of Studies <s> The health benefits of exercise are well established, yet individuals with serious mental illness (SMI) have a shorter life expectancy due in large part to physical health complications associated with poor diet and lack of exercise. There is a paucity of research examining exercise in this population with the majority of studies having examined interventions with limited feasibility and sustainability. Before developing an intervention, a thorough exploration of client and clinician perspectives on exercise and its associated barriers is warranted. Twelve clients and fourteen clinicians participated in focus groups aimed at examining exercise, barriers, incentives, and attitudes about walking groups. Results indicated that clients and clinicians identified walking as the primary form of exercise, yet barriers impeded consistent participation. Distinct themes arose between groups; however, both clients and clinicians reported interest in a combination group/pedometer based walking program for individuals with SMI. Future research should consider examining walking programs for this population. <s> BIB004 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Methods and Designs of Studies <s> Previous qualitative studies have found that exercise may facilitate symptomatic and functional recovery in people with long-term schizophrenia. This study examined the perceived effects of exercise as experienced by people in the early stages of psychosis, and explored which aspects of an exercise intervention facilitated or hindered their engagement. Nineteen semi-structured interviews were conducted with early intervention service users who had participated in a 10-week exercise intervention. Interviews discussed people’s incentives and barriers to exercise, short- and long-term effects, and opinions on optimal interventions. A thematic analysis was applied to determine the prevailing themes. The intervention was perceived as beneficial and engaging for participants. The main themes were (a) exercise alleviating psychiatric symptoms, (b) improved self-perceptions following exercise, and (c) factors determining exercise participation, with three respective sub-themes for each. Participants explained how exercise had improved their mental health, improved their confidence and given them a sense of achievement. Autonomy and social support were identified as critical factors for effectively engaging people with first-episode psychosis in moderate-to-vigorous exercise. 
Implementing such programs in early intervention services may lead to better physical health, symptom management and social functioning among service users. Current Controlled Trials ISRCTN09150095. Registered 10 December 2013. <s> BIB005 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Methods and Designs of Studies <s> BackgroundIndividuals with a severe mental illness (SMI) are at least two times more likely to suffer from metabolic co-morbidities, leading to excessive and premature deaths. In spite of the many physical and mental health benefits of physical activity (PA), individuals with SMI are less physically active and more sedentary than the general population. One key component towards increasing the acceptability, adoption, and long-term adherence to PA is to understand, tailor and incorporate the PA preferences of individuals. Therefore, the objective of this study was to determine if there are differences in PA preferences among individuals diagnosed with different psychiatric disorders, in particular schizophrenia or bipolar disorder (BD), and to identify PA design features that participants would prefer.MethodsParticipants with schizophrenia (n = 113) or BD (n = 60) completed a survey assessing their PA preferences.ResultsThere were no statistical between-group differences on any preferred PA program design feature between those diagnosed with schizophrenia or BD. As such, participants with either diagnosis were collapsed into one group in order to report PA preferences. Walking (59.5 %) at moderate intensity (61.3 %) was the most popular activity and participants were receptive to using self-monitoring tools (59.0 %). Participants were also interested in incorporating strength and resistance training (58.5 %) into their PA program and preferred some level of regular contact with a fitness specialist (66.0 %).ConclusionsThese findings can be used to tailor a physical activity intervention for adults with schizophrenia or BD. Since participants with schizophrenia or BD do not differ in PA program preferences, the preferred features may have broad applicability for individuals with any SMI. <s> BIB006 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Methods and Designs of Studies <s> Abstract Introduction Measurement of symptoms domains and their response to treatment in relative isolation from diagnosed mental disorders has gained new urgency, as reflected by the National Institute of Mental Health's introduction of the Research Domain Criteria (RDoC). The Snaith Hamilton Pleasure Scale (SHAPS) and the Motivation and Energy Inventory (MEI) are two scales measuring positive valence symptoms. We evaluated the effect of exercise on positive valence symptoms of Major Depressive Disorder (MDD). Methods Subjects in the Treatment with Exercise Augmentation for Depression (TREAD) study completed self-reported SHAPS and MEI during 12 weeks of exercise augmentation for depression. We evaluated the effect of exercise on SHAPS and MEI scores, and whether the changes were related to overall MDD severity measured with the Quick Inventory of Depression Symptomatology (QIDS). Results SHAPS and MEI scores significantly improved with exercise. MEI score change had larger effect size and greater correlation with change in QIDS score. MEI also showed significant moderator and mediator effects of exercise in MDD. 
Limitations Generalizability to other treatments is limited. This study lacked other bio-behavioral markers that would enhance understanding of the relationship of RDoC and the measures used. Conclusions Positive valence symptoms improve with exercise treatment for depression, and this change correlates well with overall outcome. Motivation and energy may be more clinically relevant to outcome of exercise treatment than anhedonia. <s> BIB007
The findings from the review focused on 10 peer-reviewed articles. Frameworks of the reviewed articles were not specifically identified in the literature; however, the articles encompassed a variety of methods and designs. Designs ranged from qualitative studies BIB005 BIB001 to quantitative (Bonsaksen & Lerdal, 2012; Oertel-Knochel et al., 2014; BIB002 Powers et al., 2015; BIB003 BIB006 BIB007 ) and mixed-methods designs BIB004 . Methods found were exploratory (Bonsaksen & Lerdal, 2012; BIB004 BIB005 ), experimental (Oertel-Knochel et al., 2014; BIB003 BIB007 ), ethnographic (McArdle et al., 2012) , prospective BIB002 , pilot (Powers et al., 2015) , and cross-sectional (Bonsaksen & Lerdal, 2012; BIB006 ).