A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Other Glucose Detection Platforms <s> In this report, a paper-based micro-calorimetric biochemical detection method is presented. Calorimetric detection of biochemical reactions is demonstrated as an extension of current colorimetric and electrochemical detection mechanisms of paper-based biochemical analytical systems. Reaction and/or binding temperature of glucose/glucose oxidase, DNA/hydrogen peroxide, and biotin/streptavidin, are measured by the paper-based micro-calorimeter. Commercially available glucose calibration samples of 0.05, 0.15 and 0.3% wt/vol concentration are used for comparing the device performance with a commercially available glucose meter (electrochemical detection). The calorimetric glucose detection demonstrates a measurement error less than 2%. The calorimetric detection results of DNA concentrations from 0.9 to 7.3 mg/mL and temperature changes in biotin and streptavidin reaction are presented to demonstrate the feasibility of integrating the calorimetric detection method with paper based microfluidic devices. <s> BIB001 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Other Glucose Detection Platforms <s> A simple and inexpensive method to fabricate a colloidal CdSe/ZnS quantum dots-modified paper-based assay for glucose is herein reported. The circular paper sheets were uniformly loaded and displayed strong fluorescence under a conventional hand-held UV lamp (365 nm). The assay is based on the use of glucose oxidase enzyme (GOx), which impregnated the paper sheets, producing H2O2 upon the reaction with the glucose contained in the samples. After 20 min of exposure, the fluorescence intensity changed due to the quenching caused by H2O2. To obtain a reading, the paper sheets were photographed under 365 nm excitation using a digital camera. Several parameters, including the amount of QD, sample pH, and amount of GOx were optimized to maximize the response to glucose. The paper-based assay showed a sigmoidal-shaped response with respect to the glucose concentration in the 5-200 mg·dL-1 range (limit of detection of 5 μg·dL-1), demonstrating their potential use for biomedical applications. <s> BIB002 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Other Glucose Detection Platforms <s> In this report, we present a paper membrane-based surface-enhanced Raman scattering (SERS) platform for the determination of blood glucose level using a nitrocellulose membrane as substrate paper, and the microfluidic channel was simply constructed by wax-printing method. The rod-shaped gold nanorod particles were modified with 4-mercaptophenylboronic acid (4-MBA) and 1-decanethiol (1-DT) molecules and used as embedded SERS probe for paper-based microfluidics. The SERS measurement area was simply constructed by dropping gold nanoparticles on nitrocellulose membrane, and the blood sample was dropped on the membrane hydrophilic channel. While the blood cells and proteins were held on nitrocellulose membrane, glucose molecules were moved through the channel toward the SERS measurement area. Scanning electron microscopy (SEM) was used to confirm the effective separation of blood matrix, and total analysis is completed in 5 min. In SERS measurements, the intensity of the band at 1070 cm(-1) which is attributed to B-OH vibration decreased depending on the rise in glucose concentration in the blood sample. 
The glucose concentration was found to be 5.43 ± 0.51 mM in the reference blood sample by using a calibration equation, and the certified value for glucose was 6.17 ± 0.11 mM. The recovery of the glucose in the reference blood sample was about 88 %. According to these results, the developed paper-based microfluidic SERS platform has been found to be suitable for use for the detection of glucose in blood samples without any pretreatment procedure. We believe that paper-based microfluidic systems may provide a wide field of usage for paper-based applications. <s> BIB003 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Other Glucose Detection Platforms <s> In this study, a turn-on paper-based optical analytical system with a rapid, sensitive and quantitative response for glucose was developed. The luminescence sensing material, crystalline iridium(III)-Zn(II) coordination polymers, or Ir-Zne, was grown electrochemically on stainless steel mesh and then deposited on filter paper. This sensing substrate was subsequently built up under glucose oxidase encapsulated in hydrogel and then immobilized on egg membrane with the layer-by-layer method. Once the glucose solution was dropped onto the paper, the oxygen content was depleted simultaneously with a concomitant increase in the phosphorescence of Ir-Zne. The detection limit for glucose was 0.05 mM. The linear dynamic range for the determination of glucose was 0.05–8.0 mM with a correlation coefficient (R2) of 0.9956 (y=68.11 [glucose]−14.72). The response time was about 0.12 s, and the sample volume was less than 5 μL. The effects of mesh size, buffer concentration, pH, enzyme concentration, temperature, and interference, and the stability of the biosensor, have also been studied in detail. Finally, the biosensor was successfully applied to the determination of glucose in human serum. <s> BIB004 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Other Glucose Detection Platforms <s> The analytical performance for paper spray (PS) using a new insert sample approach based on paper with paraffin barriers (PS-PB) is presented. The paraffin barrier is made using a simple, fast and cheap method based on the stamping of paraffin onto a paper surface. Typical operation conditions of paper spray such as the solvent volume applied on the paper surface, and the paper substrate type are evaluated. A paper substrate with paraffin barriers shows better performance on analysis of a range of typical analytes when compared to the conventional PS-MS using normal paper (PS-NP) and PS-MS using paper with two rounded corners (PS-RC). PS-PB was applied to detect sugars and their inhibitors in sugarcane bagasse liquors from a second generation ethanol process. Moreover, the PS-PB proved to be excellent, showing results for the quantification of glucose in hydrolysis liquors with excellent linearity (R2 = 0.99), limits of detection (2.77 mmol L−1) and quantification (9.27 mmol L−1). The results are better than for PS-NP and PS-RC. The PS-PB was also excellent in performance when compared with the HPLC-UV method for glucose quantification on hydrolysis of liquor samples. 
<s> BIB005 | In addition to the conventional colorimetric and electrochemical techniques for glucose detection, other techniques, such as luminescence BIB004 , fluorescence BIB002 , calorimetry BIB001 , mass spectrometry (MS) BIB005 and surface-enhanced Raman spectroscopy (SERS) BIB003 , have been applied to µPADs for rapid glucose diagnostics. Chen et al. BIB004 developed a turn-on paper-based phosphorescence device using Ir-Zne, a luminescent sensing material, combined with GOx via a layer-by-layer technique. When glucose was present, the oxygen content was depleted and the phosphorescence of Ir-Zne increased concomitantly. The linear calibration range was from 0.05 to 8.0 mM with a correlation coefficient of 0.9956, and the LOD was 0.05 mM. Durán et al. BIB002 utilized colloidal CdSe/ZnS quantum dots (Q-dots) to produce an optical paper-based device for glucose detection. Paper loaded with Q-dots displays strong fluorescence under a UV lamp; H2O2 generated by GOx quenches the fluorescence intensity after a 20 min exposure. Calorimetric detection has been demonstrated as an extension of the current detection mechanisms of colorimetric and electrochemical µPADs. Davaji et al. BIB001 developed a calorimetric µPAD that detects glucose through the heat change of the glucose/GOx reaction. Colletes et al. BIB005 presented a new sample insertion method based on paper with paraffin barriers (PS-PB) and applied it to glucose detection with a LOD of 2.77 mM. A paper membrane-based SERS platform was developed by Torul et al. BIB003 for glucose determination in blood using a nitrocellulose membrane and a wax-printed microfluidic channel. Gold nanoparticles modified with 4-mercaptophenylboronic acid (4-MBA) and 1-decanethiol (1-DT) molecules were used as SERS probes for the µPADs. Glucose molecules moved through the channel toward the measuring area, which was constructed by dropping AuNPs on the membrane. The glucose concentration in a certified reference blood sample was measured as 5.43 ± 0.51 mM (certified value: 6.17 ± 0.11 mM), and the device may find a wide range of applications in daily life. |
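To make the use of such calibration curves concrete, the sketch below inverts the linear calibration reported by Chen et al. BIB004 (y = 68.11[glucose] − 14.72, linear from 0.05 to 8.0 mM) to turn a phosphorescence reading into a concentration estimate. This is a minimal illustration only; the function name and the example reading are assumptions, not details from the original paper.

```python
# Minimal sketch: estimate glucose concentration from a phosphorescence
# reading using the linear calibration reported in BIB004
# (y = 68.11 * [glucose] - 14.72, linear range 0.05-8.0 mM).
# The function name and the example reading are illustrative assumptions.

SLOPE = 68.11               # signal units per mM (from BIB004)
INTERCEPT = -14.72          # signal units (from BIB004)
LINEAR_RANGE = (0.05, 8.0)  # mM

def glucose_from_signal(y: float) -> float:
    """Invert the calibration line to obtain a concentration in mM."""
    c = (y - INTERCEPT) / SLOPE
    lo, hi = LINEAR_RANGE
    if not lo <= c <= hi:
        raise ValueError(f"{c:.3f} mM lies outside the calibrated range {LINEAR_RANGE}")
    return c

if __name__ == "__main__":
    reading = 326.0  # hypothetical phosphorescence intensity
    print(f"Estimated glucose: {glucose_from_signal(reading):.2f} mM")  # ~5.00 mM
```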
A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Conclusions <s> The fabrication of toner-based microfluidic devices to perform clinical diagnostics with capillary action and colorimetric detection is described in this report. Test zones and microfluidic channels were drawn in a graphic software package and laser printed on a polyester film. The printed layout and its mirror image were aligned with an intermediary cut-through polyester film and then thermally laminated together at 150 °C at 60 cm/min to obtain a channel with ca. 100-μm depth. Colorimetric assays for glucose, protein, and cholesterol were successfully performed using a desktop scanner. The limit of detection (LD) values found for protein, cholesterol, and glucose were 8, 0.2, and 0.3 mg/mL, respectively. The relative standard deviation (RSD) values for an interdevices comparison were 6%, 1%, and 3% for protein, cholesterol, and glucose, respectively. Bioassays were successfully performed on toner-based devices stored at different temperatures during five consecutive days without loss of activity. <s> BIB001 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Conclusions <s> Here we report development of a smartphone app (application) that digitizes the colours of a colorimetric sensor array. A conventional colorimetric sensor array consists of multiple paper-based sensors, and reports the detection results in terms of colour change. Evaluation of the colour changes is normally done by the naked eye, which may cause uncertainties due to personal subjectivity and the surrounding conditions. Solutions have been particularly sought in smartphones as they are capable of spectrometric functions. Our report specifically focuses on development of a practical app for immediate point-of-care (POC) multi-analyte sensing without additional devices. First, the individual positions of the sensors are automatically identified by the smartphone; second, the colours measured at each sensor are digitized based on a correction algorithm; and third, the corrected colours are converted to concentration values by pre-loaded calibration curves. All through these sequential processes, the sensor array taken in a smartphone snapshot undergoes laboratory-level spectrometry. The advantages of inexpensive and convenient paper-based colorimetry and the ubiquitous smartphone are tied to achieve a ready-to-go POC diagnosis. <s> BIB002 | Rapid and convenient tests for glucose have become essential in underdeveloped and developing countries, as glucose is an important indicator of metabolic activity. Since the microfluidic paper-based analytical device was proposed by the Harvard group in 2007, it has attracted extensive attention in a wide range of applications. Numerous methods have been developed to fabricate µPADs, and multiple detection techniques have been applied to glucose diagnostics. Colorimetric and electrochemical detection are undoubtedly the most important techniques. Colorimetric detection is more widely used than electrochemical detection, although its sensitivity is lower. With the development of point-of-care testing (POCT), portable paper-based analytical devices are expected to emerge. The devices tend toward miniaturization, and spectrometric functions or electronic measurements could be integrated into smartphones BIB002 .
Alternative materials such as toner BIB001 have also been investigated for clinical glucose diagnostics, avoiding the cumbersome fabrication process. In addition, exploration of the biocompatibility and toxicity of papers offers the potential for developing minimally invasive or non-invasive µPADs for real-time glucose detection. Improvements in the stability and accuracy of glucose detection will help bring these devices to commercial availability in the future. |
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Introduction <s> The ramifications of assistive technology for both current and future service provision are wide. In recent years, policy makers have become increasingly aware of the potential of these services to maintain older and disabled people in their own homes. The purpose of this paper is to report on a literature review and provide illustrations of how the evidence can be used to underpin the development of assistive technology services for older and disabled people and disabled children. The aim is to support the development of user‐focused, accessible services. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Introduction <s> The potential of technology to connect people and provide access to education, commerce, employment and entertainment has never been greater or more rapidly changing. Communication technologies and new media promise to ‘revolutionize our lives’ by breaking down barriers and expanding access for disabled people. Yet, it is also true that technology can create unexpected and undercritiqued forms of social exclusion for disabled people. In addition to exploring some of the ways that even (or especially) assistive technology can result in new forms of social exclusion, we also propose alternative ways of thinking about inclusive and accessible (as opposed to assistive) technology and provide some very practical ways that accessible technologies would promote greater access and flexibility for disabled students and adults. We contend that technology should be conceived of as a global, accessible and inclusive concept, not one that requires a qualifier based on who it is for. <s> BIB002 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Introduction <s> There are around 285 million visually impaired people worldwide, and around 370,000 people are registered as blind or partially sighted in the UK. Ongoing advances in information technology (IT) are increasing the scope for IT-based mobile assistive technologies to facilitate the independence, safety, and improved quality of life of the visually impaired. Research is being directed at making mobile phones and other handheld devices accessible via our haptic (touch) and audio sensory channels. We review research and innovation within the field of mobile assistive technology for the visually impaired and, in so doing, highlight the need for successful collaboration between clinical expertise, computer science, and domain users to realize fully the potential benefits of such technologies. We initially reflect on research that has been conducted to make mobile phones more accessible to people with vision loss. We then discuss innovative assistive applications designed for the visually impaired that are either delivered via mainstream devices and can be used while in motion (e.g., mobile phones) or are embedded within an environment that may be in motion (e.g., public transport) or within which the user may be in motion (e.g., smart homes). <s> BIB003 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Introduction <s> This paper proposes a novel concept for helping the visually impaired know what kind of object there is in an environment. 
This concept is implemented as a cane system that selects a target object based on a user's demand, recognizes the object from depth data obtained by a Microsoft Kinect sensor, and returns the recognition results via a tactile device. The proposed system is evaluated through a user study where one blindfolded subject actually uses the system to find chairs in an experimental environment. The experimental results indicate that the system is promising as a means of helping the visually impaired recognize objects. <s> BIB004 | The World Health Organization (WHO) reported in 2013 that 285 million people are estimated to be visually impaired worldwide: 39 million are blind and 246 million suffer from low vision. About 90% of the world's visually impaired live in developing countries, and 82% of people living with blindness are aged 50 and above. Regrettably, this percentage is expected to increase in the coming decades. Visual impairment has a significant impact on individuals' quality of life, including their ability to work and to develop personal relationships. Almost half (48%) of the visually impaired feel ''moderately'' or ''completely'' cut off from people and things around them BIB003 . There are four levels of visual function, according to the International Classification of Diseases (ICD-10, Update and Revision 2006): normal vision, moderate visual impairment, severe visual impairment and blindness. Moderate visual impairment combined with severe visual impairment may be grouped under the term ''low vision''; low vision combined with blindness represents all forms of visual impairment. In order to overcome or lessen the difficulties imposed by visual impairment, extensive research has been dedicated to building assistive systems. The need for assistive technologies has long been a constant in the daily lives of people with visual impairment and will remain so in future years. There are various definitions of assistive technology in general. Common to all of them, however, is the concept of an item or piece of equipment that enables individuals with disabilities to enjoy full inclusion and integration into society BIB002 BIB001 . Traditional assistive technologies for the blind include white canes, guide dogs, screen readers, and so forth. However, the detectable range of white canes is very short (at most 1.5 m) and, consequently, the visually impaired can only immediately detect nearby obstacles at ground level. Guide dogs are also used by the visually impaired to navigate to their destinations while avoiding the dangers they may encounter along their path. However, it is difficult to provide a sufficient number of guide dogs because of the long periods of time needed to train them, as well as the high costs associated with their training. Furthermore, it is also quite difficult for the visually impaired to care for the dogs appropriately BIB004 . Modern mobile assistive technologies are becoming more discreet and include (or are delivered via) a wide range of mobile computerized devices, including ubiquitous technologies like mobile phones. Such discreet technologies can help alleviate the cultural stigma associated with the more traditional (and noticeable) assistive devices. Visual impairment imposes many restrictions and specific requirements on human mobility.
The overall objective of this work is to review the assistive technologies that have been proposed by researchers in recent years to address the limitations in user mobility resulting from visual impairment. This work does not focus on the analysis and description of individual systems. Instead, it reviews how technology has been used in recent years to individually address the different tasks related to assistive human navigation, and how the components of traditional navigation systems can be adapted to address the limitations and requirements of visually impaired users. Human navigation, in general, requires estimating the user's location, relating it to its context and finding a way to a specific destination. This work covers these topics in dedicated sections. In this work, the term ''visual impairment'' incorporates any condition that impedes an individual's ability to execute typical daily activities due to visual loss. Because the aim of this work is to present a general review of navigation and orientation assistive technologies for the visually impaired, low vision is not separated from total blindness, and so these terms are used interchangeably. |
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Understanding human navigation <s> This article reports on an experiment undertaken to test the spatiocognitive competence of the visually impaired population in regard to wayfinding. The test consisted of eight basic wayfinding tasks, each representing a particular spatio-cognitive operation. The tasks were executed in a labyrinthian layout allowing for control of the difficulty level of the tasks and limiting extraneous perceptual factors, which tended to interfere with the measure of spatio-cognitive abilities. The experimental groups were composed of congenitally totally blind, adventitiously totally blind, and subjects with a weak visual residue; the control was established by a sighted and a sighted blindfolded group. The sample's 18 subjects per group were matched in terms of age, education, and sex. The performance results of the visually impaired groups in all eight tasks led to rejection of any spatio-cognitive deficiency theory. The performance of the congenitally totally blind group, in particular, shows that spatio-cognitive c... <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Understanding human navigation <s> This paper illustrates the application of cognitive mapping to people with visual impairments and blindness. It gives perspectives on past research, outlines ongoing research, highlights some of the methodological and validity issues arising from this research, and discusses the movement of theory into practice. The findings of three small preliminary studies have been reported, as part of continuing research into the cognitive mapping abilities of blind or visually impaired people. These studies have highlighted the need to use multiple, mutually supportive tests to assess cognitive map knowledge. In light of these findings and the need to move theory into practice, a current research project is outlined. This project seeks to use the knowledge gained from the three projects to design and implement an auditory hypermap system to aid wayfinding and the spatial learning of an area. Finally an agenda for applied research is presented. <s> BIB002 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Understanding human navigation <s> This chapter first presents a review of existing locomotion assistance devices for the blind. These devices are merely proximeters, that measure the distance to the closest obstacles, and convey this information to their users. We introduce the measurement methods (infrared sensors, ultrasonic sensors, laser telemeters) and the user interfaces (sounds and tactile vibrations). Then, we analyse the shortcomings of these systems, and thus explain what additional features new devices could offer. To study the feasibility of such systems, we tackle the different issues raised in the process: localizing users, modeling their environment and adding semantic annotations. Finally, we explain how such devices could fit into a view of ambient intelligence, and how the problems raised extend beyond the field of assistance to blind people. 
<s> BIB003 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Understanding human navigation <s> In this article, recent achievements of cognitive research in geographic information science (GIScience) are reviewed and prospects for future directions discussed. Cognitive research in GIScience concerns human knowledge and knowing involving geographic information and geographic information systems (GIS). It includes both internal mental and external symbolic structures and processes, and is practically motivated by the desire to improve the usability, efficiency, equity, and profitability of geographic information and GIS. Taking 1992 as the start of modern GIScience, recent cognitive research falls into six areas: human factors of GIS, geovisualization, navigation systems, cognitive geo-ontologies, geographic and environmental spatial thinking and memory, and cognitive aspects of geographic education. Future prospects for cognitive GIScience research include recommendations for methods, including eye-movement recordings and fMRI; theoretical approaches, including situated cognition, evolutionary cognition, and cognitive neuroscience; and specific problems, including how users incorporate uncertainty metadata in reasoning and decision making, the role of GIS in teaching K-12 students to think spatially, and the potential detrimental effects of over-reliance on digital navigation systems. <s> BIB004 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Understanding human navigation <s> Haptic Sight is a new interface idea providing immediate spatial information to visually impaired people in order to assist independent walking. The interface idea stems from a thorough investigation in which we studied visually impaired people's indoor walking behavior, decision making process, their unique concept of space, and information needs. The aim of this study is to identify an interface design and investigate an appropriate means of spatial information delivery. <s> BIB005 | Human beings have the ability to acquire and use information obtained from the surrounding environment using their natural sensors. They have developed a number of evolutionary mechanisms that enable the distinction between different objects and the triggering of events and complex processes based on their perception of reality. Cognition concerns knowledge and knowing in intelligent entities, especially human beings, but also nonhuman animals and synthetic computational entities such as robots BIB004 . Cognition includes the mental structures and processes involved in perception, attention, thinking and reasoning, learning, memory, and linguistic and non-linguistic communication. It also includes external symbolic structures and processes, such as maps or written procedures for carrying out formal spatial analysis, which assist internal cognition. Similarly, cognition is often about space, place, or environment, so cognitive acts are quite often of a geographic nature BIB004 . Cognitive mapping BIB002 is of extreme importance for individuals in terms of creating a conceptual model of the surrounding space and objects around them, thereby supporting their interaction with the physical environment BIB003 . In new environments, finding one's way can be time consuming and may require a considerable amount of attention. In this type of scenario, visual impairment is a major limitation to user mobility.
On the one hand, individuals with visual impairments often need the help of sighted people to navigate and cognitively map new environments, which is time consuming, not always available and leads to lower mobility BIB001 . On the other hand, individuals with cognitive impairment may experience difficulty in learning new environments and following directions. Assistive systems for human navigation generally aim to allow their users to safely and efficiently navigate in unfamiliar environments, without getting lost, by dynamically planning the path based on the user's location, while respecting the constraints posed by their special needs. Identifying the specific needs of any impairment is a key point in the development of any assistive system. Using direct observational and interview-based knowledge elicitation methods, researchers of The Haptic Sight study BIB005 tried to gain a better understanding of a visually impaired person's indoor walking behavior and the information required to walk independently. They found that the visually impaired need to be aware of their current location, the direction they are heading, the direction they need to go and the path to their destination. Only after the research team had identified these parameters did they develop a handheld device-based application. In other words, users with visual impairment must be aware of their physical location, their relation to the surrounding environment (context) and the route they must follow to navigate to a desired destination. When designing an assistive system for human navigation, separate processing units (or modules) can address these identified tasks, namely location, orientation, navigation and interface, as shown in Fig. 1 . This work reviews the different ways in which researchers have addressed the use of technology to fill the gaps and needs presented by visual impairment in each of these topics. As with the design of any assistive system, the interface with the user must be adequate to the user's limitations. This work covers this topic in a dedicated section as well. |
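The task decomposition just described (location, orientation, navigation and interface modules) can be made concrete with a small sketch. The class and method names below are illustrative assumptions, not taken from any reviewed system.

```python
# Illustrative sketch of the modular decomposition discussed above
# (location, orientation, navigation, interface). All names here are
# assumptions for illustration, not from a specific reviewed system.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Position:
    x: float
    y: float
    heading_deg: float  # direction the user is facing

class LocationModule(Protocol):
    def current_position(self) -> Position: ...

class OrientationModule(Protocol):
    def describe_surroundings(self, pos: Position) -> str: ...

class NavigationModule(Protocol):
    def next_instruction(self, pos: Position, destination: str) -> str: ...

class InterfaceModule(Protocol):
    def present(self, message: str) -> None: ...  # e.g., speech or haptics

def guidance_step(loc: LocationModule, ori: OrientationModule,
                  nav: NavigationModule, ui: InterfaceModule,
                  destination: str) -> None:
    """One cycle of the assistive loop: locate, contextualize, guide, present."""
    pos = loc.current_position()
    ui.present(ori.describe_surroundings(pos))
    ui.present(nav.next_instruction(pos, destination))
```

Keeping the modules behind narrow interfaces of this kind means, for instance, that an outdoor GPS-based location module can be swapped for an indoor RFID-based one without changing the guidance loop.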
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Location <s> We present a study focusing on the usability of a wayfinding and localization system for persons with visual impairment. This system uses special color markers, placed at key locations in the environment, that can be detected by a regular camera phone. Three blind participants tested the system in various indoor locations and under different system settings. Quantitative performance results are reported. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Location <s> Whereas outdoor navigation systems typically rely upon GPS, indoor systems have to rely upon different techniques for localizing the user, as GPS signals cannot be received indoors. Over the past decade various indoor navigation systems have been developed. This paper provides a comprehensive overview of existing indoor navigation systems and analyzes the different techniques used for: (1) locating the user; (2) planning a path; (3) representing the environment; and (4) interacting with the user. Our survey identifies a number of research issues that could facilitate large scale deployment of indoor navigation systems. <s> BIB002 | All guidance/navigation systems must include a basic form of localization, i.e., the determination of a user's location and/or pose. The estimation of the user's location is sometimes referred to as ''positioning'' BIB001 . The most common localization methods can be grouped into four different categories: (1) direct sensing, (2) dead reckoning, (3) triangulation and (4) pattern recognition BIB002 . It is important to understand that, depending on the technology used, the user location may be estimated by the direct application of techniques, or by using computational methods to process data that indirectly contribute to estimating the location. It is also important to distinguish between the two. While direct-sensing techniques can almost directly provide an indication of the user's location, other methods, such as dead reckoning, use the components of locomotion (heading, acceleration, speed, etc.) to computationally estimate the displacement from a known location. The same applies to triangulation and pattern recognition. In the case of pattern recognition, it is not the actual detection of the visual pattern that provides an estimation of the location. Instead, some of the metrics and data output by the detection (such as pose and distance from the detected pattern) can be used to computationally make the estimation. The location can be used both for planning the path (navigation) and for providing surrounding (contextual) information (orientation). If the user's location is known, the system can also find a new path in case the user gets lost or calculate an alternative path, if needed. The planned path is then used to generate and provide guiding directions to a user-specified destination. |
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Direct-sensing techniques <s> Metronaut is a novel wearable computer which captures information, senses position, provides wide range communications, consumes less than one watt of power, and weighs less than one pound. Metronaut employs a bar code reader for information input and position location, a two-way pager for communications, and an ARM processor for computation. Metronaut's application is schedule negotiation and guidance instructions for a visitor to the CMU campus. The visitor's position is determined from reading bar codes at information signs around campus. Modifications to the schedule are negotiated using the two-way pager for communications with the campus computing infrastructure. Metronaut is alternatively powered by a mechanical flywheel converting kinetic energy to electrical energy. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Direct-sensing techniques <s> The design of mobile navigation systems adapting to limited resources will be an important future challenge. Since typically several different means of transportation have to be combined in order to reach a destination, the user interface of such a system has to adapt to the user's changing situation. This applies especially to the alternating use of different technologies to detect the user's position, which should be as seamless as possible. This article presents a hybrid navigation system that relies on different technologies to determine the user's location and that adapts the presentation of route directions to the limited technical resources of the output device and the limited cognitive resources of the user. <s> BIB002 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Direct-sensing techniques <s> We describe a navigation and location determination system for the blind using an RFID tag grid. Each RFID tag is programmed upon installation with spatial coordinates and information describing the surroundings. This allows for a self-describing, localized information system with no dependency on a centralized database or wireless infrastructure for communications. We describe the system and report on its characteristic performance, limitations, and lessons learned. <s> BIB003 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Direct-sensing techniques <s> We present a robot-assisted wayfinding system for the visually impaired in structured indoor environments. The system consists of a mobile robotic guide and small passive RFID sensors embedded in the environment. The system is intended for use in indoor environments, such as office buildings, supermarkets and airports. We describe how the system was deployed in two indoor environments and evaluated by visually impaired participants in a series of pilot experiments. We analyze the system's successes and failures and outline our plans for future research and development. <s> BIB004 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Direct-sensing techniques <s> Location-based mobile services have been in use, and studied, for a long time. 
With the proliferation of wireless networking technologies, users are mostly interested in advanced services that render the surrounding environment (i.e., the building) highly intelligent and significantly facilitate their activities. In this paper our focus is on indoor navigation, one of the most important location services. Existing approaches for indoor navigation are driven by geometric information and neglect important aspects, such as the semantics of space and user capabilities and context. The derived applications are not intelligent enough to catalytically contribute to the pervasive computing vision. In this paper, a novel navigation mechanism is introduced. Such navigation scheme is enriched with user profiles and the adoption of an ontological framework. These enhancements introduce a series of technical challenges that are extensively discussed throughout the paper. <s> BIB005 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Direct-sensing techniques <s> Blind people need to become as independent as possible in their daily life in order to guarantee a fully social inclusion. Mobility means the possibility of freely moving, without support of any accompanying person, at home, in public and private buildings, and in open spaces, as the streets of the town. Mobile and wireless technologies, and in particular the ones used to locate persons or objects, can be used to realize navigation systems in an intelligent environment. Such systems open new opportunities to improve the speed, easiness, and safety of the visually impaired persons mobility. Using these technologies together with Text To Speech systems and a mobile-based database the authors developed a cost effective, easy-to-use orientation and navigation system: RadioVirgilio/SesamoNet1. The cost effectiveness is due to the recovery of RFID identity tags from cattle slaughtering: these tags are then borrowed to create a grid used for navigation. In this paper the results of an usability analysis of this guide system are presented. A preliminary experiment involving a small group of experts and a blind person is described. In order to evaluate the usability, three cognitive walkthrough sessions have been done to discuss the system's basic functionality and to highlight the most critical aspects to be modified. <s> BIB006 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Direct-sensing techniques <s> A location and tracking system becomes very important to our future world of pervasive computing, where information is all around us. Location is one of the most needed information for emerging and future applications. Since the public use of GPS satellite is allowed, several state-of-the-art devices become part of our life, e.g. a car navigator and a mobile phone with a built-in GPS receiver. However, location information for indoor environments is still very limited. Several techniques are proposed to get location information in buildings such as using a radio signal triangulation, a radio signal (beacon) emitter, or signal fingerprinting. Using radio frequency identification (RFID) tags is a new way of giving location information to users. Due to its passive communication circuit, RFID tags can be embedded almost anywhere without an energy source. The tags stores location information and gives it to any reader that is within a proximity range which can be up to 10-15 meters for UHF RFID systems. 
We propose an RFID-based system for navigation in a building for blind people or the visually impaired. The system relies on the location information on the tag, a user's destination, and a routing server that computes the shortest route from the user's current location to the destination. The navigation device communicates with the routing server using GPRS networks. We build a prototype based on our design and show some results. We found that there are some delay problems in the devices, namely the communication delay due to the cold start cycle of a GPRS modem and the voice delay due to the file transfer delay from a MMC module. <s> BIB007 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Direct-sensing techniques <s> Mobile navigation service is one of the most important Location Based Services. With the rapid advances in enabling technologies for ubiquitous computing, more and more active or passive devices/sensors are augmented in the indoor environment, and the indoor environment has become smarter. This paper proposes that by introducing the notions of Smart Environment and Ambient Intelligence, a ubiquitous indoor navigation service can be built to provide adaptive smart wayfinding support and enhance users with a new experience during indoor navigation. In this paper, we set up a smart environment with a positioning module and a wireless module. Based on this smart environment, we design a ubiquitous indoor navigation system with an interaction and annotation module (for user generated content), a user tracking module (for collaborative filtering) and context-aware adaptation to illustrate some potential benefits of combining indoor navigation and Smart Environment. <s> BIB008 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Direct-sensing techniques <s> The autonomy of blind people in their daily life depends on their knowledge of the surrounding world, and they are aided by keen senses and assistive devices that help them to deduce their surroundings. Existing solutions require that users carry a wide range of devices and, mostly, do not include mechanisms to ensure the autonomy of users in the event of system failure. This paper presents the nav4b system that combines guidance and navigation with object recognition, extending traditional aids (white cane and smartphone). A working prototype was installed on the UTAD campus to perform experiments with blind users. <s> BIB009 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Direct-sensing techniques <s> Nowadays, navigation systems are widely used to find the correct path, or the quickest, between two places. These systems use the Global Positioning System (GPS) and only work well in outdoor environments since GPS signals cannot easily penetrate and/or are greatly degraded inside buildings. Several technologies have been proposed to make navigation inside buildings possible. One such technology is Radio-Frequency Identification (RFID). In the case of outside environments, some hybrid systems have been proposed that use GPS as the main information source and RFID for corrections and location error minimization.
In this article we propose a navigation system that uses RFID as the main technology to guide people with visual impairment in unfamiliar environments, both indoor and outdoor, complementing the traditional white cane and providing information about the user's geographical context. <s> BIB010 | Localization techniques based on direct sensing determine the location of the user through the sensing of identifiers (or tags) that have been installed in the environment. Typical direct-sensing technologies include radio-frequency identification (RFID) tags, which can be either passive BIB006 BIB004 BIB003 BIB010 or active (some systems use both active and passive tags BIB007 BIB009 ), infrared (IR) transmitters installed in known positions, where each transmitter broadcasts a unique ID BIB002 BIB005 , Bluetooth beacons BIB008 or visual barcodes BIB001 . All of these technologies require the user to carry extra equipment to sense the identifiers. In the case of radio-frequency identification, although single RFID tags are quite inexpensive, installing them at scale in large environments may become costly. Another disadvantage is the range of detection. In the case of passive tags, the range is too short. In the case of active tags, the range is longer, but they require an individual power supply (and the respective maintenance). Infrared emitters require the user to be in the line-of-sight and, even so, they are strongly affected by sunlight interference. Bluetooth beacons, when used for localization, require the user to walk more slowly than with other sensing techniques because of the communication/pairing delay. Barcodes are, in a way, very similar to radio-frequency identification. This approach is low cost and easy to install and maintain. The main limitation is that the user has to find each barcode and scan it, which may be cumbersome and will slow down navigation. In the case of blind users, relying on a system that searches for printed barcodes they cannot see is also very demanding and prone to reading failure. |
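As an illustration of the self-describing-tag idea used in systems such as BIB003 , where each installed tag stores its own coordinates and a description of the surroundings, consider the minimal sketch below. The payload format and function names are assumptions for illustration only.

```python
# Minimal sketch of direct-sensing localization with self-describing tags,
# in the spirit of the RFID grid in BIB003: each tag carries its own
# coordinates and a description, so no central database is required.
# The payload format and helper names are illustrative assumptions.
from typing import NamedTuple

class TagPayload(NamedTuple):
    x: float          # metres, site coordinate frame
    y: float
    description: str  # e.g., "corridor B, door to room 2.07 on the right"

def decode_tag(raw: bytes) -> TagPayload:
    """Decode an 'x;y;description' payload written to the tag at install time."""
    x, y, desc = raw.decode("utf-8").split(";", 2)
    return TagPayload(float(x), float(y), desc)

def on_tag_read(raw: bytes) -> TagPayload:
    pos = decode_tag(raw)
    # The reading itself *is* the localization: the user is within the
    # reader's detection range of (pos.x, pos.y).
    print(f"You are near ({pos.x}, {pos.y}): {pos.description}")
    return pos

if __name__ == "__main__":
    on_tag_read(b"12.5;3.0;corridor B, door to room 2.07 on the right")
```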
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Dead reckoning <s> The position-tracking accuracy of a location-aware mobile system can change dynamically as a function of the user’s location and other variables specific to the tracker technology used. This is especially problematic for mobile augmented reality systems, which ideally require extremely precise position tracking for the user’s head, but which may not always be able to achieve the necessary level of accuracy. While it is possible to ignore variable positional accuracy in an augmented reality user interface, this can make for a confusing system; for example, when accuracy is low, virtual objects that are nominally registered with real ones may be too far off to be of use. To address this problem, we describe the early stages of an experimental mobile augmented reality system that adapts its user interface automatically to accommodate changes in tracking accuracy. Our system employs different technologies for tracking a user’s position, resulting in a wide variation in positional accuracy: an indoor ultrasonic tracker and an outdoor real-time kinematic GPS system. For areas outside the range of both, we introduce a dead-reckoning approach that combines a pedometer and orientation tracker with environmental knowledge expressed in spatial maps and accessibility graphs. We present preliminary results from this approach in the context of a navigational guidance system that helps users to orient themselves in an unfamiliar environment. Our system uses inferencing and path planning to guide users toward targets that they choose. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Dead reckoning <s> Pedestrians must often find their way in unfamiliar urban environments or complex buildings. In these cases they need guidance to reach their desired destination, for example a specific room in a local authorities' building, a counter, or a department at an university. The goal of location-based mobile services is to provide such guidance on demand (anywhere, anytime), individually tailored to the actual information needs and presented in preferred forms. Thereby the navigation service requires positioning and tracking capabilities of a mobile user with a certain positioning accuracy and reliability. In particular, navigating in urban areas is a very challenging task as pedestrians move in spaces where none of the known positioning techniques works continuously in standalone mode and the movement is in a much more complex space than 2D networks (i.e. on pedestrian paths and along roads, outdoor and indoor, through underground passages, etc.). To solve this challenging task of continuous position determination, a combination of different location technologies is required. The integration of the sensors should be performed such that all the sensors are tightly coupled in the sense of a so-called multi-sensor system. In a new research project in our University entitled "Pedestrian Navigation Systems in Combined Indoor/Outdoor Environments (NAVIO)" we are working on the improvement of such navigation services. The project is mainly focusing on the information aspect of location-based services, i.e. on the user's task at hand and support of the user's decisions by information provided by such a service. 
Specifications will allow selection of appropriate sensor data and to integrate data when and where needed, to propose context-dependent routes fitting to partly conflicting interests and goals as well as to select appropriate communication methods in terms of supporting the user guiding by various multimedia cartography forms. To test and to demonstrate our approach and results, the project takes a use case scenario into account, i.e. the guidance of visitors to departments of the Vienna University of Technology. First results of our project are presented in this paper. <s> BIB002 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Dead reckoning <s> Walking is the most fundamental means of human transportation. Unlike travel by car, walking is not planar, but rather stereoscopic. We therefore developed a real navigation system for pedestrian point-to-point navigation. We propose herein a method of 3D pedestrian navigation, in which position detection is driven mainly by dead reckoning. The proposed method enables ubiquitous round-the-clock 3D positioning, even inside buildings or between tall buildings. In addition, pedestrian navigation is customized by changing the costs of the road network links. Finally, a positioning data accumulation system is implemented so that we can log tracks and easily incorporate new roads or attributes in the future. <s> BIB003 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Dead reckoning <s> This paper describes path planning and following algorithms for use in indoor navigation for the blind and visually impaired. Providing indoor navigational assistance for this type of user presents additional challenges not faced by conventional guidance systems, due to the personal nature of the interactions. The algorithms are part of an overall Indoor Navigation Model that is used to provide assistance and guidance in unfamiliar indoor environments. Path planning uses the A* and Dijkstra's shortest path algorithms, to operate on an "Intelligent Map", that is based on a new data structure termed "cactus tree" which is predicated on the relationships between the different objects that represent an indoor environment. The paths produced are termed "virtual hand rails", which can be used to dynamically plan a path for a user within a region. The path following algorithm is based on dead reckoning, but incorporates human factors as well as information about the flooring and furnishing structures along the intended planned path. Experimental and simulation results show that the guiding/navigation problem becomes a divergent mathematical problem if the positional information offered by the positioning and tracking systems does not reach a certain requirement. This research explores the potential to design an application for the visually impaired even when to-date 'positioning and tracking' systems cannot offer the reliable position information that is highly required by this type of application. <s> BIB004 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Dead reckoning <s> Ad hoc solutions for tracking and providing navigation support to emergency response teams is an important and safety-critical challenge. We propose a navigation system based on a combination of foot-mounted inertial sensors and ultrasound beacons.
We evaluate experimentally the performance of our dead reckoning system in different environments and for different trail topologies. The inherent drift observed in dead reckoning is addressed by deploying ultrasound beacons as landmarks. We study through simulations the use of the proposed approach in guiding a person along a defined path. Simulation results show that satisfactory guidance performance is achieved despite noisy ultrasound measurements, magnetic interference and uncertainty in ultrasound node locations. The models used for the simulations are based on experimental data and the authors' experience with actual sensors. The simulation results will be used to inform future development of a full real time system. <s> BIB005 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Dead reckoning <s> Whereas outdoor navigation systems typically rely upon GPS, indoor systems have to rely upon different techniques for localizing the user, as GPS signals cannot be received indoors. Over the past decade various indoor navigation systems have been developed. This paper provides a comprehensive overview of existing indoor navigation systems and analyzes the different techniques used for: (1) locating the user; (2) planning a path; (3) representing the environment; and (4) interacting with the user. Our survey identifies a number of research issues that could facilitate large scale deployment of indoor navigation systems. <s> BIB006 | Humans maintain (update) their sense of orientation as they move around via a combination of two processes, i.e. landmark-based and dead-reckoning processes. Landmark-based updating involves recognizing specific features in the world that may be associated with known places. Dead-reckoning updating involves keeping track of the components of locomotion (including heading, velocity or acceleration) and travel duration. Dead reckoning is sometimes referred to as ''path integration'' BIB006 . While the user is moving, a dead-reckoning system estimates the user's location through a combination of odometry readings. Odometry readings can be acquired through a combination of sensors such as accelerometers, magnetometers, compasses, and gyroscopes BIB005 BIB001 BIB003 BIB002 or using a user's specific walking pattern (such as the user's average walking speed) BIB004 . An initial location is typically determined using a global navigation satellite system (GNSS) like the Global Positioning System (GPS) BIB001 , radio-frequency identification (RFID) tags BIB003 , or cellular phone positioning (GSM broadcasting stations) BIB002 . Since the location estimation is a recursive process, inaccuracy in location estimation translates into errors that accumulate over time. The accumulated error can be corrected using environmental knowledge. The user's position can be synchronized using periodic updates from direct-sensing localization techniques such as RFID tags, or pattern-matching localization methods such as the use of data extracted from the recognition of known visual landmarks. A benefit of processing data from pattern matching over direct-sensing techniques is a lower installation cost, as a smaller number of identifiers must be installed. |
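A minimal sketch of this path-integration process, including the resynchronization against known landmarks discussed above, is given below. Step detection, heading handling and the landmark correction are deliberately simplified, and all names and values are illustrative assumptions.

```python
# Minimal dead-reckoning (path integration) sketch: the position estimate
# is updated recursively from step events (step length + heading) and can
# be resynchronized when a direct-sensing landmark (e.g., an RFID tag at
# a known position) is encountered. Names and values are illustrative.
import math

class DeadReckoner:
    def __init__(self, x: float, y: float):
        self.x, self.y = x, y  # last known position (metres)

    def on_step(self, step_length_m: float, heading_deg: float) -> None:
        """Advance the estimate by one detected step; errors accumulate."""
        h = math.radians(heading_deg)  # heading measured clockwise from north
        self.x += step_length_m * math.sin(h)  # east component
        self.y += step_length_m * math.cos(h)  # north component

    def resync(self, x: float, y: float) -> None:
        """Correct accumulated drift against a landmark of known position."""
        self.x, self.y = x, y

if __name__ == "__main__":
    dr = DeadReckoner(0.0, 0.0)
    for _ in range(20):            # 20 steps of ~0.7 m heading east
        dr.on_step(0.7, 90.0)
    print(f"Estimate: ({dr.x:.1f}, {dr.y:.1f})")  # ~(14.0, 0.0)
    dr.resync(13.2, 0.4)           # landmark at a known position corrects drift
```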
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> This paper presents the design, implementation, and evaluation of Cricket, a location-support system for in-building, mobile, location-dependent applications. It allows applications running on mobile and static nodes to learn their physical location by using listeners that hear and analyze information from beacons spread throughout the building. Cricket is the result of several design goals, including user privacy, decentralized administration, network heterogeneity, and low cost. Rather than explicitly tracking user location, Cricket helps devices learn where they are and lets them decide whom to advertise this information to; it does not rely on any centralized management or control and there is no explicit coordination between beacons; it provides information to devices regardless of their type of network connectivity; and each Cricket device is made from off-the-shelf components and costs less than U.S. $10. We describe the randomized algorithm used by beacons to transmit information, the use of concurrent radio and ultrasonic signals to infer distance, the listener inference algorithms to overcome multipath and interference, and practical beacon configuration and positioning techniques that improve accuracy. Our experience with Cricket shows that several location-dependent applications such as in-building active maps and device control can be developed with little effort or manual configuration. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> Drishti is a wireless pedestrian navigation system. It integrates several technologies including wearable computers, voice recognition and synthesis, wireless networks, Geographic Information System (GIS) and Global positioning system (GPS). Drishti augments contextual information to the visually impaired and computes optimized routes based on user preference, temporal constraints (e.g. traffic congestion), and dynamic obstacles (e.g. ongoing ground work, road blockade for special events). The system constantly guides the blind user to navigate based on static and dynamic data. Environmental conditions and landmark information queried from a spatial database along their route are provided on the fly through detailed explanatory voice cues. The system also provides capability for the user to add intelligence, as perceived by the blind user, to the central server hosting the spatial database. Our system is supplementary to other navigational aids such as canes, blind guide dogs and wheel chairs. <s> BIB002 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> The design of mobile navigation systems adapting to limited resources will be an important future challenge. Since typically several different means of transportation have to be combined in order to reach a destination, the user interface of such a system has to adapt to the user's changing situation. This applies especially to the alternating use of different technologies to detect the user's position, which should be as seamless as possible. This article presents a hybrid navigation system that relies on different technologies to determine the user's location and that adapts the presentation of route directions to the limited technical resources of the output device and the limited cognitive resources of the user.
<s> BIB003 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> In this paper, we discuss application possibilities of augmented reality technologies in the field of mobility support for the deaf-blind. We propose the navigation system called virtual leading blocks for the deaf-blind, which consists of a wearable interface for Finger-Braille, one of the commonly used communication methods among deaf-blind people in Japan, and a ubiquitous environment for barrier-free application, which consists of floor-embedded active radio-frequency identification (RFID) tags. The wearable Finger-Braille interface using two Linux-based wristwatch computers has been developed as a hybrid interface of verbal and nonverbal communication in order to inform users of their direction and position through the tactile sensation. We propose the metaphor of "watermelon splitting" for navigation by this system and verify the feasibility of the proposed system through experiments. <s> BIB004 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> There are many navigation systems for visually impaired people but few can provide dynamic interactions and adaptability to changes. None of these systems work seamlessly both indoors and outdoors. Drishti uses a precise position measurement system, a wireless connection, a wearable computer, and a vocal communication interface to guide blind users and help them travel in familiar and unfamiliar environments independently and safely. Outdoors, it uses DGPS as its location system to keep the user as close as possible to the central line of sidewalks of campus and downtown areas; it provides the user with an optimal route by means of its dynamic routing and rerouting ability. The user can switch the system from an outdoor to an indoor environment with a simple vocal command. An OEM ultrasound positioning system is used to provide precise indoor location measurements. Experiments show an indoor accuracy of 22 cm. The user can get vocal prompts to avoid possible obstacles and step-by-step walking guidance to move about in an indoor environment. This paper describes the Drishti system and focuses on the indoor navigation design and lessons learned in integrating the indoor with the outdoor system. <s> BIB005 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> A location-aware navigation system has been developed and implemented for the visually disabled or visually impaired; the system is designed to improve individuals' independent mobility. This self-contained, portable system integrates several technologies, including mobile personal digital assistants, voice synthesis, a geographic information system (GIS), and a differential Global Positioning System (DGPS). The system is meant to augment the various sensory inputs available to the visually impaired user. It provides the user with navigation assistance, making use of voice cues iterating contextual building and feature information at regular intervals, through automatic GPS readings and a GIS database. To improve the efficiency of the retrieval of contextual information, an indexing method based on road segmentation was developed to replace the exhaustive search method.
Experimental results show that the performance of the system in searching for buildings, landmarks, and other features around a road has been significantly improved by using this indexing method. <s> BIB006 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> In the research project NAVIO (Pedestrian Navigation Systems in Combined Indoor/Outdoor Environments) at our University we are working on the improvement of navigation services for pedestrians. Thereby we are mainly focusing on the information aspect of location-based services, i.e., on the user's task at hand and the support of the user's decisions by information provided by such a service. Specifications will allow us to select appropriate sensor data and to integrate data when and where needed, to propose context-dependent routes fitting to partly conflicting interests and goals as well as to select appropriate communication methods in terms of supporting the user guidance by various multimedia cartography forms. These tasks are addressed in the project in three different work packages, i.e., the first on "Integrated positioning", the second on "Pedestrian route modeling" and the third on "Multimedia route communication". In this paper we will concentrate on the research work and findings in the first work package. For continuous positioning of a pedestrian, suitable location technologies include GNSS and indoor location techniques, cellular phone positioning, dead reckoning sensors (e.g. magnetic compass, gyro and accelerometers) for measurement of heading and travelled distance as well as barometric pressure sensors for height determination. The integration of these sensors in a modern multi-sensor system can be performed using an adapted Kalman filter. To test and to demonstrate our approach, we take a use case scenario into account, i.e., the guidance of visitors to departments of the Vienna University of Technology. The results of simulation studies and practical tests could confirm that such a service can achieve a high level of performance for the guidance of a pedestrian in urban areas and mixed indoor and outdoor environments. <s> BIB007 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> In this paper the design, development and evaluation of a GPS-based auditory navigation system is presented that implicitly guides a user by a contextualized rendering of personal audio files. The benefit of this navigation system is that the user can listen to his own audio contents while being navigated. Wearing headphones, the user listens to audio contents which are located in a virtual environment. The user simply walks in the direction where the sound seems to have its origin. A formal evaluation under field conditions proved that navigation with contextualized audio contents is efficient and intuitive and that users are highly satisfied with the navigation support given by the evaluated auditory display. <s> BIB008 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> Location-based mobile services have been in use, and studied, for a long time. With the proliferation of wireless networking technologies, users are mostly interested in advanced services that render the surrounding environment (i.e., the building) highly intelligent and significantly facilitate their activities.
In this paper our focus is on indoor navigation, one of the most important location services. Existing approaches for indoor navigation are driven by geometric information and neglect important aspects, such as the semantics of space and user capabilities and context. The derived applications are not intelligent enough to catalytically contribute to the pervasive computing vision. In this paper, a novel navigation mechanism is introduced. Such a navigation scheme is enriched with user profiles and the adoption of an ontological framework. These enhancements introduce a series of technical challenges that are extensively discussed throughout the paper. <s> BIB009 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> Almost 2 million Japanese citizens use Navitime, a mobile phone-based navigation service that incorporates various modes of transportation. User experiences reveal implications for designing urban-computing services. Location-based services are a key pervasive computing application that could deeply influence urban spaces and their inhabitants. Recent advances in mobile phones, GPS, and wireless networking infrastructures are making it possible to implement and operate large-scale location-based services in the real world. <s> BIB010 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> Many applications in the area of location-based services and personal navigation nowadays require the location determination of a user not only in an outdoor environment but also in an indoor one. Typical applications of location-based services (LBS) mainly in outdoor environments are fleet management, travel aids, location identification, emergency services and vehicle navigation. LBS applications can be further extended if reliable and reasonably accurate three-dimensional positional information of a mobile device can be determined seamlessly in both indoor and outdoor environments. Current geolocation methods for LBS may be classified as GNSS-based, cellular network-based or their combinations. GNSS-based methods rely very much on the satellite visibility and the receiver-satellite geometry. This can be very problematic in dense high-rise urban environments and when transferring to an indoor environment. Especially in cities with many high-rise buildings, the urban canyon will greatly affect the reception of the GNSS signals. Moreover, positioning in the indoor/outdoor transition areas would experience signal quality and signal reception problems, if GNSS systems alone are employed. The authors have proposed the integration of GNSS with wireless positioning techniques such as WiFi and UWB. In the case of WiFi positioning, the so-called fingerprinting method based on WiFi signal strength observations is usually employed. In this article, the underlying technology is briefly reviewed, followed by an investigation of two WiFi-positioning systems. Testing of the system is performed in two localisation test beds, one at the Vienna University of Technology and another one at the Hong Kong Polytechnic University. The first test showed that the trajectory of a moving user could be obtained with a standard deviation of about ±3-5 m. The main disadvantage of WiFi fingerprinting, however, is the required time-consuming and costly signal strength system calibration in the beginning.
Therefore, the authors have investigated whether the measured signal strength values can be converted to the corresponding range to the access point. A new approach for this conversion is presented and analysed in typical test scenarios. <s> BIB011 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> The complexity of indoor radio propagation has resulted in location-awareness being derived from empirical fingerprinting techniques, where positioning is performed via a previously-constructed radio map, usually of WiFi signals. The recent introduction of the Bluetooth Low Energy (BLE) radio protocol provides new opportunities for indoor location. It supports portable battery-powered beacons that can be easily distributed at low cost, giving it distinct advantages over WiFi. However, its differing use of the radio band brings new challenges too. In this work, we provide a detailed study of BLE fingerprinting using 19 beacons distributed around a ~600 m² testbed to position a consumer device. We demonstrate the high susceptibility of BLE to fast fading, show how to mitigate this, and quantify the true power cost of continuous BLE scanning. We further investigate the choice of key parameters in a BLE positioning system, including beacon density, transmit power, and transmit frequency. We also provide quantitative comparison with WiFi fingerprinting. Our results show advantages to the use of BLE beacons for positioning. For one-shot (push-to-fix) positioning, accuracy improved with beacon density (deployments of roughly one beacon per 30 m² versus one per 100 m² were compared), and both BLE configurations compared favourably with the < 8.5 m accuracy obtained for an established WiFi network in the same area. <s> BIB012 | Though most direct-sensing techniques try to locate the user by sensing one unique identifier, several systems employ multiple identifiers and use triangulation computational methods to locate the user. These methods locate the user by triangulating sensed tags installed at known locations. The tags that have been frequently used for indoor or outdoor localization include RFID BIB004 , infrared (IR) BIB003 , and ultrasound BIB001 BIB005 . Lateration uses the distance between the user and at least three known points, whereas angulation uses the angular measurements from at least three known points to the user to determine the user's location. Global Positioning System (GPS) is the most commonly used system for outdoor localization BIB008 BIB002 BIB006 and uses a trilateration computational method to locate the user, based on known satellite positions. GPS receivers analyze a periodic signal sent out by each satellite to compute the latitude, longitude and altitude at the user's position. For outdoor navigation, GPS has become the standard as it is free, reliable, and it is available any place on Earth in any weather condition. The main disadvantage of GPS localization is that the GPS signal strongly degrades inside buildings, between tall buildings or in dense forest areas (such as parks). There are two alternative triangulation-based techniques, which are available in contexts where GPS signals are weak or unavailable. Cell-tower positioning BIB010 triangulates the known locations of cell towers using the signal strength provided by each cell phone tower, whereas wireless local area network (WLAN) positioning BIB009 BIB007 triangulates the location of wireless base stations using the signal of each emitting station. Both techniques have a lower precision than GPS due to multi-path reflection problems.
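Since lateration underlies both GPS and beacon-based positioning, a small worked example may help. The sketch below is an illustration under simplifying assumptions only (a 2-D plane, known anchor positions, and range measurements already extracted from signal timing or strength; all names and values are hypothetical). It linearizes the range equations by subtracting the first equation from the others and solves the result by least squares.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares 2-D lateration from ranges to known anchor points.

    Subtracting the first range equation from the others removes the
    quadratic terms, leaving a linear system A @ p = b in the position p.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Three beacons at known positions and slightly noisy measured ranges.
print(trilaterate([(0, 0), (10, 0), (0, 10)], [7.1, 7.0, 7.2]))  # ~ (5, 5)
```

With more than three anchors the same least-squares formulation averages out measurement noise, which is one reason GPS receivers track as many satellites as possible.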
Another way of using the signal from wireless emitting stations, such as Wi-Fi, is signal fingerprinting. This approach is based on signal strength observations at previously known locations. An estimate of the location is obtained from these measurements and a signal propagation model. The propagation model can be obtained by simulation or from prior calibration measurements at certain locations. In the latter case, the signal strength values measured at a given location are compared with the values of pre-calibrated points stored in a database. With proper calibration, this approach can provide very high accuracy in comparison with GNSS-based approaches, and it has been successfully adopted in the field of robotics and unmanned vehicle applications. Its major limitation in the blind-user case is the cost-benefit ratio: the signal strength calibration required at the outset is time-consuming and costly BIB011 BIB012 .
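The database comparison step can be illustrated with a toy weighted k-nearest-neighbour matcher. This is a sketch only: the access-point names, RSSI values and surveyed coordinates below are invented, and real systems add filtering, outlier rejection and much denser calibration grids.

```python
import math

# Calibration database: surveyed position -> RSSI (dBm) per access point.
FINGERPRINTS = [
    ((0.0, 0.0), {"ap1": -40, "ap2": -70, "ap3": -65}),
    ((5.0, 0.0), {"ap1": -55, "ap2": -60, "ap3": -70}),
    ((0.0, 5.0), {"ap1": -60, "ap2": -45, "ap3": -72}),
    ((5.0, 5.0), {"ap1": -68, "ap2": -52, "ap3": -50}),
]

def locate(observed, k=2, missing=-100.0):
    """Estimate a position as the distance-weighted mean of the k closest
    calibration fingerprints in RSSI space (Euclidean distance over APs)."""
    def rssi_dist(fp):
        aps = set(observed) | set(fp)
        return math.sqrt(sum((observed.get(a, missing) - fp.get(a, missing)) ** 2
                             for a in aps))
    ranked = sorted(FINGERPRINTS, key=lambda entry: rssi_dist(entry[1]))[:k]
    weights = [1.0 / (rssi_dist(fp) + 1e-9) for _, fp in ranked]
    total = sum(weights)
    x = sum(w * pos[0] for w, (pos, _) in zip(weights, ranked)) / total
    y = sum(w * pos[1] for w, (pos, _) in zip(weights, ranked)) / total
    return x, y

print(locate({"ap1": -50, "ap2": -62, "ap3": -69}))
```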
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Pattern recognition <s> This article presents a short but detailed description of the optophone: its origins as a reading device for the blind, the various stages of its development, and the possibility of its use as a mobility aid for the blind. Research into the use of stereo vision is described as an aid to information reduction, in the hope of remedying the problems of information overload that commonly plague electronic blind aids. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Pattern recognition <s> In an easel, a clamp for suspending sheets or other objects is formed with an elongated plate and a pair of brackets that support a bar. The brackets are particularly formed so that they incline downwardly towards the plate upon which they are mounted and the bar is arranged to be slidingly affixed to the brackets. The bar slides up and down and may grip objects placed between it and the plate. Ideally, the bar is provided with cushion means which provide the actual gripping action against the plate. <s> BIB002 | Recently, systems have been developed which use computer vision techniques, like pattern matching, to sense the surrounding environment and detect visual landmarks. Although at first glance it may be quite obvious that pattern recognition alone cannot provide an indication of the user's location, an estimate can be extracted indirectly from the output data of the pattern detection, such as the pose of, and distance to, the detected pattern. The most common artificial vision systems developed to support the guidance of blind users extract this type of information by analyzing the characteristics of the objects detected in the captured image using classical image processing techniques BIB001 BIB002 . Some systems go further by combining vision sensors with positioning sensors or even combining multiple vision sensors to obtain a 3D representation of the scene (to obtain depth information).
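For a flavour of the classical image-processing route, the fragment below runs normalized cross-correlation template matching with OpenCV. It is a sketch only: the image file names are hypothetical, the 0.8 acceptance threshold is an arbitrary illustrative choice, and practical landmark detectors typically rely on more robust features than raw templates.

```python
import cv2

# Hypothetical inputs: a scene image and a smaller landmark template.
# (cv2.imread returns None on a missing file; no error handling here.)
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
landmark = cv2.imread("landmark.png", cv2.IMREAD_GRAYSCALE)

# Slide the template over the scene, scoring normalized cross-correlation.
scores = cv2.matchTemplate(scene, landmark, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_xy = cv2.minMaxLoc(scores)

if best_score > 0.8:  # arbitrary acceptance threshold
    h, w = landmark.shape
    print(f"Landmark found at {best_xy} ({w}x{h} px, score {best_score:.2f})")
else:
    print("Landmark not found")
```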
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Motion and pose estimation <s> Context awareness is an important functionality for wearable computers. In particular, the computer should know where the person is in the environment. This paper proposes an image sequence matching technique for the recognition of locations and previously visited places. As in single word recognition in speech recognition, a dynamic programming algorithm is proposed for the calculation of the similarity of different locations. The system runs on a standalone wearable computer, such as a Libretto PC. Using a training sequence, a dictionary of locations is created automatically. These locations are then recognized by the system in real time using a hat-mounted camera. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Motion and pose estimation <s> Increasingly, cell phones are used to browse for information while location systems assist in gathering information that is most appropriate to the user's current location. We seek to take this one step further and actually overlay information onto the physical world using the cell phone's camera and thereby minimize a user's cognitive effort. This "magic lens" approach has many applications of which we are exploring two: indoor building navigation and dynamic directory assistance. In essence, we match "landmarks" identified in the camera image with those stored in a building database. We use two different types of features - floor corners that can be matched against a floorplan and SIFT features that can be matched to a database constructed from other images. The camera's pose can be determined exactly from a match and information can be properly aligned so that it can overlay directly onto the phone's image display. In this paper, we present early results that demonstrate it is possible to realize this capability for a variety of indoor environments. Latency is shown to already be reasonable and likely to be improved by further optimizations. Our goal is to further explore the computational tradeoff between the server and phone client so as to achieve an acceptable latency of a few seconds. <s> BIB002 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Motion and pose estimation <s> Two major limitations of real-time visual SLAM algorithms are the restricted range of views over which they can operate and their lack of robustness when faced with erratic camera motion or severe visual occlusion. In this paper we describe a visual SLAM algorithm which addresses both of these problems. The key component is a novel feature description method which is both fast and capable of repeatable correspondence matching over a wide range of viewing angles and scales. This is achieved in real time by using a SIFT-like spatial gradient descriptor in conjunction with efficient scale prediction and exemplar-based feature representation. Results are presented illustrating robust real-time SLAM operation within an office environment. <s> BIB003 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Motion and pose estimation <s> This low-cost indoor navigation system runs on off-the-shelf camera phones. More than 2,000 users at four different large-scale events have already used it.
The system uses built-in cameras to determine user location in real time by detecting unobtrusive fiduciary markers. The required infrastructure is limited to paper markers and static digital maps, and common devices are used, facilitating quick deployment in new environments. The authors have studied the application quantitatively in a controlled environment and qualitatively during deployment at four large international events. According to test users, marker-based navigation is easier to use than conventional mobile digital maps. Moreover, the users' location awareness in navigation tasks improved. Experiences drawn from questionnaires, usage log data, and user interviews further highlight the benefits of this approach. <s> BIB004 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Motion and pose estimation <s> This report describes an efficient algorithm to accurately determine the position and orientation of a camera in an outdoor urban environment using camera imagery acquired from a single location on the ground. The requirement to operate using imagery from a single location allows a system using our algorithms to generate instant position estimates and ensures that the approach may be applied to both mobile and immobile ground sensors. Localization is accomplished by registering visible ground images to urban terrain models that are easily generated offline from aerial imagery. Provided there are a sufficient number of buildings in view of the sensor, our approach provides accurate position and orientation estimates, with position estimates that are more accurate than those typically produced by a global positioning system (GPS). <s> BIB005 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Motion and pose estimation <s> Independent travel is a well-known challenge for blind and visually impaired persons. In this paper, we propose a proof-of-concept computer vision-based wayfinding aid for blind people to independently access unfamiliar indoor environments. In order to find different rooms (e.g. an office, a lab, or a bathroom) and other building amenities (e.g. an exit or an elevator), we incorporate object detection with text recognition. First we develop a robust and efficient algorithm to detect doors, elevators, and cabinets based on their general geometric shape, by combining edges and corners. The algorithm is general enough to handle large intra-class variations of objects with different appearances among different indoor environments, as well as small inter-class differences between different objects such as doors and door-like cabinets. Next, in order to distinguish intra-class objects (e.g. an office door from a bathroom door), we extract and recognize text information associated with the detected objects. For text recognition, we first extract text regions from signs with multiple colors and possibly complex backgrounds, and then apply character localization and topological analysis to filter out background interference. The extracted text is recognized using off-the-shelf optical character recognition (OCR) software products. The object type, orientation, location, and text information are presented to the blind traveler as speech.
<s> BIB006 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Motion and pose estimation <s> In this paper we introduce Audible Vision, a system that can help blind and visually impaired users navigate in large indoor open spaces. The system uses computer vision to estimate the location and orientation of the user, and enables the user to perceive his/her relative position to a landmark through 3D audio. Testing shows that Audible Vision can work reliably in a real-life, ever-changing environment crowded with people. <s> BIB007 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Motion and pose estimation <s> This paper presents a sensor fusion strategy applied for Simultaneous Localization and Mapping (SLAM) in dynamic environments. The designed approach consists of two features: (i) the first one is a fusion module which synthesizes line segments obtained from a laser rangefinder and line features extracted from a monocular camera. This policy eliminates any pseudo segments that appear from any momentary pause of dynamic objects in laser data. (ii) The second characteristic is a modified multi-sensor point estimation fusion SLAM (MPEF-SLAM) that incorporates two individual Extended Kalman Filter (EKF) based SLAM algorithms: monocular and laser SLAM. The error of the localization in fused SLAM is reduced compared with that of the individual SLAM algorithms. Additionally, a new data association technique based on the homography transformation matrix is developed for monocular SLAM. This data association method relaxes the pleonastic computation. The experimental results validate the performance of the proposed sensor fusion and data association method. <s> BIB008 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Motion and pose estimation <s> Assisting the visually impaired along their navigation path is a challenging task which drew the attention of several researchers. A lot of techniques based on RFID, GPS and computer vision modules are available for blind navigation assistance. In this paper, we proposed a depth estimation technique from a single image based on local depth hypothesis devoid of any user intervention and its application to assist the visually impaired people. The ambient space ahead of the user is captured by a camera and the captured image is resized for computational efficiency. The obstacles in the foreground of the image are segregated using edge detection followed by morphological operations. Then depth is estimated for each obstacle based on local depth hypothesis. The estimated depth map is then compared with the reference depth map of the corresponding depth hypothesis and the deviation of the estimated depth map from the reference depth map is used to retrieve the spatial information about the obstacles ahead of the user. <s> BIB009 | Systems that use computer vision to estimate the location and orientation of the user enable them to perceive their relative position to a detected georeferenced visual landmark BIB009 BIB006 BIB007 . When the user is carrying a camera whose position and orientation relative to the user's body are known, the motion of the features detected in the captured images may be used to assess information about the carrier's pose and motion.
Visual motion information is not affected by the same error sources as global navigation satellite systems or self-contained sensors (like inertial sensors) and is therefore a complementary information source for increasing the accuracy of the positioning measurements. Research related to visual positioning methods has been mainly focused on the autonomous navigation of vehicles and mobile robots. The first papers related to the use of computer vision assistance in pedestrian navigation were published in the late 1990s BIB001 . They described the use of databases preloaded with images of samples taken of the expected surroundings, which were tagged with information about their geographic location. The position of the pedestrian was provided when a match was found between an image taken by the pedestrian and an image stored in the database BIB005 . The database and the image processing could be hosted locally or remotely on a server, depending on processing power requirements BIB002 . A visual pedestrian navigation system independent of a server and of pre-existing databases usually needs integration with other positioning sensors to be functional. In such a system, monitoring the motion of features in consecutive images taken by the user device and integrating the information with measurements obtained with other sensors or a Global Navigation Satellite System (GNSS) receiver can be used to obtain the relative position of the user. Initial absolute position information can be used to reduce drift and other errors, as without an initial position the visual perception only provides information about the user's motion. Such server-independent systems have been proposed using visual-aided Inertial Measurement Unit (IMU) measurements. Other techniques, like the ones used in Simultaneous Localization and Mapping (SLAM) systems, produce a map of the unknown environment while simultaneously locating the user. Traditionally, mapping has been done using inertial sensors, though in recent years SLAM systems that also integrate a camera (visual SLAM systems) have been developed BIB003 . The magnitude of the motion of a figure in an image is dependent on the relative depth of the object within the captured scene, i.e. the distance of the object from the camera. Because the distance of objects from the camera in the environment is usually unknown, a scale problem arises, and different methods for overcoming it have been used. Tools for resolving the distance, like laser rangefinders, have been integrated with a camera BIB008 . The requirement for carrying special equipment reduces the applicability of this method for pedestrian navigation, especially for blind users. Another approach is the use of computer vision algorithms to detect artificial landmarks with known indoor locations (georeferenced landmarks). Recently, indoor navigation systems have been proposed which use computer vision to detect and decode fiduciary markers in real time, using standard camera phones. Among the most commonly used markers are 2-D barcodes. The barcode provides a unique ID and a fixed-size template, which may be used to estimate the pose of the viewer BIB004 . With such markers, a standard smartphone may be used without the need to carry any extra equipment. Once the marker is in the camera's field of vision, the user can receive a warning about their relative bearing to the marker, as well as an approximate distance.
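Because a fiduciary marker has a known physical size, its apparent size in the image fixes its distance under a pinhole camera model, and its horizontal offset from the image centre gives a bearing. The sketch below illustrates this idea only: the focal length, marker size and pixel values are invented, and real marker libraries recover a full six-degree-of-freedom pose rather than just range and bearing.

```python
import math

def marker_range_bearing(pixel_width, pixel_cx, marker_width_m,
                         focal_px, image_cx):
    """Range and bearing to a marker of known physical width.

    Pinhole model: apparent width (px) = focal_px * real width / distance.
    """
    distance = focal_px * marker_width_m / pixel_width
    bearing = math.degrees(math.atan2(pixel_cx - image_cx, focal_px))
    return distance, bearing

# A 0.20 m marker imaged 80 px wide, centred 120 px right of the image
# centre (640 px), with an 800 px focal length -- all values illustrative.
d, b = marker_range_bearing(80, 760, 0.20, 800, 640)
print(f"about {d:.1f} m away, {b:.1f} degrees to the right")  # 2.0 m, 8.5 deg
```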
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> 3D sensing <s> This paper presents a visual odometer system using stereo cameras for pedestrian navigation. Corner detection, stereo matching, triangulation, tracking, and robust ego-motion estimation are used for data processing. The outcome is the estimated incremental ego-motion of the stereo cameras. The problems of implementing the system on a pedestrian are stated. The first problem is image feature motion. The motion of image features is the result of the motion of the stereo cameras. In the case that the feature belongs to an independent moving object, the movement of the feature is the result of the motion of the cameras together with the motion of the feature itself. Hence, a novel robust ego-motion estimation algorithm must be utilized to eliminate outliers, which are independent moving features, mismatched features in the stereo matching step and incorrectly assigned features in the tracking step. Secondly, the features collected on a walking pedestrian result in a winding trajectory, which may easily fail the tracking algorithm. In this paper, we introduce a new method based on the knowledge of gait analysis to capture images at the same stage of the walking cycle. This leads to a less winding trajectory, which can be tracked without increasing the order and computational cost of the tracker. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> 3D sensing <s> In order to supplement the traditional aids for the visually impaired, many different technologies are being explored to provide more accurate and useful information. In particular, vision systems generate visual information that can be used to provide guidance to the visually impaired. This paper presents a 1D signal matching algorithm for stereo vision correlation as well as an embedded system that provides obstacle distance estimation to the user. <s> BIB002 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> 3D sensing <s> In this paper, we present a walking guidance system for visually impaired pedestrians. The system has been designed to help the visually impaired by responding intelligently to various situations that can occur in unrestricted natural outdoor environments when walking and finding the destinations. It involves the main functions of people detection, text recognition, and face recognition. In addition, sophisticated functions of walking path guidance using a Differential Global Positioning System, obstacle detection using a stereo camera, and a voice user interface are included. In order to operate all functions concurrently, we develop approaches in real situations and integrate them. Finally, we experiment on a prototype system under natural environments in order to verify our approaches. The results show that our approaches are applicable to real situations. <s> BIB003 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> 3D sensing <s> RGB-D cameras are novel sensing systems that capture RGB images along with per-pixel depth information. In this paper we investigate how such cameras can be used in the context of robotics, specifically for building dense 3D maps of indoor environments. Such maps have applications in robot navigation, manipulation, semantic mapping, and telepresence.
We present RGB-D Mapping, a full 3D mapping system that utilizes a novel joint optimization algorithm combining visual features and shape-based alignment. Visual and depth information are also combined for view-based loop closure detection, followed by pose optimization to achieve globally consistent maps. We evaluate RGB-D Mapping on two large indoor environments, and show that it effectively combines the visual and shape information available from RGB-D cameras. <s> BIB004 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> 3D sensing <s> Computer stereo vision is an important technique for robotic navigation and other mobile scenarios where depth perception is needed, but it usually requires two cameras with a known horizontal displacement. In this paper, we present a solution for mobile devices with just one camera, which is a first step towards making computer stereo vision available to a wide range of devices that are not equipped with stereo cameras. We have built a prototype using a state-of-the-art mobile phone, which has to be manually displaced in order to record images from different lines of sight. Since the displacement between the two images is not known in advance, it is measured using the phone's inertial sensors. We evaluated the accuracy of our single-camera approach by performing distance calculations to everyday objects in different indoor and outdoor scenarios, and compared the results with those of a stereo camera phone. As the main advantage of a single moving camera is the possibility of varying its relative position between taking the two pictures, we investigated the effect of different camera displacements on the accuracy of distance measurements. <s> BIB005 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> 3D sensing <s> The sheer volume of data generated by depth cameras provides a challenge to process in real time, in particular when used for indoor mobile robot localization and navigation. We introduce the Fast Sampling Plane Filtering (FSPF) algorithm to reduce the volume of the 3D point cloud by sampling points from the depth image, and classifying local grouped sets of points as belonging to planes in 3D (the "plane filtered" points) or points that do not correspond to planes within a specified error margin (the "outlier" points). We then introduce a localization algorithm based on an observation model that down-projects the plane filtered points onto 2D, and assigns correspondences for each point to lines in the 2D map. The full sampled point cloud (consisting of both plane filtered as well as outlier points) is processed for obstacle avoidance for autonomous navigation. All our algorithms process only the depth information, and do not require additional RGB data. The FSPF, localization and obstacle avoidance algorithms run in real time at full camera frame rates (30 Hz) with low CPU requirements (16%). We provide experimental results demonstrating the effectiveness of our approach for indoor mobile robot localization and navigation. We further compare the accuracy and robustness in localization using depth cameras with FSPF vs. alternative approaches that simulate laser rangefinder scans from the 3D data.
<s> BIB006 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> 3D sensing <s> Consumer-grade range cameras such as the Kinect sensor have the potential to be used in mapping applications where accuracy requirements are less strict. To realize this potential, insight into the geometric quality of the data acquired by the sensor is essential. In this paper we discuss the calibration of the Kinect sensor, and provide an analysis of the accuracy and resolution of its depth data. Based on a mathematical model of depth measurement from disparity, a theoretical error analysis is presented, which provides an insight into the factors influencing the accuracy of the data. Experimental results show that the random error of depth measurement increases with increasing distance to the sensor, and ranges from a few millimeters up to about 4 cm at the maximum range of the sensor. The quality of the data is also found to be influenced by the low resolution of the depth measurements. <s> BIB007 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> 3D sensing <s> This paper presents a system which extends the use of the traditional white cane by the blind for navigation purposes in indoor environments. Depth data of the scene in front of the user is acquired using the Microsoft Kinect sensor, which is then mapped into a pattern representation. Using neural networks, the proposed system uses this information to extract relevant features from the scene, enabling the detection of possible obstacles along the way. The results show that the neural network is able to correctly classify the type of pattern presented as input. <s> BIB008 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> 3D sensing <s> In this paper, we present a novel approach for aerial obstacle detection (e.g., branches or awnings) using a 3-D smartphone in the context of assisting visually impaired (VI) people. These kinds of obstacles are especially challenging because they cannot be detected by the walking stick or the guide dog. The algorithm captures the 3-D data of the scene through stereo vision. To our knowledge, this is the first work that presents a technology able to obtain real 3-D measures with smartphones in real time. The orientation sensors of the device (magnetometer and accelerometer) are used to approximate the walking direction of the user, in order to look for the obstacles only in that direction. The obtained 3-D data are compressed and then linearized for detecting the potential obstacles. Potential obstacles are tracked in order to accumulate enough evidence to alert the user only when a real obstacle is found. In the experimental section, we show the results of the algorithm in several situations using real data, with the help of VI users. <s> BIB009 | Distance is one of the most important aspects of navigation, as it is used to avoid collisions or recognize nearby objects. The way human vision uses different perspectives of the same scene to create a three-dimensional perception of the world inspired the use of multiple cameras to model/recognize the world in three dimensions. When a stereo camera is used, the distance to objects may be estimated using triangulation BIB001 . In the case of stereovision, the distance between the two cameras, called the baseline, affects the accuracy of the motion obtained from the images.
The farther the two cameras are from each other, the better the accuracy will be BIB005 . Stereovision may be used to obtain 3D range information, and area correlation methods can be used for approximate depth information. This information has been successfully used in combination with pedestrian detection models BIB003 . Methods using genetic algorithms have also been used to perform stereovision correlation and generate dense disparity maps. These disparity maps, in turn, provide the user with rough distance estimates, allowing them to navigate through the environment BIB002 . Simpler approaches use one relative view (right or left camera) and a depth map (from the stereovision equipment) to perform fuzzy-based clustering segmentation of the scene into object clusters. Then, knowing the clusters' locations, it is possible to detect near and far obstacles and feed this information to the user. The detection of changes in a 3D space based on fusing range data and image data captured by the cameras may also be used to create a 3D representation of the surrounding space that can be transmitted to the user through an appropriate interface, namely a haptic one. A short-term depth map computed of the user's immediate environment may be used to classify the path/scene as having (or not) any immediate obstacles, whether ground-based or aerial, and to indicate their relative position (left/right). Recently, 3D vision sensors have evolved considerably and have been applied in several popular devices such as smartphones and game consoles, greatly reducing their cost. Stereovision has been successfully applied to mobile devices (smartphones), allowing the structure of the environment to be estimated and some kind of obstacle classification to be performed BIB009 . Theoretically, stereovision camera phones can be used to extract the same type of information as other standard stereovision systems. In the case of smartphones, the main limitation is their low processing power in terms of real-time execution, although it has increased significantly in recent years. However, real-world scenes are usually very structured, and real-time obstacle classification is still only used to work as a virtual stick, or white cane (not replacing it entirely, for safety reasons). In many cases, stereoscopic vision has been replaced by the Microsoft Kinect sensor. This led to the mass use of these sensors in scientific research with good results BIB004 BIB006 BIB007 . The Kinect sensor includes a depth sensor and an RGB camera. The depth sensor is composed of an infrared laser source that projects non-visible light with a coded pattern, combined with a monochromatic CMOS image sensor that captures the reflected light. The pattern received by the image sensor is a deformed version of the original pattern projected by the laser source, deformed by the objects in the scene. The algorithm that deciphers the light coding generates a depth image representing the scene. Using machine learning techniques, such as neural networks, to analyze depth images obtained from the Microsoft Kinect sensor enables the recognition of pre-defined features/patterns of the surrounding environment BIB008 .
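The depth computation itself follows directly from the triangulation geometry: for a rectified stereo pair, depth = focal length × baseline / disparity. The sketch below is an illustration only (the image file names, focal length and baseline are invented), using OpenCV's block matcher as one example of the area correlation methods mentioned above to produce a dense disparity map and convert it to metric depth.

```python
import cv2
import numpy as np

# Hypothetical rectified stereo pair; any calibrated left/right images work.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching (area correlation) produces a dense disparity map,
# returned by OpenCV in fixed-point units of 1/16 pixel.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

FOCAL_PX = 700.0    # illustrative focal length in pixels
BASELINE_M = 0.12   # illustrative 12 cm baseline

valid = disparity > 0                       # non-positive = no match
depth_m = FOCAL_PX * BASELINE_M / disparity[valid]
print(f"nearest matched point ~{depth_m.min():.2f} m away")
```

The formula also shows why a wider baseline improves accuracy: for the same depth, the disparity (and hence the signal being measured) grows with the baseline.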
Generally, in terms of the contribution that data extracted from computer-vision pattern recognition can make to location systems, whether using stereovision or other image-based sensors like the Kinect, distance can be estimated and, in combination with data from pattern/feature detection and an appropriate geographic information system, can contribute to assessing the location of the user. In this context, the data needed for vision-based localization must also be present in the geographic information system used. The geographic information system is a central element in providing any type of location-based service, and its importance is discussed further in this paper.
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> This paper presents the incorporation of certain human vision properties in the image processing methodologies applied in a vision substitutive system for the blind. The prototype of the system has a digital video camera fixed in a headgear, stereo earphones and a laptop computer, interconnected. The processing of the captured image is designed to mimic human vision. It involves lateral inhibition, which is developed using a Feed Forward Neural Network (FFNN), and domination of the object properties with suppression of background by means of a Fuzzy-based Image Processing System (FLIPS). The processed image is mapped to stereo acoustic signals sent to the earphones. The sound is generated using a non-linear frequency-incremental sine wave. The sequence of the scanning to construct the acoustic signal is designed to produce stereo signals, which aids in locating the object on the horizontal axis. Frequency variation implies the location of the object on the vertical axis. The system was tested with a blind volunteer, and his suggestions on the formatting, pleasantness and discrimination of the sound patterns were also considered. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> People with severe visual impairment need a means of remaining oriented to their environment as they move through it. A series of indoor and outdoor trials using a variety of technologies and interfaces led to the development and evaluation of three promising wearable orientation interfaces: a virtual sonic beacon, speech output, and a shoulder-tapping system. Street crossing was used as a critical test situation in which to evaluate these interfaces. The shoulder-tapping system was found most universally usable. Results indicated that, given the great variety of co-morbidities within this population, which comprises mostly older persons, optimal performance and flexibility may best be obtained in a design that combines the best elements of both the speech and shoulder-tapping interfaces. <s> BIB002 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> In this paper, we present an object detection and classification method for OpenEyes-II. OpenEyes-II is a walking guidance system that helps the visually impaired to respond naturally to various situations that can occur in unrestricted natural outdoor environments during walking and reaching the destination. Object detection and classification is requisite for implementing obstacle and face detection, which are major parts of a walking guidance system. It can discriminate pedestrians from obstacles, and extract candidate regions for face detection and recognition. We have used stereo-based segmentation and SVM (Support Vector Machines), which have superior classification performance in binary classification cases such as object detection. The experiments on a large number of street scenes demonstrate the effectiveness of the proposed method. <s> BIB003 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> In this paper, we present a real-time pedestrian detection method in outdoor environments.
It is necessary for pedestrian detection to implement obstacle and face detection, which are major parts of a walking guidance system for the visually impaired. It detects foreground objects on the ground, discriminates pedestrians from other non-interest objects, and extracts candidate regions for face detection and recognition. For effective real-time pedestrian detection, we have developed a method using stereo-based segmentation and the SVM (Support Vector Machines), which works particularly well in binary classification problems (e.g. object detection). We used vertical edge features extracted from arms, legs and torso. In our experiments, test results on a large number of outdoor scenes demonstrated the effectiveness of the proposed pedestrian detection method. <s> BIB004 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> In spite of the impressive advances related to retinal prostheses, there is no imminent promise of making them available soon with realistic performance to help blind persons navigate. In our new project, we are designing a Bionic Eyeglass that provides wearable TeraOps visual computing power to guide visually impaired people in their daily life. Detection and recognition of signs and displays in real, noisy environments is a key element in many functions of the Bionic Eyeglass. This paper describes spatial-temporal analogic cellular algorithms used for localizing signs and displays, and for recognizing the numbers they contain. <s> BIB005 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> Urban intersections are the most dangerous parts of a blind or visually impaired person's travel. To address this problem, this paper describes the novel "Crosswatch" system, which uses computer vision to provide information about the location and orientation of crosswalks to a blind or visually impaired pedestrian holding a camera cell phone. A prototype of the system runs on an off-the-shelf Nokia camera phone in real time, which automatically takes a few images per second, uses the cell phone's built-in computer to analyze each image in a fraction of a second and sounds an audio tone when it detects a crosswalk. Tests with blind subjects demonstrate the feasibility of the system and its ability to provide useful crosswalk alignment information under real-world conditions. <s> BIB006 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> Urban intersections are the most dangerous parts of a blind or visually impaired person's travel. To address this problem, this paper describes the novel "Crosswatch" system, which uses computer vision to provide information about the location and orientation of crosswalks to a blind or visually impaired pedestrian holding a camera cell phone. A prototype of the system runs on an off-the-shelf Nokia N95 camera phone in real time, which automatically takes a few images per second, analyzes each image in a fraction of a second and sounds an audio tone when it detects a crosswalk. Real-time performance on the cell phone, whose computational resources are limited compared to the type of desktop platform usually used in computer vision, is made possible by coding in Symbian C++. Tests with blind subjects demonstrate the feasibility of the system.
<s> BIB007 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> We describe a wayfinding system for blind and visually impaired persons that uses a camera phone to determine the user's location with respect to color markers, posted at locations of interest (such as offices), which are automatically detected by the phone. The color marker signs are specially designed to be detected in real time in cluttered environments using computer vision software running on the phone; a novel segmentation algorithm quickly locates the borders of the color marker in each image, which allows the system to calculate how far the marker is from the phone. We present a model of how the user's scanning strategy (i.e. how he/she pans the phone left and right to find color markers) affects the system's ability to detect color markers given the limitations imposed by motion blur, which is always a possibility whenever a camera is in motion. Finally, we describe experiments with our system tested by blind and visually impaired volunteers, demonstrating their ability to reliably use the system to find locations designated by color markers in a variety of indoor and outdoor environments, and elucidating which search strategies were most effective for users. <s> BIB008 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> The ability to gain visual information from the environment can be of utmost importance for visually impaired and blind people. Our experimental system, consisting of a cell phone and a compact cellular visual computer, is able to detect and recognize objects and understand basic events around the user in predefined situations to help them in everyday tasks. We developed algorithms for two important new tasks: pedestrian crosswalk detection and identification of gender pictograms. <s> BIB009 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> Over the last decades, a variety of portable or wearable navigation systems have been developed to assist visually impaired people during navigation in known or unknown, indoor or outdoor environments. There are three main categories of these systems: electronic travel aids (ETAs), electronic orientation aids (EOAs), and position locator devices (PLDs). This paper presents a comparative survey among portable/wearable obstacle detection/avoidance systems (a subcategory of ETAs) in an effort to inform the research community and users about the capabilities of these systems and about the progress in assistive technology for visually impaired people. The survey is based on various features and performance parameters of the systems that classify them in categories, giving qualitative-quantitative measures. Finally, it offers a ranking, which will serve only as a reference point and not as a critique of these systems. <s> BIB010 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> We have built an arm-navigation assisting system for a visually impaired person (user) to reach an object on the table, where optical tracking of markers attached both to the objects and to his arm is used in order to augment his sight. The system helps him by giving spatial information of the workspace so that he creates a cognitive map of the workspace.
For this purpose, the degrees of congestion of the workspace must be conveyed to the user. Starting from the description of the assisting system, we propose in this paper a method of judging the degrees of congestion of the workspace around the arm. There are five of them: from "narrow" to "broad," which are determined by using well-established Neural Network techniques on the basis of the spatial data obtained by the Distance Field Model (DFM) representation of the workspace. Defining spaciousness by an entropy-like measure based on the DFM data is also proposed separately. <s> BIB011 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> Orientation and mobility are tremendous problems for Blind people. Assistive technologies based on the Global Positioning System (GPS) could provide them with a remarkable autonomy. Unfortunately, GPS accuracy, Geographical Information System (GIS) data and map-matching techniques are adapted to vehicle navigation only, and fail in assisting pedestrian navigation, especially for the Blind. In this paper, we designed an assistive device for the Blind based on adapted GIS, and fusion of GPS and vision-based positioning. The proposed assistive device may improve user positioning, even in urban environments where GPS signals are degraded. The estimated position would then be compatible with assisted navigation for the Blind. Interestingly, the vision module may also answer the needs of the Blind by providing them with situational awareness (localizing objects of interest) along the path. Note that the solution proposed for positioning could also enhance the localization of autonomous robots or vehicles. <s> BIB012 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> In this paper, machine learning and geometric computer vision are combined for the purpose of automatically reading bus line numbers with a smartphone. This can prove very useful to improve the autonomy of visually impaired people in urban scenarios. The problem is a challenging one, since standard geometric image matching methods fail due to the abundance of distractors, occlusions, illumination changes, highlights and specularities, shadows, and perspective distortions. The problem is solved by locating the main geometric entities of the bus facade through a cascade of classifiers, and then refining the matching with robust geometric matching. The method works in real time and, as experimental results show, has a good performance in terms of recognition rate and reliability. <s> BIB013 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> Microsoft's Kinect 3-D motion sensor is a low-cost 3D camera that provides color and depth information of indoor environments. In this demonstration, the functionality of this fun-only camera accompanied by an iPad's tangible interface is targeted to the benefit of the visually impaired. A computer-vision-based framework for real-time object localization and audio description is introduced. Firstly, objects are extracted from the scene and recognized using feature descriptors and machine-learning. Secondly, the recognized objects are labeled by instrument sounds, whereas their position in 3D space is described by virtual space sources of sound.
As a result, the scene can be heard and explored while finger-triggering the sounds within the iPad, on which a top-view of the objects is mapped. This enables blindfolded users to build a mental occupancy grid of the environment. The approach presented here brings the promise of efficient assistance and could be adapted as an electronic travel aid for the visually-impaired in the near future. <s> BIB014 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> A vibrotactile array is a promising human computer interface which could display graphical information to users in a tactile form. This paper presents the design and testing of an image contour display system with a vibrotactile array. The tactile image display system is attached to the back of the user. It converts visual graphics into 2D tactile images and allows subjects to feel the contours of objects through vibration stimulus. The system consists of a USB camera, 48 (6×8) vibrating motors and an embedded control system. The image is captured by the camera and the 2D contour is extracted and transformed into vibrotactile stimuli using a temporal-spatial dynamic coding method. Preliminary experiments were carried out and the optimal parameters of the vibrating time and duration were explored. To evaluate the feasibility and robustness of this vibration mode, letters were also tactilely displayed and the recognition rate for alphabet letters was investigated. It was shown that under the condition of no pre-training for the subjects, the recognition rate was 82%. Such a recognition rate is higher than that of the scanning mode (47.5%) and the improved handwriting mode (76.8%). The results indicated that the proposed method was efficient in conveying the contour information to the visually impaired by means of vibrations. <s> BIB015 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> In this paper we introduce Audible Vision, a system that can help blind and visually impaired users navigate in large indoor open spaces. The system uses computer vision to estimate the location and orientation of the user, and enables the user to perceive his/her relative position to a landmark through 3D audio. Testing shows that Audible Vision can work reliably in a real-life, ever-changing environment crowded with people. <s> BIB016 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> Independent travel is a well-known challenge for blind and visually impaired persons. In this paper, we propose a proof-of-concept computer vision-based wayfinding aid for blind people to independently access unfamiliar indoor environments. In order to find different rooms (e.g. an office, a lab, or a bathroom) and other building amenities (e.g. an exit or an elevator), we incorporate object detection with text recognition. First we develop a robust and efficient algorithm to detect doors, elevators, and cabinets based on their general geometric shape, by combining edges and corners. The algorithm is general enough to handle large intra-class variations of objects with different appearances among different indoor environments, as well as small inter-class differences between different objects such as doors and door-like cabinets. Next, in order to distinguish intra-class objects (e.g.
an office door from a bathroom door), we extract and recognize text information associated with the detected objects. For text recognition, we first extract text regions from signs with multiple colors and possibly complex backgrounds, and then apply character localization and topological analysis to filter out background interference. The extracted text is recognized using off-the-shelf optical character recognition (OCR) software products. The object type, orientation, location, and text information are presented to the blind traveler as speech. <s> BIB017 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> We develop a novel camera-based computer vision technology to automatically recognize banknotes to assist visually impaired people. Our banknote recognition system is robust and effective with the following features: 1) high accuracy: high true recognition rate and low false recognition rate; 2) robustness: handles a variety of currency designs and bills in various conditions; 3) high efficiency: recognizes banknotes quickly; and 4) ease of use: helps blind users to aim the target for image capture. To make the system robust to a variety of conditions including occlusion, rotation, scaling, cluttered background, illumination change, viewpoint variation, and worn or wrinkled bills, we propose a component-based framework by using speeded up robust features (SURF). Furthermore, we employ the spatial relationship of matched SURF features to detect if there is a bill in the camera view. This process largely alleviates false recognition and can guide the user to correctly aim at the bill to be recognized. The robustness and generalizability of the proposed system are evaluated on a dataset including both positive images (with U.S. banknotes) and negative images (no U.S. banknotes) collected under a variety of conditions. The proposed algorithm achieves 100% true recognition rate and 0% false recognition rate. Our banknote recognition system is also tested by blind users. <s> BIB018 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> This paper proposes a novel assistive system for the visually impaired. The system is composed of a Microsoft Kinect sensor, keypad-type controller, tactile device, laptop computer and so on. The system can recognize three-dimensional objects from depth data generated by the Kinect sensor, and inform visually impaired users not only about the existence of objects, but also about their classes such as chairs and upward stairs. Ordinarily, the system works as a conventional white cane. When a user instructs the system to find the object of a particular class, the system executes the recognition scheme that is designed to find the instructed object. If the object is found in the field of view of the Kinect sensor, the tactile device provides vibration feedback. The recognition schemes are applied to actual scenes. The experimental results indicate that the system is promising as means of helping the visually impaired find the desired objects. <s> BIB019 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> The aim of this paper is to present a service for blind and people with low vision to assist them to cross the street independently. 
The presented approach provides the user with significant information, such as the detection of the pedestrian crossing signal from any point of view, an indication of when the crossing signal light is green, the detection of dynamic and fixed obstacles, predictions of the movement of fellow pedestrians, and information on objects which may intersect the user's path. Our approach is based on capturing multiple frames using a depth camera which is attached to a user's headgear. Currently a testbed system is built on a helmet and is connected to a laptop in the user's backpack. In this paper, we discuss the efficiency of using the Speeded-Up Robust Features (SURF) algorithm for object recognition for the purpose of assisting blind people. The system predicts the movement of objects of interest to provide the user with information on the safest path to navigate and information on the surrounding area. Evaluation of this approach on real sequence video frames yields 90% accuracy for human detection and more than 80% for recognition of other related objects. <s> BIB020 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> Abstract Object detection and recognition, along with interpretation, can be considered minimum survival requirements for all creatures; human beings, in particular, owe much of their success to their ability to interpret their surroundings. Automatic interpretation of objects, and reaction to events, can therefore make an environment a better place to live. This paper implements a method to track and recognize objects in the surveillance area of visually impaired people. In order to survive in the real world, visually impaired people have to be aware of the environment. They need some assistance in order to move from one place to another in day-to-day life, whether in a dependent manner with the help of others, or in an independent manner with the help of canes or trained guide dogs. In both cases, the significant objective is to detect the obstacle in front of them and avoid it while moving. With the advent of electronic technologies, self-assistive devices have been made to help them. The system should be able to report the location, distance and direction of items in the room such as equipment, furniture, doors and even other users. It must be a reliable system that minimizes the impact of installation and maintenance. A great number of benefits are realized from the implementation of such systems, such as greater safety and, eventually, enhanced quality of life. <s> BIB021 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> Abstract Assisting the visually impaired along their navigation path is a challenging task which has drawn the attention of several researchers. A lot of techniques based on RFID, GPS and computer vision modules are available for blind navigation assistance. In this paper, we propose a depth estimation technique from a single image, based on a local depth hypothesis, devoid of any user intervention, and its application to assist the visually impaired. The ambient space ahead of the user is captured by a camera and the captured image is resized for computational efficiency. The obstacles in the foreground of the image are segregated using edge detection followed by morphological operations.
Then depth is estimated for each obstacle based on the local depth hypothesis. The estimated depth map is then compared with the reference depth map of the corresponding depth hypothesis, and the deviation of the estimated depth map from the reference depth map is used to retrieve the spatial information about the obstacles ahead of the user. <s> BIB022 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> In this work we describe the main features of software modules developed for Android smartphones that are dedicated to blind users. The main module can recognise and match scanned objects to a database of objects, e.g. food or medicine containers. The two other modules are capable of detecting major colours and locating the direction of the maximum brightness regions in the captured scenes. We conclude the paper with a short summary of the tests of the software aiding activities of daily living of a blind user. <s> BIB023 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> This paper proposes a novel concept for helping the visually impaired know what kind of object there is in an environment. This concept is implemented as a cane system that selects a target object based on a user's demand, recognizes the object from depth data obtained by a Microsoft Kinect sensor, and returns the recognition results via a tactile device. The proposed system is evaluated through a user study where one blindfolded subject actually uses the system to find chairs in an experimental environment. The experimental results indicate that the system is promising as a means of helping the visually impaired recognize objects. <s> BIB024 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> A computer vision-based wayfinding and navigation aid can improve the mobility of blind and visually impaired people, helping them travel independently. In this paper, we develop a new framework to detect and recognize stairs, pedestrian crosswalks, and traffic signals based on RGB-D (Red, Green, Blue, and Depth) images. Since both stairs and pedestrian crosswalks are characterized by a group of parallel lines, we first apply the Hough transform to extract the concurrent parallel lines based on the RGB (Red, Green, and Blue) channels. Then, the Depth channel is employed to recognize pedestrian crosswalks and stairs. The detected stairs are further identified as stairs going up (upstairs) and stairs going down (downstairs). The distance between the camera and stairs is also estimated for blind users. Furthermore, the traffic signs of pedestrian crosswalks are recognized. The detection and recognition results on our collected datasets demonstrate the effectiveness and efficiency of our proposed framework. <s> BIB025 | Visually impaired people often want more than just information about their location: they also need to relate their current location to the features of the surrounding environment. Orientation and mobility are essential skills for proper navigation. In this process, mobility, or micro-navigation, relates to obstacle detection and avoidance in the immediate physical environment.
Orientation, or macro-navigation, refers to the ability to create and maintain awareness of one's position in physical space relative both to the landmarks in the surrounding environment, whether they are points of interest (POI) or obstacles (from micro-navigation), and to the user's desired destination BIB002 . A wide range of systems and tools is available for enhancing the mobility of visually impaired individuals, of which the white cane and the guide dog are the most popular. The white cane is the simplest, cheapest and most reliable of these aids. However, it does not provide all the necessary contextual information, such as speed, volume and distances; this information is usually gathered by the eyes and is necessary for the perception and control of locomotion BIB010 . Several approaches have been developed over the last decades to address problems relevant to blind mobility and context awareness. They can be classified into two main categories. 'Electronic Travel Aids' (ETAs) are designed to improve mobility by detecting obstacles in the user's surroundings. 'Electronic Orientation Aids' (EOAs) aim to improve the blind user's autonomy by providing some degree of situational awareness and guidance in unknown environments BIB012 . Apart from a few implementations that use some of the location techniques described in the previous section, EOAs have, up to now, mainly been based on GNSS and location-based services. In recent years, however, computer vision techniques have successfully been used to provide contextual awareness and orientation indications. In general, these assistive orientation systems use computer vision to provide information ranging from the simple presence of obstacles, or the distinction between fixed and moving obstacles, to the recognition of specific objects in the captured image. In some cases, even the distance and relative displacement of the detected objects with respect to the user is provided, using depth information. Although very simple in their purpose, systems designed to inform the blind user about the existence of objects in his path (through the use of artificial vision sensors) use a wide range of techniques to analyze the image. Traditional image processing techniques can be used to detect the contours of objects in the scene BIB022 BIB015 . More advanced approaches use artificial intelligence techniques to detect obstacles in the captured image BIB001 and even to classify the scene, presenting basic forms of characterization/description of the environment, such as being very cluttered or relatively broad BIB011 . Other classification methods may provide information regarding the spatial distribution of the obstacles/objects in the scene BIB016 , achieving the overall objective of providing direct, specific orientation instructions and simple contextual awareness. More advanced systems, which apply object recognition algorithms to detect and recognize specific objects in the scene, go even further, trying to reduce the gap between sighted and non-sighted people. Using their natural sensors, sighted users not only detect the existence of objects and obstacles in their immediate surroundings, but are also able to recognize their attributes, such as color, shape and relative spatial orientation. The simplest approaches use markers placed at specific points of interest BIB008 .
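To make the contour-based approach concrete, the following short Python sketch (a minimal illustration only, not a reproduction of any of the cited systems) uses OpenCV to extract object contours from a single camera frame and report the image-plane position and size of the larger ones; the input file name and all thresholds are illustrative assumptions.

import cv2

frame = cv2.imread("frame.jpg")  # hypothetical frame from a wearable camera
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress sensor noise before edge detection
edges = cv2.Canny(blurred, 50, 150)  # Canny edge map; thresholds are illustrative
# OpenCV 4.x API: findContours returns (contours, hierarchy)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Keep only sizeable contours: very small ones are usually texture or noise, not obstacles.
for c in contours:
    if cv2.contourArea(c) < 500:
        continue
    x, y, w, h = cv2.boundingRect(c)
    cx = x + w // 2  # horizontal position, e.g., for a left/centre/right audio or tactile cue
    print("obstacle-like region at x=%d, size %dx%d" % (cx, w, h))

In a real aid, the loop's output would feed the audio or tactile interfaces discussed later, rather than being printed.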
When detected, these markers are used to estimate the user's location and, subsequently, the objects that are expected to be found in the scene. Additionally, it is also possible to inform the user about the distance and relative position to the marker (pose). However, most systems that use object recognition to provide contextual information try to locate and recognize natural objects in the scene, without the need for artificial markers placed in the infrastructure. As discussed in earlier subsections, the placement of markers/sensors in the infrastructure is costly and requires considerable maintenance. Given this fact, many assistive systems nowadays try to give the user information about the presence and orientation of natural objects in the scene, such as crosswalks BIB006 BIB007 BIB009 or text commonly found in places like buses or office doors BIB017 BIB005 BIB013 . Even the distinction between similar everyday objects that may be easily confused by blind users, like different bank notes BIB018 , or food and medicine containers BIB023 , can be incorporated into spatial orientation systems which use advanced computer vision techniques to provide spatial awareness through the recognition of natural objects. Although not specifically related to spatial orientation, the techniques used in these examples provide awareness of the physical items in the user's context, and the same techniques may be extended to the purpose of spatial awareness. Table 2 summarizes the features provided by the most common spatial orientation devices, as well as their availability in indoor vs. outdoor scenarios. With the recent advances in 3D vision and depth sensors, an entirely new kind of contextual input has become available to assistive systems for the visually impaired: depth information. Using feature descriptors and machine learning techniques, different objects can be extracted and classified BIB014 . These systems can recognize three-dimensional objects from the depth data and inform visually impaired users not only about the existence of objects but also about their class, such as chairs and upward stairs BIB024 BIB019 BIB025 , working similarly to a conventional white cane with an extended range. Some systems even incorporate the detection of, and distinction between, fixed and moving obstacles together with object recognition in one global solution, mostly for pedestrian detection and avoidance BIB003 BIB004 BIB020 BIB021 .
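As a simple illustration of how depth data can emulate a white cane with extended range, the sketch below (numpy only; the random array stands in for a frame from an RGB-D sensor such as the Kinect, and the 1 m warning distance is an invented threshold) splits the view into left/centre/right bands and reports the nearest surface in each.

import numpy as np

# Hypothetical 480x640 depth frame in metres, standing in for RGB-D sensor output.
depth = np.random.uniform(0.4, 5.0, size=(480, 640)).astype(np.float32)

columns = np.array_split(np.arange(depth.shape[1]), 3)  # left / centre / right thirds
for cols, label in zip(columns, ("left", "centre", "right")):
    nearest = float(depth[:, cols].min())  # closest surface in this direction
    if nearest < 1.0:  # illustrative warning distance
        print("%s: obstacle at %.2f m" % (label, nearest))
    else:
        print("%s: clear (nearest surface %.2f m)" % (label, nearest))

|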
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Navigation <s> This paper presents the research for the development of a new travel aid to increase the independent mobility of blind and elderly travellers. This aid will build on the technologies of geographical information systems (GIS) and the Global Positioning System (GPS). The MOBIC Travel Aid (MOTA) consists of two interrelated components : the MOBIC Pre-journey System (MOPS) to assist users in planning journeys and the MOBIC Outdoor System (MooDs) to execute these plans by providing users with orientation and navigation assistance during journeys. The MOBIC travel aid is complementary to primary mobility aids such as the long cane or guide dog. Results of a study of user requirements are presented and their implications for the initial design of the system are discussed. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Navigation <s> We describe some of the results of our program of basic and applied research on navigating without vision. One basic research topic that we have studied extensively is path integration, a form of navigation in which perceived self-motion is integrated over time to obtain an estimate of current position and orientation. In experiments on pathway completion, one test of path integration ability, we have found that subjects who are passively guided over the outbound path without vision exhibit significant errors when attempting to return to the origin but are nevertheless sensitive to turns and segment lengths in the stimulus path. We have also found no major differences in path integration ability among blind and sighted populations. A model we have developed that attributes errors in path integration to errors in encoding the stimulus path is a good beginning toward understanding path integration performance. In other research on path integration, in which optic flow information was manipulated in addition to the proprioceptive and vestibular information of nonvisual locomotion, we have found that optic flow is a weak input to the path integration process. In other basic research, our studies of auditory distance perception in outdoor environments show systematic underestimation of sound source distance. Our applied research has been concerned with developing and evaluating a navigation system for the visually impaired that uses three recent technologies: the Global Positioning System, Geographic Information Systems, and virtual acoustics. Our work shows that there is considerable promise of these three technologies in allowing visually impaired individuals to navigate and learn about unfamiliar environments without the assistance of human guides. <s> BIB002 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Navigation <s> The position-tracking accuracy of a location-aware mobile system can change dynamically as a function of the user’s location and other variables specific to the tracker technology used. This is especially problematic for mobile augmented reality systems, which ideally require extremely precise position tracking for the user’s head, but which may not always be able to achieve the necessary level of accuracy. 
While it is possible to ignore variable positional accuracy in an augmented reality user interface, this can make for a confusing system; for example, when accuracy is low, virtual objects that are nominally registered with real ones may be too far off to be of use. To address this problem, we describe the early stages of an experimental mobile augmented reality system that adapts its user interface automatically to accommodate changes in tracking accuracy. Our system employs different technologies for tracking a user's position, resulting in a wide variation in positional accuracy: an indoor ultrasonic tracker and an outdoor real-time kinematic GPS system. For areas outside the range of both, we introduce a dead-reckoning approach that combines a pedometer and orientation tracker with environmental knowledge expressed in spatial maps and accessibility graphs. We present preliminary results from this approach in the context of a navigational guidance system that helps users to orient themselves in an unfamiliar environment. Our system uses inferencing and path planning to guide users toward targets that they choose. <s> BIB003 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Navigation <s> Walking is the most fundamental means of human transportation. Unlike travel by car, walking is not planar, but rather stereoscopic. We therefore developed a real navigation system for pedestrian point-to-point navigation. We propose herein a method of 3D pedestrian navigation, in which position detection is driven mainly by dead reckoning. The proposed method enables ubiquitous round-the-clock 3D positioning, even inside buildings or between tall buildings. In addition, pedestrian navigation is customized by changing the costs of the road network links. Finally, a positioning data accumulation system is implemented so that we can log tracks and easily incorporate new roads or attributes in the future. <s> BIB004 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Navigation <s> The Context sensitive Indoor Navigation System (CoINS) implements an architecture to develop context-aware indoor user guidance services and applications. This paper presents a detailed discussion on algorithms and architectural issues in building an indoor guidance system. We first start with the World Model and the required mapping to 2D for the process of path calculation and simplification. We also compare several algorithm optimizations applied in this particular context. The system provides the infrastructure to support different techniques of presenting the path and supporting user orientation to reach a certain destination in indoor premises. <s> BIB005 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Navigation <s> This paper describes path planning and following algorithms for use in indoor navigation for the blind and visually impaired. Providing indoor navigational assistance for this type of user presents additional challenges not faced by conventional guidance systems, due to the personal nature of the interactions. The algorithms are part of an overall Indoor Navigation Model that is used to provide assistance and guidance in unfamiliar indoor environments.
Path planning uses the A* and Dijkstra shortest-path algorithms to operate on an "Intelligent Map", which is based on a new data structure termed "cactus tree" and is predicated on the relationships between the different objects that represent an indoor environment. The paths produced are termed "virtual hand rails", which can be used to dynamically plan a path for a user within a region. The path following algorithm is based on dead reckoning, but incorporates human factors as well as information about the flooring and furnishing structures along the intended planned path. Experimental and simulation results show that the guiding/navigation problem becomes a divergent mathematical problem if the positional information offered by the positioning and tracking systems does not reach a certain requirement. This research explores the potential to design an application for the visually impaired even when, to date, positioning and tracking systems cannot offer the reliable position information that is highly required by this type of application. <s> BIB006 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Navigation <s> Indoor navigation technology is needed to support seamless mobility for the visually impaired. A small portable personal navigation device that provides current position, useful contextual wayfinding information about the indoor environment and directions to a destination would greatly improve access and independence for people with low vision. This paper describes the construction of such a device which utilizes a commercial Ultra-Wideband (UWB) asset tracking system to support real-time location and navigation information. Human trials were conducted to assess the efficacy of the system by comparing target-finding performance between blindfolded subjects using the navigation system for real-time guidance, and blindfolded subjects who only received speech information about their local surrounds but no route guidance information (similar to that available from a long cane or guide dog). A normal vision control condition was also run. The time and distance traveled were measured in each trial and a point-back test was performed after goal completion to assess cognitive map development. Statistically significant differences were observed between the three conditions in time and distance traveled, with the navigation system and the visual condition yielding the best results, and the navigation system dramatically outperforming the non-guided condition. <s> BIB007 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Navigation <s> This paper proposes a framework to enable intuitive navigation guidance for complex buildings which are huge in size and whose space boundaries have non-convex shapes, including non-navigable areas inside. Our approach utilizes a ‘topological’ way-finding method to generate paths. This can be done by means of the integration of a building information model (BIM) with our new algorithm to subdivide the spaces. The second main principle is to improve the visual information by using a new method to render all three-dimensional views possibly observed in a building beforehand. This has been realized by imaging services using a client-server architecture with supercomputer computation power.
<s> BIB008 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Navigation <s> This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision only rely on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information. <s> BIB009 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Navigation <s> Whereas outdoor navigation systems typically rely upon GPS, indoor systems have to rely upon different techniques for localizing the user, as GPS signals cannot be received indoors. Over the past decade various indoor navigation systems have been developed. This paper provides a comprehensive overview of existing indoor navigation systems and analyzes the different techniques used for: (1) locating the user; (2) planning a path; (3) representing the environment; and (4) interacting with the user. Our survey identifies a number of research issues that could facilitate large-scale deployment of indoor navigation systems. <s> BIB010 | The term ''navigation'' defines the behavior of moving toward a destination, with all the motor, sensory, and cognitive processes that it implies. Downs and Stea define navigation as ''the process of solving one class of spatial problems, the movement of a person from one location on the earth's surface to another''. They divided the process into four tasks: orienting oneself in the environment, choosing the route, keeping on track and recognizing that the destination has been reached. Human navigation is performed using a combination of mobility and orientation. In general, human navigation in indoor and outdoor environments is performed by measuring the distance and relative orientation to one, or multiple, reference points (context). People employ either path integration, orienting themselves relative to a starting position, or landmark-based navigation, where they rely upon perceptual cues together with an external or cognitive map. Humans may also use a combination of both path integration and landmark-based navigation BIB002 . A number of features in the environment can be used to help determine the location. To maintain a sense of where they are, humans rely on their estimates of the direction and velocity of movement obtained from their vestibular, proprioceptive, and kinesthetic senses, here referred to as path integration BIB009 .
In the case of path integration, a single reference point is used throughout the navigation, and the location is estimated by adding up all the changes in position and orientation. In the case of landmark-based navigation, users move from one reference point (landmark) to the next as they navigate the environment, considering the relative position of the landmarks; here, a physical or cognitive map of the environment is used. By periodically measuring the displacement and changes in orientation (based on heading and motion) and combining them with the distance and orientation relative to a reference point, such as a landmark, users can estimate their new location and orientation while navigating in an environment. A powerful assistive device combines both micro-navigation (sensing the immediate environment) and macro-navigation (reaching a remote destination) functionalities. The micro-navigation functions serve to restore a set of sensorimotor behaviors based on visual object localization (context), while the macro-navigation functions provide the user with global orientation and navigation skills. All navigation systems have three functional components: an input module to determine the location and orientation in space, a spatial database of the environment, and an interface, which delivers information to the user. Location information is usually obtained using the individual location technologies discussed in a previous section, or by a fusion or combination of different inputs, including computer vision. These same three components are found in navigation systems designed for the visually impaired BIB007 . The location can be used both for planning the path and for providing contextual information. The advantage of using a navigation system to plan a route is that the path can be optimized over different candidate paths and specific user requirements, such as the shortest or safest path. In the case of individuals with vision impairments, a path that goes along walls reduces the chance of the user getting lost, and a path that avoids low ceilings is much safer BIB010 . In this context, a geographic information system (GIS) designed to enable all these assistive features must provide ways to store and deliver data of much broader extent than simple points of interest and POI categorization. An appropriate geographic system is a core element in any navigation system. Path planning algorithms use graphs or grids to represent the environment, and these elements must also be stored in the GIS. To plan a path using graph-based approaches, the environment is divided into sets of nodes and edges connecting these nodes. Edges connect nodes based on the environment map and on whether one node is accessible from the other. Each edge may have a weight assigned to it based on different path planning criteria. A graph-based approach has the advantage of creating nodes only where there are objects, and edges only where objects are accessible from each other. In complicated environments with many objects, however, the graph may become large and decrease the performance of the path planning algorithm. The weight associated with edges or cells plays an important role when customizing a path: for example, for a path that should avoid stairs, the edges with stairs receive higher weights, and edges with low ceilings receive higher weights when planning a path for individuals with visual impairments. Most current navigation systems use either Dijkstra's algorithm BIB003 BIB004 BIB008 BIB005 or A* BIB003 BIB006 BIB001 for path planning.
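The following self-contained Python sketch illustrates the graph-based planning just described: edge weights start from metric length, and hypothetical penalty factors inflate edges tagged with features such as stairs, so that Dijkstra's algorithm prefers an accessible route. The map, tags and penalty values are all invented for illustration.

import heapq

# Edges: (node_a, node_b, length_in_metres, feature_tag_or_None). Invented example map.
edges = [
    ("entrance", "hall", 10, None),
    ("hall", "stairs_top", 5, "stairs"),
    ("stairs_top", "office", 3, None),
    ("hall", "elevator", 12, None),
    ("elevator", "office", 4, None),
]
penalty = {"stairs": 5.0, "low_ceiling": 10.0}  # illustrative weight multipliers

graph = {}
for a, b, length, tag in edges:
    w = length * penalty.get(tag, 1.0)  # inflate the weight of hazardous edges
    graph.setdefault(a, []).append((b, w))
    graph.setdefault(b, []).append((a, w))

def dijkstra(graph, start, goal):
    # Classic Dijkstra over the weighted adjacency list; returns (cost, path).
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, w in graph[node]:
            if neighbour not in visited:
                heapq.heappush(queue, (cost + w, neighbour, path + [neighbour]))
    return float("inf"), []

print(dijkstra(graph, "entrance", "office"))  # routes via the elevator, avoiding the stairs

With all penalties set to 1.0 the planner returns the shorter stairs route instead, which is precisely the weight-based customization mechanism described above.

|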
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Interface <s> This paper presents incorporation of certain human vision properties in the image processing methodologies, applied in the vision substitutive system for human blind. The prototype of the system has digital video camera fixed in a headgear, stereo earphone and a laptop computer, interconnected. The processing of the captured image is designed as human vision. It involves lateral inhibition, which is developed using Feed Forward Neural Network (FFNN) and domination of the object properties with suppression of background by means of Fuzzy based Image Processing System (FLIPS). The processed image is mapped to stereo acoustic signals to the earphone. The sound is generated using non-linear frequency incremental sine wave. The sequence of the scanning to construct the acoustic signal is designed to produce stereo signals, which aids to locate the object in horizontal axis. Frequency variation implies the location of object in the vertical axis. The system is tested with blind volunteer and his suggestion in formatting, pleasantness and discrimination of sound pattern were also considered. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Interface <s> Our work is dealing with alternative interaction modes for visually impaired and blind people to use computers. The aim of the proposed approach is to exploit the human hearing capabilities to a better degree than this is done by customary screen-readers. A surrounding, three-dimensional audio interface is potentially increasing the information flow between a computer and the user. This paper presents a virtual audio reality (VAR) system which allows computer users to explore a virtual environment only by their sense of hearing. The used binaural audio rendering implements directional hearing and room acoustics via headphones to provide an authentic simulation of a real room. Users can freely move around using a joystick. The proposed application programming interface (API) is intended to ease the development of user applications for this VAR system. It provides an easy to use C++ interface to the audio rendering layer. The signal processing is performed by a digital signal processor (DSP). Besides the details of the technical realisation, this paper also investigates the user requirements for the target group. <s> BIB002 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Interface <s> Audio navigation interfaces have traditionally been studied (and implemented) using headphones. However, many potential users (especially those with visual impairments) are hesitant to adopt these emerging wayfinding technologies if doing so requires them to reduce their ability to hear environmental sounds by wearing headphones. In this study we examined the performance of the SWAN audio navigation interface using bone-conduction headphones (“bonephones”), which do not cover the ear. Bonephones enabled all participants to complete the navigation tasks with good efficiencies, though not immediately as effective as regular headphones. 
Given the functional success here, and considering that the spatialization routines were not optimized for bonephones (this essentially represents a worst-case scenario), the prospects are excellent for more widespread usage of bone conduction for auditory navigation, and likely for many other auditory displays. <s> BIB003 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Interface <s> The objective of this study is to improve the quality of life for the visually impaired by restoring their ability to self-navigate. In this paper we describe a compact, wearable device that converts visual information into a tactile signal. This device, constructed entirely from commercially available parts, enables the user to perceive distant objects via a different sensory modality. Preliminary data suggest that this device is useful for object avoidance in simple environments. <s> BIB004 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Interface <s> In an easel, a clamp for suspending sheets or other objects is formed with an elongated plate and a pair of brackets that support a bar. The brackets are particularly formed so that they incline downwardly towards the plate upon which they are mounted and the bar is arranged to be slidingly affixed to the brackets. The bar slides up and down and may grip objects placed between it and the plate. Ideally, the bar is provided with cushion means which provide the actual gripping action against the plate. <s> BIB005 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Interface <s> This keynote paper discusses the problem of outdoor mobility of the visually impaired and reviews key assistive technologies aiding the blind in independent travel. Space perception abilities important for mobility of the visually impaired are discussed first and related definitions and basic concepts such as: cognitive mapping, wayfinding and navigation are explained. The main mobility barriers the visually impaired encounter in every day life are pointed out. In this respect special attention is given to the information the blind traveller needs to be safer and more skilful in mobility. Also sensory substitution methods and interfaces for nonvisual presentation of the obstacles and communicating navigational data are addressed. Finally, the current projects under way and available technologies aiding the blind in key mobility tasks such as: obstacle avoidance, orientation, navigation and travel in urban environments are reviewed and discussed. <s> BIB006 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Interface <s> In this paper we introduce Audible Vision, a system that can help blind and visually impaired users navigate in large indoor open spaces. The system uses computer vision to estimate the location and orientation of the user, and enables the user to perceive his/her relative position to a landmark through 3D audio. Testing shows that Audible Vision can work reliably in real-life ever-changing environment crowded with people. <s> BIB007 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Interface <s> A vibrotactile array is a promising human computer interface which could display graphical information to users in a tactile form. 
This paper presents the design and testing of an image contour display system with a vibrotactile array. The tactile image display system is attached to the back of the user. It converts visual graphics into 2D tactile images and allows subjects to feel the contours of objects through vibration stimulus. The system consists of a USB camera, 48 (6×8) vibrating motors and an embedded control system. The image is captured by the camera and the 2D contour is extracted and transformed into vibrotactile stimuli using a temporal-spatial dynamic coding method. Preliminary experiments were carried out and the optimal parameters of the vibrating time and duration were explored. To evaluate the feasibility and robustness of this vibration mode, letters were also tactilely displayed and the recognition rate for alphabet letters was investigated. It was shown that under the condition of no pre-training for the subjects, the recognition rate was 82%. Such a recognition rate is higher than that of the scanning mode (47.5%) and the improved handwriting mode (76.8%). The results indicated that the proposed method was efficient in conveying the contour information to the visually impaired by means of vibrations. <s> BIB008 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Interface <s> This paper proposes a novel concept for helping the visually impaired know what kind of object there is in an environment. This concept is implemented as a cane system that selects a target object based on a user's demand, recognizes the object from depth data obtained by a Microsoft Kinect sensor, and returns the recognition results via a tactile device. The proposed system is evaluated through a user study where one blindfolded subject actually uses the system to find chairs in an experimental environment. The experimental results indicate that the system is promising as a means of helping the visually impaired recognize objects. <s> BIB009 | Accurate recognition and distinction between the contextual elements found in the environment, whether by computer vision or any other form of input, is of the highest importance for an EOA device. Interfacing with the user to provide information about the elements found in the scene is equally crucial, as the interpretation of the reality around the user directly influences his safety and, ultimately, the adoption of this kind of assistive technology. The most common ways of interfacing with an electronic assistive system for the blind nowadays are sonification, audio description and haptic interfaces; they are used both to deliver alerts about the physical elements detected and to convey and explain wayfinding instructions. Sonification is, by definition, the use of non-speech audio to convey information or perceptualize data. The use of acoustic (sound/sonification) patterns to provide this information to the user is very common among EOAs for the visually impaired BIB005 BIB001 . In some cases, sonification is even used to provide the relative position of the detected obstacles BIB007 . These systems use 3D audio (Fig. 2) to provide audio cues that can be perceived as if they were being generated by the detected landmark. The concept behind 3D audio is the use of different sound sources, located at different positions, to provide the feeling of directional hearing (Fig. 2). The most obvious advantage of adding spatial sound modeling to audio interfaces, over sequential techniques, is the natural perception: individuals without hearing impairment use their directional hearing for orientation at all times BIB002 . This kind of interface can be used to provide simple, yet immediately perceivable, cues about the bearing or relative position (pose) of an obstacle. The fact that blind people often rely on audio cues from the environment for orientation constrains the use of headphones for acoustic feedback; alternatives like bonephones are viable BIB003 . Audio description is subject to the same considerations as the sonification methods. As noted in the literature, ''one major issue to be considered in the design of an interface is whether a rich description of the scene, or only highly symbolic information, should be provided to the user.'' Another approach is to present the information about the obstacles detected in the image through the use of haptic interfaces BIB009 BIB008 BIB004 . 3D range data may be converted into a 2D vibrating array attached to the user's body, and with appropriate signal coding, 2D vibrating patterns can reproduce depth information. Haptic interfaces are also used in a form in which an array of pins works similarly to a Braille display. Some other, less usual, forms of interface exist only as investigational devices, not available for commercial use. One example consists of a camera (mounted in sunglasses), one transducer and a postage-stamp-size electrode array that is positioned on the top surface of the tongue. The recorded images are translated into gentle electrical signals and displayed on the tongue. The generated stimulation patterns reflect key features of the recorded images, like high-contrast objects and their movements. As a general consideration, any of the user's remaining sensory channels (tactile or acoustic) can be used. However, their use should be carefully considered, as it may interfere with other tasks that blind users cannot do without. The amount of information to be presented should be carefully considered as well, as the information capacity of the non-visual senses is much smaller than that of vision BIB006 . The cues provided by these interface channels represent the most common ways of interfacing with assistive devices for the blind, and provide the means to understand the generated information, whether for context description or for wayfinding.
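As a deliberately simplified illustration of sonification, the numpy sketch below maps an obstacle's bearing to stereo panning and its distance to pitch, producing a short two-channel cue; real 3D-audio interfaces use head-related transfer functions rather than simple panning, and every constant here is an assumption.

import numpy as np

def obstacle_cue(bearing_deg, distance_m, sr=44100, dur=0.3):
    # Render a stereo tone: pitch rises as the obstacle gets closer, and the tone
    # is panned towards the obstacle's bearing (-90 = hard left, +90 = hard right).
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    freq = 200.0 + 800.0 / max(distance_m, 0.25)  # nearer -> higher pitch (illustrative mapping)
    tone = np.sin(2.0 * np.pi * freq * t)
    pan = (bearing_deg + 90.0) / 180.0  # 0 = left channel only, 1 = right channel only
    left = tone * np.sqrt(1.0 - pan)  # constant-power panning
    right = tone * np.sqrt(pan)
    return np.stack([left, right], axis=1)  # (samples, 2) array for any audio backend

cue = obstacle_cue(bearing_deg=30, distance_m=1.5)  # obstacle slightly to the right, 1.5 m away
print(cue.shape)

|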
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership and then the user is asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee, large margin, and posterior probability-based. For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing the good architecture are provided for new and/or unexperienced user. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> Deep learning has been shown to achieve outstanding performance in a number of challenging real-world applications. However, most of the existing works assume a fixed set of labeled data, which is not necessarily true in real-world applications. Getting labeled data is usually expensive and time consuming. Active labelling in deep learning aims at achieving the best learning result with a limited labeled data set, i.e., choosing the most appropriate unlabeled data to get labeled. This paper presents a new active labeling method, AL-DL, for cost-effective selection of data to be labeled. AL-DL uses one of three metrics for data selection: least confidence, margin sampling, and entropy. The method is applied to deep learning networks based on stacked restricted Boltzmann machines, as well as stacked autoencoders. In experiments on the MNIST benchmark dataset, the method outperforms random labeling consistently by a significant margin. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> Recent advances in microscopy imaging and genomics have created an explosion of patient data in the pathology domain. Whole-slide images (WSIs) of tissues can now capture disease processes as they unfold in high resolution, recording the visual cues that have been the basis of pathologic diagnosis for over a century. Each WSI contains billions of pixels and up to a million or more microanatomic objects whose appearances hold important prognostic information. Computational image analysis enables the mining of massive WSI datasets to extract quantitative morphologic features describing the visual qualities of patient tissues. When combined with genomic and clinical variables, this quantitative information provides scientists and clinicians with insights into disease biology and patient outcomes. To facilitate interaction with this rich resource, we have developed a web-based machine-learning framework that enables users to rapidly build classifiers using an intuitive active learning process that minimizes data labeling effort. 
In this paper we describe the architecture and design of this system, and demonstrate its effectiveness through quantification of glioma brain tumors. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> We present a simple and yet effective approach that can incorporate rationales elicited from annotators into the training of any off-the-shelf classifier. We show that our simple approach is effective for multinomial naïve Bayes, logistic regression, and support vector machines. We additionally present an active learning method tailored specifically for the learning with rationales framework. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> "Wonderfully erudite, humorous, and easy to read." --KDNuggets In the world's top research labs and universities, the race is on to invent the ultimate learning algorithm: one capable of discovering any knowledge from data, and doing anything we want, before we even ask. In The Master Algorithm, Pedro Domingos lifts the veil to give us a peek inside the learning machines that power Google, Amazon, and your smartphone. He assembles a blueprint for the future universal learner--the Master Algorithm--and discusses what it will mean for business, science, and society. If data-ism is today's philosophy, this book is its bible. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> Interactive model analysis, the process of understanding, diagnosing, and refining a machine learning model with the help of interactive visualization, is very important for users to efficiently solve real-world artificial intelligence and data mining problems. Dramatic advances in big data analytics have led to a wide variety of interactive model analysis tasks. In this paper, we present a comprehensive analysis and interpretation of this rapidly developing area. Specifically, we classify the relevant work into three categories: understanding, diagnosis, and refinement. Each category is exemplified by recent influential work. Possible future research opportunities are also explored and discussed. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many fields. Shall we embrace deep learning as the key to all? Or, should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning, and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges, such as climate change and urbanization.
<s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> The automatic detection and classification of stance (e.g., certainty or agreement) in text data using natural language processing and machine-learning methods creates an opportunity to gain insight into the speakers' attitudes toward their own and other people's utterances. However, identifying stance in text presents many challenges related to training data collection and classifier training. To facilitate the entire process of training a stance classifier, we propose a visual analytics approach, called ALVA, for text data annotation and visualization. ALVA's interplay with the stance classifier follows an active learning strategy to select suitable candidate utterances for manual annotation. Our approach supports annotation process management and provides the annotators with a clean user interface for labeling utterances with multiple stance categories. ALVA also contains a visualization method to help analysts of the annotation and training process gain a better understanding of the categories used by the annotators. The visualization uses a novel visual representation, called CatCombos, which groups individual annotation items by the combination of stance categories. Additionally, our system makes a visualization of a vector space model available that is itself based on utterances. ALVA is already being used by our domain experts in linguistics and computational linguistics to improve the understanding of stance phenomena and to build a stance classifier for applications such as social media monitoring. <s> BIB008 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> Assigning labels to data instances is a prerequisite for many machine learning tasks. Similarly, labeling is applied in visual-interactive analysis approaches. However, the strategies for creating labels often differ in the two fields. In this paper, we study the process of labeling data instances with the user in the loop, from both the machine learning and visual-interactive perspective. Based on a review of differences and commonalities, we propose the 'Visual-Interactive Labeling' (VIAL) process, conflating the strengths of both. We describe the six major steps of the process and highlight their related challenges. <s> BIB009 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> Labeled datasets are always limited, and oftentimes the quantity of labeled data is a bottleneck for data analytics. This especially affects supervised machine learning methods, which require labels for models to learn from the labeled data. Active learning algorithms have been proposed to help achieve good analytic models with limited labeling efforts, by determining which additional instance labels will be most beneficial for learning for a given model. Active learning is consistent with interactive analytics as it proceeds in a cycle in which the unlabeled data is automatically explored. However, in active learning users have no control over the instances to be labeled, and for text data, the annotation interface is usually document-only. Both of these constraints seem to affect the performance of an active learning model. We hypothesize that visualization techniques, particularly interactive ones, will help to address these constraints.
In this paper, we implement a pilot study of visualization in active learning for text classification, with an interactive labeling interface. We compare the results of three experiments. Early results indicate that visualization improves high-performance machine learning model building with an active learning algorithm. <s> BIB010 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> Labeling data instances is an important task in machine learning and visual analytics. Both fields provide a broad set of labeling strategies, whereby machine learning (and in particular active learning) follows a rather model-centered approach and visual analytics employs rather user-centered approaches (visual-interactive labeling). Both approaches have individual strengths and weaknesses. In this work, we conduct an experiment with three parts to assess and compare the performance of these different labeling strategies. In our study, we (1) identify different visual labeling strategies for user-centered labeling, (2) investigate strengths and weaknesses of labeling strategies for different labeling tasks and task complexities, and (3) shed light on the effect of using different visual encodings to guide the visual-interactive labeling process. We further compare labeling of single versus multiple instances at a time, and quantify the impact on efficiency. We systematically compare the performance of visual interactive labeling with that of active learning. Our main findings are that visual-interactive labeling can outperform active learning, given the condition that dimension reduction separates well the class distributions. Moreover, using dimension reduction in combination with additional visual encodings that expose the internal state of the learning model turns out to improve the performance of visual-interactive labeling. <s> BIB011 | Big data are leading to dramatic changes in science (with the advent of data-driven science) and in society (with potential to support economic, public health, and other advances). Machine learning and deep learning technologies are central to leveraging big data for applications in both domains. Recent advances in machine learning and especially in deep learning, coupled with the release of many open source tools (e.g., Google TensorFlow, an open-source software library for machine intelligence), create the potential to leverage big data to address GIScience and Remote Sensing (RS) research and application challenges. But doing so requires an in-depth understanding of the methods, their limitations, and strategies for overcoming those limitations. Two primary goals for this paper are: (1) to synthesize ideas and results from machine learning and deep learning, plus visual analytics, and (2) to provide a base from which new GIScience and RS advances can be initiated. Machine learning (ML) and deep learning (DL), where DL is a sub-domain of ML, are increasingly successful in extracting information from big data (when mentioned together subsequently, we use the abbreviation M&DL). The primary focus of research in M&DL has thus far been accurate results, often at the expense of human understanding of how the results were achieved BIB005 BIB006 . However, accurate results often depend on building large human-generated training data sets that can be expensive, in both money and person-hours, to create BIB001 BIB003 BIB004 BIB011 BIB007 BIB008 .
As a result, there remain several impediments to broader adoption of M&DL, along with a range of concerns about potential negative outcomes related to the explainability of results produced. We agree here with a range of authors who have pointed to the need for human-in-the-loop strategies to both improve performance of the methods for complex problems and to increase explainability of the methods and their results BIB005 BIB006 BIB011 BIB008 BIB009 . There is a clear need for methods that allow human decision-makers to assess when to accept those results and when to treat them with caution or even skepticism. Further, we contend that advances in visual analytics offer a broad framework for addressing both the performance and explainability needs cited above. Visual analytics provides systems that enable analytical reasoning about complex problems. They accomplish this through close coupling of computational data processing methods with visual interfaces designed to help users make efficient choices: in building training data, in parameterizing and steering computational methods, and in understanding the results of those methods and how they were derived (further details about why and how visual analytics can aid M&DL are elaborated in Section 3.2). One rapidly developing ML method, active learning (Section 3.1), aims at achieving good learning results with a limited labeled data set, by choosing the most beneficial unlabeled data to be labeled by annotators (human or machine), in order to train and thus improve ML model performance BIB002 BIB010 . Active deep learning (Section 3.4) is a method introduced to help cope with the tension between the typical DL requirement for a very large gold standard training set and the impracticality of building such a big training set initially in domains that require expertise to label training data. As we elaborate below, recent developments in visual analytics offer strategies to enable productive human-in-the-loop active learning. In this paper, we argue specifically for taking a visual analytics approach to empowering active deep learning for (geo) text and image classification; we review a range of recent developments in the relevant fields that can be leveraged to support this approach. Our contention is that visual analytics interfaces can reduce the time that domain experts need to devote to labeling data for text (or image) classification, by applying an iterative, active learning process. We also contextualize the potential of integrating active learning, visual analytics, and active deep learning methods in GIScience and RS through discussion of recent work. Here, we provide a road map to the rest of the paper. Section 2 outlines the scope of this review and our intended audience. Section 3 is the core of the paper, focused on synthesizing important and recent developments and their implications and applications; in it, we focus on recent advances in several subfields of Computer Science that GIScience and RS can leverage. Specifically, we examine and appraise key components of influential work in active learning (Section 3.1), visual analytics (Section 3.2), active learning with visual analytics (Section 3.3), and active deep learning (Section 3.4). In Section 4, we review recent GIScience and RS applications in (geo) text and image classification that take advantage of the methods from one or a combination of different fields covered in Section 3.
The paper concludes in Section 5 with discussion of key challenges and opportunities, from both technical (Section 5.2.1) and application (Section 5.2.2, particularly for GIScience and RS) perspectives. The paper covers a wide array of recent research from multiple domains with many cross-connections. Given that text must present the sub-domains linearly, we start with a diagrammatic depiction of the domains and their relations to preview the overall structure of the review and the key connections. Specifically, Figure 1 illustrates the links between different fields covered in this paper and the flows that can guide the reader through the core part of this review. To provide background for readers (particularly those from GIScience and RS) who are new to M&DL, Appendix A introduces the essential terms and concepts in machine learning (ML) and deep learning (DL), along with the main types of classification tasks, needed to understand the core part of the review (i.e., Section 3).
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Scope and Intended Audience <s> Visual data mining techniques have proven to be of high value in exploratory data analysis, and they also have a high potential for mining large databases. In this article, we describe and evaluate a new visualization-based approach to mining large databases. The basic idea of our visual data mining techniques is to represent as many data items as possible on the screen at the same time by mapping each data value to a pixel of the screen and arranging the pixels adequately. The major goal of this article is to evaluate our visual data mining techniques and to compare them to other well-known visualization techniques for multidimensional data: the parallel coordinate and stick-figure visualization techniques. For the evaluation of visual data mining techniques, the perception of data properties counts most, while the CPU time and the number of secondary storage accesses are only of secondary importance. In addition to testing the visualization techniques using real data, we developed a testing environment for database visualizations similar to the benchmark approach used for comparing the performance of database systems. The testing environment allows the generation of test data sets with predefined data characteristics which are important for comparing the perceptual abilities of visual data mining techniques. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Scope and Intended Audience <s> We present an approach to the process of constructing knowledge through structured exploration of large spatiotemporal data sets. First, we introduce our problem context and define both Geographic Visualization (GVis) and Knowledge Discovery in Databases (KDD), the source domains for methods being integrated. Next, we review and compare recent GVis and KDD developments and consider the potential for their integration, emphasizing that an iterative process with user interaction is a central focus for uncovering interesting and meaningful patterns through each. We then introduce an approach to design of an integrated GVis-KDD environment directed to exploration and discovery in the context of spatiotemporal environmental data. The approach emphasizes a matching of GVis and KDD meta-operations. Following description of the GVis and KDD methods that are linked in our prototype system, we present a demonstration of the prototype applied to a typical spatiotemporal dataset. We conclude by outlining, briefly, resea... <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Scope and Intended Audience <s> Voluminous geographic data have been, and continue to be, collected with modern data acquisition techniques such as global positioning systems (GPS), high-resolution remote sensing, location-aware services and surveys, and internet-based volunteered geographic information. There is an urgent need for effective and efficient methods to extract unknown and unexpected information from spatial data sets of unprecedentedly large size, high dimensionality, and complexity. To address these challenges, spatial data mining and geographic knowledge discovery has emerged as an active research field, focusing on the development of theory, methodology, and practice for the extraction of useful information and knowledge from massive and complex spatial databases.
This paper highlights recent theoretical and applied research in spatial data mining and knowledge discovery. We first briefly review the literature on several common spatial data-mining tasks, including spatial classification and prediction; spatial association rule mining; spatial cluster analysis; and geovisualization. The articles included in this special issue contribute to spatial data mining research by developing new techniques for point pattern analysis, prediction in space–time data, and analysis of moving object data, as well as by demonstrating applications of genetic algorithms for optimization in the context of image classification and spatial interpolation. The paper concludes with some thoughts on the contribution of spatial data mining and geographic knowledge discovery to geographic information sciences. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Scope and Intended Audience <s> Active learning is a supervised machine learning technique in which the learner is in control of the data used for learning. That control is utilized by the learner to ask an oracle, typically a human with extensive knowledge of the domain at hand, about the classes of the instances for which the model learned so far makes unreliable predictions. The active learning process takes as input a set of labeled examples, as well as a larger set of unlabeled examples, and produces a classifier and a relatively small set of newly labeled data. The overall goal is to create as good a classifier as possible, without having to mark up and supply the learner with more data than necessary. The learning process aims at keeping the human annotation effort to a minimum, only asking for advice where the training utility of the result of such a query is high. Active learning has been successfully applied to a number of natural language processing tasks, such as information extraction, named entity recognition, text categorization, part-of-speech tagging, parsing, and word sense disambiguation. This report is a literature survey of active learning from the perspective of natural language processing. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Scope and Intended Audience <s> Understand the need for analyses of large, complex, information-rich data sets. Identify the goals and primary tasks of the data-mining process. Describe the roots of data-mining technology. Recognize the iterative character of a data-mining process and specify its basic steps. Explain the influence of data quality on a data-mining process. Establish the relation between data warehousing and data mining. Data mining is an iterative process within which progress is defined by discovery, through either automatic or manual methods. Data mining is most useful in an exploratory analysis scenario in which there are no predetermined notions about what will constitute an "interesting" outcome. Data mining is the search for new, valuable, and nontrivial information in large volumes of data. It is a cooperative effort of humans and computers. Best results are achieved by balancing the knowledge of human experts in describing problems and goals with the search capabilities of computers. In practice, the two primary goals of data mining tend to be prediction and description. Prediction involves using some variables or fields in the data set to predict unknown or future values of other variables of interest.
Description, on the other hand, focuses on finding patterns describing the data that can be interpreted by humans. Therefore, it is possible to put data-mining activities into one of two categories: Predictive data mining, which produces the model of the system described by the given data set, or Descriptive data mining, which produces new, nontrivial information based on the available data set. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Scope and Intended Audience <s> Active learning is a machine learning technique that selects the most informative samples for labeling and uses them as training data. It has been widely explored in the multimedia research community for its capability of reducing human annotation effort. In this article, we provide a survey on the efforts of leveraging active learning in multimedia annotation and retrieval. We mainly focus on two application domains: image/video annotation and content-based image retrieval. We first briefly introduce the principle of active learning and then we analyze the sample selection criteria. We categorize the existing sample selection strategies used in multimedia annotation and retrieval into five criteria: risk reduction, uncertainty, diversity, density and relevance. We then introduce several classification models used in active learning-based multimedia annotation and retrieval, including semi-supervised learning, multilabel learning and multiple instance learning. We also provide a discussion on several future trends in this research direction. In particular, we discuss cost analysis of human annotation and large-scale interactive multimedia annotation. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Scope and Intended Audience <s> The key idea behind active learning is that a machine learning algorithm can perform better with less training if it is allowed to choose the data from which it learns. An active learner may pose "queries," usually in the form of unlabeled data instances to be labeled by an "oracle" (e.g., a human annotator) that already understands the nature of the problem. This sort of approach is well-motivated in many modern machine learning and data mining applications, where unlabeled data may be abundant or easy to come by, but training labels are difficult, time-consuming, or expensive to obtain. This book is a general introduction to active learning. It outlines several scenarios in which queries might be formulated, and details many query selection algorithms which have been organized into four broad categories, or "query selection frameworks." We also touch on some of the theoretical foundations of active learning, and conclude with an overview of the strengths and weaknesses of these approaches in practice, including a summary of ongoing work to address these open challenges and opportunities. <s> BIB007 | The potential to bring the advances in M&DL to GIScience is reflected in a fairly long history of work on spatial and spatio-temporal data mining. In 2001, for example, Han and Miller provided a broad introduction to data mining and knowledge discovery methods for geographic data. In a second edition in 2009, with reversed authorship, multivariate spatial clustering was discussed and visual exploration and explanation in geospatial analysis was touched upon. Directed to a broader audience, Han et al.
BIB005 provided one of the most highly cited introductions to data mining; the third edition includes an introduction to active learning (Section 3.1) and briefly introduces neural networks (the core technology of DL), but visual analytics (Section 3.2) is not mentioned. Even though they include an introduction to data visualization and visual data mining, Han and colleagues' focus is on traditional data visualization methods for understanding data prior to making decisions on data mining methods and for understanding outcomes of data mining, not on the more integrated visual-computational approaches that characterize advances in visual analytics. Thus, their visual data mining approach, while it does propose leveraging visualization advances in productive ways, is comparable to ideas introduced in the late 1990s (e.g., BIB001 BIB002 ); it does not focus on visual interfaces to enable human input to the data mining process or on support of human reasoning about that process. In work that complements that cited above, Guo and Mennis BIB003 also investigated spatial data mining and geographic knowledge discovery, where they briefly reviewed several common spatial data mining tasks, including spatial classification and prediction, spatial cluster analysis, and geovisualization. The authors argued that data mining is data-driven, but more importantly, human-centered, with users controlling the selection and integration of data, choosing analysis methods, and interpreting results; it is an iterative and inductive learning process. Guo and Mennis pointed out that handling big and complex spatial data and understanding (hidden) complex structure are two major challenges for spatial data mining. To address these challenges, both efficient computational algorithms to process large data sets and effective visualization techniques to present and explore complex patterns from big spatial data are required. In earlier work outside the GIScience context, Fayyad et al. emphasized the potential role of information visualization in data mining and knowledge discovery. They proposed that the next breakthroughs will come from integrated solutions that allow (domain) end users to explore their data using a visual interface, with the goal being to unify data mining algorithms and visual interfaces, and thereby to enable human analysts to explore and discover patterns hidden in big data sets. The main goals of this review paper, building on the long-term GIScience interest in ML, are to: (1) survey recent work on active learning, DL, and active DL to provide suggestions for new directions built upon these evolving methods, and (2) bring active learning, DL, active DL, and complementary developments in visual analytics to GIScience, and by doing so extend the current GIScience "toolbox". Through the synthesis of multiple rapidly developing research areas, this systematic review is relevant to multiple research domains, including but not limited to GIScience, computer science, data science, information science, visual analytics, information visualization, image analysis, and computational linguistics. This paper does not attempt to review pure/traditional active learning (see Figure 2, which illustrates a typical pool-based active learning cycle); for classic and recent reviews of these topics, see BIB007 .
A survey aimed at making active learning more practical for real-world use can be found in ; a survey from the perspective of natural language processing (NLP) can be found in BIB004 ; and a survey of active learning in multimedia annotation and retrieval can be found in BIB006 . Our review focuses on investigating methods that extend and/or integrate active learning with visual analytics and DL for (geo) text and image classification, specifically on the two parts of the active learning cycle highlighted in Figure 3.
Figure 2. The pool-based active learning cycle.
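To make the pool-based cycle of Figure 2 concrete, the minimal sketch below (our illustration, not drawn from any of the surveyed papers) implements the loop in Python with scikit-learn. The synthetic data, the seed-set construction, the least-confidence query strategy, and the 20-query budget are all illustrative assumptions, and the oracle is simulated by a hidden label array standing in for a human annotator.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_pool = rng.normal(size=(1000, 20))                       # unlabeled pool (synthetic)
    y_oracle = (X_pool @ rng.normal(size=20) > 0).astype(int)  # labels only the oracle knows

    # Seed the labeled set with a few instances from each class.
    labeled = [int(i) for i in np.where(y_oracle == 0)[0][:5]]
    labeled += [int(i) for i in np.where(y_oracle == 1)[0][:5]]
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

    for _ in range(20):                                        # label budget: 20 queries
        model = LogisticRegression().fit(X_pool[labeled], y_oracle[labeled])
        proba = model.predict_proba(X_pool[unlabeled])
        scores = 1.0 - proba.max(axis=1)                       # least-confidence uncertainty
        query = unlabeled[int(np.argmax(scores))]              # most informative instance
        labeled.append(query)                                  # "oracle" supplies y_oracle[query]
        unlabeled.remove(query)

In a real deployment, the oracle lookup would be replaced by an annotation interface, and the loop would stop when a validation metric plateaus rather than after a fixed budget.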
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> The State of the Art: Active Learning, Visual Analytics, and Deep Learning <s> User studies are important for many aspects of the design process and involve techniques ranging from informal surveys to rigorous laboratory studies. However, the costs involved in engaging users often requires practitioners to trade off between sample size, time requirements, and monetary costs. Micro-task markets, such as Amazon's Mechanical Turk, offer a potential paradigm for engaging a large number of users for low time and monetary costs. Here we investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks. Although micro-task markets have great potential for rapidly collecting user measurements at low costs, we found that special care is needed in formulating tasks in order to harness the capabilities of the approach. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> The State of the Art: Active Learning, Visual Analytics, and Deep Learning <s> For large, real-world inductive learning problems, the number of training examples often must be limited due to the costs associated with procuring, preparing, and storing the training examples and/or the computational costs associated with learning from them. In such circumstances, one question of practical importance is: if only n training examples can be selected, in what proportion should the classes be represented? In this article we help to answer this question by analyzing, for a fixed training-set size, the relationship between the class distribution of the training data and the performance of classification trees induced from these data. We study twenty-six data sets and, for each, determine the best class distribution for learning. The naturally occurring class distribution is shown to generally perform well when classifier performance is evaluated using undifferentiated error rate (0/1 loss). However, when the area under the ROC curve is used to evaluate classifier performance, a balanced distribution is shown to perform well. Since neither of these choices for class distribution always generates the best-performing classifier, we introduce a "budget-sensitive" progressive sampling algorithm for selecting training examples based on the class associated with each example. An empirical analysis of this algorithm shows that the class distribution of the resulting training set yields classifiers with good (nearly-optimal) classification performance. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> The State of the Art: Active Learning, Visual Analytics, and Deep Learning <s> Large-scale supervised datasets are crucial to train convolutional neural networks (CNNs) for various computer vision problems. However, obtaining a massive amount of well-labeled data is usually very expensive and time consuming. In this paper, we introduce a general framework to train CNNs with only a limited number of clean labels and millions of easily obtained noisy labels. We model the relationships between images, class labels and label noises with a probabilistic graphical model and further integrate it into an end-to-end deep learning system. 
To demonstrate the effectiveness of our approach, we collect a large-scale real-world clothing classification dataset with both noisy and clean labels. Experiments on this dataset indicate that our approach can better correct the noisy labels and improve the performance of trained CNNs. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> The State of the Art: Active Learning, Visual Analytics, and Deep Learning <s> Labeled datasets are always limited, and oftentimes the quantity of labeled data is a bottleneck for data analytics. This especially affects supervised machine learning methods, which require labels for models to learn from the labeled data. Active learning algorithms have been proposed to help achieve good analytic models with limited labeling efforts, by determining which additional instance labels will be most beneficial for learning for a given model. Active learning is consistent with interactive analytics as it proceeds in a cycle in which the unlabeled data is automatically explored. However, in active learning users have no control of the instances to be labeled, and for text data, the annotation interface is usually document only. Both of these constraints seem to affect the performance of an active learning model. We hypothesize that visualization techniques, particularly interactive ones, will help to address these constraints. In this paper, we implement a pilot study of visualization in active learning for text classification, with an interactive labeling interface. We compare the results of three experiments. Early results indicate that visualization improves high-performance machine learning model building with an active learning algorithm. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> The State of the Art: Active Learning, Visual Analytics, and Deep Learning <s> Providing accurate predictions is challenging for machine learning algorithms when the number of features is larger than the number of samples in the data. Prior knowledge can improve machine learning models by indicating relevant variables and parameter values. Yet, this prior knowledge is often tacit and only available from domain experts. We present a novel approach that uses interactive visualization to elicit the tacit prior knowledge and uses it to improve the accuracy of prediction models. The main component of our approach is a user model that models the domain expert's knowledge of the relevance of different features for a prediction task. In particular, based on the expert's earlier input, the user model guides the selection of the features on which to elicit the user's knowledge next. The results of a controlled user study show that the user model significantly improves prior knowledge elicitation and prediction accuracy, when predicting the relative citation counts of scientific documents in a specific domain. <s> BIB005 | As outlined above, leveraging the potential of DL to increase classification accuracy (for images or text) requires extensive amounts of manually labeled data. This is particularly challenging in domains requiring experts with prior knowledge that is often tacit BIB004 BIB005 BIB003 BIB002 ; in such cases, even crowdsourcing BIB001 , such as Amazon Mechanical Turk, will not help much.
In this section, we review several techniques that are central to addressing this challenge: active learning (Section 3.1), visual analytics (Section 3.2), active learning with visual analytics (Section 3.3), and active deep learning (Section 3.4).
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> What's AL and Why AL? <s> Active learning differs from "learning from examples" in that the learning algorithm assumes at least some control over what part of the input domain it receives information about. In some situations, active learning is provably more powerful than learning from examples alone, giving better generalization for a fixed number of training examples. In this article, we consider the problem of learning a binary concept in the absence of noise. We describe a formalism for active concept learning called selective sampling and show how it may be approximately implemented by a neural network. In selective sampling, a learner receives distribution information from the environment and queries an oracle on parts of the domain it considers "useful." We test our implementation, called an SG-network, on three domains and observe significant improvement in generalization. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> What's AL and Why AL? <s> Active learning is a supervised machine learning technique in which the learner is in control of the data used for learning. That control is utilized by the learner to ask an oracle, typically a human with extensive knowledge of the domain at hand, about the classes of the instances for which the model learned so far makes unreliable predictions. The active learning process takes as input a set of labeled examples, as well as a larger set of unlabeled examples, and produces a classifier and a relatively small set of newly labeled data. The overall goal is to create as good a classifier as possible, without having to mark up and supply the learner with more data than necessary. The learning process aims at keeping the human annotation effort to a minimum, only asking for advice where the training utility of the result of such a query is high. Active learning has been successfully applied to a number of natural language processing tasks, such as information extraction, named entity recognition, text categorization, part-of-speech tagging, parsing, and word sense disambiguation. This report is a literature survey of active learning from the perspective of natural language processing. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> What's AL and Why AL? <s> One of the principal bottlenecks in applying learning techniques to classification problems is the large amount of labeled training data required. Especially for images and video, providing training data is very expensive in terms of human time and effort. In this paper we propose an active learning approach to tackle the problem. Instead of passively accepting random training examples, the active learning algorithm iteratively selects unlabeled examples for the user to label, so that human effort is focused on labeling the most "useful" examples. Our method relies on the idea of uncertainty sampling, in which the algorithm selects unlabeled examples that it finds hardest to classify. Specifically, we propose an uncertainty measure that generalizes margin-based uncertainty to the multi-class case and is easy to compute, so that active learning can handle a large number of classes and large data sizes efficiently.
We demonstrate results for letter and digit recognition on datasets from the UCI repository, object recognition results on the Caltech-101 dataset, and scene categorization results on a dataset of 13 natural scene categories. The proposed method gives large reductions in the number of training examples required over random selection to achieve similar classification accuracy, with little computational overhead. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> What's AL and Why AL? <s> Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership and then the user is asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee, large margin, and posterior probability-based. For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a good architecture are provided for new and/or inexperienced users. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> What's AL and Why AL? <s> Crowd sourcing has become a popular approach for annotating the large quantities of data required to train machine learning algorithms. However, obtaining labels in this manner poses two important challenges. First, naively labeling all of the data can be prohibitively expensive. Second, a significant fraction of the annotations can be incorrect due to carelessness or limited domain expertise of crowd sourced workers. Active learning provides a natural formulation to address the former issue by affordably selecting an appropriate subset of instances to label. Unfortunately, most active learning strategies are myopic and sensitive to label noise, which leads to poorly trained classifiers. We propose an active learning method that is specifically designed to be robust to such noise. We present an application of our technique in the domain of activity recognition for eldercare and validate the proposed approach using both simulated and real-world experiments using Amazon Mechanical Turk. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> What's AL and Why AL? <s> Obtaining labels can be expensive or time-consuming, but unlabeled data is often abundant and easier to obtain. Most learning tasks can be made more efficient, in terms of labeling cost, by intelligently choosing specific unlabeled instances to be labeled by an oracle. The general problem of optimally choosing these instances is known as active learning. As it is usually set in the context of supervised learning, active learning relies on a single oracle playing the role of a teacher.
We focus on the multiple annotator scenario where an oracle, who knows the ground truth, no longer exists; instead, multiple labelers, with varying expertise, are available for querying. This paradigm posits new challenges to the active learning scenario. We can now ask which data sample should be labeled next and which annotator should be queried to benefit our learning model the most. In this paper, we employ a probabilistic model for learning from multiple annotators that can also learn the annotator expertise even when their expertise may not be consistently accurate across the task domain. We then focus on providing a criterion and formulation that allows us to select both a sample and the annotator/s to query the labels from. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> What's AL and Why AL? <s> The key idea behind active learning is that a machine learning algorithm can perform better with less training if it is allowed to choose the data from which it learns. An active learner may pose "queries," usually in the form of unlabeled data instances to be labeled by an "oracle" (e.g., a human annotator) that already understands the nature of the problem. This sort of approach is well-motivated in many modern machine learning and data mining applications, where unlabeled data may be abundant or easy to come by, but training labels are difficult, time-consuming, or expensive to obtain. This book is a general introduction to active learning. It outlines several scenarios in which queries might be formulated, and details many query selection algorithms which have been organized into four broad categories, or "query selection frameworks." We also touch on some of the theoretical foundations of active learning, and conclude with an overview of the strengths and weaknesses of these approaches in practice, including a summary of ongoing work to address these open challenges and opportunities. <s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> What's AL and Why AL? <s> Recent advances in microscopy imaging and genomics have created an explosion of patient data in the pathology domain. Whole-slide images (WSIs) of tissues can now capture disease processes as they unfold in high resolution, recording the visual cues that have been the basis of pathologic diagnosis for over a century. Each WSI contains billions of pixels and up to a million or more microanatomic objects whose appearances hold important prognostic information. Computational image analysis enables the mining of massive WSI datasets to extract quantitative morphologic features describing the visual qualities of patient tissues. When combined with genomic and clinical variables, this quantitative information provides scientists and clinicians with insights into disease biology and patient outcomes. To facilitate interaction with this rich resource, we have developed a web-based machine-learning framework that enables users to rapidly build classifiers using an intuitive active learning process that minimizes data labeling effort. In this paper we describe the architecture and design of this system, and demonstrate its effectiveness through quantification of glioma brain tumors. <s> BIB008 | Can machines learn with fewer labeled training instances than those needed in supervised learning (a full explanation of which is provided in Appendix A.2.1) if they are allowed to ask questions? 
The answer is "yes", with many encouraging results that have been demonstrated for a variety of problem settings and domains. AL BIB007 BIB001 is a sub-field of semi-supervised learning (for details, see Appendix A.2.3) that implements this question-asking idea as an iterative process. AL differs from traditional "passive" learning systems that purely "learn from examples". AL systems aim to make ML more economical and more accurate, because the learning algorithms can participate in the acquisition of their own training data, and are able to avoid using unrepresentative or poorly annotated data based on query strategies (Section 3.1.5). AL is well-motivated in many ML-based applications, where unlabeled data is massive, but labels are difficult, time-consuming, or expensive to obtain. The key idea behind AL is that a ML model can achieve high accuracy with a minimum of manual labeling effort if the (machine) learner is allowed to ask for more informative labeled examples through selection queries. A query is often in the form of an unlabeled instance (e.g., an image or a piece of text), picked by the machine learner according to a specific query strategy (Section 3.1.5), to be labeled by an annotator who understands the nature of the domain problem BIB007 . Informative examples refer to those instances that can help improve the machine learner's learning performance, and the informativeness is measured by different query strategies (Section 3.1.5). AL has been successfully applied to a number of natural language processing tasks BIB002 , such as information extraction, named entity recognition, text categorization, part-of-speech tagging, parsing, and word sense disambiguation. Tuia et al. BIB004 surveyed AL algorithms for RS image classification. Nalisnik et al. employed AL to reduce the labeling effort for image classification BIB008 . A good example using AL to overcome label quality problems by combining experts and crowd-sourced annotators can be found in BIB005 . Another good example of using AL from crowds can be found in BIB006 , where a multi-annotator (see Section 3.1.6) AL algorithm was provided. Most AL-based methods are designed for binary classification tasks (see Appendix A.4.1); see BIB003 for an example of multi-class (see Appendix A.4.2) AL for image classification. While there has been increasing attention to AL, with applications in many domains, a systematic and comprehensive comparison of different AL strategies is missing in the literature. We will come back to this later in Section 3.1.7.
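As a preview of the query strategies detailed in Section 3.1.5, the sketch below (ours, following the standard formulations summarized in BIB007 ) scores the informativeness of candidate instances from a model's predicted class probabilities; the probability matrix is an assumed input, and each function returns higher scores for instances the learner should query first.

    import numpy as np

    def least_confidence(proba):
        # 1 minus the probability of the most likely class.
        return 1.0 - proba.max(axis=1)

    def smallest_margin(proba):
        # A small gap between the top two classes signals high uncertainty.
        part = np.sort(proba, axis=1)
        return 1.0 - (part[:, -1] - part[:, -2])

    def prediction_entropy(proba):
        # Shannon entropy of the predicted class distribution.
        return -(proba * np.log(proba + 1e-12)).sum(axis=1)

    # Example: with three candidate instances, query the highest-entropy one.
    proba = np.array([[0.9, 0.1], [0.5, 0.5], [0.7, 0.3]])
    print(int(np.argmax(prediction_entropy(proba))))  # -> 1, the most uncertain

The three measures rank instances identically for binary problems but can disagree once there are many classes, which is one reason the choice of query strategy matters in practice.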
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Problem Scenarios <s> Active learning is a machine learning technique that selects the most informative samples for labeling and uses them as training data. It has been widely explored in the multimedia research community for its capability of reducing human annotation effort. In this article, we provide a survey on the efforts of leveraging active learning in multimedia annotation and retrieval. We mainly focus on two application domains: image/video annotation and content-based image retrieval. We first briefly introduce the principle of active learning and then we analyze the sample selection criteria. We categorize the existing sample selection strategies used in multimedia annotation and retrieval into five criteria: risk reduction, uncertainty, diversity, density and relevance. We then introduce several classification models used in active learning-based multimedia annotation and retrieval, including semi-supervised learning, multilabel learning and multiple instance learning. We also provide a discussion on several future trends in this research direction. In particular, we discuss cost analysis of human annotation and large-scale interactive multimedia annotation. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Problem Scenarios <s> The key idea behind active learning is that a machine learning algorithm can perform better with less training if it is allowed to choose the data from which it learns. An active learner may pose "queries," usually in the form of unlabeled data instances to be labeled by an "oracle" (e.g., a human annotator) that already understands the nature of the problem. This sort of approach is well-motivated in many modern machine learning and data mining applications, where unlabeled data may be abundant or easy to come by, but training labels are difficult, time-consuming, or expensive to obtain. This book is a general introduction to active learning. It outlines several scenarios in which queries might be formulated, and details many query selection algorithms which have been organized into four broad categories, or "query selection frameworks." We also touch on some of the theoretical foundations of active learning, and conclude with an overview of the strengths and weaknesses of these approaches in practice, including a summary of ongoing work to address these open challenges and opportunities. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Problem Scenarios <s> Many engineering problems require identifying feasible domains under implicit constraints. One example is finding acceptable car body styling designs based on constraints like aesthetics and functionality. Current active-learning based methods learn feasible domains for bounded input spaces. However, we usually lack prior knowledge about how to set those input variable bounds. Bounds that are too small will fail to cover all feasible domains, while bounds that are too large will waste query budget. To avoid this problem, we introduce Active Expansion Sampling (AES), a method that identifies (possibly disconnected) feasible domains over an unbounded input space. AES progressively expands our knowledge of the input space, and uses successive exploitation and exploration stages to switch between learning the decision boundary and searching for new feasible domains.
We show that AES has a misclassification loss guarantee within the explored region, independent of the number of iterations or labeled samples. Thus it can be used for real-time prediction of samples' feasibility within the explored region. We evaluate AES on three test examples and compare AES with two adaptive sampling methods -- the Neighborhood-Voronoi algorithm and the straddle heuristic -- that operate over fixed input variable bounds. <s> BIB003 | The AL literature BIB002 BIB001 showcases several different problem scenarios in which the active machine learner may solicit input. The three most common scenarios considered in the literature are: membership query synthesis, stream-based selective sampling, and pool-based sampling. All three scenarios assume that machine learners query unlabeled instances to be labeled by annotators (humans or machines). Figure 4 illustrates the differences among these three AL scenarios. The dashed lines connecting instance space (the set of possible observations, also called the input space BIB002 BIB003 ) in Figure 4 represent that the machine learner does not know the definition of the instance space (thus the features of the space and their ranges are not known BIB002 ).
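The operational difference among the three scenarios can also be sketched in code (our illustration; the model is assumed to expose a scikit-learn-style predict_proba, and the uncertainty threshold and midpoint synthesis rule are simplifying assumptions): a stream-based learner makes a per-instance query-or-discard decision, a pool-based learner ranks the whole pool before querying, and membership query synthesis constructs an artificial instance de novo.

    import numpy as np

    # Stream-based selective sampling: decide for each arriving instance.
    def stream_decision(x, model, threshold=0.2):
        p = model.predict_proba(x.reshape(1, -1))[0]
        return (1.0 - p.max()) > threshold        # True -> query the oracle

    # Pool-based sampling: rank the entire pool, query the top candidate.
    def pool_decision(X_pool, model):
        p = model.predict_proba(X_pool)
        return int(np.argmax(1.0 - p.max(axis=1)))

    # Membership query synthesis: generate a brand-new instance to label,
    # here (naively) the midpoint of two oppositely labeled examples.
    def synthesis_decision(x_pos, x_neg):
        return (x_pos + x_neg) / 2.0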
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> We consider the problem of using queries to learn an unknown concept. Several types of queries are described and studied: membership, equivalence, subset, superset, disjointness, and exhaustiveness queries. Examples are given of efficient learning methods using various subsets of these queries for formal domains, including the regular languages, restricted classes of context-free languages, the pattern languages, and restricted types of propositional formulas. Some general lower bound techniques are given. Equivalence queries are compared with Valiant's criterion of probably approximately correct identification under random sampling. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> "Selective sampling" is a form of directed search that can greatly increase the ability of a connectionist network to generalize accurately. Based on information from previous batches of samples, a network may be trained on data selectively sampled from regions in the domain that are unknown. This is realizable in cases when the distribution is known, or when the cost of drawing points from the target distribution is negligible compared to the cost of labeling them with the proper classification. The approach is justified by its applicability to the problem of training a network for power system security analysis. The benefits of selective sampling are studied analytically, and the results are confirmed experimentally. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> Active learning differs from "learning from examples" in that the learning algorithm assumes at least some control over what part of the input domain it receives information about. In some situations, active learning is provably more powerful than learning from examples alone, giving better generalization for a fixed number of training examples. In this article, we consider the problem of learning a binary concept in the absence of noise. We describe a formalism for active concept learning called selective sampling and show how it may be approximately implemented by a neural network. In selective sampling, a learner receives distribution information from the environment and queries an oracle on parts of the domain it considers "useful." We test our implementation, called an SG-network, on three domains and observe significant improvement in generalization. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> The ability to cheaply train text classifiers is critical to their use in information retrieval, content analysis, natural language processing, and other tasks involving data which is partly or fully textual. An algorithm for sequential sampling during machine learning of statistical classifiers was developed and tested on a newswire text categorization task. This method, which we call uncertainty sampling, reduced by as much as 500-fold the amount of training data that would have to be manually classified to achieve a given level of effectiveness.
<s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> In many real-world learning tasks, it is expensive to acquire a sufficient number of labeled examples for training. This paper proposes a general method for efficiently training probabilistic classifiers, by selecting for training only the more informative examples in a stream of unlabeled examples. The method, committee-based sampling, evaluates the informativeness of an example by measuring the degree of disagreement between several model variants. These variants (the committee) are drawn randomly from a probability distribution conditioned by the training set selected so far (Monte-Carlo sampling). The method is particularly attractive because it evaluates the expected information gain from a training example implicitly, making the model both easy to implement and generally applicable. We further show how to apply committee-based sampling for training Hidden Markov Model classifiers, which are commonly used for complex classification tasks. The method was implemented and tested for the task of tagging words in natural language sentences with parts-of-speech. Experimental evaluation of committee-based sampling versus standard sequential training showed a substantial improvement in training efficiency. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> This paper proposes an efficient example sampling method for example-based word sense disambiguation systems. To construct a database of practical size, a considerable overhead for manual sense disambiguation (overhead for supervision) is required. In addition, the time complexity of searching a large-sized database poses a considerable problem (overhead for search). To counter these problems, our method selectively samples a smaller-sized effective subset from a given example set for use in word sense disambiguation. Our method is characterized by the reliance on the notion of training utility: the degree to which each example is informative for future example sampling when used for the training of the system. The system progressively collects examples by selecting those with greatest utility. The paper reports the effectiveness of our method through experiments on about one thousand sentences. Compared to experiments with other example sampling methods, our method reduced both the overhead for supervision and the overhead for search, without the degeneration of the performance of the system. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> We propose a general active learning framework for content-based information retrieval. We use this framework to guide hidden annotations in order to improve the retrieval performance. For each object in the database, we maintain a list of probabilities, each indicating the probability of this object having one of the attributes. During training, the learning algorithm samples objects in the database and presents them to the annotator to assign attributes. For each sampled object, each probability is set to be one or zero depending on whether or not the corresponding attribute is assigned by the annotator. For objects that have not been annotated, the learning algorithm estimates their probabilities with biased kernel regression.
Knowledge gain is then defined to determine, among the objects that have not been annotated, which one the system is most uncertain about. The system then presents it as the next sample to the annotator, who assigns attributes to it. During retrieval, the list of probabilities works as a feature vector for us to calculate the semantic distance between two objects, or between the user query and an object in the database. The overall distance between two objects is determined by a weighted sum of the semantic distance and the low-level feature distance. The algorithm is tested on both synthetic databases and real databases of 3D models. In both cases, the retrieval performance of the system improves rapidly with the number of annotated samples. Furthermore, we show that active learning outperforms learning based on random sampling. <s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> The question of whether it is possible to automate the scientific process is of both great theoretical interest and increasing practical importance because, in many scientific areas, data are being generated much faster than they can be effectively analysed. We describe a physically implemented robotic system that applies techniques from artificial intelligence to carry out cycles of scientific experimentation. The system automatically originates hypotheses to explain observations, devises experiments to test these hypotheses, physically runs the experiments using a laboratory robot, interprets the results to falsify hypotheses inconsistent with the data, and then repeats the cycle. Here we apply the system to the determination of gene function using deletion mutants of yeast (Saccharomyces cerevisiae) and auxotrophic growth experiments. We built and tested a detailed logical model (involving genes, proteins and metabolites) of the aromatic amino acid synthesis pathway. In biological experiments that automatically reconstruct parts of this model, we show that an intelligent experiment selection strategy is competitive with human performance and significantly outperforms, with a cost decrease of 3-fold and 100-fold (respectively), both cheapest and random-experiment selection. <s> BIB008 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> There is growing interest in the application of machine learning techniques in bioinformatics. The supervised machine learning approach has been widely applied to bioinformatics and gained a lot of success in this research area. With this learning approach researchers first develop a large training set, which is a time-consuming and costly process. Moreover, the proportion of the positive examples and negative examples in the training set may not represent the real-world data distribution, which causes concept drift. Active learning avoids these problems. Unlike most conventional learning methods where the training set used to derive the model remains static, the classifier can actively choose the training data and the size of the training set increases. We introduced an algorithm for performing active learning with support vector machine and applied the algorithm to gene expression profiles of colon cancer, lung cancer, and prostate cancer samples. We compared the classification performance of active learning with that of passive learning.
The results showed that employing the active learning method can achieve high accuracy and significantly reduce the need for labeled training instances. For lung cancer classification, to achieve 96% of the total positives, only 31 labeled examples were needed in active learning whereas in passive learning 174 labeled examples were required. That meant over 82% reduction was realized by active learning. In active learning the areas under the receiver operating characteristic (ROC) curves were over 0.81, while in passive learning the areas under the ROC curves were below 0.50 <s> BIB009 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> Learning ranking (or preference) functions has been a major issue in the machine learning community and has produced many applications in information retrieval. SVMs (Support Vector Machines) - a classification and regression methodology - have also shown excellent performance in learning ranking functions. They effectively learn ranking functions of high generalization based on the "large-margin" principle and also systematically support nonlinear ranking by the "kernel trick". In this paper, we propose an SVM selective sampling technique for learning ranking functions. SVM selective sampling (or active learning with SVM) has been studied in the context of classification. Such techniques reduce the labeling effort in learning classification functions by selecting only the most informative samples to be labeled. However, they are not extendable to learning ranking functions, as the labeled data in ranking is relative ordering, or partial orders of data. Our proposed sampling technique effectively learns an accurate SVM ranking function with fewer partial orders. We apply our sampling technique to the data retrieval application, which enables fuzzy search on relational databases by interacting with users for learning their preferences. Experimental results show a significant reduction of the labeling effort in inducing accurate ranking functions. <s> BIB010 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> Active learning is well-suited to many problems in natural language processing, where unlabeled data may be abundant but annotation is slow and expensive. This paper aims to shed light on the best active learning approaches for sequence labeling tasks such as information extraction and document segmentation. We survey previously used query selection strategies for sequence models, and propose several novel algorithms to address their shortcomings. We also conduct a large-scale empirical comparison using multiple corpora, which demonstrates that our proposed methods advance the state of the art. <s> BIB011 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> Active learning is a machine learning technique that selects the most informative samples for labeling and uses them as training data. It has been widely explored in multimedia research community for its capability of reducing human annotation effort. In this article, we provide a survey on the efforts of leveraging active learning in multimedia annotation and retrieval. We mainly focus on two application domains: image/video annotation and content-based image retrieval. We first briefly introduce the principle of active learning and then we analyze the sample selection criteria. 
We categorize the existing sample selection strategies used in multimedia annotation and retrieval into five criteria: risk reduction, uncertainty, diversity, density and relevance. We then introduce several classification models used in active learning-based multimedia annotation and retrieval, including semi-supervised learning, multilabel learning and multiple instance learning. We also provide a discussion on several future trends in this research direction. In particular, we discuss cost analysis of human annotation and large-scale interactive multimedia annotation. <s> BIB012 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> Most active learning approaches select either informative or representative unlabeled instances to query their labels. Although several active learning algorithms have been proposed to combine the two criteria for query selection, they are usually ad hoc in finding unlabeled instances that are both informative and representative. We address this challenge by a principled approach, termed QUIRE, based on the min-max view of active learning. The proposed approach provides a systematic way for measuring and combining the informativeness and representativeness of an instance. Extensive experimental results show that the proposed QUIRE approach outperforms several state-of-the-art active learning approaches. <s> BIB013 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> Active learning has received great interest from researchers due to its ability to reduce the amount of supervision required for effective learning. As the core component of active learning algorithms, query synthesis and pool-based sampling are two main scenarios of querying considered in the literature. Query synthesis features low querying time, but only has limited applications as the synthesized query might be unrecognizable to a human oracle. As a result, most efforts have focused on pool-based sampling in recent years, although it is much more time-consuming. In this paper, we propose new strategies for a novel querying framework that combines query synthesis and pool-based sampling. It overcomes the limitation of query synthesis, and has the advantage of fast querying. The basic idea is to synthesize an instance close to the decision boundary using labelled data, and then select the real instance closest to the synthesized one as a query. For this purpose, we propose a synthesis strategy, which can synthesize instances close to the decision boundary and spreading along the decision boundary. Since the synthesis only depends on the relatively small labelled set, instead of evaluating the entire unlabelled set as many other active learning algorithms do, our method has the advantage of efficiency. In order to handle more complicated data and make our framework compatible with powerful kernel-based learners, we also extend our method to a kernel version. Experiments on several real-world data sets show that our method has a significant advantage in time complexity and similar performance compared to pool-based uncertainty sampling methods.
<s> BIB014 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> In this paper, we consider the problem of actively learning a linear classifier through query synthesis where the learner can construct artificial queries in order to estimate the true decision boundaries. This problem has recently gained a lot of interest in automated science and adversarial reverse engineering for which only heuristic algorithms are known. In such applications, queries can be constructed de novo to elicit information (e.g., automated science) or to evade detection with minimal cost (e.g., adversarial reverse engineering). We develop a general framework, called dimension coupling (DC), that 1) reduces a d-dimensional learning problem to d-1 low dimensional sub-problems, 2) solves each sub-problem efficiently, 3) appropriately aggregates the results and outputs a linear classifier, and 4) provides a theoretical guarantee for all possible schemes of aggregation. The proposed method is proved resilient to noise. We show that the DC framework avoids the curse of dimensionality: its computational complexity scales linearly with the dimension. Moreover, we show that the query complexity of DC is near optimal (within a constant factor of the optimum algorithm). To further support our theoretical analysis, we compare the performance of DC with the existing work. We observe that DC consistently outperforms the prior arts in terms of query complexity while often running orders of magnitude faster. <s> BIB015 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> How can we find a general way to choose the most suitable samples for training a classifier? Even with very limited prior information? Active learning, which can be regarded as an iterative optimization procedure, plays a key role to construct a refined training set to improve the classification performance in a variety of applications, such as text analysis, image recognition, social network modeling, etc. Although combining representativeness and informativeness of samples has been proven promising for active sampling, state-of-the-art methods perform well under certain data structures. Then can we find a way to fuse the two active sampling criteria without any assumption on data? This paper proposes a general active learning framework that effectively fuses the two criteria. Inspired by a two-sample discrepancy problem, triple measures are elaborately designed to guarantee that the query samples not only possess the representativeness of the unlabeled data but also reveal the diversity of the labeled data. Any appropriate similarity measure can be employed to construct the triple measures. Meanwhile, an uncertain measure is leveraged to generate the informativeness criterion, which can be carried out in different ways. Rooted in this framework, a practical active learning algorithm is proposed, which exploits a radial basis function together with the estimated probabilities to construct the triple measures and a modified best-versus-second-best strategy to construct the uncertain measure, respectively. Experimental results on benchmark datasets demonstrate that our algorithm consistently achieves superior performance over the state-of-the-art active learning algorithms. 
<s> BIB016 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> Abstract Classification of evolving data streams is a challenging task, which is suitably tackled with online learning approaches. Data is processed instantly requiring the learning machinery to (self-)adapt by adjusting its model. However for high velocity streams, it is usually difficult to obtain labeled samples to train the classification model. Hence, we propose a novel online batch-based active learning algorithm (OBAL) to perform the labeling. OBAL is developed for crisis management applications where data streams are generated by the social media community. OBAL is applied to discriminate relevant from irrelevant social media items. An emergency management user will be interactively queried to label chosen items. OBAL exploits the boundary items for which it is highly uncertain about their class and makes use of two classifiers: k-Nearest Neighbors (kNN) and Support Vector Machine (SVM). OBAL is equipped with a labeling budget and a set of uncertainty strategies to identify the items for labeling. An extensive analysis is carried out to show OBAL's performance, the sensitivity of its parameters, and the contribution of the individual uncertainty strategies. Two types of datasets are used: synthetic and social media datasets related to crises. The empirical results illustrate that OBAL has a very good discrimination power. <s> BIB017 | The three AL sampling scenarios are membership query synthesis, stream-based selective sampling, and pool-based sampling. Each starts from the set of all possible observations (the instance space) and applies a query strategy for selecting which instance to ask the human or machine annotator to label. Membership query synthesis was proposed in BIB001 , and further developed and extended in BIB008 BIB014 BIB015 . In this scenario, the machine learner knows the definition of the instance space (e.g., feature dimensions and ranges are known). The learner can generate (i.e., synthesize) a new instance (e.g., an image or a piece of text) from scratch that satisfies the instance space definition (thus one that meets the parameters of the instance space, but may or may not actually exist), and then enlists an annotator for labeling BIB014 BIB015 . Query synthesis can synthesize a new artificial (membership) query from scratch using a small amount of labeled data, and it is therefore very efficient BIB014 . Query synthesis is often tractable and efficient for finite problem domains . Thus, query synthesis has recently gained interest in domains in which labels do not come from human annotators but from experiments, where only heuristics are known. In such domains, artificial queries can be synthesized to elicit information (e.g., automated science BIB008 ) or to detect and extract knowledge and design information with minimal cost (e.g., adversarial reverse engineering) BIB015 . Query synthesis is reasonable for some domain problems, but one major limitation is that the synthesized (membership) queries are often not meaningful, and thus annotators, particularly human ones, can find it hard to assign labels . By contrast, the stream-based and pool-based scenarios introduced below can address this limitation, because the queries always correspond to real examples. Therefore, the labels can be more readily provided by annotators.
In stream-based selective sampling (also called stream-based or sequential AL), unlabeled instances are drawn one at a time from the data source, and the machine learner must decide whether to query the label of each instance or to discard it BIB003 BIB002 BIB005 . In a stream-based selective sampling scenario, learners can decide whether to query in two ways: (1) by using a query strategy (Section 3.1.5), or (2) by computing a region of uncertainty and querying only those instances that fall within that region. The stream-based scenario has been studied in several real-world tasks (e.g., learning ranking functions for information retrieval BIB010 , social media text classification BIB017 , and word sense disambiguation BIB006 , where a word such as "bank" in "river bank" can be distinguished from the word "bank" in "financial bank"). One advantage of the stream-based selective sampling AL method is that it is suitable for mobile and embedded devices where memory and power are often limited, because in this scenario, each unlabeled instance is drawn one at a time from the data source. In pool-based sampling AL BIB004 BIB011 , samples are selected from an existing pool for labeling using criteria designed to assess the informativeness of an instance. Informativeness has been defined as representing the ability of an instance to reduce the generalization error of a ML model BIB013 BIB016 ; query strategies designed to achieve informativeness of samples are discussed in Section 3.1.5. A substantial proportion of AL methods in the literature are pool-based BIB017 , with examples in domains that include: text classification (see examples later in this paper for text and image classification), image classification and retrieval BIB012 , information retrieval BIB007 , video classification and retrieval BIB012 , speech recognition , and cancer diagnosis BIB009 . Only a few AL methods have employed stream-based selective sampling BIB017 . For many real-world learning problems, large collections of unlabeled data can be gathered at once. This motivates pool-based sampling, because pool-based sampling evaluates and ranks the entire collection before selecting the best query . This helps build a classifier with better performance from fewer labeled examples. As outlined above, the three sampling scenarios have different primary applications. Membership query synthesis is most applicable to limited applications such as automated scientific discovery and adversarial reverse engineering BIB015 , due to the fact that instances produced by synthesized queries might not be recognizable to human annotators . Stream-based methods are typically used for streaming data (as the name implies) because they scan the data sequentially and make individual decisions for each instance. Because they do not consider the data as a whole, stream-based selective sampling methods are typically less effective than pool-based methods for any situation in which data can be assembled ahead of time. Due to the limited focus of membership query synthesis and stream-based selective sampling, and the broad focus of pool-based sampling, a substantial proportion of the AL methods discussed in the literature are pool-based BIB017 . Not surprisingly, this is also true for applications of AL to (geo) text and image classification. Given this overall emphasis in the literature, and within the subset directed to geospatial applications, the focus in the remainder of the paper is on pool-based sampling, with the alternatives mentioned only to highlight particular recent innovations.
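To make the pool-based scenario concrete, the following is a minimal sketch of a pool-based AL loop with uncertainty sampling. It is a sketch under stated assumptions rather than a definitive implementation: the scikit-learn classifier choice and the query_label function (standing in for the human annotator) are illustrative assumptions, not part of the methods reviewed above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pool_based_active_learning(X_labeled, y_labeled, X_pool, query_label, n_queries=50):
    """Minimal pool-based AL loop using least-confident uncertainty sampling."""
    model = LogisticRegression(max_iter=1000)
    for _ in range(n_queries):
        if len(X_pool) == 0:
            break
        model.fit(X_labeled, y_labeled)
        # Rank the entire pool: informativeness = 1 - max posterior probability.
        proba = model.predict_proba(X_pool)
        uncertainty = 1.0 - proba.max(axis=1)
        idx = int(np.argmax(uncertainty))           # most uncertain instance
        y_new = query_label(X_pool[idx])            # ask the annotator (oracle)
        X_labeled = np.vstack([X_labeled, X_pool[idx]])
        y_labeled = np.append(y_labeled, y_new)
        X_pool = np.delete(X_pool, idx, axis=0)     # remove the queried instance from the pool
    return model
```

Because the entire pool is re-ranked after every retraining step, labeling effort is concentrated on the currently most ambiguous instances, which is why pool-based sampling tends to reach a target accuracy with fewer labels than random selection.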
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Batch-Mode AL <s> The goal of active learning is to select the most informative examples for manual labeling. Most of the previous studies in active learning have focused on selecting a single unlabeled example in each iteration. This could be inefficient since the classification model has to be retrained for every labeled example. In this paper, we present a framework for "batch mode active learning" that applies the Fisher information matrix to select a number of informative examples simultaneously. The key computational challenge is how to efficiently identify the subset of unlabeled examples that can result in the largest reduction in the Fisher information. To resolve this challenge, we propose an efficient greedy algorithm that is based on the property of submodular functions. Our empirical studies with five UCI datasets and one real-world medical image classification show that the proposed batch mode active learning algorithm is more effective than the state-of-the-art algorithms for active learning. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Batch-Mode AL <s> Support vector machine (SVM) active learning is one popular and successful technique for relevance feedback in content-based image retrieval (CBIR). Despite the success, conventional SVM active learning has two main drawbacks. First, the performance of SVM is usually limited by the number of labeled examples. It often suffers a poor performance for the small-sized labeled examples, which is the case in relevance feedback. Second, conventional approaches do not take into account the redundancy among examples, and could select multiple examples that are similar (or even identical). In this work, we propose a novel scheme for explicitly addressing the drawbacks. It first learns a kernel function from a mixture of labeled and unlabeled data, and therefore alleviates the problem of small-sized training data. The kernel will then be used for a batch mode active learning method to identify the most informative and diverse examples via a min-max framework. Two novel algorithms are proposed to solve the related combinatorial optimization: the first approach approximates the problem into a quadratic program, and the second solves the combinatorial optimization approximately by a greedy algorithm that exploits the merits of submodular functions. Extensive experiments with image retrieval using both natural photo images and medical images show that the proposed algorithms are significantly more effective than the state-of-the-art approaches. A demo is available at http://msm.cais.ntu.edu.sg/LSCBIR/. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Batch-Mode AL <s> Abstract Classification of evolving data streams is a challenging task, which is suitably tackled with online learning approaches. Data is processed instantly requiring the learning machinery to (self-)adapt by adjusting its model. However for high velocity streams, it is usually difficult to obtain labeled samples to train the classification model. Hence, we propose a novel online batch-based active learning algorithm (OBAL) to perform the labeling. OBAL is developed for crisis management applications where data streams are generated by the social media community. OBAL is applied to discriminate relevant from irrelevant social media items.
An emergency management user will be interactively queried to label chosen items. OBAL exploits the boundary items for which it is highly uncertain about their class and makes use of two classifiers: k-Nearest Neighbors (kNN) and Support Vector Machine (SVM). OBAL is equipped with a labeling budget and a set of uncertainty strategies to identify the items for labeling. An extensive analysis is carried out to show OBAL's performance, the sensitivity of its parameters, and the contribution of the individual uncertainty strategies. Two types of datasets are used: synthetic and social media datasets related to crises. The empirical results illustrate that OBAL has a very good discrimination power. <s> BIB003 | In most AL research, queries are selected serially (i.e., one instance is labeled at a time). This is not practical when training a model is slow or expensive. By contrast, batch-mode (also written batch mode) AL allows the machine learner to query a batch (i.e., a group) of unlabeled instances to be labeled simultaneously, which is better suited to parallel labeling environments and to models with slow training procedures, and thus accelerates learning. In batch-mode AL, the number of instances in each query group is called the batch size. For recent overviews of batch-mode AL, see BIB003 BIB001 BIB002 .
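As a simple illustration of the batch-mode idea, the sketch below selects the B most uncertain pool instances at once, so the model is retrained once per batch rather than once per label; the function name and the predict_proba-style model interface are illustrative assumptions.

```python
import numpy as np

def select_batch(model, X_pool, batch_size=10):
    """Naive batch-mode selection: return indices of the B most uncertain instances."""
    proba = model.predict_proba(X_pool)
    uncertainty = 1.0 - proba.max(axis=1)            # least-confident scores
    return np.argsort(uncertainty)[-batch_size:]     # indices of the B best queries
```

The weakness of this naive version is visible in the code: nothing prevents the top-B instances from being near-duplicates of one another, which is the redundancy problem that diversity-aware batch-mode methods (e.g., BIB002 ) are designed to solve.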
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> We propose an algorithm called query by committee, in which a committee of students is trained on the same data set. The next query is chosen according to the principle of maximal disagreement . The algorithm is studied for two toy models: the high-low game and perceptron learning of another perceptron. As the number of queries goes to infinity, the committee algorithm yields asymptotically finite information gain. This leads to generalization error that decreases exponentially with the number of examples. This is in marked contrast to learning from randomly chosen inputs, for which the information gain approaches zero and the generalization error decreases with a relatively slow inverse power law. We suggest that asymptotically finite information gain may be an important characteristic of good query algorithms. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> We describe and evaluate experimentally a method for clustering words according to their distribution in particular syntactic contexts. Words are represented by the relative frequency distributions of contexts in which they appear, and relative entropy between those distributions is used as the similarity measure for clustering. Clusters are represented by average context distributions derived from the given words according to their probabilities of cluster membership. In many cases, the clusters can be thought of as encoding coarse sense distinctions. Deterministic annealing is used to find lowest distortion sets of clusters: as the annealing parameter increases, existing clusters become unstable and subdivide, yielding a hierarchical "soft" clustering of the data. Clusters are used as the basis for class models of word cooccurrence, and the models are evaluated with respect to held-out test data. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> The ability to cheaply train text classifiers is critical to their use in information retrieval, content analysis, natural language processing, and other tasks involving data which is partly or fully textual. An algorithm for sequential sampling during machine learning of statistical classifiers was developed and tested on a newswire text categorization task. This method, which we call uncertainty sampling, reduced by as much as 500-fold the amount of training data that would have to be manually classified to achieve a given level of effectiveness. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Abstract In many real-world learning tasks, it is expensive to acquire a sufficient number of labeled examples for training. This paper proposes a general method for efficiently training probabilistic classifiers, by selecting for training only the more informative examples in a stream of unlabeled examples. The method, committee-based sampling , evaluates the informativeness of an example by measuring the degree of disagreement between several model variants. These variants (the committee) are drawn randomly from a probability distribution conditioned by the training set selected so far (Monte-Carlo sampling).
The method is particularly attractive because it evaluates the expected information gain from a training example implicitly, making the model both easy to implement and generally applicable. We further show how to apply committee-based sampling for training Hidden Markov Model classifiers, which are commonly used for complex classification tasks. The method was implemented and tested for the task of tagging words in natural language sentences with parts-of-speech. Experimental evaluation of committee-based sampling versus standard sequential training showed a substantial improvement in training efficiency. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Information extraction from HTML documents requires a classifier capable of assigning semantic labels to the words or word sequences to be extracted. If completely labeled documents are available for training, well-known Markov model techniques can be used to learn such classifiers. In this paper, we consider the more challenging task of learning hidden Markov models (HMMs) when only partially (sparsely) labeled documents are available for training. We first give detailed account of the task and its appropriate loss function, and show how it can be minimized given an HMM. We describe an EM style algorithm for learning HMMs from partially labeled data. We then present an active learning algorithm that selects "difficult" unlabeled tokens and asks the user to label them. We study empirically by how much active learning reduces the required data labeling effort, or increases the quality of the learned model achievable with a given amount of user effort. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> In many real world applications, active selection of training examples can significantly reduce the number of labelled training examples to learn a classification function. Different strategies in the field of support vector machines have been proposed that iteratively select a single new example from a set of unlabelled examples, query the corresponding class label and then perform retraining of the current classifier. However, to reduce computational time for training, it might be necessary to select batches of new training examples instead of single examples. Strategies for single examples can be extended straightforwardly to select batches by choosing the h > 1 examples that get the highest values for the individual selection criterion. We present a new approach that is especially designed to construct batches and incorporates a diversity measure. It has low computational requirements making it feasible for large scale problems with several thousands of examples. Experimental results indicate that this approach provides a faster method to attain a level of generalization accuracy in terms of the number of labelled examples. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> We analyze the “query by committee” algorithm, a method for filtering informative queries from a random stream of inputs. We show that if the two-member committee algorithm achieves information gain with positive lower bound, then the prediction error decreases exponentially with the number of queries. We show that, in particular, this exponential decrease holds for query learning of perceptrons. 
<s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> The paper is concerned with two-class active learning. While the common approach for collecting data in active learning is to select samples close to the classification boundary, better performance can be achieved by taking into account the prior data distribution. The main contribution of the paper is a formal framework that incorporates clustering into active learning. The algorithm first constructs a classifier on the set of the cluster representatives, and then propagates the classification decision to the other samples via a local noise model. The proposed model allows to select the most representative samples as well as to avoid repeatedly labeling samples in the same cluster. During the active learning process, the clustering is adjusted using the coarse-to-fine strategy in order to balance between the advantage of large clusters and the accuracy of the data representation. The results of experiments in image databases show a better performance of our algorithm compared to the current methods. <s> BIB008 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Interactively learning from a small sample of unlabeled examples is an enormously challenging task. Relevance feedback and more recently active learning are two standard techniques that have received much attention towards solving this interactive learning problem. How to best utilize the user's effort for labeling, however, remains unanswered. It has been shown in the past that labeling a diverse set of points is helpful, however, the notion of diversity has either been dependent on the learner used, or computationally expensive. In this paper, we intend to address these issues by proposing a fundamentally motivated, information-theoretic view of diversity and its use in a fast, non-degenerate active learning-based relevance feedback setting. Comparative testing and results are reported and thoughts for future work are presented. <s> BIB009 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> With the advent and proliferation of digital cameras and computers, the number of digital photos created and stored by consumers has grown extremely large. This created increasing demand for image retrieval systems to ease interaction between consumers and personal media content. Active learning is a widely used user interaction model for retrieval systems, which learns the query concept by asking users to label a number of images at each iteration. In this paper, we study sampling strategies for active learning in personal photo retrieval. In order to reduce human annotation efforts in a content-based image retrieval setting, we propose using multiple sampling criteria for active learning: informativeness, diversity and representativeness. Our experimental results show that by combining multiple sampling criteria in active learning, the performance of personal photo retrieval system can be significantly improved. <s> BIB010 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Supervised and semi-supervised learning are frequently applied methods to annotate videos by mapping low-level features into high-level semantic concepts.
Though they work well for certain concepts, the performance is still far from reality due to the large gap between the features and the semantics. The main constraint of these methods is that the information contained in a limited number of labeled training samples can hardly represent the distributions of the semantic concepts. In this paper, we propose a novel semi-automatic video annotation framework, active learning with clustering tuning, to tackle the disadvantages of current video annotation solutions. In this framework, firstly an initial training set is constructed based on clustering the entire video dataset. And then a SVM-based active learning scheme is proposed, which aims at maximizing the margin of the SVM classifier by manually selectively labeling a small set of samples. Moreover, in each round of active learning, we tune/refine the clustering results based on the prediction results of current stage, which is beneficial for selecting the most informative samples in the active learning process, as well as helps further improve the final annotation accuracy in the post-processing step. Experimental results show that the proposed scheme performs superior to typical active learning algorithms in terms of both annotation accuracy and stability. <s> BIB011 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Relevance feedback, which uses the terms in relevant documents to enrich the user's initial query, is an effective method for improving retrieval performance. An associated key research problem is the following: Which documents to present to the user so that the user's feedback on the documents can significantly impact relevance feedback performance. This paper views this as an active learning problem and proposes a new algorithm which can efficiently maximize the learning benefits of relevance feedback. This algorithm chooses a set of feedback documents based on relevancy, document diversity and document density. Experimental results show a statistically significant and appreciable improvement in the performance of our new approach over the existing active feedback methods. <s> BIB012 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Active learning has been demonstrated to be an effective approach to reducing human labeling effort in multimedia annotation tasks. However, most of the existing active learning methods for video annotation are studied in a relatively simple context where concepts are sequentially annotated with fixed effort and only a single modality is applied. However, we usually have to deal with multiple modalities, and sequentially annotating concepts without preference cannot suitably assign annotation effort. To address these two issues, in this paper we propose a multi-concept multi-modality active learning method for video annotation in which multiple concepts and multiple modalities can be simultaneously taken into consideration. In each round of active learning, this method selects the concept that is expected to get the highest performance gain and a batch of suitable samples to be annotated for this concept. Then, a graph-based semi-supervised learning is conducted on each modality for the selected concept. The proposed method is able to sufficiently explore the human effort by considering both the learnabilities of different concepts and the potentials of different modalities. 
Experimental results on TRECVID 2005 benchmark have demonstrated its effectiveness and efficiency. <s> BIB013 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Active learning is well-suited to many problems in natural language processing, where unlabeled data may be abundant but annotation is slow and expensive. This paper aims to shed light on the best active learning approaches for sequence labeling tasks such as information extraction and document segmentation. We survey previously used query selection strategies for sequence models, and propose several novel algorithms to address their shortcomings. We also conduct a large-scale empirical comparison using multiple corpora, which demonstrates that our proposed methods advance the state of the art. <s> BIB014 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Active learning is a supervised machine learning technique in which the learner is in control of the data used for learning. That control is utilized by the learner to ask an oracle, typically a human with extensive knowledge of the domain at hand, about the classes of the instances for which the model learned so far makes unreliable predictions. The active learning process takes as input a set of labeled examples, as well as a larger set of unlabeled examples, and produces a classifier and a relatively small set of newly labeled data. The overall goal is to create as good a classifier as possible, without having to mark-up and supply the learner with more data than necessary. The learning process aims at keeping the human annotation effort to a minimum, only asking for advice where the training utility of the result of such a query is high. Active learning has been successfully applied to a number of natural language processing tasks, such as, information extraction, named entity recognition, text categorization, part-of-speech tagging, parsing, and word sense disambiguation. This report is a literature survey of active learning from the perspective of natural language processing. <s> BIB015 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> One of the principal bottlenecks in applying learning techniques to classification problems is the large amount of labeled training data required. Especially for images and video, providing training data is very expensive in terms of human time and effort. In this paper we propose an active learning approach to tackle the problem. Instead of passively accepting random training examples, the active learning algorithm iteratively selects unlabeled examples for the user to label, so that human effort is focused on labeling the most “useful” examples. Our method relies on the idea of uncertainty sampling, in which the algorithm selects unlabeled examples that it finds hardest to classify. Specifically, we propose an uncertainty measure that generalizes margin-based uncertainty to the multi-class case and is easy to compute, so that active learning can handle a large number of classes and large data sizes efficiently. We demonstrate results for letter and digit recognition on datasets from the UCI repository, object recognition results on the Caltech-101 dataset, and scene categorization results on a dataset of 13 natural scene categories. 
The proposed method gives large reductions in the number of training examples required over random selection to achieve similar classification accuracy, with little computational overhead. <s> BIB016 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Active learning has been proven a reliable strategy to reduce manual efforts in training data labeling. Such strategies incorporate the user as oracle: the classifier selects the most appropriate example and the user provides the label. While this approach is tailored towards the classifier, more intelligent input from the user may be beneficial. For instance, given only one example at a time users are hardly able to determine whether this example is an outlier or not. In this paper we propose user-based visually-supported active learning strategies that allow the user to do both, selecting and labeling examples given a trained classifier. While labeling is straightforward, selection takes place using an interactive visualization of the classifier's a-posteriori output probabilities. By simulating different user selection strategies we show that user-based active learning outperforms uncertainty-based sampling methods and yields a more robust approach on different data sets. The obtained results point towards the potential of combining active learning strategies with results from the field of information visualization. <s> BIB017 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership and then the user is asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee, large margin, and posterior probability-based. For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a good architecture are provided for new and/or inexperienced users.
We categorize the existing sample selection strategies used in multimedia annotation and retrieval into five criteria: risk reduction, uncertainty, diversity, density and relevance. We then introduce several classification models used in active learning-based multimedia annotation and retrieval, including semi-supervised learning, multilabel learning and multiple instance learning. We also provide a discussion on several future trends in this research direction. In particular, we discuss cost analysis of human annotation and large-scale interactive multimedia annotation. <s> BIB019 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> In this letter, we present a novel batch-mode active learning technique for solving multiclass classification problems by using the support vector machine classifier with the one-against-all architecture. The uncertainty of each unlabeled sample is measured by defining a criterion which not only considers the smallest distance to the decision hyperplanes but also takes into account the distances to other hyperplanes if the sample is within the margin of their decision boundaries. To select batch of most uncertain samples from all over the decision region, the uncertain regions of the classifiers are partitioned into multiple parts depending on the number of geometrical margins of binary classifiers passing on them. Then, a balanced number of most uncertain samples are selected from each part. To minimize the redundancy and keep the diversity among these samples, the kernel k-means clustering algorithm is applied to the set of uncertain samples, and the representative sample (medoid) from each cluster is selected for labeling. The effectiveness of the proposed method is evaluated by comparing it with other batch-mode active learning techniques existing in the literature. Experimental results on two different remote sensing data sets confirmed the effectiveness of the proposed technique. <s> BIB020 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Most active learning approaches select either informative or representative unlabeled instances to query their labels. Although several active learning algorithms have been proposed to combine the two criteria for query selection, they are usually ad hoc in finding unlabeled instances that are both informative and representative. We address this challenge by a principled approach, termed QUIRE, based on the min-max view of active learning. The proposed approach provides a systematic way for measuring and combining the informativeness and representativeness of an instance. Extensive experimental results show that the proposed QUIRE approach outperforms several state-of -the-art active learning approaches. <s> BIB021 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Active learning methods select informative instances to effectively learn a suitable classifier. Uncertainty sampling, a frequently utilized active learning strategy, selects instances about which the model is uncertain but it does not consider the reasons for why the model is uncertain. In this article, we present an evidence-based framework that can uncover the reasons for why a model is uncertain on a given instance. 
Using the evidence-based framework, we discuss two reasons for uncertainty of a model: a model can be uncertain about an instance because it has strong, but conflicting evidence for both classes or it can be uncertain because it does not have enough evidence for either class. Our empirical evaluations on several real-world datasets show that distinguishing between these two types of uncertainties has a drastic impact on the learning efficiency. We further provide empirical and analytical justifications as to why distinguishing between the two uncertainties matters. <s> BIB022 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> How can we find a general way to choose the most suitable samples for training a classifier? Even with very limited prior information? Active learning, which can be regarded as an iterative optimization procedure, plays a key role to construct a refined training set to improve the classification performance in a variety of applications, such as text analysis, image recognition, social network modeling, etc. Although combining representativeness and informativeness of samples has been proven promising for active sampling, state-of-the-art methods perform well under certain data structures. Then can we find a way to fuse the two active sampling criteria without any assumption on data? This paper proposes a general active learning framework that effectively fuses the two criteria. Inspired by a two-sample discrepancy problem, triple measures are elaborately designed to guarantee that the query samples not only possess the representativeness of the unlabeled data but also reveal the diversity of the labeled data. Any appropriate similarity measure can be employed to construct the triple measures. Meanwhile, an uncertain measure is leveraged to generate the informativeness criterion, which can be carried out in different ways. Rooted in this framework, a practical active learning algorithm is proposed, which exploits a radial basis function together with the estimated probabilities to construct the triple measures and a modified best-versus-second-best strategy to construct the uncertain measure, respectively. Experimental results on benchmark datasets demonstrate that our algorithm consistently achieves superior performance over the state-of-the-art active learning algorithms. <s> BIB023 | Query strategies are central in AL methods; they are used to identify those training examples that can contribute most to the learning performance of ML models. Various AL query strategies have been proposed, defined, and discussed in several surveys to improve over random sample selection BIB018 BIB015 BIB019 . Here we highlight the most commonly used query strategies in AL: (1) uncertainty sampling, (2) diversity, (3) density, and (4) relevance. Uncertainty sampling BIB003 picks the instances that the (machine) learner model is most uncertain about. Due to its simplicity, intuitiveness, and empirical success in many domains, uncertainty sampling is the most commonly used strategy. Though uncertainty sampling has many limitations, such as sensitivity to noise and outliers, it still works surprisingly well BIB022 . The heuristic of selecting the most uncertain instances stems from the fact that in many learning algorithms the essential classification boundary can be preserved based solely on the nearby samples, and the samples that are far from the boundary can be viewed as redundant. 
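The uncertainty measures detailed in the following paragraphs (least confident, margin sampling, and entropy) can all be computed directly from a model's posterior class probabilities. The sketch below shows each measure, scaled so that larger scores mean more informative instances; the function names and the assumed probability-matrix input (one row per instance, one column per class) are illustrative assumptions.

```python
import numpy as np

def least_confident(proba):
    # 1 - P(most likely label); ignores the rest of the label distribution.
    return 1.0 - proba.max(axis=1)

def margin(proba):
    # Difference between the two most probable labels; a small margin means
    # high uncertainty, so the margin is negated to keep "larger = more informative".
    part = np.sort(proba, axis=1)
    return -(part[:, -1] - part[:, -2])

def entropy(proba):
    # Shannon entropy of the full posterior label distribution.
    return -np.sum(proba * np.log(proba + 1e-12), axis=1)
```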
For binary classification, the samples that are closest to the classification boundary will be selected. When multiple learners exist, a widely applied strategy is selecting the samples that have the maximum disagreement among the learners BIB007 BIB001 . The disagreement of multiple learners can also be viewed as an uncertainty measure. This query strategy is called query-by-committee (QBC) BIB001 . A committee of ML models is trained on the same data set. Each committee member then votes on the labelings of query candidates. The most informative query is the instance on which they most disagree. Two main disagreement measures have been proposed in the literature: (1) vote entropy BIB004 and (2) average Kullback-Leibler (KL) divergence . Vote entropy compares only the committee members' top-ranked class , whereas the KL divergence metric measures the difference between two probability distributions. KL divergence to the mean BIB002 is an average of the KL divergence between each distribution and the mean of all the distributions. Thus, this disagreement measure picks as the most informative query the instance with the largest average difference between the label distributions of any committee member and the consensus . Other commonly used uncertainty sampling variants include least confident, margin sampling, and entropy. Least confident is an uncertainty sampling variant for multi-class classification (Appendix A.4.2), where the machine learner queries the instance whose prediction is the least confident (as the name implies). The least confident strategy considers only information about the most probable label, and thus, it "throws away" information about the remaining label distribution. Margin sampling BIB005 overcomes this drawback by also considering the posterior of the second most likely label . The entropy variant uses the entropy of the full posterior label distribution as the uncertainty measure; entropy-based uncertainty sampling has achieved strong empirical performance across many tasks . A detailed discussion about when each variant of uncertainty sampling should be used is provided in . The second query strategy, based on a diversity criterion BIB006 , was first investigated in batch-mode AL (Section 3.1.4), where Brinker BIB006 used diversity in AL with SVMs. Diversity concerns the capability of the learning model to avoid selecting query candidates that rank well according to the heuristic (i.e., query strategy) but are redundant with one another. More specifically, a diversity-based query strategy selects unlabeled samples that are far from the already selected set, and thus reduces redundancy within the selected samples. Diversity has been studied extensively for margin-based heuristics, where the base margin sampling heuristic is constrained using a measure of diversity between the candidates. An algorithm for a general diversity-based heuristic can be found in BIB018 . In many applications, we need to select a batch of samples instead of just one in an AL iteration. For example, updating (i.e., retraining) a model may need extensive computation, and thus labeling just one sample at a time would make the AL process quite slow. Joshi et al. BIB016 proposed that the selected samples in a batch should be diverse. Dagli et al. BIB009 and Wu et al.
BIB010 emphasized that the diversity criterion should not only be applied within each batch but also be considered with respect to all labeled samples, to avoid the selected samples becoming concentrated in an (increasingly) restricted area. The third strategy is to select samples using a density criterion BIB014 , which favors samples that lie within regions of high density. The main argument for a density-based criterion is that informative instances should not only be those that are uncertain, but also those that are "representative" of the underlying distribution (i.e., inhabit dense regions of the instance space). In density-based selection, the query candidates are selected from dense areas of the feature space because those instances are considered the most representative BIB021 BIB023 BIB010 . The representativeness of an instance can be evaluated by how many instances among the unlabeled data are similar to it. Density-based selection of candidates can be used to initialize an AL model when no labels are available at all. Wu et al. BIB010 proposed a representativeness measure for each sample according to the distance to its nearby samples. Another strategy uses clustering-based methods BIB008 BIB011 , which first group the samples and then select samples at and around the cluster centers. Qi et al. BIB011 combine AL with clustering; their method can refine the clusters with merging and splitting operations after each iteration, which is beneficial for selecting the most informative samples in the AL process and also helps further improve the final annotation accuracy in the post-processing step. The fourth strategy, the relevance criterion, is usually applied in multi-label classification tasks (Appendix A.4.3). Based on a relevance criterion, those samples that have the highest probability of being relevant to a certain class are selected BIB019 . This strategy fosters the identification of positive examples for a class. Ayache and Quénot conducted an empirical study of different sample selection strategies for AL for indexing concepts in videos. Their experimental results clearly show that the relevance criterion can achieve better performance than an uncertainty criterion for some concepts. It is difficult to compare these criteria directly. Seifert and Granitzer's experiments BIB017 showed that the benefits of these strategies depend on specific tasks, data sets, and classifiers (Appendix A.3). Wang et al. BIB019 provided several general suggestions: (1) for binary classification problems, applying a relevance criterion may achieve the best results in some extremely unbalanced cases where positive samples are much less frequent than negative ones; (2) in batch-mode AL (Section 3.1.4), integrating a diversity criterion will be helpful for computational efficiency; (3) in many cases, these criteria are combined explicitly or implicitly; and (4) the diversity and density criteria are normally not used individually (because they are not directly associated with classification results), and most commonly they are used to enhance the uncertainty criterion.
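To illustrate suggestion (4), using density to enhance the uncertainty criterion, the following is a minimal sketch of information-density weighting in the spirit of BIB014 : a base informativeness score (e.g., entropy) is multiplied by each candidate's average similarity to the rest of the pool, so that uncertain outliers are down-weighted. The cosine-similarity choice and the beta parameter are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def information_density(X_pool, base_scores, beta=1.0):
    """Weight base informativeness scores by average similarity to the pool,
    favoring candidates that lie in dense (representative) regions."""
    sim = cosine_similarity(X_pool)      # pairwise similarities, shape (n, n)
    density = sim.mean(axis=1)           # representativeness of each candidate
    return base_scores * (density ** beta)
```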
The uncertainty criterion relates to the confidence of a ML algorithm in correctly classifying the considered sample, while the diversity criterion aims at selecting a set of unlabeled samples that are as diverse (distant from one another) as possible, thus reducing the redundancy among the selected samples. The combination of the two criteria results in the selection of the potentially most informative set (Section 3.1.1) of samples at each iteration of the AL process. Patra et al. BIB020 combined the uncertainty and diversity criteria in a batch-mode AL (Section 3.1.4) method for multi-class classification (Appendix A.4.2) with SVM classifiers, as sketched below. In their uncertainty step, m samples are selected from all over the uncertain regions of the classifiers; in their diversity step, a batch of h (m > h > 1) samples that are diverse from each other is chosen from among those m samples. Xu et al. BIB012 also employed SVM-based batch-mode AL, but their method incorporated diversity and density measures. To improve classifier performance for interactive video annotation, Wang et al. BIB013 combined uncertainty, diversity, density, and relevance for sample selection in AL, and named the comprehensive strategy effectiveness.
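A minimal sketch of this two-step uncertainty-then-diversity selection follows; the parameter names m and h mirror the description above, while plain k-means is substituted for the kernel k-means of BIB020 for brevity, so this is an illustrative simplification rather than the authors' exact method.

```python
import numpy as np
from sklearn.cluster import KMeans

def uncertainty_then_diversity(X_pool, uncertainty_scores, m=100, h=10):
    """Select the m most uncertain candidates, cluster them into h groups,
    and return one representative (medoid-like) sample per cluster."""
    uncertain_idx = np.argsort(uncertainty_scores)[-m:]   # uncertainty step
    X_u = X_pool[uncertain_idx]
    km = KMeans(n_clusters=h, n_init=10).fit(X_u)         # diversity step
    batch = []
    for c in range(h):
        members = np.where(km.labels_ == c)[0]
        if len(members) == 0:
            continue
        # Representative sample: the member closest to the cluster center.
        dist = np.linalg.norm(X_u[members] - km.cluster_centers_[c], axis=1)
        batch.append(uncertain_idx[members[np.argmin(dist)]])
    return np.array(batch)
```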
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Recent and Novel AL Methods <s> Obtaining labels can be expensive or time-consuming, but unlabeled data is often abundant and easier to obtain. Most learning tasks can be made more efficient, in terms of labeling cost, by intelligently choosing specific unlabeled instances to be labeled by an oracle. The general problem of optimally choosing these instances is known as active learning. As it is usually set in the context of supervised learning, active learning relies on a single oracle playing the role of a teacher. We focus on the multiple annotator scenario where an oracle, who knows the ground truth, no longer exists; instead, multiple labelers, with varying expertise, are available for querying. This paradigm posits new challenges to the active learning scenario. We can now ask which data sample should be labeled next and which annotator should be queried to benefit our learning model the most. In this paper, we employ a probabilistic model for learning from multiple annotators that can also learn the annotator expertise even when their expertise may not be consistently accurate across the task domain. We then focus on providing a criterion and formulation that allows us to select both a sample and the annotator/s to query the labels from. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Recent and Novel AL Methods <s> We present a simple and yet effective approach that can incorporate rationales elicited from annotators into the training of any off-the-shelf classifier. We show that our simple approach is effective for multinomial naïve Bayes, logistic regression, and support vector machines. We additionally present an active learning method tailored specifically for the learning with rationales framework. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Recent and Novel AL Methods <s> Active learning methods select informative instances to effectively learn a suitable classifier. Uncertainty sampling, a frequently utilized active learning strategy, selects instances about which the model is uncertain but it does not consider the reasons for why the model is uncertain. In this article, we present an evidence-based framework that can uncover the reasons for why a model is uncertain on a given instance. Using the evidence-based framework, we discuss two reasons for uncertainty of a model: a model can be uncertain about an instance because it has strong, but conflicting evidence for both classes or it can be uncertain because it does not have enough evidence for either class. Our empirical evaluations on several real-world datasets show that distinguishing between these two types of uncertainties has a drastic impact on the learning efficiency. We further provide empirical and analytical justifications as to why distinguishing between the two uncertainties matters. <s> BIB003 | Yan et al. BIB001 , Sharma et al. BIB002 , and Sharma and Bilgic BIB003 introduced some very recent and novel AL-based methods. Typical AL algorithms rely on a single annotator (i.e., oracle) who serves in the role of a "teacher". By contrast, the following multiple-annotator AL scenario poses new challenges: an oracle, who knows the ground truth, does not exist, and multiple annotators, with varying expertise, are available for querying.
Such scenarios are not uncommon in the real world, for example, decision making for emergency management. To bridge the gap, Yan et al. BIB001 focused on an AL scenario with multiple crowdsourcing annotators. The machine learner asks which data sample should be labeled next and which annotator should be queried to improve the performance of the classifier the most. Specifically, Yan et al. employed a probabilistic model to learn from multiple annotators; the model can also learn the annotators' expertise even when that expertise may not be consistently accurate across the task domain. The authors provided an optimization formulation that allows the machine learner to select the most uncertain sample and the most appropriate annotator to query the labels. Their experiments on multiple-annotator text data and on three UCI benchmark data sets showed that their AL approach, combined with information from multiple annotators, improves the learning performance. One of the bottlenecks in eliciting domain knowledge from annotators is that traditional supervised learning approaches (Appendix A.2.1) cannot handle the rich feedback elicited from domain experts. To address this gap, many methods have been developed, but they are often classifier-specific BIB002 ; such methods do not transfer directly from one classifier to another. To further address this problem, Sharma et al. BIB002 proposed an AL approach that can incorporate rationales elicited from annotators into the training of any existing classifier for text classification (Appendix A.5). Their experimental results using four text categorization datasets showed that their approach is effective for incorporating rationales into the learning of multinomial naïve Bayes, logistic regression, and SVM classifiers. Traditional uncertainty sampling does not consider the reasons why a (machine) learner is uncertain about the selected instances. Sharma and Bilgic BIB003 addressed this gap by using an evidence-based framework to do so. Specifically, the authors focused on two types of uncertainty: conflicting-evidence uncertainty and insufficient-evidence uncertainty. In the former type, the model is uncertain due to the presence of strong but conflicting evidence for each class; in the latter type, the model is uncertain due to insufficient evidence for either class. Their empirical evaluations on several real-world datasets, using naïve Bayes for binary classification tasks, showed that distinguishing between these two types of uncertainty has a drastic impact on learning efficiency: conflicting-evidence uncertainty provides the most benefit for learning, substantially outperforming both traditional uncertainty sampling and insufficient-evidence uncertainty sampling. The authors, in their explanation of these results, showed that the instances that are uncertain due to conflicting evidence have lower density in the labeled set compared to instances that are uncertain due to insufficient evidence; that is, there is less support in the training data for the perceived conflict than for the insufficiency of the evidence.
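To make the distinction between the two uncertainty types concrete, below is a minimal sketch for a binary multinomial naïve Bayes text classifier. The per-feature log-likelihood-ratio decomposition is a simplified reading of the evidence-based idea, and the threshold tau is purely an illustrative assumption, not Sharma and Bilgic's exact formulation.

import numpy as np
from sklearn.naive_bayes import MultinomialNB

def evidence_split(nb, x_counts):
    """Split a document's evidence into the parts favoring each class of a
    binary MultinomialNB (x_counts: dense 1-D array of word counts)."""
    # Per-feature log-likelihood ratio: >0 favors class 1, <0 favors class 0.
    llr = nb.feature_log_prob_[1] - nb.feature_log_prob_[0]
    contrib = x_counts * llr
    evidence_for_1 = contrib[contrib > 0].sum()
    evidence_for_0 = -contrib[contrib < 0].sum()
    return evidence_for_1, evidence_for_0

def uncertainty_type(nb, x_counts, tau=1.0):
    """Label an uncertain instance: strong evidence on both sides suggests
    conflicting evidence; little evidence on either side, insufficient."""
    e1, e0 = evidence_split(nb, x_counts)
    if e1 > tau and e0 > tau:
        return "conflicting-evidence"
    return "insufficient-evidence"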
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Summary and Discussion <s> Most of the empirical evaluations of active learning approaches in the literature have focused on a single classifier and a single performance measure. We present an extensive empirical evaluation of common active learning baselines using two probabilistic classifiers and several performance measures on a number of large datasets. In addition to providing important practical advice, our findings highlight the importance of overlooked choices in active learning experiments in the literature. For example, one of our findings shows that model selection is as important as devising an active learning approach, and choosing one classifier and one performance measure can often lead to unexpected and unwarranted conclusions. Active learning should generally improve the model's capability to distinguish between instances of different classes, but our findings show that the improvements provided by active learning for one performance measure often came at the expense of another measure. We present several such results, raise questions, guide users and researchers to better alternatives, caution against unforeseen side effects of active learning, and suggest future research directions. <s> BIB001 | Even though AL has been successfully applied to many problems in different domains, no systematic and comprehensive comparison of AL strategies has been conducted. This may be because much of the work has been disconnected, using different data sets in different problem domains, with insufficient consistency to allow easy comparison of AL strategies. Ramirez-Loaiza and colleagues BIB001 recently made a first attempt to address this, but they evaluated only two classifiers and two AL strategies. Based on a meta-analysis of 54 published papers, Ramirez-Loaiza et al. found that most empirical evaluations of AL approaches in the literature have focused on a single classifier (83%) and a single performance measure (91%) BIB001 . To provide practical advice on AL methods, these authors also conducted an extensive empirical evaluation of common AL baselines, using two probabilistic classifiers (naïve Bayes and logistic regression) and two of the most common AL strategies (uncertainty sampling and QBC). Their evaluation used several performance measures on a number of large datasets. They experimented with both synthetic and real-world datasets, specifically, 10 large real-world binary classification data sets. The smallest dataset had 19 K instances and the largest 490 K. The domains and class distributions of these data sets are diverse, ranging from housing, through ecology, to handwriting and letter recognition. Ramirez-Loaiza et al. BIB001 concluded that AL algorithms can reduce the time, effort, and resources needed to train an accurate predictive model by carefully choosing which instances should be labeled. Their findings also highlighted the importance of overlooked choices in AL experiments in the literature; for example, they showed that model selection is as important as the development of an AL algorithm.
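As a flavor of this kind of baseline evaluation, the sketch below records a learning curve for uncertainty sampling versus a random-sampling baseline and can be run with either of the two probabilistic classifiers; the pool/test variables and budget parameters are placeholders, not Ramirez-Loaiza et al.'s actual protocol.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

def run_al(clf, X_pool, y_pool, X_test, y_test,
           seed_size=20, budget=200, batch=10, strategy="uncertainty"):
    """One AL run: returns test accuracy after each labeling batch."""
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X_pool), seed_size, replace=False))
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]
    curve = []
    while len(labeled) < seed_size + budget and unlabeled:
        clf.fit(X_pool[labeled], y_pool[labeled])
        curve.append(accuracy_score(y_test, clf.predict(X_test)))
        k = min(batch, len(unlabeled))
        if strategy == "uncertainty":  # least-confidence sampling
            conf = clf.predict_proba(X_pool[unlabeled]).max(axis=1)
            order = np.argsort(conf)[:k]
        else:                          # random-sampling baseline
            order = rng.choice(len(unlabeled), k, replace=False)
        picked = [unlabeled[int(i)] for i in order]
        labeled += picked
        unlabeled = [i for i in unlabeled if i not in picked]
    return curve

# Usage sketch, comparing classifiers and strategies on placeholder data:
# for clf in (GaussianNB(), LogisticRegression(max_iter=1000)):
#     for strat in ("uncertainty", "random"):
#         curve = run_al(clf, X_pool, y_pool, X_test, y_test, strategy=strat)
#         print(type(clf).__name__, strat, curve[-1])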
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> This is the book form of the Research and Development Agenda for Visual Analytics to be published by IEEE in 2005. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Researchers have made significant progress in disciplines such as scientific and information visualization, statistically based exploratory and confirmatory analysis, data and knowledge representations, and perceptual and cognitive sciences. Although some research is being done in this area, the pace at which new technologies and technical talents are becoming available is far too slow to meet the urgent need. National Visualization and Analytics Center's goal is to advance the state of the science to enable analysts to detect the expected and discover the unexpected from massive and dynamic information streams and databases consisting of data of multiple types and from multiple sources, even though the data are often conflicting and incomplete. Visual analytics is a multidisciplinary field that includes the following focus areas: (i) analytical reasoning techniques, (ii) visual representations and interaction techniques, (iii) data representations and transformations, (iv) techniques to support production, presentation, and dissemination of analytical results. The R&D agenda for visual analytics addresses technical needs for each of these focus areas, as well as recommendations for speeding the movement of promising technologies into practice. This article provides only the concise summary of the R&D agenda. We encourage reading, discussion, and debate as well as active innovation toward the agenda for visual analysis. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> We are living in a world which faces a rapidly increasing amount of data to be dealt with on a daily basis. In the last decade, the steady improvement of data storage devices and means to create and collect data along the way influenced our way of dealing with information: Most of the time, data is stored without filtering and refinement for later use. Virtually every branch of industry or business, and any political or personal activity nowadays generate vast amounts of data. Making matters worse, the possibilities to collect and store data increase at a faster rate than our ability to use it for making decisions. However, in most applications, raw data has no value in itself; instead we want to extract the information contained in it. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Visual analytics (VA) system development started in academic research institutions where novel visualization techniques and open source toolkits were developed. Simultaneously, small software companies, sometimes spin-offs from academic research institutions, built solutions for specific application domains. In recent years we observed the following trend: some small VA companies grew exponentially; at the same time some big software vendors such as IBM and SAP started to acquire successful VA companies and integrated the acquired VA components into their existing frameworks. 
Generally the application domains of VA systems have broadened substantially. This phenomenon is driven by the generation of more and more data of high volume and complexity, which leads to an increasing demand for VA solutions from many application domains. In this paper we survey a selection of state-of-the-art commercial VA frameworks, complementary to an existing survey on open source VA tools. From the survey results we identify several improvement opportunities as future research directions. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Visual analytics is the science of marrying interactive visualizations and analytic algorithms to support exploratory knowledge discovery in large datasets. We argue for a shift from a `human in the loop' philosophy for visual analytics to a `human is the loop' viewpoint, where the focus is on recognizing analysts' work processes, and seamlessly fitting analytics into that existing interactive process. We survey a range of projects that provide visual analytic support contextually in the sensemaking loop, and outline a research agenda along with future challenges. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Machine learning is one of the most important and successful techniques in contemporary computer science. It involves the statistical inference of models (such as classifiers) from data. It is often conceived in a very impersonal way, with algorithms working autonomously on passively collected data. However, this viewpoint hides considerable human work of tuning the algorithms, gathering the data, and even deciding what should be modeled in the first place. Examining machine learning from a human-centered perspective includes explicitly recognising this human work, as well as reframing machine learning workflows based on situated human working practices, and exploring the co-adaptation of humans and systems. A human-centered understanding of machine learning in human context can lead not only to more usable machine learning tools, but to new ways of framing learning computationally. This workshop will bring together researchers to discuss these issues and suggest future research questions aimed at creating a human-centered approach to machine learning. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Predictive analytics embraces an extensive range of techniques including statistical modeling, machine learning, and data mining and is applied in business intelligence, public health, disaster management and response, and many other fields. To date, visualization has been broadly used to support tasks in the predictive analytics pipeline. Primary uses have been in data cleaning, exploratory analysis, and diagnostics. For example, scatterplots and bar charts are used to illustrate class distributions and responses. More recently, extensive visual analytics systems for feature selection, incremental learning, and various prediction tasks have been proposed to support the growing use of complex models, agent-specific optimization, and comprehensive model comparison and result exploration. 
Such work is being driven by advances in interactive machine learning and the desire of end-users to understand and engage with the modeling process. In this state-of-the-art report, we catalogue recent advances in the visualization community for supporting predictive analytics. First, we define the scope of predictive analytics discussed in this article and describe how visual analytics can support predictive analytics tasks in a predictive visual analytics (PVA) pipeline. We then survey the literature and categorize the research with respect to the proposed PVA pipeline. Systems and techniques are evaluated in terms of their supported interactions, and interactions specific to predictive analytics are discussed. We end this report with a discussion of challenges and opportunities for future research in predictive visual analytics. <s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Classification can be highly challenging when the dataset is extremely large, or when the training data in the underlying domain are difficult to obtain. One feasible solution to this challenge is transfer learning, which extracts the knowledge from source tasks and applies the knowledge to target tasks. Extant transfer learning schemes typically assume some degree of similarity between the source task and the target task. This assumption does not hold in certain actual applications; analysts unfamiliar with the learning strategy can be frustrated by the complicated transfer relations and the non-intuitive transfer process. This paper presents a suite of visual communication and interaction techniques to support the transfer learning process. Furthermore, a pioneering visual-assisted transfer learning methodology is proposed in the context of classification. Our solution includes a visual communication interface that allows for comprehensive exploration of the entire knowledge transfer process and the relevance among tasks. With these techniques and the methodology, the analysts can intuitively choose relevant tasks and data, as well as iteratively incorporate their experience and expertise into the analysis process. We demonstrate the validity and efficiency of our visual design and the analysis approach with examples of text classification. <s> BIB008 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Dimensionality Reduction (DR) is a core building block in visualizing multidimensional data. For DR techniques to be useful in exploratory data analysis, they need to be adapted to human needs and domain-specific problems, ideally, interactively, and on-the-fly. Many visual analytics systems have already demonstrated the benefits of tightly integrating DR with interactive visualizations. Nevertheless, a general, structured understanding of this integration is missing. To address this, we systematically studied the visual analytics and visualization literature to investigate how analysts interact with automatic DR techniques. The results reveal seven common interaction scenarios that are amenable to interactive control such as specifying algorithmic constraints, selecting relevant features, or choosing among several DR algorithms. We investigate specific implementations of visual analysis systems integrating DR, and analyze ways that other machine learning methods have been combined with DR.
Summarizing the results in a “human in the loop” process model provides a general lens for the evaluation of visual interactive DR systems. We apply the proposed model to study and classify several systems previously described in the literature, and to derive future research opportunities. <s> BIB009 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Interactive model analysis, the process of understanding, diagnosing, and refining a machine learning model with the help of interactive visualization, is very important for users to efficiently solve real-world artificial intelligence and data mining problems. Dramatic advances in big data analytics has led to a wide variety of interactive model analysis tasks. In this paper, we present a comprehensive analysis and interpretation of this rapidly developing area. Specifically, we classify the relevant work into three categories: understanding, diagnosis, and refinement. Each category is exemplified by recent influential work. Possible future research opportunities are also explored and discussed. <s> BIB010 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> To realize the full potential of machine learning in diverse real-world domains, it is necessary for model predictions to be readily interpretable and actionable for the human in the loop. Analysts, who are the users but not the developers of machine learning models, often do not trust a model because of the lack of transparency in associating predictions with the underlying data space. To address this problem, we propose Rivelo, a visual analytics interface that enables analysts to understand the causes behind predictions of binary classifiers by interactively exploring a set of instance-level explanations. These explanations are model-agnostic, treating a model as a black box, and they help analysts in interactively probing the high-dimensional binary data space for detecting features relevant to predictions. We demonstrate the utility of the interface with a case study analyzing a random forest model on the sentiment of Yelp reviews about doctors. <s> BIB011 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Visual analytics (VA) systems help data analysts solve complex problems interactively, by integrating automated data analysis and mining, such as machine learning (ML) based methods, with interactive visualizations. We propose a conceptual framework that models human interactions with ML components in the VA process, and that puts the central relationship between automated algorithms and interactive visualizations into sharp focus. The framework is illustrated with several examples and we further elaborate on the interactive ML process by identifying key scenarios where ML methods are combined with human feedback through interactive visualization. We derive five open research challenges at the intersection of ML and visualization research, whose solution should lead to more effective data analysis. <s> BIB012 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. 
With a growing number of classes, the accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation with CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it furthermore dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the latter layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvement in accuracy by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data. <s> BIB013 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Measured and simulated data sources from the built environment are increasing rapidly. It is becoming normal to analyze data from hundreds, or even thousands of buildings at once. Mechanistic, manual analysis of such data sets is time-consuming and not realistic using conventional techniques. Thus, a significant body of literature has been generated using unsupervised statistical learning techniques designed to uncover structure and information quickly with fewer input parameters or metadata about the buildings collected. Further, visual analytics techniques are developed as aids in this process for a human analyst to utilize and interpret the results. This paper reviews publications that include the use of unsupervised machine learning techniques as applied to non-residential building performance control and analysis. The categories of techniques covered include clustering, novelty detection, motif and discord detection, rule extraction, and visual analytics. The publications apply these technologies in the domains of smart meters, portfolio analysis, operations and controls optimization, and anomaly detection. A discussion is included of key challenges resulting from this review, such as the need for better collaboration between several, disparate research communities and the lack of open, benchmarking data sets. Opportunities for improvement are presented including methods of reproducible research and suggestions for cross-disciplinary cooperation. <s> BIB014 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code.
To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models. <s> BIB015 | VA focuses on the integration of computational methods (e.g., analytical reasoning algorithms) and interactive visual interfaces to extend the perceptual and cognitive abilities of humans BIB001 , and thus to support human reasoning (via exploratory knowledge discovery) about complex phenomena with big and often heterogeneous data. VA emphasizes the key role of visual representations as the most effective means to convey information to the human and prompt human cognition and reasoning. VA can support at least three of the core challenges in the context of M&DL: (1) building labeled data efficiently, thus in ways that minimize the time of human annotators, (2) tuning the methods to produce the most accurate classification results with the least amount of training data and processing time, and (3) helping end users understand both the process through which classifiers are constructed and applied and the end result of their applications (thus supporting "explainable" M&DL). There is now more than a decade of research in VA, an annual conference (one of the three making up IEEE Vis), and increasing research on basic and applied VA across many domains. Thus, a comprehensive review of even the subset of VA focused on classification tasks is beyond the scope of this paper; for some recent overview papers see BIB007 BIB008 BIB014 BIB009 BIB004 BIB003 . A VA agenda is provided in BIB002 , with a follow-up for geovisual analytics and related topics in . Here, we focus specifically on the role of VA interfaces in helping analysts understand M&DL, and then in Section 3.3 we review the recent efforts that are specifically focused on the integration of VA with AL methods. After surveying a range of projects that support VA contextually in the sensemaking loop, Endert et al. BIB005 argued for a shift from a 'human-in-the-loop' philosophy to a 'human is the loop' viewpoint. A similar argument about the central role of analysts can be found in BIB006 , where the authors emphasized that a human-centered understanding of ML can lead not only to more usable ML tools, but to new ways of framing learning computationally. Biewald explained why human-in-the-loop computing is the future of ML, and the related need for explainable M&DL is discussed in . In related research, Liu et al. BIB010 provided a comprehensive review of the use of VA, via interactive visualization, to understand, diagnose, and refine ML models. Additional calls for a VA-enabled human-in-the-loop approach to improve the accuracy of black-box M&DL models are discussed in BIB011 BIB012 . Beyond the arguments for the potential of VA to support ML, a few recent studies demonstrated empirically that VA-based interactive interfaces can help users understand DL architectures and thus improve the models' classification accuracy. Wongsuphasawat et al.
BIB015 (the Best Paper of VAST 2017; IEEE VAST is the leading international conference dedicated to advances in VA) demonstrated a successful example of employing VA to visualize the dataflow graphs of DL models in TensorFlow (a very popular M&DL library released as open source by Google in 2015). The approach used TensorBoard (a VA component for TensorFlow) to help TensorFlow developers understand the underlying behavior of DL models implemented in the system. In research not so closely tied to one particular toolkit, Alsallakh et al. BIB013 presented VA methods to help inspect CNNs and improve their design and accuracy for image classification. Their VA interface can reveal and analyze the hierarchy of similar classes in terms of internal features in CNNs. The authors found that this hierarchy not only influences the confusion patterns between the classes, it furthermore influences the learning behavior of CNNs. Specifically, the early layers in CNNs detect features that can separate high-level groups of classes, even after a few training epochs (in M&DL, an epoch is a complete pass through all the training examples; in other words, the classifier sees all the training examples once by the end of an epoch). By contrast, the latter layers require substantially more epochs to detect specialized features that can separate individual classes. Their methods can also identify various quality issues (e.g., overlapping class semantics, labeling issues, and imbalanced distributions) in the training data. In complementary work, Ming et al. developed a VA interface, RNNVis, for understanding and diagnosing RNNs for NLP tasks. Specifically, they designed and implemented an interactive co-clustering visualization of hidden state unit memories and word clouds, which allows domain users to explore, understand, and compare the internal behavior of different RNN models (i.e., regular RNN, LSTM, and GRU). In particular, the main VA interface of RNNVis contains a glyph-based sentence visualization, a memory-chips visualization for hidden state clusters, and a word-clouds visualization for word clusters, as well as a detail view, which shows the model's responses to selected words such as "when" and "where" and interpretations of selected hidden units. Their evaluation, comprising two case studies (focused on language modeling and sentiment analysis) and expert interviews, demonstrated the effectiveness of using their system to understand and compare different RNN models.
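To give a flavor of the dataflow-graph inspection described above, here is a minimal Keras sketch that logs a toy model's graph and training metrics for TensorBoard; the model architecture, log directory, and data are placeholder assumptions, not the setup used in the cited work.

import tensorflow as tf

# Toy model; any tf.keras model's dataflow graph can be inspected this way.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# The TensorBoard callback writes the graph, metrics, and weight histograms
# to log_dir; run `tensorboard --logdir logs` to explore them interactively.
tensorboard_cb = tf.keras.callbacks.TensorBoard(
    log_dir="logs/demo", histogram_freq=1, write_graph=True)

# model.fit(X_train, y_train, epochs=5, callbacks=[tensorboard_cb])
# (X_train/y_train are placeholders for your own data.)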
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL With VA <s> Active learning has been proven a reliable strategy to reduce manual efforts in training data labeling. Such strategies incorporate the user as oracle: the classifier selects the most appropriate example and the user provides the label. While this approach is tailored towards the classifier, more intelligent input from the user may be beneficial. For instance, given only one example at a time, users are hardly able to determine whether this example is an outlier or not. In this paper we propose user-based, visually-supported active learning strategies that allow the user to do both, selecting and labeling examples given a trained classifier. While labeling is straightforward, selection takes place using an interactive visualization of the classifier's a-posteriori output probabilities. By simulating different user selection strategies we show that user-based active learning outperforms uncertainty-based sampling methods and yields a more robust approach on different data sets. The obtained results point towards the potential of combining active learning strategies with results from the field of information visualization. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL With VA <s> This paper describes DUALIST, an active learning annotation paradigm which solicits and learns from labels on both features (e.g., words) and instances (e.g., documents). We present a novel semi-supervised training algorithm developed for this setting, which is (1) fast enough to support real-time interactive speeds, and (2) at least as accurate as preexisting methods for learning with mixed feature and instance labels. Human annotators in user studies were able to produce near-state-of-the-art classifiers, on several corpora in a variety of application domains, with only a few minutes of effort. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL With VA <s> Performing exhaustive searches over a large number of text documents can be tedious, since it is very hard to formulate search queries or define filter criteria that capture an analyst's information need adequately. Classification through machine learning has the potential to improve search and filter tasks encompassing either complex or very specific information needs, individually. Unfortunately, analysts who are knowledgeable in their field are typically not machine learning specialists. Most classification methods, however, require a certain expertise regarding their parametrization to achieve good results. Supervised machine learning algorithms, in contrast, rely on labeled data, which can be provided by analysts. However, the effort for labeling can be very high, which shifts the problem from composing complex queries or defining accurate filters to another laborious task, in addition to the need for judging the trained classifier's quality. We therefore compare three approaches for interactive classifier training in a user study. All of the approaches are potential candidates for the integration into a larger retrieval system. They incorporate active learning to various degrees in order to reduce the labeling effort as well as to increase effectiveness.
Two of them encompass interactive visualization for letting users explore the status of the classifier in the context of the labeled documents, as well as for judging the quality of the classifier in iterative feedback loops. We see our work as a step towards introducing user-controlled classification methods in addition to text search and filtering for increasing recall in analytics scenarios involving large corpora. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL With VA <s> Learning of classifiers to be used as filters within the analytical reasoning process leads to new and aggravates existing challenges. Such classifiers are typically trained ad-hoc, with tight time constraints that affect the amount and the quality of annotation data and, thus, also the users' trust in the classifier trained. We approach the challenges of ad-hoc training by inter-active learning, which extends active learning by integrating human experts' background knowledge to a greater extent. In contrast to active learning, not only does inter-active learning include the users' expertise by posing queries of data instances for labeling, but it also supports the users in comprehending the classifier model by visualization. Besides the annotation of manually or automatically selected data instances, users are empowered to directly adjust complex classifier models. Therefore, our model visualization facilitates the detection and correction of inconsistencies between the classifier model trained by examples and the user's mental model of the class definition. Visual feedback of the training process helps the users assess the performance of the classifier and, thus, build up trust in the filter created. We demonstrate the capabilities of inter-active learning in the domain of video visual analytics and compare its performance with the results of random sampling and uncertainty sampling of training sets. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL With VA <s> Intelligent systems that learn interactively from their end-users are quickly becoming widespread. Until recently, this progress has been fueled mostly by advances in machine learning; however, more and more researchers are realizing the importance of studying users of these systems. In this article we promote this approach and demonstrate how it can result in better user experiences and more effective learning systems. We present a number of case studies that characterize the impact of interactivity, demonstrate ways in which some existing systems fail to account for the user, and explore new ways for learning systems to interact with their users. We argue that the design process for interactive machine learning systems should involve users at all stages: explorations that reveal human interaction patterns and inspire novel interaction methods, as well as refinement stages to tune details of the interface and choose among alternatives. After giving a glimpse of the progress that has been made so far, we discuss the challenges that we face in moving the field forward. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL With VA <s> Assigning labels to data instances is a prerequisite for many machine learning tasks. Similarly, labeling is applied in visual-interactive analysis approaches. However, the strategies for creating labels often differ in the two fields.
In this paper, we study the process of labeling data instances with the user in the loop, from both the machine learning and visual-interactive perspective. Based on a review of differences and commonalities, we propose the 'Visual-Interactive Labeling' (VIAL) process, conflating the strengths of both. We describe the six major steps of the process and highlight their related challenges. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL With VA <s> Labeled datasets are always limited, and oftentimes the quantity of labeled data is a bottleneck for data analytics. This especially affects supervised machine learning methods, which require labels for models to learn from the labeled data. Active learning algorithms have been proposed to help achieve good analytic models with limited labeling efforts, by determining which additional instance labels will be most beneficial for learning for a given model. Active learning is consistent with interactive analytics as it proceeds in a cycle in which the unlabeled data is automatically explored. However, in active learning users have no control over the instances to be labeled, and for text data, the annotation interface is usually document-only. Both of these constraints seem to affect the performance of an active learning model. We hypothesize that visualization techniques, particularly interactive ones, will help to address these constraints. In this paper, we implement a pilot study of visualization in active learning for text classification, with an interactive labeling interface. We compare the results of three experiments. Early results indicate that visualization improves high-performance machine learning model building with an active learning algorithm. <s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL With VA <s> The automatic detection and classification of stance (e.g., certainty or agreement) in text data using natural language processing and machine-learning methods creates an opportunity to gain insight into the speakers' attitudes toward their own and other people's utterances. However, identifying stance in text presents many challenges related to training data collection and classifier training. To facilitate the entire process of training a stance classifier, we propose a visual analytics approach, called ALVA, for text data annotation and visualization. ALVA's interplay with the stance classifier follows an active learning strategy to select suitable candidate utterances for manual annotation. Our approach supports annotation process management and provides the annotators with a clean user interface for labeling utterances with multiple stance categories. ALVA also contains a visualization method to help analysts of the annotation and training process gain a better understanding of the categories used by the annotators. The visualization uses a novel visual representation, called CatCombos, which groups individual annotation items by the combination of stance categories. Additionally, our system makes a visualization of a vector space model available that is itself based on utterances. ALVA is already being used by our domain experts in linguistics and computational linguistics to improve the understanding of stance phenomena and to build a stance classifier for applications such as social media monitoring.
<s> BIB008 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL With VA <s> Labeling data instances is an important task in machine learning and visual analytics. Both fields provide a broad set of labeling strategies, whereby machine learning (and in particular active learning) follows a rather model-centered approach and visual analytics employs rather user-centered approaches (visual-interactive labeling). Both approaches have individual strengths and weaknesses. In this work, we conduct an experiment with three parts to assess and compare the performance of these different labeling strategies. In our study, we (1) identify different visual labeling strategies for user-centered labeling, (2) investigate strengths and weaknesses of labeling strategies for different labeling tasks and task complexities, and (3) shed light on the effect of using different visual encodings to guide the visual-interactive labeling process. We further compare labeling of single versus multiple instances at a time, and quantify the impact on efficiency. We systematically compare the performance of visual interactive labeling with that of active learning. Our main findings are that visual-interactive labeling can outperform active learning, given the condition that dimension reduction separates well the class distributions. Moreover, using dimension reduction in combination with additional visual encodings that expose the internal state of the learning model turns out to improve the performance of visual-interactive labeling. <s> BIB009 | AL alone has already been applied successfully to many applications (Section 3.1) where labeled data are limited. Here we review some work in AL empowered by VA. In the literature, the integration of interactive VA interfaces and AL methods is also known as interactive ML BIB005 . All of the reviewed work below strongly indicates that VA can play a powerful role in AL. A number of case studies were investigated by Amershi et al. BIB005 to demonstrate how interactivity results in a tight coupling between learning systems and users. The authors report three key results: (1) although AL results in faster convergence, users often get frustrated by having to answer the machine learner's long stream of questions and not having control over the interaction, (2) users naturally want to do more than just label data, and (3) the transparency of ML models can help people provide more effective labels to build a better classifier. Several additional strong arguments about the power of combining VA with AL to leverage the relative advantages of (experienced) human expertise and computational power can be found in the literature BIB009 BIB006 . In one of the more detailed accounts, Holzinger emphasized that in the health (informatics) domain, a small number of data sets or rare events is not uncommon, and so ML-based approaches suffer from insufficient training samples. Holzinger also presents an argument for a human-in-the-loop approach with domain experts by integrating AL with VA, proposing that this integration can be beneficial in solving computationally hard health data problems (e.g., subspace clustering and protein folding), where human expertise can help to reduce an exponential search space through heuristic selection of samples.
The ultimate goal of a human-in-the-loop methodology is to design and develop M&DL algorithms that can automatically learn from data and thus improve with experience over time, eventually without any human in the loop (other than to understand and act upon the results). Most existing AL research is focused on the mechanisms and benefits of selecting meaningful instances for labeling from the machine learner's perspective. A drawback of this typical AL query strategy is that users cannot control which instances are selected to be labeled BIB009 BIB007 ; this may affect the performance of an AL model BIB007 . Seifert and Granitzer BIB001 proposed user-based, visually-supported AL strategies that allow the user to select and label examples posed by a machine learner. Their experiments showed that restricting human input to labeling only the instances that the system picks is suboptimal. Giving users a more active role, in terms of visual selection of examples and adaptation of their labeling strategies on top of tailored visualization techniques, can increase labeling efficiency. In their experiments, the basis for the user's decision is a visualization of the a-posteriori probabilities of the unlabeled samples (a minimal sketch of such a display is given below). Bernard et al. BIB006 investigated the process of labeling data instances with users in the loop, from both ML (in particular, AL) and VA perspectives. Based on a review of the similarities and differences between AL and VA, they proposed a unified process called visual-interactive labeling (VIL), through which they aim to combine the strengths of VA and AL (first initiatives for the integration of AL and VIL can be found in BIB001 BIB003 BIB004 BIB002 ). In follow-on research, Bernard et al. BIB009 performed an experimental study to compare VIL and AL labeling strategies (used independently). In that project, they developed an evaluation toolkit that integrates 16 different established AL strategies, five classifiers, and four visualization techniques. Using their toolkit, Bernard et al. conducted an empirical study with 16 expert participants. Their investigation shows that VIL achieves performance similar to AL. One suggestion based on Bernard et al.'s BIB009 experimental findings was to incorporate (visual) analytical guidance into the labeling process in AL. Their investigation represents an important step towards a unified labeling process that combines the individual strengths of VA and AL strategies. We share the same vision as Bernard et al. BIB009 BIB006 ; while they call it VIL, we think that VA-enabled AL is a more intuitive term for the integration of the power of AL and VA, because VIL "hides" the essential role of AL. Recent developments in ML and VA signal that the two fields are getting closer BIB009 ; for example, Sacha et al. proposed a conceptual framework that models human interactions with ML components in the VA process, and makes the interplay between automated algorithms and interactive visualizations more concrete. At the core of Sacha et al.'s conceptual framework lies the idea that the underlying ML models and hyper-parameters, which cannot be optimized automatically, can be steered via iterative and accessible user interactions. Interactive visualizations serve as an aid or "lens" that not only facilitates the process of interpretation and validation, but also makes the interactions with ML models accessible to domain users.
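A minimal sketch of the kind of display discussed above: unlabeled samples are projected to 2-D and colored by the classifier's a-posteriori uncertainty, so a user can visually choose instances to label; the PCA projection and entropy coloring are our illustrative choices, not Seifert and Granitzer's implementation.

import matplotlib.pyplot as plt
from scipy.stats import entropy
from sklearn.decomposition import PCA

def plot_uncertainty_map(clf, X_unlabeled):
    """Project unlabeled samples to 2-D and color them by predictive entropy,
    as a basis for visual-interactive selection of instances to label."""
    proba = clf.predict_proba(X_unlabeled)
    unc = entropy(proba.T)  # per-sample entropy of the posterior distribution
    xy = PCA(n_components=2).fit_transform(X_unlabeled)
    sc = plt.scatter(xy[:, 0], xy[:, 1], c=unc, cmap="viridis", s=15)
    plt.colorbar(sc, label="predictive entropy (a-posteriori uncertainty)")
    plt.title("Visual labeling aid: pick samples from uncertain regions")
    plt.show()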
AL and VA alone are not new, but interactive annotation tools empowered by M&DL classifiers for (geo) text and image data are not well developed, nor has the role of visualization in AL for text- and image-related tasks been well explored. Höferlin et al. BIB004 extended AL by integrating human experts' domain knowledge via an interactive VA interface for ad-hoc classifiers applied to video classification. Their classifier visualization facilitates the detection and correction of inconsistencies between the classifier trained by examples and the user's mental model of the class definition. Visual feedback on the training process helps the users evaluate the performance of the classifier and, thus, build up trust in the trained classifier. The main contributions of their approach are quality assessment and model understanding through explorative visualization, and the integration of experts' background knowledge through data annotation and model manipulation (modifying a model based on users' expertise can boost the learner, especially in early training epochs, by including fresh domain knowledge). They demonstrated the power of AL combined with VA in the domain of video VA by comparing its performance with the results of random sampling and uncertainty sampling of the training sets. Huang and colleagues' BIB007 experiments and early results showed that AL with VA, using an interactive and iterative labeling interface, improves model performance compared to AL alone for text classification; their AL-with-visualization method addresses a binary (i.e., positive and negative) classification problem (Appendix A.4.1). Heimerl et al. BIB003 incorporated AL to various degrees with VA for text document retrieval, to reduce the labeling effort and to increase effectiveness. Specifically, their VA interface for visual classifier training has a main view (showing the classifier's state with projected documents), a cluster view (showing the documents with the most uncertain classification), a content view (showing the selected documents), a manual view used during evaluation, a classifier history for undo/redo navigation, a labeled document view listing the labeled documents, and, most importantly, the labeling controls with a preview of the estimated impact of the newly labeled documents on the classifier (a simple approximation of such a preview is sketched below). In more recent work, Kucher et al. BIB008 presented ALVA, a VA approach for text data annotation and visualization in stance classification, whose interplay with the classifier follows an AL strategy to select suitable candidate utterances for manual annotation.
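The impact preview mentioned above can be approximated by retraining on the tentative labels and reporting the change on a held-out validation set, as in this simplified sketch; it is our illustrative approximation, not Heimerl et al.'s implementation.

import numpy as np
from sklearn.base import clone
from sklearn.metrics import accuracy_score

def preview_label_impact(clf, X_lab, y_lab, X_new, y_new, X_val, y_val):
    """Estimate how a batch of tentative labels would change validation
    accuracy before committing them (a simple 'impact preview')."""
    before = accuracy_score(y_val, clone(clf).fit(X_lab, y_lab).predict(X_val))
    X_aug = np.vstack([X_lab, X_new])
    y_aug = np.concatenate([y_lab, y_new])
    after = accuracy_score(y_val, clone(clf).fit(X_aug, y_aug).predict(X_val))
    return after - before  # positive: the tentative labels appear to help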
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> We introduce a challenging set of 256 object categories containing a total of 30607 images. The original Caltech-101 [1] was collected by choosing a set of object categories, downloading examples from Google Images and then manually screening out all images that did not fit the category. Caltech-256 is collected in a similar manner with several improvements: a) the number of categories is more than doubled, b) the minimum number of images in any category is increased from 31 to 80, c) artifacts due to image rotation are avoided and d) a new and larger clutter category is introduced for testing background rejection. We suggest several testing paradigms to measure classification performance, then benchmark the dataset using two simple metrics as well as a state-of-the-art spatial pyramid matching [2] algorithm. Finally we use the clutter category to train an interest detector which rejects uninformative background regions. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> In the natural language processing community, sentiment classification based on insufficient labeled data is a well-known challenging problem. In this paper, a novel semi-supervised learning algorithm called active deep network (ADN) is proposed to address this problem. First, we propose the semi-supervised learning framework of ADN. ADN is constructed by restricted Boltzmann machines (RBM) with unsupervised learning based on labeled reviews and an abundance of unlabeled reviews. Then the constructed structure is fine-tuned by gradient-descent-based supervised learning with an exponential loss function. Second, in the semi-supervised learning framework, we apply active learning to identify reviews that should be labeled as training data, then use the selected labeled reviews and all unlabeled reviews to train the ADN architecture. Moreover, we combine the information density with ADN, and propose the information ADN (IADN) method, which can apply the information density of all unlabeled reviews in choosing the manually labeled reviews. Experiments on five sentiment classification datasets show that ADN and IADN outperform classical semi-supervised learning algorithms and deep learning techniques applied to sentiment classification. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> Neural machine translation, a recently proposed approach to machine translation based purely on neural networks, has shown promising results compared to the existing approaches such as phrase-based statistical machine translation. Despite its recent success, neural machine translation has its limitation in handling a larger vocabulary, as training complexity as well as decoding complexity increase proportionally to the number of target words. In this paper, we propose a method based on importance sampling that allows us to use a very large target vocabulary without increasing training complexity. We show that decoding can be efficiently done even with the model having a very large target vocabulary by selecting only a small subset of the whole target vocabulary. The models trained by the proposed approach are empirically found to outperform the baseline models with a small vocabulary as well as the LSTM-based neural machine translation models.
Furthermore, when we use the ensemble of a few models with very large target vocabularies, we achieve the state-of-the-art translation performance (measured by BLEU) on the English→German translation and almost as high performance as the state-of-the-art English→French translation system. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> Deep learning has been shown to achieve outstanding performance in a number of challenging real-world applications. However, most of the existing works assume a fixed set of labeled data, which is not necessarily true in real-world applications. Getting labeled data is usually expensive and time consuming. Active labelling in deep learning aims at achieving the best learning result with a limited labeled data set, i.e., choosing the most appropriate unlabeled data to get labeled. This paper presents a new active labeling method, AL-DL, for cost-effective selection of data to be labeled. AL-DL uses one of three metrics for data selection: least confidence, margin sampling, and entropy. The method is applied to deep learning networks based on stacked restricted Boltzmann machines, as well as stacked autoencoders. In experiments on the MNIST benchmark dataset, the method outperforms random labeling consistently by a significant margin. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> Recently, promising results have been shown in face recognition research. However, face recognition and retrieval across age is still challenging. Unlike prior methods using complex models with strong parametric assumptions to model the aging process, we use a data-driven method to address this problem. We propose a novel coding framework called Cross-Age Reference Coding (CARC). By leveraging a large-scale image dataset freely available on the Internet as a reference set, CARC is able to encode the low-level feature of a face image with an age-invariant reference space. In the testing phase, the proposed method only requires a linear projection to encode the feature and therefore it is highly scalable. To thoroughly evaluate our work, we introduce a new large-scale dataset for face recognition and retrieval across age called Cross-Age Celebrity Dataset (CACD). The dataset contains more than 160,000 images of 2,000 celebrities with age ranging from 16 to 62. To the best of our knowledge, it is by far the largest publicly available cross-age face dataset. Experimental results show that the proposed method can achieve state-of-the-art performance on both our dataset as well as the other widely used dataset for face recognition across age, MORPH dataset. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere.
In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> Neural machine translation (NMT) systems have recently achieved results comparable to the state of the art on a few translation tasks, including English→French and English→German. The main purpose of the Montreal Institute for Learning Algorithms (MILA) submission to WMT’15 is to evaluate this new approach on a greater variety of language pairs. Furthermore, the human evaluation campaign may help us and the research community to better understand the behaviour of our systems. We use the RNNsearch architecture, which adds an attention mechanism to the encoder-decoder. We also leverage some of the recent developments in NMT, including the use of large vocabularies, unknown word replacement and, to a limited degree, the inclusion of monolingual language models. <s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3%. We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes. <s> BIB008 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> Improvements in hardware, the availability of massive amounts of data, and algorithmic upgrades are among the factors supporting better machine translation. <s> BIB009 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> Even though active learning forms an important pillar of machine learning, deep learning tools are not prevalent within it. Deep learning poses several difficulties when used in an active learning setting. First, active learning (AL) methods generally rely on being able to learn and update models from small amounts of data. Recent advances in deep learning, on the other hand, are notorious for their dependence on large amounts of data. Second, many AL acquisition functions rely on model uncertainty, yet deep learning methods rarely represent such model uncertainty. In this paper we combine recent advances in Bayesian deep learning into the active learning framework in a practical way.
We develop an active learning framework for high dimensional data, a task which has been extremely challenging so far, with very sparse existing literature. Taking advantage of specialised models such as Bayesian convolutional neural networks, we demonstrate our active learning techniques with image data, obtaining a significant improvement on existing active learning approaches. We demonstrate this on both the MNIST dataset, as well as for skin cancer diagnosis from lesion images (ISIC2016 task). <s> BIB010 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> Recent successes in learning-based image classification, however, heavily rely on large numbers of annotated training samples, which may require considerable human effort. In this paper, we propose a novel active learning (AL) framework, which is capable of building a competitive classifier with optimal feature representation via a limited amount of labeled training instances in an incremental learning manner. Our approach advances the existing AL methods in two aspects. First, we incorporate deep convolutional neural networks into AL. Through the properly designed framework, the feature representation and the classifier can be simultaneously updated with progressively annotated informative samples. Second, we present a cost-effective sample selection strategy to improve the classification performance with less manual annotation. Unlike traditional methods focusing on only the uncertain samples of low prediction confidence, we especially discover the large amount of high-confidence samples from the unlabeled set for feature learning. Specifically, these high-confidence samples are automatically selected and iteratively assigned pseudo-labels. We thus call our framework cost-effective AL (CEAL), standing for the two advantages. Extensive experiments demonstrate that the proposed CEAL framework can achieve promising results on two challenging image classification data sets, i.e., face recognition on the Cross-Age Celebrity Dataset (CACD) and object categorization on Caltech-256. <s> BIB011 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> This paper is on active learning where the goal is to reduce the data annotation burden by interacting with a (human) oracle during training. Standard active learning methods ask the oracle to annotate data samples. Instead, we take a profoundly different approach: we ask for annotations of the decision boundary. We achieve this using a deep generative model to create novel instances along a 1d line. A point on the decision boundary is revealed where the instances change class. Experimentally we show on three data sets that our method can be plugged into other active learning schemes, that human oracles can effectively annotate points on the decision boundary, that our method is robust to annotation noise, and that decision boundary annotations improve over annotating data samples. <s> BIB012 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> We propose a new active learning (AL) method for text classification with convolutional neural networks (CNNs). In AL, one selects the instances to be manually labeled with the aim of maximizing model performance with minimal effort.
Neural models capitalize on word embeddings as representations (features), tuning these to the task at hand. We argue that AL strategies for multi-layered neural models should focus on selecting instances that most affect the embedding space (i.e., induce discriminative word representations). This is in contrast to traditional AL approaches (e.g., entropy-based uncertainty sampling), which specify higher level objectives. We propose a simple approach for sentence classification that selects instances containing words whose embeddings are likely to be updated with the greatest magnitude, thereby rapidly learning discriminative, task-specific embeddings. We extend this approach to document classification by jointly considering: (1) the expected changes to the constituent word representations; and (2) the model's current overall uncertainty regarding the instance. The relative emphasis placed on these criteria is governed by a stochastic process that favors selecting instances likely to improve representations at the outset of learning, and then shifts toward general uncertainty sampling as AL progresses. Empirical results show that our method outperforms baseline AL approaches on both sentence and document classification tasks. We also show that, as expected, the method quickly learns discriminative word embeddings. To the best of our knowledge, this is the first work on AL addressing neural models for text classification. <s> BIB013 | As discussed further in Appendix A.1, DL can discover intricate patterns hidden in big data. Advances in DL have been dramatic and rapid, and the landscape of M&DL is changing quickly as a result. For example, Jean and colleagues BIB003 BIB007 in 2015 demonstrated for the first time that DL could beat Google's existing phrase-based statistical process for language translation and by November 2016, after Google switched to that approach, evidence showed that their new system was already on par with human translation BIB009 . We have seen above many successful use cases for AL (Section 3.1) and AL integrated with VA (Section 3.3). Now we review some recent work in AL combined with DL-active deep learning (ADL). It is also called deep active learning (e.g., see ), but active deep learning is a much more commonly used term in the literature. The main process of ADL is very similar to AL. The main difference is that the machine learner in regular AL is a traditional ML algorithm (e.g., SVM), whereas in ADL, the learner is a DL one, such as CNN. As emphasized in Appendix A.1, DL has better scalability for Big Data problems than traditional ML . This motivates ADL because it combines the power of DL and AL-better scalability than ML and less labeled data than regular DL for training a good machine learner. AL has been investigated with some DL architectures for image classification and text classification (including sentiment analysis). Wang and Shang BIB004 applied AL methods in DL networks for image classification. The (DL) classifiers they used are stacked restricted Boltzmann machines (stacked RBMs) and stacked auto-encoders, with three commonly used uncertainty sampling based query strategies (i.e., least confidence, margin sampling, and entropy, see Section 3.1.5). Their experiments were run on the well-known MNIST benchmark data set (one of the classic data sets for benchmarking ML algorithms). 
The authors conclude that their ADL method outperforms random sampling consistently by a significant margin, regardless of the selection of uncertainty-based strategy and classifier. Gal et al. BIB010 also developed an AL framework that integrates DL for image classification, where the classifiers they used are Bayesian CNNs. Their results showed a significant improvement on existing AL approaches. Another successful integration example of deep CNNs and AL for image classification can be found in BIB011 -the authors proposed an ADL framework called Cost-Effective Active Learning (CEAL), where the classifier can be simultaneously updated with progressively annotated informative samples. Unlike most traditional AL methods focusing on uncertain samples of low prediction confidence, their strategy selects two complementary kinds of samples to incrementally improve the classifier training and feature learning: (1) the minority informative kind contributes to training more powerful classifiers, and (2) the majority high-confidence kind contributes to learning more discriminative feature representations. Although the number of samples that belong to the first type is small (e.g., an image with a soccer ball and a dog is much rarer than images that contain only a soccer ball), the most uncertain unlabeled samples usually have great potential impact on the classifiers. Selecting and annotating them as part of the training set can contribute to a better decision boundary for the classifiers. Their framework progressively selects the minority samples among the most informative samples, and automatically pseudo-labels (i.e., picks the class with the maximum predicted probability and uses it as if it were the true label) the majority high-confidence samples from the unlabeled set for feature learning and model updating. The labeled minority samples benefit the decision boundary of the classifier, and the majority pseudo-labeled samples provide sufficient training data for robust feature learning. Their experimental results, on two challenging public benchmark data sets (face recognition on the CACD database BIB005 and object categorization on Caltech-256 BIB001 ), demonstrated the effectiveness of their CEAL framework. Most AL methods in the literature (Section 3.1) ask annotators to annotate data samples. By contrast, Huijser and van Gemert BIB012 provide a recent example of combining AL with DL in which they took a completely different approach-it asks annotators to annotate the decision boundary. At this point, their method focuses on binary classification (Appendix A.4.1) and a linear classification model (i.e., SVM). Additionally, the method uses a deep generative model to synthesize samples from a small number of labeled samples, which will not work for text-related tasks (because deep generative models are designed for continuous data like images BIB006 BIB008 , rather than the discrete data of words and phrases that must be dealt with in NLP problems ). After reviewing some ADL methods for image classification, we now introduce recent ADL work for text classification problems. Zhou et al. BIB002 integrated AL with DL for semi-supervised sentiment classification using RBMs. Their experiments on five sentiment classification data sets showed that their ADL methods outperform classic semi-supervised learning algorithms and DL architectures applied for sentiment classification. Zhang and Wallace BIB013 proposed an ADL method for text classification, where the classifier is a CNN.
In contrast to traditional AL approaches (e.g., uncertainty sampling), the most novel contribution is that their method is designed to quickly induce discriminative word embeddings (Appendix A.6), and thus improve text classification. Taking sentiment classification as an example, selecting examples in this way quickly pushes the embeddings of "bad" and "good" apart. Their empirical results (with three sentiment data sets, two labeled as positive/negative and one as subjective/objective) show that the method outperforms baseline AL approaches. However, their method is for binary classification (Appendix A.4.1); other types of classification tasks (Appendixes A.4.2-A.4.4) are not touched upon. Research on combining AL with RNNs for short-text classification is rare. To address the gap, Zhou demonstrated using AL with RNNs as classifiers for (Chinese) short-text classification. The proposed ADL algorithm dramatically decreases the number of labeled samples without significantly influencing the classification accuracy of the original RNN classifier, which was trained on the whole data set. In some cases, the proposed ADL algorithm even achieves better classification accuracy with less training data than the original RNN classifier.
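To make the uncertainty-based query strategies used in the ADL studies above concrete, the following minimal sketch (in Python/NumPy) implements the three measures used by Wang and Shang BIB004 (least confidence, margin sampling, and entropy; Section 3.1.5), together with a CEAL-style high-confidence pseudo-labeling step BIB011 . It is an illustration only, not any author's released implementation: it assumes the deep classifier exposes class probabilities as a NumPy array, and all function names are ours.

    import numpy as np

    def least_confidence(probs):
        # Higher score = lower confidence in the top class = more informative.
        return 1.0 - probs.max(axis=1)

    def margin(probs):
        # A small gap between the two most probable classes signals high
        # uncertainty; the gap is negated so larger scores rank higher.
        s = np.sort(probs, axis=1)
        return -(s[:, -1] - s[:, -2])

    def entropy(probs):
        # Shannon entropy of the predicted class distribution.
        return -np.sum(probs * np.log(probs + 1e-12), axis=1)

    def query(probs, k, strategy=entropy):
        # Indices of the k most informative unlabeled samples for the oracle.
        return np.argsort(strategy(probs))[-k:]

    def pseudo_label(probs, tau=0.95):
        # CEAL-style step: for very confident samples, use the predicted
        # class as if it were the true label, enlarging the training set.
        confident = probs.max(axis=1) >= tau
        return np.flatnonzero(confident), probs.argmax(axis=1)[confident]

    # One ADL iteration (sketch): obtain probs from the deep model on the
    # unlabeled pool, have annotators label query(probs, k), add the
    # pseudo-labeled samples, and fine-tune the model on the enlarged set.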
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> GIScience Applications Using AL/AL with VA <s> The increasing availability and use of positioning devices has resulted in large volumes of trajectory data. However, semantic annotations for such data are typically added by domain experts, which is a time-consuming task. Machine-learning algorithms can help infer semantic annotations from trajectory data by learning from sets of labeled data. Specifically, active learning approaches can minimize the set of trajectories to be annotated while preserving good performance measures. The ANALYTiC web-based interactive tool visually guides users through this annotation process. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> GIScience Applications Using AL/AL with VA <s> Classification of evolving data streams is a challenging task, which is suitably tackled with online learning approaches. Data is processed instantly, requiring the learning machinery to (self-)adapt by adjusting its model. However, for high-velocity streams, it is usually difficult to obtain labeled samples to train the classification model. Hence, we propose a novel online batch-based active learning algorithm (OBAL) to perform the labeling. OBAL is developed for crisis management applications where data streams are generated by the social media community. OBAL is applied to discriminate relevant from irrelevant social media items. An emergency management user will be interactively queried to label chosen items. OBAL exploits the boundary items for which it is highly uncertain about their class and makes use of two classifiers: k-Nearest Neighbors (kNN) and Support Vector Machine (SVM). OBAL is equipped with a labeling budget and a set of uncertainty strategies to identify the items for labeling. An extensive analysis is carried out to show OBAL’s performance, the sensitivity of its parameters, and the contribution of the individual uncertainty strategies. Two types of datasets are used: synthetic and social media datasets related to crises. The empirical results illustrate that OBAL has a very good discrimination power. <s> BIB002 | Júnior et al.'s very recent (2017) work BIB001 on GPS trajectory classification provides solid evidence that AL can be used together with VA to help domain experts perform semantic labeling of movement data. In this work, they pose three research questions: (1) Is there an ML method that supports building a good classifier for automatic trajectory classification but with a reduced number of required human-labeled trajectories? (2) Is the AL method effective for trajectory data? and (3) How can we help the user in labeling trajectories? To answer these research questions, Júnior et al. developed a web-based interactive tool named ANALYTiC to visually assist domain experts in performing GPS trajectory classification using AL and a simple VA interface, where users can pick one of six (traditional ML) classifiers (AdaBoost, decision tree, Gaussian naive Bayes, k-nearest neighbors (KNN), logistic regression, and random forest) and one of three query strategies (uncertainty sampling, QBC, and random sampling) to start trajectory labeling. Their interactive tool supports only binary classification (Appendix A.4.1). Júnior et al. also conducted a series of empirical evaluation experiments with three trajectory data sets (animals, fishing vessels, and GeoLife).
Their results showed how the AL strategies chose the best subset to annotate and performed significantly better than random sampling (the baseline strategy). Their examples also demonstrated how the ANALYTiC web-based visual interface can support the domain expert in the AL process, and specifically in trajectory annotation, using a set of visual solutions that ease the labeling inference task. They concluded that ML algorithms can infer semantic annotations defined by domain users (e.g., fishing, non-fishing) from trajectories, by learning from sets of manually labeled data. Specifically, AL approaches can reduce the set of trajectories to be labeled while preserving good performance measures. Their ANALYTiC web-based interactive tool visually guides domain experts through this annotation process. Another recent AL study closely related to GIScience problems can be found in BIB002 , where Pohl et al. applied AL methods to social media data (i.e., tweets) for crisis management. Two ML classifiers (i.e., kNN and SVM) are used in their AL application, with several uncertainty strategies for binary classification (Appendix A.4.1), to distinguish between relevant and irrelevant information contained in a data stream. The authors used stream-based (Section 3.1.2) batch-mode AL (Section 3.1.4). Two types of data sets are used in their experiments: synthetic and social media data sets related to crises. Their empirical results illustrate that batch-mode AL is able to distinguish, with good performance, between relevant and irrelevant information in tweets for crisis management. Overall, the application of AL with ML (or DL) to non-RS GIScience problems is just beginning. Given the rapid advances in M&DL and AL, we anticipate this situation to change quickly, with additional applications to mobility data, geospatial text analysis, and a range of location-based service applications. An objective of this review, of course, is to enable such development.
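As a complement to the uncertainty measures sketched above, the following is a minimal sketch of the query-by-committee (QBC) strategy offered by ANALYTiC BIB001 , using vote entropy as the disagreement measure (Section 3.1.5). It assumes scikit-learn-style classifiers with a predict method and integer class labels 0..n_classes-1; the function name and committee composition are ours for illustration.

    import numpy as np

    def vote_entropy(committee, X_pool, n_classes):
        # Each committee member votes for a class on every unlabeled sample;
        # disagreement is the entropy of the resulting vote distribution.
        votes = np.stack([clf.predict(X_pool) for clf in committee], axis=1)
        scores = np.zeros(len(X_pool))
        for c in range(n_classes):
            v = (votes == c).mean(axis=1)    # fraction of votes for class c
            scores -= v * np.log(v + 1e-12)  # accumulate -p log p
        return scores                        # higher = more disagreement

    # Usage sketch: fit, e.g., a decision tree, kNN, and logistic regression
    # on the currently labeled trajectories, then send the trajectories with
    # the highest vote entropy to the domain expert for annotation.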
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> The problem of scarcity of labeled pixels, required for segmentation of remotely sensed satellite images in a supervised pixel classification framework, is addressed in this article. A support vector machine (SVM) is considered for classifying the pixels into different landcover types. It is initially designed using a small set of labeled points, and subsequently refined by actively querying for the labels of pixels from a pool of unlabeled data. The label of the most interesting/ambiguous unlabeled point is queried at each step. Here, active learning is exploited to minimize the number of labeled data used by the SVM classifier by several orders of magnitude. These features are demonstrated on an IRS-1A four-band multi-spectral image. Comparison with related methods is made in terms of the number of data points used, computational time, and a cluster quality measure. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> As the resolution of remote-sensing imagery increases, the full complexity of the scenes becomes increasingly difficult to approach. User-defined classes in large image databases are often composed of several groups of images and span very different scales in the space of low-level visual descriptors. The interactive retrieval of such image classes is then very difficult. To address this challenge, we evaluate here, in the context of satellite image retrieval, two general improvements for relevance feedback using support vector machines (SVMs). First, to optimize the transfer of information between the user and the system, we focus on the criterion employed by the system for selecting the images presented to the user at every feedback round. We put forward an active-learning selection criterion that minimizes redundancy between the candidate images shown to the user. Second, for image classes spanning very different scales in the low-level description space, we find that a high sensitivity of the SVM to the scale of the data brings about a low retrieval performance.
We argue that the insensitivity to scale is desirable in this context, and we show how to obtain it by the use of specific kernel functions. Experimental evaluation of both ranking and classification performance on a ground-truth database of satellite images confirms the effectiveness of our approach. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> Obtaining training data for land cover classification using remotely sensed data is time consuming and expensive, especially for relatively inaccessible locations. Therefore, designing classifiers that use as few labeled data points as possible is highly desirable. Existing approaches typically make use of small-sample techniques and semisupervision to deal with the lack of labeled data. In this paper, we propose an active learning technique that efficiently updates existing classifiers by using fewer labeled data points than semisupervised methods. Further, unlike semisupervised methods, our proposed technique is well suited for learning or adapting classifiers when there is substantial change in the spectral signatures between labeled and unlabeled data. Thus, our active learning approach is also useful for classifying a series of spatially/temporally related images, wherein the spectral signatures vary across the images. Our interleaved semisupervised active learning method was tested on both single and spatially/temporally related hyperspectral data sets. We present empirical results that establish the superior performance of our proposed approach versus other active learning and semisupervised methods. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> In this paper, we propose two active learning algorithms for semiautomatic definition of training samples in remote sensing image classification. Based on predefined heuristics, the classifier ranks the unlabeled pixels and automatically chooses those that are considered the most valuable for its improvement. Once the pixels have been selected, the analyst labels them manually and the process is iterated. Starting with a small and nonoptimal training set, the model itself builds the optimal set of samples which minimizes the classification error. We have applied the proposed algorithms to a variety of remote sensing data, including very high resolution and hyperspectral images, using support vector machines. Experimental results confirm the consistency of the methods. The required number of training samples can be reduced to 10% using the methods proposed, reaching the same level of accuracy as larger data sets. A comparison with a state-of-the-art active learning method, margin sampling, is provided, highlighting advantages of the methods proposed. The effect of spatial resolution and separability of the classes on the quality of the selection of pixels is also discussed. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> This paper investigates different batch-mode active-learning (AL) techniques for the classification of remote sensing (RS) images with support vector machines. This is done by generalizing, to multiclass problems, techniques defined for binary classifiers.
The uncertainty criterion is associated with the confidence of the supervised algorithm in correctly classifying the considered sample, while the diversity criterion aims at selecting a set of unlabeled samples that are as diverse (i.e., distant from one another) as possible, thus reducing the redundancy among the selected samples. The combination of the two criteria results in the selection of the potentially most informative set of samples at each iteration of the AL process. Moreover, we propose a novel query function that is based on a kernel-clustering technique for assessing the diversity of samples and a new strategy for selecting the most informative representative sample from each cluster. The investigated and proposed techniques are theoretically and experimentally compared with state-of-the-art methods adopted for RS applications. This is accomplished by considering very high resolution multispectral and hyperspectral images. By this comparison, we observed that the proposed method resulted in better accuracy with respect to other investigated and state-of-the-art methods on both the considered data sets. Furthermore, we derived some guidelines on the design of AL systems for the classification of different types of RS images. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> In this paper, we propose a simple, fast, and reliable active-learning technique for solving remote sensing image classification problems with support vector machine (SVM) classifiers. The main property of the proposed technique consists in its robustness to biased (poor) initial training sets. The presented method considers the 1-D output space of the classifier to identify the most uncertain samples whose labeling and inclusion in the training set involve a high probability of improving the classification results. A simple histogram-thresholding algorithm is used to find out the low-density (i.e., under the cluster assumption, the most uncertain) region in the 1-D SVM output space. To assess the effectiveness of the proposed method, we compared it with other active-learning techniques proposed in the remote sensing literature using multispectral and hyperspectral data. Experimental results confirmed that the proposed technique provided the best tradeoff among robustness to biased (poor) initial training samples, computational complexity, classification accuracy, and the number of new labeled samples necessary to reach convergence. <s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership and then the user is asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee, large margin, and posterior probability-based.
For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a good architecture are provided for new and/or inexperienced users. <s> BIB008 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> In this letter, we present a novel batch-mode active learning technique for solving multiclass classification problems by using the support vector machine classifier with the one-against-all architecture. The uncertainty of each unlabeled sample is measured by defining a criterion which not only considers the smallest distance to the decision hyperplanes but also takes into account the distances to other hyperplanes if the sample is within the margin of their decision boundaries. To select a batch of the most uncertain samples from all over the decision region, the uncertain regions of the classifiers are partitioned into multiple parts depending on the number of geometrical margins of binary classifiers passing on them. Then, a balanced number of the most uncertain samples are selected from each part. To minimize the redundancy and keep the diversity among these samples, the kernel k-means clustering algorithm is applied to the set of uncertain samples, and the representative sample (medoid) from each cluster is selected for labeling. The effectiveness of the proposed method is evaluated by comparing it with other batch-mode active learning techniques existing in the literature. Experimental results on two different remote sensing data sets confirmed the effectiveness of the proposed technique. <s> BIB009 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. <s> BIB010 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> Active learning (AL) algorithms have been proven useful in reducing the number of required training samples for remote sensing applications; however, most methods query samples pointwise without considering spatial constraints on their distribution. This may often lead to a spatially dispersed distribution of training points unfavorable for visual image interpretation or field surveys.
The aim of this study is to develop region-based AL heuristics to guide user attention toward a limited number of compact spatial batches rather than distributed points. The proposed query functions are based on a tree ensemble classifier and combine criteria of sample uncertainty and diversity to select regions of interest. Class imbalance, which is inherent to many remote sensing applications, is addressed through stratified bootstrap sampling. Empirical tests of the proposed methods are performed with multitemporal and multisensor satellite images capturing, in particular, sites recently affected by large-scale landslide events. The assessment includes an experimental evaluation of the labeling time required by the user and the computational runtime, and a sensitivity analysis of the main algorithm parameters. Region-based heuristics that consider sample uncertainty and diversity are found to outperform pointwise sampling and region-based methods that consider only uncertainty. Reference landslide inventories from five different experts enable a detailed assessment of the spatial distribution of remaining errors and the uncertainty of the reference data. <s> BIB011 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> With the popular use of high-resolution satellite images, more and more research efforts have been placed on remote sensing scene classification/recognition. In scene classification, effective feature selection can significantly boost the final performance. In this letter, a novel deep-learning-based feature-selection method is proposed, which formulates the feature-selection problem as a feature reconstruction problem. Note that the popular deep-learning technique, i.e., the deep belief network (DBN), achieves feature abstraction by minimizing the reconstruction error over the whole feature set, and features with smaller reconstruction errors would hold more feature intrinsics for image representation. Therefore, the proposed method selects features that are more reconstructible as the discriminative features. Specifically, an iterative algorithm is developed to adapt the DBN to produce the inquired reconstruction weights. In the experiments, 2800 remote sensing scene images of seven categories are collected for performance evaluation. Experimental results demonstrate the effectiveness of the proposed method. <s> BIB012 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven as an extremely powerful tool in many fields. Shall we embrace deep learning as the key to all? Or, should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate remote sensing scientists to bring their expertise into deep learning, and use it as an implicit general model to tackle unprecedented large-scale influential challenges, such as climate change and urbanization. 
<s> BIB013 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> Active deep learning classification of hyperspectral images is considered in this paper. Deep learning has achieved success in many applications, but good-quality labeled samples are needed to construct a deep learning network. It is expensive to get good labeled samples in hyperspectral images for remote sensing applications. An active learning algorithm based on weighted incremental dictionary learning is proposed for such applications. The proposed algorithm selects training samples that maximize two selection criteria, namely representativeness and uncertainty. This algorithm trains a deep network efficiently by actively selecting training samples at each iteration. The proposed algorithm is applied for the classification of hyperspectral images, and compared with other classification algorithms employing active learning. It is shown that the proposed algorithm is efficient and effective in classifying hyperspectral images. <s> BIB014 | DL has achieved success in many applications; however, a large set of good-quality labeled samples is needed to train a good DL classifier, as emphasized in Appendix A.1. Zhu et al. BIB013 provided a very recent survey of DL in RS, where they reviewed the recent advances and analyzed the challenges of using DL for RS data analysis. More importantly, they advocate that RS scientists should adapt DL to tackle large-scale RS challenges, such as applying RS and DL to study climate change and urbanization. However, AL (Section 3.1) and ADL (Section 3.4) based methods are not touched on in their review. In their conclusions, the authors did emphasize that limited training samples in RS represent a challenging bottleneck to progress. Our review provides a promising solution to the challenges they pointed out. To help RS researchers get started with DL, a technical tutorial on DL for RS data is provided in . AL has a relatively long history and has been widely studied for RS applications (compared with attention given to AL in other components of GIScience). Many successful AL examples in RS in the literature (reviewed below in this section) have demonstrated that AL can aid RS image classification tasks, whereas ADL (Section 3.4) has only recently been applied to RS for image classification. Below, we first introduce some AL methods used for RS image classification, and then more recent ADL methods applied to RS image classification problems. Some pioneering work using AL for RS image classification can be found in BIB001 BIB004 BIB005 BIB006 BIB007 . Tuia et al. BIB008 surveyed and tested several main AL methods used in RS communities for (multispectral and hyperspectral) RS image classification. As introduced in Section 3.1, an AL process requires interaction between the annotator (e.g., domain experts) and the model (e.g., a classifier)-the former provides labels, integrating domain knowledge while labeling, and the latter provides the most informative pixels for which to enlist annotators' labels. This is crucial for the success of an AL algorithm-the machine learner needs a query strategy (Section 3.1.5) to rank the pixels in the RS image pool. Tuia et al. BIB008 used AL query strategies (Section 3.1.5), also called heuristics in the RS community BIB008 , to group the AL algorithms they reviewed into three main categories BIB005 : committee, large margin, and posterior probability-based. Tuia et al.
also analyzed and discussed advantages and drawbacks of the methods they reviewed, and provided some advice on how to choose a good AL architecture. One of the directions they pointed out is the inclusion of contextual information in heuristics (i.e., AL query strategies)-they emphasized that the heuristics proposed in the literature mainly used spectral criteria, whereas few heuristics directly considered positional information and/or textures. To address the lack of heuristics that consider spatial constraints, Stumpf et al. BIB011 developed region-based AL heuristics for RS image classification. Empirical tests of their region-based heuristics, which considered both uncertainty and diversity criteria, with multitemporal and multisensor satellite images demonstrated that their method outperformed pointwise sampling and region-based methods that considered only uncertainty. An early example of applying AL methods in RS can be found in BIB001 , in which Mitra et al. employed an AL technique that selects the n most uncertain samples for segmentation of multispectral RS images, using SVMs for binary classification (Appendix A.4.1). Their AL query strategy is to select the sample closest to the current separating hyperplane of each binary SVM. Ferecatu and Boujemaa BIB003 also employed an SVM classifier in their AL method for remote-sensing image retrieval. Their experimental evaluation of classification performance confirmed the effectiveness of their AL approach for RS image retrieval. Their AL selection criterion focused on minimizing redundancy between the candidate images shown to the user. Obtaining training data for land cover classification using remotely sensed imagery is time consuming and expensive, especially for relatively inaccessible locations. In an early step toward the goal of designing classifiers that use as few labeled data points as possible, Rajan et al. BIB004 proposed an AL technique that efficiently updates existing classifiers by using minimal labeled data points. Specifically, Rajan et al. BIB004 used an AL technique that selects the unlabeled sample that maximizes the information gain between the a posteriori probability distribution estimated from the current training set and that estimated from the (new) training set obtained by including that sample in it. The information gain is measured by the Kullback-Leibler divergence (Section 3.1.5). One main contribution they made was that their AL method can adapt classifiers when there is substantial change in the spectral signatures between labeled and unlabeled data. Their AL approach is also useful for classifying a series of spatially/temporally related images, wherein the spectral signatures vary across the images. Their empirical results, tested on both single and spatially/temporally related hyperspectral data sets, showed good performance. As introduced in Section 3.1.4, batch-mode AL is better suited to parallel labeling environments or models with slow training procedures, because it accelerates the learning speed. Tuia et al. BIB005 proposed two batch-mode AL algorithms for multi-class (Appendix A.4.2) RS image classification. The first algorithm extended SVM margin sampling (Section 3.1.5) by incorporating diversity (Section 3.1.5) in kernel space, while the second is an entropy-based (Section 3.1.5) version of the query-by-bagging algorithm. Pseudo code for the AL algorithms was provided in their appendix. Demir et al.
BIB006 also investigated several multi-class (Appendix A.4.2) SVM-based batch-mode AL techniques for interactive classification of RS images; one outcome of the research was a proposed cluster-based diversity criterion for informative query selection. Patra and Bruzzone BIB007 also proposed a fast cluster-assumption based AL technique, but they only considered the uncertainty criterion. In a follow-up study, Patra and Bruzzone BIB009 proposed a batch-mode AL (Section 3.1.4) technique that considered both uncertainty and diversity criteria for solving multi-class classification (Appendix A.4.2) problems using an SVM classifier with a one-against-all (OAA) architecture. Their experimental results running on two different RS data sets (i.e., hyperspectral and multispectral) confirmed the effectiveness of the proposed technique. Above, we have seen some successful AL methods to tackle RS problems. Now, we will introduce recent ADL (Section 3.4) work for RS image classification. An RS scene can be classified into a specific scene theme (e.g., a part of a forest, a parking lot, or a lake). In this type of classification task, supervised learning techniques are usually employed. Zou et al. BIB012 used AL for RS scene classification to remove less informative deep belief network (DBN) features BIB002 , before a t-test was applied on the remaining features for discriminative feature selection. Specifically, they used iterative execution of AL, with 200 iterations, to collect an informative feature set from the DBN features, and then performed a t-test for feature selection. It is expensive to get good labeled samples in hyperspectral images for RS applications. To address this challenge, Liu et al. BIB014 proposed an ADL method for RS hyperspectral image classification, where their algorithm selects training samples that maximize two selection criteria (i.e., representativeness and uncertainty). The performance of their algorithm was compared with several other AL (but not integrated with DL) classification algorithms that used different query strategies (i.e., random sampling, maximum uncertainty sampling, and QBC BIB008 ; see Section 3.1.5). Their results demonstrated that the proposed algorithm achieved higher accuracy with fewer training samples by actively selecting training samples. DL has been widely studied to recognize ground objects from satellite imagery, but Chen and Zipf emphasized that finding ground truth, especially for developing and rural areas, is not easy, and that manually labeling a large set of training data is very expensive. To tackle this challenge, Chen and Zipf proposed an ongoing research project named DeepVGI, with the goal of employing ADL (Section 3.4) to classify satellite imagery with Volunteered Geographic Information (VGI) data. In their DeepVGI method, Chen and Zipf tested two classic CNNs (LeNet and AlexNet BIB010 ) and a multilayer perceptron (MLP) (a class of feed-forward neural network) [143] . The overall testing performance of their initial DeepVGI results, compared with DeepOSM and MapSwipe, demonstrated that DeepVGI's performance (in particular, F1 score and accuracy) is significantly better than DeepOSM's, but not as good as that of the MapSwipe volunteers (each image is voted on by three volunteers). Training neural networks with OpenStreetMap (OSM) data, DeepOSM can make predictions of mis-registered roads in OSM data by classifying roads and features from satellite imagery . The DL architecture DeepOSM used is a simple one-layer CNN.
MapSwipe is a crowd-sourcing mobile application that allows volunteers to label images with buildings or roads. Almost all reported methods applying DL in RS shared the motivation that getting labeled data for RS imagery is challenging. Thus, AL/ADL will help clear some of the hurdles in the process of empowering RS research with DL.
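Several of the batch-mode RS methods above combine an uncertainty criterion with a clustering-based diversity criterion BIB006 BIB009 . The sketch below illustrates that combination using margin-based uncertainty and plain k-means with per-cluster medoids (the cited works use kernel k-means); it assumes class probabilities and pixel feature vectors as NumPy arrays, and the function name and parameters are ours.

    import numpy as np
    from sklearn.cluster import KMeans

    def batch_query(probs, X_pool, batch_size, m=10):
        # Uncertainty: keep the m*batch_size samples with the smallest margin
        # between the two most probable classes (closest to the boundary).
        s = np.sort(probs, axis=1)
        uncertain = np.argsort(s[:, -1] - s[:, -2])[: m * batch_size]

        # Diversity: cluster the uncertain samples and pick, per cluster, the
        # sample nearest its centroid (a medoid), so the batch spans the
        # uncertain region instead of querying redundant neighbors.
        km = KMeans(n_clusters=batch_size, n_init=10).fit(X_pool[uncertain])
        batch = []
        for c in range(batch_size):
            members = uncertain[km.labels_ == c]
            d = np.linalg.norm(X_pool[members] - km.cluster_centers_[c], axis=1)
            batch.append(members[np.argmin(d)])
        return np.array(batch)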
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Technical Challenges and Opportunities <s> In this paper we present methods of enhancing existing discriminative classifiers for multi-labeled predictions. Discriminative methods like support vector machines perform very well for uni-labeled text classification tasks. Multi-labeled classification is a harder task that has received relatively less attention. In the multi-labeled setting, classes are often related to each other or part of an is-a hierarchy. We present a new technique for combining text features and features indicating relationships between classes, which can be used with any discriminative algorithm. We also present two enhancements to the margin of SVMs for building better models in the presence of overlapping classes. We present results of experiments on real world text benchmark datasets. Our new methods beat the accuracy of existing methods with statistically significant improvements. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Technical Challenges and Opportunities <s> We explore the task of automatic classification of texts by the emotions expressed. Our novel method arranges neutrality, polarity and emotions hierarchically. We test the method on two datasets and show that it outperforms the corresponding "flat" approach, which does not take into account the hierarchical information. The highly imbalanced structure of most of the datasets in this area, particularly the two datasets with which we worked, has a dramatic effect on the performance of classification. The hierarchical approach helps alleviate the effect. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Technical Challenges and Opportunities <s> Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership and then the user is asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee, large margin, and posterior probability-based. For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a good architecture are provided for new and/or inexperienced users. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Technical Challenges and Opportunities <s> In this letter, we present a novel batch-mode active learning technique for solving multiclass classification problems by using the support vector machine classifier with the one-against-all architecture.
The uncertainty of each unlabeled sample is measured by defining a criterion which not only considers the smallest distance to the decision hyperplanes but also takes into account the distances to other hyperplanes if the sample is within the margin of their decision boundaries. To select a batch of the most uncertain samples from all over the decision region, the uncertain regions of the classifiers are partitioned into multiple parts depending on the number of geometrical margins of binary classifiers passing on them. Then, a balanced number of the most uncertain samples are selected from each part. To minimize the redundancy and keep the diversity among these samples, the kernel k-means clustering algorithm is applied to the set of uncertain samples, and the representative sample (medoid) from each cluster is selected for labeling. The effectiveness of the proposed method is evaluated by comparing it with other batch-mode active learning techniques existing in the literature. Experimental results on two different remote sensing data sets confirmed the effectiveness of the proposed technique. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Technical Challenges and Opportunities <s> In hierarchical classification, the prediction paths may be required to always end at leaf nodes. This is called mandatory leaf node prediction (MLNP) and is particularly useful when the leaf nodes have much stronger semantic meaning than the internal nodes. However, while there have been a lot of MLNP methods in hierarchical multiclass classification, performing MLNP in hierarchical multilabel classification is much more difficult. In this paper, we propose a novel MLNP algorithm that (i) considers the global hierarchy structure; and (ii) can be used on hierarchies of both trees and DAGs. We show that one can efficiently maximize the joint posterior probability of all the node labels by a simple greedy algorithm. Moreover, this can be further extended to the minimization of the expected symmetric loss. Experiments are performed on a number of real-world data sets with tree- and DAG-structured label hierarchies. The proposed method consistently outperforms other hierarchical and flat multilabel classification methods. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Technical Challenges and Opportunities <s> Active learning methods select informative instances to effectively learn a suitable classifier. Uncertainty sampling, a frequently utilized active learning strategy, selects instances about which the model is uncertain but it does not consider the reasons why the model is uncertain. In this article, we present an evidence-based framework that can uncover the reasons why a model is uncertain on a given instance. Using the evidence-based framework, we discuss two reasons for uncertainty of a model: a model can be uncertain about an instance because it has strong, but conflicting evidence for both classes or it can be uncertain because it does not have enough evidence for either class. Our empirical evaluations on several real-world datasets show that distinguishing between these two types of uncertainties has a drastic impact on the learning efficiency. We further provide empirical and analytical justifications as to why distinguishing between the two uncertainties matters.
<s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Technical Challenges and Opportunities <s> Most of the empirical evaluations of active learning approaches in the literature have focused on a single classifier and a single performance measure. We present an extensive empirical evaluation of common active learning baselines using two probabilistic classifiers and several performance measures on a number of large datasets. In addition to providing important practical advice, our findings highlight the importance of overlooked choices in active learning experiments in the literature. For example, one of our findings shows that model selection is as important as devising an active learning approach, and choosing one classifier and one performance measure can often lead to unexpected and unwarranted conclusions. Active learning should generally improve the model's capability to distinguish between instances of different classes, but our findings show that the improvements provided by active learning for one performance measure often came at the expense of another measure. We present several such results, raise questions, guide users and researchers to better alternatives, caution against unforeseen side effects of active learning, and suggest future research directions. <s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Technical Challenges and Opportunities <s> While deep convolutional neural networks (CNNs) have shown great success in single-label image classification, it is important to note that real world images generally contain multiple labels, which could correspond to different objects, scenes, actions and attributes in an image. Traditional approaches to multi-label image classification learn independent classifiers for each category and employ ranking or thresholding on the classification results. These techniques, although working well, fail to explicitly exploit the label dependencies in an image. In this paper, we utilize recurrent neural networks (RNNs) to address this problem. Combined with CNNs, the proposed CNN-RNN framework learns a joint image-label embedding to characterize the semantic label dependency as well as the image-label relevance, and it can be trained end-to-end from scratch to integrate both types of information in a unified framework. Experimental results on public benchmark datasets demonstrate that the proposed architecture achieves better performance than the state-of-the-art multi-label classification models. <s> BIB008 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Technical Challenges and Opportunities <s> In this paper, we propose the joint learning attention and recurrent neural network (RNN) models for multi-label classification. While approaches based on the use of either model exist (e.g., for the task of image captioning), training such existing network architectures typically requires pre-defined label sequences. For multi-label classification, it would be desirable to have a robust inference process, so that the prediction error would not propagate and thus affect the performance. Our proposed model uniquely integrates attention and Long Short Term Memory (LSTM) models, which not only addresses the above problem but also allows one to identify visual objects of interest with varying sizes without the prior knowledge of particular label ordering.
More importantly, label co-occurrence information can be jointly exploited by our LSTM model. Finally, by advancing the technique of beam search, prediction of multiple labels can be efficiently achieved by our proposed network model. <s> BIB009 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Technical Challenges and Opportunities <s> Abstract Classification of evolving data streams is a challenging task, which is suitably tackled with online learning approaches. Data is processed instantly requiring the learning machinery to (self-)adapt by adjusting its model. However for high velocity streams, it is usually difficult to obtain labeled samples to train the classification model. Hence, we propose a novel online batch-based active learning algorithm (OBAL) to perform the labeling. OBAL is developed for crisis management applications where data streams are generated by the social media community. OBAL is applied to discriminate relevant from irrelevant social media items. An emergency management user will be interactively queried to label chosen items. OBAL exploits the boundary items for which it is highly uncertain about their class and makes use of two classifiers: k-Nearest Neighbors (kNN) and Support Vector Machine (SVM). OBAL is equipped with a labeling budget and a set of uncertainty strategies to identify the items for labeling. An extensive analysis is carried out to show OBAL’s performance, the sensitivity of its parameters, and the contribution of the individual uncertainty strategies. Two types of datasets are used: synthetic and social media datasets related to crises. The empirical results illustrate that OBAL has a very good discrimination power. <s> BIB010 | Below we list some of the main technical challenges and opportunities, ranging from classifiers and AL problem scenarios to the integration of VA with AL/ADL. • Multi-label classification: Most existing multi-label classification research has been based on simple ML models (such as logistic regression BIB006 BIB007 , naive Bayes BIB006 BIB007 , and SVM BIB003 BIB006 BIB004 BIB001 ), but very little on DL architectures such as CNNs and RNNs. We need to extend the traditional ML models to DL ones for Big Data problems because, as we emphasized in Appendix A.1, DL algorithms have better scalability than traditional ML algorithms . Wang et al. BIB008 and Chen et al. BIB009 have developed a CNN-RNN framework and an order-free RNN for multi-label classification of image data sets, respectively, whereas few DL-based multi-label classification methods have been proposed for text data (a minimal multi-label sketch is given after this list). • Hierarchical classification: As Silla et al. pointed out in their survey about hierarchical classification (Appendix A.4.4) across different application domains, flat classification (Appendix A.4.4) has received much more attention in areas such as data mining and ML. However, many important real-world classification problems are naturally cast as hierarchical classification problems, where the classes to be predicted are organized into a class hierarchy, typically a tree or a directed acyclic graph (DAG); for geospatial problems, feature type classification provides a good example. Hierarchical classification algorithms, which utilize the hierarchical relationships between labels in making predictions, can often achieve better prediction performance than flat approaches BIB002 BIB005 .
Thus, there is a clear research challenge to develop new approaches that are flexible enough to handle hierarchical classification tasks, in particular, the integration of hierarchical classification with single-label classification and with multi-label classification (i.e., HSC and HMC), respectively (a minimal top-down hierarchical sketch is given after this list). • Stream-based selective sampling AL: As introduced in Section 3.1.2 and discussed in BIB010 , most AL methods in the literature use a pool-based sampling scenario; only a few methods have been developed for data streams. The stream-based approach is more appropriate for some real-world scenarios, for example, when memory or processing power is limited (mobile and embedded devices) , for crisis management during disasters leveraging social media data streams, or for monitoring distributed sensor networks to identify categories of events that pose risks to people or the environment. Given the rapidly increasing availability of geospatial streaming data, a key challenge is to develop more effective AL methods and applications using a stream-based AL scenario (a minimal stream-based sketch is likewise given after this list). |
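To make the multi-label classification item above concrete, the following minimal PyTorch sketch (our illustration, not the architecture of BIB008 or BIB009 ) replaces the usual softmax with per-label sigmoids trained via binary cross-entropy; the feature dimension, label count, and linear backbone are illustrative assumptions, and any CNN or RNN feature extractor could be substituted.

import torch
import torch.nn as nn

num_features, num_labels = 300, 5                 # illustrative sizes
# Stand-in backbone; a CNN (images) or RNN (text) would replace it.
model = nn.Sequential(nn.Linear(num_features, 128),
                      nn.ReLU(),
                      nn.Linear(128, num_labels))
criterion = nn.BCEWithLogitsLoss()                # per-label sigmoid + binary cross-entropy

x = torch.randn(8, num_features)                  # a batch of feature vectors
y = torch.randint(0, 2, (8, num_labels)).float()  # multi-hot label matrix
criterion(model(x), y).backward()                 # one binary problem per label

probs = torch.sigmoid(model(x))                   # independent per-label probabilities
pred_labels = (probs > 0.5).int()                 # thresholding yields the label set

Nothing in such a head forces the labels to compete, but it also ignores label dependencies; exploiting those dependencies is precisely what the CNN-RNN framework BIB008 and the order-free RNN BIB009 add on top.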
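For the hierarchical classification item, the sketch below implements the common top-down, local-classifier-per-parent-node strategy with mandatory leaf node prediction on a toy tree of geospatial feature types; the hierarchy, the synthetic data, and the flat logistic classifiers are all assumptions made for illustration, not a reimplementation of BIB005 .

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy tree-structured class hierarchy of geospatial feature types.
children = {"root": ["natural", "man-made"],
            "natural": ["river", "forest"],
            "man-made": ["road", "building"]}
leaves = ["river", "forest", "road", "building"]

# Synthetic 2-D features: one cluster per leaf class.
centers = np.array([[0.0, 0.0], [0.0, 4.0], [4.0, 0.0], [4.0, 4.0]])
X = rng.normal(size=(400, 2)) + np.repeat(centers, 100, axis=0)
y_leaf = np.repeat(leaves, 100)

def child_under(parent, leaf):
    """Return the child of `parent` whose subtree contains `leaf`, else None."""
    for c in children[parent]:
        if leaf == c or (c in children and child_under(c, leaf)):
            return c
    return None

# One local classifier per parent node, trained only on its subtree's samples.
local = {}
for parent in children:
    targets = np.array([child_under(parent, l) for l in y_leaf], dtype=object)
    mask = np.array([t is not None for t in targets])
    local[parent] = LogisticRegression().fit(X[mask], targets[mask].astype(str))

def predict(x):
    node = "root"
    while node in children:                      # walk down the hierarchy
        node = local[node].predict(x.reshape(1, -1))[0]
    return node                                  # mandatory leaf node prediction

print(predict(np.array([4.2, 3.8])))             # e.g., 'building'

An HMC variant would attach a multi-label head (as in the previous sketch) at each node instead of a single-label one.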
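Finally, for the stream-based item, this sketch couples an incrementally trainable classifier with an uncertainty threshold and a labeling budget, echoing in highly simplified form the budgeted uncertainty strategies of OBAL BIB010 ; the stream, the oracle, and the threshold/budget values are synthetic assumptions (scikit-learn >= 1.1 is assumed for the "log_loss" loss name).

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
oracle = lambda x: int(x[0] + x[1] > 0)        # stand-in for a human labeler

clf = SGDClassifier(loss="log_loss")           # supports online updates via partial_fit
X_seed = rng.normal(size=(20, 2))              # a few items labeled up front
clf.partial_fit(X_seed, [oracle(x) for x in X_seed], classes=[0, 1])

budget, spent, threshold = 25, 0, 0.2
for _ in range(2000):                          # simulated unlabeled stream
    x = rng.normal(size=(1, 2))
    p = clf.predict_proba(x)[0]
    uncertain = abs(p[0] - p[1]) < threshold   # small margin = near the boundary
    if uncertain and spent < budget:
        clf.partial_fit(x, [oracle(x[0])])     # query the oracle, update online
        spent += 1                             # otherwise the item streams past unlabeled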
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> • <s> Abstract Voluminous geographic data have been, and continue to be, collected with modern data acquisition techniques such as global positioning systems (GPS), high-resolution remote sensing, location-aware services and surveys, and internet-based volunteered geographic information. There is an urgent need for effective and efficient methods to extract unknown and unexpected information from spatial data sets of unprecedentedly large size, high dimensionality, and complexity. To address these challenges, spatial data mining and geographic knowledge discovery has emerged as an active research field, focusing on the development of theory, methodology, and practice for the extraction of useful information and knowledge from massive and complex spatial databases. This paper highlights recent theoretical and applied research in spatial data mining and knowledge discovery. We first briefly review the literature on several common spatial data-mining tasks, including spatial classification and prediction; spatial association rule mining; spatial cluster analysis; and geovisualization. The articles included in this special issue contribute to spatial data mining research by developing new techniques for point pattern analysis, prediction in space–time data, and analysis of moving object data, as well as by demonstrating applications of genetic algorithms for optimization in the context of image classification and spatial interpolation. The papers concludes with some thoughts on the contribution of spatial data mining and geographic knowledge discovery to geographic information sciences. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> • <s> How to build a compact and informative training data set autonomously is crucial for many real-world learning tasks, especially those with large amount of unlabeled data and high cost of labeling. Active learning aims to address this problem by asking queries in a smart way. Two main scenarios of querying considered in the literature are query synthesis and pool-based sampling. Since in many cases synthesized queries are meaningless or difficult for human to label, more efforts have been devoted to pool-based sampling in recent years. However, in pool-based active learning, querying requires evaluating every unlabeled data point in the pool, which is usually very time-consuming. By contrast, query synthesis has clear advantage on querying time, which is independent of the pool size. In this paper, we propose a novel framework combining query synthesis and pool-based sampling to accelerate the learning process and overcome the current limitation of query synthesis. The basic idea is to select the data point nearest to the synthesized query as the query point. We also provide two simple strategies for synthesizing informative queries. Moreover, to further speed up querying, we employ clustering techniques on the whole data set to construct a representative unlabeled data pool based on cluster centers. Experiments on several real-world data sets show that our methods have distinct advantages in time complexity and similar performance compared to pool-based uncertainty sampling methods. 
<s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> • <s> Active learning has received great interests from researchers due to its ability to reduce the amount of supervision required for effective learning. As the core component of active learning algorithms, query synthesis and pool-based sampling are two main scenarios of querying considered in the literature. Query synthesis features low querying time, but only has limited applications as the synthesized query might be unrecognizable to human oracle. As a result, most efforts have focused on pool-based sampling in recent years, although it is much more time-consuming. In this paper, we propose new strategies for a novel querying framework that combines query synthesis and pool-based sampling. It overcomes the limitation of query synthesis, and has the advantage of fast querying. The basic idea is to synthesize an instance close to the decision boundary using labelled data, and then select the real instance closest to the synthesized one as a query. For this purpose, we propose a synthesis strategy, which can synthesize instances close to the decision boundary and spreading along the decision boundary. Since the synthesis only depends on the relatively small labelled set, instead of evaluating the entire unlabelled set as many other active learning algorithms do, our method has the advantage of efficiency. In order to handle more complicated data and make our framework compatible with powerful kernel-based learners, we also extend our method to kernel version. Experiments on several real-world data sets show that our method has significant advantage on time complexity and similar performance compared to pool-based uncertainty sampling methods. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> • <s> The automatic detection and classification of stance (e.g., certainty or agreement) in text data using natural language processing and machine-learning methods creates an opportunity to gain insight into the speakers’ attitudes toward their own and other people’s utterances. However, identifying stance in text presents many challenges related to training data collection and classifier training. To facilitate the entire process of training a stance classifier, we propose a visual analytics approach, called ALVA, for text data annotation and visualization. ALVA’s interplay with the stance classifier follows an active learning strategy to select suitable candidate utterances for manual annotation. Our approach supports annotation process management and provides the annotators with a clean user interface for labeling utterances with multiple stance categories. ALVA also contains a visualization method to help analysts of the annotation and training process gain a better understanding of the categories used by the annotators. The visualization uses a novel visual representation, called CatCombos, which groups individual annotation items by the combination of stance categories. Additionally, our system makes a visualization of a vector space model available that is itself based on utterances. ALVA is already being used by our domain experts in linguistics and computational linguistics to improve the understanding of stance phenomena and to build a stance classifier for applications such as social media monitoring.
<s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> • <s> The increasing availability and use of positioning devices has resulted in large volumes of trajectory data. However, semantic annotations for such data are typically added by domain experts, which is a time-consuming task. Machine-learning algorithms can help infer semantic annotations from trajectory data by learning from sets of labeled data. Specifically, active learning approaches can minimize the set of trajectories to be annotated while preserving good performance measures. The ANALYTiC web-based interactive tool visually guides users through this annotation process. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> • <s> Interactive model analysis, the process of understanding, diagnosing, and refining a machine learning model with the help of interactive visualization, is very important for users to efficiently solve real-world artificial intelligence and data mining problems. Dramatic advances in big data analytics has led to a wide variety of interactive model analysis tasks. In this paper, we present a comprehensive analysis and interpretation of this rapidly developing area. Specifically, we classify the relevant work into three categories: understanding, diagnosis, and refinement. Each category is exemplified by recent influential work. Possible future research opportunities are also explored and discussed. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> • <s> Labeling data instances is an important task in machine learning and visual analytics. Both fields provide a broad set of labeling strategies, whereby machine learning (and in particular active learning) follows a rather model-centered approach and visual analytics employs rather user-centered approaches (visual-interactive labeling). Both approaches have individual strengths and weaknesses. In this work, we conduct an experiment with three parts to assess and compare the performance of these different labeling strategies. In our study, we (1) identify different visual labeling strategies for user-centered labeling, (2) investigate strengths and weaknesses of labeling strategies for different labeling tasks and task complexities, and (3) shed light on the effect of using different visual encodings to guide the visual-interactive labeling process. We further compare labeling of single versus multiple instances at a time, and quantify the impact on efficiency. We systematically compare the performance of visual interactive labeling with that of active learning. Our main findings are that visual-interactive labeling can outperform active learning, given the condition that dimension reduction separates well the class distributions. Moreover, using dimension reduction in combination with additional visual encodings that expose the internal state of the learning model turns out to improve the performance of visual-interactive labeling. <s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> • <s> Abstract Classification of evolving data streams is a challenging task, which is suitably tackled with online learning approaches. Data is processed instantly requiring the learning machinery to (self-)adapt by adjusting its model. However for high velocity streams, it is usually difficult to obtain labeled samples to train the classification model. 
Hence, we propose a novel online batch-based active learning algorithm (OBAL) to perform the labeling. OBAL is developed for crisis management applications where data streams are generated by the social media community. OBAL is applied to discriminate relevant from irrelevant social media items. An emergency management user will be interactively queried to label chosen items. OBAL exploits the boundary items for which it is highly uncertain about their class and makes use of two classifiers: k-Nearest Neighbors (kNN) and Support Vector Machine (SVM). OBAL is equipped with a labeling budget and a set of uncertainty strategies to identify the items for labeling. An extensive analysis is carried out to show OBAL’s performance, the sensitivity of its parameters, and the contribution of the individual uncertainty strategies. Two types of datasets are used: synthetic and social media datasets related to crises. The empirical results illustrate that OBAL has a very good discrimination power. <s> BIB008 | Integration of different AL problem scenarios: As introduced in Section 3.1.2, among the three main AL problem scenarios, pool-based sampling has received substantial development. However, there is potential to combine scenarios to take advantage of their respective strengths (e.g., the use of real instances that humans are able to annotate in pool-based sampling, and the efficiency of membership query synthesis). In early work in this direction, Hu et al. BIB002 and Wang et al. BIB003 combined the membership query synthesis and pool-based sampling scenarios. Their experiments on several real-world data sets showed the strength of the combination over pool-based uncertainty sampling methods in terms of time complexity (a minimal sketch of this combined scenario is given below). More query strategies (Section 3.1.5) and M&DL architectures need to be tested to demonstrate the robustness of this improvement. Integration of VA with AL/ADL: As Biewald explained in , human-in-the-loop computing is the future of ML. Biewald emphasized that it is often very easy to get an ML algorithm to 80% accuracy, whereas it is almost impossible to get it to 99%; the best ML models let humans handle that remaining 20%, because 80% accuracy is not good enough for most real-world applications. To integrate human-in-the-loop methodology into ML architectures, AL is the most successful "bridge" BIB007 BIB004 BIB008 BIB005 , and VA can further enhance and ease the human's role in the human-machine computing loop BIB006 BIB007 BIB001 . Integrating the strengths of AL (especially ADL) and VA will raise the effectiveness and efficiency to new levels (Sections 3.1-3.4). Bernard et al. BIB007 provided solid evidence to support this thread of research (Section 3.3). |
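The following sketch illustrates the basic idea of the combined scenario described above: synthesize a query near the current decision boundary using only the (small) labeled set, then hand the human oracle the real pool instance nearest to it. The midpoint-of-closest-opposite-pair heuristic, the synthetic data, and the linear SVM are our illustrative assumptions; BIB002 BIB003 propose more refined synthesis strategies.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X_lab = rng.normal(size=(40, 2))                  # small labeled set
y_lab = (X_lab.sum(axis=1) > 0).astype(int)
X_pool = rng.normal(size=(5000, 2))               # large unlabeled pool

clf = SVC(kernel="linear").fit(X_lab, y_lab)

# 1) Synthesize a point near the boundary from labeled data only
#    (midpoint of the closest opposite-class pair), so the cost of
#    producing a query is independent of the pool size.
pos, neg = X_lab[y_lab == 1], X_lab[y_lab == 0]
d = np.linalg.norm(pos[:, None, :] - neg[None, :, :], axis=2)
i, j = np.unravel_index(np.argmin(d), d.shape)
synthesized = (pos[i] + neg[j]) / 2.0
margin = clf.decision_function(synthesized.reshape(1, -1))[0]  # ~0 near the boundary

# 2) Query the real instance closest to the synthesized one, so the
#    human oracle labels something recognizable rather than synthetic.
query_idx = int(np.argmin(np.linalg.norm(X_pool - synthesized, axis=1)))
x_query = X_pool[query_idx]

Because step (1) touches only the labeled set and step (2) is a single nearest-neighbor search, the querying cost stays low even for very large pools, which is the efficiency argument made in BIB002 BIB003 .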
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> As the resolution of remote-sensing imagery increases, the full complexity of the scenes becomes increasingly difficult to approach. User-defined classes in large image databases are often composed of several groups of images and span very different scales in the space of low-level visual descriptors. The interactive retrieval of such image classes is then very difficult. To address this challenge, we evaluate here, in the context of satellite image retrieval, two general improvements for relevance feedback using support vector machines (SVMs). First, to optimize the transfer of information between the user and the system, we focus on the criterion employed by the system for selecting the images presented to the user at every feedback round. We put forward an active-learning selection criterion that minimizes redundancy between the candidate images shown to the user. Second, for image classes spanning very different scales in the low-level description space, we find that a high sensitivity of the SVM to the scale of the data brings about a low retrieval performance. We argue that the insensitivity to scale is desirable in this context, and we show how to obtain it by the use of specific kernel functions. Experimental evaluation of both ranking and classification performance on a ground-truth database of satellite images confirms the effectiveness of our approach <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Similarity measures have a long tradition in fields such as information retrieval, artificial intelligence, and cognitive science. 
Within the last years, these measures have been extended and reused to measure semantic similarity; i.e., for comparing meanings rather than syntactic differences. Various measures for spatial applications have been developed, but a solid foundation for answering what they measure; how they are best applied in information retrieval; which role contextual information plays; and how similarity values or rankings should be interpreted is still missing. It is therefore difficult to decide which measure should be used for a particular application or to compare results from different similarity theories. Based on a review of existing similarity measures, we introduce a framework to specify the semantics of similarity. We discuss similarity-based information retrieval paradigms as well as their implementation in web-based user interfaces for geographic information retrieval to demonstrate the applicability of the framework. Finally, we formulate open challenges for similarity research. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> The aim of this article is to provide a basis in evidence for or against the much-quoted assertion that 80% of all information is geospatially referenced. For this purpose, two approaches are presented that are intended to capture the portion of geospatially referenced information in user-generated content: a network approach and a cognitive approach. In the network approach, the German Wikipedia is used as a research corpus. It is considered a network with the articles being nodes and the links being edges. The Network Degree of Geospatial Reference (NDGR) is introduced as an indicator to measure the network approach. We define NDGR as the shortest path between any Wikipedia article and the closest article within the network that is labeled with coordinates in its headline. An analysis of the German Wikipedia employing this approach shows that 78% of all articles have a coordinate themselves or are directly linked to at least one article that has geospatial coordinates. The cognitive approach is manifested by the categories of geospatial reference (CGR): direct, indirect, and non-geospatial reference.
These are categories that may be distinguished and applied by humans. An empirical study including 380 participants was conducted. The results of both approaches are synthesized with the aim to (1) examine correlations between NDGR and the human conceptualization of geospatial reference and (2) to separate geospatial from non-geospatial information. From the results of this synthesis, it can be concluded that 56–59% of the articles within Wikipedia can be considered to be directly or indirectly geospatially referenced. The article thus describes a method to check the validity of the ‘80%-assertion’ for information corpora that can be modeled using graphs (e.g., the World Wide Web, the Semantic Web, and Wikipedia). For the corpus investigated here (Wikipedia), the ‘80%-assertion’ cannot be confirmed, but would need to be reformulated as a ‘60%-assertion’. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> The recent availability of large amounts of geotagged imagery has inspired a number of data driven solutions to the image geolocalization problem. Existing approaches predict the location of a query image by matching it to a database of georeferenced photographs. While there are many geotagged images available on photo sharing and street view sites, most are clustered around landmarks and urban areas. The vast majority of the Earth's land area has no ground level reference photos available, which limits the applicability of all existing image geolocalization methods. On the other hand, there is no shortage of visual and geographic data that densely covers the Earth - we examine overhead imagery and land cover survey data - but the relationship between this data and ground level query photographs is complex. In this paper, we introduce a cross-view feature translation approach to greatly extend the reach of image geolocalization methods. We can often localize a query even if it has no corresponding ground level images in the database. A key idea is to learn the relationship between ground level appearance and overhead appearance and land cover attributes from sparsely available geotagged ground-level images. We perform experiments over a 1600 km² region containing a variety of scenes and land cover types. For each query, our algorithm produces a probability density over the region of interest. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Traditional gazetteers are built and maintained by authoritative mapping agencies. In the age of Big Data, it is possible to construct gazetteers in a data-driven approach by mining rich volunteered geographic information (VGI) from the Web. In this research, we build a scalable distributed platform and a high-performance geoprocessing workflow based on the Hadoop ecosystem to harvest crowd-sourced gazetteer entries. Using experiments based on geotagged datasets in Flickr, we find that the MapReduce-based workflow running on the spatially enabled Hadoop cluster can reduce the processing time compared with traditional desktop-based operations by an order of magnitude. We demonstrate how to use such a novel spatial-computing infrastructure to facilitate gazetteer research. In addition, we introduce a provenance-based trust model for quality assurance.
This work offers new insights on enriching future gazetteers with the use of Hadoop clusters, and makes contributions in connecting GIS to the cloud computing environment for the next frontier of Big Geo-Data analytics. <s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision. <s> BIB008 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Scene recognition is one of the hallmark tasks of computer vision, allowing definition of a context for object recognition. Whereas the tremendous recent progress in object recognition tasks is due to the availability of large datasets like ImageNet and the rise of Convolutional Neural Networks (CNNs) for learning high-level features, performance at scene recognition has not attained the same level of success. This may be because current deep features trained from ImageNet are not competitive enough for such tasks. Here, we introduce a new scene-centric database called Places with over 7 million labeled pictures of scenes. We propose new methods to compare the density and diversity of image datasets and show that Places is as dense as other scene datasets and has more diversity. Using CNN, we learn deep features for scene recognition tasks, and establish new state-of-the-art results on several scene-centric datasets. A visualization of the CNN layers' responses allows us to show differences in the internal representations of object-centric and scene-centric networks. <s> BIB009 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Learning effective feature representations and similarity measures are crucial to the retrieval performance of a content-based image retrieval (CBIR) system. Despite extensive research efforts for decades, it remains one of the most challenging open problems that considerably hinders the successes of real-world CBIR systems. The key challenge has been attributed to the well-known "semantic gap" issue that exists between low-level image pixels captured by machines and high-level semantic concepts perceived by human. Among various techniques, machine learning has been actively investigated as a possible direction to bridge the semantic gap in the long term.
Inspired by recent successes of deep learning techniques for computer vision and other applications, in this paper, we attempt to address an open problem: if deep learning is a hope for bridging the semantic gap in CBIR and how much improvements in CBIR tasks can be achieved by exploring the state-of-the-art deep learning techniques for learning feature representations and similarity measures. Specifically, we investigate a framework of deep learning with application to CBIR tasks with an extensive set of empirical studies by examining a state-of-the-art deep learning method (Convolutional Neural Networks) for CBIR tasks under varied settings. From our empirical studies, we find some encouraging results and summarize some important insights for future research. <s> BIB010 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors. <s> BIB011 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Deep Learning waves have lapped at the shores of computational linguistics for several years now, but 2015 seems like the year when the full force of the tsunami hit the major Natural Language Processing (NLP) conferences. However, some pundits are predicting that the final damage will be even worse. Accompanying ICML 2015 in Lille, France, there was another, almost as big, event: the 2015 Deep Learning Workshop. The workshop ended with a panel discussion, and at it, Neil Lawrence said, “NLP is kind of like a rabbit in the headlights of the Deep Learning machine, waiting to be flattened.” Now that is a remark that the computational linguistics community has to take seriously! 
Is it the end of the road for us? Where are these predictions of steamrollering coming from? At the June 2015 opening of the Facebook AI Research Lab in Paris, its director Yann LeCun said: “The next big step for Deep Learning is natural language understanding, which aims to give machines the power to understand not just individual words but entire sentences and paragraphs.” In a November 2014 Reddit AMA (Ask Me Anything), Geoff Hinton said, “I think that the most exciting areas over the next five years will be really understanding text and videos. I will be disappointed if in five years’ time we do not have something that can watch a YouTube video and tell a story about what happened. In a few years time we will put [Deep Learning] on a chip that fits into someone’s ear and have an English-decoding chip that’s just like a real Babel fish.” And Yoshua Bengio, the third giant of modern Deep Learning, has also increasingly oriented his group’s research toward language, including recent exciting new developments in neural machine translation systems. It’s not just Deep Learning researchers. When leading machine learning researcher Michael Jordan was asked at a September 2014 AMA, “If you got a billion dollars to spend on a huge research project that you get to lead, what would you like to do?”, he answered: “I’d use the billion dollars to build a NASA-size program focusing on natural language processing, in all of its glory (semantics, pragmatics, etc.).” He went on: “Intellectually I think that NLP is fascinating, allowing us to focus on highly structured inference problems, on issues that go to the core of ‘what is thought’ but remain eminently practical, and on a technology <s> BIB012 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection. <s> BIB013 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> We explore the use of convolutional neural networks for the semantic classification of remote sensing scenes. Two recently proposed architectures, CaffeNet and GoogLeNet, are adopted, with three different learning modalities. Besides conventional training from scratch, we resort to pre-trained networks that are only fine-tuned on the target data, so as to avoid overfitting problems and reduce design time.
Experiments on two remote sensing datasets, with markedly different characteristics, testify on the effectiveness and wide applicability of the proposed solution, which guarantees a significant performance improvement over all state-of-the-art references. <s> BIB014 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Abstract Urban areas of interest (AOI) refer to the regions within an urban environment that attract people's attention. Such areas often have high exposure to the general public, and receive a large number of visits. As a result, urban AOI can reveal useful information for city planners, transportation analysts, and location-based service providers to plan new business, extend existing infrastructure, and so forth. Urban AOI exist in people's perception and are defined by behaviors. However, such perception was rarely captured until the Social Web information technology revolution. Social media data record the interactions between users and their surrounding environment, and thus have the potential to uncover interesting urban areas and their underlying spatiotemporal dynamics. This paper presents a coherent framework for extracting and understanding urban AOI based on geotagged photos. Six different cities from six different countries have been selected for this study, and Flickr photo data covering these cities in the past ten years (2004–2014) have been retrieved. We identify AOI using DBSCAN clustering algorithm, understand AOI by extracting distinctive textual tags and preferable photos, and discuss the spatiotemporal dynamics as well as some insights derived from the AOI. An interactive prototype has also been implemented as a proof-of-concept. While Flickr data have been used in this study, the presented framework can also be applied to other geotagged photos. <s> BIB015 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> The recent availability of geo-tagged images and rich geospatial data has inspired a number of algorithms for image based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned crossview image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations. 
<s> BIB016 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> This paper describes our deep learning system for sentiment analysis of tweets. The main contribution of this work is a new model for initializing the parameter weights of the convolutional neural network, which is crucial to train an accurate model while avoiding the need to inject any additional features. Briefly, we use an unsupervised neural language model to train initial word embeddings that are further tuned by our deep learning model on a distant supervised corpus. At a final stage, the pre-trained parameters of the network are used to initialize the model. We train the latter on the supervised training data recently made available by the official system evaluation campaign on Twitter Sentiment Analysis organized by Semeval-2015. A comparison between the results of our approach and the systems participating in the challenge on the official test sets suggests that our model could be ranked in the first two positions in both the phrase-level subtask A (among 11 teams) and on the message-level subtask B (among 40 teams). This is important evidence of the practical value of our solution. <s> BIB017 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes is comprised of a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high quality pixel-level annotations, 20 000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark. <s> BIB018 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> The role of social media, in particular microblogging platforms such as Twitter, as a conduit for actionable and tactical information during disasters is increasingly acknowledged. However, time-critical analysis of big crisis data on social media streams brings challenges to machine learning techniques, especially the ones that use supervised learning. The scarcity of labeled data, particularly in the early hours of a crisis, delays the machine learning process. The current state-of-the-art classification methods require a significant amount of labeled data specific to a particular event for training plus a lot of feature engineering to achieve best results.
In this work, we introduce neural network based classification methods for binary and multi-class tweet classification task. We show that neural network based models do not require any feature engineering and perform better than state-of-the-art methods. In the early hours of a disaster when no labeled data is available, our proposed method makes the best use of the out-of-event data and achieves good results. <s> BIB019 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> A new methodology is introduced that leverages data harvested from social media for tasking the collection of remote-sensing imagery during disasters or emergencies. The images are then fused with multiple sources of contributed data for the damage assessment of transportation infrastructure. The capability is valuable in situations where environmental hazards such as hurricanes or severe weather affect very large areas. During these types of disasters it is paramount to ‘cue’ the collection of remote-sensing images to assess the impact of fast-moving and potentially life-threatening events. The methodology consists of two steps. First, real-time data from Twitter are monitored to prioritize the collection of remote-sensing images for evolving disasters. Commercial satellites are then tasked to collect high-resolution images of these areas. Second, a damage assessment of transportation infrastructure is carried out by fusing the tasked images with contributed data harvested from social media such as Flickr and Twitter, and any additional available data. To demonstrate its feasibility, the proposed methodology is applied and tested on the 2013 Colorado floods with a special emphasis in Boulder County and the cities of Boulder and Longmont. <s> BIB020 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Following an avalanche, one of the factors that affect victims’ chance of survival is the speed with which they are located and dug out. Rescue teams use techniques like trained rescue dogs and electronic transceivers to locate victims. However, the resources and time required to deploy rescue teams are major bottlenecks that decrease a victim’s chance of survival. Advances in the field of Unmanned Aerial Vehicles (UAVs) have enabled the use of flying robots equipped with sensors like optical cameras to assess the damage caused by natural or manmade disasters and locate victims in the debris. In this paper, we propose assisting avalanche search and rescue (SAR) operations with UAVs fitted with vision cameras. The sequence of images of the avalanche debris captured by the UAV is processed with a pre-trained Convolutional Neural Network (CNN) to extract discriminative features. A trained linear Support Vector Machine (SVM) is integrated at the top of the CNN to detect objects of interest. Moreover, we introduce a pre-processing method to increase the detection rate and a post-processing method based on a Hidden Markov Model to improve the prediction performance of the classifier. Experimental results conducted on two different datasets at different levels of resolution show that the detection performance increases with an increase in resolution, while the computation time increases. 
Additionally, they also suggest that a significant decrease in processing time can be achieved thanks to the pre-processing step. <s> BIB021 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Hate speech detection on Twitter is critical for applications like controversial event extraction, building AI chatterbots, content recommendation, and sentiment analysis. We define this task as being able to classify a tweet as racist, sexist or neither. The complexity of the natural language constructs makes this task very challenging. We perform extensive experiments with multiple deep learning architectures to learn semantic word embeddings to handle this complexity. Our experiments on a benchmark dataset of 16K annotated tweets show that such deep learning methods outperform state-of-the-art char/word n-gram methods by ~18 F1 points. <s> BIB022 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> This paper extends recent research into the usefulness of volunteered photos for land cover extraction, and investigates whether this usefulness can be automatically assessed by an easily accessible, off-the-shelf neural network pre-trained on a variety of scene characteristics. Geo-tagged photographs are sometimes presented to volunteers as part of a game which requires them to extract relevant facts about land use. The challenge is to select the most relevant photographs in order to most efficiently extract the useful information while maintaining the engagement and interests of volunteers. By repurposing an existing network which had been trained on an extensive library of potentially relevant features, we can quickly carry out initial assessments of the general value of this approach, pick out especially salient features, and identify focus areas for future neural network training and development. We compare two approaches to extract land cover information from the network: a simple post hoc weighting approach accessible to non-technical audiences and a more complex decision tree approach that involves training on domain-specific features of interest. Both approaches had reasonable success in characterizing human influence within a scene when identifying the land use types (as classified by Urban Atlas) present within a buffer around the photograph’s location. This work identifies important limitations and opportunities for using volunteered photographs as follows: (1) the false precision of a photograph’s location is less useful for identifying on-the-spot land cover than the information it can give on neighbouring combinations of land cover; (2) ground-acquired photographs, interpreted by a neural network, can supplement plan view imagery by identifying features which will never be discernible from above; (3) when dealing with contexts where there are very few exemplars of particular classes, an independent a posteriori weighting of existing scene attributes and categories can buffer against over-specificity. <s> BIB023 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Urban planning applications (energy audits, investment, etc.) 
require an understanding of built infrastructure and its environment, i.e., both low-level, physical features (amount of vegetation, building area and geometry etc.), as well as higher-level concepts such as land use classes (which encode expert understanding of socio-economic end uses). This kind of data is expensive and labor-intensive to obtain, which limits its availability (particularly in developing countries). We analyze patterns in land use in urban neighborhoods using large-scale satellite imagery data (which is available worldwide from third-party providers) and state-of-the-art computer vision techniques based on deep convolutional neural networks. For supervision, given the limited availability of standard benchmarks for remote-sensing data, we obtain ground truth land use class labels carefully sampled from open-source surveys, in particular the Urban Atlas land classification dataset of 20 land use classes across ~300 European cities. We use this data to train and compare deep architectures which have recently shown good performance on standard computer vision tasks (image classification and segmentation), including on geospatial data. Furthermore, we show that the deep representations extracted from satellite imagery of urban environments can be used to compare neighborhoods across several cities. We make our dataset available for other machine learning researchers to use for remote-sensing applications. <s> BIB024 | As Raad emphasized in , "When data volume swells beyond a human's ability to discern the patterns in it ... GIS, infused with artificial intelligence, can help executives make better decisions", we share the same vision that GIScience researchers need to bring M&DL into our community and start to build GeoAI. Early achievements in M&DL have thus far been greater for image data than for text BIB011 BIB012 (the main reasons are discussed in ). A major reason is the availability of big image repositories, such as ImageNet BIB002 , that support such work for benchmarking. For example, the well-known pre-trained CNN models (i.e., ConvNets) AlexNet BIB004 , VGG ConvNets BIB008 , and GoogLeNet BIB013 are trained on ImageNet BIB002 . Although substantial progress has been made in applying M&DL to image-based tasks, a range of challenges remain in RS and other geospatial image domains. One key challenge is related to leveraging image data collected by the increasing variety of drone-mounted sensors. Drones can easily collect large sets of image data, for example, in disaster management applications. In this context, DL has already been applied to building extraction in disaster situations , as well as avalanche support focused on finding victims BIB021 . Moving beyond "traditional" uses of supervised DL with image classification, one challenge is to develop interactive web apps that combine AL/ADL and VA to ask volunteers and domain experts to label a small set of data and then build a good classifier (a minimal sketch of such a labeling loop is given below), which can help to quickly classify the images and then plot them on a map. Doing so can help decision makers to get the big picture and generate insights in a quick and accurate manner. Such a system, of course, will require substantial testing to be usable in domains where life and property are at risk, but it is that risk that should drive research toward this objective. While M&DL for image classification has a longer history BIB011 BIB012 , success in handling NLP tasks, such as language modeling and sentiment analysis BIB022 , is catching up.
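As a toy version of the labeling loop just described, the sketch below runs pool-based least-confidence sampling over fixed feature vectors; in a real system the vectors would come from a pre-trained ConvNet such as those named above, and the "oracle" would be a volunteer or domain expert inspecting each image in a VA interface. The data sizes, the hidden ground truth, and the logistic model are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 64))       # stand-in for pre-trained CNN image features
y_hidden = (X[:, 0] + X[:, 1] > 0).astype(int)   # ground truth only the oracle sees

labeled = list(range(10))            # small seed set labeled up front
pool = list(range(10, len(X)))

for _ in range(15):                  # each round asks the human for one label
    clf = LogisticRegression().fit(X[labeled], y_hidden[labeled])
    conf = clf.predict_proba(X[pool]).max(axis=1)   # top-class confidence per item
    pick = pool[int(np.argmin(conf))]               # least-confident image
    labeled.append(pick)             # oracle (volunteer/expert) supplies its label
    pool.remove(pick)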
As Knight emphasizes, it is hard to envision how we will collaborate with AI machines without machines understanding our language, since language is the most powerful way we make sense of the world and interact with it. These advances in text processing are particularly important since massive amounts of unstructured text are generated each day; based on industry estimates, as much as 80% of the data generated may be unstructured . Estimates suggest that at least 60% of that unstructured text contains geospatial references BIB005 . These unstructured data signify and give meaning to geospatial information through natural language. However, GIScience has paid limited attention to unstructured data sources. An important step in moving from the unstructured text to meaningful information is to classify the text into categories relevant to target tasks (i.e., text classification, Appendix A.5). In Section 4, we have seen some successful applications using AL and ADL in the GIScience and RS fields. Even though most of these are based on RS imagery, with some on GPS trajectories, and only a few focus on geospatial text data, as outlined in the review above, advances in M&DL are rapidly being extended into a wide array of other domains, including to address NLP and other text-related challenges. Related to these image and NLP processing advances in M&DL, there are multiple GIScience and RS problems, such as geographic information retrieval (GIR), geospatial semantics, and geolocalization, to which VA, AL, and ADL-based strategies can be applied productively. We highlight just a few of these below. • Geospatial image based applications: Based on the advances achieved in M&DL, many promising geospatial applications using big geospatial image data sets are becoming possible. Diverse GIScience and RS problems can benefit from the methods we reviewed in this paper; potential applications include land use and land cover classification BIB014 BIB023 , identification and understanding of patterns and interests in urban environments BIB015 BIB024 , geospatial scene understanding BIB009 BIB018 , and content-based image retrieval BIB001 BIB010 . Another important research direction is image geolocalization (prediction of the geolocation of a query image BIB006 ); see BIB016 for an example of DL-based geolocalization using geo-tagged images, which did not touch on AL or VA. • Geospatial text based applications: GIR and spatial language processing have potential application to social media mining in domains such as emergency management. There have already been some successful examples of DL classification algorithms being applied to tackle GIScience problems relating to crisis management, sentiment analysis, sarcasm detection, and hate speech detection in tweets; see: BIB022 BIB019 BIB017 . A review of the existing geospatial semantic research can be found in , but DL, AL, and VA are not touched upon in that review. Thus, the research topics and challenges discussed there can find potential solutions using the methods we have investigated in this paper. For example, the methods we investigated here will be useful for semantic similarity and word-sense disambiguation, which are important components of GIR BIB003 . Through integrating GIR with VA, AL and/or ADL, domain experts can play an important role in the DL-empowered computational loop for steering the improvement of the machine learner's performance.
Recently, Adams and McKenzie used a character-level CNN to classify multilingual text, and their method can be improved using the "tool sets" we investigated in this paper (a minimal sketch of such a character-level classifier is given at the end of this section). Some specific application problems for which we believe that VA-enabled ADL has the potential to make a dramatic impact are: identification of documents (from tweets, through news stories, to blogs) that are "about" places; classification of geographic statements by scale; and retrieval of geographic statements about movement or events. • Geospatial text and image based applications: Beyond efforts to apply AL and related methods to text alone, text-oriented applications can be expanded with the fusion of text and geospatial images (e.g., RS imagery). See Cervone et al. BIB020 for an example in which RS and social media data (specifically, tweets and Flickr images) are fused for damage assessment during floods. The integration of VA and AL/ADL should also be explored as a mechanism to generate actionable insights from heterogeneous data sources in a quick manner. Deep learning shines where big labeled data is available. Thus, existing research on digital gazetteers that used big data analytics (see BIB007 for an example, where neither DL, AL, nor VA was used) can also be advanced using the methods reviewed in this paper. More specifically, the method used in BIB007 , which derives place types from (Flickr) photo tags, can be extended and enriched by the image classification and recognition methods from the geospatial image based applications mentioned above. Overall, based on the review above, we contend that GeoAI, as implemented via M&DL methods empowered with VA, AL, and ADL, will have a wide array of geospatial applications and thus has considerable potential to address major scientific and societal challenges.
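As promised above, here is a minimal character-level CNN text classifier in PyTorch. The alphabet size, sequence length, and number of classes are illustrative assumptions, not the settings used by Adams and McKenzie.

```python
# A minimal character-level CNN for text classification, in the spirit of
# the multilingual geotext classifier discussed above. All sizes here
# (alphabet, sequence length, classes) are illustrative assumptions.
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    def __init__(self, n_chars=70, n_classes=5):
        super().__init__()
        self.embed = nn.Embedding(n_chars, 16)       # char id -> 16-d vector
        self.conv = nn.Sequential(
            nn.Conv1d(16, 64, kernel_size=7), nn.ReLU(),
            nn.MaxPool1d(3),
            nn.Conv1d(64, 64, kernel_size=5), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),                 # global max pooling
        )
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                  # x: (batch, seq_len) of char ids
        e = self.embed(x).transpose(1, 2)  # -> (batch, 16, seq_len)
        h = self.conv(e).squeeze(-1)       # -> (batch, 64)
        return self.fc(h)

model = CharCNN()
dummy = torch.randint(0, 70, (8, 256))   # a batch of 8 encoded strings
logits = model(dummy)                    # -> (8, 5) class scores
```

Because the model consumes raw character ids rather than word tokens, it needs no language-specific tokenization, which is part of what makes character-level models attractive for multilingual geotext.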
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.1. Machine learning and Deep Learning <s> The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.1. Machine learning and Deep Learning <s> A new generation of data processing systems, including web search, Google’s Knowledge Graph, IBM’s Watson, and several different recommendation systems, combine rich databases with software driven by machine learning. The spectacular successes of these trained systems have been among the most notable in all of computing and have generated excitement in health care, finance, energy, and general business. But building them can be challenging, even for computer scientists with PhD-level training. If these systems are to have a truly broad impact, building them must become easier. We explore one crucial pain point in the construction of trained systems: feature engineering. Given the sheer size of modern datasets, feature developers must (1) write code with few effective clues about how their code will interact with the data and (2) repeatedly endure long system waits even though their code typically changes little from run to run. We propose brainwash, a vision for a feature engineering data system that could dramatically ease the ExploreExtract-Evaluate interaction loop that characterizes many trained system projects. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.1. Machine learning and Deep Learning <s> Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. 
It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors. <s> BIB003 | Machine learning (ML) [143] is a sub-field of computer science, in particular, artificial intelligence (AI), that focuses on algorithms for learning from data. Traditional ML relies on feature engineering, the process of using domain-specific prior knowledge to manually extract features from data BIB001 . The features are then used to generate a ML model, which can make predictions for new unseen data. In both ML and pattern recognition, a feature (sometimes also called a signal) [143] is an individual measurable attribute/property or characteristic of a phenomenon being observed. Features encode information from raw data that allows ML algorithms to predict the category of an unknown object (e.g., a piece of text or an image) or a value (e.g., stock price) BIB002 . Thus, any attribute that improves the ML algorithm's performance can serve as a feature. Deep learning (DL, i.e., deep neural nets) is a subset of ML, where ML is a subset of AI (see for a detailed introduction to the relations among the three domains of research and practice). DL can discover intricate hidden patterns from big data without feature engineering BIB003 . Feature engineering is a core, human labor intensive technique for traditional ML BIB001 BIB002 , and the potential to skip this often expensive step is one motivation for recent attention to DL. Furthermore, DL algorithm performance improves dramatically when data volume increases; thus, DL algorithms have better scalability than traditional ML algorithms for Big Data problems. The expensive process of feature engineering is skipped for DL, because DL can automatically learn features from data, but it must be replaced by much larger labeled data sets that can be as time-consuming to create as the process of feature engineering. While labeling a data set is easier than discovering the underlying features that characterize a category, the volume of data needed is the bottleneck for DL. This is why we need active deep learning (Section 3), to reduce the amount of data that must be labeled.
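To make the feature engineering distinction concrete, the toy sketch below takes the "traditional ML" route: a few hand-designed image statistics are computed before fitting a shallow classifier. The features, data, and labels are all illustrative; a DL model would instead consume the raw pixel array and learn its own feature hierarchy from (many more) labeled examples.

```python
# Feature engineering vs. learned representations, sketched on a toy
# image task. The hand-crafted features below stand in for the
# domain-knowledge step that DL is designed to replace.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def handcrafted_features(img):
    """Domain-knowledge features: brightness, contrast, edge energy."""
    gy, gx = np.gradient(img.astype(float))
    return [img.mean(), img.std(), np.hypot(gx, gy).mean()]

rng = np.random.RandomState(0)
images = rng.randint(0, 256, size=(100, 32, 32))   # toy grayscale images
labels = rng.randint(0, 2, size=100)               # toy binary labels

# Traditional ML route: extract the engineered features, then fit.
X = np.array([handcrafted_features(im) for im in images])
clf = RandomForestClassifier(random_state=0).fit(X, labels)

# A deep network would instead take the raw (100, 32, 32) array as input
# and learn its own feature hierarchy directly from labeled examples.
```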
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.2. Types of Learning Methods <s> From the Publisher: ::: This is an interdisciplinary book on neural networks, statistics and fuzzy systems. A unique feature is the establishment of a general framework for adaptive data modeling within which various methods from statistics, neural networks and fuzzy logic are presented. Chapter summaries, examples and case studies are also included.[Includes companion Web site with ... Software for use with the book. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.2. Types of Learning Methods <s> Semi-supervised learning is a learning paradigm concerned with the study of how computers and natural systems such as humans learn in the presence of both labeled and unlabeled data. Traditionally, learning has been studied either in the unsupervised paradigm (e.g., clustering, outlier detection) where all the data is unlabeled, or in the supervised paradigm (e.g., classification, regression) where all the data is labeled.The goal of semi-supervised learning is to understand how combining labeled and unlabeled data may change the learning behavior, and design algorithms that take advantage of such a combination. Semi-supervised learning is of great interest in machine learning and data mining because it can use readily available unlabeled data to improve supervised learning tasks when the labeled data is scarce or expensive. Semi-supervised learning also shows potential as a quantitative tool to understand human category learning, where most of the input is self-evidently unlabeled. In this introductory book, we present some popular semi-supervised learning models, including self-training, mixture models, co-training and multiview learning, graph-based methods, and semi-supervised support vector machines. For each model, we discuss its basic mathematical formulation. The success of semi-supervised learning depends critically on some underlying assumptions. We emphasize the assumptions made by each model and give counterexamples when appropriate to demonstrate the limitations of the different models. In addition, we discuss semi-supervised learning for cognitive psychology. Finally, we give a computational learning theoretic perspective on semi-supervised learning, and we conclude the book with a brief discussion of open questions in the field. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.2. Types of Learning Methods <s> We present a robust and real-time monocular six degree of freedom relocalization system. Our system trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking 5ms per frame to compute. It obtains approximately 2m and 3 degrees accuracy for large scale outdoor scenes and 0.5m and 5 degrees accuracy indoors. This is achieved using an efficient 23 layer deep convnet, demonstrating that convnets can be used to solve complicated out of image plane regression problems. This was made possible by leveraging transfer learning from large scale classification data. 
We show that the PoseNet localizes from high level features and is robust to difficult lighting, motion blur and different camera intrinsics where point based SIFT registration fails. Furthermore we show how the pose feature that is produced generalizes to other scenes allowing us to regress pose with only a few dozen training examples. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.2. Types of Learning Methods <s> This book offers a comprehensive review of multilabel techniques widely used to classify and label texts, pictures, videos and music in the Internet. A deep review of the specialized literature on the field includes the available software needed to work with this kind of data. It provides the user with the software tools needed to deal with multilabel data, as well as step by step instruction on how to use them. The main topics covered are: The special characteristics of multi-labeled data and the metrics available to measure them. The importance of taking advantage of label correlations to improve the results. The different approaches followed to face multi-label classification. The preprocessing techniques applicable to multi-label datasets. The available software tools to work with multi-label data. This book is beneficial for professionals and researchers in a variety of fieldsbecause of the wide range of potential applications for multilabel classification. Besides its multiple applications to classify different types of online information, it is also useful in many other areas, such as genomics and biology. No previous knowledge about the subject is required. The book introduces all the needed concepts to understand multilabel data characterization, treatment and evaluation. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.2. Types of Learning Methods <s> The United States spends more than $1B each year on initiatives such as the American Community Survey (ACS), a labor-intensive door-to-door study that measures statistics relating to race, gender, education, occupation, unemployment, and other demographic factors. Although a comprehensive source of data, the lag between demographic changes and their appearance in the ACS can exceed half a decade. As digital imagery becomes ubiquitous and machine vision techniques improve, automated data analysis may provide a cheaper and faster alternative. Here, we present a method that determines socioeconomic trends from 50 million images of street scenes, gathered in 200 American cities by Google Street View cars. Using deep learning-based computer vision techniques, we determined the make, model, and year of all motor vehicles encountered in particular neighborhoods. Data from this census of motor vehicles, which enumerated 22M automobiles in total (8% of all automobiles in the US), was used to accurately estimate income, race, education, and voting patterns, with single-precinct resolution. (The average US precinct contains approximately 1000 people.) The resulting associations are surprisingly simple and powerful. For instance, if the number of sedans encountered during a 15-minute drive through a city is higher than the number of pickup trucks, the city is likely to vote for a Democrat during the next Presidential election (88% chance); otherwise, it is likely to vote Republican (82%). 
Our results suggest that automated systems for monitoring demographic trends may effectively complement labor-intensive approaches, with the potential to detect trends with fine spatial resolution, in close to real time. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.2. Types of Learning Methods <s> Deep learning has shown to be effective for robust and real-time monocular image relocalisation. In particular, PoseNet [22] is a deep convolutional neural network which learns to regress the 6-DOF camera pose from a single image. It learns to localize using high level features and is robust to difficult lighting, motion blur and unknown camera intrinsics, where point based SIFT registration fails. However, it was trained using a naive loss function, with hyper-parameters which require expensive tuning. In this paper, we give the problem a more fundamental theoretical treatment. We explore a number of novel loss functions for learning camera pose which are based on geometry and scene reprojection error. Additionally we show how to automatically learn an optimal weighting to simultaneously regress position and orientation. By leveraging geometry, we demonstrate that our technique significantly improves PoseNets performance across datasets ranging from indoor rooms to a small city. <s> BIB006 | There are three major types of learning methods in ML (and DL, since DL is a branch of ML) BIB004 : supervised learning, unsupervised learning, and semi-supervised learning. Appendix A.2.1. Supervised Learning Supervised learning is the ML task of inferring a function from labeled training data. In supervised learning, the data instances are labeled by human annotators or experts in a problem domain BIB001 . Labeling refers to the process of annotating each piece of text or image with one of a pre-defined set of class names. ML methods can use this information to learn a model that can infer the knowledge needed to automatically label new (i.e., never seen before) data instances. Supervised ML methods usually divide the data set into two (i.e., training and test) or three (i.e., training, validation, and test) disjoint subsets. The labels of instances in the test set will not be given to the ML algorithm, but will only be used to evaluate its performance. The main idea of supervised learning is to build a ML model (e.g., a classifier for classification tasks, or a regression model for regression tasks) using the training data set and using the testing data set to validate the model's performance. With supervised learning there are several metrics to measure success. These metrics can be used to judge the adequacy of a method in particular situations and to compare the effectiveness of different methods over various situations . Appendix A.2.2. Unsupervised Learning Unsupervised learning is the ML task of inferring a function to describe hidden structure from "unlabeled" data (i.e., without human annotation). Since the examples given to the learner are unlabeled, expert knowledge is not a foundation of the learning and there is no evaluation of the accuracy of the structure learned by the relevant algorithm. A clustering algorithm called k-means and another algorithm called principal component analysis (PCA) are popular unsupervised ML algorithms, among others. Appendix A.2.3. 
Semi-Supervised Learning Semi-supervised learning BIB002 is a learning paradigm concerned with the study of how computers and humans learn using both labeled and unlabeled data. One goal of research in semi-supervised learning is to understand how combining labeled and unlabeled data may change the learning behavior, and to design algorithms that take advantage of such a combination. A survey focusing on semi-supervised learning for classification can be found in . In the survey, Zhu emphasized that there are some similarities between ML and human learning. Understanding human cognitive model(s) can lead to novel ML approaches . Do humans learn in a semi-supervised manner? The answer is "yes". Humans accumulate "unlabeled" input data, which (often unconsciously) are used to help build the connection between "labels" and input once labeled data is provided . As emphasized in Section 1, labeled data sets are often difficult, expensive, and/or time consuming to obtain, as they require the efforts of experienced human annotators or domain experts. Semi-supervised learning addresses this problem by using a large amount of unlabeled data, together with a relatively small amount of labeled data, to build good classifiers (Appendix A.3). Semi-supervised learning has received considerable attention both in theory and in practice in ML and data mining because it requires less human effort and gives higher accuracy than supervised methods . Appendix A.2.4. Brief Discussion of Learning Types When a data set contains both labeled and unlabeled samples, ML methods can combine techniques from the two previous categories (i.e., supervised and unsupervised) to accomplish semi-supervised learning tasks BIB002 . Labeled data instances can be used to induce a model, as in supervised learning; the model can then be refined with the information from unlabeled samples. Analogously, unsupervised tasks can be improved by introducing the clues given by the labeled instances. Active learning (Section 3.1) is a form of semi-supervised learning, and most DL algorithms (e.g., CNN, RNN, and LSTM) belong to supervised learning. In this paper, we focus on M&DL for classification (where the output of the process is categorical/discrete). Supervised/semi-supervised ML is also used for regression tasks (where the output of the process is continuous). The application of regression is beyond the scope of this paper; interested readers can find recent overviews in BIB003 BIB005 BIB006 . Appendix A.3. Classifier An ML algorithm that implements a type of classification task (Appendix A.4) is known as a classifier. The most popular ML algorithms for classification problems are: logistic regression, naive Bayes, and support vector machine (SVM). The convolutional neural network (CNN), the recurrent neural network (RNN), and two variants of RNN, long short-term memory (LSTM) and the gated recurrent unit (GRU), are among the most commonly used DL algorithms (also called architectures) for classification problems.
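A minimal sketch of the semi-supervised setting described above is given below, using scikit-learn's self-training wrapper around a base classifier (Appendix A.3). The synthetic data and the fraction of hidden labels are illustrative assumptions.

```python
# A minimal semi-supervised sketch: self-training wraps a base classifier
# and iteratively assigns pseudo-labels to the unlabeled pool (marked -1).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)
y_partial = y.copy()
rng = np.random.RandomState(0)
unlabeled = rng.rand(len(y)) < 0.8      # hide ~80% of the labels
y_partial[unlabeled] = -1               # -1 marks "unlabeled" in sklearn

base = SVC(probability=True)            # base classifier (Appendix A.3)
model = SelfTrainingClassifier(base).fit(X, y_partial)
print(model.score(X, y))                # accuracy against the true labels
```

Active learning differs from self-training in one key respect: instead of trusting its own most confident pseudo-labels, the learner asks a human to label its most informative unlabeled instances.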
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> Grouping images into (semantically) meaningful categories using low level visual features is a challenging and important problem in content based image retrieval. Using binary Bayesian classifiers, we attempt to capture high level concepts from low level image features under the constraint that the test image does belong to one of the classes of interest. Specifically, we consider the hierarchical classification of vacation images; at the highest level, images are classified into indoor/outdoor classes, outdoor images are further classified into city/landscape classes, and finally, a subset of landscape images is classified into sunset, forest, and mountain classes. We demonstrate that a small codebook (the optimal size of codebook is selected using a modified MDL criterion) extracted from a vector quantizer can be used to estimate the class-conditional densities of the observed features needed for the Bayesian methodology. On a database of 6931 vacation photographs, our system achieved an accuracy of 90.5% for indoor vs. outdoor classification, 95.3% for city vs. landscape classification, 96.6% for sunset vs. forest and mountain classification, and 95.5% for forest vs. mountain classification. We further develop a learning paradigm to incrementally train the classifiers as additional training samples become available and also show preliminary results for feature size reduction using clustering techniques. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> For multi-class classification with Support Vector Machines (SVMs) a binary decision tree architecture is proposed for computational efficiency. The proposed SVM- based binary tree takes advantage of both the efficient computation of the tree architecture and the high classification accuracy of SVMs. A modified Self-Organizing Map (SOM), K- SOM (Kernel-based SOM), is introduced to convert the multi-class problems into binary trees, in which the binary decisions are made by SVMs. For consistency between the SOM and SVM the K-SOM utilizes distance measures at the kernel space, not at the input space. Also, by allowing overlaps in the binary decision tree, it overcomes the performance degradation of the tree structure, and shows classification accuracy comparable to those of the popular multi-class SVM approaches with "one-to-one" and "one-to-the others". Keywords—Support Vector Machine, multi-class classification, Self-Organizing Map, binary decision tree <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> The “one against one” and the “one against all” are the two most popular strategies for multi-class SVM; however, according to the literature review, it seems impossible to conclude which one is better for handwriting recognition. Thus, we compared these two classical strategies on two different handwritten character recognition problems. Several post-processing methods for estimating posterior probability were also evaluated and the results were compared with the ones obtained using MLP. Finally, the “one against all” strategy appears significantly more accurate for digit recognition, while the difference between the two strategies is much less obvious with upper-case letters. 
Besides, the “one against one” strategy is substantially faster to train and seems preferable for problems with a very large number of classes. To conclude, SVMs allow significantly better estimation of probabilities than MLP, which is promising from the point of view of their incorporation into handwriting recognition systems. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> This paper deals with categorization tasks where categories are partially ordered to form a hierarchy. First, it introduces the notion of consistent classification which takes into account the semantics of a class hierarchy. Then, it presents a novel global hierarchical approach that produces consistent classification. This algorithm with AdaBoost as the underlying learning procedure significantly outperforms the corresponding “flat” approach, i.e. the approach that does not take into account the hierarchical information. In addition, the proposed algorithm surpasses the hierarchical local top-down approach on many synthetic and real tasks. For evaluation purposes, we use a novel hierarchical evaluation measure that has some attractive properties: it is simple, requires no parameter tuning, gives credit to partially correct classification and discriminates errors by both distance and depth in a class hierarchy. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> Hierarchical multi-label classification (HMC) is a variant of classification where instances may belong to multiple classes at the same time and these classes are organized in a hierarchy. This article presents several approaches to the induction of decision trees for HMC, as well as an empirical study of their use in functional genomics. We compare learning a single HMC tree (which makes predictions for all classes together) to two approaches that learn a set of regular classification trees (one for each class). The first approach defines an independent single-label classification task for each class (SC). Obviously, the hierarchy introduces dependencies between the classes. While they are ignored by the first approach, they are exploited by the second approach, named hierarchical single-label classification (HSC). Depending on the application at hand, the hierarchy of classes can be such that each class has at most one parent (tree structure) or such that classes may have multiple parents (DAG structure). The latter case has not been considered before and we show how the HMC and HSC approaches can be modified to support this setting. We compare the three approaches on 24 yeast data sets using as classification schemes MIPS's FunCat (tree structure) and the Gene Ontology (DAG structure). We show that HMC trees outperform HSC and SC trees along three dimensions: predictive accuracy, model size, and induction time. We conclude that HMC trees should definitely be considered in HMC tasks where interpretable models are desired. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> This paper presents a systematic analysis of twenty four performance measures used in the complete spectrum of Machine Learning classification tasks, i.e., binary, multi-class, multi-labelled, and hierarchical. 
For each classification task, the study relates a set of changes in a confusion matrix to specific characteristics of data. Then the analysis concentrates on the type of changes to a confusion matrix that do not change a measure, therefore, preserve a classifier's evaluation (measure invariance). The result is the measure invariance taxonomy with respect to all relevant label distribution changes in a classification problem. This formal analysis is supported by examples of applications where invariance properties of measures lead to a more reliable evaluation of classifiers. Text classification supplements the discussion with several case studies. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> The widely known binary relevance method for multi-label classification, which considers each label as an independent binary problem, has been sidelined in the literature due to the perceived inadequacy of its label-independence assumption. Instead, most current methods invest considerable complexity to model interdependencies between labels. This paper shows that binary relevance-based methods have much to offer, especially in terms of scalability to large datasets. We exemplify this with a novel chaining method that can model label correlations while maintaining acceptable computational complexity. Empirical evaluation over a broad range of multi-label datasets with a variety of evaluation metrics demonstrates the competitiveness of our chaining method against related and state-of-the-art methods, both in terms of predictive performance and time complexity. <s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> We explore the task of automatic classification of texts by the emotions expressed. Our novel method arranges neutrality, polarity and emotions hierarchically. We test the method on two datasets and show that it outperforms the corresponding "flat" approach, which does not take into account the hierarchical information. The highly imbalanced structure of most of the datasets in this area, particularly the two datasets with which we worked, has a dramatic effect on the performance of classification. The hierarchical approach helps alleviate the effect. <s> BIB008 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> Many real-world applications involve multi-label classification, in which the labels are organized in the form of a tree or directed acyclic graph (DAG). However, current research efforts typically ignore the label dependencies or can only exploit the dependencies in tree-structured hierarchies. In this paper, we present a novel hierarchical multi-label classification algorithm which can be used on both tree- and DAG-structured hierarchies. The key idea is to formulate the search for the optimal consistent multi-label as the finding of the best subgraph in a tree/DAG. Using a simple greedy strategy, the proposed algorithm is computationally efficient, easy to implement, does not suffer from the problem of insufficient/skewed training data in classifier training, and can be readily used on large hierarchies. Theoretical results guarantee the optimality of the obtained solution. Experiments are performed on a large number of functional genomics data sets. 
The proposed method consistently outperforms the state-of-the-art method on both tree- and DAG-structured hierarchies. <s> BIB009 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> In this paper we present the PHOCS-2 algorithm, which extracts a “Predicted Hierarchy Of ClassifierS”. The extracted hierarchy helps us to enhance performance of flat classification. Nodes in the hierarchy contain classifiers. Each intermediate node corresponds to a set of classes and each leaf node corresponds to a single class. In the PHOCS-2 we make estimation for each node and achieve more precise computation of false positives, true positives and false negatives. Stopping criteria are based on the results of the flat classification. The proposed algorithm is validated against nine datasets. <s> BIB010 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> Comprehensive Coverage of the Entire Area of Classification Research on the problem of classification tends to be fragmented across such areas as pattern recognition, database, data mining, and machine learning. Addressing the work of these different communities in a unified way, Data Classification: Algorithms and Applications explores the underlying algorithms of classification as well as applications of classification in a variety of problem domains, including text, multimedia, social network, and biological data. This comprehensive book focuses on three primary aspects of data classification: Methods-The book first describes common techniques used for classification, including probabilistic methods, decision trees, rule-based methods, instance-based methods, support vector machine methods, and neural networks. Domains-The book then examines specific methods used for data domains such as multimedia, text, time-series, network, discrete sequence, and uncertain data. It also covers large data sets and data streams due to the recent importance of the big data paradigm. Variations-The book concludes with insight on variations of the classification process. It discusses ensembles, rare-class learning, distance function learning, active learning, visual learning, transfer learning, and semi-supervised learning as well as evaluation aspects of classifiers. <s> BIB011 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> We address the task of hierarchical multi-label classification (HMC). HMC is a task of structured output prediction where the classes are organized into a hierarchy and an instance may belong to multiple classes. In many problems, such as gene function prediction or prediction of ecological community structure, classes inherently follow these constraints. The potential for application of HMC was recognized by many researchers and several such methods were proposed and demonstrated to achieve good predictive performances in the past. However, there is no clear understanding when is favorable to consider such relationships (hierarchical and multi-label) among classes, and when this presents unnecessary burden for classification methods. To this end, we perform a detailed comparative study over 8 datasets that have HMC properties. We investigate two important influences in HMC: the multiple labels per example and the information about the hierarchy. 
More specifically, we consider four machine learning tasks: multi-label classification, hierarchical multi-label classification, single-label classification and hierarchical single-label classification. To construct the predictive models, we use predictive clustering trees (a generalized form of decision trees), which are able to tackle each of the modelling tasks listed. Moreover, we investigate whether the influence of the hierarchy and the multiple labels carries over for ensemble models. For each of the tasks, we construct a single tree and two ensembles (random forest and bagging). The results reveal that the hierarchy and the multiple labels do help to obtain a better single tree model, while this is not preserved for the ensemble models. <s> BIB012 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> Multi-label learning studies the problem where each example is represented by a single instance while associated with a set of labels simultaneously. During the past decade, significant amount of progresses have been made toward this emerging machine learning paradigm. This paper aims to provide a timely review on this area with emphasis on state-of-the-art multi-label learning algorithms. Firstly, fundamentals on multi-label learning including formal definition and evaluation metrics are given. Secondly and primarily, eight representative multi-label learning algorithms are scrutinized under common notations with relevant analyses and discussions. Thirdly, several related learning settings are briefly summarized. As a conclusion, online resources and open research problems on multi-label learning are outlined for reference purposes. <s> BIB013 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> In hierarchical classification, the prediction paths may be required to always end at leaf nodes. This is called mandatory leaf node prediction (MLNP) and is particularly useful when the leaf nodes have much stronger semantic meaning than the internal nodes. However, while there have been a lot of MLNP methods in hierarchical multiclass classification, performing MLNP in hierarchical multilabel classification is much more difficult. In this paper, we propose a novel MLNP algorithm that (i) considers the global hierarchy structure; and (ii) can be used on hierarchies of both trees and DAGs. We show that one can efficiently maximize the joint posterior probability of all the node labels by a simple greedy algorithm. Moreover, this can be further extended to the minimization of the expected symmetric loss. Experiments are performed on a number of real-world data sets with tree- and DAG-structured label hierarchies. The proposed method consistently outperforms other hierarchical and flat multilabel classification methods. <s> BIB014 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> Hierarchical multi-label classification assigns a document to multiple hierarchical classes. In this paper we focus on hierarchical multi-label classification of social text streams. Concept drift, complicated relations among classes, and the limited length of documents in social text streams make this a challenging problem. 
Our approach includes three core ingredients: short document expansion, time-aware topic tracking, and chunk-based structural learning. We extend each short document in social text streams to a more comprehensive representation via state-of-the-art entity linking and sentence ranking strategies. From documents extended in this manner, we infer dynamic probabilistic distributions over topics by dividing topics into dynamic "global" topics and "local" topics. For the third and final phase we propose a chunk-based structural optimization strategy to classify each document into multiple classes. Extensive experiments conducted on a large real-world dataset show the effectiveness of our proposed method for hierarchical multi-label classification of social text streams. <s> BIB015 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> Multilabel learning has become a relevant learning paradigm in the past years due to the increasing number of fields where it can be applied and also to the emerging number of techniques that are being developed. This article presents an up-to-date tutorial about multilabel learning that introduces the paradigm and describes the main contributions developed. Evaluation measures, fields of application, trending topics, and resources are also presented. <s> BIB016 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> This book offers a comprehensive review of multilabel techniques widely used to classify and label texts, pictures, videos and music in the Internet. A deep review of the specialized literature on the field includes the available software needed to work with this kind of data. It provides the user with the software tools needed to deal with multilabel data, as well as step by step instruction on how to use them. The main topics covered are: The special characteristics of multi-labeled data and the metrics available to measure them. The importance of taking advantage of label correlations to improve the results. The different approaches followed to face multi-label classification. The preprocessing techniques applicable to multi-label datasets. The available software tools to work with multi-label data. This book is beneficial for professionals and researchers in a variety of fieldsbecause of the wide range of potential applications for multilabel classification. Besides its multiple applications to classify different types of online information, it is also useful in many other areas, such as genomics and biology. No previous knowledge about the subject is required. The book introduces all the needed concepts to understand multilabel data characterization, treatment and evaluation. <s> BIB017 | Classification in M&DL is a predictive task, which aims to learn from existing labeled data and predict the label for new data BIB011 . The labels representing classes or categories are finite and discrete (otherwise the task would be regression, instead of classification) . In supervised/semi-supervised ML (Appendixes A.2.1 and A.2.3), classification tasks include the following types BIB017 BIB006 : binary, multi-class, multi-label, and hierarchical classifications. See Figure A1 . 
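Before the individual types are described, the sketch below shows how their label structures are commonly represented in code: a single value per instance for binary and multi-class tasks, and a fixed-length binary vector per instance for multi-label tasks. The class names are purely illustrative.

```python
# Label representations for the classification types in Figure A1
# (toy examples; class names are illustrative).
from sklearn.preprocessing import MultiLabelBinarizer

y_binary = [0, 1, 1, 0]                      # binary: one of two classes
y_multiclass = ["city", "forest", "water"]   # multi-class: exactly one label

# Multi-label: each instance carries a *set* of labels, encoded as a
# fixed-length binary vector (one element per known label).
y_multilabel = [{"road", "building"}, {"vegetation"}, {"road"}]
mlb = MultiLabelBinarizer()
print(mlb.fit_transform(y_multilabel))
# [[1 1 0]
#  [0 0 1]
#  [0 1 0]]   with mlb.classes_ == ['building', 'road', 'vegetation']
```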
Appendix A.4.1. Binary Classification Binary classification is the task of classifying the instances of a data set into one of two pre-defined and mutually exclusive classes (e.g., classifying the sentiment of a piece of text as "positive" or "negative"). Appendix A.4.2. Multi-Class Classification Multi-class classification (also called multiclass classification or multinomial classification) refers to the task of classifying instances into one and only one of a set of (more than two) pre-defined and mutually exclusive classes BIB017 (e.g., adding a "neutral" class to the "positive" and "negative" in sentiment analysis). Multi-class classification can be seen as a generalization of binary classification (Appendix A.4.1). Many multi-class classification algorithms rely on binarization , a method that iteratively trains a binary classifier for each class against the others, following a one-vs-all (OVA) (also called one-against-all (OAA) or one-vs-rest (OVR)) approach, or for each pair of classes, using a one-vs-one (OVO) (also called one-against-one (OAO)) technique [143] . A comparison between OAO and OAA can be found in BIB003 for handwriting recognition with SVMs. Appendix A.4.3. Multi-Label Classification Both binary and multi-class classifications are "single-label" methods (thus, binary/multi-class classification is also called single-label classification in the literature BIB012 ), where each instance is only associated with a single class label (see Figure A1a,b for an illustration). By contrast, multi-label classification (also written multilabel classification) produces a labeled data set where each instance is associated with a vector of output values BIB017 BIB007 BIB013 BIB016 , instead of only one value. The length of this vector is fixed according to the number of different, pre-defined, and not mutually exclusive labels in the data set. Each element of the vector will be a binary value, indicating if the corresponding label is true for the sample or not. Several labels can be active simultaneously. Each distinct combination of labels is known as a labelset BIB017 . Figure A1c provides one of the most common multi-label applications, image labeling. The data set has four labels in total and each image can be assigned any of them, or even all at once if there were an image in which all four concepts corresponding to the labels appear. Multi-label classification has its roots as a solution for tagging documents with several but not mutually exclusive categories (e.g., a piece of text might be about any of: religion, politics, finance, and education at the same time or none of these). Multi-label classification is currently applied in many fields, most of them related to automatic labeling of social media resources such as images, music, video, news, and blog posts BIB017 . Appendix A.4.4. Hierarchical Classification Hierarchical classification, as the name implies, differs from the three types discussed above (Appendixes A.4.1-A.4.3), which all consider each class to be at the same level, called flat classification (flat here means non-hierarchical BIB008 ). For hierarchical classification, classes are defined at multiple levels and are organized in hierarchies BIB004 , as illustrated in Figure A1d . The hierarchy is predefined and cannot be changed during classification. The categories are partially ordered, usually from more generic to more specific BIB008 . In hierarchical classification, the output labels reside on a tree or directed acyclic graph (DAG) structured hierarchy BIB014 BIB005 BIB009 . Silla and Freitas provide a survey of hierarchical classification across different application domains. Many ML classification algorithms are flat, where they simply ignore the label structure and treat the labels as a loose set.
By contrast, hierarchical classification algorithms utilize the hierarchical relationships between labels in making predictions; they can often predict better than flat approaches BIB008 BIB014 . Ghazi et al. BIB008 explored text classification based on emotions expressed in the text. Their method organized neutrality, polarity, and emotions hierarchically. The authors tested the method on two datasets and showed that it outperforms the corresponding "flat" approach. However, Sapozhnikov and Ulanov BIB010 pointed out that in some cases classification performance cannot be enhanced using a hierarchy of labels. Some authors showed that flat classification outperforms a hierarchical one in the presence of a large number of labels (see later in this section for further discussion of a systematic comparison between hierarchical and flat classification). Hierarchical classification combined with single-label classification (Appendix A.4.3) is called hierarchical single-label classification (HSC) in the literature BIB012 . Vailaya et al. BIB001 provided an early example of hierarchical classification combined with binary classification (Appendix A.4.1). The authors employed binary Bayesian classifiers to perform hierarchical classification of vacation images. The results of their experiments showed that high-level concepts can be detected from images if each image can be correctly classified into pre-defined categories. Hierarchical classification has also been integrated with multi-class classification (Appendix A.4.2); see BIB002 for examples. Kowsari et al. presented a new approach to hierarchical multi-class text classification, in which the authors employed stacks of DL architectures to provide specialized understanding at each level of the text (document) hierarchy. Their experiment ran on a data set of documents from the Web of Science, and the authors employed a hierarchy of two levels: level-1 (which they also called the parent level) contains classes such as "Computer Science" and "Medical Sciences", and at level-2 (the child level) the parent class "Computer Science" has sub-classes such as "Computer Graphics" and "Machine Learning". Their results showed that combinations of RNN at the higher level (i.e., level-1 or parent level in their experiment) and CNN at the lower level (i.e., level-2 or child level) achieve much better and more consistent performance than those obtained by conventional approaches using naive Bayes or SVM. Their results also showed that DL methods can improve document classification performance, and that they can extend methods that only considered the multi-class problem so that documents can be classified within a hierarchy with better performance. Hierarchical classification has been integrated with multi-label classification (Appendix A.4.3), called hierarchical multi-label classification (HMC) in the literature BIB012 BIB015 . HMC is a variant of classification where the pre-defined classes are organized in a hierarchy and each instance may belong to multiple classes simultaneously BIB012 BIB005 . Ren et al. BIB015 conducted extensive experiments on a large real-world data set and their results showed the effectiveness of their method for HMC of social text streams. HMC has received attention because many real-world classification scenarios are multi-label and the labels are normally hierarchical in nature.
However, research has not yet established when it is proper to consider such relationships (hierarchical and multi-label) among classes, and when this presents an unnecessary burden for classification methods. To address this problem, Levatic et al. BIB012 conducted a comparative study over 8 data sets that have HMC properties. The authors investigated two important influences in HMC: multiple labels per example and information about the hierarchy. Specifically, Levatic et al. considered four ML classification tasks: multi-label classification (Appendix A.4.3), HMC, single-label classification (Appendix A.4.3), and HSC. The authors concluded that the inclusion of hierarchical information in the model construction phase for single trees improves the predictive performance, whether HMC trees or the HSC tree architecture is used. HMC trees should be used on domains with a well-populated class hierarchy (L > 2), while the HSC tree architecture performs better if the number of labels per example is closer to one. Appendix A.4.5. Evaluation Metrics for Classification Tasks Different types of classification tasks need different evaluation metrics. Sokolova and Lapalme BIB006 systematically analyzed and summarized twenty-four performance measures used in ML classification tasks (i.e., binary, multi-class, multi-label, and hierarchical) in tables (with formulas and concise descriptions of their evaluation focus). Their formal analysis was supported by examples of applications where invariance properties of measures lead to a more reliable evaluation of classifiers (Appendix A.3).
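A small sketch of a few of the flat-classification measures discussed by Sokolova and Lapalme is given below, computed with scikit-learn. The toy predictions are illustrative; for multi-label or hierarchical tasks, the averaging scheme (micro vs. macro) and hierarchy-aware measures must be chosen explicitly.

```python
# Common evaluation measures for flat classification on a toy example.
from sklearn.metrics import accuracy_score, classification_report, f1_score

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 0]

print(accuracy_score(y_true, y_pred))             # fraction correct
print(f1_score(y_true, y_pred, average="macro"))  # unweighted mean over classes
print(classification_report(y_true, y_pred))      # per-class P/R/F1 table
```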
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.5. Text and Image Classifications <s> The automated categorization (or classification) of texts into predefined categories has witnessed a booming interest in the last 10 years, due to the increased availability of documents in digital form and the ensuing need to organize them. In the research community the dominant approach to this problem is based on machine learning techniques: a general inductive process automatically builds a classifier by learning, from a set of preclassified documents, the characteristics of the categories. The advantages of this approach over the knowledge engineering approach (consisting in the manual definition of a classifier by domain experts) are a very good effectiveness, considerable savings in terms of expert labor power, and straightforward portability to different domains. This survey discusses the main approaches to text categorization that fall within the machine learning paradigm. We will discuss in detail issues pertaining to three different problems, namely, document representation, classifier construction, and classifier evaluation. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.5. Text and Image Classifications <s> Active learning is a machine learning technique that selects the most informative samples for labeling and uses them as training data. It has been widely explored in multimedia research community for its capability of reducing human annotation effort. In this article, we provide a survey on the efforts of leveraging active learning in multimedia annotation and retrieval. We mainly focus on two application domains: image/video annotation and content-based image retrieval. We first briefly introduce the principle of active learning and then we analyze the sample selection criteria. We categorize the existing sample selection strategies used in multimedia annotation and retrieval into five criteria: risk reduction, uncertainty, diversity, density and relevance. We then introduce several classification models used in active learning-based multimedia annotation and retrieval, including semi-supervised learning, multilabel learning and multiple instance learning. We also provide a discussion on several future trends in this research direction. In particular, we discuss cost analysis of human annotation and large-scale interactive multimedia annotation. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.5. Text and Image Classifications <s> Performing exhaustive searches over a large number of text documents can be tedious, since it is very hard to formulate search queries or define filter criteria that capture an analyst's information need adequately. Classification through machine learning has the potential to improve search and filter tasks encompassing either complex or very specific information needs, individually. Unfortunately, analysts who are knowledgeable in their field are typically not machine learning specialists. Most classification methods, however, require a certain expertise regarding their parametrization to achieve good results. Supervised machine learning algorithms, in contrast, rely on labeled data, which can be provided by analysts. 
However, the effort for labeling can be very high, which shifts the problem from composing complex queries or defining accurate filters to another laborious task, in addition to the need for judging the trained classifier's quality. We therefore compare three approaches for interactive classifier training in a user study. All of the approaches are potential candidates for the integration into a larger retrieval system. They incorporate active learning to various degrees in order to reduce the labeling effort as well as to increase effectiveness. Two of them encompass interactive visualization for letting users explore the status of the classifier in context of the labeled documents, as well as for judging the quality of the classifier in iterative feedback loops. We see our work as a step towards introducing user controlled classification methods in addition to text search and filtering for increasing recall in analytics scenarios involving large corpora. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.5. Text and Image Classifications <s> Rapid crisis response requires real-time analysis of messages. After a disaster happens, volunteers attempt to classify tweets to determine needs, e.g., supplies, infrastructure damage, etc. Given labeled data, supervised machine learning can help classify these messages. Scarcity of labeled data causes poor performance in machine training. Can we reuse old tweets to train classifiers? How can we choose labeled tweets for training? Specifically, we study the usefulness of labeled data of past events. Do labeled tweets in different language help? We observe the performance of our classifiers trained using different combinations of training sets obtained from past disasters. We perform extensive experimentation on real crisis datasets and show that the past labels are useful when both source and target events are of the same type (e.g. both earthquakes). For similar languages (e.g., Italian and Spanish), cross-language domain adaptation was useful; however, for different languages (e.g., Italian and English), the performance decreased. <s> BIB004 | Text classification and image classification are two important applications of classification tasks in ML (Appendix A.4). Image classification is the task of classifying images into pre-defined classes (i.e., labels). Image classification can be applied to many real-world problems, for example, retrieval of all images that contain (damaged) roads. A survey of multimedia (i.e., images and videos) annotation and retrieval using active learning (Section 3.1) can be found in BIB002 . A review of deep learning algorithms in computer vision for tasks such as image classification and image retrieval can be found in . Text classification (also called text categorization), analogous to image classification, is the task of classifying text into pre-defined categories. Text classification in ML is a fundamental step in making large repositories of unstructured text searchable and has important applications in the real world BIB003 . For example, automatically tagging social media messages during natural disasters by topics can facilitate information retrieval for crisis management BIB004 . Text classification is also closely related to standard natural language processing (NLP) problems such as named entity recognition (NER), in which words are classified into categories such as person, location, and organization.
Some of the best methods to accomplish this task are ML-based (e.g., Stanford NER [221, 222] ). A comprehensive review of text classification methods and results can be found in BIB001 , including the evaluation of text classifiers, and in particular measures of text categorization effectiveness. Significance tests in the evaluation of text classification methods can be found in . |
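To make the supervised text classification pipeline discussed above concrete, here is a minimal sketch (ours, not taken from any of the surveyed papers) using the scikit-learn library, which is assumed to be installed; the tiny corpus and its "damage"/"supplies" labels are invented for illustration, echoing the crisis-message tagging example.

    # Minimal supervised text classification sketch (assumes scikit-learn).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented toy corpus of pre-classified documents and their labels.
    docs = ["bridge collapsed after the earthquake",
            "volunteers distribute water and food",
            "road blocked by debris near the station",
            "medical supplies needed at the shelter"]
    labels = ["damage", "supplies", "damage", "supplies"]

    # The classifier is built automatically from labeled examples,
    # as in the machine learning approach to text categorization.
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(docs, labels)
    print(clf.predict(["building collapsed downtown"]))  # expected: ['damage']

A real system would of course train on thousands of labeled documents and evaluate with the effectiveness measures surveyed in BIB001.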
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.6. Word Embedding <s> The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.6. Word Embedding <s> We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.6. Word Embedding <s> Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.6. Word Embedding <s> Context-predicting models (more commonly known as embeddings or neural language models) are the new kids on the distributional semantics block. Despite the buzz surrounding these models, the literature is still lacking a systematic comparison of the predictive models with classic, count-vector-based distributional semantic approaches.
In this paper, we perform such an extensive evaluation, on a wide range of lexical semantics tasks and across many parameter settings. The results, to our own surprise, show that the buzz is fully justified, as the context-predicting models obtain a thorough and resounding victory against their count-based counterparts. <s> BIB004 | We have introduced text and image classifications above (Appendix A.5). When using DL algorithms for text classification and image classification, one of the big technical differences is that images have matrix representations and thus can be directly fed into deep neural nets, whereas text data must first be translated into word embeddings. In NLP and DL, a word embedding is a mapping from words or phrases in the vocabulary to vectors of real numbers, representing the semantic/syntactic information of words in a way that computers can understand. Once word embeddings have been trained, we can use them to obtain relations such as similarities between words. Word2Vec BIB001 BIB002 and GloVe (Global Vectors for word representation) BIB003 are two popular word embedding algorithms used to construct vector representations for words. Word2Vec "vectorizes" words: it is a two-layer neural network that processes text. Its input is a text corpus and its output is a vocabulary in which each item has a vector attached to it, which can be fed into a deep neural net or simply queried to detect relationships between words. While Word2Vec is not a deep neural network, it turns text into a numerical form that deep nets can understand, so we can apply powerful mathematical operations to words to detect semantic similarities between them. Similar to Word2Vec, GloVe is an unsupervised learning algorithm (Appendix A.2.2) used to compute vector representations for words. Training is performed on aggregated global word-word co-occurrence statistics from an input corpus, and the resulting representations showcase linear substructures of the word vector space. The main difference between Word2Vec and GloVe is that the former is a "predictive" model, whereas the latter is a "count-based" model BIB004 . If all the hyper-parameters of Word2Vec and GloVe are well controlled, the embeddings generated by the two methods perform very similarly in NLP tasks. One advantage of GloVe over Word2Vec is that its implementation is easier to parallelize BIB003 , which means it is easier to train over big volumes of data on GPUs. |
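As an illustration of how word embeddings are trained and queried in practice, the following is a minimal sketch using the Word2Vec implementation of the gensim library (version 4.x API, assumed to be installed; this is one common implementation, not the one used in the surveyed work). The toy corpus is invented, so the resulting similarities are not meaningful; real applications train on large corpora.

    # Minimal Word2Vec sketch (assumes gensim 4.x).
    from gensim.models import Word2Vec

    # Invented toy corpus: a list of tokenized sentences.
    sentences = [["king", "rules", "the", "kingdom"],
                 ["queen", "rules", "the", "kingdom"],
                 ["dog", "chases", "the", "cat"]]

    # sg=1 selects the skip-gram variant described in BIB001/BIB002.
    model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

    vec = model.wv["king"]                        # the vector attached to a word
    print(model.wv.most_similar("king", topn=2))  # nearest words in vector space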
Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> INTRODUCTION <s> This paper describes the effects of using a word processor on the creative writing of a small group of children with learning disabilities. Each week the children wrote one word-processed and one handwritten story. The effects of using a word processor seemed to be influenced by the particular problems the children were experiencing with written work. For the children with severe spelling problems, using a word processor seemed to result in fewer spelling errors, while for the children who were still predominantly concerned with the mechanics of the writing task, using a word processor seemed to result in longer stories. <s> BIB001 </s> Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> INTRODUCTION <s> The ability to read is one of the main skills of a human being. However, some of us have reading difficulties, regardless of social status, level of intelligence or education. This disorder is the main characteristic of dyslexia and is maintained throughout life, requiring early and specialized intervention. Dyslexia is defined as a learning disturbance in the area of reading, writing and spelling. Although the numbers of prevalence rely heavily on the type of investigation conducted, several studies indicate that up to 17% of the world population is dyslexic, and that men have greater prevalence. In this work we will address the use of assistive mobile applications for dyslexia by analyzing possible solutions and proposing a prototype of a mobile application that can be used by dyslexics whilst giving feedback both to the dyslexic him/herself and to the assisting technician or teacher. The implemented prototype focuses on the Portuguese language and was tested with Portuguese students with ages between 10 and 12 years old. Preliminary results show that the proposed gamified set of activities allows dyslexics to improve multisensory perception, constituting an added-value facilitator of adaptiveness and learning. <s> BIB002 | Dyslexia is a hidden learning disorder affecting reading, spelling and written language, and sometimes number work. It is a learning disability that cannot be completely treated and that complicates dyslectics' lives, with lasting negative consequences . Learning difficulties caused by dyslexia often have a negative impact on the way dyslectics think, behave and live. Statistics have shown that approximately 70-80% of people with reading problems are probably dyslectic, and one out of five students has a language-based learning disability . Research has shown that dyslexia is a cognitive disorder that deeply affects dyslectics' daily routine, often isolating them from the community. It is very common for a dyslectic person to report being unable to stay focused on a specific task or to recall tasks, orders, messages, routes or even their daily schedule . Furthermore, it is important to point out that research supports a relation between dyslexia and the type of language. A language can be either opaque (e.g., English, Danish, French) or transparent (e.g., Greek, Italian, Spanish); this difference affects the language's level of complexity and has an impact on dyslectics' reading and writing performance BIB002 .
Studies have also proved that assistive technology contributes significantly to the improvement of dyslectics' cognitive skills BIB002 , BIB001 , . Technology is an alternative and modern way of helping people with dyslexia improve their skills in the conceptual areas of reading, writing, memory, organization and numeracy. Technology may not be able to treat dyslexia yet, but it can support dyslectics by enhancing their motivation for improvement [8] , . In particular, the field of Human-Computer Interaction (HCI) can support this effort by designing systems that build a dyslexia-friendly environment. After a systematic literature search on the interaction design of systems and existing software applications supporting dyslectic users, we realized that studies related to the field of dyslexia are very limited, even though dyslexia is a cognitive disorder with strong impacts on dyslectics' lives. With this study, our goal is to contribute to future research on developing designs for software applications addressed to dyslectic users.
Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> Related Work on IxD Guidelines and Parameters <s> In this paper, we present an exploratory study of the web navigation experiences of dyslexic users. Findings indicate that dyslexics exhibit distinctive web navigation behaviour and preferences. We believe that the outcomes of this study add to our understanding of the particular needs of this web user population and have implications for the design of effective navigation structures. <s> BIB001 </s> Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> Related Work on IxD Guidelines and Parameters <s> Compared to the online interaction behavior of other users, little is known about the difficulties dyslexic Web users encounter online. This paper reviews existing literature at the intersection of dyslexia and accessibility research to determine what useful knowledge exists regarding this important and relatively large group of users. This review uncovers that, although there are few published usability tests with dyslexic users, there is a considerable body of knowledge on dyslexia as well as many design guidelines for authoring dyslexic-accessible interfaces. Through a comparison of existing accessibility guidelines for dyslexic and non-dyslexic users and discussion of the plain language movement, it is argued that dyslexic-accessible practices may redress difficulties encountered by all Internet users. This conclusion suggests that usability testing yielding a clearer profile of the dyslexic user would further inform the practice of universal design, but also that enough knowledge is already available to allow doing more to increase accessibility for dyslexic Internet users. <s> BIB002 </s> Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> Related Work on IxD Guidelines and Parameters <s> This paper presents an empirical study on problems encountered by users with dyslexia when using websites. The study was performed by a user evaluation of 16 websites by a panel of 13 participants with dyslexia, each participant evaluating 10 websites. The results presented in the paper are based on 693 instances of accessibility and usability problems. Most frequent problems were related to navigation issues, problems with presentation and organisation of information, lack or malfunctioning of specific functionality in websites, and issues with language. <s> BIB003 </s> Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> Related Work on IxD Guidelines and Parameters <s> In this paper, we offer a set of guidelines and a web service that presents Web texts in a more accessible way to people with dyslexia. The layout guidelines for developing this service are based on a user study with a group of twenty-two dyslexic users. The data collected from our study combines qualitative data from interviews and questionnaires and quantitative data from tests carried out using eye tracking.
<s> BIB004 </s> Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> Related Work on IxD Guidelines and Parameters <s> In 2012, Wikipedia was the sixth-most visited website on the Internet. Being one of the main repositories of knowledge, students from all over the world consult it. But, around 10% of these students have dyslexia, which impairs their access to text-based websites. How could Wikipedia be presented to be more readable for this target group? In an experiment with 28 participants with dyslexia, we compare reading speed, comprehension, and subjective readability for the font sizes 10, 12, 14, 18, 22, and 26 points, and line spacings 0.8, 1.0, 1.4, and 1.8. The results show that font size has a significant effect on the readability and the understandability of the text, while line spacing does not. On the basis of our results, we recommend using 18-point font size when designing web text for readers with dyslexia. Our results significantly differ from previous recommendations, presumably, because this is the first work to cover a wide range of values and to study them in the context of an actual website. <s> BIB005 </s> Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> Related Work on IxD Guidelines and Parameters <s> People with dyslexia often face difficulties in consuming written content on the Web. This occurs mainly because websites' designs do not consider the barriers faced by them, since dyslexia is not taken into account as often as other functional limitations. Guidelines for designing accessible Web pages are being consolidated and studied. Meanwhile, people with dyslexia face barriers and develop workarounds to overcome these difficulties. This work presents a customization toolbar called Firefixia, especially designed to support people with dyslexia in adapting the presentation of Web content according to their preferences. Firefixia was tested by 4 participants with diagnosed dyslexia. The participants evaluated and provided us feedback regarding the toolbar's most/least useful features. From the presented results, one expects to highlight the need for end-user customization features that are easy to access, easy to use, and easy to explore. Participants reported that the most useful customization features are the text size, the text alignment, and the link color. Finally, this work indicates promising directions for end-user customization tools such as Firefixia. <s> BIB006 </s> Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> Related Work on IxD Guidelines and Parameters <s> We present a user study for two different automatic strategies that simplify text content for people with dyslexia. The strategies considered are the standard one (replacing a complex word with the simplest synonym) and a new one that presents several synonyms for a complex word if the user requests them. We compare texts transformed by both strategies with the original text and with a manually built gold standard. The study was undertaken by 96 participants, 47 with dyslexia plus a control group of 49 people without dyslexia. To show device independence, for the new strategy we used three different reading devices. Overall, participants with dyslexia found texts presented with the new strategy significantly more readable and comprehensible. To the best of our knowledge, this is the largest user study of its kind.
<s> BIB007 </s> Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> Related Work on IxD Guidelines and Parameters <s> Around 10% of the people have dyslexia, a neurological disability that impairs a person's ability to read and write. There is evidence that the presentation of the text has a significant effect on a text's accessibility for people with dyslexia. However, to the best of our knowledge, there are no experiments that objectively measure the impact of the font type on reading performance. In this paper, we present the first experiment that uses eye-tracking to measure the effect of font type on reading speed. Using a within-subject design, 48 subjects with dyslexia read 12 texts with 12 different fonts. Sans serif, monospaced and roman font styles significantly improved the reading performance over serif, proportional and italic fonts. On the basis of our results, we present a set of more accessible fonts for people with dyslexia. <s> BIB008 | Otávio et al. investigated Web accessibility issues for users with dyslexia, including in their study related literature on interaction design parameters. A number of related works on interaction design for dyslexia are mentioned in their research; some of them focus on functionality and others on the user interface. On the one hand, the studies of Freire et al. BIB003 and Al-Wabil et al. BIB001 focused on functionalities that could help dyslectic users improve their performance. They report 693 accessibility and usability problems, related to difficulties in navigation, the architecture of information, the form of texts, the organization of content, the language, and the amount of information, all of which make it harder for dyslectics to scan a text. Because such difficulties can be distracting for dyslectics, the interaction design of systems for dyslexia has to focus on fulfilling these functionalities. On the other hand, the studies of Rello et al. BIB004 , BIB005 , , Santana et al. BIB006 , Rello & Barbosa , and Rello & Baeza-Yates BIB007 , BIB008 focused on user-interface design parameters. The recommended design parameters allow users to highlight text content and to adjust the size and type of fonts, the alignment of a text, the spacing of characters, the fore- and background colours, the length of texts, and its borders. Additionally, there are suggestions that could improve dyslectics' reading skills: Rello & Baeza-Yates recommend Helvetica, Courier, Arial, Verdana and Computer Modern Unicode as the best font types for dyslectic users BIB007 , BIB008 . Jacob McCarthy et al. [21] , BIB002 included in their study a literature survey on interaction design for dyslectic users, which also resulted in a number of parameters focused on the user interface. This study mentions features that allow dyslectic users to adjust the size of a text, and design parameters that call for short sentences, use of pictures, dark backgrounds, and Sans Serif fonts of 12pt or larger. These recommendations are an overview of other researchers' studies , , which McCarthy provides.
Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> Interaction Design guidelines and parameters <s> This paper outlines and explains the guideline needed to design an effective interaction design (IxD) for dyslexic children's reading application. The guideline is developed based on theories that underly dyslexia and its effects towards reading, with emphasis given to the visual related theories and phonological deficit theory and core-affect theory. The needs of a dyslexic child to read properly and correctly with understanding of the related theories inspires the development of this guideline as it is aimed to aid the process of learning to read by facilitating them with useful design. Tested on a number of dyslexic children, the design seems to reduce their memory load for this particular task and thus reduce their difficulties in reading. Hence the role of an interaction designer is needed to answer the whats and hows and to design an interactive product (in this case, reading applications) to help dyslexic children to read. <s> BIB001 </s> Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> Interaction Design guidelines and parameters <s> [Textbook on designing interactive systems; only the part titles of its table of contents are recoverable here: Part I: Essentials of designing interactive systems; Part II: Techniques for designing interactive systems; Part III: Contexts for designing interactive systems; Part IV: Foundations of designing interactive systems.] <s> BIB002 | Research on interaction design guidelines resulted in
one design guideline with an emphasis on three design dimensions (Form, Content and Behavior) regarded as highly important in the design of software applications addressed to dyslectic users. More precisely, the guideline holds that these dimensions and their elements support users who face visual deficits (the Form dimension) or phonological deficits (the Content and Behavior dimensions) due to dyslexia , [28] , BIB002 , BIB001 . Simple and clear layouts with Sans Serif fonts of sizes 12 to 14, as well as features that allow dyslectic users to adjust font size, style, and colors, or specific combinations of colors and contrasts that avoid bright colors, have been recommended as supportive of dyslectic users and able to improve their reading performance. Additionally, features that provide explanations and the enrichment of texts with pictures and audio elements make reading tasks more accessible for users with dyslexia. Moving forward, our literature analysis led us to Rello and Barbosa's study on IxD parameters of software applications for dyslectic users. These interaction design parameters focus on the Form dimension, as visual deficits deeply affect dyslectic users' reading performance. The study recommends a number of layout-design parameters to help dyslectic users improve their reading performance . Specifically: Font types/sizes: Arial, Comic Sans, Verdana, Century Gothic, Trebuchet, Sassoon Primary, Times New Roman, Courier, or Dyslexie, at size 12 or 14, with extra-large letter spacing; Brightness/colors: low brightness and low color difference between text and background, with light grey as font color; Spacing/lines/columns: lines of 60-70 characters, clear spacing between letter combinations, line spacing of 1.3, 1.4, 1.5, or 1.5-2, and avoidance of narrow columns. Explaining the Rello and Barbosa text layout parameters: Sans Serif fonts of a size between 12 and 14, low brightness, and light contrasts between background and font colors have been recommended by their study. Furthermore, lines of 60 to 70 characters maximum, clear spacing between letter combinations, line spacing from 1.3 to 2, and avoidance of narrow columns have been recommended as supportive of dyslectic users and able to improve their reading performance . Based on comparisons among the IxD guidelines/parameters, there are many similarities in (i) the font type and size, (ii) the recommendations about avoiding bright colors and narrow columns, and (iii) the suggested number of characters per line and line spacing (see table 2 ). Regarding the interaction design guidelines and parameters shown in this study and their relation to the design parameters of the related works, there is clear agreement among them. Both the IxD guidelines/parameters from the literature research and those from the related work focus on the user interface and on functionalities that help dyslectic users improve their reading performance. In both cases, design parameters have been proposed for developing software applications addressed to dyslectic users, with the aim of facilitating and helping dyslectic users improve their reading skills and performance.
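As a rough illustration of how such layout parameters could be operationalized in software, the following sketch emits a CSS fragment from a Python dictionary of settings; the concrete values follow the recommendations summarized above, while the helper itself is hypothetical and not part of any surveyed tool (such as Firefixia).

    # Hypothetical dyslexia-friendly style generator; the values follow
    # the recommendations summarized in this section, the helper is ours.
    params = {
        "font-family": "Arial, Verdana, sans-serif",  # Sans Serif fonts
        "font-size": "14pt",             # recommended 12-14pt range
        "letter-spacing": "0.05em",      # extra-large letter spacing
        "line-height": "1.5",            # recommended 1.3-2 range
        "color": "#707070",              # grey text, low brightness difference
        "background-color": "#FAF7F0",   # avoid bright backgrounds
        "max-width": "66ch",             # lines of 60-70 characters
    }
    css = "body {\n" + "\n".join(f"  {k}: {v};" for k, v in params.items()) + "\n}"
    print(css)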
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Introduction. <s> This paper describes a new technique for implementing educational programming languages using tangible interface technology. It emphasizes the use of inexpensive and durable parts with no embedded electronics or power supplies. Students create programs in offline settings---on their desks or on the floor---and use a portable scanning station to compile their code. We argue that languages created with this approach offer an appealing and practical alternative to text-based and visual languages for classroom use. In this paper we discuss the motivations for our project and describe the design and implementation of two tangible programming languages. We also describe an initial case study with children and outline future research goals. <s> BIB001 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Introduction. <s> This paper argues that the "kindergarten approach to learning" -- characterized by a spiraling cycle of Imagine, Create, Play, Share, Reflect, and back to Imagine -- is ideally suited to the needs of the 21st century, helping learners develop the creative-thinking skills that are critical to success and satisfaction in today's society. The paper discusses strategies for designing new technologies that encourage and support kindergarten-style learning, building on the success of traditional kindergarten materials and activities, but extending to learners of all ages, helping them continue to develop as creative thinkers. <s> BIB002 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Introduction. <s> "Digital fluency" should mean designing, creating, and remixing, not just browsing, chatting, and interacting. <s> BIB003 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Introduction. <s> Learning to program is hard, also because it requires students to deal with abstraction. A program is an abstract construct: most pieces of a program are not concrete, literal values, but they are abstract symbols standing for things or actions in the system they model and control. Thus, when learning to program, novices not only need to learn about the system, but they also need to learn about the programming language. They need to think about the concrete effects in the system their abstract program constructs will cause once the program will execute. This thinking on two levels of abstraction (program and system), and in two dimensions of time (coding and execution) can cause a significant burden. In this short paper we propose to collapse those two levels. We wonder whether it would be possible to devise a programming environment where the program is the system. To do this we need languages that are the system, instead of languages that are about the system. We propose to use tangible languages as a way towards this idea. We briefly present three such languages which we used in the context of an informal learning setting and report our initial lessons learned. 
<s> BIB004 | While programming is often seen as a key element of constructionist 1 approaches (starting from LOGO (Feuerzeig et al., 1970) , a programming language designed to enable learning abstract concepts of disciplines like math, geometry, physics, and potentially all others, by manipulating computational objects ), the research on learning to program through a constructionist strategy is somewhat limited, mostly focusing on how to bring the abstract and formal nature of programming languages into "concrete" or even tangible objects, accessible also to children with limited abstraction power BIB003 BIB001 BIB004 . Notwithstanding this, programming is in some sense intrinsically constructionist, as it always involves the production of an artifact that can be shown and shared. Of course, this does not mean that programming automatically leads to constructivist/constructionist pedagogies: in fact, we see very different approaches, from open project-based learning to much more traditional education through lectures and closed exercises. Specific languages and environments play an important role too: for example, visual programming languages make it easier (by removing the need to face unnatural textual syntactic rules) to realize small but meaningful projects, keeping students motivated, and support a constructionist approach where students are encouraged to develop and share their projects (video games, animated stories, or simulations of simple real-world phenomena). Constructionist ideas are also floating around mainstream programming practice, and they are even codified in some software engineering approaches: agile methods like eXtreme Programming , for example, suggest several techniques that can easily be connected to the constructionist advice about discussing, sharing, and productively collaborating to successfully build knowledge together ; moreover, the incremental and iterative process of creative thinking and learning BIB002 fits well with the agile preference for "responding to change over following a plan" . It actually originated from observing how the traditional kindergarten approach to learning is ideally suited to learning to think creatively, and it is now called the "creative learning spiral" (Fig. 1) . According to this model, when one learns by creating something (e.g., a computer program), she imagines what she wants to do, creates a project based on this idea, plays with her creation, shares her idea and her creation with others, reflects on the experience and the feedback received from others, and all this leads her to imagine new ideas, new functionalities, new improvements for her project, or new projects. The process is iterated many times. This spiral describes an iterative process, highly overlapping with the iterative software development cycle.
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> What does it mean to learn programming? <s> This article brings together, summarizes, and comments on several threads of research that have contributed to our understanding of the challenges that novice programmers face when learning about the runtime dynamics of programs and the role of the computer in program execution. More specifically, the review covers the literature on programming misconceptions, the cognitive theory of mental models, constructivist theory of knowledge and learning, phenomenographic research on experiencing programming, and the theory of threshold concepts. These bodies of work are examined in relation to the concept of a "notional machine"—an abstract computer for executing programs of a particular kind. As a whole, the literature points to notional machines as a major challenge in introductory programming education. It is argued that instructors should acknowledge the notional machine as an explicit learning objective and address it in teaching. Teaching within some programming paradigms, such as object-oriented programming, may benefit from using multiple notional machines at different levels of abstraction. Pointers to some promising pedagogical techniques are provided. <s> BIB001 | The basic premise behind programming, i.e., producing a precise description of how to carry out a task or to solve a problem, is that an interpreter, different from the producer of the description, can understand it and effectively carry out the task as described. There are thus two distinct but tightly tied aspects in programming: (i) the program itself (the text or other streams of symbols or actions that build up the digital coding of an algorithm); (ii) the actions that take place when the program is run by the interpreter. This distinction is explicit in most professional programming environments, but it is conceptually present even in environments designed for very small children, where the program is somewhat implicit. The Bee-Bot 2 , for example, is a bee-shaped robot that can be programmed by pushing the buttons on its back: the program, while recorded and then executed by the machine, is neither explicit nor visible in its static form to the children, but it exists, and the programmer needs to master the relationship between the actions she records into the bee and the actions the bee will perform when the program is executed. In this paper, however, we focus on programs in which the source code is explicit, as is common in programming activities proposed to secondary school pupils. Thus, one needs to know the interpreter in order to program, in particular: (i) the set of basic actions it is able to perform; (ii) the language it is able to understand, with rules on how to compose basic actions; (iii) the relation between syntax and semantics, that is, what actions it will perform given a description, and, conversely, how to describe a given sequence of actions so that it will perform them. The first aspect, the program source code, is explicit and visible. The second, the actions that take place when the program is run, is somewhat implicit, hidden in the execution-time world, and not so immediate for novices to grasp. Moreover, this aspect is sometimes underestimated by both teachers and learners: teachers, as experts, take it for granted; learners tend to construct personal, intuitive, not necessarily coherent ideas of what will happen.
This dichotomy of programming, its static visible code and its implicit dynamics, emerges as a critical issue when learning to program, as shown by studies from different perspectives; in the following we cite a few BIB001 . (i) Phenomenography studies show how novice programmers tend to perceive programming as no more than the production of code, failing to relate instructions in the program to what happens when the program is executed. (ii) Studies on programming misconceptions point out how most programming misconceptions have to do with aspects that are not readily visible in the code but are related to execution time, both in terms of what will happen and of what will not happen unless explicitly specified in the code. (iii) Threshold concept theory identifies program dynamics as a candidate threshold concept in programming, as it has many of the features that characterize threshold concepts; among others: it is a troublesome barrier to student understanding, it transforms how the student perceives the subject, and it marks a boundary between programmers and end users. To help novice programmers take into account also the dynamic side of programming, the concept of notional machine BIB001 has been proposed. A notional machine is a characterisation of the computer in its role as executor of programs in a particular language (or set of languages, or even a subset of a language) for didactic purposes. It thus gives a convenient description of the association between syntax and semantics. The following learning outcomes should therefore be considered when teaching to program: (i) the development by students of a perception of programming that does not reduce to the production of code, but includes relating instructions to what will happen when the program is executed, and eventually comes to include producing applications for use and seeing programming as a way to solve problems; (ii) the development of a mental model of a notional machine that allows them to make the association between (static) syntax and (dynamic) semantics and to trace program execution correctly and coherently. In particular, this latter outcome will include the development of some important skills. Given a program (typically one's own) and an observed behaviour: identify when debugging is needed because the behaviour is somewhat not the one intended; identify where a bug has occurred; be able to correct the code. Given a program and its specification, be able to test it. Understand that there can be multiple correct ways to program a solution. If these are crucial points in learning to write executable descriptions, however, programming is indeed a multifaceted competence, and the knowledge to construct and the skills to develop span several dimensions, besides predicting the concrete semantics of abstract descriptions. A skilled programmer needs to: (1) understand general properties of automatic interpreters able to manipulate digital information; (2) think about problems in a way suitable to automatic elaboration; (3) devise, analyze, and compare solutions; (4) adapt solutions to emerging hurdles and needs; (5) integrate into teamwork and be able to elicit, organize, and share the abstract knowledge related to a software project. Here we mainly focus on skill 1 and the support provided by programming languages and environments. Moreover, we highlight the opportunity provided by agile methodologies to develop skill 5.
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> No technological hurdles: <s> The importance of computer science education in secondary, and even primary school, has been pointed out by many authors. But too often pupils only experience ICT, both at home and at school, and confuse it with computer science. We organized a game-contest, the Kangourou of Informatics, with the aim to attract all pupils (not only the talented ones), expose them to the scientific aspects of informatics in a fun way, and convey a correct conception of the discipline. Peculiarities of the game are its focus on team work and on engaging pupils in discovering what lies behind what they experience every day. <s> BIB001 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> No technological hurdles: <s> We describe a teaching activity about word-processors we proposed to a group of 25 pupils in 9th/10th grades of an Italian secondary school. While the pupils had some familiarity with word-processor operations, they had had no formal instruction about the automatic elaboration of formatted texts. The proposed kinesthetic/tactile activities turned out to be a good way for conveying non-trivial abstract computing concepts. <s> BIB002 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> No technological hurdles: <s> Mathematics popularization is an important, creative kind of research, entangled with many other research programs of basic interest (Mike Fellows). This chapter is a history of the Computer Science Unplugged project, and related work on math and computer science popularization that Mike Fellows has been a driving force behind, including MEGA-Mathematics and games design. Mike's mission has been to open up the knowns and unknowns of mathematical science to the public. We explore the genesis of MEGA-Math and "Unplugged" in the early 1990s, and then the sudden growth of interest in Unplugged after the year 2003, including the contributions from many different cultures and its deployment in a large variety of contexts. Woven through this history is the importance of story: that presenting math and computing topics through story-telling and drama can captivate children and adults alike, and provides a whole new level of engagement with what can be perceived as a dry topic. It is also about not paying attention to boundaries -- whether teaching advanced computer science concepts to elementary school children or running a mathematics event in a park. <s> BIB003 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> No technological hurdles: <s> Many students hold incorrect ideas and negative attitudes about computer science (CS). In order to address these difficulties, a series of learning activities called Computer Science Unplugged was developed by Tim Bell and his colleagues. These activities expose young people to central concepts in CS in an entertaining way without requiring a computer. The CS Unplugged activities have become more and more popular among CS educators and several activities are recommended in the ACM K-12 curriculum for elementary schools. CS Unplugged is used worldwide and has been translated into many languages. We examined the effect of the CS Unplugged activities on middle-school students' ideas about CS and their desire to consider and study it in high school.
The results indicate that following the activities the ideas of the students on what CS is about were partially improved, but their desire to study CS lessened. In order to provide possible explanations to these results, we analyzed the CS Unplugged activities to determine to what extent the objectives of CS Unplugged were addressed in the activities. In addition, we checked whether the activities were designed according to constructivist principles and whether they were explicitly linked to central concepts in CS. We found that only some of the objectives were addressed in the activities, that the activities do not engage with the students’ prior knowledge and that most of the activities are not explicitly linked to central concepts in CS. We offer suggestions for modifying the CS Unplugged activities so that they will be more likely to achieve their objectives. <s> BIB004 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> No technological hurdles: <s> In this paper we report on our experiments in teaching computer science concepts with a mix of tangible and abstract object manipulations. The goal we set ourselves was to let pupils discover the challenges one has to meet to automatically manipulate formatted text. We worked with a group of 25 secondary-school pupils (9-10th grade), and they were actually able to “invent” the concept of mark-up language. From this experiment we distilled a set of activities which will be replicated in other classes (6th grade) under the guidance of math teachers. <s> BIB005 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> No technological hurdles: <s> In order to introduce informatic concepts to students of Italian secondary schools, we devised a number of interactive workshops conceived for pupils aged 10–17. Each workshop is intended to give pupils the opportunity to explore a computer science topic: investigate it firsthand, make hypotheses that can then be tested in a guided context during the activity, and construct viable mental models. This paper reports about how we designed and conducted these workshops. <s> BIB006 | • they allow students (and teachers) to have meaningful experiences related to important CS concepts (like algorithms) without having to wait until they get some technology and programming fluency (Bell and Lodi, to appear) . It is important to note that evidence shows unplugged activities should not replace programming activities, but can be helpful to make them more effective . The following two examples, taken from CS Unplugged 3 and ALaDDIn 4 , illustrate typical unplugged approaches to introduce children to programming. In CS Unplugged "Rescue Mission", pupils are given by the teacher a very simple language with only three commands: 1 step forward, 90 degrees left, 90 degrees right. The task is to compose a sequence of instructions to move a robot from one given cell on a grid to a given other cell. Pupils are divided into groups of three where each one has a role: either programmer, bot, or tester. This division of roles is done to emphasize the fact that programs cannot be adjusted on the fly; they must be first planned, then implemented, then tested and debugged until they work correctly. 
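To connect the Rescue Mission activity to the program/execution dichotomy discussed earlier, here is a minimal Python sketch of an interpreter for such a three-command language; this is our illustration, not part of the official CS Unplugged materials, and the single-letter encoding of the commands (F/L/R) is an assumption.

    # Minimal interpreter for a Rescue-Mission-like language (our sketch).
    MOVES = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}
    HEADINGS = "NESW"

    def run(program, x=0, y=0, heading="N"):
        for cmd in program:
            if cmd == "F":    # 1 step forward
                dx, dy = MOVES[heading]
                x, y = x + dx, y + dy
            elif cmd == "L":  # turn 90 degrees left
                heading = HEADINGS[(HEADINGS.index(heading) - 1) % 4]
            elif cmd == "R":  # turn 90 degrees right
                heading = HEADINGS[(HEADINGS.index(heading) + 1) % 4]
        return x, y, heading

    # The "programmer" composes the sequence, the "bot" executes it:
    print(run("FFRFF"))  # -> (2, 2, 'E')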
ALaDDIn "Algomotricity and Mazes" is an activity designed according to a strategy called algomotricity BIB001 BIB002 BIB005 BIB006 , where pupils are exposed to an informatic concept/process by playful activities which involve a mix of tangible and abstract object manipulations; they can investigate it firsthand, make hypotheses that can then be tested in a guided context during the activity, and eventually construct viable mental models. Algomotricity starts "unplugged" BIB003 but ends with a computer-based phase to close the loop with pupils' previous acquaintance with applications BIB004 . "Algomotricity and Mazes" focuses on primitives and control structures. The task is that of verbally guiding a "robot" (a blindfolded person) through a simple path. Working in groups, pupils are requested to propose a very limited set of primitives to be written each on a sticky note, and to compose them into a program to be executed by the "robot". Also, they have the possibility of exploiting basic control structures (if, repeat-until, repeat-n-times) . The conductor may decide to swap some programs and "robots", in order to emphasize the ambiguity of some instructions or the dependency of programs on special features of the "robot" (e.g., step/foot size). In the last phase, students are given computers and a slightly modified version of Scratch. They are requested to write programs that guide a sprite through mazes of increasing complexity where shape patterns foster the use of loops. |
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Notional Machines <s> This article brings together, summarizes, and comments on several threads of research that have contributed to our understanding of the challenges that novice programmers face when learning about the runtime dynamics of programs and the role of the computer in program execution. More specifically, the review covers the literature on programming misconceptions, the cognitive theory of mental models, constructivist theory of knowledge and learning, phenomenographic research on experiencing programming, and the theory of threshold concepts. These bodies of work are examined in relation to the concept of a "notional machine"—an abstract computer for executing programs of a particular kind. As a whole, the literature points to notional machines as a major challenge in introductory programming education. It is argued that instructors should acknowledge the notional machine as an explicit learning objective and address it in teaching. Teaching within some programming paradigms, such as object-oriented programming, may benefit from using multiple notional machines at different levels of abstraction. Pointers to some promising pedagogical techniques are provided. <s> BIB001 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Notional Machines <s> Comprehension of programming and programs is known to be a difficult task for many beginning students, with many computing courses showing significant drop out and failure rates. In this paper, we present a new notional machine design and implementation to help with understanding of programming and its dynamics for beginning learners. The notional machine offers an abstraction of the physical machine designed for comprehension and learning purposes. We introduce the notional machine and a graphical notation for its representation. We also present Novis, an implementation of a dynamic real-time visualiser of this notional machine, integrated into BlueJ. <s> BIB002 | An important intuition for approaching programming from a constructionist perspective is that programs are a join point between our minds and the computer, the interpreter of the formal description of what we have in mind. Thus, programs appeal to our curiosity and ingenuity and are wonderful artifacts to share and discuss with other active minds. Such sharing, however, assumes that the interpreter is shared knowledge among peers. When a group of people programs the same 'machine', a shared semantics is in fact given, but unfortunately people, especially novices, do not necessarily write their programs for the formal interpreter they use, but rather for the notional machine BIB001 BIB002 they actually have in their minds. A notional machine is an abstract computer responsible for executing programs of a particular kind BIB001 , and grasping it means grasping all the general properties of the machine that one is learning to control . The purpose of a notional machine is to explain, to give intuitive meaning to, the code a programmer writes. It normally encompasses an idealized version of the interpreter and other aspects of the development and run-time environment; moreover, it should also bring a complementary intuition of what the notional machine cannot do, at least without specific directions from the programmer. Introducing a notional machine to the students is often the initial role of the instructors.
Ideally this should be somewhat incremental in complexity, but not all programming languages are suitable for incremental models: in fact, much of the success of visual languages or Lisp dialects in introductory courses comes from the fact that they allow shallow presentations of syntax, thus letting learners focus on the more relevant parts of their notional machines. An explicit reference to the notional machine can foster meta-cognition and, during teamwork, it can help in identifying misconceptions. But how can the notional machine be made explicit? Tracing the computational process and visualizing the execution are effective candidate tools. They allow instructors to make as clear as possible (i) what novice programmers should expect the notional machine to do and (ii) what it actually does.
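One concrete, bare-bones way of tracing the computational process in Python is the standard sys.settrace hook, used below to print every executed line together with the current local variables; this sketch is ours, and tools such as Novis BIB002 provide far richer visualizations of the notional machine.

    # Print each executed line and the local state: a crude execution trace
    # that makes the hidden run-time dynamics visible to the learner.
    import sys

    def tracer(frame, event, arg):
        if event == "line":
            print(f"line {frame.f_lineno}: locals = {frame.f_locals}")
        return tracer  # keep tracing inside this frame

    def demo():
        total = 0
        for i in range(3):
            total += i
        return total

    sys.settrace(tracer)   # start tracing newly entered frames
    demo()
    sys.settrace(None)     # stop tracing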
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Abstract Programming Patterns <s> We look at the essential thinking skills students need to learn in the introductory computer science course based on object-oriented programming. We create a framework for such a course based on the elementary programming and design patterns. Some of these patterns are known in the pattern community, others enrich the collection. Our goal is to help students focus on mastering reasoning and design skills before the language idiosyncrasies muddy the water. <s> BIB001 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Abstract Programming Patterns <s> The use of all variables in 109 novice-level, but expert written, procedural programs was analyzed in order to find a small but still comprehensive set of generic roles that describe the nature of the successive values a variable obtains. This paper gives the results of the analysis: a list of only nine roles that cover 99% of variables, frequencies of the roles, and discovered role changes. <s> BIB002 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Abstract Programming Patterns <s> Roles can be assigned to occurrences of variables in programs according to a small number of patterns of use that are both language- and algorithm-independent. Preliminary studies on explicitly teaching roles of variables to novice students have shown that roles are an excellent pedagogical tool for clarifying the structure and meaning of programs. This paper describes the results of an investigation designed to test the understandability and acceptability of the role concept and of the individual roles as seen by computer science educators. The investigation consisted of a short tutorial on roles, a brief training session on assigning roles to variables, a test evaluating the subjects' ability to assign roles, and a set of open questions concerning their opinions of roles. Roles were identified with 85% accuracy, and in typical uses of variables with 93% accuracy. <s> BIB003 | A small number of abstract programming patterns can be applied to a potentially infinite spectrum of specific conditions. This is often a challenge for novices, given that most of the time the discipline is taught by (i) introducing one or more primitive tools (e.g., variables), and (ii) showing some examples highlighting how these tools can be used to solve specific problems. This might give rise to pupils' misconceptions about the above-mentioned tools. The concept of roles of variables BIB002 BIB001 has been proposed in order to guide novice programmers from the operational knowledge of a variable as the holder of a mutable value to the ability to identify abstract use cases following a small number of roles (such as those in Fig. 3 ). Such an ability is of great help when tackling the solution of a specific problem, for instance, that of computing the maximal value within a sequence. Indeed, this is a great opportunity for letting pupils realize that this problem is a special case of the more general quest for an optimal value. The latter can be found using a most-wanted holder, compared with each element of the sequence and containing the highest value seen so far. This method easily fits the search for the maximal as well as the minimal value, and it also efficiently handles less obvious cases such as that of finding the distinct vowels occurring in a sentence.
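For instance, the most-wanted holder role can be made concrete with the following minimal Python sketch (our illustration of the general pattern; the names are invented): the variable best holds the best value seen so far and is compared with each element of the sequence.

    # The "most-wanted holder" role in the maximum-finding special case.
    def maximum(values):
        best = values[0]        # most-wanted holder: best value seen so far
        for v in values[1:]:    # stepper: runs through the sequence
            if v > best:
                best = v
        return best

    print(maximum([3, 1, 4, 1, 5, 9, 2, 6]))  # -> 9

Changing the comparison from > to < turns the same pattern into a minimum search, which illustrates how one abstract role covers a whole family of problems.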
These roles can also be gradually introduced following the hierarchy of Fig. 3, starting from the concept of literal (e.g., an integer value or a string) and building knowledge about one role on top of already understood roles. For selection and iteration as well, there are several standard use patterns that occur over and over again: selection patterns (Bergin, 1999) and loop patterns have been introduced with the same goal. For instance, to illustrate the idea, the loop-and-a-half pattern is an efficient processing strategy for a sequence of elements whose end can be detected only after at least one element has been read. It uses an infinite loop whose body accesses the next sequence element. If there are no more elements, the loop is escaped through a controlled jump; otherwise some special actions are possibly executed before continuing the iteration. The code snippet shown in Fig. 4 shows one of the canonical incarnations of this pattern: the possibly repeated check of a value given as input, detecting and ignoring invalid entries (a rendition in Python is sketched below). Selection and loop patterns fit well within a constructionist-based learning path: they might be naturally discovered when critically analyzing software implementations. For instance, the previous loop could be the end point of a reasoning scheme started from the detection of a duplicated line of code in a quick-and-dirty initial implementation. Fig. 3. Roles of variables, organized in a constructionist-like hierarchy where the predecessor of an arrow is a prerequisite for learning the corresponding successor (source: BIB002). In general, abstract programming patterns are few in number, so that they can be covered within a standard introductory computer programming course; moreover, the related concepts are easily grasped by experienced computer science teachers (Ben-Ari and Sajaniemi) BIB003, thus they can be embedded in already existing curricula with low effort.
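Since Fig. 4 is not reproduced here, the following sketch of ours shows the canonical input-validation incarnation of the loop-and-a-half pattern in Python: the loop body reads a value, invalid entries trigger a special action before the next iteration, and a valid one escapes through a controlled jump.

    while True:
        raw = input("Enter a positive integer: ")   # access the next element
        try:
            value = int(raw)
        except ValueError:
            print("Not a number; ignored.")         # special action, then iterate
            continue
        if value > 0:
            break                                   # controlled jump out of the loop
        print("Not positive; ignored.")
    print("Accepted:", value)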
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Misconceptions <s> This article brings together, summarizes, and comments on several threads of research that have contributed to our understanding of the challenges that novice programmers face when learning about the runtime dynamics of programs and the role of the computer in program execution. More specifically, the review covers the literature on programming misconceptions, the cognitive theory of mental models, constructivist theory of knowledge and learning, phenomenographic research on experiencing programming, and the theory of threshold concepts. These bodies of work are examined in relation to the concept of a “notional machine”—an abstract computer for executing programs of a particular kind. As a whole, the literature points to notional machines as a major challenge in introductory programming education. It is argued that instructors should acknowledge the notional machine as an explicit learning objective and address it in teaching. Teaching within some programming paradigms, such as object-oriented programming, may benefit from using multiple notional machines at different levels of abstraction. Pointers to some promising pedagogical techniques are provided. <s> BIB001 | Sorva defines misconceptions as "understandings that are deficient or inadequate for many practical programming contexts" BIB001 . Some authors believe that computer science has an exceptional position in the constructivist view of knowledge as constructed by individuals or groups rather than copied from an ontological reality: in fact, the computer forms an "accessible ontological reality", and programming features many concepts that are precisely defined and implemented within technical systems [...] BIB001 .
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> sometimes a novice programmer "doesn't get" a concept or "gets it wrong" in a way that is not a harmless (or desirable) alternative interpretation. Incorrect and incomplete understandings of programming concepts result in unproductive programming behavior and dysfunctional programs <s> This article brings together, summarizes, and comments on several threads of research that have contributed to our understanding of the challenges that novice programmers face when learning about the runtime dynamics of programs and the role of the computer in program execution. More specifically, the review covers the literature on programming misconceptions, the cognitive theory of mental models, constructivist theory of knowledge and learning, phenomenographic research on experiencing programming, and the theory of threshold concepts. These bodies of work are examined in relation to the concept of a “notional machine”—an abstract computer for executing programs of a particular kind. As a whole, the literature points to notional machines as a major challenge in introductory programming education. It is argued that instructors should acknowledge the notional machine as an explicit learning objective and address it in teaching. Teaching within some programming paradigms, such as object-oriented programming, may benefit from using multiple notional machines at different levels of abstraction. Pointers to some promising pedagogical techniques are provided. <s> BIB001 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> sometimes a novice programmer "doesn't get" a concept or "gets it wrong" in a way that is not a harmless (or desirable) alternative interpretation. Incorrect and incomplete understandings of programming concepts result in unproductive programming behavior and dysfunctional programs <s> Efforts to improve computer science education are underway, and teachers of computer science are challenged in introductory programming courses to help learners develop their understanding of programming and computer science. Identifying and addressing students’ misconceptions is a key part of a computer science teacher's competence. However, relevant research on this topic is not as fully developed in the computer science education field as it is in mathematics and science education. In this article, we first review relevant literature on general definitions of misconceptions and studies about students’ misconceptions and other difficulties in introductory programming. Next, we investigate the factors that contribute to the difficulties. Finally, strategies and tools to address difficulties including misconceptions are discussed. Based on the review of literature, we found that students exhibit various misconceptions and other difficulties in syntactic knowledge, conceptual knowledge, and strategic knowledge. These difficulties experienced by students are related to many factors including unfamiliarity of syntax, natural language, math knowledge, inaccurate mental models, lack of strategies, programming environments, and teachers’ knowledge and instruction. However, many sources of students’ difficulties have connections with students’ prior knowledge. To better understand and address students’ misconceptions and other difficulties, various instructional approaches and tools have been developed. Nevertheless, the dissemination of these approaches and tools has been limited. 
Thus, first, we suggest enhancing the dissemination of existing tools and approaches and investigating their long-term effects. Second, we recommend that computing education research move beyond documenting misconceptions to address the development of students’ (mis)conceptions by integrating conceptual change theories. Third, we believe that developing and enhancing instructors’ pedagogical content knowledge (PCK), including their knowledge of students’ misconceptions and ability to apply effective instructional approaches and tools to address students’ difficulties, is vital to the success of teaching introductory programming. <s> BIB002 | According to Clancy, there are two macro-causes of misconceptions: over- or under-generalizing, and a confused computational model. High-level languages provide an abstraction over control and data, making programming simpler and more powerful but, by contrast, hiding details of the executor from the user, who can consequently find some constructs and behaviors mysterious (Clancy, 2004). Much literature about misconceptions in CSEd is available: we list some of the most important causes of misconceptions, experienced especially by novices, divided into different areas, drawn mainly from (Clancy, 2004; BIB001 ) and the works they reference. For a complete review see, for example, BIB002 .
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Programming Languages for Learning to Program <s> Most ideas come from previous ideas. The sixties, particularly in the ARPA community, gave rise to a host of notions about “human-computer symbiosis” through interactive time-shared computers, graphics screens and pointing devices. Advanced computer languages were invented to simulate complex systems such as oil refineries and semi-intelligent behavior. The soon to follow paradigm shift of modern personal computing, overlapping window interfaces, and object-oriented design came from seeing the work of the sixties as something more than a “better old thing”. That is, more than a better way: to do mainframe computing; for end-users to invoke functionality; to make data structures more abstract. Instead the promise of exponential growth in computing/$/volume demanded that the sixties be regarded as “almost a new thing” and to find out what the actual “new things” might be. For example, one would compute with a handheld “Dynabook” in a way that would not be possible on a shared mainframe; millions of potential users meant that the user interface would have to become a learning environment along the lines of Montessori and Bruner; and needs for large scope, reduction in complexity, and end-user literacy would require that data and control structures be done away with in favor of a more biological scheme of protected universal cells interacting only through messages that could mimic any desired behavior. Early Smalltalk was the first complete realization of these new points of view as parented by its many predecessors in hardware, language and user interface design. It became the exemplar of the new computing, in part, because we were actually trying for a qualitative shift in belief structures—a new Kuhnian paradigm in the same spirit as the invention of the printing press—and thus took highly extreme positions which almost forced these new styles to be invented. <s> BIB001 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Programming Languages for Learning to Program <s> In the past few decades, computer science has driven innovation across a variety of academic fields and become a robust part of democratic participation and the labor economy. Today’s youth are surrounded with applications of these new technologies that affect how they access and produce information and communicate with friends, family, and educators. Yet, though students often gain skills as “users” of these technologies in schools, too many have been denied opportunities to study computer science and produce new knowledge required to become “creators” of computing innovations. The students who do study computer science courses often represent only a narrow band of students that excludes significant numbers of girls and students of color. Further, for a field that depends on creativity, a homogenous workforce fails to take advantage of those with diverse experiences and world viewpoints that likely foster divergent and fresh thinking. This article will provide an overview of Exploring Computer Science (ECS), a curriculum and program developed to broaden participation in computing for high school students in the Los Angeles Unified School District. This program is framed around a three-pronged approach to reform: curricular development, teacher professional development, and policy work across a variety of educational institutions.
The focus is to provide the necessary structures and support to schools and teachers that leads to high quality teaching and learning in computer science classrooms. In ECS classrooms, high quality teaching and learning is viewed within the frame of inquiry-based teaching strategies that lead to deep student content learning and engagement. The incorporation of equity-based teaching practices is an essential part of setting up the classroom culture that facilitates inquiry-based learning. As the second largest and one of the most diverse districts in the United States, the Los Angeles Unified School District provides an important context to understand opportunities and obstacles encountered while engaging in institutional K-12 computer science education reform. This article will begin with an account of the educational research that provided key information about the obstacles students encounter in computer science classrooms. Next, we will describe the key elements of the ECS program. Finally, we will highlight several lessons that we have learned that inform the CS 10K campaign (see Jan Cuny’s Critical Perspective “Transforming High School Computing: A Call to Action”, this issue). <s> BIB002 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Programming Languages for Learning to Program <s> COPPER (CustOmizable Puzzle Programming EnviRonment) is a meta-configurable tool for creating coding puzzles on a grid using a blocks-based programming language, similar to puzzles in Code.org's Hour of Code. COPPER has the potential to increase student interest and engagement by allowing a teacher to customize levels for individual classes or students. Teachers can create characters for specialized puzzles by uploading pictures to customize their appearance and using the block-language to design the character's behavior. They can then place these characters onto a grid, and arrange them into a puzzle for their students to solve. A teacher can specify the goal of each coding puzzle, as well as restrict which blocks a student may use, allowing a teacher to gradually introduce programming concepts. For example, an elementary school teacher could highlight concepts from a history lesson by building a customized grid where characters from a historical context navigate around objects relevant to the topic being studied. COPPER uses Google's Blockly framework to eliminate the mental overhead of memorizing textual syntax, allowing students to focus on building computational thinking skills. Block-based languages have been shown to be more effective than text-based languages when teaching programming to first-learners. Combined with customization, COPPER has the potential to lead to higher student interest and comprehension of programming concepts in a customized context. This poster will also summarize results obtained through initial experimentation through collaboration with K-8 teachers and their students. <s> BIB003 | From a constructionist viewpoint of learning, programming languages have a major role: they are a key means for sharing artifacts and expressing one's theories of the world. The crucial part is that artifacts can be executed independently of their creator: someone's (coded) mental process can become part of the experience of others, and thus be criticized, improved, or adapted to a new project.
In fact, the origin of the notion of constructionism itself goes back to Papert's experiments with a programming environment (LOGO) designed exactly to let pupils tinker with math and geometry. Does this strategy work even when the learning objective is the programming activity itself? Can a generic programming language be used to give a concrete reification of the computational thinking of a novice programmer? Or do we need something specifically designed for this activity? Alan Kay says that programming languages can be categorized into two classes: "agglutination of features" or "crystallization of style" BIB001 . What is more important for learning effectively in a constructivist way? Features or style? In the last decade, a number of block-based programming tools have been introduced to help students have an easier time when first practicing programming. These tools, often based on web technologies, together with the spread of smartphones and tablets, opened up new ways for innovative coding concepts. In general, they focus on younger learners, support novices in their first programming steps, can be used in informal learning situations, and provide a visual language which allows students to recognize blocks instead of recalling syntax BIB003 . Many popular efforts for spreading computer science in schools, like BIB002 or the teaching material from Code.org, 5 rely on the use of such environments. In addition, such tools have been adopted into many computing classes all over the world (Meerbaum-Salant, Armoni, and Ben-Ari, 2010).
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Scheme, Racket <s> DrScheme is a programming environment for Scheme. It fully integrates a graphics-enriched editor, a parser for multiple variants of Scheme, a functional read-eval-print loop, and an algebraic printer. The environment is especially useful for students, because it has a tower of syntactically restricted variants of Scheme that are designed to catch typical student mistakes and explain them in terms the students understand. The environment is also useful for professional programmers, due to its sophisticated programming tools, such as the static debugger, and its advanced language features, such as units and mixins. Beyond the ordinary programming environment tools, DrScheme provides an algebraic stepper, a context-sensitive syntax checker, and a static debugger. The stepper reduces Scheme programs to values, according to the reduction semantics of Scheme. It is useful for explaining the semantics of linguistic facilities and for studying the behavior of small programs. The syntax checker annotates programs with font and color changes based on the syntactic structure of the program. On demand, it draws arrows that point from bound to binding occurrences of identifiers. It also supports α-renaming. Finally, the static debugger provides a type inference system that explains specific inferences in terms of a value-flow graph, selectively overlaid on the program text. <s> BIB001 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Scheme, Racket <s> DrJava is a pedagogic programming environment for Java that enables students to focus on designing programs, rather than learning how to use the environment. The environment provides a simple interface based on a "read-eval-print loop" that enables a programmer to develop, test, and debug Java programs in an interactive, incremental fashion. This paper gives an overview of DrJava including its pedagogic rationale, functionality, and implementation. <s> BIB002 | Scheme is a language originally aimed at bringing structured programming to the lands of Lisp (mainly by adding lexical scoping). The language nowadays has a wide and energetic community of users. Its importance in education, however, is chiefly related to a book, "Structure and Interpretation of Computer Programs" (SICP), which had a tremendous impact on the practice of programming education. The book derived from a semester course taught at MIT. It has the peculiarity of presenting programming as a way of organizing thinking and problem solving. Every detail of the Scheme notional machine is worked out in the book: at the end, the reader should be able to understand the mechanics of a Scheme interpreter and to program one by herself (in Scheme). The book, which enjoyed widespread adoption, was originally directed at MIT undergraduates, and it is certainly not suitable for children or even for adults without a scientific background: examples are often taken from college-level mathematics and physics. A spin-off of the SICP tradition explicitly directed at learning is Racket.
Born as 'PLT Scheme', one of its strengths is the programming environment DrScheme BIB001 (now DrRacket): it supports educational scaffolding, suggests proper documentation, and can use different flavours of the language, starting from a very basic one (the Beginning Student Language, which includes only notation for function definitions, function applications, and conditional expressions) up to multi-paradigm dialects. The DrRacket approach is supported by an online book, "How to Design Programs" (HTDP), 6 and it has been adapted to other mainstream languages, like Java BIB002 and Python. The availability of different languages directed to the progression of learning should help in overcoming what the DrRacket proponents identify as "the crucial problem" in the interaction between the learner and the programming environment: beginners make mistakes before they know much of the language, but development tools diagnose these errors as if the programmer already knew the whole notional machine. Moreover, DrRacket has a minimal interface aimed at not confusing novices, with just two simple interactive panes: a definitions area and an interactions area, which allows a programmer to ask for the evaluation of expressions that may refer to the definitions. Similarly to what happens in visual languages, Racket allows for direct manipulation of sprites; see an example in Fig. 6. The authors of HTDP claim that "program design - but not programming - deserves the same role in a liberal arts education as mathematics and language skills." They aim at systematically designed programs thanks to systematic thought, planning, and understanding from the very beginning, at every stage, and for every step. To this end, the HTDP approach is to present "design recipes", supported by predefined scaffolding that should be iteratively refined to match the problem at hand. This is indeed very close to the idea of micropatterns discussed above. 6 Current version: http://www.htdp.org/2018-01-06/Book/index.html
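HTDP's design recipes are formulated for Racket's teaching languages; purely as an analogy of ours (the function below is invented, not taken from HTDP), the same discipline - signature, purpose statement, examples, and only then the body - can be mimicked in Python with type hints and doctests.

    def fahrenheit_to_celsius(f: float) -> float:
        """Convert a temperature from degrees Fahrenheit to degrees Celsius.

        Examples (written before the body, as the recipe prescribes):
        >>> fahrenheit_to_celsius(32.0)
        0.0
        >>> fahrenheit_to_celsius(212.0)
        100.0
        """
        return (f - 32.0) * 5.0 / 9.0

    if __name__ == "__main__":
        import doctest
        doctest.testmod()   # the examples double as executable tests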
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Scratch, Snap!, Alice, and others <s> "Digital fluency" should mean designing, creating, and remixing, not just browsing, chatting, and interacting. <s> BIB001 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Scratch, Snap!, Alice, and others <s> Scratch is a visual programming environment that allows users (primarily ages 8 to 16) to learn computer programming while working on personally meaningful projects such as animated stories and games. A key design goal of Scratch is to support self-directed learning through tinkering and collaboration with peers. This article explores how the Scratch programming language and environment support this goal. <s> BIB002 | EToys' worlds with pre-defined - although programmable - objects evolved into a generic environment in which everything can be defined in terms of 'statement' blocks. Scratch BIB001 , originally written in Smalltalk, is the most popular and successful visual block-based programming environment. Launched in 2007 by the MIT Media Lab, the Scratch site has grown to more than 25 million registered members and over 29 million shared Scratch projects. Unlike traditional programming languages, graphical programming blocks are used here that automatically snap together like Lego bricks when they make syntactical sense. In visual programming languages, a block represents a command or action, and blocks are arranged in scripts; the composition of individual scripts amounts to the construction of an algorithm. The building blocks offer the possibility, e.g., of animating different objects on a stage, thus defining their behavior. The Scratch environment has some distinctive characteristics, according to its authors BIB002 . Among the ones the authors highlight, some are particularly relevant to the constructionist approach:
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Learning to Program in Teams <s> Test-Driven Development (TDD) is presented as a way of obtaining "clean code that works" and of managing fear during programming: the developer writes new code only if an automated test is first failing, and eliminates duplication afterwards. Development thus proceeds in a red/green/refactor cycle - write a little failing test, make it pass quickly, then refactor away the duplication introduced along the way. Driving development with tests in such small, verifiable steps is claimed to reduce defect density, clarify technical conversations, support minute-by-minute collaboration, and let designs grow organically, one decision at a time; the book develops two worked examples (multi-currency arithmetic and an xUnit-style testing framework) plus a collection of TDD patterns. <s> BIB001 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Learning to Program in Teams <s> School software projects, as they are common e.g. in German CS classes, traditionally apply inflexible process models, mostly an adapted waterfall model. Typically, such projects are conducted at the end of a school year. In this paper we pursue the question, if and how changing process model and time may help bringing the advantages of project based learning into play. We describe and compare practical experiences of a study with 140 students, considering four different contexts. By applying agile methods, flexibility was gained. The evaluation of the different implementations results in a more holistic and comprehensive view of projects in CSE. <s> BIB002 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Learning to Program in Teams <s> Context: Empirical investigations regarding using Agile programming methodologies in high schools are scarce in the literature. Objective: This paper evaluates (i) the performance, (ii) the code quality, and (iii) the satisfaction of both students and teachers in using Agile practices in education. Method: this study includes an experiment, administered in a laboratory controlled setting to measure students' performances and a case study to value the code quality improvements. Questionnaires were also used to evaluate qualitative aspects of Agile practices. Results: groups of students with mixed skills performed significantly better than groups with the same skill level.
Moreover, there was also a general increase in code quality along with satisfaction. Conclusions: Agile methodologies are useful in the High School education of young software developers. <s> BIB003 | Constructivist approaches often emphasize the importance of the social context in which the learning happens. Working in developer teams requires new skills, especially because software products (even the ones within the reach of novices) are often tangled with many dependencies, and division of labour is hard: it inevitably requires appropriate communication and coordination. Therefore, it is important that novice programmers learn to program in an "organized" way, discovering that as a group they are able to solve more challenging and open-ended problems, maybe with interdisciplinary contributions. To this end, agile methodologies fit well with constructivist pedagogies involving learning in teams, and they are increasingly exploited in educational settings (see for example BIB002 BIB003 ). Agile teams are typically small groups of 4-8 co-workers.
• Agile values (individuals and interactions over processes and tools; customer collaboration over contract negotiation; responding to change over following a plan; working software over comprehensive documentation) relate well with constructivist philosophies.
• Agile teams are self-organizing, emphasize the need for reflecting regularly on how to become more effective, and tune and adjust their behavior accordingly.
• Agile techniques like pair programming, test-driven development, iterative software development, and continuous integration are very attractive for a learning context.
The iterative nature of agile methods is well exemplified by test-driven development, or TDD BIB001 . This technique reverses the order between code implementation and correctness testing. Namely, the specification of the programming task at hand is actually provided as a test that defines correct behavior. The development cycle is then based on the iteration of the following procedure (a minimal code illustration follows below):
i. write a test known to fail according to the current stage of the implementation;
ii. perform the smallest code update which satisfies all tests, including the one introduced in the previous point;
iii. optionally refactor the produced code.
TDD makes testing the engine driving the overall development process: one of the hardest-to-find contributions for facilitators in an active programming learning context is suggesting a good next test. This has the role of making pupils aware that their belief at a broad level ("the program works") is false, and thus that an analogous belief at a smaller scale (for instance, "this function always returns the correct result") must be false, too. This amounts to the destruction of knowledge necessary to build new knowledge (aka a working program) in a constructivist setting. Moreover, refactoring corresponds to the constructivist re-organization of knowledge following the discovery of more viable solutions: most developing activities consist in realizing that a system which was thought to work correctly is actually not able to cope with a new test case. This applies of course also to the simplest tasks faced by students engaged in learning the basics of computer programming. Once pupils are convinced that their implementation is flawed, the localization of the code lines to be reconsidered is the other pillar of an active learning setting.
Again, a paramount contribution to a successful learning process can come from a facilitator suggesting suitable debugging techniques (e.g., proposing critical input values, suggesting points in the execution flow to be verified, or giving advice about which variables to track during the next run).
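To ground the red/green/refactor cycle sketched above, here is a minimal invented illustration using Python's standard unittest module; the function and the tests are ours, and the last test plays the role of the facilitator's "good next test".

    import unittest

    # Step ii outcome: the smallest implementation satisfying the tests so far.
    # A naive first attempt (best = 0) passes the first two tests but fails
    # test_all_negative, forcing the revision below (best = values[0]).
    def maximum(values):
        best = values[0]
        for v in values[1:]:
            if v > best:
                best = v
        return best

    class TestMaximum(unittest.TestCase):
        def test_singleton(self):
            self.assertEqual(maximum([7]), 7)

        def test_unsorted_sequence(self):
            self.assertEqual(maximum([3, 9, 1]), 9)

        # Step i: the "good next test", failing for the naive implementation.
        def test_all_negative(self):
            self.assertEqual(maximum([-5, -2, -9]), -2)

    if __name__ == "__main__":
        unittest.main()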
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Conclusions <s> This paper describes a new technique for implementing educational programming languages using tangible interface technology. It emphasizes the use of inexpensive and durable parts with no embedded electronics or power supplies. Students create programs in offline settings---on their desks or on the floor---and use a portable scanning station to compile their code. We argue that languages created with this approach offer an appealing and practical alternative to text-based and visual languages for classroom use. In this paper we discuss the motivations for our project and describe the design and implementation of two tangible programming languages. We also describe an initial case study with children and outline future research goals. <s> BIB001 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Conclusions <s> "Digital fluency" should mean designing, creating, and remixing, not just browsing, chatting, and interacting. <s> BIB002 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Conclusions <s> Learning to program is hard, also because it requires students to deal with abstraction. A program is an abstract construct: most pieces of a program are not concrete, literal values, but they are abstract symbols standing for things or actions in the system they model and control. Thus, when learning to program, novices not only need to learn about the system, but they also need to learn about the programming language. They need to think about the concrete effects in the system their abstract program constructs will cause once the program will execute. This thinking on two levels of abstraction (program and system), and in two dimensions of time (coding and execution) can cause a significant burden. In this short paper we propose to collapse those two levels. We wonder whether it would be possible to devise a programming environment where the program is the system. To do this we need languages that are the system, instead of languages that are about the system. We propose to use tangible languages as a way towards this idea. We briefly present three such languages which we used in the context of an informal learning setting and report our initial lessons learned. <s> BIB003 | The literature on learning to program through a constructionist strategy has often focused on how to bring the abstract and formal nature of programming languages into the manipulation of more concrete (or even tangible) "objects" BIB001 BIB002 BIB003 . Many proposals aim at overcoming the (initial) hurdles which textual rules of syntax may pose to children. Also, several environments have been designed in order to increase the appeal of programming by connecting this activity to real-world devices or by providing fancy libraries. Instead, more work is probably needed to make educators and learners more aware of the so-called notional machine behind the programming language. Programming environments could be more explicit about the complex relationship between the code one writes and the actions that take place when the program is executed. Moreover, micro-patterns should be exploited in order to enhance the problem-solving skills of novice programmers, so that they become able to think about the solution of problems in the typical way that makes it suitable for automatic elaboration.
Agile methodologies, now also common in professional settings, seem to fit well with constructionist learning. Besides the stress on teamwork, particularly useful is the agile emphasis on having running artifacts throughout the whole development cycle, together with the common practice of driving development with explicit or even executable "definitions of done".
A Survey on Closed Frequent Pattern Mining <s> CHARM <s> The task of mining association rules consists of two main steps. The first involves finding the set of all frequent itemsets. The second step involves testing and generating all high confidence rules among itemsets. In this paper we show that it is not necessary to mine all frequent itemsets in the first step, instead it is sufficient to mine the set of closed frequent itemsets, which is much smaller than the set of all frequent itemsets. It is also not necessary to mine the set of all possible rules. We show that any rule between itemsets is equivalent to some rule between closed itemsets. Thus many redundant rules can be eliminated. Furthermore, we present CHARM, an efficient algorithm for mining all closed frequent itemsets. An extensive experimental evaluation on a number of real and synthetic databases shows that CHARM outperforms previous methods by an order of magnitude or more. It is also linearly scalable in the number of transactions and the number of closed itemsets found. <s> BIB001 </s> A Survey on Closed Frequent Pattern Mining <s> CHARM <s> The set of frequent closed itemsets uniquely determines the exact frequency of all itemsets, yet it can be orders of magnitude smaller than the set of all frequent itemsets. In this paper, we present CHARM, an efficient algorithm for mining all frequent closed itemsets. It enumerates closed sets using a dual itemset-tidset search tree, using an efficient hybrid search that skips many levels. It also uses a technique called diffsets to reduce the memory footprint of intermediate computations. Finally, it uses a fast hash-based approach to remove any "nonclosed" sets found during computation. We also present CHARM-L, an algorithm that outputs the closed itemset lattice, which is very useful for rule generation and visualization. An extensive experimental evaluation on a number of real and synthetic databases shows that CHARM is a state-of-the-art algorithm that outperforms previous methods. Further, CHARM-L explicitly generates the frequent closed itemset lattice. <s> BIB002 </s> A Survey on Closed Frequent Pattern Mining <s> CHARM <s> Previous studies have presented convincing arguments that a frequent pattern mining algorithm should not mine all frequent patterns but only the closed ones because the latter leads to not only a more compact yet complete result set but also better efficiency. However, most of the previously developed closed pattern mining algorithms work under the candidate maintenance-and- test paradigm, which is inherently costly in both runtime and space usage when the support threshold is low or the patterns become long. In this paper, we present BIDE, an efficient algorithm for mining frequent closed sequences without candidate maintenance. It adopts a novel sequence closure checking scheme called Bl-Directional Extension and prunes the search space more deeply compared to the previous algorithms by using the BackScan pruning method. A thorough performance study with both sparse and dense, real, and synthetic data sets has demonstrated that BIDE significantly outperforms the previous algorithm: It consumes an order(s) of magnitude less memory and can be more than an order of magnitude faster. It is also linearly scalable in terms of database size. <s> BIB003 </s> A Survey on Closed Frequent Pattern Mining <s> CHARM <s> This paper presents a new classification and search method of 3D object features views. 
This method is an application of algorithms: • Charm for an object views classification purpose • Algorithm for extracting association rules in order to extract the characteristic view. We use the geometric descriptor of Zernike Moments to index 2D views of 3D object. The proposed method relies on a Bayesian probabilistic approach for search queries. The resulting outcome is presented by a collection of 120 3D models of the Princeton-based benchmark and then compared to those obtained from conventional methods. <s> BIB004 | CHARM BIB004 BIB001 BIB003 BIB002 stands for Closed Association Rule Mining, an algorithm used to mine closed frequent patterns. It explores the patternset and didset (document id set) spaces simultaneously, which lets it skip many levels quickly when identifying the closed frequent patterns. It uses two pruning strategies: candidates are pruned not only on the basis of subset infrequency, but branches are also pruned based on the non-closure property. The fundamental operations are the union of two patternsets and the intersection of their document id sets. The key features of the CHARM algorithm are: it explores both patternsets and didsets for quick mining of closed frequent patterns, and it uses a pure bottom-up approach.
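As a rough sketch of the vertical-format idea behind CHARM (patternset-didset pairs explored together, with closures obtained by intersecting document id sets), the following fragment of ours mines closed frequent itemsets from a small transaction list; it deliberately omits CHARM's diffsets, hash-based closure checks, and branch reordering.

    from collections import defaultdict

    def closed_frequent_itemsets(transactions, min_support):
        # Vertical format: item -> didset (ids of transactions containing it)
        didsets = defaultdict(set)
        for tid, items in enumerate(transactions):
            for item in set(items):
                didsets[item].add(tid)
        items = sorted(i for i, d in didsets.items() if len(d) >= min_support)

        def closure_of(dids):
            # Closure = all items present in every transaction of the didset
            return frozenset(i for i in items if dids <= didsets[i])

        closed = {}

        def explore(dids, candidates):
            for idx, item in enumerate(candidates):
                new_dids = dids & didsets[item]
                if len(new_dids) < min_support:   # subset-infrequency pruning
                    continue
                clo = closure_of(new_dids)
                closed.setdefault(clo, len(new_dids))
                # Items absorbed by the closure need not be added again
                explore(new_dids, [c for c in candidates[idx + 1:] if c not in clo])

        explore(set(range(len(transactions))), items)
        return closed

    print(closed_frequent_itemsets([{"a", "b"}, {"a", "b", "c"}, {"b", "c"}], 2))
    # e.g. -> {frozenset({'a', 'b'}): 2, frozenset({'b'}): 3, frozenset({'b', 'c'}): 2}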
A Survey on Closed Frequent Pattern Mining <s> CLOSET+ <s> Mining frequent closed itemsets provides complete and non-redundant results for frequent pattern analysis. Extensive studies have proposed various strategies for efficient frequent closed itemset mining, such as depth-first search vs. breadth-first search, vertical formats vs. horizontal formats, tree-structure vs. other data structures, top-down vs. bottom-up traversal, pseudo projection vs. physical projection of conditional database, etc. It is the right time to ask "what are the pros and cons of the strategies?" and "what and how can we pick and integrate the best strategies to achieve higher performance in general cases?" In this study, we answer the above questions by a systematic study of the search strategies and develop a winning algorithm CLOSET+. CLOSET+ integrates the advantages of the previously proposed effective strategies as well as some ones newly developed here. A thorough performance study on synthetic and real data sets has shown the advantages of the strategies and the improvement of CLOSET+ over existing mining algorithms, including CLOSET, CHARM and OP, in terms of runtime, memory usage and scalability. <s> BIB001 </s> A Survey on Closed Frequent Pattern Mining <s> CLOSET+ <s> This paper presents a new scalable algorithm for discovering closed frequent itemsets, a lossless and condensed representation of all the frequent itemsets that can be mined from a transactional database. Our algorithm exploits a divide-and-conquer approach and a bitwise vertical representation of the database and adopts a particular visit and partitioning strategy of the search space based on an original theoretical framework, which formalizes the problem of closed itemsets mining in detail. The algorithm adopts several optimizations aimed to save both space and time in computing itemset closures and their supports. In particular, since one of the main problems in this type of algorithms is the multiple generation of the same closed itemset, we propose a new effective and memory-efficient pruning technique, which, unlike other previous proposals, does not require the whole set of closed patterns mined so far to be kept in the main memory. This technique also permits each visited partition of the search space to be mined independently in any order and, thus, also in parallel. The tests conducted on many publicly available data sets show that our algorithm is scalable and outperforms other state-of-the-art algorithms like CLOSET+ and FP-CLOSE, in some cases by more than one order of magnitude. More importantly, the performance improvements become more and more significant as the support threshold is decreased. <s> BIB002 | The CLOSET+ BIB002 BIB001 algorithm is used to mine closed frequent patterns. Initially, it scans the database once to find the globally frequent items and sorts them in support-descending order to form the frequent item list; it then scans the database again and builds the FP-tree according to this list. Using a divide-and-conquer, depth-first search paradigm, it finds the closed frequent patterns, and the process stops once all the items in the global header table have been mined. The frequent closed patterns are obtained either from the result tree or from the output file. The key features of the CLOSET+ algorithm are: it uses a hybrid tree-projection method for the conditional projected databases, and it uses the horizontal data format.
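The first phase described above (a scan for globally frequent items, the support-descending frequent item list, a second scan building the FP-tree) can be sketched as follows; this is our illustration of the data structure only, and the mining of closed patterns over conditional trees is omitted.

    from collections import defaultdict

    class FPNode:
        def __init__(self, item, parent):
            self.item, self.parent = item, parent
            self.count = 0
            self.children = {}

    def build_fp_tree(transactions, min_support):
        # First scan: global item supports
        support = defaultdict(int)
        for t in transactions:
            for item in set(t):
                support[item] += 1
        f_list = [i for i, c in sorted(support.items(), key=lambda kv: -kv[1])
                  if c >= min_support]
        rank = {item: r for r, item in enumerate(f_list)}
        root = FPNode(None, None)
        header = defaultdict(list)   # header table: item -> node links
        # Second scan: insert each transaction with its items in f_list order
        for t in transactions:
            frequent_items = sorted((i for i in set(t) if i in rank), key=rank.get)
            node = root
            for item in frequent_items:
                if item not in node.children:
                    child = FPNode(item, node)
                    node.children[item] = child
                    header[item].append(child)
                node = node.children[item]
                node.count += 1
        return root, header, f_list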
A Survey on Closed Frequent Pattern Mining <s> CARPENTER <s> The growth of bioinformatics has resulted in datasets with new characteristics. These datasets typically contain a large number of columns and a small number of rows. For example, many gene expression datasets may contain 10,000-100,000 columns but only 100-1000 rows. Such datasets pose a great challenge for existing (closed) frequent pattern discovery algorithms, since they have an exponential dependence on the average row length. In this paper, we describe a new algorithm called CARPENTER that is specially designed to handle datasets having a large number of attributes and relatively small number of rows. Several experiments on real bioinformatics datasets show that CARPENTER is orders of magnitude better than previous closed pattern mining algorithms like CLOSET and CHARM. <s> BIB001 </s> A Survey on Closed Frequent Pattern Mining <s> CARPENTER <s> Microarray data typically contains a large number of columns and a small number of rows, which poses a great challenge for existing frequent (closed) pattern mining algorithms that discover patterns in item enumeration space. In this paper, we propose two algorithms that explore the row enumeration space to mine frequent closed patterns. Several experiments on real-life gene expression data show that the algorithms are faster than existing algorithms, including CLOSET, CHARM, CLOSET+ and CARPENTER. <s> BIB002 </s> A Survey on Closed Frequent Pattern Mining <s> CARPENTER <s> Unlike the traditional datasets, gene expression datasets typically contain a huge number of items and few transactions. Though there were a large number of algorithms that had been developed for mining frequent closed patterns, their running time increased exponentially with the average length of the transactions increasing. Therefore, most current methods for high-dimensional gene expression datasets were impractical. In this paper, we proposed a new data structure, tidset-prefix-plus tree (TP+-tree), to store the compressed transposed table of dataset. Based on TP+-tree, an algorithm, TP+close, was developed for mining frequent closed patterns in gene expression datasets. TP+close adopted top-down and divide-and-conquer search strategies on the transaction space. Moreover, TP+close combined efficient pruning and effective optimizing methods. Several experiments on real-life gene expression datasets showed that TP+close was faster than RERII and CARPENTER, two existing algorithms. <s> BIB003 | CARPENTER BIB001 BIB002 BIB003 is designed for datasets with a large number of attributes and relatively few rows, and it works in two steps. In the first step, it builds the transposed table of the dataset; in the second step, according to the transposed table, it constructs the row enumeration tree, which enumerates row ids in a predefined order, and searches the tree in depth-first order (the basic search itself uses no pruning). On top of this, three pruning strategies are applied: in pruning method 1, branches that cannot reach enough depth are pruned, where the depth of a node corresponds to the support; in pruning method 2, if a row rj has 100% support in the projected table of ri, the branch of rj is pruned; and in pruning method 3, at any node of the enumeration tree, if the corresponding itemset of the node has been found before, the branch rooted at this node is pruned. The key features of the CARPENTER algorithm are: it uses row enumeration for an optimized search, and it uses a depth-first approach.
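The row enumeration idea can be illustrated by the following simplified sketch of ours, which intersects the rows of each enumerated row set and keeps a pattern only when its supporting row set is maximal; it has none of CARPENTER's projected tables or pruning strategies, which is exactly why the plain search below is exponential in the number of rows.

    def row_enumeration_closed(rows, min_support):
        rows = [frozenset(r) for r in rows]
        n = len(rows)
        closed = {}

        def dfs(row_ids, pattern):
            if len(row_ids) >= min_support:
                # Closed iff row_ids is exactly the set of rows containing
                # the pattern (i.e., the supporting row set is maximal).
                supporting = [i for i in range(n) if pattern <= rows[i]]
                if len(supporting) == len(row_ids):
                    closed[pattern] = len(row_ids)
            start = row_ids[-1] + 1 if row_ids else 0
            for r in range(start, n):
                new_pattern = pattern & rows[r] if row_ids else rows[r]
                if new_pattern:
                    dfs(row_ids + [r], new_pattern)

        dfs([], frozenset())
        return closed

    print(row_enumeration_closed([{"a", "b"}, {"a", "b", "c"}, {"b", "c"}], 2))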
A Survey on Closed Frequent Pattern Mining <s> CFIM-P <s> The mining of frequent itemsets is often challenged by the length of the patterns mined and also by the number of transactions considered for the mining process. Another acute challenge that concerns the performance of any association rule mining algorithm is the presence of 'null' transactions. This work proposes a closed frequent itemset mining algorithm viz., Closed Frequent Itemset Mining and Pruning (CFIM-P) algorithm using the sub-itemset pruning strategy. CFIM-P algorithm has attempted to eliminate redundant patterns by pruning closed frequent sub-itemsets. An attempt has even been made towards eliminating the null transactions by using Vertical Data Format representation technique for finding the frequent itemsets. <s> BIB001 </s> A Survey on Closed Frequent Pattern Mining <s> CFIM-P <s> The mining of frequent itemsets is often challenged by the length of the patterns mined and also by the number of transactions considered for the mining process. Another acute challenge that concerns the performance of any association rule mining algorithm is the presence of null transactions. This work proposes a closed frequent itemset mining algorithm viz., Closed Frequent Itemset Mining and Pruning (CFIM-P) algorithm using the sub-itemset pruning strategy. CFIM-P algorithm has attempted to eliminate redundant patterns by pruning closed frequent sub-itemsets. An attempt has even been made towards eliminating the null transactions by using the vertical data format representation technique for finding the frequent itemsets. <s> BIB002 | CFIM-P BIB001 BIB002 stands for Closed Frequent Itemset Mining and Pruning, an algorithm for mining closed frequent patterns. The algorithm consists of three phases. In the first phase, it traces the null transactions and filters them out before the ensuing mining procedures. In the second phase, it mines the closed frequent patterns based on the minimum support count: if an already-mined superset with the same support exists for a subset of a frequent pattern, the subset is eliminated, proceeding in a top-down manner. Each closed frequent itemset obtained in this way is added to the list of closed frequent itemsets. In the third phase, the mined closed frequent itemsets are combined to form patterns. The key features of the CFIM-P algorithm are: it uses a top-down strategy, and it eliminates the null transactions before starting the mining process.
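The null-transaction filtering of the first phase can be read as dropping every transaction that contains no frequent item, since such a transaction cannot contribute to any frequent (let alone closed) pattern; the sketch below is our interpretation of that step (the papers realize it through a vertical data format).

    from collections import Counter

    def filter_null_transactions(transactions, min_support):
        counts = Counter(item for t in transactions for item in set(t))
        frequent = {item for item, c in counts.items() if c >= min_support}
        # A "null" transaction holds no frequent item at all
        return [t for t in transactions if frequent & set(t)]

    data = [{"a", "b"}, {"x"}, {"a", "c"}, {"y"}]
    print(filter_null_transactions(data, 2))   # e.g. -> [{'a', 'b'}, {'a', 'c'}]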
Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Demand node model <s> This paper presents an automatic cellular network design algorithm which determines the location of transmitters with respect to co-channel interference (CCI). The proposed method is capable of maximizing the average CCI ratio in the planning region while optimizing the covered teletraffic demand. Additionally, we investigate how the proposed algorithm can be extended for locating micro- and macro-cells. <s> BIB001 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Demand node model <s> We consider the following optimization problem for UMTS networks: For a specified teletraffic demand and possible base station locations, choose positions for base stations such that the construction costs are below a given limit, as much teletraffic as possible is supplied, the ongoing costs are minimal, and the intra-cell interference in the range of each base station is low. We prove that for a particular specification of teletraffic (the so called demand node concept), this problem has a polynomial-time approximation scheme, but cannot have a fully polynomial-time approximation scheme unless P = NP. <s> BIB002 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Demand node model <s> In UMTS networks, radio planning cannot only be based on signal predictions but it must also consider the traffic distribution, the power control mechanism as well as the power limits and the signal quality constraints. In this paper we propose mathematical programming models aimed at supporting the decisions in the process of planning where to locate the new base stations and which configuration (e.g., maximum emission power) to select for each of them from a given set of possibilities. In the proposed models we assume as quality measure the signal-to-interference ratio (SIR) and consider two power control mechanisms which keep the received signal power or the SIR at a given target value. Computational results obtained with greedy randomized algorithms are also reported for uplink instances generated using classical propagation models. <s> BIB003 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Demand node model <s> We propose a new solution to the problem of positioning base station transmitters of a mobile phone network and assigning frequencies to the transmitters, both in an optimal way. Since an exact solution cannot be expected to run in polynomial time for all interesting versions of this problem (they are all NP-hard), our algorithm follows a heuristic approach based on the evolutionary paradigm. For this evolution to be efficient, i.e., goal-oriented and sufficiently random at the same time, problem-specific knowledge is embedded in the operators. The problem requires both the minimization of the cost and of the channel interference. We examine and compare two standard multiobjective techniques and a new algorithm - the steady-state evolutionary algorithm with Pareto tournaments. One major finding of the empirical investigation is a strong influence of the choice of the multiobjective selection method on the utility of the problem-specific recombination leading to a significant difference in the solution quality. <s> BIB004 | Since its introduction, the concept of demand nodes has been used in many different works (e.g. BIB001 BIB002 BIB003 BIB004 ).
The basic idea is that a demand node represents the centre of an area where traffic is generated by the users. The main advantage of this model is that, by combining the traffic of a small region into a single point, the computational requirements are drastically reduced; the drawback is that the realism of the model is also reduced. Each demand node comprises a number of test points, hence the need for fewer nodes; however, merging test points into a single demand node has the same effect as applying a lossy compression mechanism: the resolution is reduced. Most of the research work using this model also allows total freedom as regards the positioning of candidate sites. This permits a uniform distribution of the sites over the full area to be covered, which is usually not possible in practice, as a site cannot simply be placed anywhere, e.g. in the middle of a motorway.
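As a rough illustration of the compression idea (not taken from any of the cited papers), the sketch below bins traffic test points into a square grid and replaces each bin by a single demand node placed at its traffic-weighted centre; the grid size is an assumed parameter that controls how much resolution is lost.

```python
from collections import defaultdict

def build_demand_nodes(test_points, cell_size=200.0):
    """Merge (x, y, erlangs) test points into demand nodes by grid binning."""
    bins = defaultdict(lambda: [0.0, 0.0, 0.0])  # (gx, gy) -> [sum_wx, sum_wy, traffic]
    for x, y, erlangs in test_points:
        acc = bins[(int(x // cell_size), int(y // cell_size))]
        acc[0] += x * erlangs
        acc[1] += y * erlangs
        acc[2] += erlangs
    # Each demand node sits at the traffic-weighted centre of its bin.
    return [(sx / t, sy / t, t) for sx, sy, t in bins.values() if t > 0]

points = [(10, 20, 0.5), (40, 70, 1.0), (950, 310, 0.2)]
print(build_demand_nodes(points))
```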
Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Disc model <s> In this paper we introduce the minimum-order approach to frequency assignment and present a theory which relates this approach to the traditional one. This new approach is potentially more desirable than the traditional one. We model assignment problems as both frequency-distance constrained and frequency constrained optimization problems. The frequency constrained approach should be avoided if distance separation is employed to mitigate interference. A restricted class of graphs, called disk graphs, plays a central role in frequency-distance constrained problems. We introduce two generalizations of chromatic number and show that many frequency assignment problems are equivalent to generalized graph coloring problems. Using these equivalences and recent results concerning the complexity of graph coloring, we classify many frequency assignment problems according to the "execution time efficiency" of algorithms that may be devised for their solution. We discuss applications to important real world problems and identify areas for further work. <s> BIB001 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Disc model <s> This paper presents an innovative algorithm for automatic base station placement and dimensioning. A highly efficient optimization strategy forms the core of the proposed algorithm that determines the number of base stations, their sites, and parameters to achieve a high-quality network that meets the requirements of area coverage, traffic capacity, and interference level, while trying to minimize system costs, including the frequency and financial costs. First, the hierarchical approach is outlined and it is applied to place base stations (BSs) for a large-scale network design. Also a fuzzy expert system is developed to exploit the expert experience to adjust BS parameters, e.g., the transmitted power, to improve the network performance. Simulation results are presented and analyzed. <s> BIB002 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Disc model <s> This paper considers automatic cell planning for mobile radio network design. The problem is how to determine the number of cells, the optimal cell sites and parameters in order to meet the system requirements while minimizing the costs involved, including the spectral cost and financial cost. Our solution consists of three parts. First, a fuzzy expert system is used to adjust the parameters of each cell. Second, an optimization strategy using the genetic algorithm is proposed to find the optimal cell sites. Furthermore, we develop an algorithm based on the technique of cell splitting to accommodate the traffic growth in an existing network. <s> BIB003 | The first use of disc (circle) graphs in the design of cellular networks was in BIB001 , where it was applied to solve the frequency assignment problem. Later extensions to this model consider intersections among discs and non-uniform traffic distributions (Huang et al. 2000a,b,c) . The main advantage of the approach presented in BIB002 is that it is possible to take into account different goals related to the design of the network; thus, the problems of cell planning and frequency assignment can be addressed simultaneously. Furthermore, the computational costs are not high. 
The main drawback of the disc model is that it assumes an ideal propagation model, so all the cells have the same shape. Even though the size of the cells can vary according to a non-uniform traffic distribution BIB003 , the shape is always a circle. Another issue is that sites may be located anywhere, so the same problems as in the demand node model arise.
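A minimal sketch of the disc model follows; the power-to-radius mapping and all the numbers are assumptions chosen only to show the core idea that every cell is an ideal circle centred on its BTS.

```python
import math

def disc_coverage(stations, demand_nodes):
    """Traffic covered under the disc model: a node is served if it falls
    inside at least one circular cell."""
    def radius(power_level):
        # Assumed monotone power -> radius mapping, purely illustrative.
        return 100.0 * math.sqrt(power_level)

    covered_traffic = 0.0
    for nx, ny, erlangs in demand_nodes:
        if any(math.hypot(nx - sx, ny - sy) <= radius(p)
               for sx, sy, p in stations):
            covered_traffic += erlangs
    return covered_traffic

stations = [(0.0, 0.0, 4.0)]             # one BTS at the origin, radius 200
nodes = [(50.0, 50.0, 1.0), (300.0, 0.0, 2.0)]
print(disc_coverage(stations, nodes))    # only the first node is covered -> 1.0
```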
Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Cell and test point model <s> In this paper, the evolution of mobile radio network is presented. First of all, the network life cycle is considered. A mathematical modeling of these life periods is developed inside an optimization problem: optimal location of base stations. It is a combinatorial optimization problem. A multi-period model is built on a concentrator link approach. Finally, three different multi-period techniques are identified, they are based on using the genetic algorithm (GA) to tackle this problem of the design of microcellular networks. <s> BIB001 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Cell and test point model <s> The antenna-positioning problem concerns finding a set of sites for antennas from a set of pre-defined candidate sites, and for each selected site, to determine the number and types of antennas, as well as the associated values for each of the antenna parameters. All these choices must satisfy a set of imperative constraints and optimize a set of objectives. This paper presents a heuristic approach for tackling this complex and highly combinatorial problem. The proposed approach is composed of three phases: a constraint-based pre-processing phase to filter out bad configurations, an optimization phase using tabu search, and a post-optimization phase to improve solutions given by tabu search. To validate the approach, computational results are presented using large and realistic data sets. <s> BIB002 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Cell and test point model <s> This paper deals with the automatic selection and configuration of base station sites for mobile cellular networks. An optimization framework based on simulated annealing is used for site selection and for base-station configuration. Realistic path-loss estimates incorporating terrain data are used. The configuration of each base station involves selecting antenna type, power control, azimuth, and tilt. Results are presented for several design scenarios with between 250 and 750 candidate sites and show that the optimization framework can generate network designs with desired characteristics such as high area coverage and high traffic capacity. The work shows that cellular network design problems are tractable for realistic problem instances. <s> BIB003 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Cell and test point model <s> The placement of antennas is an important step in the design of mobile radio networks. We introduce a model for the antenna placement problem (APP) that addresses cover, traffic demand, interference, different parameterized antenna types, and the geometrical structure of cells. The resulting optimization problem is constrained and multiobjective. We present an evolutionary algorithm, capable of dealing with more than 700 candidate sites in the working area. The results show that the APP is tractable. The automatically generated designs enable experts to focus their efforts on the difficult parts of a network design problem. <s> BIB004 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Cell and test point model <s> The antenna placement problem, or cell planning problem, involves locating and configuring infrastructure for cellular wireless networks. 
From candidate site locations, a set needs to be selected against objectives relating to issues such as financial cost and service provision. This is an NP-hard optimization problem and consequently heuristic approaches are necessary for large problem instances. In this study, we use a greedy algorithm to select and configure base station locations. The performance of this greedy approach is dependent on the order in which the candidate sites are considered. We compare the ability of four state-of-the-art multiple objective genetic algorithms to find an optimal ordering of potential base stations. Results and discussion on the performance of the algorithms are provided. <s> BIB005 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Cell and test point model <s> Cellular network design is a major issue in second generation GSM mobile telecommunication systems. In this paper, a new model of the problem in its full practical complexity, based on multiobjective constrained combinatorial optimization, has been used. We propose an evolutionary algorithm that aims at approximating the Pareto frontier of the problem, which removes the need for a cellular network designer to rank or weight objectives a priori. Specific coding scheme and genetic operators have been designed. Advanced intensification and diversification search techniques, such as elitism and adaptive sharing, have been used. Three complementary hierarchical parallel models have been designed to improve the solution quality and robustness, to speed-up the search and to solve large instances of the problem. The obtained Pareto fronts and speed-ups on different parallel architectures show the efficiency and the scalability of the parallel model. Performance evaluation of the algorithm has been carried out on different realistic benchmarks. The obtained results show the impact of the proposed parallel models and the introduced search mechanisms. <s> BIB006 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Cell and test point model <s> Cellular network design is a major issue in mobile telecommunication systems. In this paper, a model of the problem in its full practical complexity, based on multiobjective constrained combinatorial optimization, has been investigated. We adopted the Pareto approach at resolution in order to compute a set of diversified non-dominated networks, thus removing the need for the designer to rank or weight objectives a priori. We designed and implemented a ''ready-to-use'' platform for radio network optimization that is flexible regarding both the modeling of the problem (adding, removing, updating new antagonist objectives and constraints) and the solution methods. It extends the ''white-box'' ParadisEO framework for metaheuristics applied to the resolution of mono/multi-objective Combinatorial Optimization Problems requiring both the use of advanced optimization methods and the exploitation of large-scale parallel and distributed environments. Specific coding scheme and genetic and neighborhood operators have been designed and embedded. On the other side, we make use of many generic features related to advanced intensification and diversification search techniques, hybridization of metaheuristics and grid computing for the distribution of the computations. They aim at improving the quality of networks and their robustness. 
They also allow, to speed-up the search and obtain results in a tractable time, and so efficiently solving large instances of the problem. Using three realistic benchmarks, the computed networks and speed-ups on different parallel and/or distributed architectures show the efficiency and the scalability of hierarchical parallel hybrid models. <s> BIB007 | Although this model is known thanks to the works of BIB001 Caminada (1998a,b, 2001) , it first appeared in . In it, the working area is discretized into a set of test points spread over the whole area. These test points are used to measure the signal strength in the region where the network operator intends to service the traffic demand of a set of customers. Three subsets of test points are defined: reception test points (RTPs), where the signal quality is tested; service test points (STPs), where the signal quality must exceed a minimum threshold to be usable by customers; and traffic test points (TTPs), where a certain amount of traffic (measured in Erlangs) is associated with each customer. In this model, the set of candidate site locations does not have to be uniformly distributed over the terrain, so it is a better representation of the scenarios presented by the operators. Its main advantage is that it allows all the network objectives (such as coverage and capacity) to be measured. There is, however, a clear drawback: the computational cost increases because a high number of points is usually needed (e.g. test points every 200 meters) in order to increase the realism. This realism is the main reason why this model is widely adopted in the literature (e.g. BIB002 BIB003 BIB004 BIB005 BIB006 BIB007 ).
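The evaluation logic suggested by this model can be sketched as follows. The signal predictor `field`, the threshold and the toy numbers are assumptions; real studies plug in the propagation models discussed later and use far richer objectives.

```python
def evaluate_network(bts_list, stps, ttps, threshold, field):
    """Coverage is checked on the service test points (STPs) and carried
    traffic on the traffic test points (TTPs)."""
    best = {p: max(field(b, *p) for b in bts_list) for p in stps}
    covered = [p for p, s in best.items() if s >= threshold]
    coverage_ratio = len(covered) / len(stps)
    # A TTP contributes its Erlangs only if its location is serviceable.
    served = sum(e for (x, y), e in ttps
                 if max(field(b, x, y) for b in bts_list) >= threshold)
    return coverage_ratio, served

def toy_field(bts, x, y):
    # Crude linear path-loss stand-in for a real propagation model (assumed).
    bx, by, power = bts
    d = ((x - bx) ** 2 + (y - by) ** 2) ** 0.5
    return power - 20.0 * (d / 100.0)

bts = [(0.0, 0.0, 30.0)]
stps = [(10.0, 10.0), (500.0, 500.0)]
ttps = [((10.0, 10.0), 2.5), ((500.0, 500.0), 1.0)]
print(evaluate_network(bts, stps, ttps, threshold=0.0, field=toy_field))
```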
Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP problems addressed by EAs <s> Abstract This paper describes and analyzes CHC, a nontraditional genetic algorithm which combines a conservative selection strategy that always preserves the best individuals found so far with a radical (highly disruptive) recombination operator that produces offspring that are maximally different from both parents. The traditional reasons for preferring a recombination operator with a low probability of disrupting schemata may not hold when such a conservative selection strategy is used. On the contrary, certain highly disruptive crossover operators provide more effective search. Empirical evidence is provided to support these claims. <s> BIB001 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP problems addressed by EAs <s> Genetic algorithms (GAs) are biologically motivated adaptive systems which have been used, with varying degrees of success, for function optimization. In this study, an abstraction of the basic genetic algorithm, the Equilibrium Genetic Algorithm (EGA), and the GA in turn, are reconsidered within the framework of competitive learning. This new perspective reveals a number of different possibilities for performance improvements. This paper explores population-based incremental learning (PBIL), a method of combining the mechanisms of a generational genetic algorithm with simple competitive learning. The combination of these two methods reveals a tool which is far simpler than a GA, and which out-performs a GA on large set of optimization problems in terms of both speed and accuracy. This paper presents an empirical analysis of where the proposed technique will outperform genetic algorithms, and describes a class of problems in which a genetic algorithm may be able to perform better. Extensions to this algorithm are discussed and analyzed. PBIL and extensions are compared with a standard GA on twelve problems, including standard numerical optimization functions, traditional GA test suite problems, and NP-Complete problems. <s> BIB002 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP problems addressed by EAs <s> This paper uses a realistic combinatorial optimization problem as an example to show how a genetic algorithm can be parallelized in an efficient way. The problem considered is the selection of the best set of transmitter locations in order to cover a given geographical region at optimal cost. It is shown that it is possible to obtain good solutions to the problem even with a very low communication load. The parallel program is tested, first on an artificial example, then on a real-life case. <s> BIB003 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP problems addressed by EAs <s> Cellular network operators are dealing with complex problems when planning the network operation. In order to automate the planning process, the development of simulation and optimization tools are under research. In this paper genetic algorithms with three different approaches are studied in order to optimize the base station sites. This research shows that a proper approach in developing the individual structure and fitness function has crucial importance in solving practical base station siting problems with genetic algorithms. 
<s> BIB004 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP problems addressed by EAs <s> With the imminent introduction of the 3G systems throughout the world precise cell planning in macrocell, microcell and picocell environments have become equally important. Beside coverage of the basic radio link quality parameter others such as rms delay spread and a measure of the system capacity have become increasingly important. Our contribution addresses the planning inside microcells based on a 3D deterministic ray-tracing propagation tool. It is based on the IHE model (Cichon, 1984) and a simple genetic algorithm (SGA) for the base station location optimization. At this stage the optimization is based on coverage and rms delay spread considerations. Our algorithm has as inputs the delay spread threshold and the minimum field strength. The cost function to be minimized is the number of locations in which the values of these parameters are above the threshold in the case of delay spread, and respectively below the threshold in the case of the field strength. <s> BIB005 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP problems addressed by EAs <s> In this article, evolutionary algorithms (EAs) are applied to solve the radio network design problem (RND). The task is to find the best set of transmitter locations in order to cover a given geographical region at an optimal cost. Usually, parallel EAs are needed to cope with the high computational requirements of such a problem. Here, we develop and evaluate a set of sequential and parallel genetic algorithms (GAs) to solve the RND problem efficiently. The results show that our distributed steady state GA is an efficient and accurate tool for solving RND that even outperforms existing parallel solutions. The sequential algorithm performs very efficiently from a numerical point of view, although the distributed version is much faster. <s> BIB006 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP problems addressed by EAs <s> The antenna placement problem, or cell planning problem, involves locating and configuring infrastructure for cellular wireless networks. From candidate site locations, a set needs to be selected against objectives relating to issues such as financial cost and service provision. This is an NP-hard optimization problem and consequently heuristic approaches are necessary for large problem instances. In this study, we use a greedy algorithm to select and configure base station locations. The performance of this greedy approach is dependent on the order in which the candidate sites are considered. We compare the ability of four state-of-the-art multiple objective genetic algorithms to find an optimal ordering of potential base stations. Results and discussion on the performance of the algorithms are provided. <s> BIB007 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP problems addressed by EAs <s> An application of an automatic tool for the planning of a cellular GSM network in a real environment is presented. The basic principles of the algorithm were previously developed by the authors, and in this paper solutions to the problems arising from its application to a real town are proposed. 
One of the main issues concerns the field prediction models, so two algorithms are used in the real environment: one is based on an artificial neural network (ANN), and one on the Cost231 model, modified for hilly terrain, and in both cases the position and the height of the buildings are considered in detail. The whole planning procedure is applied to the town of Ancona and the results give the optimized location of the radio base stations (RBS), the heights of their antennas, and their transmitted power <s> BIB008 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP problems addressed by EAs <s> Cellular network design is a major issue in mobile telecommunication systems. In this paper, a model of the problem in its full practical complexity, based on multiobjective constrained combinatorial optimization, has been investigated. We adopted the Pareto approach at resolution in order to compute a set of diversified non-dominated networks, thus removing the need for the designer to rank or weight objectives a priori. We designed and implemented a ''ready-to-use'' platform for radio network optimization that is flexible regarding both the modeling of the problem (adding, removing, updating new antagonist objectives and constraints) and the solution methods. It extends the ''white-box'' ParadisEO framework for metaheuristics applied to the resolution of mono/multi-objective Combinatorial Optimization Problems requiring both the use of advanced optimization methods and the exploitation of large-scale parallel and distributed environments. Specific coding scheme and genetic and neighborhood operators have been designed and embedded. On the other side, we make use of many generic features related to advanced intensification and diversification search techniques, hybridization of metaheuristics and grid computing for the distribution of the computations. They aim at improving the quality of networks and their robustness. They also allow, to speed-up the search and obtain results in a tractable time, and so efficiently solving large instances of the problem. Using three realistic benchmarks, the computed networks and speed-ups on different parallel and/or distributed architectures show the efficiency and the scalability of hierarchical parallel hybrid models. <s> BIB009 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP problems addressed by EAs <s> Radio network design (RND) is a fundamental problem in cellular networks for telecommunications. In these networks, the terrain must be covered by a set of base stations (or antennae), each of which defines a covered area called cell. The problem may be reduced to figure out the optimal placement of antennae out of a list of candidate sites trying to satisfy two objectives: to maximize the area covered by the radio signal and to reduce the number of used antennae. Consequently, RND is a bi-objective optimization problem. Previous works have solved the problem by using single-objective techniques which combine the values of both objectives. The used techniques have allowed to find optimal solutions according to the defined objective, thus yielding a unique solution instead of the set of Pareto optimal solutions. In this paper, we solve the RND problem using a multi-objective version of the algorithm CHC, which is the metaheuristic having reported the best results when solving the single-objective formulation of RND. 
This new algorithm, called MOCHC, is compared against a binary-coded NSGA-II algorithm and also against the provided results in the literature. Our experiments indicate that MOCHC outperforms NSGA-II and, more importantly, it is more efficient finding the optimal solutions than single-objective techniques. <s> BIB010 </s> The base station placement problem, with n potential candidate sites, is NP-hard with 2^n solutions (Mathar and Niessen, Wirel. Netw. 6, 421-428, 2000). When dimensioned on m unknown variable settings (e.g., number of power settings + number of tilt settings, etc.) the computational complexity becomes (m+1)^n (Raisanen, PhD. thesis, 2006). We introduce a novel approach to reduce the computational complexity by dimensioning sites only once to guarantee traffic hold requirements are satisfied. This approach works by determining the maximum set of service test points candidate sites can handle without exceeding a hard traffic constraint, T_MAX. Following this, the ability of two evolutionary strategies (binary and permutation-coded) to search for the minimum set cover is compared. This reverses the commonly followed approach of achieving service coverage first and then dimensioning to meet traffic hold. To test this approach, three realistic GSM network simulation environments are engineered, and a series of tests performed. Results indicate this approach can quickly meet network operator objectives. <s> BIB011 | The following columns are used in Tables 1 and 2 to characterize each reviewed work:
• Cell: indicates how the cell or service area of the BTSs is computed.
• Pw, Ti and Az: show, respectively, whether the power, tilt and azimuth of the BTSs are optimized. These are the most common settings adjusted when BTS dimensioning is addressed.
• Objectives: the different aspects of the cellular network that are optimized.
• Constraints: aspects of the cellular network that are considered as constraints during the optimization process.
From the algorithmic point of view, classic GAs have been used in the literature for solving the ACP problem, both generational (genGA) and steady-state ones (ssGA); indeed, they are applied in almost 50% of the works reviewed. Rather specific evolutionary techniques such as CHC BIB001 , differential evolution (DE, Storn and Price 1995), PBIL BIB002 , or artificial immune systems (AIS, de Melo Carvalho Filho and de Alencar 2008) are also found. Not only sequential approaches exist, but also parallel models deployed on standard parallel platforms such as clusters of computers (dGAs, Calégari et al. 2001, Alba and BIB006 ) and even grid computing systems BIB009 . If multiobjective approaches are considered, NSGA-II and SPEA2 , the two best-known algorithms in the evolutionary multiobjective research community, have been applied in eight of the analysed works. Other specific multiobjective algorithms used are SEAMO BIB007 and MOCHC BIB010 . From the point of view of the formulation, the first proposals adopted a single-objective approach in which the different network aspects to be optimized are weighted into a single (aggregative) function BIB003 BIB004 , Reininger et al. 1999 . However, recent advances in multiobjective EAs have meant that the number of works using the multiobjective formulation has increased in recent years BIB008 BIB010 BIB009 BIB011 .
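The difference between the two formulations can be sketched in a few lines; the weights and the toy network summary below are illustrative assumptions, not values taken from any surveyed work.

```python
def coverage(net):  # toy accessors for an illustrative network summary
    return net["coverage"]

def cost(net):
    return net["sites"]

def weighted_fitness(net, w_cov=100.0, w_cost=1.0):
    """Single-objective (aggregative) formulation: one scalar to maximize."""
    return w_cov * coverage(net) - w_cost * cost(net)

def dominates(a, b):
    """Pareto formulation: maximize coverage, minimize the number of sites.
    No weighting is needed; incomparable networks are both kept."""
    no_worse = coverage(a) >= coverage(b) and cost(a) <= cost(b)
    better = coverage(a) > coverage(b) or cost(a) < cost(b)
    return no_worse and better

a = {"coverage": 0.95, "sites": 40}
b = {"coverage": 0.90, "sites": 45}
print(weighted_fitness(a), weighted_fitness(b), dominates(a, b))  # a dominates b
```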
Figure 2 summarizes the number of reviewed contributions that fall into different categories: mono/multi-objective formulation, ACP model, site selection, cell shape computation, and BTS parameter optimization. Each group of columns of the figure is now analysed. In the first group, considering all the works reviewed, mono-objective formulations have been more widely used, in spite of the fact that the ACP problem is naturally amenable to multiobjective ones. The additional complexity added by the Pareto optimality mechanisms makes ACP researchers reluctant to adopt this kind of technique. However, the multiobjective approach may be the most appropriate because it can provide the decision maker (network designer) with a set of different configurations for the BTSs, none of which is better than the others (non-dominated). These configurations could be used in particular scenarios that may appear during the operational lifetime of the network. The second group of columns shows the ACP models used in the analysed contributions. The demand node and test point models are clearly the most widely adopted: simplicity and low computational requirements in the former case, and realism in the latter, explain these facts. The disc model is mostly confined to theoretical studies; indeed, cellular networks composed exclusively of omnidirectional antennae are hardly found in the real world (sectorization allows the network capacity to be greatly increased). Looking at the third group of columns in Figure 2, it can be observed that using a candidate site list (CSL) instead of freely placing the BTSs at any location of the network is the most common option. This is because network operators are unlikely to be granted such freedom (e.g. no BTS can be placed near schools or in the middle of a lake). The fourth group of columns reflects the preferred choice for computing the cells (serving areas) of the BTSs: propagation models such as the free space model, the Okumura-Hata model or the Walfisch-Ikegami model (COST231 1991); the choice of one model or another depends mainly on the computational effort required (ITU 1997). Omnidirectional and square cells also appear in several contributions (eight and six works, respectively). Tables 1 and 2 include alternative methods for computing the cell associated with BTSs, such as modern ray tracing techniques BIB005 . Finally, the last group of columns summarizes the number of articles in which the power, tilt and azimuth are involved in the optimization process, that is, in which they are decision variables of the search space. Even though the differences here are smaller, it can be seen that the power parameter is more often optimized than the other two. It applies to any kind of BTS (omnidirectional, directive, square, etc.) as the main setting to manage the cell size. The tilt and azimuth angles usually appear in very accurate ACP models; they normally lead to computationally expensive tasks, which explains their lower incidence in the literature. To conclude this discussion of the analysed works, the objective functions and the constraints used in the different approaches are now examined. On the objectives side, a clear trend exists towards considering the network cost, measured in terms of the number of installed sites, and the quality of service (QoS) provided by these sites. These two objective functions are clearly contradictory. The main difference between many contributions lies in the concept of QoS.
Maximizing the network coverage is the most widely used option, appearing in 78% of the reviewed contributions. However, a more realistic approach is to use such an objective as a constraint (e.g. at least 90% of the network must be covered) so as to discard useless configurations. Indeed, it does not make any sense to deploy an expensive, fully operational network infrastructure just to cover a small percentage of a given target area. Other ways of measuring the network QoS in the literature take into consideration the interference caused by cell overlapping or the traffic capacity of the network. As to the constraints, handover, i.e. the capability of the network to guarantee continuous communication while the mobile user is moving from one cell to another, is the one that appears most often.
Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Details on EAs for the ACP problem <s> This paper uses a realistic combinatorial optimization problem as an example to show how a genetic algorithm can be parallelized in an efficient way. The problem considered is the selection of the best set of transmitter locations in order to cover a given geographical region at optimal cost. It is shown that it is possible to obtain good solutions to the problem even with a very low communication load. The parallel program is tested, first on an artificial example, then on a real-life case. <s> BIB001 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Details on EAs for the ACP problem <s> Cellular network operators are dealing with complex problems when planning the network operation. In order to automate the planning process, the development of simulation and optimization tools are under research. In this paper genetic algorithms with three different approaches are studied in order to optimize the base station sites. This research shows that a proper approach in developing the individual structure and fitness function has crucial importance in solving practical base station siting problems with genetic algorithms. <s> BIB002 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Details on EAs for the ACP problem <s> With the imminent introduction of the 3G systems throughout the world precise cell planning in macrocell, microcell and picocell environments have become equally important. Beside coverage of the basic radio link quality parameter others such as rms delay spread and a measure of the system capacity have become increasingly important. Our contribution addresses the planning inside microcells based on a 3D deterministic ray-tracing propagation tool. It is based on the IHE model (Cichon, 1984) and a simple genetic algorithm (SGA) for the base station location optimization. At this stage the optimization is based on coverage and rms delay spread considerations. Our algorithm has as inputs the delay spread threshold and the minimum field strength. The cost function to be minimized is the number of locations in which the values of these parameters are above the threshold in the case of delay spread, and respectively below the threshold in the case of the field strength. <s> BIB003 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Details on EAs for the ACP problem <s> Several optimization approaches have been compared to optimally locate and power size the RBSs of a GSM network, in order to efficiently dimension the transceiver properties without exceeding radioprotection limits. Such optimization tools are embedded within a planning system where several modules interact, exploiting each other’s features. In particular, the optimum planning tools exploit the available radiopropagation models implemented to take into account the different features of current scenarios. Tabu Search methods appear as robust and viable tools for this purpose. I. Introduction In planning modern telecommunication systems, critical parameters are related to locating, power sizing and tilting the basic elements of wireless networks such as radio base stations (RBSs). 
Network designers need to rely on sophisticated and user friendly tools in order to accurately estimate the electromagnetic (EM) field levels in complex environments and to cheaply and efficiently dimension the transceiver properties without exceeding radioprotection limits. In order to meet such requirements, a planning system, based on a EM predictioning (EMP) tool, enclosing several radiopropagation models, interconnected with a optimum planning tool (OPT) in which several optimization routines are embedded, was developed ([1]-[3]). In this paper we focus on optimization approaches, comparing metaheuristics such as Tabu Search (TS) and Genetic Algorithms (GA), with analytical methods provided by commercial software packages, solving Mixed Integer Linear and Non Linear Programming Models (MIL/NLPM). In section II the implemented radiopropagation approaches are summarized. Section III is devoted to optimization strategies while computational results are discussed in section IV and conclusions are drawn in section V. II. EM field level estimation and radiopropagation models EM field estimation can be performed by means of several radiopropagation models depending on the geographical properties of the observed scenario. Friis’s formula works well in line-of-sight (LOS) condition, but when applied to urban scenarios, it often results in the over-estimation of actual values. More accurate results can be obtained by more sophisticated approaches, such as the empirical COST 231 Okumura-Hata model ([4]-[5]), the semi-empirical COST 231 Walfisch-Ikegami model ([6]-[7]), and a simple ray-tracing algorithm in order to perform EM field calculation in small geographic areas, in which is considered the first reflection contribution and/or the “single-knife-edge” effects. <s> BIB004 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Details on EAs for the ACP problem <s> The authors present a method for planning a base station's position in a mobile communication system taking into account both the requirement to minimise the environmental impact of the radiated electromagnetic fields and the requirement to assure a suitable quality of service, i.e. C/I ratio, coverage, efficiency, served traffic. The model is based on five functionals and the overall optimisation procedure is carried out by a genetic algorithm. As an example of its application, the proposed method is applied to an imaginary town, subdivided into areas with different constraints for the previously mentioned requirements. Results reported show the behaviour of each functional, as well as the global optimisation of the network. <s> BIB005 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Details on EAs for the ACP problem <s> An application of an automatic tool for the planning of a cellular GSM network in a real environment is presented. The basic principles of the algorithm were previously developed by the authors, and in this paper solutions to the problems arising from its application to a real town are proposed. One of the main issues concerns the field prediction models, so two algorithms are used in the real environment: one is based on an artificial neural network (ANN), and one on the Cost231 model, modified for hilly terrain, and in both cases the position and the height of the buildings are considered in detail. 
The whole planning procedure is applied to the town of Ancona and the results give the optimized location of the radio base stations (RBS), the heights of their antennas, and their transmitted power. <s> BIB006 | This section reviews the main features of the EAs found in the literature for solving the ACP problem. The potential advantages and drawbacks of each algorithm are analysed in light of their corresponding encoding schemes, genetic operators, local search and parallelization. The first scheme, binary encoding, is used when the optimization task is simply to position the BTSs of the network by selecting a subset of sites from a candidate site list (CSL). In this case, EAs work on bit strings of length N, where N is the total number of candidate sites. Each position of the bit string corresponds to a site, i.e. the ith position represents the ith site; the ith bit is 1 if the ith site is selected, and 0 otherwise. This approach is especially used when solving ACP problems that follow the demand node model (see Section 2.2.1): BIB001 , Chamaret and Condevaux-Lanloy (1998), BIB002 . Binary encoding has also been used when the BTSs can be freely placed anywhere in the geographical area of the network (no CSL exists). In this case, the bit string encodes the binary representation of a list of real numbers that represent the (x, y) coordinates of the sites. However, in all the material analysed, the tentative solutions also include one or more values that allow the BTSs to be dimensioned (i.e. allow the BTS service area to be configured). Indeed, in BIB004 and , the binary string also encodes the emission power level. In the works of BIB005 and BIB006 , the authors have included not only the emission power but also the tilt of the antennae; thus, for each BTS, 24 bits are used: 9 + 9 bits for the coordinates, 3 bits for the radiated power, and 3 bits for the tilt. BIB003 have simply added the height of the BTSs. The main advantage of this binary encoding is that it allows the evolutionary search to be performed by means of classical EA operators, originally developed to manipulate binary genotypes, as will be further analysed in Section 3.2.2.
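A minimal sketch of the CSL-based binary genotype, together with one of the classical operators it enables (one-point crossover), is given below; the candidate sites, the cost weight and the coverage function are all illustrative assumptions.

```python
import random

CANDIDATE_SITES = [(120, 340), (560, 90), (410, 770), (880, 220)]  # toy CSL

def random_individual(n):
    # Bit i is 1 iff candidate site i is selected.
    return [random.randint(0, 1) for _ in range(n)]

def decode(bits):
    # Genotype-to-phenotype map: bit string -> set of opened sites.
    return [s for bit, s in zip(bits, CANDIDATE_SITES) if bit]

def one_point_crossover(a, b):
    # Classical binary operator: swap the tails after a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def fitness(bits, coverage_fn, site_cost=1.0):
    # Aggregative objective: reward coverage, penalize opened sites.
    sites = decode(bits)
    return coverage_fn(sites) - site_cost * len(sites)

ind = random_individual(len(CANDIDATE_SITES))
print(ind, fitness(ind, coverage_fn=lambda sites: 3.0 * len(sites)))
```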
Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Integer encoding. <s> It is increasingly important to optimally select base stations in the design of cellular networks, as customers demand cheaper and better wireless services. From a set of potential site locations, a subset needs to be selected which optimizes two critical objectives: service coverage and financial cost. As this is an NP-hard optimization problem, heuristic approaches are required for problems of practical size. Our approach consists of two phases which act upon a set of candidate site permutations at each generation. Firstly, a sequential greedy algorithm is designed to commission sites from an ordering of candidate sites, subject to satisfying an alterable constraint. Secondly, an evolutionary optimization technique, which is tested against a randomized approach, is used to search for orderings of candidate sites which optimize multiple objectives. The two-phase strategy is vigorously tested and the results delineated. <s> BIB001 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Integer encoding. <s> The antenna placement problem, or cell planning problem, involves locating and configuring infrastructure for cellular wireless networks. From candidate site locations, a set needs to be selected against objectives relating to issues such as financial cost and service provision. This is an NP-hard optimization problem and consequently heuristic approaches are necessary for large problem instances. In this study, we use a greedy algorithm to select and configure base station locations. The performance of this greedy approach is dependent on the order in which the candidate sites are considered. We compare the ability of four state-of-the-art multiple objective genetic algorithms to find an optimal ordering of potential base stations. Results and discussion on the performance of the algorithms are provided. <s> BIB002 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Integer encoding. <s> The base station placement problem, with n potential candidate sites, is NP-hard with 2^n solutions (Mathar and Niessen, Wirel. Netw. 6, 421-428, 2000). When dimensioned on m unknown variable settings (e.g., number of power settings + number of tilt settings, etc.) the computational complexity becomes (m+1)^n (Raisanen, PhD. thesis, 2006). We introduce a novel approach to reduce the computational complexity by dimensioning sites only once to guarantee traffic hold requirements are satisfied. This approach works by determining the maximum set of service test points candidate sites can handle without exceeding a hard traffic constraint, T_MAX. Following this, the ability of two evolutionary strategies (binary and permutation-coded) to search for the minimum set cover is compared. This reverses the commonly followed approach of achieving service coverage first and then dimensioning to meet traffic hold. To test this approach, three realistic GSM network simulation environments are engineered, and a series of tests performed. Results indicate this approach can quickly meet network operator objectives. <s> BIB003 | Integer encoding has been used by Larry Raisanen, Roger Whitaker and Steve Hurley at Cardiff University in several works: BIB001 , Whitaker et al. (2004a,b) , BIB002 , BIB003 . In their approach, each BTS is identified by an integer.
Then, given n candidate BTSs, a permutation π of size n represents a solution to the ACP problem. That is, EAs manipulate integer permutations, so special care has to be taken with the genetic operators used. These BTS permutations are then translated into a cell plan by using a decoder. The decoder works by iteratively packing cells as densely as possible, subject to certain constraints not being violated. This cell plan is then used to compute the fitness function. |
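The permutation-plus-decoder idea can be sketched as follows; the `gain` and `still_feasible` callbacks stand in for the cell-packing rules of the actual decoder, which are not reproduced here.

```python
import random

def random_permutation(n):
    """A genotype is an ordering of the n candidate BTSs."""
    perm = list(range(n))
    random.shuffle(perm)
    return perm

def greedy_decode(perm, gain, still_feasible):
    """Walk the permutation and commission every BTS whose cell still adds
    service without violating the constraints (assumed callbacks)."""
    plan = []
    for bts in perm:
        if gain(bts, plan) > 0 and still_feasible(bts, plan):
            plan.append(bts)
    return plan

# Toy run: every odd-numbered BTS adds gain, at most three BTSs allowed.
plan = greedy_decode(random_permutation(8),
                     gain=lambda b, p: b % 2,
                     still_feasible=lambda b, p: len(p) < 3)
print(plan)
```

Because the genotypes are permutations, the operators must preserve their validity; order-based crossover and swap mutation are the usual choices in this setting.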
Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Real encoding. <s> In this paper, we find the best base station placement using a genetic approach. A new representation describing base station placement with a real number is proposed, and new genetic operators are introduced. This new representation can describe not only the locations of the base stations but also the number of those. Considering both coverage and economic efficiency, we also suggest a weighted objective function. Our algorithm is applied to an obvious optimization problem and then is verified. Moreover, our approach is tried in an inhomogeneous traffic density environment. The simulation result proves that the algorithm enables one to find near optimal base station placement and the efficient number of base stations. <s> BIB001 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Real encoding. <s> In this paper, the base station placement is automatically determined using genetic approach, and the transmit power is estimated considering the interference situation in the case of interference-dominant systems. For applying a genetic algorithm to the base station placement problem, a new representation scheme with real numbers is proposed. And, corresponding operators such as crossover and mutation are introduced. A weighted objective function is designed for performing the cell planning coverage, cost-effectively. To verify the proposed algorithm, the situation where the optimum positions and number of base stations are obvious is considered. The proposed algorithm is applied to an inhomogeneous traffic density environment, where a base station's coverage may be limited by offered traffic loads. Simulation result proves that the algorithm enables us to find near optimal base station placement and the efficient number of base stations. <s> BIB002 | The real encoding is mainly used for solving ACP problems based on freely positioning the BTSs in the working area of the cellular network. Therefore, the tentative solutions are made up of real numbers that represent the BTS coordinates. This scheme is mainly used in works dealing with the disc model (see Section 2.2.2). Indeed, this is the approach used in BIB001 and BIB002 . If K is the maximum number of BTSs to be placed, solutions are encoded as arrays (c_1, ..., c_K), where c_i = (x_i, y_i) are the coordinates of the ith BTS. When a BTS is not supposed to be deployed, a special 'NULL' value is used. This is the mechanism adopted in these works to avoid using a variable-length representation, and therefore special genetic operators have been developed.
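The fixed-length array with a 'NULL' sentinel can be sketched like this; the area size, the value of K and the activation probability are illustrative assumptions.

```python
import random

K = 5                       # maximum number of BTSs
AREA = (1000.0, 1000.0)     # toy working area
NULL = None                 # sentinel for a BTS that is not deployed

def random_solution(p_active=0.6):
    """Fixed-length array of K coordinate pairs; inactive slots hold NULL,
    which avoids a variable-length genotype."""
    return [(random.uniform(0, AREA[0]), random.uniform(0, AREA[1]))
            if random.random() < p_active else NULL
            for _ in range(K)]

def deployed(solution):
    return [c for c in solution if c is not NULL]

sol = random_solution()
print(sol, "->", len(deployed(sol)), "BTSs deployed")
```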
Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> An important class of computational problems are grouping problems, where the aim is to group together members of a set (i.e., find a good partition of the set). We show why both the standard and the ordering GAs fare poorly in this domain by pointing out their inherent difficulty to capture the regularities of the functional landscape of the grouping problems. We then propose a new encoding scheme and genetic operators adapted to these problems, yielding the Grouping Genetic Algorithm (GGA). We give an experimental comparison of the GGA with the other GAs applied to grouping problems, and we illustrate the approach with two more examples of important grouping problems successfully treated with the GGA: the problems of Bin Packing and Economies of Scale. <s> BIB001 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> In this paper, the evolution of mobile radio network is presented. First of all, the network life cycle is considered. A mathematical modeling of these life periods is developed inside an optimization problem: optimal location of base stations. It is a combinatorial optimization problem. A multi-period model is built on a concentrator link approach. Finally, three different multi-period techniques are identified, they are based on using the genetic algorithm (GA) to tackle this problem of the design of microcellular networks. <s> BIB002 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> Engineering of mobile telecommunication networks endures two major problems: the design of the network and the frequency assignment. We address the first problem in this paper, which has been formulated as a multiobjective constrained combinatorial optimisation problem. We propose a genetic algorithm (GA) that aims to approximate the Pareto frontier of the problem. Advanced techniques have been used, such as Pareto ranking, sharing and elitism. The GA has been implemented in parallel on a network of workstations to speed up the search. To evaluate the performance of the GA, we have introduced two new quantitative indicators: the entropy and the contribution. Encouraging results are obtained on real-life problems. <s> BIB003 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> The cell planning problem with capacity expansion is examined in wireless communications. The problem decides the location and capacity of each new base station to cover expanded and increased traffic demand. The objective is to minimize the cost of new base stations. The coverage by the new and existing base stations is constrained to satisfy a proper portion of traffic demands. The received signal power at the base station also has to meet the receiver sensitivity. The cell planning is formulated as an integer linear programming problem and solved by a tabu search algorithm. In the tabu search intensification by add and drop move is implemented by short-term memory embodied by two tabu lists. Diversification is designed to investigate proper capacities of new base stations and to restart the tabu search from new base station locations. Computational results show that the proposed tabu search is highly effective. A 10% cost reduction is obtained by the diversification strategies. 
The gap from the optimal solutions is approximately 1∼5% in problems that can be handled in appropriate time limits. The proposed tabu search also outperforms the parallel genetic algorithm. The cost reduction by the tabu search approaches 10∼20% in problems with 2500 traffic demand areas (TDAs) in code division multiple access (CDMA). <s> BIB004 </s> We give a short introduction to the results of our theoretical analysis of evolutionary algorithms. These results are used to design an algorithm for a large real-world problem: the placement of antennas for mobile radio networks. Our model for the antenna placement problem (APP) addresses cover, traffic demand, interference, different parameterized antenna types, and the geometrical structure of cells. The resulting optimization problem is constrained and multi-objective. The evolutionary algorithm derived from our theoretical analysis is capable of dealing with more than 700 candidate sites in the working area. The results show that the APP is tractable. The automatically generated designs enable experts to focus their efforts on the difficult parts of a network design problem. <s> BIB005 </s> Cellular network design is a very large and complex combinatorial optimization problem. It consists of antenna location and parameter settings. Until now, the design is done using radio quality criteria. Radio coverage, traffic capacity and field overlap are the main factors considered within optimization process to make decisions about network solutions. Nevertheless, such objectives do not lead to an efficient organization of network cells whereas this is a major assessment for radio expert planners. Absence of a clear geometrical structure of network cells prevents experts using many theoretical concepts on network design. This paper proposes an original model to evaluate the cell shape and a bi-criteria approach using an Evolutionary Algorithm to handle cells overlap and cells geometry as criteria for real-life network optimization. <s> BIB006 </s> As Third Generation (3G) mobile networks start to be implemented, there is a need for effective network planning. However, deciding upon the optimum placement for the base stations of the networks is a complex task requiring vast computational resource. This paper discusses the conflicting objectives of base station planning and characterises a multi-objective optimisation problem. We present a genetic encoding of the third generation mobile network planning problem and parallel genetic algorithms to solve it. <s> BIB007 </s> Automatic cell planning aims at optimising the performance of UMTS networks in terms of capacity, coverage and quality of service by automatically adjusting antenna parameters and common channel powers. This paper presents an overview on optimisation strategies that correspond to different scenarios depending on the operational context.
Starting from capacity optimisation, we show how an Automatic Cell Planner (ACP) can be enhanced with specific functionalities such as joint coverage/capacity optimisation, automatic site selection or steered optimisation. Finally, we show how the improvement in quality of service brought about by an ACP can be accurately assessed with dynamic simulations using adequate key performance indicators (KPI). <s> BIB008 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> We focus on the dimensioning process of cellular networks that addresses the evaluation of equipment global costs to cover a city. To deal with frequency assignment, that constitutes the most critical resource in mobile systems, the network is usually modeled as a pattern of regular hexagonal cells. Each cell represents the area covered by the signal of a transmitter or base station (BS). Our work emphasizes on the design of irregular hexagonal cells in an adaptive way. Hexagons transform themselves and adapt their shapes according to a traffic density map and to geometrical constraints. This process, called adaptive meshing (AM), may be seen as a solution to minimize the required number of BS to cover a region and to propose a basis for transmitter positioning. The solution we present to the mesh generation problem for mobile network dimensioning is based on the use of an evolutionary algorithm. This algorithm, called hybrid island evolutionary strategy (HIES), performs distributed computation. It allows the user to tackle problem instances with large traffic density map requiring several hundreds of cells. HIES combines local search fast computation on individuals, incorporated into a global island-like strategy. Experiments are done on one real case representing the mobile traffic load of the second French city of Lyon and on several other traffic maps from urban fictive data sets. <s> BIB009 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> We propose a new solution to the problem of positioning base station transmitters of a mobile phone network and assigning frequencies to the transmitters, both in an optimal way. Since an exact solution cannot be expected to run in polynomial time for all interesting versions of this problem (they are all NP-hard), our algorithm follows a heuristic approach based on the evolutionary paradigm. For this evolution to be efficient, i.e., goal-oriented and sufficiently random at the same time, problem-specific knowledge is embedded in the operators. The problem requires both the minimization of the cost and of the channel interference. We examine and compare two standard multiobjective techniques and a new algorithm - the steady-state evolutionary algorithm with Pareto tournaments. One major finding of the empirical investigation is a strong influence of the choice of the multiobjective selection method on the utility of the problem-specific recombination leading to a significant difference in the solution quality. <s> BIB010 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> Cellular network design is a major issue in mobile telecommunication systems. In this paper , a model of the problem in its full practical complexity, based on multiobjective constrained combinatorial optimization, has been investigated. 
We adopted the Pareto approach at resolution in order to compute a set of diversified non-dominated networks, thus removing the need for the designer to rank or weight objectives. We design an asynchronous steady-state evolutionary algorithm for its resolution. Specific coding scheme and genetic and neighborhood operators have been designed for the tackled problem. On the other side, we make use of many generic features related to advanced intensification and diversification search techniques, hybridization of metaheuristics and grid computing for the distribution of the computations. They aim at improving the quality of networks and robustness, at speeding-up the search, hence efficiently solving large instances of the problem. Using realistic benchmarks, the computed networks and speed-ups on parallel/distributed architectures show the efficiency and the scalability of hierarchical models of hybridization and parallelization used in conjunction. <s> BIB011 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> Cellular network design is a major issue in second generation GSM mobile telecommunication systems. In this paper, a new model of the problem in its full practical complexity, based on multiobjective constrained combinatorial optimization, has been used. We propose an evolutionary algorithm that aims at approximating the Pareto frontier of the problem, which removes the need for a cellular network designer to rank or weight objectives a priori. Specific coding scheme and genetic operators have been designed. Advanced intensification and diversification search techniques, such as elitism and adaptive sharing, have been used. Three complementary hierarchical parallel models have been designed to improve the solution quality and robustness, to speed-up the search and to solve large instances of the problem. The obtained Pareto fronts and speed-ups on different parallel architectures show the efficiency and the scalability of the parallel model. Performance evaluation of the algorithm has been carried out on different realistic benchmarks. The obtained results show the impact of the proposed parallel models and the introduced search mechanisms. <s> BIB012 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> Cellular network design is a major issue in mobile telecommunication systems. In this paper, a model of the problem in its full practical complexity, based on multiobjective constrained combinatorial optimization, has been investigated. We adopted the Pareto approach at resolution in order to compute a set of diversified non-dominated networks, thus removing the need for the designer to rank or weight objectives a priori. We designed and implemented a ''ready-to-use'' platform for radio network optimization that is flexible regarding both the modeling of the problem (adding, removing, updating new antagonist objectives and constraints) and the solution methods. It extends the ''white-box'' ParadisEO framework for metaheuristics applied to the resolution of mono/multi-objective Combinatorial Optimization Problems requiring both the use of advanced optimization methods and the exploitation of large-scale parallel and distributed environments. Specific coding scheme and genetic and neighborhood operators have been designed and embedded. 
On the other side, we make use of many generic features related to advanced intensification and diversification search techniques, hybridization of metaheuristics and grid computing for the distribution of the computations. They aim at improving the quality of networks and their robustness. They also allow, to speed-up the search and obtain results in a tractable time, and so efficiently solving large instances of the problem. Using three realistic benchmarks, the computed networks and speed-ups on different parallel and/or distributed architectures show the efficiency and the scalability of hierarchical parallel hybrid models. <s> BIB013 | The encoding schemes shown in this section have been designed especially to deal with ACP problems, so they do not properly fall into any of the previously defined categories. The most widely used non-classical scheme in the EA literature encodes all the optimizable parameter settings of each BTS in the tentative solution. Let us call it network encoding. This encoding is usually aimed not only at positioning the BTSs but also at dimensioning them. Figure 3 displays an example in which the BTS type, the emission power, and the tilt and azimuth angles are to be optimized. Since power, tilt and azimuth are actually real-valued parameters, they are usually discretized into a rather small set of values in order to reduce the complexity of the optimization problem (a code sketch of this scheme is given at the end of this subsection). This is the approach used in BIB002 , BIB003 , Altman et al. (2002a,b) , BIB005 , Jamaa et al. (2004a,b) , BIB006 , BIB008 , BIB011 , BIB012 and BIB013 . The main advantage of this encoding scheme is that EAs are put to work on real solutions, and therefore problem-domain-specific knowledge can be easily included in the search. On the other hand, no classical well-known operators can be used, so new, problem-specific ones have to be developed. Other specific encodings are analysed next. With the goal of minimizing the number of BTSs required to cover a given area, BIB009 have adaptively transformed the hexagonal cell shapes typically used in cellular networks. This adaptive meshing is performed according to a traffic density map and to geometrical constraints. Then, for each cell of the network, the encoding scheme includes six vertices (two real values each) plus an attribute that indicates whether it is visible or not. This latter attribute is the particularity of this approach. BIB004 have used group encoding BIB001 to maximize the coverage of traffic demand areas (TDAs) using as few BTSs as possible. In this group encoding, each tentative solution has two parts: the TDA part and the BTS part. In the TDA part a BTS is assigned to each TDA. The BTSs used in the TDA part are then represented in the BTS part. Specific group-oriented operators have been applied. BIB007 have proposed a matrix encoding with size 3 × N , where N is the maximum number of BTSs. All the BTSs are labelled so that the ith column corresponds to the ith BTS. In this encoding, the three values of the ith BTS indicate whether the BTS is present or not in the network (BTS selection), the BTS height and the BTS emission power. This encoding has many drawbacks, but no further discussion is given since the authors only present their proposal in the article, with no experimentation at all. Consequently, this article will not be considered further in this survey. The work of BIB010 presents an encoding that mixes real and integer values, as well as a set of frequencies.
This specialized encoding is required because it addresses both the BTS positioning and the frequency assignment simultaneously. A candidate solution includes, for each BTS, two real values representing its coordinates, two integer values encoding the transmitting power and the number of available channels in the BTS, and the set of channels assigned to the BTS. |
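To make the network encoding concrete, the following minimal Python sketch shows how a tentative solution of this kind can be represented. It is only an illustration of the scheme described above, not code from any of the surveyed papers; the names (BTS_TYPES, POWER_LEVELS, random_site, random_network) and the concrete discretized values are assumptions chosen for the example.

import random

# Discretized parameter domains. The values are illustrative; in practice
# they are derived from the hardware and the propagation model.
BTS_TYPES = ["omnidirectional", "small_directive", "large_directive"]
POWER_LEVELS = [26, 30, 34, 38]           # emission power (dBm)
TILT_ANGLES = list(range(0, 16, 2))       # tilt angle (degrees)
AZIMUTH_ANGLES = list(range(0, 360, 30))  # azimuth angle (degrees)

def random_site():
    # One gene of the network encoding: the activation flag of the site
    # plus every optimizable parameter of its BTS.
    return {
        "active": random.random() < 0.5,
        "type": random.choice(BTS_TYPES),
        "power": random.choice(POWER_LEVELS),
        "tilt": random.choice(TILT_ANGLES),
        "azimuth": random.choice(AZIMUTH_ANGLES),
    }

def random_network(n_sites):
    # A tentative solution stores the settings of every candidate site.
    return [random_site() for _ in range(n_sites)]

Storing one self-contained record per candidate site is what allows the geographical and multilevel operators discussed in the following sections to manipulate whole site configurations at once.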
Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Operators <s> An important class of computational problems are grouping problems, where the aim is to group together members of a set (i.e., find a good partition of the set). We show why both the standard and the ordering GAs fare poorly in this domain by pointing out their inherent difficulty to capture the regularities of the functional landscape of the grouping problems. We then propose a new encoding scheme and genetic operators adapted to these problems, yielding the Grouping Genetic Algorithm (GGA). We give an experimental comparison of the GGA with the other GAs applied to grouping problems, and we illustrate the approach with two more examples of important grouping problems successfully treated with the GGA: the problems of Bin Packing and Economies of Scale. <s> BIB001 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Operators <s> In this paper we present a genetic algorithm-based heuristic especially for the weighted maximum independent set problem (IS). The proposed approach treats also some equivalent combinatorial optimization problems. We introduce several modifications to the basic genetic algorithm, by (i) using a crossover called two-fusion operator which creates two new different children and (ii) replacing the mutation operator by the heuristic-feasibility operator tailored specifically for the weighted independent set. The performance of our algorithm was evaluated on several randomly generated problem instances for the weighted independent set and on some instances of the DIMACS Workshop for the particular case: the unweighted maximum clique problem. Computational results show that the proposed approach is able to produce high-quality solutions within reasonable computational times. This algorithm is easily parallelizable and this is one of its important features. <s> BIB002 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Operators <s> Cellular network operators are dealing with complex problems when planning the network operation. In order to automate the planning process, the development of simulation and optimization tools are under research. In this paper genetic algorithms with three different approaches are studied in order to optimize the base station sites. This research shows that a proper approach in developing the individual structure and fitness function has crucial importance in solving practical base station siting problems with genetic algorithms. <s> BIB003 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Operators <s> The cost and complexity of a network is closely related to the number of base-stations (BSs) required to achieve the system operator's service objectives. The location of BSs is not an easy task and there are numerous factors that must be taken into account when deciding the optimum position of BSs. This paper discusses the performance of three different algorithms developed to solve the BS location problem: the greedy algorithm (GR), the genetic algorithm (GA) and the combination algorithm for total optimisation (CAT). These three methods are compared and results are given for a typical test scenario. <s> BIB004 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Operators <s> In this paper, the evolution of mobile radio network is presented. 
First of all, the network life cycle is considered. A mathematical modeling of these life periods is developed inside an optimization problem: optimal location of base stations. It is a combinatorial optimization problem. A multi-period model is built on a concentrator link approach. Finally, three different multi-period techniques are identified, they are based on using the genetic algorithm (GA) to tackle this problem of the design of microcellular networks. <s> BIB005 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Operators <s> This paper presents an innovative algorithm for automatic base station placement and dimensioning. A highly efficient optimization strategy forms the core of the proposed algorithm that determines the number of base stations, their sites, and parameters to achieve a high-quality network that meets the requirements of area coverage, traffic capacity, and interference level, while trying to minimize system costs, including the frequency and financial costs. First, the hierarchical approach is outlined and it is applied to place base stations (BSs) for a large-scale network design. Also a fuzzy expert system is developed to exploit the expert experience to adjust BS parameters, e.g., the transmitted power, to improve the network performance. Simulation results are presented and analyzed. <s> BIB006 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Operators <s> Engineering of mobile telecommunication networks endures two major problems: the design of the network and the frequency assignment. We address the first problem in this paper, which has been formulated as a multiobjective constrained combinatorial optimisation problem. We propose a genetic algorithm (GA) that aims to approximate the Pareto frontier of the problem. Advanced techniques have been used, such as Pareto ranking, sharing and elitism. The GA has been implemented in parallel on a network of workstations to speed up the search. To evaluate the performance of the GA, we have introduced two new quantitative indicators: the entropy and the contribution. Encouraging results are obtained on real-life problems. <s> BIB007 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Operators <s> The cell planning problem with capacity expansion is examined in wireless communications. The problem decides the location and capacity of each new base station to cover expanded and increased traffic demand. The objective is to minimize the cost of new base stations. The coverage by the new and existing base stations is constrained to satisfy a proper portion of traffic demands. The received signal power at the base station also has to meet the receiver sensitivity. The cell planning is formulated as an integer linear programming problem and solved by a tabu search algorithm. In the tabu search intensification by add and drop move is implemented by short-term memory embodied by two tabu lists. Diversification is designed to investigate proper capacities of new base stations and to restart the tabu search from new base station locations. Computational results show that the proposed tabu search is highly effective. A 10% cost reduction is obtained by the diversification strategies. The gap from the optimal solutions is approximately 1∼5% in problems that can be handled in appropriate time limits. The proposed tabu search also outperforms the parallel genetic algorithm.
The cost reduction by the tabu search approaches 10∼20% in problems with 2500 traffic demand areas (TDAs) in code division multiple access (CDMA). <s> BIB008 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Operators <s> With the imminent introduction of the 3G systems throughout the world precise cell planning in macrocell, microcell and picocell environments have become equally important. Beside coverage of the basic radio link quality parameter others such as rms delay spread and a measure of the system capacity have become increasingly important. Our contribution addresses the planning inside microcells based on a 3D deterministic ray-tracing propagation tool. It is based on the IHE model (Cichon, 1984) and a simple genetic algorithm (SGA) for the base station location optimization. At this stage the optimization is based on coverage and rms delay spread considerations. Our algorithm has as inputs the delay spread threshold and the minimum field strength. The cost function to be minimized is the number of locations in which the values of these parameters are above the threshold in the case of delay spread, and respectively below the threshold in the case of the field strength. <s> BIB009 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Operators <s> In this paper, we find the best base station placement using a genetic approach. A new representation describing base station placement with a real number is proposed, and new genetic operators are introduced. This new representation can describe not only the locations of the base stations but also the number of those. Considering both coverage and economic efficiency, we also suggest a weighted objective function. Our algorithm is applied to an obvious optimization problem and then is verified. Moreover, our approach is tried in an inhomogeneous traffic density environment. The simulation result proves that the algorithm enables one to find near optimal base station placement and the efficient number of base stations. <s> BIB010 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Operators <s> In this paper, the base station placement is automatically determined using genetic approach, and the transmit power is estimated considering the interference situation in the case of interference-dominant systems. For applying a genetic algorithm to the base station placement problem, a new representation scheme with real numbers is proposed. And, corresponding operators such as crossover and mutation are introduced. A weighted objective function is designed for performing the cell planning coverage, cost-effectively. To verify the proposed algorithm, the situation where the optimum positions and number of base stations are obvious is considered. The proposed algorithm is applied to an inhomogeneous traffic density environment, where a base station's coverage may be limited by offered traffic loads. Simulation result proves that the algorithm enables us to find near optimal base station placement and the efficient number of base stations. <s> BIB011 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Operators <s> Evolutionary algorithms (EAs) are applied to solve the radio network design problem (RND).
Usually, parallel EAs are needed in order to cope with the high computational requirements of such a problem. Here, we try to develop and evaluate a set of sequential and parallel genetic algorithms (GAs) in order to solve efficiently the RND problem. The results show that our distributed steady state GA is an efficient and accurate tool for solving RND that even outperforms existing parallel solutions. The sequential algorithm performs very efficiently from a numerical point of view, although the distributed version is much faster, with an observed linear speedup. <s> BIB012 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Operators <s> WCDMA is an interference-limited system and its coverage and data throughput are sensitive to background noise. The results of background noise measurements in urban Taipei city for the licence bands of 3G systems issued in Taiwan are presented. The measurements involve FDD mode uplink and downlink frequency bands measured on building tops and at street level, respectively. The severeness of spectrum pollution of these bands is evaluated by extracting three statistical parameters from the measurement data, and the impact of the background noise on coverage and throughput is analysed for WCDMA systems. Also, based on measurement results, a better solution using a genetic algorithm with the help of a propagation model and digitised building information is proposed for the deployment of the base stations of WCDMA systems, by which the required coverage can be met with a suitable number of base stations, locations, antenna heights and transmitting power. A system is obtained that suffers less impact from background noise and achieves a higher data throughput with minimum cost. <s> BIB013 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Operators <s> It is increasingly important to optimally select base stations in the design of cellular networks, as customers demand cheaper and better wireless services. From a set of potential site locations, a subset needs to be selected which optimizes two critical objectives: service coverage and financial cost. As this is an NP-hard optimization problem, heuristic approaches are required for problems of practical size. Our approach consists of two phases which act upon a set of candidate site permutations at each generation. Firstly, a sequential greedy algorithm is designed to commission sites from an ordering of candidate sites, subject to satisfying an alterable constraint. Secondly, an evolutionary optimization technique, which is tested against a randomized approach, is used to search for orderings of candidate sites which optimize multiple objectives. The two-phase strategy is vigorously tested and the results delineated. <s> BIB014 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Operators <s> In this article, evolutionary algorithms (EAs) are applied to solve the radio network design problem (RND). The task is to find the best set of transmitter locations in order to cover a given geographical region at an optimal cost. Usually, parallel EAs are needed to cope with the high computational requirements of such a problem. Here, we develop and evaluate a set of sequential and parallel genetic algorithms (GAs) to solve the RND problem efficiently. 
The results show that our distributed steady state GA is an efficient and accurate tool for solving RND that even outperforms existing parallel solutions. The sequential algorithm performs very efficiently from a numerical point of view, although the distributed version is much faster. <s> BIB015 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Operators <s> The antenna placement problem, or cell planning problem, involves locating and configuring infrastructure for cellular wireless networks. From candidate site locations, a set needs to be selected against objectives relating to issues such as financial cost and service provision. This is an NP-hard optimization problem and consequently heuristic approaches are necessary for large problem instances. In this study, we use a greedy algorithm to select and configure base station locations. The performance of this greedy approach is dependent on the order in which the candidate sites are considered. We compare the ability of four state-of-the-art multiple objective genetic algorithms to find an optimal ordering of potential base stations. Results and discussion on the performance of the algorithms are provided. <s> BIB016 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Operators <s> We focus on the dimensioning process of cellular networks that addresses the evaluation of equipment global costs to cover a city. To deal with frequency assignment, that constitutes the most critical resource in mobile systems, the network is usually modeled as a pattern of regular hexagonal cells. Each cell represents the area covered by the signal of a transmitter or base station (BS). Our work emphasizes on the design of irregular hexagonal cells in an adaptive way. Hexagons transform themselves and adapt their shapes according to a traffic density map and to geometrical constraints. This process, called adaptive meshing (AM), may be seen as a solution to minimize the required number of BS to cover a region and to propose a basis for transmitter positioning. The solution we present to the mesh generation problem for mobile network dimensioning is based on the use of an evolutionary algorithm. This algorithm, called hybrid island evolutionary strategy (HIES), performs distributed computation. It allows the user to tackle problem instances with large traffic density map requiring several hundreds of cells. HIES combines local search fast computation on individuals, incorporated into a global island-like strategy. Experiments are done on one real case representing the mobile traffic load of the second French city of Lyon and on several other traffic maps from urban fictive data sets. <s> BIB017 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Operators <s> We propose a new solution to the problem of positioning base station transmitters of a mobile phone network and assigning frequencies to the transmitters, both in an optimal way. Since an exact solution cannot be expected to run in polynomial time for all interesting versions of this problem (they are all NP-hard), our algorithm follows a heuristic approach based on the evolutionary paradigm. For this evolution to be efficient, i.e., goal-oriented and sufficiently random at the same time, problem-specific knowledge is embedded in the operators. The problem requires both the minimization of the cost and of the channel interference. 
We examine and compare two standard multiobjective techniques and a new algorithm - the steady-state evolutionary algorithm with Pareto tournaments. One major finding of the empirical investigation is a strong influence of the choice of the multiobjective selection method on the utility of the problem-specific recombination leading to a significant difference in the solution quality. <s> BIB018 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Operators <s> Radio network design (RND) is a fundamental problem in cellular networks for telecommunications. In these networks, the terrain must be covered by a set of base stations (or antennae), each of which defines a covered area called cell. The problem may be reduced to figure out the optimal placement of antennae out of a list of candidate sites trying to satisfy two objectives: to maximize the area covered by the radio signal and to reduce the number of used antennae. Consequently, RND is a bi-objective optimization problem. Previous works have solved the problem by using single-objective techniques which combine the values of both objectives. The used techniques have allowed to find optimal solutions according to the defined objective, thus yielding a unique solution instead of the set of Pareto optimal solutions. In this paper, we solve the RND problem using a multi-objective version of the algorithm CHC, which is the metaheuristic having reported the best results when solving the single-objective formulation of RND. This new algorithm, called MOCHC, is compared against a binary-coded NSGA-II algorithm and also against the provided results in the literature. Our experiments indicate that MOCHC outperforms NSGA-II and, more importantly, it is more efficient finding the optimal solutions than single-objectives techniques. <s> BIB019 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Operators <s> The base station placement problem, with n potential candidate sites is NP-Hard with 2^n solutions (Mathar and Niessen, Wirel. Netw. 6, 421–428, 2000). When dimensioned on m unknown variable settings (e.g., number of power settings + number of tilt settings, etc.) the computational complexity becomes (m+1)^n (Raisanen, PhD. thesis, 2006). We introduce a novel approach to reduce the computational complexity by dimensioning sites only once to guarantee traffic hold requirements are satisfied. This approach works by determining the maximum set of service test points candidate sites can handle without exceeding a hard traffic constraint, T_MAX . Following this, the ability of two evolutionary strategies (binary and permutation-coded) to search for the minimum set cover are compared. This reverses the commonly followed approach of achieving service coverage first and then dimensioning to meet traffic hold. To test this approach, three realistic GSM network simulation environments are engineered, and a series of tests performed. Results indicate this approach can quickly meet network operator objectives. <s> BIB020 | Several genetic operators have been investigated in the literature for solving ACP problems (Table 2 ). This section is only aimed at discussing the crossover and mutation operators since they are the ones which depend on the encoding schemes (selection and replacement operators are based on the fitness of the individuals). 3.2.2.1. Crossover.
The classical single point crossover (SPX) has been extensively used for solving ACP problems. Most of the existing work using binary encoding has adopted this approach (e.g. BIB003 , BIB004 ). With this encoding, other well-known operators such as two point crossover ( BIB012 , BIB015 ) and uniform crossover ( BIB009 , BIB013 ) have been applied. It is also worth mentioning that algorithm-specific crossover operators appear when particular algorithms have been used. The works of BIB019 , Vega-Rodríguez et al. (2007a) and Vega-Rodríguez et al. (2007b) use the highly disruptive crossover (HUX) designed for the CHC algorithm, whereas the two-fusion crossover BIB002 is applied in BIB006 . In the case of the integer encoding scheme, the cycle crossover has been used in the works of BIB014 , Whitaker et al. (2004a,b) , BIB016 and BIB020 . Since their algorithms work on integer permutations, this crossover operator is aimed at preserving the permutation, and as a result no repair mechanism is required. It is important to remark here that using the decoder procedure that translates the permutation of BTSs into a cell plan avoids the main concern of this representation: different permutations represent the same solution in the objective space. Traditional recombination operators are not applied with the real encoding scheme since no pure real-valued strings have been used. Indeed, in the works of BIB010 and BIB011 this operator has to deal with the special NULL value used in any given position to indicate that the corresponding BTS is not deployed. This way, given two parents p1 and p2, the operator returns one single child, c, in which the position of the ith BTS is computed as follows: if p1(i) = NULL and p2(i) = NULL, then c(i) = NULL; if either p1(i) = NULL or p2(i) = NULL, c(i) receives the genetic material of the non-NULL parent; otherwise, the ith BTS is placed somewhere near the BTS positions of the parents (sampling a Gaussian distribution). The main disadvantage of all these crossover operators is that they just manipulate genes, without taking into account the links with other genes (epistasis). Indeed, as explained in the introduction, activating, deactivating or redimensioning one given BTS in a cellular network will surely affect the influence of other BTSs in the ACP problem at hand. It is therefore worth giving particular attention to the development of operators specially designed for ACP problems that use classical encoding schemes in their resolution. When ACP-targeted encoding schemes are adopted, this crossover specialization is already addressed. Most of the works that use the network encoding (see previous section) apply the so-called geographical crossover defined in BIB007 . This operator is based on exchanging the configuration of the sites located within a given random radius around a randomly chosen site. Figure 4 shows an example of the working principles of the geographical crossover. The main advantage of this operator is that it somehow considers the connection between the sites in a topological way: only nearby sites are modified (a sketch of this operator is given at the end of this subsection). However, under this encoding, the classical SPX crossover has also been used by BIB005 . Other specialized crossover operators have been defined for dealing with ACP-targeted encodings. BIB017 have proposed a mechanism that combines the vertices of the hexagonal cells used to cover the traffic demand in the cellular network. It works by selecting two individuals as follows.
The first one, i1, is chosen with fitness-proportional probability (e.g. roulette-wheel selection), whereas the second one, i2, is picked randomly. Since it is assumed that the former will have a better fitness than the latter, the crossover operator generates a child in which i1 attracts i2 by using a weighted average sum. BIB008 , who have used the grouping GA, have adopted the grouping crossover operator defined by BIB001 . Finally, BIB018 have implemented a crossover operator based on the decomposition of the service area of the cellular network. Two halves along one of the dimensions are generated and then, for each half, the fitness of the parent individuals is evaluated. The offspring will inherit the configuration for each of the sub-areas from the fittest parent for that sub-area. The main drawback of this approach is that the operator may generate unfeasible individuals, therefore requiring the authors to apply a repair function.
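As a concrete illustration of the geographical crossover described above, the sketch below exchanges the configurations of all sites lying within a random radius of a randomly chosen site. It is a free interpretation written against the illustrative site dictionaries of the earlier encoding sketch; the coords argument (the planar position of each candidate site) and max_radius are assumptions of the example, not part of the original operator definition.

import math
import random

def geographical_crossover(parent1, parent2, coords, max_radius):
    # Children start as copies of the parents (one site record per gene).
    child1 = [dict(site) for site in parent1]
    child2 = [dict(site) for site in parent2]
    # Pick a random centre site and a random exchange radius.
    centre = random.randrange(len(coords))
    radius = random.uniform(0.0, max_radius)
    cx, cy = coords[centre]
    # Exchange the configuration of every site inside the radius.
    for i, (x, y) in enumerate(coords):
        if math.hypot(x - cx, y - cy) <= radius:
            child1[i], child2[i] = child2[i], child1[i]
    return child1, child2

Because only nearby sites are exchanged, the topological link between interfering BTSs is partially respected, which is precisely the advantage attributed to this operator in the text.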
Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Mutation. <s> An important class of computational problems are grouping problems, where the aim is to group together members of a set (i.e., find a good partition of the set). We show why both the standard and the ordering GAs fare poorly in this domain by pointing out their inherent difficulty to capture the regularities of the functional landscape of the grouping problems. We then propose a new encoding scheme and genetic operators adapted to these problems, yielding the Grouping Genetic Algorithm (GGA). We give an experimental comparison of the GGA with the other GAs applied to grouping problems, and we illustrate the approach with two more examples of important grouping problems successfully treated with the GGA: the problems of Bin Packing and Economies of Scale. <s> BIB001 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Mutation. <s> Engineering of mobile telecommunication networks endures two major problems: the design of the network and the frequency assignment. We address the first problem in this paper, which has been formulated as a multiobjective constrained combinatorial optimisation problem. We propose a genetic algorithm (GA) that aims to approximate the Pareto frontier of the problem. Advanced techniques have been used, such as Pareto ranking, sharing and elitism. The GA has been implemented in parallel on a network of workstations to speed up the search. To evaluate the performance of the GA, we have introduced two new quantitative indicators: the entropy and the contribution. Encouraging results are obtained on real-life problems. <s> BIB002 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Mutation. <s> The cell planning problem with capacity expansion is examined in wireless communications. The problem decides the location and capacity of each new base station to cover expanded and increased traffic demand. The objective is to minimize the cost of new base stations. The coverage by the new and existing base stations is constrained to satisfy a proper portion of traffic demands. The received signal power at the base station also has to meet the receiver sensitivity. The cell planning is formulated as an integer linear programming problem and solved by a tabu search algorithm. In the tabu search intensification by add and drop move is implemented by short-term memory embodied by two tabu lists. Diversification is designed to investigate proper capacities of new base stations and to restart the tabu search from new base station locations. Computational results show that the proposed tabu search is highly effective. A 10% cost reduction is obtained by the diversification strategies. The gap from the optimal solutions is approximately 1∼5% in problems that can be handled in appropriate time limits. The proposed tabu search also outperforms the parallel genetic algorithm. The cost reduction by the tabu search approaches 10∼20% in problems with 2500 traffic demand areas (TDAs) in code division multiple access (CDMA). <s> BIB003 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Mutation. <s> In this paper, we find the best base station placement using a genetic approach. A new representation describing base station placement with a real number is proposed, and new genetic operators are introduced.
This new representation can describe not only the locations of the base stations but also the number of those. Considering both coverage and economic efficiency, we also suggest a weighted objective function. Our algorithm is applied to an obvious optimization problem and then is verified. Moreover, our approach is tried in an inhomogeneous traffic density environment. The simulation result proves that the algorithm enables one to find near optimal base station placement and the efficient number of base stations. <s> BIB004 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Mutation. <s> In this paper, the base station placement is automatically determined using genetic approach, and the transmit power is estimated considering the interference situation in the case of interference-dominant systems. For applying a genetic algorithm to the base station placement problem, a new representation scheme with real numbers is proposed. And, corresponding operators such as crossover and mutation are introduced. A weighted objective function is designed for performing the cell planning coverage, cost-effectively. To verify the proposed algorithm, the situation where the optimum positions and number of base stations are obvious is considered. The proposed algorithm is applied to an inhomogeneous traffic density environment, where a base station's coverage may be limited by offered traffic loads. Simulation result proves that the algorithm enables us to find near optimal base station placement and the efficient number of base stations. <s> BIB005 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Mutation. <s> We give a short introduction to the results of our theoretical analysis of evolutionary algorithms. These results are used to design an algorithm for a large real-world problem: the placement of antennas for mobile radio networks. Our model for the antenna placement problem (APP) addresses cover, traffic demand, interference, different parameterized antenna types, and the geometrical structure of cells. The resulting optimization problem is constrained and multi-objective. The evolutionary algorithm derived from our theoretical analysis is capable of dealing with more than 700 candidate sites in the working area. The results show that the APP is tractable. The automatically generated designs enable experts to focus their efforts on the difficult parts of a network design problem. <s> BIB006 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Mutation. <s> The antenna placement problem, or cell planning problem, involves locating and configuring infrastructure for cellular wireless networks. From candidate site locations, a set needs to be selected against objectives relating to issues such as financial cost and service provision. This is an NP-hard optimization problem and consequently heuristic approaches are necessary for large problem instances. In this study, we use a greedy algorithm to select and configure base station locations. The performance of this greedy approach is dependent on the order in which the candidate sites are considered. We compare the ability of four state-of-the-art multiple objective genetic algorithms to find an optimal ordering of potential base stations. Results and discussion on the performance of the algorithms are provided. <s> BIB007 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Mutation. 
<s> We focus on the dimensioning process of cellular networks that addresses the evaluation of equipment global costs to cover a city. To deal with frequency assignment, that constitutes the most critical resource in mobile systems, the network is usually modeled as a pattern of regular hexagonal cells. Each cell represents the area covered by the signal of a transmitter or base station (BS). Our work emphasizes on the design of irregular hexagonal cells in an adaptive way. Hexagons transform themselves and adapt their shapes according to a traffic density map and to geometrical constraints. This process, called adaptive meshing (AM), may be seen as a solution to minimize the required number of BS to cover a region and to propose a basis for transmitter positioning. The solution we present to the mesh generation problem for mobile network dimensioning is based on the use of an evolutionary algorithm. This algorithm, called hybrid island evolutionary strategy (HIES), performs distributed computation. It allows the user to tackle problem instances with large traffic density map requiring several hundreds of cells. HIES combines local search fast computation on individuals, incorporated into a global island-like strategy. Experiments are done on one real case representing the mobile traffic load of the second French city of Lyon and on several other traffic maps from urban fictive data sets. <s> BIB008 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Mutation. <s> We propose a new solution to the problem of positioning base station transmitters of a mobile phone network and assigning frequencies to the transmitters, both in an optimal way. Since an exact solution cannot be expected to run in polynomial time for all interesting versions of this problem (they are all NP-hard), our algorithm follows a heuristic approach based on the evolutionary paradigm. For this evolution to be efficient, i.e., goal-oriented and sufficiently random at the same time, problem-specific knowledge is embedded in the operators. The problem requires both the minimization of the cost and of the channel interference. We examine and compare two standard multiobjective techniques and a new algorithm - the steady-state evolutionary algorithm with Pareto tournaments. One major finding of the empirical investigation is a strong influence of the choice of the multiobjective selection method on the utility of the problem-specific recombination leading to a significant difference in the solution quality. <s> BIB009 | The analysis of the mutation operators in the literature for solving ACP problems with EAs is similar to that performed for the crossover operators. It depends greatly on the encoding used. The classical bit flip mutation is the preferred operator for binary encoding schemes (see Section 3.2.1). In works using integer permutation encoding, a random swap that simply transposes two randomly chosen positions in the permutation is adopted (e.g. BIB007 ). Again, this operator is safe and no repair function is needed. The two works categorized with real encoding, i.e. BIB004 and BIB005 , have to manage the NULL value which is used to represent that a BTS is not deployed in the network. This way, for each BTS, the mutation operator either randomly updates the current position of a deployed BTS or assigns it the NULL value; otherwise, if the BTS is not deployed yet, it can remain as it is or it can be placed in an arbitrary position in the network.
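The following sketch illustrates how such a NULL-aware mutation can be implemented for the real encoding. It is a free interpretation of the behaviour just described: the per-gene mutation rate, the even split between moving and removing a deployed BTS, the Gaussian step size and the area bounds are all assumptions of the example.

import random

NULL = None  # marks a BTS that is not deployed

def mutate_real_encoding(solution, area, rate=0.05, sigma=50.0):
    # `solution` holds one (x, y) position or NULL per BTS;
    # `area` is ((xmin, xmax), (ymin, ymax)).
    (xmin, xmax), (ymin, ymax) = area
    child = list(solution)
    for i, gene in enumerate(child):
        if random.random() >= rate:
            continue  # this BTS is left untouched
        if gene is not NULL:
            if random.random() < 0.5:
                # Deployed BTS: randomly update its current position.
                x, y = gene
                child[i] = (min(max(x + random.gauss(0.0, sigma), xmin), xmax),
                            min(max(y + random.gauss(0.0, sigma), ymin), ymax))
            else:
                # Deployed BTS: remove it from the network.
                child[i] = NULL
        else:
            # Undeployed BTS: place it at an arbitrary position.
            child[i] = (random.uniform(xmin, xmax), random.uniform(ymin, ymax))
    return child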
Using the network encoding (ACP-targeted encoding), the mutation operator usually works by first selecting a given site and then updating the configuration of this site. This is called multilevel mutation since it operates at different levels of the encoding (see the sketch at the end of this subsection). Depending on the parameters of each site, the mutation may affect the following.
• Activation toggling. If the site is activated, then it is just deactivated. On the other hand, if the site is deactivated, then an entire random configuration for the site is generated.
• BTS power tuning. It requires the site to be activated. It randomly chooses a BTS of the site and then the power is randomly changed to one of its discretized values.
• BTS tilt tuning. The same as power tuning, but changing the tilt angle.
• BTS azimuth tuning. The same as power and tilt tuning, but modifying the azimuth angle.
• BTS diagram tuning. This mutation also requires the site to be activated. The goal of this operator is to change the BTS type, that is, from an omnidirectional BTS to several directive BTSs, or vice versa. The configuration for each newly generated BTS is randomly generated.
This is the approach used in BIB002 and Altman et al. (2002a,b). On the other hand, the works of BIB006 have further detailed these mutations by defining more specialized search operators. The authors have distinguished between repair operators (RepairTraffic, RepairHole, DecreasePower, IncreasePower, ChangeAzimuth, ChangeTilt, DissipateTraffic) and climb operators (RemoveWeakAntenna, RemoveAntenna, RemoveWeakSite, RemoveSite, IncreaseCompactness, ReduceIrregularities and MinimizePower). They are all applied one at a time by randomly choosing one of them. Because unfeasible solutions may be generated, a repair phase is used. Other mutation operators used with ACP-targeted encoding schemes are described next. BIB008 have developed the macromutation operator. This is intended to perform simultaneous moves on the vertices of the cells that cover the cellular network, thus allowing these cells to exit or to reach traffic demand areas. BIB003 have adopted the grouping mutation operator defined by BIB001 for the grouping GA. Finally, BIB009 have applied both directed and random mutations. The former (six different operators) include problem knowledge, and feasibility is always guaranteed since several preconditions have to be met prior to their application. However, directed mutations find it difficult to explore the entire search space, which is why random mutations have also been used. The goal is to promote exploration, but the drawback is that feasibility is no longer guaranteed and a repair function has to be applied.
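A minimal sketch of the multilevel mutation described above, written against the illustrative network encoding introduced earlier (BTS_TYPES, POWER_LEVELS, TILT_ANGLES, AZIMUTH_ANGLES and random_site are the assumed helpers from that sketch), could look as follows; the uniform choice among the five levels is an assumption of the example.

import random

def multilevel_mutation(network):
    # Work on a copy and select one site to mutate.
    child = [dict(site) for site in network]
    site = random.choice(child)
    if not site["active"]:
        # Deactivated site: generate an entire random configuration.
        site.update(random_site())
        site["active"] = True
        return child
    # Activated site: mutate it at one randomly chosen level.
    level = random.choice(["activation", "power", "tilt", "azimuth", "diagram"])
    if level == "activation":
        site["active"] = False                          # activation toggling
    elif level == "power":
        site["power"] = random.choice(POWER_LEVELS)     # power tuning
    elif level == "tilt":
        site["tilt"] = random.choice(TILT_ANGLES)       # tilt tuning
    elif level == "azimuth":
        site["azimuth"] = random.choice(AZIMUTH_ANGLES) # azimuth tuning
    else:
        # Diagram tuning: switch the BTS type (omnidirectional <-> directive).
        site["type"] = random.choice([t for t in BTS_TYPES if t != site["type"]])
    return child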
Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Local search <s> A new heuristic approach for minimizing possibly nonlinear and non differentiable continuous space functions is presented. By means of an extensive testbed, which includes the De Jong functions, it will be demonstrated that the new method converges faster and with more certainty than Adaptive Simulated Annealing as well as the Annealed Nelder&Mead approach, both of which have a reputation for being very powerful. The new method requires few control variables, is robust, easy to use and lends itself very well to parallel computation. <s> BIB001 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Local search <s> It is increasingly important to optimally select base stations in the design of cellular networks, as customers demand cheaper and better wireless services. From a set of potential site locations, a subset needs to be selected which optimizes two critical objectives: service coverage and financial cost. As this is an NP-hard optimization problem, heuristic approaches are required for problems of practical size. Our approach consists of two phases which act upon a set of candidate site permutations at each generation. Firstly, a sequential greedy algorithm is designed to commission sites from an ordering of candidate sites, subject to satisfying an alterable constraint. Secondly, an evolutionary optimization technique, which is tested against a randomized approach, is used to search for orderings of candidate sites which optimize multiple objectives. The two-phase strategy is vigorously tested and the results delineated. <s> BIB002 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Local search <s> We focus on the dimensioning process of cellular networks that addresses the evaluation of equipment global costs to cover a city. To deal with frequency assignment, that constitutes the most critical resource in mobile systems, the network is usually modeled as a pattern of regular hexagonal cells. Each cell represents the area covered by the signal of a transmitter or base station (BS). Our work emphasizes on the design of irregular hexagonal cells in an adaptive way. Hexagons transform themselves and adapt their shapes according to a traffic density map and to geometrical constraints. This process, called adaptive meshing (AM), may be seen as a solution to minimize the required number of BS to cover a region and to propose a basis for transmitter positioning. The solution we present to the mesh generation problem for mobile network dimensioning is based on the use of an evolutionary algorithm. This algorithm, called hybrid island evolutionary strategy (HIES), performs distributed computation. It allows the user to tackle problem instances with large traffic density map requiring several hundreds of cells. HIES combines local search fast computation on individuals, incorporated into a global island-like strategy.
Experiments are done on one real case representing the mobile traffic load of the second French city of Lyon and on several other traffic maps from urban fictive data sets. <s> BIB003 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Local search <s> The antenna placement problem, or cell planning problem, involves locating and configuring infrastructure for cellular wireless networks. From candidate site locations, a set needs to be selected against objectives relating to issues such as financial cost and service provision. This is an NP-hard optimization problem and consequently heuristic approaches are necessary for large problem instances. In this study, we use a greedy algorithm to select and configure base station locations. The performance of this greedy approach is dependent on the order in which the candidate sites are considered. We compare the ability of four state-of-the-art multiple objective genetic algorithms to find an optimal ordering of potential base stations. Results and discussion on the performance of the algorithms are provided. <s> BIB004 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Local search <s> Cellular network design is a major issue in mobile telecommunication systems. In this paper, a model of the problem in its full practical complexity, based on multiobjective constrained combinatorial optimization, has been investigated. We adopted the Pareto approach at resolution in order to compute a set of diversified non-dominated networks, thus removing the need for the designer to rank or weight objectives a priori. We designed and implemented a ''ready-to-use'' platform for radio network optimization that is flexible regarding both the modeling of the problem (adding, removing, updating new antagonist objectives and constraints) and the solution methods. It extends the ''white-box'' ParadisEO framework for metaheuristics applied to the resolution of mono/multi-objective Combinatorial Optimization Problems requiring both the use of advanced optimization methods and the exploitation of large-scale parallel and distributed environments. Specific coding scheme and genetic and neighborhood operators have been designed and embedded. On the other side, we make use of many generic features related to advanced intensification and diversification search techniques, hybridization of metaheuristics and grid computing for the distribution of the computations. They aim at improving the quality of networks and their robustness. They also allow, to speed-up the search and obtain results in a tractable time, and so efficiently solving large instances of the problem. Using three realistic benchmarks, the computed networks and speed-ups on different parallel and/or distributed architectures show the efficiency and the scalability of hierarchical parallel hybrid models. <s> BIB005 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Local search <s> The base station placement problem, with n potential candidate sites is NP-Hard with 2^n solutions (Mathar and Niessen, Wirel. Netw. 6, 421–428, 2000). When dimensioned on m unknown variable settings (e.g., number of power settings + number of tilt settings, etc.) the computational complexity becomes (m+1)^n (Raisanen, PhD. thesis, 2006). We introduce a novel approach to reduce the computational complexity by dimensioning sites only once to guarantee traffic hold requirements are satisfied.
This approach works by determining the maximum set of service test points candidate sites can handle without exceeding a hard traffic constraint, T_MAX . Following this, the ability of two evolutionary strategies (binary and permutation-coded) to search for the minimum set cover are compared. This reverses the commonly followed approach of achieving service coverage first and then dimensioning to meet traffic hold. To test this approach, three realistic GSM network simulation environments are engineered, and a series of tests performed. Results indicate this approach can quickly meet network operator objectives. <s> BIB006 | Adding ACP problem knowledge to the exploration performed by EAs can be further promoted with the usage of local search algorithms. That is, by engineering hybrid algorithms BIB001 . So far, this problem-specific knowledge has been added by using specific encoding schemes and genetic operators (as has been shown in the previous sections). However, there are several proposals in the literature in which EAs are endowed with highly tailored search methods, allowing the search to be intensified in promising regions of the search space. When adaptively meshing the cell shapes of a cellular network, BIB003 have used a local search algorithm based on a Lamarckian adaptive process. This process applies small mutations on isolated vertices of the hexagonal cells, which makes an individual evolve to a local minimum. The mutation operator, called micromutation, performs a small move on some randomly chosen vertex. BIB005 have designed a multiobjective local search to be used with the network encoding explained above. It is an iterative process that starts from a set of non-dominated solutions (or network configurations). Then, for each activated BTS of any network, it successively tests its removal and the updating of its power, azimuth and tilt with any of the available discretized values. By using the newly generated solutions, the set of non-dominated solutions is continuously updated. The local search algorithm then restarts from any newly inserted solution, and so on. Finally, the decoder approach of BIB002 , Whitaker et al. (2004a,b) , BIB004 and BIB006 for translating the integer permutation of BTSs into a cell plan can also be mentioned here. As the authors have indicated, this decoder can be considered a local search algorithm.
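The iterative multiobjective local search just described can be sketched as follows. The archive handling is a simplified interpretation under stated assumptions: evaluate (mapping a network to a tuple of objective values) and dominates (the Pareto-dominance test between two such tuples) are assumed helpers not shown here, and the tested moves follow the description in the text (site removal plus every discretized power, tilt and azimuth value from the earlier encoding sketch).

def site_neighbours(network, i):
    # All single-site moves for activated site i: deactivate it, or set
    # one of its parameters to any of the available discretized values.
    moves = [("active", [False]), ("power", POWER_LEVELS),
             ("tilt", TILT_ANGLES), ("azimuth", AZIMUTH_ANGLES)]
    for key, values in moves:
        for value in values:
            child = [dict(site) for site in network]
            child[i][key] = value
            yield child

def pareto_local_search(archive, evaluate, dominates):
    # Iteratively improve a list of non-dominated network configurations.
    improved = True
    while improved:
        improved = False
        for network in list(archive):  # iterate over a snapshot
            for i, site in enumerate(network):
                if not site["active"]:
                    continue
                for neighbour in site_neighbours(network, i):
                    objs = evaluate(neighbour)
                    # Discard neighbours dominated by (or equal to) an
                    # archived solution, which also avoids duplicates.
                    if any(dominates(evaluate(a), objs) or evaluate(a) == objs
                           for a in archive):
                        continue
                    # Otherwise insert it and prune newly dominated entries.
                    archive[:] = [a for a in archive
                                  if not dominates(objs, evaluate(a))]
                    archive.append(neighbour)
                    improved = True  # restart from newly inserted solutions
    return archive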
Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Parallelization <s> This paper uses a realistic combinatorial optimization problem as an example to show how a genetic algorithm can be parallelized in an efficient way. The problem considered is the selection of the best set of transmitter locations in order to cover a given geographical region at optimal cost. It is shown that it is possible to obtain good solutions to the problem even with a very low communication load. The parallel program is tested, first on an artificial example, then on a real-life case. <s> BIB001 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Parallelization <s> The cell planning problem with capacity expansion is examined in wireless communications. The problem decides the location and capacity of each new base station to cover expanded and increased traffic demand. The objective is to minimize the cost of new base stations. The coverage by the new and existing base stations is constrained to satisfy a proper portion of traffic demands. The received signal power at the base station also has to meet the receiver sensitivity. The cell planning is formulated as an integer linear programming problem and solved by a tabu search algorithm. In the tabu search intensification by add and drop move is implemented by short-term memory embodied by two tabu lists. Diversification is designed to investigate proper capacities of new base stations and to restart the tabu search from new base station locations. Computational results show that the proposed tabu search is highly effective. A 10% cost reduction is obtained by the diversification strategies. The gap from the optimal solutions is approximately 1∼5% in problems that can be handled in appropriate time limits. The proposed tabu search also outperforms the parallel genetic algorithm. The cost reduction by the tabu search approaches 10∼20% in problems with 2500 traffic demand areas (TDAs) in code division multiple access (CDMA). <s> BIB002 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Parallelization <s> Engineering of mobile telecommunication networks endures two major problems: the design of the network and the frequency assignment. We address the first problem in this paper, which has been formulated as a multiobjective constrained combinatorial optimisation problem. We propose a genetic algorithm (GA) that aims to approximate the Pareto frontier of the problem. Advanced techniques have been used, such as Pareto ranking, sharing and elitism. The GA has been implemented in parallel on a network of workstations to speed up the search. To evaluate the performance of the GA, we have introduced two new quantitative indicators: the entropy and the contribution. Encouraging results are obtained on real-life problems. <s> BIB003 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Parallelization <s> This paper uses a realistic problem taken from the telecommunication world as the basis for comparing different combinatorial optimization algorithms. The problem recalls the minimum hitting set problem, and is solved with greedy-like, Darwinism and genetic algorithms. These three paradigms are described and analyzed with emphasis on the Darwinism approach, which is based on the computation of e-nets.
<s> BIB004 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Parallelization <s> Evolutionary algorithms (EAs) are applied to solve the radio network design problem (RND). The task is to find the best set of transmitter locations in order to cover a given geographical region at an optimal cost. Usually, parallel EAs are needed in order to cope with the high computational requirements of such a problem. Here, we try to develop and evaluate a set of sequential and parallel genetic algorithms (GAs) in order to solve the RND problem efficiently. The results show that our distributed steady-state GA is an efficient and accurate tool for solving RND that even outperforms existing parallel solutions. The sequential algorithm performs very efficiently from a numerical point of view, although the distributed version is much faster, with an observed linear speedup. <s> BIB005 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Parallelization <s> In this article, evolutionary algorithms (EAs) are applied to solve the radio network design problem (RND). The task is to find the best set of transmitter locations in order to cover a given geographical region at an optimal cost. Usually, parallel EAs are needed to cope with the high computational requirements of such a problem. Here, we develop and evaluate a set of sequential and parallel genetic algorithms (GAs) to solve the RND problem efficiently. The results show that our distributed steady-state GA is an efficient and accurate tool for solving RND that even outperforms existing parallel solutions. The sequential algorithm performs very efficiently from a numerical point of view, although the distributed version is much faster. <s> BIB006 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Parallelization <s> We focus on the dimensioning process of cellular networks, which addresses the evaluation of global equipment costs to cover a city. To deal with frequency assignment, which constitutes the most critical resource in mobile systems, the network is usually modeled as a pattern of regular hexagonal cells. Each cell represents the area covered by the signal of a transmitter or base station (BS). Our work emphasizes the design of irregular hexagonal cells in an adaptive way. Hexagons transform themselves and adapt their shapes according to a traffic density map and to geometrical constraints. This process, called adaptive meshing (AM), may be seen as a solution to minimize the required number of BSs to cover a region and to propose a basis for transmitter positioning. The solution we present to the mesh generation problem for mobile network dimensioning is based on the use of an evolutionary algorithm. This algorithm, called hybrid island evolutionary strategy (HIES), performs distributed computation. It allows the user to tackle problem instances with large traffic density maps requiring several hundreds of cells. HIES combines fast local search computation on individuals, incorporated into a global island-like strategy. Experiments are done on one real case representing the mobile traffic load of Lyon, the second largest French city, and on several other traffic maps from fictive urban data sets.
<s> BIB007 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Parallelization <s> Cellular network design is a major issue in second generation GSM mobile telecommunication systems. In this paper, a new model of the problem in its full practical complexity, based on multiobjective constrained combinatorial optimization, has been used. We propose an evolutionary algorithm that aims at approximating the Pareto frontier of the problem, which removes the need for a cellular network designer to rank or weight objectives a priori. A specific coding scheme and genetic operators have been designed. Advanced intensification and diversification search techniques, such as elitism and adaptive sharing, have been used. Three complementary hierarchical parallel models have been designed to improve the solution quality and robustness, to speed up the search and to solve large instances of the problem. The obtained Pareto fronts and speed-ups on different parallel architectures show the efficiency and the scalability of the parallel model. Performance evaluation of the algorithm has been carried out on different realistic benchmarks. The obtained results show the impact of the proposed parallel models and the introduced search mechanisms. <s> BIB008 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Parallelization <s> Cellular network design is a major issue in mobile telecommunication systems. In this paper, a model of the problem in its full practical complexity, based on multiobjective constrained combinatorial optimization, has been investigated. We adopted the Pareto approach to its resolution in order to compute a set of diversified non-dominated networks, thus removing the need for the designer to rank or weight objectives. We designed an asynchronous steady-state evolutionary algorithm for its resolution. A specific coding scheme and genetic and neighborhood operators have been designed for the tackled problem. On the other hand, we make use of many generic features related to advanced intensification and diversification search techniques, hybridization of metaheuristics and grid computing for the distribution of the computations. They aim at improving the quality and robustness of networks and at speeding up the search, hence efficiently solving large instances of the problem. Using realistic benchmarks, the computed networks and speed-ups on parallel/distributed architectures show the efficiency and the scalability of hierarchical models of hybridization and parallelization used in conjunction. <s> BIB009 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Parallelization <s> Cellular network design is a major issue in mobile telecommunication systems. In this paper, a model of the problem in its full practical complexity, based on multiobjective constrained combinatorial optimization, has been investigated. We adopted the Pareto approach to its resolution in order to compute a set of diversified non-dominated networks, thus removing the need for the designer to rank or weight objectives a priori. We designed and implemented a 'ready-to-use' platform for radio network optimization that is flexible regarding both the modeling of the problem (adding, removing, or updating antagonistic objectives and constraints) and the solution methods.
It extends the 'white-box' ParadisEO framework for metaheuristics applied to the resolution of mono/multi-objective combinatorial optimization problems requiring both the use of advanced optimization methods and the exploitation of large-scale parallel and distributed environments. A specific coding scheme and genetic and neighborhood operators have been designed and embedded. On the other hand, we make use of many generic features related to advanced intensification and diversification search techniques, hybridization of metaheuristics and grid computing for the distribution of the computations. They aim at improving the quality of networks and their robustness. They also allow the search to be sped up and results to be obtained in tractable time, thus efficiently solving large instances of the problem. Using three realistic benchmarks, the computed networks and speed-ups on different parallel and/or distributed architectures show the efficiency and the scalability of hierarchical parallel hybrid models. <s> BIB010 | As early as the first works published on EAs for solving the ACP problem (e.g. BIB001), it was soon understood that this optimization problem involved tasks demanding high computational resources. With the aim of not only speeding up the computation but also improving the solution quality, most of the parallel EAs analysed have adopted the coarse-grained scheme, also known as the island model (Alba and Tomassini 2002). They have also used a unidirectional ring topology: BIB001, BIB002, BIB004, BIB005 and BIB006. The work of BIB007 has also used the island model with a unidirectional ring topology but, instead of subpopulations, each island runs a hybrid evolution strategy. BIB003 have used a master/slave approach for the parallel implementation of the function evaluation, i.e. each function evaluation is distributed to different processors. BIB008 have extended this work by using the master/slave scheme not only for parallelizing the function evaluation itself, but also for evaluating each tentative solution of the EA asynchronously in parallel. They have also used the island model in this work. Finally, the works of BIB009 and BIB010 have again proposed extensions of these previous publications by deploying a parallel hybrid EA on a computational grid. This EA is hybrid because a local search is used to improve the solutions generated within the evolutionary loop. The parallelism is applied at three levels: the main EA follows an island model; then, on each island, individuals undergo local search in parallel; and the third level parallelizes each single function evaluation by decomposing the fitness function.
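As a concrete illustration of the coarse-grained scheme, the Python sketch below simulates an island GA with the unidirectional ring migration adopted in the works above: every few generations, each island sends a copy of its best individual to its successor in the ring, where it replaces the worst individual. The binary genome, the toy fitness and all parameter values are assumptions made for the example, and the islands are iterated sequentially here for clarity, whereas real implementations run them on separate processors.

```python
import random

# Illustrative parameters; real studies tune these per instance.
POP_SIZE, N_ISLANDS, MIGRATION_GAP, N_GENERATIONS = 20, 4, 5, 50
GENOME_LEN = 30  # e.g. one bit per candidate site: 1 = activate a BTS

def fitness(genome):
    """Toy fitness; a real ACP evaluation trades coverage against cost."""
    return sum(genome)

def next_generation(pop):
    """Minimal generational GA step: binary tournament selection,
    uniform crossover and bit-flip mutation."""
    def tournament():
        return max(random.sample(pop, 2), key=fitness)
    offspring = []
    for _ in range(len(pop)):
        p1, p2 = tournament(), tournament()
        child = [random.choice(pair) for pair in zip(p1, p2)]
        child = [bit ^ (random.random() < 0.01) for bit in child]
        offspring.append(child)
    return offspring

islands = [[[random.randint(0, 1) for _ in range(GENOME_LEN)]
            for _ in range(POP_SIZE)] for _ in range(N_ISLANDS)]

for gen in range(1, N_GENERATIONS + 1):
    islands = [next_generation(pop) for pop in islands]
    if gen % MIGRATION_GAP == 0:
        # Unidirectional ring: island i receives the best of island i-1.
        migrants = [max(pop, key=fitness) for pop in islands]
        for i, pop in enumerate(islands):
            worst = min(range(len(pop)), key=lambda j: fitness(pop[j]))
            pop[worst] = list(migrants[(i - 1) % N_ISLANDS])

best = max((ind for pop in islands for ind in pop), key=fitness)
print("best fitness:", fitness(best))
```

The master/slave and grid models mentioned above parallelize orthogonal parts of this loop: the former distributes the fitness calls inside next_generation across processors, while the latter additionally decomposes each single fitness evaluation.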
Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Future trends <s> A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed, which includes the De Jong functions, it will be demonstrated that the new method converges faster and with more certainty than Adaptive Simulated Annealing as well as the Annealed Nelder–Mead approach, both of which have a reputation for being very powerful. The new method requires few control variables, is robust, easy to use and lends itself very well to parallel computation. <s> BIB001 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Future trends <s> This paper contains a modern vision of the parallelization techniques used for evolutionary algorithms (EAs). The work is motivated by two fundamental facts: 1) the different families of EAs have naturally converged in the last decade while parallel EAs (PEAs) still lack unified studies; and 2) there is a large number of improvements in these algorithms and in their parallelization that raise the need for a comprehensive survey. We stress the differences between the EA model and its parallel implementation throughout the paper. We discuss the advantages and drawbacks of PEAs. Also, successful applications are mentioned and open problems are identified. We propose potential solutions to these problems and classify the different ways in which recent results in theory and practice are helping to solve them. Finally, we provide a highly structured background relating to PEAs in order to make researchers aware of the benefits of decentralizing and parallelizing an EA. <s> BIB002 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Future trends <s> Some designs are sufficiently creative that they are considered to be inventions. The invention process is typically characterized by a singular moment when the prevailing thinking concerning a long-standing problem is, in a “flash of genius,” overthrown and replaced by a new approach that could not have been logically deduced from what was previously known. This paper discusses such logical discontinuities using an example based on the history of one of the most important inventions of the 20th century in electrical engineering, namely, the invention of negative feedback by AT&T's Harold S. Black. This 1927 invention overthrew the then prevailing idiom of positive feedback championed by Westinghouse's Edwin Howard Armstrong. The paper then shows how this historically important discovery can be readily replicated by an automated design and invention technique patterned after the evolutionary process in nature, namely, genetic programming. Genetic programming employs Darwinian natural selection along with analogs of recombination (crossover), mutation, gene duplication, gene deletion, and mechanisms of developmental biology to breed an ever improving population of structures.
Genetic programming rediscovers negative feedback by conducting an evolutionary search for a structure that satisfies Black's stated high-level goal (i.e., reduction of distortion in amplifiers). Like evolution in nature, genetic programming conducts its search probabilistically without resort to logic, using a process that is replete with logical discontinuities. The paper then shows that genetic programming can routinely produce many additional inventive and creative results. In this regard, the paper discusses the automated rediscovery of numerous 20th-century patented inventions involving analog electrical circuits and controllers, the Sallen–Key filter, and six 21st-century patented inventions. <s> BIB003 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Future trends <s> We focus on the dimensioning process of cellular networks, which addresses the evaluation of global equipment costs to cover a city. To deal with frequency assignment, which constitutes the most critical resource in mobile systems, the network is usually modeled as a pattern of regular hexagonal cells. Each cell represents the area covered by the signal of a transmitter or base station (BS). Our work emphasizes the design of irregular hexagonal cells in an adaptive way. Hexagons transform themselves and adapt their shapes according to a traffic density map and to geometrical constraints. This process, called adaptive meshing (AM), may be seen as a solution to minimize the required number of BSs to cover a region and to propose a basis for transmitter positioning. The solution we present to the mesh generation problem for mobile network dimensioning is based on the use of an evolutionary algorithm. This algorithm, called hybrid island evolutionary strategy (HIES), performs distributed computation. It allows the user to tackle problem instances with large traffic density maps requiring several hundreds of cells. HIES combines fast local search computation on individuals, incorporated into a global island-like strategy. Experiments are done on one real case representing the mobile traffic load of Lyon, the second largest French city, and on several other traffic maps from fictive urban data sets. <s> BIB004 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Future trends <s> Cellular network design is a major issue in mobile telecommunication systems. In this paper, a model of the problem in its full practical complexity, based on multiobjective constrained combinatorial optimization, has been investigated. We adopted the Pareto approach to its resolution in order to compute a set of diversified non-dominated networks, thus removing the need for the designer to rank or weight objectives a priori. We designed and implemented a 'ready-to-use' platform for radio network optimization that is flexible regarding both the modeling of the problem (adding, removing, or updating antagonistic objectives and constraints) and the solution methods.
On the other hand, we make use of many generic features related to advanced intensification and diversification search techniques, hybridization of metaheuristics and grid computing for the distribution of the computations. They aim at improving the quality of networks and their robustness. They also allow the search to be sped up and results to be obtained in tractable time, thus efficiently solving large instances of the problem. Using three realistic benchmarks, the computed networks and speed-ups on different parallel and/or distributed architectures show the efficiency and the scalability of hierarchical parallel hybrid models. <s> BIB005 | There are several research lines that can be explored to address the ACP problem with EAs further. At a lower algorithmic level, the design of new encodings and genetic operators for the problem, as well as the analysis of the existing ones, is of great interest. In particular, the most complex encoding, the network encoding presented in Section 3.2, has only been evaluated with a few genetic operators (mainly multilevel mutation and geographical crossover). Additional operator developments may take advantage of this ACP-targeted encoding. Evaluating this encoding and these operators with the search engine of well-known algorithms such as NSGA-II or SPEA2 is also a matter for research. At a higher algorithmic level, a promising research line is targeted at hybridizing EAs BIB001, especially with other EAs. Up to now, EAs have been hybridized in the literature with local search algorithms (e.g. see BIB004, BIB005) or Tabu Search to solve ACP problems, but hybrid algorithms involving two different EAs have not been found. The aim here would be to profit from the different search capabilities, for example, of a GA (diversification) and an evolution strategy (intensification). In the context of multiobjective EAs, hybridization is underexplored in the literature. Checking whether other, as yet unused, EAs can successfully address the ACP problem is a promising research topic as well. To the best of our knowledge, two main EA families have been left unexplored in the literature. On the one hand, no genetic programming approach has been found in the literature for ACP, even though this kind of EA performs well on other design problems BIB003. On the other hand, the cellular model of structured EAs BIB002 has not been used either. Cellular EAs have been shown to be very effective in other domains, so evaluating their enhanced search engine may lead to an improvement on the current state-of-the-art algorithms. There are several additional studies whose conclusions may result in relevant outcomes, especially for telecommunications engineers who use EAs to solve their ACP problems. The analysis of both the scalability and the convergence speed of EAs on this problem also requires more investigation. The increasing size of cellular networks means EAs are faced with problem instances with thousands of decision variables. Therefore, evaluating the algorithms that perform best on very large instances is of great interest for cellular operators, since they can afford larger and more efficient network deployments. The study of how quickly EAs converge towards optimal solutions would also be of interest to the telecommunications industry.
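One simple way to quantify convergence speed is the time-to-target measurement sketched below in Python: the EA runs until it first produces a solution of at least a given quality, and the elapsed wall-clock time is recorded; repeating this over independent runs yields a run-time distribution that can be compared across algorithms. The hooks run_one_generation and best_of are hypothetical placeholders for whatever EA is under study, and the default budget is an arbitrary choice for the example.

```python
import time

def time_to_target(run_one_generation, initial_pop, best_of, target,
                   budget_s=60.0):
    """Run an EA until a solution of quality >= target appears; return the
    elapsed seconds, or None if the time budget is exhausted first.
    run_one_generation and best_of are supplied by the EA under study."""
    pop, start = initial_pop, time.perf_counter()
    while time.perf_counter() - start < budget_s:
        pop = run_one_generation(pop)
        if best_of(pop) >= target:
            return time.perf_counter() - start
    return None  # censored run: target not reached within the budget
```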
Indeed, execution time becomes a critical constraint for operators and, above all, for the companies developing software for them. Within commercial applications, reaching 'good' solutions in a very short time is usually essential in order to provide operators with competitive software tools. These studies have to pay special attention to the statistical analysis of the results, which must be rigorously performed in order to draw useful conclusions; however, the works analysed in this article for the most part lack such thorough analyses. One way of conducting such a comparison is sketched below.
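As an illustration of the kind of analysis being advocated, the sketch below compares the final solution qualities of two algorithms over 30 independent runs with the non-parametric Mann–Whitney (Wilcoxon rank-sum) test from SciPy. The sample values are randomly generated stand-ins for the example; a real study would use the recorded results of actual executions on the same problem instances.

```python
import random
from scipy.stats import mannwhitneyu

random.seed(1)
# Stand-ins for the best fitness of 30 independent runs of each algorithm.
runs_a = [random.gauss(0.90, 0.02) for _ in range(30)]
runs_b = [random.gauss(0.88, 0.02) for _ in range(30)]

stat, p = mannwhitneyu(runs_a, runs_b, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
if p < 0.05:
    print("The observed difference between the two EAs is significant.")
else:
    print("No significant difference can be claimed from these samples.")
```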