A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> EE using ML techniques <s> A hybrid network architecture has been proposed for machine-to-machine (M2M) communications in fifth generation wireless systems, where M2M gateways connect the capillary networks and cellular networks. In this paper, we develop a novel energy-efficient and end-to-end delay-aware duty cycle control scheme for controllers at the gateway and the capillary network coordinator. We first formulate a duty cycle control problem with joint optimisation of energy consumption and end-to-end delay. Then, a distributed duty cycle control scheme is proposed. The proposed scheme consists of two parts: (i) a transmission policy, which decides the optimal number of packets to be transmitted between M2M devices, coordinators and gateways; and (ii) a duty cycle control for IEEE 802.15.4. We analytically derive the optimal duty cycle control and develop algorithms to compute the optimal duty cycle. To increase the feasibility of implementing the control on computation-limited devices, a suboptimal low-complexity rollout-algorithm-based duty cycle control (RADutyCon) is proposed. The simulation results show that RADutyCon achieves an exponential reduction of computation complexity as compared with that of the optimal duty cycle control. The simulation results show that RADutyCon performs close to the optimal control, and it performs no worse than the heuristic-based control. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> EE using ML techniques <s> The explosive growth of mobile multimedia services has caused tremendous network traffic in wireless networks, and a great part of the multimedia services are delay-sensitive. Therefore, it is important to design efficient radio resource allocation algorithms to increase network capacity and guarantee the delay QoS. In this paper, we study the power control problem in the downlink of two-tier femtocell networks with consideration of delay QoS provisioning. Specifically, we introduce the effective capacity (EC) as the network performance measure instead of the Shannon capacity to provide statistical delay QoS provisioning. Then, the optimization problem is modeled as a non-cooperative game and the existence of Nash equilibria (NE) is investigated. Furthermore, in order to enhance the self-organization capability of femtocells, building on the non-cooperative game, we employ a Q-learning framework in which all of the femtocell base stations (FBSs) are considered as agents to achieve power allocation. Then a distributed Q-learning-based power control algorithm is proposed to let femtocell users (FUs) gain maximum EC. Numerical results show that the proposed algorithm not only maintains the delay requirements of the delay-sensitive services, but also has good convergence performance. <s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> EE using ML techniques <s> We study the energy efficiency issue in 5G communications scenarios, where cognitive femtocells coexist with picocells operating in the same frequency bands. Optimal energy-efficient power allocation based on sensing-based spectrum sharing (SBSS) is proposed for the uplink cognitive femto users operating in a multiuser MIMO mode. Both hard-decision and soft-decision schemes are considered for the SBSS.
Different from the existing energy-efficient designs in multiuser scenarios, which consider system-wise energy efficiency, we consider user-wise energy efficiency and optimize it in a Pareto sense. To resolve the nonconvexity of the formulated optimization problem, we include an additional power constraint to convexify the problem without losing global optimality. Simulation results show that the proposed schemes significantly enhance the energy efficiency of the cognitive femto users compared with the existing spectral-efficient designs. <s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> EE using ML techniques <s> Heterogeneous cloud radio access networks (H-CRAN) are a new trend that aims to leverage the advantages of heterogeneous and cloud radio access networks. Low-power remote radio heads (RRHs) are exploited to provide high data rates for users with high quality of service (QoS) requirements, while high-power macro base stations (BSs) are deployed for coverage maintenance and low-QoS user support. However, the inter-tier interference between the macro BS and RRHs and energy efficiency are critical challenges that accompany resource allocation in H-CRAN. Therefore, we propose a centralized resource allocation scheme using online learning, which guarantees interference mitigation and maximizes energy efficiency while maintaining QoS requirements for all users. To foster the performance of such a scheme with model-free learning, we consider users' priority in resource block (RB) allocation and a compact-state-representation-based learning methodology to enhance the learning process. Simulation results confirm that the proposed resource allocation solution can mitigate interference, increase energy and spectral efficiencies significantly, and maintain users' QoS requirements. <s> BIB004
- Duty cycle control with joint optimization of delay and energy efficiency for capillary machine-to-machine networks in 5G communication systems BIB001
- Distributed power control for two-tier femtocell networks with QoS provisioning based on Q-learning BIB002
- Spectrum sensing techniques using both hard and soft decisions BIB003
- EE resource allocation in 5G heterogeneous cloud radio access network BIB004
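As an illustration of the Q-learning-based power control surveyed above (BIB002), the sketch below shows the basic agent loop a femtocell BS could run: discrete power levels as actions, a quantized interference measurement as state, and a reward standing in for the effective-capacity objective. The state space, reward shape, and power levels are simplified placeholders assumed for illustration, not the exact formulation of BIB002.

```python
import math
import random

# Minimal sketch of distributed Q-learning power control (in the spirit of BIB002).
# States, rewards, and the power levels are simplified placeholder assumptions.

POWER_LEVELS = [0.01, 0.05, 0.1, 0.2]   # candidate transmit powers (W), illustrative
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1       # learning rate, discount, exploration

class FemtoAgent:
    def __init__(self, n_states=4):
        # Q-table: state (quantized interference level) x action (power level index)
        self.q = [[0.0] * len(POWER_LEVELS) for _ in range(n_states)]

    def act(self, state):
        # epsilon-greedy action selection over the discrete power levels
        if random.random() < EPS:
            return random.randrange(len(POWER_LEVELS))
        row = self.q[state]
        return row.index(max(row))

    def update(self, state, action, reward, next_state):
        # standard Q-learning temporal-difference update
        best_next = max(self.q[next_state])
        td = reward + GAMMA * best_next - self.q[state][action]
        self.q[state][action] += ALPHA * td

def reward(sinr, power, delay_penalty=1.0):
    # placeholder reward: log-capacity minus a power/delay cost, standing in
    # for the effective-capacity objective used in BIB002
    return math.log2(1.0 + sinr) - delay_penalty * power

agent = FemtoAgent()
action = agent.act(state=0)
agent.update(state=0, action=action, reward=reward(sinr=10.0, power=POWER_LEVELS[action]), next_state=1)
```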
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Base Station Energy Consumption and Cell Switch Off Techniques <s> In this paper, heterogeneous wireless cellular networks based on a two-tier architecture consisting of macrocells and femtocells are considered. Methods of femtocell deployment and management are explored in order to determine their effects on the performance of wireless cellular networks. Thus, network performance parameters are described and analytically calculated for different two-tier network architectures. A specific approach is presented in the paper, where calculations of the network performance parameters are supported with some of the results obtained using an appropriate simulation tool. In such a manner, the energy efficiency of the considered two-tier network architectures is studied by introducing a number of so-called green metrics. It is clearly shown that significant energy efficiency, as well as throughput, improvements can be achieved by adopting a heterogeneous architecture for wireless cellular networks. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Base Station Energy Consumption and Cell Switch Off Techniques <s> Dynamic adaptation of base stations' on/off activity or transmit power, according to space and time traffic variations, are measures adopted in most contemporary resource management approaches dedicated to improving the energy efficiency of cellular access networks. Practical implementation of both measures results in changes to instantaneous base station power consumption. In this paper, extensive analyses presenting the influence of transmit power scaling and on/off switching on instantaneous macro base station power consumption are given. Based on real on-site measurements performed on a set of macro base stations of different access technologies and production years, we developed linear power consumption models. These models are developed by means of linear regression and precisely model the influence of transmit power on instantaneous power consumption for the second, third and fourth generations of macro base stations. In order to estimate the potential energy savings of transmit power scaling and on/off switching for base stations of different generations, statistical analyses of the measured power consumptions are performed. Also, transient times and variations of base stations' instantaneous power consumption during transient periods initiated by on/off switching and transmit power scaling are presented. Since the developed power consumption models follow the measured results with high confidence, they can be used as general models for expressing the relationship between transmitted and consumed power for macro base stations of different technologies and generations. <s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Base Station Energy Consumption and Cell Switch Off Techniques <s> The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in 5G cellular networks. While massive MIMO will reduce the transmission power at the expense of higher computational cost, the question remains as to which (computation or transmission power) is more important in the energy efficiency of 5G small cell networks. Thus, the main objective in this article is to investigate the computation power based on the Landauer principle.
Simulation results reveal that more than 50 percent of the energy is consumed by the computation power at 5G small cell BSs. Moreover, the computation power of a 5G small cell BS can approach 800 W when massive MIMO (e.g., 128 antennas) is deployed to transmit high-volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks. <s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Base Station Energy Consumption and Cell Switch Off Techniques <s> In next generation wireless networks, along with the overwhelming demand for high data rates and network capacity, users demand ubiquitous connectivity with the network. In order to fulfill the demand for anywhere, anytime data services, network operators have to install more and more base stations, which eventually leads to high power consumption. For this, a potential solution is offered by the 5G network, which proposes a heterogeneous environment of wireless access networks: more particularly, the deployment of femto and pico cells under the umbrella of macro cell base stations (BSs). Such a networking strategy will yield high network capacity and energy efficiency along with better network coverage. In this article, an analysis of energy efficiency has been carried out by using two-tier and three-tier network configurations. The simulation results demonstrate that rational deployment of small cells improves the energy efficiency of the wireless network. <s> BIB004 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Base Station Energy Consumption and Cell Switch Off Techniques <s> As we make progress towards the era of fifth generation (5G) communication networks, energy efficiency (EE) becomes an important design criterion because it guarantees sustainable evolution. In this regard, the massive multiple-input multiple-output (MIMO) technology, where the base stations (BSs) are equipped with a large number of antennas so as to achieve multiple orders of spectral and energy efficiency gains, will be a key technology enabler for 5G. In this article, we present a comprehensive discussion on state-of-the-art techniques which further enhance the EE gains offered by massive MIMO (MM). We begin with an overview of MM systems and discuss how realistic power consumption models can be developed for these systems. Thereby, we discuss and identify a few shortcomings of some of the most prominent EE-maximization techniques present in the current literature. Then, we discuss "hybrid MM systems" operating in a 5G architecture, where MM operates in conjunction with other potential technology enablers, such as millimetre wave, heterogeneous networks, and energy harvesting networks. Multiple opportunities and challenges arise in such a 5G architecture because these technologies benefit mutually from each other and their coexistence introduces several new constraints on the design of energy-efficient systems. Despite clear evidence that hybrid MM systems can achieve significantly higher EE gains than conventional MM systems, several open research problems continue to block system designers from fully harnessing the EE gains offered by hybrid MM systems. Our discussions lead to the conclusion that hybrid MM systems offer a sustainable evolution towards 5G networks and are therefore an important research topic for future work.
<s> BIB005 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Base Station Energy Consumption and Cell Switch Off Techniques <s> Energy efficiency is a major requirement for next generation mobile networks, both as an end to reduce operational expenses and to increase the systems' ecological friendliness. Another integral part of 5G networks is the increased density of the deployment of small-radius base stations, such as femtocells. Based on the design principle that demands a system to be active and transmitting only when and where it is needed, we evaluate the energy savings harvested when sleep mode techniques are enforced in dense femtocell deployments. We present our novel variations of sleep mode combined with hybrid access strategies and we estimate capacity and energy benefits. Our simulations show significant advantages in performance and energy efficiency. <s> BIB006 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Base Station Energy Consumption and Cell Switch Off Techniques <s> With the promise of higher data rates and the enabling of the Internet of Things (IoT), the quest for energy efficiency in communication networks has become an important milestone in design and operation. With the emergence of 5G wireless networks and the deployment of billions of base stations and connected devices, the requirements on system design and energy efficiency management will become more demanding. In addition, in the next era of cellular, energy efficiency is the most important requirement, driven by the need to reduce the carbon footprint of communications and to extend terminal battery life. Nevertheless, new challenges have emerged, especially in the backbone of the networks. Therefore, the aim of this paper is to present the potential of the 5G system to meet the increasing demands of devices and explosive capacity without causing any significant energy consumption, based on a functional split architecture, particularly for the 5G backhaul. <s> BIB007 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Base Station Energy Consumption and Cell Switch Off Techniques <s> Cell switch-off (CSO) is an important approach to reducing energy consumption in cellular networks during off-peak periods. CSO addresses the research question of which cells to switch off, and when. Whereas online CSO, based on immediate user demands and channel states, is problematic to implement and difficult to model, off-line CSO is more practical and tractable. Furthermore, it is known that regular cell layouts generally provide the best coverage and spectral efficiency, which leads us to prefer regular static (off-line) CSO. We introduce sector-based regular CSO patterns for the first time. We organize the existing and newly introduced patterns using a systematic nomenclature, studying 26 patterns in total. We compare these patterns in terms of energy efficiency and the average number of users supported, via a combination of analysis and simulation. We also compare the performance of CSO with two benchmark algorithms. We show that the average number of users can be captured by one parameter. Moreover, we find that the distribution of the number of users is close to Gaussian, with a tractable variance. Our results demonstrate that several patterns that activate only one out of three sectors are particularly beneficial; such CSO patterns have not been studied before. <s> BIB008
Knowing the accurate energy consumption of a base station constitutes an important part of understanding the energy budget of a wireless network. For this purpose, authors in BIB003 have specifically discussed energy conservation at the equipment level by presenting the breakdown of a base station. A typical BS has been presented by dividing it into five parts, namely the antenna interface, the power amplifier, the RF chains, the baseband unit, and the mains and DC-DC power supplies. These modules are shown in Figure 2. An important claim has been made stating that up to 57% of the power consumption at a base station occurs at the transmission end, i.e., the power amplifier and antenna interface. Yet, with small cells, the power consumption per base station has been reduced due to shorter distances between the base stations and the users BIB003 BIB004. In BIB004, analytical modelling of the energy efficiency for a heterogeneous network comprising macro, pico and femto base stations has been discussed. To a certain extent, emphasis has been put on the baseband unit, which is specifically in charge of the computing operations and must be sophisticated enough to handle huge bursts of traffic. A baseband unit has been described as being composed of four different logical systems: a baseband system used for evaluating Fast Fourier Transforms (FFT) and wireless channel coding, the control system for resource allocation, the transfer system used for management operations among neighbouring base stations and, finally, the system for powering up the entire base station site, including cooling and monitoring systems. Furthermore, the use of mmWave and massive MIMO would need an even greater push on the computation side of the base station since more and more users are now being accommodated. One study discusses the achievable sum rates and energy efficiency of downlink single-cell M-MIMO systems under various precoding schemes, whereas several design constraints and future opportunities concerning existing and upcoming MIMO technologies have been discussed in BIB005. The computation power of a base station increases as the number of antennas and the bandwidth increase. In the case of using 128 antennas, the computation power would go as high as 3000 W for a macrocell and 800 W for a small cell according to BIB003. Authors in BIB007 have discussed the utility of moving most of the baseband processing functionality away from the base station towards a central, more powerful and organized unit for supporting higher data rates and traffic density. Users are envisioned to experience more flexibility using this centralized RAN, since they would be able to get signalling from one BS and data transfer through another, best possible, neighbouring BS. Visible gains in latency and fronthaul bandwidth have thus been observed by having stronger backhaul links, but this research avenue still needs to be formally exploited for devising globally energy-efficient mechanisms. The choice of the best-suited BS would allow the network to use a lower transmission power, thus increasing the energy efficiency. An analysis of throughput as a performance metric has been provided for a two-tier heterogeneous network comprising macro and femto cells in BIB001. The claimed improvement in throughput originates from a distributed mesh of small cells, so that the minimal transmission distance between the end user and the serving base station translates into reduced antenna transmission power.
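The component breakdown above is commonly summarized by a linear base station power model of the form P_in = N_TRX (P_0 + Δ_p P_out), with a fixed draw in sleep mode. The sketch below is a minimal illustration of this standard model; the coefficients are typical published macro-BS figures used here as assumptions, not the specific measurements cited in this section.

```python
# Minimal sketch of a linear BS power model: P_in = N_trx * (P0 + dP * P_out)
# for 0 < P_out <= P_max, and P_in = N_trx * P_sleep when the BS sleeps.
# Coefficients are illustrative assumptions, not the measured values cited above.

def bs_input_power(p_out_w, n_trx=6, p0_w=130.0, delta_p=4.7,
                   p_max_w=20.0, p_sleep_w=75.0, sleeping=False):
    """Instantaneous input power (W) of a macro site with n_trx transceiver chains."""
    if sleeping:
        return n_trx * p_sleep_w
    p_out_w = min(max(p_out_w, 0.0), p_max_w)   # clamp radiated power to its range
    return n_trx * (p0_w + delta_p * p_out_w)

# Example: a 3-sector, 2-antenna macro BS at full output power vs. in sleep mode
print(bs_input_power(20.0))                 # ~1344 W at full load
print(bs_input_power(0.0, sleeping=True))   # 450 W asleep
```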
Considering these findings on BS energy consumption, cell switch-off techniques have been explored in the literature. An incentive-based sleeping mechanism for densely deployed femtocells has been considered in BIB006, and an energy consumption reduction of up to 40% has been observed by turning the RF chains off and only keeping the backhaul links alive. The key enabler here would be prompt toggling between active and sleep modes for maintaining the quality of service. According to BIB006, a "sniffer" component installed at these small cells would be responsible for detecting activity in the network by checking the power in uplink connections; a value surpassing the threshold would indicate a connection with the macrocell. The Mobility Management Entity (MME) has also been suggested to potentially take the lead by sending wake-up signals to the respective femtocells and keeping the others asleep. In contrast to the usual techniques of handing users over to the neighbouring base stations and turning the cell off, it would be beneficial to give incentives to users for connecting to a neighbouring cell if they get better data rates there. Authors in BIB008 have conducted a thorough study of the classification of switching techniques as well as the calculation of the outage probability of UEs under realistic constraints. They state that the energy consumption of a base station is not directly proportional to its load, so an improved switching algorithm was needed that would allow the UEs to maintain their SINR thresholds. They have thus brought forward a sector-based switching technique for the first time. Furthermore, they favor an offline switching technique over a more dynamic online scheme because of practical constraints such as random UE distribution and realistic interference modelling. Authors in BIB002 discuss the influence of transmit power scaling and on/off switching on instantaneous macro base station power consumption. The proposed power consumption models have been claimed to serve as generic models for the relationship between transmitted and consumed power for macro base stations of different technologies and generations. In addition to these techniques, machine learning techniques have recently been used to implement cell switch-off, as discussed in Section 6.
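The sniffer-based sleep control described above can be sketched as a simple threshold rule: the femtocell wakes only when the measured uplink power indicates a nearby active user, and returns to sleep after a sustained quiet period, with the backhaul kept alive throughout. The threshold and hysteresis values below are illustrative assumptions, not parameters from BIB006.

```python
# Minimal sketch of sniffer-based femtocell sleep control (in the spirit of BIB006).
# Threshold and hysteresis values are illustrative assumptions.

WAKE_THRESHOLD_DBM = -70.0   # uplink power suggesting an active macro connection nearby
SLEEP_HYSTERESIS = 3         # consecutive quiet intervals before sleeping again

class FemtocellSleepController:
    def __init__(self):
        self.active = False          # RF chains off initially; backhaul stays alive
        self.quiet_intervals = 0

    def on_uplink_measurement(self, ul_power_dbm):
        """Toggle RF chains based on sniffed uplink activity."""
        if ul_power_dbm >= WAKE_THRESHOLD_DBM:
            self.active = True       # user detected: wake the RF front-end
            self.quiet_intervals = 0
        else:
            self.quiet_intervals += 1
            if self.quiet_intervals >= SLEEP_HYSTERESIS:
                self.active = False  # sustained silence: RF chains off

ctrl = FemtocellSleepController()
for sample_dbm in [-90.0, -65.0, -70.0, -85.0, -88.0, -92.0]:
    ctrl.on_uplink_measurement(sample_dbm)
    print(ctrl.active)   # wakes on the -65 dBm sample, sleeps after three quiet ones
```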
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Interference-Aware Energy Efficiency Techniques in 5G Ultra Dense Networks <s> This monograph presents a unified framework for energy efficiency maximization in wireless networks via fractional programming theory. The definition of energy efficiency is introduced, with reference to single-user and multi-user wireless networks, and it is observed how the problem of resource allocation for energy efficiency optimization is naturally cast as a fractional program. An extensive review of the state of the art in energy efficiency optimization by fractional programming is provided, with reference to centralized and distributed resource allocation schemes. A solid background on fractional programming theory is provided. The key notion of generalized concavity is presented and its strong connection with fractional functions described. A taxonomy of fractional problems is introduced, and for each class of fractional problem, general solution algorithms are described, discussing their complexity and convergence properties. The described theoretical and algorithmic framework is applied to solve energy efficiency maximization problems in practical wireless networks. A general system and signal model is developed which encompasses many relevant special cases, such as one-hop and two-hop heterogeneous networks, multi-cell networks, small-cell networks, device-to-device systems, cognitive radio systems, and hardware-impaired networks, wherein multiple antennas and multiple subcarriers are possibly employed. Energy-efficient resource allocation algorithms are developed, considering both centralized, cooperative schemes, as well as distributed approaches for self-organizing networks. Finally, some remarks on future lines of research are given, stating some open problems that remain to be studied. It is shown how the described framework is general enough to be extended in these directions, proving useful in tackling future challenges that may arise in the design of energy-efficient future wireless networks. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Interference-Aware Energy Efficiency Techniques in 5G Ultra Dense Networks <s> The recent trend in network communication is leading towards high-speed wireless broadband technology. The scheduling of real-time traffic has a high impact on the system, so efficient scheduling is crucial. This paper proposes an energy-efficient resource allocation scheduler with QoS-aware support for LTE networks. The ultimate aim is to promote and achieve a green and environmentally friendly wireless LTE network. Some related works on green LTE networks are also discussed. <s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Interference-Aware Energy Efficiency Techniques in 5G Ultra Dense Networks <s> In this paper, we investigate the interference management problem in a full-duplex cellular network from a spectrum resource allocation perspective. In order to maximize the full-duplex network throughput, we propose an interference-area-based resource allocation algorithm, which can pair a downlink UE and an uplink UE with limited mutual interference. The simulation results verify the efficiency of the proposed interference-area-based resource allocation algorithm in the investigated full-duplex cellular network.
<s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Interference-Aware Energy Efficiency Techniques in 5G Ultra Dense Networks <s> Ultra-dense networks can further improve the spectrum efficiency (SE) and the energy efficiency (EE). However, interference avoidance and green design are becoming more complex due to the intrinsic densification and scalability. It is known that the more densely small cells are deployed, the more cooperation opportunities exist among them. In this paper, we characterize the cooperative behaviors in a Nash bargaining cooperative game-theoretic framework, where we maximize the EE performance with a certain sacrifice of SE performance. We first analyze the relationship between the EE and the SE, based on which we formulate the Nash-product EE maximization problem. We obtain the closed-form sub-optimal SE equilibria that maximize the EE performance with and without minimum SE constraints. We finally propose a CE2MG algorithm, and numerical results verify the improved EE and fairness of the presented CE2MG algorithm compared with the non-cooperative scheme. <s> BIB004 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Interference-Aware Energy Efficiency Techniques in 5G Ultra Dense Networks <s> Wireless cellular networks have seen dramatic growth in the number of mobile users. As a result, data requirements, and hence the base-station power consumption, have increased significantly. This in turn adds to the operational expenditure and also causes global warming. Base station power consumption in Long-Term Evolution (LTE) has, therefore, become a major challenge for vendors seeking to stay green and profitable in the competitive cellular industry. It necessitates novel methods to devise energy-efficient communication in LTE. The importance of the topic has attracted huge research interest worldwide. Energy saving (ES) approaches proposed in the literature can be broadly classified into the categories of energy-efficient resource allocation, load balancing, carrier aggregation, and bandwidth expansion. Each of these methods has its own pros and cons, leading to a trade-off between ES and other performance metrics and resulting in open research questions. This paper discusses various ES techniques for LTE systems and critically analyses their usability through a comprehensive comparative study. <s> BIB005 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Interference-Aware Energy Efficiency Techniques in 5G Ultra Dense Networks <s> In this paper, device-to-device (D2D) communication and small cell technology are introduced into cellular networks to form a three-layer heterogeneous network (HetNet). The resource allocation problem of D2D users and small cellular users (SCUEs) is studied in this network, and a resource allocation method that satisfies the communication quality of macro cellular users, D2D users and SCUEs is proposed. Firstly, in order to reduce the computational complexity, regional restrictions on the macro base station and users are applied; then, in order to improve the system throughput, a resource allocation method based on interference control is proposed. The simulation results show that the proposed method can effectively reduce the computational complexity and improve the overall system throughput.
<s> BIB006 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Interference-Aware Energy Efficiency Techniques in 5G Ultra Dense Networks <s> Orthogonal Frequency Division Multiplexing (OFDM) has been widely used in next generation networks. With the increase in wireless equipment, the energy consumption of wireless networks has become a big challenge. Power control is key to network management, and power allocation and channel assignment have been investigated for maximizing the energy efficiency of each cell in OFDM-based cellular networks. The problem of maximizing network energy efficiency has been formulated as a non-linear fractional program, which is solved using dual decomposition and sub-gradient iteration. Furthermore, a numerical simulation is presented to verify the proposed algorithm. The simulation results show that the maximum energy efficiency in each cell can be obtained. <s> BIB007 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Interference-Aware Energy Efficiency Techniques in 5G Ultra Dense Networks <s> Spurred by both economic and environmental concerns, energy efficiency (EE) has now become one of the key pillars for fifth generation (5G) mobile communication networks. To maximize the downlink EE of a 5G ultra dense network (UDN), we formulate a constrained EE maximization problem and translate it into a convex representation based on fractional programming theory. To solve this problem, we first adopt a centralized algorithm to reach the optimum based on Dinkelbach's procedure. To improve the efficiency and reduce the computational complexity, we further propose a distributed iterative resource allocation algorithm based on the alternating direction method of multipliers (ADMM). For the proposed distributed algorithm, the local and dual variables are updated by each base station (BS) in parallel and independently, and the global variables are updated through coordination and information exchange among BSs. Moreover, as noise may lead to imperfect information exchange among BSs, the global variable update may be subject to failure. To cope with this problem, we propose a robust distributed algorithm, for which the global variable only updates when the information exchange is successful. We prove that this modified robust distributed algorithm converges to the optimal solution of the primal problem almost surely. Simulation results validate our proposed centralized and distributed algorithms. Especially, the proposed robust distributed algorithm can effectively eliminate the impact of noise and converge to the optimal value at the cost of a small increase in computational complexity. <s> BIB008 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Interference-Aware Energy Efficiency Techniques in 5G Ultra Dense Networks <s> Energy and spectral efficiencies are key metrics to assess the performance of networks and compare different configurations or techniques. There are many ways to define those metrics, and the performance indicators used in their calculation can also be measured in different ways. Using an LTE-A network, we measure different performance indicators and compare the metrics' outputs. Modifying the transmitted output power, the bandwidth, and the number of base stations, different network configurations are also compared.
As expected, the measurements show that increasing the bandwidth increases the throughput more than it increases the energy consumption. Results clearly show that using inappropriate indicators can be misleading. The power indicator should include all energy consumed and the throughput should be dependent on the traffic, taking into account the idle time of the network, if any. There is a need to include more performance indicators into the metrics, especially those related to quality of service. <s> BIB009
The advantages of small cell deployment, in terms of increased system capacity and better load balancing capability, have been discussed in the previous sections. Yet, it is important to mention that densification suffers from added system complexity. Therefore, energy efficiency as well as spectral efficiency becomes harder to evaluate. A Nash energy efficiency maximization framework has been presented for discussing the relationship between energy and spectral efficiency in BIB004. The two are inversely related: an increase in one demands a natural decrease in the other, which is usually the case at medium to high transmission powers. Most of the research conducted in ultra-dense small cell networks has focused on techniques optimizing both energy efficiency (EE) and spectral efficiency (SE). Authors in BIB004 also bring forth the idea of gaining energy efficiency at the cost of spectral efficiency, where the small cells are under the coverage of a macro cell and pose interference issues due to the sharing of bandwidth among them. In such a scenario, all the small cells participate in energy efficiency maximization according to a game-theoretic methodology. The suggested game-theoretic model is a distributed model and utilizes the Nash product for maximizing cooperative energy efficiency. Analysis of the algorithms shows that although energy efficiency increases with the number of small cells, it saturates at about 200 cells and afterwards only experiences a minor increase. Fractional programming has been extensively used in BIB001 for modelling the energy efficiency ratio for a point-to-point (P2P) network as well as for a full-scale communication network using MIMO. EE has been considered as a cost-benefit ratio, and minimum rate constraints have been put in place to model real-life scenarios. In addition, fairness in resource allocation has been considered a major factor in the overall energy distribution. These two constraints might tend to increase the power consumption in case the minimum thresholds are set too high. Adding to the use cases of fractional programming, BIB008 laid out a robust distributed algorithm for reducing the adverse effects of computational complexity and noise on resource allocation. Authors in BIB009 have presented an experimental setup for defining the right kind of key performance indicators when measuring either EE or SE. The setup includes a set of UEs, three small BSs, and iperf traffic using the User Datagram Protocol (UDP) and the File Transfer Protocol (FTP). Results have indicated that utilization of a higher bandwidth increases the throughput more than the power consumption, that throughput must incorporate the traffic density, and that the idle power of the equipment needs to be considered in energy consumption calculations. In BIB005, the use of varying transmission power levels, by the aid of custom power levels in a two-tier network, has been encouraged for optimizing the needed power in Long Term Evolution (LTE). Intelligent switching of control channels in the DL and tuning the power levels according to the UE's feedback have been envisioned to aid in allocating the resource blocks with an optimum power. Authors in BIB002 have discussed the opportunities in the less explored domain of user scheduling in LTE. 3GPP has no fixed requirement on scheduling, and thus researchers have devised their own mechanisms depending upon their pain points.
Authors have proposed the idea of associating Quality of Service (QoS) with scheduling for accommodating cell-edge users. Authors in BIB003 have proposed a resource allocation technique for minimizing the interference at the UE side. Considering a full-duplex communication setup, a circular interference area for a DL UE is demarcated by the BS based upon a predefined threshold; the resource block of this UE is then shared with an UL UE from outside the interference region to keep the mutual interference at a minimal level, as sketched below. Simulation results claim to improve the overall network throughput based on the efficient pairing of UEs, but the throughput might degrade with a large increase in the distance between the paired UEs. A heuristic algorithm presented in BIB006 improves the system throughput using resource reuse in a three-tier architecture while regulating the interference regions of UEs being served by either a macro BS, a small BS or in a D2D way. Visible gains in throughput have been noted with increased user density, given an efficient user selection and a minimum distance between the UEs served in a D2D fashion for stronger link retention. Moreover, in BIB007, authors have constructed objective functions for EE maximization and have compared a max-min power consumption model against their nonlinear fractional optimization model. Results have been promising, showing a reduction in power consumption due to the mutual participation of cells as their number starts to increase.
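The fractional-programming machinery recurring in BIB001, BIB007 and BIB008 typically reduces to Dinkelbach's procedure: the EE ratio R(p)/P(p) is maximized by iteratively solving the parametrized problem max R(p) − λP(p) and updating λ to the achieved ratio. Below is a minimal single-link sketch with a grid search standing in for the inner convex solver; all channel and power parameters are illustrative assumptions.

```python
import math

# Minimal sketch of Dinkelbach's procedure for EE maximization on one link:
# maximize EE(p) = B*log2(1 + g*p/N0) / (Pc + p). Parameters are illustrative.

B, G, N0, PC, P_MAX = 10e6, 1e-7, 1e-13, 1.0, 10.0  # bandwidth, gain, noise, circuit power, max power

def rate(p):
    return B * math.log2(1.0 + G * p / N0)

def dinkelbach(tol=1e-4, max_iter=50):
    lam = 0.0
    grid = [i * P_MAX / 10000 for i in range(1, 10001)]
    for _ in range(max_iter):
        # inner problem: maximize rate(p) - lam*(PC + p) over the power grid
        p_star = max(grid, key=lambda p: rate(p) - lam * (PC + p))
        f = rate(p_star) - lam * (PC + p_star)
        if abs(f) < tol * rate(p_star):
            break
        lam = rate(p_star) / (PC + p_star)   # update lambda to the achieved EE (bit/J)
    return p_star, lam

p_opt, ee_opt = dinkelbach()
print(f"optimal power ~ {p_opt:.3f} W, EE ~ {ee_opt / 1e6:.2f} Mbit/J")
```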
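The interference-area pairing of BIB003 discussed above can also be illustrated geometrically: the BS draws a circle of radius r around each DL UE and reuses that UE's resource block only for an UL UE located outside the circle. The 2-D coordinates, radius, and the greedy nearest-admissible rule below are illustrative assumptions, not the exact algorithm of BIB003.

```python
import math

# Minimal sketch of interference-area-based UL/DL UE pairing for full duplex
# (in the spirit of BIB003). Coordinates and the radius are illustrative.

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def pair_ues(dl_ues, ul_ues, radius):
    """Greedily pair each DL UE with the nearest UL UE outside its interference area."""
    pairs, free_ul = [], list(ul_ues)
    for dl in dl_ues:
        candidates = [ul for ul in free_ul if dist(dl, ul) > radius]
        if not candidates:
            continue   # no admissible partner: this resource block stays unpaired
        ul = min(candidates, key=lambda u: dist(dl, u))  # closest admissible UL UE
        pairs.append((dl, ul))
        free_ul.remove(ul)
    return pairs

dl = [(0, 0), (100, 50)]
ul = [(10, 5), (80, 200), (300, 300)]
print(pair_ues(dl, ul, radius=60.0))
# (0, 0) skips (10, 5), which lies inside its interference area, and pairs farther out
```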
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient and Cache-Enabled 5G <s> Mobility, resource constraints and unreliable wireless links of mobile P2P networks will cause high data access latency and communication overhead. Cooperative caching is widely seen as an effective solution to improve the overall system performance in mobile P2P networks. In this paper we present a novel cooperative caching scheme for mobile P2P networks. In our scheme the caching space of each node is divided into three parts: local caching, cooperative caching and path caching, which respectively store the requested data objects of the node, the hot data objects in the network, and the data object paths. We also put forward a cache replacement strategy according to our scheme. The proposed cache replacement strategy not only takes into account the needs of the nodes, but also pays attention to collaborative work between nodes. We evaluate the performance of our scheme by using NS-2. The experimental results show that the cache hit ratio is effectively increased and the average hop count is reduced. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient and Cache-Enabled 5G <s> Traditional wireless networks mainly rely on macro cell deployments; meanwhile, with the advances in fourth generation networks, the recent architectures of LTE and LTE-A support Heterogeneous Networks (HetNets) that employ a mix of macro and small cells. Small cells aim at increasing coverage and capacity. Coverage both at cell edges and in indoor environments can be significantly improved by relays and small cells. Capacity is inherently limited because of the limited spectrum, and although 4G wireless networks have been able to provide a considerable increase in capacity, it has always been challenging to keep up with the growing user demands. In particular, the high volume of traffic resulting from video uploads or downloads is the major reason for the ever-growing user demand. In the Internet, content caching at locations closer to the users has been a successful approach to enhance resource utilization. Very recently, content caching within the wireless network has been considered for 4G networks. In this paper, we propose an Integer Linear Programming (ILP)-based energy-efficient content placement approach for small cells. The proposed model, namely minimize Uplink Power and Caching Power (minUPCA), jointly minimizes uplink and caching powers. We compare the performance of minUPCA with a scheme that only aims to minimize uplink power. Our results show that minUPCA provides a compromise between the uplink energy budget of the User Equipment (UE) and the caching energy budget of the Small Cell Base Station (SCBS). <s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient and Cache-Enabled 5G <s> The emerging 5G wireless networks will pose extreme requirements such as high throughput and low latency. Caching, as a promising technology, can effectively decrease latency and provide customized services based on group user behaviour (GUB). In this paper, we carry out an energy efficiency analysis in cache-enabled hyper cellular networks (HCNs), where macro cells and small cells (SCs) are deployed heterogeneously with the control and user plane (C/U) split.
Benefiting from the assistance of macro cells, a novel access scheme is proposed according to both user interest and fairness of service, where the SCs can turn into semi-sleep mode. Expressions for the coverage probability, throughput and energy efficiency (EE) are derived analytically as functions of key parameters, including the cache ability, search radius and backhaul limitation. Numerical results show that the proposed scheme in HCNs can increase the network coverage probability by more than 200% compared with single-tier networks. The network EE can be improved by 54% over the nearest-access scheme, with a larger search radius and higher SC cache capacity under lower traffic load. Our performance study provides insights into the efficient use of caching in 5G software defined networking (SDN). <s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient and Cache-Enabled 5G <s> Content caching is an efficient technique to reduce delivery latency and system congestion during peak-traffic times by bringing data closer to end users. Existing works consider caching only at higher layers, separated from the physical layer. In this paper, we study wireless caching networks by taking the cache capability into account when designing the signal transmission. In particular, we investigate multi-layer caching and its performance in edge-caching wireless networks where both the base station (BS) and users are capable of storing content data in their local caches. Two notable uncoded and coded caching strategies are studied. Firstly, we propose a coded caching strategy that applies to arbitrary cache sizes. The required backhaul and access rates are given as a function of the BS and user cache sizes. Secondly, closed-form expressions for the system energy efficiency (EE) corresponding to the two caching methods are derived. Thirdly, the system EE is maximized via precoding vector design and optimization while satisfying the user request rate. Finally, numerical results are presented to verify the effectiveness of the two caching methods. <s> BIB004 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient and Cache-Enabled 5G <s> Using a network of cache-enabled small cells, traffic during peak hours can be reduced by proactively fetching the content that is most likely to be requested. In this paper, we aim to explore the impact of proactive caching on an important metric for future generation networks, namely, energy efficiency (EE). We argue that exploiting the spatial repartitions of users, in addition to the correlation in their content popularity profiles, can result in considerable improvement of the achievable EE. In this paper, the optimization of EE is decoupled into two related subproblems. The first one addresses the issue of content popularity modeling. While most existing works assume similar popularity profiles for all users, we consider an alternative framework in which users are clustered according to their popularity profiles. In order to showcase the utility of the proposed clustering, we use a statistical model selection criterion, namely, the Akaike information criterion. Using stochastic geometry, we derive a closed-form expression of the achievable EE and we find the optimal active small cell density vector that maximizes it. The second subproblem investigates the impact of exploiting the spatial repartitions of users.
After considering a snapshot of the network, we formulate a combinatorial problem that optimizes content placement in order to minimize the transmission power. Numerical results show that the clustering scheme considerably improves the cache hit probability and consequently the EE, compared with an unclustered approach. Simulations also show that the small base station allocation algorithm improves the energy efficiency and hit probability. <s> BIB005 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient and Cache-Enabled 5G <s> In this paper, we study delay-aware cooperative online content caching with limited caching space and unknown content popularity in dense small cell wireless networks. We propose a Cooperative Online Content cAching algorithm (COCA) that decides in which BS the requested content should be cached, with consideration of three important factors: the residual cache space in each small cell base station (SBS), the number of coordinated connections each SBS establishes with other SBSs, and the number of served users in the coverage area of each SBS. In addition, due to the limited storage space in the cache, the proposed COCA algorithm eliminates the least recently used (LRU) contents to free up space. We compare the delay performance of the proposed COCA algorithm with existing offline cooperative caching schemes through simulations. Simulation results demonstrate that the proposed COCA algorithm has a better delay performance than the existing offline algorithms. <s> BIB006
In BIB005, the idea of proactive caching based on content popularity in small cells has been proposed for improving energy efficiency. Owing to the abundance of small cells, networks are getting constrained by the overall backhaul link capacity, and much of the load corresponds to transactions of the same requests repeatedly. Energy efficiency has been evaluated with regard to the content placement techniques, and more emphasis has been put on organizing the content based on user locations and constantly fine-tuning the clusters based on the content popularity distribution instead of spanning the same content across the network. Various topologies are shown in Figure 4. Energy efficiency has been formulated in relation to the small cell density vector. A heterogeneous file popularity distribution has been considered and a popularity vector has been maintained at every user. Users have been grouped into clusters depending upon the similarity in their interests, and the cached files are an average of these popularity vectors. Users would usually be allowed to communicate with the base station within a specified distance of their cluster, and in case of a cache miss event, the content would then be requested from the core via backhaul links. Spanning the same data across the network tends to sacrifice information diversity, and hence a content-based clustering approach has been brought forward. Simulations demonstrate that with increased base station density, significant energy efficiency gains are experienced, since the allocation problem gets simplified and interference and transmission powers are reduced. In a separate line of work, a unique approach for addressing the energy efficiency challenge has been presented. The proposed E3 ratio incorporates a cost factor when weighing the number of UEs being served against the power the BS spends on this operation. It has been made clear that although the cost factor might not have a direct impact on the spectral efficiency, it would be an important factor when regulating the cost of the entire network. Thus, operators have been advised to carefully incorporate the features of edge caching and gigabit X-haul links to strike a fair balance between the cost overhead and the need for the feature; otherwise it would be overkill, which is to be strictly avoided. Mathematical analysis for EE maximization presented in BIB004 supports the fact that for cases of low user cache size, non-coded schemes should be utilized for a faster delivery system. A highlight of the research work conducted in BIB006 is the assumption of a finite cache memory for a more realistic analysis. Delay bounds of an online cooperative caching scheme have been brought forward and compared with an offline and a random caching scheme. The cache, being periodically updated, promises to deliver a tighter user association and aims to achieve the minimum possible latency. The algorithm also aims to accurately cache the data in highest demand as user density increases. The application of cooperative caching to P2P networks has been discussed in BIB001, where the authors have demonstrated the effectiveness of the algorithm through the segmentation of cache memory at the base stations. It would not only keep track of the cached data of the highly demanded information but would also record data paths and the newly requested data. The simulations have illustrated the usefulness of this optimization technique through the reduced number of hops and latency.
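The clustering-plus-placement pipeline of BIB005 can be made concrete with a toy sketch: users are grouped by the similarity of their popularity vectors, each cluster's cache stores the top files of the averaged vector, and a request is a hit when it falls in its cluster's cache. The two hand-built clusters, the cache size, and the profiles below are illustrative assumptions, not data from BIB005.

```python
# Minimal sketch of popularity-profile clustering and proactive cache placement
# (in the spirit of BIB005). Profiles, cluster count, and cache size are illustrative.

def average_profile(profiles):
    n = len(profiles[0])
    return [sum(p[f] for p in profiles) / len(profiles) for f in range(n)]

def top_k(profile, k):
    # indices of the k most popular files under this profile
    return set(sorted(range(len(profile)), key=lambda f: -profile[f])[:k])

# two user clusters with different request-probability vectors over 6 files
cluster_a = [[0.4, 0.3, 0.1, 0.1, 0.05, 0.05],
             [0.5, 0.2, 0.1, 0.1, 0.05, 0.05]]
cluster_b = [[0.05, 0.05, 0.1, 0.1, 0.3, 0.4],
             [0.05, 0.05, 0.1, 0.2, 0.2, 0.4]]

CACHE_SIZE = 2
cache_a = top_k(average_profile(cluster_a), CACHE_SIZE)             # cluster-specific content
shared = top_k(average_profile(cluster_a + cluster_b), CACHE_SIZE)  # unclustered baseline

def hit_prob(profiles, cache):
    # probability that a random request from these users is served from the cache
    return sum(sum(p[f] for f in cache) for p in profiles) / len(profiles)

print(hit_prob(cluster_a, cache_a), hit_prob(cluster_a, shared))
# 0.7 vs 0.5: caching what each group actually requests raises the hit probability,
# which is the mechanism behind the EE gains reported in BIB005
```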
On the other hand, uplink energy conservation has been considered in the context of dense small cells BIB002. In BIB003, an energy efficiency analysis of heterogeneous cache-enabled 5G hyper cellular networks was performed. The control and user plane separation is considered to aid in devising enhanced access schemes and retaining fairness in service. Furthermore, a base station on-off strategy is taken into account to help in cutting down costs spent on redundant small cells BIB003. In that scenario, macro cells would be the masters handling mobility, home subscriber management and user admission, whereas small cells would be the slave part of the radio resource management scheme. With this increasing growth of the network infrastructure, irregularities in traffic behavior must be taken into account along with the actual user distribution for a realistic scenario. Caching has been sought after as a viable solution for reducing the end-to-end latency by storing content at the base stations. A small cell in semi-sleep mode would typically involve the macro base station in its communication with the UE and ensure that the macro cell would always be aware of the UE positioning in the network as well as the cache memory statistics. The macro cell also ensures that the UE would be served by the closest and best possible small cell, and would turn off the remaining ones to concentrate on a specified area for improving the throughput. On the other hand, there would be a predefined search radius, and content would be fetched from a neighbouring base station within that distance; otherwise, the UE would associate to the macro base station for getting access to the needed content. Expressions for the coverage probability of the UE attaining a signal-to-interference ratio (SIR) within the threshold, the throughput, and the power consumption and efficiency have been documented in BIB003.
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Resource Sharing in 5G with Energy-Efficiency Goal <s> In this paper we evaluate the energy efficiency of a 5G radio access network (RAN) based on LTE technology when comparing two small cell deployment strategies to enhance the RAN capacity. Specifically, we compare densifying a 3-sector macrocell RAN with small cells against first upgrading to a 6-sector macrocell RAN before densifying with small cells. The latter strategy has been used in urban areas by 4G network operators. The energy consumption gain (ECG) is used as a figure of merit in this paper. The radio base station power consumption is estimated by using a realistic power consumption model. Our results show that deploying a small cell overlay in a 3-sector macrocell RAN is more energy efficient than deploying a small cell overlay in a 6-sector macrocell RAN even though the latter uses fewer small cells. Further energy savings can be achieved by implementing an adaptive sectorisation technique. An energy saving of 25% is achieved for 6-sectors when progressively decreasing the number of active sectors from 6 to 1 in accordance with the temporal average traffic load. Irrespective, the 3-sector option with or without incorporating the adaptive sectorisation technique is always more energy efficient. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Resource Sharing in 5G with Energy-Efficiency Goal <s> Wireless networks have made huge progress over the past three decades. Nevertheless, emerging fifth-generation (5G) networks are under pressure to continue in this direction at an even more rapid pace, at least for the next ten to 20 years. This pressure is exercised by rigid requirements as well as emerging technology trends that are aimed at introducing improvements to the 5G wireless world. <s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Resource Sharing in 5G with Energy-Efficiency Goal <s> Along with spectral efficiency (SE), energy efficiency (EE) is a key performance metric for the design of 5G and beyond 5G (B5G) wireless networks. At the same time, infrastructure sharing among multiple operators has also emerged as a new trend in wireless communication networks. This paper presents an optimization framework for EE and SE maximization in a network, where radio resources are shared among multiple operators. We define a heterogeneous service level agreement (SLA) framework for a shared network, in which the constraints of different operators are handled by two different multi-objective optimization approaches namely the utility profile and scalarization methods. Pareto-optimal solutions are obtained by merging these approaches with the theory of generalized fractional programming. The approach applies to both noise-limited and interference-limited systems, with single-carrier or multi-carrier transmission. Extensive numerical results illustrate the effect of the operator specific SLA requirements on the global spectral and EE. Three network scenarios are considered in the numerical results, each one corresponding to a different SLA, with different operator-specific EE and SE constraints. <s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Resource Sharing in 5G with Energy-Efficiency Goal <s> Recently, Fog-RANs have been introduced as the evolution of Cloud Radio Access Networks (CRAN) for enabling edge computing in 5G systems. 
By alleviating the fronthaul burden for data transfer, transport delays are expected to be greatly reduced. However, in order to support envisioned 5G real-time and delay-sensitive applications, tailored radio resource and interference management schemes become necessary. Therefore, this paper investigates the issues of user scheduling and beamforming for energy-efficient Fog-RANs. We formulate the energy efficiency maximization problem, taking into account the local user clustering constraint specific to Fog-RANs. Given the difficulty of this non-convex optimization problem, we propose a strategy where the energy-efficient user scheduling is split into two parts: first, we solve an equivalent sum-rate maximization problem; then, the most energy-efficient FogAPs are activated in a greedy manner. To meet the requirement of low computational complexity at FogAPs, local beamforming is performed given fixed user scheduling. Simulation results show that the proposed scheme not only provides similar levels of user rates and fairness, but also largely outperforms the baseline scheme in system energy efficiency. <s> BIB004
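The greedy FogAP activation described in this abstract (and discussed in the prose below) can be illustrated with a toy sketch: starting from all access points active, repeatedly switch off the AP whose removal most improves global EE, and stop once EE no longer rises. The per-AP rates and power draws are invented placeholders; a real system would also re-schedule the users of a switched-off AP.

```python
# Minimal sketch of greedy access-point switch-off for global energy efficiency
# (in the spirit of the Fog-RAN scheme in BIB004). All figures are illustrative.

APS = {          # per-AP served rate (Mbps) and power draw (W), assumed values
    "ap1": (300.0, 100.0),
    "ap2": (120.0, 90.0),
    "ap3": (40.0, 80.0),
    "ap4": (10.0, 85.0),
}

def global_ee(active):
    rate = sum(APS[a][0] for a in active)
    power = sum(APS[a][1] for a in active)
    return rate / power if power > 0 else 0.0

active = set(APS)
while len(active) > 1:
    # candidate: the AP whose removal yields the highest global EE
    best = max(active, key=lambda a: global_ee(active - {a}))
    if global_ee(active - {best}) <= global_ee(active):
        break            # EE stopped rising: keep the current active set
    active.remove(best)  # switch off the redundant AP (its users would be re-scheduled)

print(sorted(active), global_ee(active))
```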
Spectrum and physical resource sharing needs to be considered for accomplishing the energy efficiency goal of 5G. However, the need for service quality retention with respect to throughput and packet drops must also be addressed. Thoughts on infrastructure sharing have been gaining traction owing to several factors, for example, the difficulty of space acquisition for site deployment, or the desire to utilize the available resources at their full potential and refrain from any new deployment. This section puts together the studies on improving energy efficiency through mutual sharing of infrastructure. Operators would have the flexibility of resource sharing at either a full or a partial level, naturally emphasizing improved security for their equipment. Additionally, the cost of commissioning every site would lead to a higher expenditure and would reduce the expected revenues. Projects such as EARTH and GREEN TOUCH detail this avenue and bring forth an expectation of a 1000-fold decrease in energy consumption BIB002. For this level of sophisticated resource sharing, complete knowledge about the functionality and capacity of the network entities needs to be available, which may not be possible in practice. However, the avenue of spectrum sharing still welcomes more discussion and aims to be a potential pathway towards solutions to the resource scarcity problem. Details of system-level simulations comparing energy consumption with shared infrastructure at different load levels have been documented in BIB002, where a gain of up to 55% in energy efficiency in dense areas has been demonstrated. Other significant advantages of resource sharing would include less interference through a planned cell deployment in accordance with the user demands per area. These efforts aim to eliminate the problems of either over-provisioning or under-utilization of the deployed network entities. Authors in BIB004 have discussed the application of an improved resource allocation in a fog RAN. The suggested idea relies upon the fact that a centralized baseband processing unit, while increasing the processing power of the system, remains at risk of getting outdated measurements from the radio heads because of larger transport delays. The suggested algorithm starts by switching off the redundant access points for conserving energy and then modifying the beam weights to provide the end user with an optimum signal-to-leakage-and-noise ratio. User association is made centrally, and the information is then passed on to the fog access points after users have been scheduled. Following this phase, the proposed greedy algorithm tracks the global as well as the local energy efficiency readings and switches off the access points not needed, until the rising trend of global energy efficiency ceases. Simulations have been carried out using a layout of macro and pico cells, showing about a three-fold increase with the reported Channel State Information (CSI). Furthermore, authors in BIB001 have demonstrated the EE gains of a dynamic six-sector BS, capable of operating with anywhere between one and all sectors active, to be up to 75% as compared to an always-on approach. In BIB003, a case study of infrastructure sharing between different operators has been presented as well. The service level agreement between the participating operators is defined and handled by multi-objective optimization methods.
In such a shared environment, QoS should go hand in hand with fair resource utilization. The authors specifically consider obeying operator-specific energy and spectral efficiency criteria alongside global spectral and energy efficiency maximization. The most prominent outcomes of this research are the maximization of global energy and spectral efficiency in a shared noise-limited environment, the applicability of the framework to a network shared by any number of operators, each serving a different number of users, and the optimal fulfillment of utility targets. A detailed mathematical analysis is presented for system modelling under noise and interference constraints. The SINR equations, originally used as a starting point, are gradually modified by incorporating weighting factors that influence the priorities. The resulting model runs with polynomial complexity and maximizes the given objective function; maximum and minimum bounds are also provided. The authors illustrate the application of these mathematical tools with the case of a base station installed in a crowded place, such as an airport or shopping mall, where the site owner is a neutral party and the frequency resources are either pooled or one of the operators grants part of its portion to the others. Three scenarios are considered: first, two operators without any global constraints, handled with the multi-objective problem set of the noise-limited scenario; second, the site owner restricts the interference level or the global energy efficiency for both operators, while each of them targets a minimum QoS constraint; third, three operators under the same conditions as in the first case. The work lays the foundation for establishing the criterion for the energy-spectral trade-off in single- and multi-carrier scenarios.
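As a toy illustration of how such operator-specific constraints can be scalarized, the sketch below grid-searches two operators' transmit powers for the weighted-sum optimum of their energy efficiencies under per-operator minimum-rate floors in a noise-limited setting. The rate model, parameter values and brute-force search are our own simplifications, not the framework of BIB003 .

    # Toy scalarized EE trade-off for two operators sharing a site
    # (noise-limited, brute-force grid search; all values illustrative).
    import numpy as np

    B, N0 = 10e6, 1e-13           # bandwidth [Hz], noise power [W]
    g = np.array([1e-7, 3e-8])    # channel gains of operators 1 and 2
    p_circ = 5.0                  # circuit power per operator [W]
    r_min = np.array([5e6, 2e6])  # per-operator QoS floors [bit/s]
    w = np.array([0.5, 0.5])      # scalarization weights

    grid = np.linspace(0.01, 20.0, 200)   # candidate transmit powers [W]
    best, best_val = None, -np.inf
    for p1 in grid:
        for p2 in grid:
            r = B * np.log2(1 + np.array([p1, p2]) * g / N0)  # noise-limited rates
            if np.any(r < r_min):
                continue                                      # violates an SLA floor
            ee = r / (np.array([p1, p2]) + p_circ)            # per-operator bit/J
            val = w @ ee                                      # scalarized objective
            if val > best_val:
                best_val, best = val, (p1, p2)
    print(f"best weighted EE {best_val:.3e} bit/J at p1={best[0]:.2f} W, p2={best[1]:.2f} W")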
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Resource Allocation in NOMA <s> As a promising downlink multiple access scheme for future radio access (FRA), this paper discusses the concept and practical considerations of non-orthogonal multiple access (NOMA) with a successive interference canceller (SIC) at the receiver side. The goal is to clarify the benefits of NOMA over orthogonal multiple access (OMA) such as OFDMA adopted by Long-Term Evolution (LTE). Practical considerations of NOMA, such as multi-user power allocation, signalling overhead, SIC error propagation, performance in high mobility scenarios, and combination with multiple input multiple output (MIMO) are discussed. Using computer simulations, we provide system-level performance of NOMA taking into account practical aspects of the cellular system and some of the key parameters and functionalities of the LTE radio interface such as adaptive modulation and coding (AMC) and frequency-domain scheduling. We show under multiple configurations that the system-level performance achieved by NOMA is higher by more than 30% compared to OMA. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Resource Allocation in NOMA <s> This paper focuses on resource allocation in energy-cooperation enabled two-tier heterogeneous networks (HetNets) with non-orthogonal multiple access (NOMA), where base stations (BSs) are powered by both renewable energy sources and the conventional grid. Each BS can serve multiple users at the same time and frequency band. To deal with the fluctuation of renewable energy harvesting, we consider that renewable energy can be shared between BSs via the smart grid. In such networks, user association and power control need to be re-designed, since existing approaches are based on OMA. Therefore, we formulate a problem to find the optimum user association and power control schemes for maximizing the energy efficiency of the overall network, under quality-of-service constraints. To deal with this problem, we first propose a distributed algorithm to provide the optimal user association solution for the fixed transmit power. Furthermore, a joint user association and power control optimization algorithm is developed to determine the traffic load in energy-cooperation enabled NOMA HetNets, which achieves much higher energy efficiency performance than existing schemes. Our simulation results demonstrate the effectiveness of the proposed algorithm, and show that NOMA can achieve higher energy efficiency performance than OMA in the considered networks. <s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Resource Allocation in NOMA <s> Non-orthogonal multiple access (NOMA) has been recently considered as a promising multiple access technique for fifth generation (5G) mobile networks as an enabling technology to meet the demands of low latency, high reliability, massive connectivity, and high throughput. The two dominants types of NOMA are: power-domain and code-domain. The key feature of power-domain NOMA is to allow different users to share the same time, frequency, and code, but with different power levels. In code-domain NOMA, different spread-spectrum codes are assigned to different users and are then multiplexed over the same time-frequency resources. This paper concentrates on power-domain NOMA. 
In power-domain NOMA, Successive Interference Cancellation (SIC) is employed at the receiver. In this paper, the optimum received uplink power levels using a SIC detector is determined analytically for any number of transmitters. The optimum uplink received power levels using the SIC decoder in NOMA strongly resembles the μ-law encoding used in pulse code modulation (PCM) speech companders. <s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Resource Allocation in NOMA <s> NOMA has been recognized as a highly promising FRA technology to satisfy the requirements of the fifth generation era on high spectral efficiency and massive connectivity. Since the EE has become a growing concern in FRA from both the industrial and societal perspectives, this article discusses the sustainability issues of NOMA. We first thoroughly examine the theoretical power regions of NOMA to show the minimum transmission power with fixed data rate requirement, demonstrating the EE performance advantage of NOMA over orthogonal multiple access. Then we explore the role of energy-aware resource allocation and grant-free transmission in further enhancing the EE performance of NOMA. Based on this exploration, a hybrid NOMA strategy that reaps the joint benefits of resource allocation and grantfree transmission is investigated to simultaneously accomplish high throughput, large connectivity, and low energy cost. Finally, we identify some important and interesting future directions for NOMA designers to follow in the next decade. <s> BIB004 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Resource Allocation in NOMA <s> By analytically showing that index coding (IC) is more power efficient than superposition coding (SC) when appropriate caching contents are available for a pair of users, we propose a sub-optimal joint user clustering and power allocation scheme for a single-cell downlink non-orthogonal multiple access network with caching memory at the receivers that alternate between IC and SC. Simulation studies demonstrate that the proposed scheme significantly reduces the transmission power when compared with the benchmark scheme that only allows SC. <s> BIB005
In 5G, attempts have been made to explore the area of non-orthogonal multiple access (NOMA), employing power control to save resources in both the time and frequency domains. This concept is highlighted in Figure 5. Operators would benefit from this technique by serving the maximum number of users within the same frequency band, thus improving spectral efficiency BIB002 . This research area has been active for a while now, motivated by the goals of increasing network capacity and improving data rates. Intelligent coordination among the base stations must be in place for maximum utilization of the available overall network energy, since harvested green energy is mostly volatile and a constant input source cannot be guaranteed. For this reason, a detailed mathematical model has been presented for the power control of the UEs being served, minimizing interference as much as possible. A comparison was drawn between genetic-algorithm-based user association and a fixed transmit power, and NOMA-based techniques were demonstrated to outperform the conventional techniques in EE improvement for a larger number of nodes. The application was extended to a two-tier RAN with a macro base station covering a region of several pico base stations, powered by both green and conventional energy sources. The proposed mathematical model uses the ratio of the network's data rate to its entire energy consumption as the network utility. The incorporation of improved user association techniques was suggested in BIB001 for improving user throughput and error containment in NOMA. In BIB003 , the authors presented the mathematical feasibility of successive interference cancellation at the receiver side: the receiver decodes the strongest signal while treating the others as noise, subtracts it, and iterates until all signals are decoded. With an increase in the number of transmitters at a fixed SINR, a linear relationship has been observed; on the other hand, this formulation might reach a saturation point for the explosive number of IoT devices. The authors in , have taken an interesting approach for a fair comparison of NOMA and a relay-aided multiple access (RAMA) technique, and a simulation was carried out for maximization of the sum rate. It was established via mathematical formulation that the sum rate is an increasing function of the user's transmission power, and for the cases of a high data rate demand from the farthest user, NOMA proved to maximize the sum rate. The distance between the users is a key figure: with increased separation between them, NOMA provides the maximum rates, whereas for smaller separations a relay-based setup provides a good enough sum rate. Authors in BIB004 have endorsed the advantages of non-orthogonal multiple access (NOMA) for future radio access networks. Apart from the fact that the technique aids in achieving better spectral efficiency, the authors analyzed the feasibility of acquiring better energy efficiency out of it as well. Considering the example of one base station serving two users, relationships between SE and EE have been observed, which reflect that NOMA can potentially regulate the energy within the network by allocating more bandwidth to a cell-center user in the uplink and more power to the cell-edge user in the downlink.
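The two-user downlink intuition can be checked with a small numerical sketch: the cell-edge user receives most of the power and is decoded first, while the cell-centre user cancels that signal via SIC before decoding its own. All gains, powers and the equal-power OMA baseline are illustrative assumptions, not values from the cited papers.

    # Two-user downlink power-domain NOMA vs. an equal-split OMA baseline
    # (all gains and powers are illustrative toy numbers).
    import math

    B, N0 = 1.0, 1e-9             # normalised bandwidth, noise power
    g_near, g_far = 1e-6, 1e-8    # channel gains (cell centre >> cell edge)
    P, alpha = 1.0, 0.8           # total power, fraction given to the far user

    # NOMA: the far user decodes its signal treating the near user's as noise;
    # the near user removes the far user's signal via SIC, then decodes cleanly.
    r_far  = B * math.log2(1 + alpha * P * g_far / ((1 - alpha) * P * g_far + N0))
    r_near = B * math.log2(1 + (1 - alpha) * P * g_near / N0)

    # OMA baseline: orthogonal half-band slices, same per-user transmit powers.
    r_far_oma  = (B / 2) * math.log2(1 + alpha * P * g_far / (N0 / 2))
    r_near_oma = (B / 2) * math.log2(1 + (1 - alpha) * P * g_near / (N0 / 2))

    print(f"NOMA sum rate {r_far + r_near:.2f} vs OMA {r_far_oma + r_near_oma:.2f}")

With these toy numbers the NOMA sum rate comes out around 9.5 bit/s/Hz against roughly 6.4 bit/s/Hz for the OMA split, qualitatively in line with the NOMA gains reported in BIB001 .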
Considering the potential of NOMA, the problem was tackled with respect to its deployment scenario for maximum exploitation. For a single-cell deployment, mapping EE against resource allocation is an NP-hard problem, because each user competes for the same radio resource; however, user scheduling and multiple access methods help improve this situation. For network-level NOMA, a joint transmission technique could be beneficial for organizing the traffic load on the radio links, and when it comes to energy harvesting, users must be scheduled so that those with critical needs stay prioritized. Lastly, grant-free transmission has been studied for saving signaling overhead: as soon as a user acquires data in its buffer, it starts the uplink transmission, and the received data are identified on the basis of the user's unique multiple-access signature. The multiple-access signature is deemed the basis of this proposal, but the signature pool must be carefully devised with an optimal trade-off between pool size and mutual correlation, which greatly helps with collision avoidance and detection. Users remain inactive to cut down on grant signaling, and hence more energy is typically conserved. The proposed hybrid technique transitions between grant-free and scheduled NOMA based on the current traffic load, which lowers the collision probability and improves latency. In contrast with the above works that have discussed the use cases of caching in orthogonal multiple access (OMA), authors in BIB005 explored index coding instead of superposition coding, while adopting a sub-optimal user clustering technique for significant reductions in the transmitted power when using NOMA. Owing to the enormous number of users, optimal user clustering was discouraged, and user association based upon differences in link gain and cached data was suggested instead. The iterative power allocation algorithm was demonstrated to converge after several iterations.
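To illustrate the signature-pool trade-off behind the hybrid scheme, the following sketch estimates the probability that an active user collides on a randomly chosen multiple-access signature and falls back to scheduled NOMA once that probability exceeds a threshold. The uniform signature choice and the 10% threshold are our assumptions, not the design of BIB004 .

    # Rough collision model for grant-free NOMA: n active users each pick one
    # of s signatures uniformly at random; a user collides when at least one
    # other user picks the same signature.
    def collision_prob(n_users, n_signatures):
        return 1.0 - (1.0 - 1.0 / n_signatures) ** (n_users - 1)

    def choose_mode(n_users, n_signatures, max_collision=0.1):
        """Hybrid rule: stay grant-free while collisions remain rare."""
        ok = collision_prob(n_users, n_signatures) <= max_collision
        return "grant-free" if ok else "scheduled"

    for n in (4, 16, 64):
        print(n, round(collision_prob(n, 64), 3), choose_mode(n, 64))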
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Virtualization in 5G <s> This paper focuses on energy efficiency aspects and related benefits of radio-access-network-as-a-service (RANaaS) implementation (using commodity hardware) as architectural evolution of LTE-advanced networks toward 5G infrastructure. RANaaS is a novel concept introduced recently, which enables the partial centralization of RAN functionalities depending on the actual needs as well as on network characteristics. In the view of future definition of 5G systems, this cloud-based design is an important solution in terms of efficient usage of network resources. The aim of this paper is to give a vision of the advantages of the RANaaS, to present its benefits in terms of energy efficiency and to propose a consistent system-level power model as a reference for assessing innovative functionalities toward 5G systems. The incremental benefits through the years are also discussed in perspective, by considering technological evolution of IT platforms and the increasing matching between their capabilities and the need for progressive virtualization of RAN functionalities. The description is complemented by an exemplary evaluation in terms of energy efficiency, analyzing the achievable gains associated with the RANaaS paradigm. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Virtualization in 5G <s> A number of merits could be brought by network function virtualization (NFV) such as scalability, on demand allocation of resources, and the efficient utilization of network resources. In this paper, we introduce a framework for designing an energy efficient architecture for 5G mobile network function virtualization. In the proposed architecture, the main functionalities of the mobile core network which include the packet gateway (P-GW), serving gateway (S-GW), mobility management entity (MME), policy control and charging role function, and the home subscriber server (HSS) functions are virtualized and provisioned on demand. We also virtualize the functions of the base band unit (BBU) of the evolved node B (eNB) and offload them from the mobile radio side. We leverage the capabilities of gigabit passive optical networks (GPON) as the radio access technology to connect the remote radio head (RRH) to new virtualized BBUs. We consider the IP/WDM backbone network and the GPON based access network as the hosts of virtual machines (VMs) where network functions will be implemented. Two cases were investigated; in the first case, we considered virtualization in the IP/WDM network only (since the core network is typically the location that supports virtualization) and in the second case we considered virtualization in both the IP/WDM and GPON access network. Our results indicate that we can achieve energy savings of 22% on average with virtualization in both the IP/WDM network and GPON access network compared to the case where virtualization is only done in the IP/WDM network. <s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Virtualization in 5G <s> 5G wireless technology is paving the way to revolutionize future ubiquitous and pervasive networking, wireless applications, and user quality of experience. 
To realize its potential, 5G must provide considerably higher network capacity, enable massive device connectivity with reduced latency and cost, and achieve considerable energy savings compared to existing wireless technologies. The main objective of this article is to explore the potential of NFV in enhancing 5G radio access networks' functional, architectural, and commercial viability, including increased automation, operational agility, and reduced capital expenditure. The ETSI NFV Industry Specification Group has recently published drafts focused on standardization and implementation of NFV. Harnessing the potential of 5G and network functions virtualization, we discuss how NFV can address critical 5G design challenges through service abstraction and virtualized computing, storage, and network resources. We describe NFV implementation with network overlay and SDN technologies. In our discussion, we cover the first steps in understanding the role of NFV in implementing CoMP, D2D communication, and ultra densified networks. <s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Virtualization in 5G <s> Several critical benefits are encompassed by the concept of NFV when this concept is brought under the roof of 5G such as scalability, high level of flexibility, efficient utilisation of network resources, cost and power reduction, and on demand allocation of network resources. NFV could reduce the cost for installing and maintaining network equipment through consolidating the hardware resources. By deploying NFV, network resources could be shared between different users and several network functions in a facile and flexible way. Beside this the network resources could be rescaled and allocated to each function in the network. As a result, the NFV can be customised according the precise demands, so that all the network components and users could be handled and accommodated efficiently. In this paper we extend the virtualization framework that was introduced in our previous work to include a large range of virtual machine workloads with the presence of mobile core network virtual machine intra communication. In addition, we investigate a wide range of traffic reduction factors which are caused by base band virtual machines (BBUVM) and their effect on the power consumption. We used two general scenarios to group our finding, the first one is virtualization in both IP over WDM (core network) and GPON (access network) while the second one is only in IP over WDM network (core network). We illustrate that the virtualization in IP over WDM and GPON can achieve power saving around (16.5% – 19.5%) for all cases compared to the case where no NFV is deployed, while the virtualization in IP over WDM records around (13.5% – 16.5%). <s> BIB004 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Virtualization in 5G <s> Network Function Virtualization (NFV) enables mobile operators to virtualize their network entities as Virtualized Network Functions (VNFs), offering fine-grained on-demand network capabilities. VNFs can be dynamically scale-in/out to meet the performance desire and other dynamic behaviors. However, designing the auto-scaling algorithm for desired characteristics with low operation cost and low latency, while considering the existing capacity of legacy network equipment, is not a trivial task. 
In this paper, we propose a VNF Dynamic Auto Scaling Algorithm (DASA) considering the tradeoff between performance and operation cost. We develop an analytical model to quantify the tradeoff and validate the analysis through extensive simulations. The results show that the DASA can significantly reduce operation cost given the latency upper-bound. Moreover, the models provide a quick way to evaluate the cost- performance tradeoff and system design without wide deployment, which can save cost and time. <s> BIB005 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Virtualization in 5G <s> In cloud computing paradigm, virtual resource autoscaling approaches have been intensively studied recent years. Those approaches dynamically scale in/out virtual resources to adjust system performance for saving operation cost. However, designing the autoscaling algorithm for desired performance with limited budget, while considering the existing capacity of legacy network equipment, is not a trivial task. In this paper, we propose a Deadline and Budget Constrained Autoscaling (DBCA) algorithm for addressing the budget-performance tradeoff. We develop an analytical model to quantify the tradeoff and cross-validate the model by extensive simulations. The results show that the DBCA can significantly improve system performance given the budget upper-bound. In addition, the model provides a quick way to evaluate the budget-performance tradeoff and system design without wide deployment, saving on cost and time. <s> BIB006 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Virtualization in 5G <s> We propose and compare different potential placement schemes for baseband functions and mobile edge computing on their energy efficiency. Simulation results show that NFV enabled flexible placement reduces more than 20% power than traditional solutions. <s> BIB007 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Energy Efficient Virtualization in 5G <s> In this paper, network function virtualization (NVF) is identified as a promising key technology that can contribute to energy-efficiency improvement in 5G networks. An optical network supported architecture is proposed and investigated in this work to provide the wired infrastructure needed in 5G networks and to support NFV towards an energy efficient 5G network. In this architecture the mobile core network functions as well as baseband function are virtualized and provided as VMs. The impact of the total number of active users in the network, backhaul/fronthaul configurations and VM inter-traffic are investigated. A mixed integer linear programming (MILP) optimization model is developed with the objective of minimizing the total power consumption by optimizing the VMs location and VMs servers’ utilization. The MILP model results show that virtualization can result in up to 38% (average 34%) energy saving. The results also reveal how the total number of active users affects the baseband virtual machines (BBUVMs) optimal distribution whilst the core network virtual machines (CNVMs) distribution is affected mainly by the inter-traffic between the VMs. For real-time implementation, two heuristics are developed, an Energy Efficient NFV without CNVMs inter-traffic (EENFVnoITr) heuristic and an Energy Efficient NFV with CNVMs inter-traffic (EENFVwithITr) heuristic, both produce comparable results to the optimal MILP results. 
Finally, a Genetic algorithm is developed for further verification of the results. <s> BIB008
Virtualization has been a much sought-after way of reducing the time to market for newer mobile technologies, but with the emerging technological trends it may also be a very useful way forward for reducing energy consumption. In this case, hardware serves as bare metal for running multiple applications simultaneously, saving the cost of additional deployments of dedicated hardware and software components BIB008 . Most of the functions previously deployed on dedicated hardware would now roll out as software-defined network functions, promising scalability, performance maximization and mobility within the cellular network. The virtual network architecture described in BIB003 lays out the interconnection between several virtual as well as physical units combined into a larger system. A generalized 5G architecture incorporating virtualization is illustrated in Figure 6. The smooth integration of different technologies with the virtualized environment thus becomes the key to reaping the expected efficiency outcomes. Resource and operations management plays a vital role in actively regulating the system towards a finely tuned state of execution, helping mitigate issues such as redundancy and keeping operating expenses under control. Furthermore, the usage of an OpenFlow switch comes in handy for efficient packet traversal within the network. A significant advantage in terms of reduced energy consumption, of about 30%, has been reported by incorporating Network Function Virtualization (NFV) into the current architecture. The authors assumed the ideal-case scenario that a virtual BBU does not consume any energy while idle, and also exploited the enormous computational pool available in the cloud. Authors in BIB002 presented the significant energy conservation advantages of having virtual nodes in both the access and the core network, instead of physical nodes each executing only a single function. The proposed topology suggests baseband pooling for higher performance in the cloud, a direct gigabit optical connection from the remote radio heads to the core network, and an even distribution of the core network nodes. The nearest available core network node is then responsible for serving the incoming requests from the respective radio heads. The proposed architecture boasts the flexibility of resource distribution by having a single node running multiple virtualized access/core network functions (e.g., serving gateway, packet gateway, etc.) and the readiness to activate these functions wherever needed based on the workload. A visible gain of about 22% was recorded when mixed integer linear programming was used for modelling the workload across the nodes and both the core and access networks were virtualized. Apart from the EE gains, higher performance is also achieved because of the reduced distance between the node requesting and the node serving the request. Research in BIB004 extends the same idea, where the EE gains are deemed higher with an increased number of virtual function deployments in the access network, which typically consumes more energy, about 70% of the entire demand of the end-to-end network. The suggested topology entails gigabit optical connectivity as the fronthaul technology instead of a Common Public Radio Interface (CPRI) connection between radio and baseband units.
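The energy argument for consolidation can be illustrated with a simple packing exercise: first-fit-decreasing placement of virtualized function loads onto generic servers, compared against one dedicated box per function. This is a crude stand-in for the MILP placement of BIB008 ; the linear server power model and all load values are invented for illustration.

    # First-fit-decreasing consolidation of virtualised network functions onto
    # generic servers vs. one dedicated box per function; a crude stand-in for
    # the MILP placement of BIB008 (power figures are illustrative).
    def consolidate(loads, capacity=1.0):
        bins = []
        for load in sorted(loads, reverse=True):
            for b in bins:
                if sum(b) + load <= capacity:
                    b.append(load)   # fits on an already-open server
                    break
            else:
                bins.append([load])  # open a new server
        return bins

    P_IDLE, P_PEAK = 100.0, 250.0    # assumed per-server power model [W]
    def server_power(util):
        return P_IDLE + (P_PEAK - P_IDLE) * util

    vnf_loads = [0.35, 0.2, 0.5, 0.15, 0.3, 0.25]        # normalised CPU demands
    dedicated = sum(server_power(u) for u in vnf_loads)  # one box per function
    consolidated = sum(server_power(sum(b)) for b in consolidate(vnf_loads))
    print(f"dedicated {dedicated:.0f} W vs consolidated {consolidated:.0f} W")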
This brings out more deployment opportunities for the virtual machines by having more active nodes closer to the user, and the authors documented a gain of about 19% with the proposed architecture. According to the authors in BIB007 , the existing RAN architecture needs modification to meet the upcoming traffic demands. The baseband unit is decomposed into two main parts, namely a distributed unit and a central unit. Both units find their optimal placements either close to the users, for serving low-latency demands, or in remote areas, for providing a pool of computational power. Mobile edge computing uses the same concept, and NFV proves to be an enabling technology to use it to its full potential. The network layout comprises active antenna units and the central office for edge and access computation. Mobile edge computing units were housed along with the distributed and central units and acted as the aggregator for the traffic. The latter two functions were virtualized on general-purpose processors, and finally an electronic switch was responsible for the traffic routing. Simulations conducted on this topology revealed about 20% power saving as compared to the case of fixed deployment of hardware units. Moreover, Reference BIB001 also supports the idea of flexible centralization of the RAN functions of small cells; prominent outcomes comprise interference mitigation in dense deployments and reduced radio access processing. Authors in BIB005 devised an analytical model for calculating the optimal number of active operator resources. The Dynamic Auto Scaling Algorithm (DASA) was envisioned to provide a way for operators to better understand their cost-versus-performance trade-off, and the authors used real-life data from Facebook's data center for a realistic estimation. On top of the already established legacy infrastructure, comprising mainly the mobility management entity, serving gateway, packet gateway and the policy and charging function, 3GPP has now proposed specifications for a virtualized packet core providing on-demand computational resources to cater to the massive incoming user requests. A comparison was drawn between the consumed power and the response time of the servers for the jobs in a queue by varying different factors, including the total number of virtual network function (VNF) instances, the total number of servers available, the rate of the incoming jobs, the total system capacity and the virtual machine (VM) setup times. Trends recorded from the plots signify the saturation point of the system and pave a way for operators to optimize their infrastructure to be robust without consuming more power than needed. Similarly, BIB006 extends the above-mentioned approach by taking into account the rejection of incoming requests in case the saturation point has been reached. A more realistic framework was presented that incorporates either dropping jobs from the queue or even blocking them from being registered until some resources can be freed up.
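The flavour of the cost-versus-latency trade-off quantified by DASA can be reproduced with a textbook M/M/c queue: keep adding VNF instances until the mean response time falls below a bound. The Erlang-C formula is standard, but the arrival rate, service rate and latency bound are made-up numbers, and the real model in BIB005 additionally captures VM setup times and legacy capacity.

    # Smallest number of VNF instances keeping mean response time under a
    # bound, using a textbook M/M/c queue as a stand-in for the DASA model
    # of BIB005 (arrival/service rates and the bound are made-up numbers).
    import math

    def erlang_c(c, a):
        """Probability of queueing with c servers and offered load a = lam/mu."""
        s = sum(a**k / math.factorial(k) for k in range(c))
        top = a**c / math.factorial(c) * (c / (c - a))
        return top / (s + top)

    def mean_response(lam, mu, c):
        a = lam / mu
        if a >= c:
            return float("inf")          # unstable: need more instances
        return erlang_c(c, a) / (c * mu - lam) + 1.0 / mu

    lam, mu, t_max = 90.0, 10.0, 0.15    # req/s, per-instance rate, bound [s]
    c = 1
    while mean_response(lam, mu, c) > t_max:
        c += 1
    print(f"instances needed: {c}, mean response {mean_response(lam, mu, c):.3f} s")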
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Machine Learning Techniques for Energy-Efficiency in 5G <s> A hybrid network architecture has been proposed for machine-to-machine M2M communications in the fifth generation wireless systems, where M2M gateways connect the capillary networks and cellular networks. In this paper, we develop novel energy efficient and end-to-end delay duty cycle control scheme for controllers at the gateway and the capillary networks coordinator. We first formulate a duty cycle control problem with joint-optimisation of energy consumption and end-to-end delay. Then, a distributed duty cycle control scheme is proposed. The proposed scheme consists of two parts i a transmission policy, which decides the optimal number of packets to be transmitted between M2M devices, coordinators and gateways; and ii a duty cycle control for IEEE 802.15.4. We analytically derived the optimal duty cycle control and developed algorithms to compute the optimal duty cycle. It is to increase the feasibility of implementing the control on computation-limited devices where a suboptimal low complexity rollout algorithm-based duty cycle control RADutyCon is proposed. The simulation results show that RADutyCon achieves an exponential reduction of the computation complexity as compared with that of the optimal duty cycle control. The simulation results show that RADutyCon performs close to the optimal control, and it performs no worse than the heuristic base control. Copyright © 2014 John Wiley & Sons, Ltd. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Machine Learning Techniques for Energy-Efficiency in 5G <s> The explosive growth of mobile multimedia services has caused tremendous network traffic in wireless networks and a great part of the multimedia services are delay-sensitive. Therefore, it is important to design efficient radio resource allocation algorithms to increase network capacity and guarantee the delay QoS. In this paper, we study the power control problem in the downlink of two-tier femtocell networks with the consideration of the delay QoS provisioning. Specifically, we introduce the effective capacity (EC) as the network performance measure instead of the Shannon capacity to provide the statistical delay QoS provisioning. Then, the optimization problem is modeled as a non- cooperative game and the existence of Nash Equilibriums (NE) is investigated. However, in order to enhance the selforganization capacity of femtocells, based on non-cooperative game, we employ a Q-learning framework in which all of the femtocell base stations (FBSs) are considered as agents to achieve power allocation. Then a distributed Q- learning-based power control algorithm is proposed to make femtocell users (FUs) gain maximum EC. Numerical results show that the proposed algorithm can not only maintain the delay requirements of the delay-sensitive services, but also has a good convergence performance. <s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Machine Learning Techniques for Energy-Efficiency in 5G <s> We study the energy efficiency issue in 5G communications scenarios, where cognitive femtocells coexist with picocells operating at the same frequency bands. Optimal energy-efficient power allocation based on the sensing-based spectrum sharing (SBSS) is proposed for the uplink cognitive femto users operating in a multiuser MIMO mode. 
Both hard-decision and soft-decision schemes are considered for the SBSS. Different from the existing energy-efficient designs in multiuser scenarios, which consider system-wise energy efficiency, we consider user-wise energy efficiency and optimize them in a Pareto sense. To resolve the nonconvexity of the formulated optimization problem, we include an additional power constraint to convexify the problem without losing global optimality. Simulation results show that the proposed schemes significantly enhance the energy efficiency of the cognitive femto users compared with the existing spectral-efficient designs. <s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Machine Learning Techniques for Energy-Efficiency in 5G <s> Next-generation wireless networks are expected to support extremely high data rates and radically new applications, which require a new wireless radio technology paradigm. The challenge is that of assisting the radio in intelligent adaptive learning and decision making, so that the diverse requirements of next-generation wireless networks can be satisfied. Machine learning is one of the most promising artificial intelligence tools, conceived to support smart radio terminals. Future smart 5G mobile terminals are expected to autonomously access the most meritorious spectral bands with the aid of sophisticated spectral efficiency learning and inference, in order to control the transmission power, while relying on energy efficiency learning/inference and simultaneously adjusting the transmission protocols with the aid of quality of service learning/inference. Hence we briefly review the rudimentary concepts of machine learning and propose their employment in the compelling applications of 5G networks, including cognitive radios, massive MIMOs, femto/small cells, heterogeneous networks, smart grid, energy harvesting, device-todevice communications, and so on. Our goal is to assist the readers in refining the motivation, problem formulation, and methodology of powerful machine learning algorithms in the context of future networks in order to tap into hitherto unexplored applications and services. <s> BIB004 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Machine Learning Techniques for Energy-Efficiency in 5G <s> The massive deployment of small cells (SCs) represents one of the most promising solutions adopted by 5G cellular networks to meet the foreseen huge traffic demand. The high number of network elements entails a significant increase in the energy consumption. The usage of renewable energies for powering the small cells can help reduce the environmental impact of mobile networks in terms of energy consumption and also save on electric bills. In this paper, we consider a two-tier cellular network architecture where SCs can offload macro base stations and solely rely on energy harvesting and storage. In order to deal with the erratic nature of the energy arrival process, we exploit an ON/OFF switching algorithm, based on reinforcement learning, that autonomously learns energy income and traffic demand patterns. The algorithm is based on distributed multi-agent Q-learning for jointly optimizing the system performance and the self-sustainability of the SCs. We analyze the algorithm by assessing its convergence time, characterizing the obtained ON/OFF policies, and evaluating an offline trained variant. 
Simulation results demonstrate that our solution is able to increase the energy efficiency of the system with respect to simpler approaches. Moreover, the proposed method provides an harvested energy surplus, which can be used by mobile operators to offer ancillary services to the smart electricity grid. <s> BIB005 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Machine Learning Techniques for Energy-Efficiency in 5G <s> Heterogeneous cloud radio access networks (H-CRAN) is a new trend of SC that aims to leverage the heterogeneous and cloud radio access networks advantages. Low power remote radio heads (RRHs) are exploited to provide high data rates for users with high quality of service requirements (QoS), while high power macro base stations (BSs) are deployed for coverage maintenance and low QoS users support. However, the inter-tier interference between the macro BS and RRHs and energy efficiency are critical challenges that accompany resource allocation in H-CRAN. Therefore, we propose a centralized resource allocation scheme using online learning, which guarantees interference mitigation and maximizes energy efficiency while maintaining QoS requirements for all users. To foster the performance of such scheme with a model-free learning, we consider users' priority in resource blocks (RBs) allocation and compact state representation based learning methodology to enhance the learning process. Simulation results confirm that the proposed resource allocation solution can mitigate interference, increase energy and spectral efficiencies significantly, and maintain users' QoS requirements. <s> BIB006
Recently, machine learning techniques have been employed in various areas of wireless networks, including approaches to enhance the energy efficiency of the wireless network BIB004 . A typical example would be a smart transmission point, such as the one shown in Figure 8, that evolves over time through its observations. In BIB005 , the authors proposed switch-on/off policies for energy harvesting small cells through distributed Q-learning. A two-tier network architecture was presented for the discussion of on-off switching schemes based upon reinforcement learning. It is assumed that the small cells can offload their associated macrocell while themselves relying upon harvested energy, for example, solar energy. The application of Q-learning enables them to learn about the incoming traffic requests over time, so they can tweak their operation to an optimal level. The proposed scenario includes a macro cell running on electricity and small cells running on solar energy, with a distributed Q-learning technique used to gain knowledge about the current radio resource policies. The reward function of the online Q-learning encourages turning small cells off when users experience higher drop rates, or using the cells that are already on to take the burden off the macro cell. On the other hand, authors in BIB001 devised a novel EE and E2E delay duty cycle control scheme for controllers at the gateway of cellular and capillary networks. The formulation of a duty cycle control problem with joint optimization of energy consumption and E2E delay was addressed, followed by a distributed duty cycle control scheme. In BIB002 , the authors presented distributed power control for two-tier femtocell networks with QoS provisioning based on Q-learning. Power control in the downlink of the two-tier femtocell network was discussed, and an effective capacity measure was introduced to incorporate the statistical delay QoS. Self-organization of small cells was also discussed from the perspective of Q-learning and the utilization of non-cooperative game theory BIB002 . The proposed system model involves a macro base station covering several femtocells in its vicinity, each of them serving its own set of users; expressions for the SINR of both macro and femtocell users were also documented BIB002 . For the consumers' energy efficiency, Pareto optimization was opted for, focusing on user-wise energy efficiency instead of the system-level energy efficiency considered in traditional multi-user designs. Meanwhile, in BIB003 , the deployment of macro and pico base stations was similar to the above scenario; however, the random deployment of femto BSs by consumers causes interference problems, and cognitive radio technology was combined with these femto BSs for improved spectrum access. Spectrum sensing techniques provide benefits for UL transmission, since the femto cells are power limited as compared to the macro cells. A detailed mathematical analysis of spectrum sensing techniques using both hard and soft decisions was demonstrated in BIB003 . The authors formulated the objective functions in such a way that, although they compute the optimal power allocation for the users, the whole scheme incorporates constraints for energy efficiency maximization. In BIB006 , the authors also use machine learning techniques for energy-efficient resource allocation in a 5G heterogeneous cloud radio access network.
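A skeleton of the per-cell learning loop discussed above might look as follows: each small cell keeps a Q-table over states such as battery level and traffic demand, chooses ON/OFF actions with an epsilon-greedy policy, and is rewarded for saving energy without dropping users. The state and reward definitions are simplified guesses at the structure used in BIB005 , not its exact formulation.

    # Skeleton of a per-small-cell ON/OFF Q-learning loop (simplified guess
    # at the structure in BIB005; states and rewards are illustrative).
    import random
    from collections import defaultdict

    Q = defaultdict(float)               # Q[(state, action)] -> value
    alpha_lr, gamma, eps = 0.1, 0.9, 0.1 # learning rate, discount, exploration
    ACTIONS = ("ON", "OFF")

    def policy(state):
        if random.random() < eps:        # explore occasionally
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])  # otherwise exploit

    def update(state, action, reward, next_state):
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha_lr * (reward + gamma * best_next - Q[(state, action)])

    # One illustrative step: battery/traffic state, energy-vs-drops reward.
    s = ("high_battery", "low_traffic")
    a = policy(s)
    update(s, a, reward=(1.0 if a == "OFF" else -0.2), next_state=s)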
Cloud radio access networks are considered a key enabler in the upcoming 5G era, providing higher data rates and lower inter-cell interference. The architecture consists of small cells for accommodating more users with superior quality of service and macro base stations for enhancing the coverage area, with resources scheduled through the cloud RAN. A resource allocation scheme was put together with the aim of maximizing the energy efficiency of the UEs served by the radio heads while minimizing inter-tier interference BIB006 . The available spectrum was divided into two resource blocks and assigned to different UE groups depending upon their location and QoS demands. A central controller interfaced with the baseband unit pool learns about the network state through the interfaced macro base station and then takes the actions needed for energy efficiency optimization. Furthermore, a compact state representation was utilized to enhance the learning process and improve the algorithm's convergence. The resource block and power allocation with respect to energy saving in the downlink channel of the remote radio heads, in accordance with the QoS constraints, have also been documented. Since the given model depends upon prior UE knowledge to make transitions for optimization, Q-learning was proposed to practically model the objectives and system specifications. The resource allocation is mainly carried out at the controller in the BBU pool, and the control signalling is carried out via the X1 and S1 links. The hierarchy of UEs and RRHs operates under the macro base station, and their states are conveyed to the controller.
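One plausible way to encode the controller's objective and the compact state representation is sketched below: the per-step reward pays out energy efficiency only while QoS and interference constraints hold, and continuous measurements are quantized into a small tuple so that the Q-table stays tractable. Both definitions are guesses at the spirit of BIB006 rather than its exact formulas.

    # Plausible reward shaping and compact state for the H-CRAN controller
    # (our guesses at the spirit of BIB006, not its exact formulas).
    def reward(sum_rate_bps, total_power_w, qos_ok, interference_ok, penalty=-1.0):
        """Pay out bit/J while QoS and interference constraints hold."""
        if not (qos_ok and interference_ok):
            return penalty
        return sum_rate_bps / total_power_w

    def compact_state(load, interference, n_bins=4):
        """Quantise normalised (0..1) measurements into a small state tuple."""
        bucket = lambda x: min(int(x * n_bins), n_bins - 1)
        return (bucket(load), bucket(interference))

    print(compact_state(0.62, 0.18), reward(80e6, 95.0, True, True))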
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Challenges and Open Issues <s> The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in 5G cellular networks. While massive MIMO will reduce the transmission power at the expense of higher computational cost, the question remains as to which (computation or transmission power) is more important in the energy efficiency of 5G small cell networks. Thus, the main objective in this article is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50 percent of the energy is consumed by the computation power at 5G small cell BSs. Moreover, the computation power of a 5G small cell BS can approach 800 W when massive MIMO (e.g., 128 antennas) is deployed to transmit high volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Challenges and Open Issues <s> Along with spectral efficiency (SE), energy efficiency (EE) is a key performance metric for the design of 5G and beyond 5G (B5G) wireless networks. At the same time, infrastructure sharing among multiple operators has also emerged as a new trend in wireless communication networks. This paper presents an optimization framework for EE and SE maximization in a network, where radio resources are shared among multiple operators. We define a heterogeneous service level agreement (SLA) framework for a shared network, in which the constraints of different operators are handled by two different multi-objective optimization approaches namely the utility profile and scalarization methods. Pareto-optimal solutions are obtained by merging these approaches with the theory of generalized fractional programming. The approach applies to both noise-limited and interference-limited systems, with single-carrier or multi-carrier transmission. Extensive numerical results illustrate the effect of the operator specific SLA requirements on the global spectral and EE. Three network scenarios are considered in the numerical results, each one corresponding to a different SLA, with different operator-specific EE and SE constraints. <s> BIB002
In accordance with the increase in the computational demand on base stations, energy efficiency in the upcoming 5G networks needs to be scaled up by 100-1000 times in contrast with the traditional 4G network BIB001 . Since transmission ranges will be scaled down due to the dense small cell deployment, the energy efficiency evaluation will potentially revolve around the computational side, rather than the transmission side as before. Storage functions for local data caching should also be considered in this evaluation, since they will be common in the forthcoming networks. Scheduling schemes should be enhanced to involve an optimal number of antennas and an optimal bandwidth for resource allocation. The trade-off between transmission and computational power should be optimized considering the effects of the kind of transmission technology involved. Software Defined Networking might be a potential fix for this issue, yet it needs further exploration. Moreover, authors in proposed that the intermediate delays from source to destination be incorporated in the energy efficiency formulation for an even more realistic estimation. Most of the ongoing research discusses energy efficiency from many different perspectives, but so far a unifying approach has not been reached. The Green Touch project has taken such an initiative, but more exploration is needed for a stronger understanding. With the explosive small cell deployment, the 5G network will be interference-limited, so orthogonal transmission techniques might not be practical. The framework of sequential fractional programming might be extended for energy efficiency optimization with affordable complexity, as suggested in BIB002 . Random matrix theory and stochastic geometry appear to be suitable statistical models for evaluating the randomness within wireless networks, but thorough research on energy efficiency employing these tools has yet to be conducted. Finally, the avenue of self-learning mechanisms remains less explored. Since local caching has been considered a potential answer for reducing the load on backhaul networks, novel approaches incorporating this consideration need to be developed.
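As a closing illustration of why the computation side matters, the following back-of-the-envelope calculation includes computation and overhead power in a bits-per-Joule metric; with BIB001 reporting small-cell computation power approaching 800 W under massive MIMO, the transmit term quickly becomes a minor contributor. The specific power split and efficiency figures are assumptions for illustration only.

    # Bits-per-Joule with computation power included (power split assumed).
    def bs_power(p_tx_w, pa_eff=0.3, p_compute_w=400.0, p_overhead_w=100.0):
        """Total BS power: PA-corrected transmit power plus computation/overhead."""
        return p_tx_w / pa_eff + p_compute_w + p_overhead_w

    def energy_efficiency(throughput_bps, p_tx_w, **kw):
        return throughput_bps / bs_power(p_tx_w, **kw)   # bit/J

    # Halving a small cell's 1 W transmit power barely moves the bit/J figure,
    # because the (assumed) 400 W computation term dominates the budget.
    print(energy_efficiency(500e6, p_tx_w=1.0), energy_efficiency(500e6, p_tx_w=0.5))

Under such numbers, optimizing computation power clearly offers more leverage than trimming transmit power, which is precisely the open issue raised in BIB001 .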
A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> Abstract In the discussion on the practical applicability of the collective theory of risk to the insurance field some points have been raised, where it is argued that the conceptions of the theory do not correspond to the conditions prevailing in practice, thus entailing a serious reduction of its working value. Three such points will be considered in this paper. They are usually put forward as follows: 1. The theory assumes constancy in time to hold for the distribution of the amounts at risk falling due, the risk sums. 2. The theory does not take into account that interest is earned on the safeguarding capital of the insurer, the risk reserve. 3. The theory considers the probability that ruin will ever occur to the insurer by exhaustion of the risk reserve. A fairly large part of this probability might be ascribable to the possibility of ruin in a very remote future, whilst the practical insurer is only interested in the probability within a reasonable period of time. <s> BIB001 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> We consider a process with reflection at the origin and paths which are piecewise linear or Brownian, with the drift and variance constants being determined by the state of an underlying finite Markov process; the purely linear case corresponds to fluid flow models of current interest in telecommunications engineering. It is shown that the stationary distribution is phase-type, and various algorithms for computing the phase representation are given, some iterative with each step involving a matrix inversion and some based upon spectral expansion of the phase generator. Mathematically, the point of view is Markov additive processes, and some key tools are time-reversal and auxiliary Markov processes obtained by observing the underlying Markov process when the additive component is at a maximum <s> BIB002 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> We consider a risk process with stochastic interest rate, and show that the probability of eventual ruin and the Laplace transform of the time of ruin can be found by solving certain boundary value problems involving integro-differential equations. These equations are then solved for a number of special cases. We also show that a sequence of such processes converges weakly towards a diffusion process, and analyze the above-mentioned ruin quantities for the limit process in some detail. <s> BIB003 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> We consider spectrally negative Levy process and determine the joint Laplace transform of the exit time and exit position from an interval containing the origin of the process reflected in its supremum. In the literature of fluid models, this stopping time can be identified as the time to buffer-overflow. The Laplace transform is determined in terms of the scale functions that appear in the two-sided exit problem of the given Levy process. 
The obtained results together with existing results on two sided exit problems are applied to solving optimal stopping problems associated with the pricing of Russian options and their Canadized versions. <s> BIB004 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> We consider first passage times for piecewise exponential Markov processes that may be viewed as Ornstein-Uhlenbeck processes driven by compound Poisson processes. We allow for two-sided jumps and as a main result we derive the joint Laplace transform of the first passage time of a lower level and the resulting undershoot when passage happens as a consequence of a downward (negative) jump. The Laplace transform is determined using complex contour integrals and we illustrate how the choice of contours depends in a crucial manner on the particular form of the negative jump part, which is allowed to belong to a dense class of probabilities. We give extensions of the main result to two-sided exit problems where the negative jumps are as before but now it is also required that the positive jumps have a distribution of the same type. Further, extensions are given for the case where the driving Levy process is the sum of a compound Poisson process and an independent Brownian motion. Examples are used to illustrate the theoretical results and include the numerical evaluation of some concrete exit probabilities. Also, some of the examples show that for specific values of the model parameters it is possible to obtain closed form expressions for the Laplace transform, as is the case when residue calculus may be used for evaluating the relevant contour integrals. <s> BIB005 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> We provide a unified analytical treatment of first passage problems under an affine state-dependent jump-diffusion model (with drift and volatility depending linearly on the state). Our proposed model, that generalizes several previously studied cases, may be used for example for obtaining probabilities of ruin in the presence of interest rates under the rational investement strategies proposed by Berk & Green (2004). <s> BIB006 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> This survey treats the problem of ruin in a risk model when assets earn investment income. In addition to a general presentation of the problem, topics covered are a presentation of the relevant integro-differential equations, exact and numerical solutions, asymptotic results, bounds on the ruin probability and also the possibility of minimizing the ruin probability by investment and possibly reinsurance control. The main emphasis is on continuous time models, but discrete time models are also covered. A fairly extensive list of references is provided, particularly of papers published after 1998. For more references to papers published before that, the reader can consult [47]. 
<s> BIB007 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> In this paper we develop a symbolic technique to obtain asymptotic expressions for ruin probabilities and discounted penalty functions in renewal insurance risk models when the premium income depends on the present surplus of the insurance portfolio. The analysis is based on boundary problems for linear ordinary differential equations with variable coefficients. The algebraic structure of the Green's operators allows us to develop an intuitive way of tackling the asymptotic behavior of the solutions, leading to exponential-type expansions and Cram\'er-type asymptotics. Furthermore, we obtain closed-form solutions for more specific cases of premium functions in the compound Poisson risk model. <s> BIB008 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> This paper concerns an optimal dividend distribution problem for an insurance company whose risk process evolves as a spectrally negative L\'{e}vy process (in the absence of dividend payments). The management of the company is assumed to control timing and size of dividend payments. The objective is to maximize the sum of the expected cumulative discounted dividend payments received until the moment of ruin and a penalty payment at the moment of ruin, which is an increasing function of the size of the shortfall at ruin; in addition, there may be a fixed cost for taking out dividends. A complete solution is presented to the corresponding stochastic control problem. It is established that the value-function is the unique stochastic solution and the pointwise smallest stochastic supersolution of the associated HJB equation. Furthermore, a necessary and sufficient condition is identified for optimality of a single dividend-band strategy, in terms of a particular Gerber-Shiu function. A number of concrete examples are analyzed. <s> BIB009 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> Abstract This paper solves exit problems for spectrally negative Markov additive processes and their reflections. So-called scale matrix, which is a generalization of the scale function of a spectrally negative Levy process, plays the central role in the study of the exit problems. Existence of the scale matrix was shown by Kyprianou and Palmowski (2008) [32, Thm. 3] . We provide the probabilistic construction of the scale matrix, and identify its transform. In addition, we generalize to the MAP setting the relation between the scale function and the excursion (height) measure. The main technique is based on the occupation density formula and even in the context of fluctuations of spectrally negative Levy processes this idea seems to be new. Our representation of the scale matrix W ( x ) = e − Λ x L ( x ) in terms of nice probabilistic objects opens up possibilities for further investigation of its properties. <s> BIB010 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> It is often natural to consider defective or killed stochastic processes. 
Various observations continue to hold true for this wider class of processes, yielding more general results in a transparent way without additional effort. We illustrate this point with an example from risk theory by showing that the ruin probability for a defective risk process can be seen as a triple transform of various quantities of interest on the event of ruin. In particular, this observation is used to identify the triple transform in a simple way when either claims or interarrivals are exponential. We also show how to extend these results to modulated risk processes, where exponential distributions are replaced by phase-type distributions. In addition, we review and streamline some basic exit identities for defective Lévy and Markov additive processes. <s> BIB011 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> This paper concerns an optimal dividend distribution problem for an insurance company with surplus-dependent premium. In the absence of dividend payments, such a risk process is a particular case of so-called piecewise deterministic Markov processes. The control mechanism chooses the size of dividend payments. The objective consists in maximizing the sum of the expected cumulative discounted dividend payments received until the time of ruin and a penalty payment at the time of ruin, which is an increasing function of the size of the shortfall at ruin. A complete solution is presented to the corresponding stochastic control problem. We identify the associated Hamilton-Jacobi-Bellman equation and find necessary and sufficient conditions for optimality of a single dividend-band strategy, in terms of particular Gerber-Shiu functions. A number of concrete examples are analyzed. <s> BIB012 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> As is well known, all functionals of a Markov process may be expressed in terms of the generator operator, modulo some analytic work. In the case of spectrally negative Markov processes, however, it is conjectured that everything can be expressed in a more direct way using the W scale function which intervenes in the two-sided first passage problem, modulo performing various integrals. This conjecture arises from work on Lévy processes, where the W scale function has an explicit Laplace transform and is therefore easily computable; furthermore, it was found in the papers above that a second scale function Z, introduced by Avram, Kyprianou and Pistorius (2004), greatly simplifies first passage laws, especially for reflected processes. This paper gathers a collection of first passage formulas for spectrally negative Parisian Lévy processes, expressed in terms of W, Z, which may serve as an "instruction kit" for computing quantities of interest in applications, for example in risk theory and mathematical finance. To illustrate the usefulness of our list, we construct a new index for the valuation of financial companies modeled by spectrally negative Lévy processes, based on a Dickson-Waters modification of the de Finetti optimal expected discounted dividends objective. We offer as well an index for the valuation of conglomerates of financial companies. An implicit question arising is to investigate analogous results for other classes of spectrally negative Markovian processes.
<s> BIB013 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> Drawdown (resp. drawup) of a stochastic process, also referred to as the reflected process at its supremum (resp. infimum), has wide applications in many areas including financial risk management, actuarial mathematics and statistics. In this paper, for general time-homogeneous Markov processes, we study the joint law of the first passage time of the drawdown (resp. drawup) process, its overshoot, and the maximum of the underlying process at this first passage time. By using short-time pathwise analysis, under some mild regularity conditions, the joint law of the three drawdown quantities is shown to be the unique solution to an integral equation which is expressed in terms of fundamental two-sided exit quantities of the underlying process. Explicit forms for this joint law are found when the Markov process has only one-sided jumps or is a Lévy process (possibly with two-sided jumps). The proposed methodology provides a unified approach to study various drawdown quantities for the general class of time-homogeneous Markov processes. <s> BIB014 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> First passage problems for spectrally negative Lévy processes with possible absorption and/or reflection at boundaries have been widely applied in mathematical finance, risk, queueing, and inventory/storage theory. Historically, such problems were tackled by taking the Laplace transform of the associated Kolmogorov integro-differential equations involving the generator operator. In recent years there appeared an alternative approach based on the solution of two fundamental "two-sided exit" problems from an interval (TSE). A spectrally one-sided process will exit smoothly on one side of an interval, and the solution is simply expressed in terms of a "scale function" W (Bertoin 1997). The non-smooth two-sided exit (or ruin) problem suggests introducing a second scale function Z (Avram, Kyprianou and Pistorius 2004). Since many other problems can be reduced to TSE, researchers produced in recent years a kit of formulas expressed in terms of the "W, Z alphabet" for a great variety of first passage problems. We collect here our favorite recipes from this kit, including a recent one (94) which generalizes the classic de Finetti dividend problem. One interesting use of the kit is for recognizing relationships between apparently unrelated problems (see Lemma 3). Last but not least, it turned out recently that once the classic W, Z are replaced with appropriate generalizations, the classic formulas for (absorbed/reflected) Lévy processes continue to hold for: a) spectrally negative Markov additive processes (Ivanovs and Palmowski 2012), b) spectrally negative Lévy processes with Poissonian Parisian absorption and/or reflection (Avram, Perez and Yamazaki 2017; Avram and Zhou 2017), or with Omega killing (Li and Palmowski 2017). <s> BIB015 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> The first motivation of our paper is to explore further the idea that, in risk control problems, it may be profitable to base decisions both on the position of the underlying process X_t and on its ...
<s> BIB016 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> As is well known, the benefit of restricting to Lévy processes without positive jumps is the "W, Z scale functions paradigm", by which the knowledge of the scale functions W, Z extends immediately to other risk control problems. The same is largely true for strong Markov processes X_t, with the notable distinctions that (a) it is more convenient to use as "basis" differential exit functions ν, δ, and that (b) it is not yet known how to compute ν, δ or W, Z beyond the Lévy, diffusion, and a few other cases. The unifying framework outlined in this paper suggests, however, via an example, that the spectrally negative Markov and Lévy cases are very similar (except for the level of work involved in computing the basic functions ν, δ). We illustrate the potential of the unified framework by introducing a new objective (33) for the optimization of dividends, inspired by the de Finetti problem of maximizing expected discounted cumulative dividends until ruin, where we replace ruin with an optimally chosen Azéma-Yor/generalized draw-down/regret/trailing stopping time. This is defined as a hitting time of the "draw-down" process Y_t = sup_{0≤s≤t} X_s − X_t obtained by reflecting X_t at its maximum. This new variational problem has been solved in a parallel paper. <s> BIB017 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Introduction and Brief Review of First Passage Theory <s> In this paper, we investigate the reflected CIR process with two-sided jumps in order to capture the jump behavior and its non-negativity. Applying the method of (complex) contour integrals, the closed-form solution to the joint Laplace transform of the first passage time crossing a lower level and the corresponding undershoot is derived. We further extend our arguments to the exit problem from a finite interval and obtain joint Laplace transforms. Our results are expressed in terms of the real and imaginary parts of complex functions, via complex matrices. Numerical results are included. <s> BIB018
Introduction. The Segerdahl-Tichy process (Segerdahl 1955), characterized by exponential claims and state-dependent drift, has drawn considerable interest (see, for example, BIB006; BIB008; BIB012), due to its economic relevance: it is the simplest risk process which takes into account the effect of interest rates (see the excellent overview in (Albrecher and Asmussen 2010, Chapter 8)). It is also the simplest non-Lévy, non-diffusion example of a spectrally negative Markov risk model. Note that for both spectrally negative Lévy and diffusion processes, first passage theories have been developed which are based on identifying two "basic" monotone harmonic functions/martingales. This means that for these processes many control problems involving dividends, capital injections, etc., may be solved explicitly once the two basic functions have been obtained. Furthermore, extensions to general spectrally negative Markov processes are possible BIB014; BIB016; BIB017. Unfortunately, methods for computing the basic functions are still lacking outside the Lévy and diffusion classes. This divergence between theory and computation is strikingly illustrated by the Segerdahl process, for which there exist today six theoretical approaches, but for which almost nothing has been computed, with the exception of the ruin probability BIB003. Below, we review four of these methods (which apply also to certain generalizations provided in BIB006), with the purpose of drawing attention to connections between them, of underlining open problems, and of stimulating further work. Spectrally negative Markov processes with constant jump intensity. To set the stage for our topic and future research, consider a spectrally negative jump diffusion on a filtered probability space (Ω, {F_t}_{t≥0}, P), which satisfies the SDE

dX_t = c(X_t) dt + σ(X_t) dB_t − d(Σ_{i=1}^{N_λ(t)} C_i),   (1)

and is absorbed or reflected when leaving the half line (0, ∞). Here, B_t is standard Brownian motion, σ(x) > 0, c(x) > 0, ∀x > 0, N_λ(t) is a Poisson process of intensity λ, and C_i are nonnegative random variables with distribution measure F_C(dz) and finite mean. The functions c(x), a(x) := σ²(x)/2 and Π(dz) = λF_C(dz) are referred to as the Lévy-Khinchine characteristics of X_t. Note that we assume that all jumps go in the same direction and have constant intensity, so that we can take advantage of potential simplifications of the first passage theory in this case. The Segerdahl-Tichy process is the simplest example outside the spectrally negative Lévy and diffusion classes. It is obtained by assuming a(x) = 0 in (1), and C_k to be exponential i.i.d. random variables with density f(x) = µe^{−µx} (see BIB001 for the case c(x) = c + rx, r > 0, c ≥ 0, and for nonlinear c(x)). Note that, for the case c(x) = c + rx, an explicit computation of the ruin probability has been provided (with some typos) in BIB003. See also BIB007, and see (Albrecher and Asmussen 2010, Chapter 8) for further information on risk processes with state-dependent drift, in particular the two pages of historical notes and references. First passage theory concerns the first passage times above and below fixed levels. For any process (X_t)_{t≥0}, these are defined by

T_{b,+} = T_{b,+}^X := inf{t ≥ 0 : X_t > b},   T_{a,−} = T_{a,−}^X := inf{t ≥ 0 : X_t < a},   (2)

with inf ∅ = +∞, and the upper script X typically omitted. Since a is typically fixed below, we will write for simplicity T instead of T_{a,−}. First passage times are important in the control of reserves/risk processes.
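Before turning to analytic methods, the defining dynamics can be checked by simulation. The following is a minimal Monte Carlo sketch (Python), under stated assumptions: the Segerdahl-Tichy special case a(x) = 0 with c(x) = c + rx and Exp(µ) claims, illustrative parameter values of our own choosing, and eventual ruin approximated by ruin before a large finite horizon.

    import numpy as np

    rng = np.random.default_rng(0)
    lam, mu, c, r = 1.0, 2.0, 1.0, 0.5   # illustrative parameters, not from the text

    def flow(x, t):
        # Deterministic surplus between claims, solving x'(t) = c + r*x(t).
        return (x + c / r) * np.exp(r * t) - c / r

    def ruined_before(x0, horizon):
        # One path: does the first passage below 0 occur before the horizon?
        x, t = x0, 0.0
        while True:
            dt = rng.exponential(1.0 / lam)              # next claim epoch
            t += dt
            if t >= horizon:
                return False
            x = flow(x, dt) - rng.exponential(1.0 / mu)  # pay an Exp(mu) claim
            if x < 0:
                return True                              # T_{0,-} observed

    def ruin_prob(x0, horizon=100.0, n=5000):
        return sum(ruined_before(x0, horizon) for _ in range(n)) / n

    for x0 in (0.0, 1.0, 2.0):
        print(f"x0 = {x0}: estimated ruin probability ~ {ruin_prob(x0):.3f}")

Because the drift grows linearly while claims arrive at a constant rate, paths surviving the early phase rarely ruin later, so a moderate horizon already approximates the eventual ruin probability well.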
The rough idea is that when below low levels a, reserves processes should be replenished at some cost, and when above high levels b, they should be partly invested to yield income (see, for example, the comprehensive textbook Albrecher and Asmussen (2010)). The most important first passage functions are the solutions of the two-sided upward and downward exit problems from a bounded interval [a, b]:

Ψ̄_q(x, a, b) := P_x[T_{b,+} < T_{a,−} ∧ e_q],   Ψ_q(x, a, b) := P_x[T_{a,−} < T_{b,+} ∧ e_q],   (3)

where e_q is an independent exponential random variable of rate q. We will call them (killed) survival and ruin probabilities, respectively (see Ivanovs (2013) for a nice exposition of killing), but the qualifier "killed" will usually be dropped below. The absence of killing will be indicated by omitting the subindex q. Note that in the context of potential theory, (3) are called equilibrium potentials (of the capacitors {b, a} and {a, b}). Beyond ruin probabilities: scale functions, dividends, capital gains, etc. Recall that for "completely asymmetric Lévy" processes, with jumps all going in the same direction, a large variety of first passage problems may be reduced to the computation of the two monotone "scale functions" W_q, Z_q (see, for example, BIB004; BIB009; BIB011; Palmowski (2012); Albrecher et al. (2016); BIB013, and see BIB015 for a recent compilation of more than 20 laws expressed in terms of W_q, Z_q). For example, for spectrally negative Lévy processes, the Laplace transform/killed survival probability has a well-known simple factorization²:

Ψ̄_q(x, a, b) = E_x[e^{−q T_{b,+}}; T_{b,+} < T_{a,−}] = W_q(x − a)/W_q(b − a).   (4)

For a second example, the de Finetti (1957) discounted dividends fixed barrier objective for spectrally negative Lévy processes has a simple expression in terms of either the W_q scale function or of its logarithmic derivative ν_q = W_q′/W_q³: taking a = 0 as the ruin level,

V(x, b) = W_q(x)/W_q′(b),   (5)   equivalently   V(x, b) = e^{−∫_x^b ν_q(s) ds} / ν_q(b).   (6)

Maximizing over the reflecting barrier b is simply achieved by finding the roots of W_q′′(b) = 0. W, Z formulas for first passage problems for spectrally negative Markov processes. Since results for spectrally negative Lévy processes often require not much more than the strong Markov property, it is natural to attempt to extend them to the spectrally negative strong Markov case. As expected, everything worked out almost smoothly for "Lévy-type cases" like random walks, Markov additive processes BIB010, etc. Recently, it was discovered that W, Z formulas continue to hold a priori for spectrally negative Markov processes BIB014. The main difference is that in equations like Equation (4), W_q(x − a) and the second scale function Z_q,θ(x − a) BIB009; BIB010 must be replaced by two-variable functions W_q(x, a), Z_q,θ(x, a) (which reduce in the Lévy case to W_q(x, y) = W_q(x − y), with W_q being the scale function of the Lévy process). This unifying structure has led to recent progress for the optimal dividends problem for spectrally negative Markov processes (see BIB016). However, since the computation of the two-variable scale functions is currently well understood only for spectrally negative Lévy processes and diffusions, AG (the authors of BIB016) could provide no example outside these classes. In fact, as of today, we are not aware of any explicit or numeric results on the control of the process (1) which have succeeded in exploiting the W, Z formalism. Literature review. Several approaches may allow handling particular cases of spectrally negative Markov processes: 1. with phase-type jumps, there is Asmussen's embedding into a regime-switching diffusion BIB002 (see Section 5), and the complex integral representations of BIB005, BIB018; 2.
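For concreteness, here is a short Python sketch of these objects in the benchmark Lévy case, the Cramér-Lundberg process with Exp(µ) claims, where the transform 1/(ψ(s) − q) of W_q inverts by partial fractions over the two real roots of ψ(s) = q. It illustrates the exit formula (4) and the barrier value (5); the parameter values are illustrative, and the numerical derivative and grid search are crude stand-ins for solving W_q′′(b) = 0.

    import numpy as np

    lam, mu, c, q = 1.0, 2.0, 1.5, 0.1   # illustrative; c*mu > lam (net profit)

    def psi(s):
        # Laplace exponent of X_t = c t - compound Poisson(lam) of Exp(mu) claims.
        return c * s - lam * s / (mu + s)

    def psi_prime(s):
        return c - lam * mu / (mu + s) ** 2

    # psi(s) = q  <=>  c s^2 + (c mu - lam - q) s - q mu = 0, two real roots.
    bcoef = c * mu - lam - q
    disc = np.sqrt(bcoef ** 2 + 4 * c * q * mu)
    r1, r2 = (-bcoef + disc) / (2 * c), (-bcoef - disc) / (2 * c)

    def W_q(x):
        # Partial-fraction inversion: W_q(x) = sum_i e^{r_i x} / psi'(r_i).
        return np.exp(r1 * x) / psi_prime(r1) + np.exp(r2 * x) / psi_prime(r2)

    def exit_up(x, a, b):
        # Formula (4): E_x[e^{-q T_{b,+}}; T_{b,+} < T_{a,-}] = W_q(x-a)/W_q(b-a).
        return W_q(x - a) / W_q(b - a)

    def barrier_value(x, b, h=1e-5):
        # Formula (5): de Finetti barrier value V(x, b) = W_q(x)/W_q'(b).
        return W_q(x) / ((W_q(b + h) - W_q(b - h)) / (2 * h))

    print("killed exit upward from x=1 in [0,3]:", exit_up(1.0, 0.0, 3.0))
    grid = np.linspace(0.01, 5.0, 500)
    print("near-optimal barrier b* ~",
          grid[np.argmax([barrier_value(0.5, b) for b in grid])])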
for Lévy-driven Langevin-type processes, renewal equations have been provided in Czarna et al. (2017) (see Section 2); 3. for processes with affine operator, an explicit integrating factor for the Laplace transform may be found in BIB006 (see Section 3).
A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> 2 <s> We consider a risk process with stochastic interest rate, and show that the probability of eventual ruin and the Laplace transform of the time of ruin can be found by solving certain boundary value problems involving integro-differential equations. These equations are then solved for a number of special cases. We also show that a sequence of such processes converges weakly towards a diffusion process, and analyze the above-mentioned ruin quantities for the limit process in some detail. <s> BIB001 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> 2 <s> We provide a unified analytical treatment of first passage problems under an affine state-dependent jump-diffusion model (with drift and volatility depending linearly on the state). Our proposed model, which generalizes several previously studied cases, may be used, for example, for obtaining probabilities of ruin in the presence of interest rates under the rational investment strategies proposed by Berk & Green (2004). <s> BIB002 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> 2 <s> Lévy Processes and Applications.- The Lévy-Itô Decomposition and Path Structure.- More Distributional and Path-Related Properties.- General Storage Models and Paths of Bounded Variation.- Subordinators at First Passage and Renewal Measures.- The Wiener-Hopf Factorisation.- Lévy Processes at First Passage.- Exit Problems for Spectrally Negative Processes.- More on Scale Functions.- Ruin Problems and Gerber-Shiu Theory.- Applications to Optimal Stopping Problems.- Continuous-State Branching Processes.- Positive Self-similar Markov Processes.- Epilogue.- Hints for Exercises.- References.- Index. <s> BIB003 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> 2 <s> The first motivation of our paper is to explore further the idea that, in risk control problems, it may be profitable to base decisions both on the position of the underlying process X_t and on its ... <s> BIB004
² The fact that the survival probability has the multiplicative structure (4) is equivalent to the absence of positive jumps, by the strong Markov property; this is the famous "gambler's winning" formula BIB003. ³ ν_q may be more useful than W_q in the spectrally negative Markov framework BIB004. 4. for the Segerdahl process, the direct IDE solving approach is successful BIB001 (see Section 4). We will emphasize here the third approach, but also use the second to show how the third approach fits within it. The direct IDE solving approach is recalled for comparison, and Asmussen's approach is also recalled, for its generality. Here is an example of an important problem we would like to solve: Problem 1. Find the de Finetti optimal barrier for the Segerdahl-Tichy process, extending Equations (5) and (6). Contents. Section 2 reviews the recent approach based on renewal equations due to Czarna et al. (2017) (which still needs to be justified for increasing premiums satisfying (8)). An important renewal equation (11) for the "scale derivative" w is recalled here, and a new result relating the scale derivative to its integrating factor (16) is offered (see Theorem 1). Section 3 reviews older computations of BIB002 for more general processes with affine operator, and provides explicit formulas for the Laplace transforms of the survival and ruin probability (24), in terms of the same integrating factor (16) and its antiderivative. Section 4 reviews the direct classic Kolmogorov approach for solving first passage problems with phase-type jumps. The discounted ruin probability (q > 0) for this process may be found explicitly (33) for the Segerdahl process by transforming the renewal equation (29) into the ODE (30), which is hypergeometric of order 2. This result, due to Paulsen, has discouraged further research for more general mixed exponential jumps, since it seems to require a separate "look-up" of hypergeometric solutions for each particular problem. Section 5 reviews Asmussen's approach for solving first passage problems with phase-type jumps, and illustrates the simple structure of the survival and ruin probability of the Segerdahl-Tichy process in terms of the scale derivative w. This approach yields quasi-explicit results when q = 0. Section 6 checks that our integrating factor approach recovers various results for Segerdahl's process when q = 0 or x = 0. Section 7 reviews necessary hypergeometric identities. Finally, Section 8 outlines further promising directions of research.
A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Asmussen's Embedding Approach for Solving Kolmogorov's Integro-Differential Equation with Phase-Type Jumps <s> We consider a process with reflection at the origin and paths which are piecewise linear or Brownian, with the drift and variance constants being determined by the state of an underlying finite Markov process; the purely linear case corresponds to fluid flow models of current interest in telecommunications engineering. It is shown that the stationary distribution is phase-type, and various algorithms for computing the phase representation are given, some iterative with each step involving a matrix inversion and some based upon spectral expansion of the phase generator. Mathematically, the point of view is Markov additive processes, and some key tools are time-reversal and auxiliary Markov processes obtained by observing the underlying Markov process when the additive component is at a maximum. <s> BIB001 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Asmussen's Embedding Approach for Solving Kolmogorov's Integro-Differential Equation with Phase-Type Jumps <s> For the Cramér-Lundberg risk model with phase-type claims, it is shown that the probability of ruin before an independent phase-type time H coincides with the ruin probability in a certain Markovian fluid model and therefore has a matrix-exponential form. When H is exponential, this yields in particular a probabilistic interpretation of a recent result of Avram & Usabel. When H is Erlang, the matrix algebra takes a simple recursive form, and fixing the mean of H at T and letting the number of stages go to infinity yields a quick approximation procedure for the probability of ruin before time T. Numerical examples are given, including a combination with extrapolation. <s> BIB002 </s> A Review of First-Passage Theory for the Segerdahl-Tichy Risk Process and Open Problems <s> Asmussen's Embedding Approach for Solving Kolmogorov's Integro-Differential Equation with Phase-Type Jumps <s> Our paper illustrates how the theory of Lie systems allows recovering known results and providing new examples of piecewise deterministic processes with phase-type jumps for which the corresponding first-time passage problems may be solved explicitly. <s> BIB003
One of the most convenient approaches for removing the integral term in (29) is the probabilistic transformation which eliminates the jumps, as in BIB001, applicable when the downward phase-type jumps have a survival function

F̄_C(x) = β e^{Bx} 1,

where B is an n × n stochastic generating matrix (nonnegative off-diagonal elements and nonpositive row sums), β = (β_1, ..., β_n) is a row probability vector (with nonnegative elements and Σ_{j=1}^n β_j = 1), and 1 = (1, 1, ..., 1)′ is the column vector of ones. The density is f_C(x) = β e^{Bx} b, where b = (−B)1 is a column vector, and the Laplace transform is f̂_C(s) = β(sI − B)^{−1} b. Asmussen's approach BIB001; BIB002 replaces the negative jumps by segments of slope −1, embedding the original spectrally negative Lévy process into a continuous Markov-modulated Lévy process. For the new process we have auxiliary unknowns A_i(x) representing ruin or survival probabilities (or, more generally, Gerber-Shiu functions) when starting at x conditioned on a phase i with drift downwards (i.e., in one of the "auxiliary stages of artificial time" introduced by changing the jumps to segments of slope −1). Let A denote the column vector with components A_1, ..., A_n. The Kolmogorov integro-differential equation then turns into a system of ODEs, due to the continuity of the embedding process. For the ruin probability with exponential jumps of rate µ, for example, there is only one downward phase, and the system is

c(x) A_0′(x) = (λ + q) A_0(x) − λ A_1(x),   A_1′(x) = µ (A_0(x) − A_1(x)),

where A_0 refers to the original (upward) phase and killing at rate q acts in real time only. For survival probabilities, one only needs to modify the boundary conditions (see the following section). 5.1. Exit Problems for the Segerdahl-Tichy Process, with q = 0. Asmussen's approach is particularly convenient for solving exit problems for the Segerdahl-Tichy process. Example 1. The eventual ruin probability. When q = 0, the system for the ruin probabilities with x ≥ 0 is

c(x) A_0′(x) = λ (A_0(x) − A_1(x)),   A_1′(x) = µ (A_0(x) − A_1(x)).

This may be solved by subtracting the equations. Putting d(x) := A_0(x) − A_1(x), we find d′(x) = (λ/c(x) − µ) d(x), so that d(x) = d(0) e^{∫_0^x (λ/c(u) − µ) du}. Finally, introducing the scale derivative

w(x) := (λ/c(x)) e^{∫_0^x (λ/c(u) − µ) du}   (46)

and the scale function W(x) := 1 + ∫_0^x w(u) du (44), we obtain for the survival probability Ψ̄

Ψ̄(x) = W(x)/W(∞),   (45)

where Ψ̄(0) = 1/W(∞) by plugging W(0) = 1 in the first and last terms in (45). We may also rewrite (45) as Ψ̄(x) = (1 + ∫_0^x w(u) du)/(1 + ∫_0^∞ w(u) du). Note that w(x) > 0 implies that the scale function W(x) is nondecreasing, and that (46) does not depend on a. Indeed, the analog of (44) is W(x, a) = 1 + ∫_a^x w(u) du. Remark 7. The definition adopted in this section for the scale function W(x, a) uses the normalization W(a, a) = 1, which is only appropriate in the absence of Brownian motion. Despite the new scale derivative/integrating factor approach, we were not able to produce further explicit results beyond (33), due to the fact that neither the scale derivative nor the integral of the integrating factor is explicit when q > 0 (this is in line with BIB003). Thus, (33) remains for now an outstanding, not well-understood exception. Problem 5. Are there other explicit first passage results for Segerdahl's process when q > 0? In the next subsections, we show that, via the scale derivative/integrating factor approach, we may rederive well-known results for q = 0.
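The q = 0 objects above are easy to evaluate numerically. The following Python sketch assumes the Segerdahl premium c(x) = c + rx, for which ∫_0^x λ/c(u) du = (λ/r) log(1 + rx/c) is explicit; the parameter values are illustrative, and the block should be read as a sketch of the formulas just derived, not as code from the survey.

    import numpy as np
    from scipy.integrate import quad

    lam, mu, c, r = 1.0, 2.0, 1.0, 0.5    # illustrative parameters

    def premium(x):
        return c + r * x

    def w(x):
        # Scale derivative (46): (lam/c(x)) * exp(int_0^x (lam/c(u) - mu) du).
        inner = (lam / r) * np.log(1.0 + r * x / c) - mu * x
        return lam / premium(x) * np.exp(inner)

    def W(x, a=0.0):
        # Two-variable scale function, normalized by W(a, a) = 1.
        return 1.0 + quad(w, a, x)[0]

    W_inf = 1.0 + quad(w, 0.0, np.inf)[0]

    def survival(x):
        # Eventual survival probability (45): W(x)/W(inf).
        return W(x) / W_inf

    def exit_up(x, a, b):
        # Analog of (4) with q = 0: P_x[T_{b,+} < T_{a,-}] = W(x, a)/W(b, a).
        return W(x, a) / W(b, a)

    print("survival from 0 :", survival(0.0))      # equals 1/W(inf)
    print("survival from 2 :", survival(2.0))
    print("P_1[T_{3,+} < T_{0,-}] :", exit_up(1.0, 0.0, 3.0))

As a sanity check, letting r tend to 0 recovers the classical Cramér-Lundberg value Ψ̄(0) = 1 − λ/(cµ) for exponential claims.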
A unifying survey on weighted logics and weighted automata <s> Introduction <s> The formalism of regular expressions was introduced by S. C. Kleene [6] to obtain the following basic theorems. <s> BIB001 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> In this note we discuss the definition of a family C of automata derived from the family C0 of the finite one-way one-tape automata (Rabin and Scott, 1959). In loose terms, the automata from C are among the machines characterized by the following restrictions: (a) Their output consists in the acceptance (or rejection) of input words belonging to the set F of all words in the letters of a finite alphabet X. (b) The automaton operates sequentially on the successive letters of the input word without the possibility of coming back on the previously read letters and, thus, all the information to be used in the further computations has to be stored in the internal memory. (c) The unbounded part of the memory is the finite dimensional vector space of the vectors with N integral coordinates; this part of the memory plays only a passive role and all the control of the automaton is performed by the finite part. (d) Only elementary arithmetic operations are used and the amount of computation allowed for each input letter is bounded in terms of the total number of additions and subtractions. (e) The rule by which it is decided to accept or reject a given input word is submitted to the same type of requirements and it involves only the storage of a finite amount of information. Thus the family C is a very elementary modification of C0 and it is not <s> BIB002 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> 1. Motivation. Many variants of the notion of automaton have appeared in the literature. We find it convenient here to adopt the notion of E. F. Moore [7]. Inasmuch as Rabin-Scott [9] adopt this notion, too, it is convenient to refer to [9] for various results presumed here. In particular, Kleene's theorem [5, Theorems 3, 5] is used in the form in which it appears in [9]. It is often perspicacious to view regular expressions, and this notion is used in the sense of [3]. In general, we are concerned with the problems of automatically designing an automaton from a specification of a relation which is to hold between the automaton's input sequences and determined output sequences. These "design requirements" are given via a formula of some kind. The problems with which we are concerned have been described in [1]. With respect to particular formalisms for expressing "design requirements" as well as the notion of automaton itself, the problems are briefly and informally these: (1) to produce an algorithm which when it operates on an automaton and a design requirement produces the correct answer to the question "Does this automaton satisfy this design requirement?", or else show no such algorithm exists; (2) to produce an algorithm which operates on a design requirement and produces the correct answer to the question "Does there exist an automaton which satisfies this design requirement?", or else show no such algorithm exists; (3) to produce an algorithm which operates on a design requirement and terminates with an automaton which satisfies the requirement when one exists and otherwise fails to terminate, or else show no such algorithm exists. Interrelationships among problems (1), (2), (3) will appear in the paper [1].
This paper will also indicate the close connection between problem (1) and decision problems for truth of sentences of certain arithmetics. The paper [1] will also make use of certain results concerning weak arithmetics already obtained in the literature to obtain answers to problems (1) and (3). Thus <s> BIB003 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> We define a weighted monadic second-order logic for trees where the weights are taken from a commutative semiring. We prove that a restricted version of this logic characterizes the class of formal tree series which are accepted by weighted bottom-up finite state tree automata. The restriction on the logic can be dropped if additionally the semiring is locally finite. This generalizes corresponding classical results of Thatcher, Wright, and Doner for tree languages and it extends recent results of Droste and Gastin [Weighted automata and weighted logics, in: Automata, Languages and Programming--32nd International Colloquium, ICALP 2005, Lisbon, Portugal, 2005, Proceedings, Lecture Notes in Computer Science, Vol. 3580, Springer, Berlin, 2005, pp. 513-525, full version in Theoretical Computer Science, to appear.] from formal power series on words to formal tree series. <s> BIB004 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> Weighted automata are used to describe quantitative properties in various areas such as probabilistic systems, image compression, speech-to-text processing. The behaviour of such an automaton is a mapping, called a formal power series, assigning to each word a weight in some semiring. We generalize Büchi's and Elgot's fundamental theorems to this quantitative setting. We introduce a weighted version of MSO logic and prove that, for commutative semirings, the behaviours of weighted automata are precisely the formal power series definable with particular sentences of our weighted logic. We also consider weighted first-order logic and show that aperiodic series coincide with the first-order definable ones, if the semiring is locally finite, commutative and has some aperiodicity property. <s> BIB005 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> Nondeterministic finite automata with states and transitions labeled by real-valued weights have turned out to be powerful tools for the representation and compression of digital grayscale and color images. The addressing of pixels by input-sequences is extended to cover multi-resolution images. Encoding algorithms for such weighted finite automata (WFA) exploit self-similarities for efficient image compression, outperforming the well-known JPEG baseline standard most of the time. WFA-concepts are embedded easily into weighted finite transducers (WFT) which can execute several natural operations on images in their compressed form and also into so-called parametric WFA, which are closely related to generalized Iterated Function Systems. <s> BIB006 </s> We explain why weighted automata are an attractive knowledge representation for natural language problems. We first trace the close historical ties between the two fields, then present two complex real-world problems, transliteration and translation. These problems are usefully decomposed into a pipeline of weighted transducers, and weights can be set to maximize the likelihood of a training corpus using standard algorithms.
We additionally describe the representation of language models, critical data sources in natural language processing, as weighted automata. We outline the wide range of work in natural language processing that makes use of weighted string and tree automata and describe current work and challenges. <s> BIB007 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> A multioperator monoid $\mathcal{A}$ is a commutative monoid with additional operations on its carrier set. A weighted tree automaton over $\mathcal{A}$ is a finite state tree automaton of which each transition is equipped with an operation of $\mathcal{A}$. We define M-expressions over $\mathcal{A}$ in the spirit of formulas of weighted monadic second-order logics and, as our main result, we prove that if $\mathcal{A}$ is absorptive, then the class of tree series recognizable by weighted tree automata over $\mathcal{A}$ coincides with the class of tree series definable by M-expressions over $\mathcal{A}$. This result implies the known fact that for the series over semirings recognizability by weighted tree automata is equivalent with definability in syntactically restricted weighted monadic second-order logic. We prove this implication by providing two purely syntactical transformations, from M-expressions into formulas of syntactically restricted weighted monadic second-order logic, and vice versa. <s> BIB008 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> We present an algorithmic method for the quantitative, performance-aware synthesis of concurrent programs. The input consists of a nondeterministic partial program and of a parametric performance model. The nondeterminism allows the programmer to omit which (if any) synchronization construct is used at a particular program location. The performance model, specified as a weighted automaton, can capture system architectures by assigning different costs to actions such as locking, context switching, and memory and cache accesses. The quantitative synthesis problem is to automatically resolve the nondeterminism of the partial program so that both correctness is guaranteed and performance is optimal. As is standard for shared memory concurrency, correctness is formalized "specification free", in particular as race freedom or deadlock freedom. For worst-case (average-case) performance, we show that the problem can be reduced to 2-player graph games (with probabilistic transitions) with quantitative objectives. While we show, using game-theoretic methods, that the synthesis problem is Nexp-complete, we present an algorithmic method and an implementation that works efficiently for concurrent programs and performance models of practical interest. We have implemented a prototype tool and used it to synthesize finite-state concurrent programs that exhibit different programming patterns, for several performance models representing different architectures. <s> BIB009 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> Quantitative aspects of systems can be modeled by weighted automata. Here, we deal with such automata running on finite trees. Usually, transitions are weighted with elements of a semiring and the behavior of the automaton is obtained by multiplying the weights along a run. We turn to a more general cost model: the weight of a run is now determined by a global valuation function. An example of such a valuation function is the average of the weights. 
We establish a characterization of the behaviors of these weighted finite tree automata by fragments of weighted monadic second-order logic. For bi-locally finite bimonoids, we show that weighted tree automata capture the expressive power of several semantics of full weighted MSO logic. Decision procedures follow as consequences. <s> BIB010 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> Weighted timed automata (WTA) model quantitative aspects of real-time systems like continuous consumption of memory, power or financial resources. They accept quantitative timed languages where every timed word is mapped to a value, e.g., a real number. In this paper, we prove a Nivat theorem for WTA which states that recognizable quantitative timed languages are exactly those which can be obtained from recognizable boolean timed languages with the help of several simple operations. We also introduce a weighted extension of relative distance logic developed by Wilke, and we show that our weighted relative distance logic and WTA are equally expressive. The proof of this result can be derived from our Nivat theorem and Wilke’s theorem for relative distance logic. Since the proof of our Nivat theorem is constructive, the translation process from logic to automata and vice versa is also constructive. This leads to decidability results for weighted relative distance logic. <s> BIB011 </s> A unifying survey on weighted logics and weighted automata <s> Introduction <s> Weighted automata are non-deterministic automata where the transitions are equipped with weights. They can model quantitative aspects of systems like costs or energy consumption. The value of a run can be computed, for example, as the maximum, limit average, or discounted sum of transition weights. In multi-weighted automata, transitions carry several weights and can model, for example, the ratio between rewards and costs, or the efficiency of use of a primary resource under some upper bound constraint on a secondary resource. Here, we introduce a general model for multi-weighted automata as well as a multi-weighted MSO logic. In our main results, we show that this multi-weighted MSO logic and multi-weighted automata are expressively equivalent both for finite and infinite words. The translation process is effective, leading to decidability results for our multi-weighted MSO logic. <s> BIB012
Weighted automata are a well-studied formalism modelling quantitative behaviours. Introduced by Schützenberger in BIB002, they have been applied in many areas such as image compression BIB006, natural language processing BIB007, verification and synthesis of programs BIB009, etc. In recent years, high-level specification formalisms of quantitative properties have received increasing interest. Among other successes, the connection between monadic second-order logic (MSO) and finite automata established by Büchi, Elgot and Trakhtenbrot BIB001 BIB003 has been extended to the weighted setting. There have been many attempts to find a suitable extension of MSO describing quantitative properties which captures the expressive power of weighted automata. The considered variants differ with respect to the structures (words, ranked or unranked trees, nested words, etc.) and the weight domains (semirings, valuation monoids, valuation structures, multi-operator monoids, etc.). This article aims at revisiting the link between weighted logics and weighted automata in a uniform manner with regard to these two dimensions. Our main contribution is to consider a new fragment of weighted logics containing a minimal set of features. In order to simplify the uniformity with respect to the structures, we syntactically separate a Boolean fragment from the weighted part: only the syntax of Boolean formulae depends on the structures considered. Then, we clearly separate a small fragment able to define step functions (which we call step formulae) from the more general weighted logic. Because of the minimal set of features that it displays, we call our logic core weighted monadic second-order logic. This separation into three distinct layers, more or less clear in previous works, is designed both to clarify the subsequent study of the expressive power and to simplify the use of the weighted logic. Towards defining the semantics of this new logic, we first revisit weighted automata by defining an alternative semantics, then lifting it to formulae. This is done in two phases. First, an abstract semantics associates with a structure a multiset of weight-labelled structures. E.g., in the case of words, a weighted automaton/formula will map every word to a multiset of weight words. In the setting of trees, every tree is associated with a multiset of weight trees (of the same shape as the original tree). This abstract semantics is fully uninterpreted and, hence, does not depend on any algebraic structure over the set of weights considered. This semantics is in the spirit of a transducer. It has already been used in similar contexts: in BIB008 with an operator H(ω) which relabels trees with operations taken from a multi-operator monoid, with a weight assignment logic over infinite words, and in BIB011 with Nivat theorems for weighted automata over various structures. In a second phase, a concrete semantics is given, by means of an aggregation operator taking the abstract semantics and aggregating every multiset of weight structures to a single value (in a possibly different weight domain). For instance, the usual semantics of weighted automata over semirings can be recovered by mapping every weight word to the product of its weights, and merging the multiset with the addition of the semiring. Separating the semantics in two successive phases, both for weighted automata and logics, allows us to revisit the original proof of expressive equivalence of BIB005 in the abstract semantics.
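The two-phase semantics is easy to prototype. The Python sketch below uses a dict-based encoding of weighted automata of our own devising (purely for illustration): the abstract semantics of a word is the multiset of weight words indexed by accepting runs, and one concrete semantics (product along each run, then sum over the multiset) recovers the usual semiring behaviour.

    from collections import Counter
    from math import prod

    class WA:
        """Weighted automaton: trans maps (state, letter) -> [(next, weight)]."""
        def __init__(self, trans, initial, final):
            self.trans, self.initial, self.final = trans, initial, final

        def abstract_semantics(self, word):
            # Multiset of weight words: one weight sequence per accepting run.
            runs = Counter()
            def go(state, i, weights):
                if i == len(word):
                    if state in self.final:
                        runs[tuple(weights)] += 1
                    return
                for nxt, wt in self.trans.get((state, word[i]), []):
                    go(nxt, i + 1, weights + [wt])
            for s in self.initial:
                go(s, 0, [])
            return runs

    def aggregate_semiring(multiset):
        # Concrete semantics over (R, +, *): product per run, sum over runs.
        return sum(mult * prod(ws) for ws, mult in multiset.items())

    # Nondeterministic example: guess one position and weight it 2.
    A = WA({('p', 'a'): [('p', 1.0), ('q', 2.0)],
            ('p', 'b'): [('p', 1.0), ('q', 2.0)],
            ('q', 'a'): [('q', 1.0)],
            ('q', 'b'): [('q', 1.0)]},
           initial={'p'}, final={'q'})
    m = A.abstract_semantics('aba')
    print(m)                      # three runs, each carrying a single weight 2
    print(aggregate_semiring(m))  # 6.0 = 2 + 2 + 2

On 'aba' the automaton has three accepting runs, hence a three-element multiset, and the semiring aggregation yields 6.0.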
This result has been extended to various weight domains and/or structures (see below). The proofs of equivalence in all these works are based on the same core argument, which relates runs of automata with the evaluation of formulae. Inspired by the above similarities, our choice of the abstract multiset semantics makes this core argument explicit. Because the abstract semantics is fully uninterpreted, no additional hypotheses on the weight domain are required to prove the equivalence. We then apply the aggregation operator to obtain a concrete equivalence between weighted automata and our core weighted logic. Our last contribution is to show, by means of purely logical reasoning, that our new fragment of core weighted logic is expressively equivalent to the logics proposed in the previous works. Over finite words, this allows us to recover the results over semirings BIB005, and over (product) valuation monoids and (product) valuation structures BIB012. Valuation monoids replace the product operation of the semiring by a lenient valuation operation, making it possible to consider discounted sums, averages, or more evolved combinations of sequences of weights. Valuation structures finally also replace the sum by a more general evaluation operator, allowing, for instance, ratios of several weights computed simultaneously. As an example, it is then possible to compute the ratio between rewards and costs, or the efficiency of use of a primary resource under some upper-bound constraint on a secondary resource. Our unifying proof gives new insights on the additional hypotheses (commutativity, distributivity, etc.) over the weight domains used in these works. After studying in full detail the case of finite words, we illustrate the uniformity of the method with respect to structures by considering ranked and unranked trees. Once again, our study revisits existing works over semirings BIB004, (product) valuation monoids BIB010, and also multi-operator monoids BIB008. The syntax of the logic in the case of multi-operator monoids is different from that of the other logics. The proof techniques used to show equivalence of the two formalisms are nevertheless very close to the original ones for semirings.
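To see how the aggregation layer absorbs the different weight domains mentioned above, the toy Python fragment below reinterprets one hard-coded abstract multiset under a max-plus semiring, an average valuation, a discounted-sum valuation, and a reward/cost ratio in the spirit of valuation structures; all names and values are ours, for illustration.

    from collections import Counter

    # A hard-coded abstract semantics: multiset of weight words (illustrative).
    abstract = Counter({(2.0, 4.0, 0.0): 1, (1.0, 1.0, 1.0): 2})

    def max_plus(ms):                      # semiring (R, max, +)
        return max(sum(ws) for ws in ms)

    def max_avg(ms):                       # valuation monoid: average valuation
        return max(sum(ws) / len(ws) for ws in ms)

    def max_discounted(ms, d=0.5):         # valuation monoid: discounted sum
        return max(sum(w * d ** i for i, w in enumerate(ws)) for ws in ms)

    # Valuation-structure flavour: weights are (reward, cost) pairs, evaluated
    # as the ratio of total reward to total cost, maximized over runs.
    pairs = Counter({((2.0, 1.0), (4.0, 2.0)): 1, ((1.0, 1.0), (1.0, 1.0)): 1})
    def max_ratio(ms):
        return max(sum(r for r, _ in ws) / sum(c for _, c in ws) for ws in ms)

    print(max_plus(abstract), max_avg(abstract), max_discounted(abstract))
    print(max_ratio(pairs))

The same multiset thus yields 6.0, 2.0, 4.0 and 2.0 respectively, depending only on the chosen aggregation.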
A unifying survey on weighted logics and weighted automata <s> Core weighted monadic second-order logic <s> Weighted automata are used to describe quantitative properties in various areas such as probabilistic systems, image compression, speech-to-text processing. The behaviour of such an automaton is a mapping, called a formal power series, assigning to each word a weight in some semiring. We generalize Buchi's and Elgot's fundamental theorems to this quantitative setting. We introduce a weighted version of MSO logic and prove that, for commutative semirings, the behaviours of weighted automata are precisely the formal power series definable with particular sentences of our weighted logic. We also consider weighted first-order logic and show that aperiodic series coincide with the first-order definable ones, if the semiring is locally finite, commutative and has some aperiodicity property. <s> BIB001 </s> A unifying survey on weighted logics and weighted automata <s> Core weighted monadic second-order logic <s> While a mature theory around logics such as MSO, LTL, and CTL has been developed in the pure boolean setting of finite automata, weighted automata lack such a natural connection with (temporal) logic and related verification algorithms. In this paper, we will identify weighted versions of MSO and CTL that generalize the classical logics and even other quantitative extensions such as probabilistic CTL. We establish expressiveness results on our logics giving translations from weighted and probabilistic CTL into weighted MSO. <s> BIB002 </s> A unifying survey on weighted logics and weighted automata <s> Core weighted monadic second-order logic <s> A multioperator monoid $\mathcal{A}$ is a commutative monoid with additional operations on its carrier set. A weighted tree automaton over $\mathcal{A}$ is a finite state tree automaton of which each transition is equipped with an operation of $\mathcal{A}$. We define M-expressions over $\mathcal{A}$ in the spirit of formulas of weighted monadic second-order logics and, as our main result, we prove that if $\mathcal{A}$ is absorptive, then the class of tree series recognizable by weighted tree automata over $\mathcal{A}$ coincides with the class of tree series definable by M-expressions over $\mathcal{A}$. This result implies the known fact that for the series over semirings recognizability by weighted tree automata is equivalent with definability in syntactically restricted weighted monadic second-order logic. We prove this implication by providing two purely syntactical transformations, from M-expressions into formulas of syntactically restricted weighted monadic second-order logic, and vice versa. <s> BIB003
We now turn to the description of a new weighted logic that will be equivalent to weighted automata. Most existing works start with the definition of a very general logic, and then introduce restrictions to match the expressive power of weighted automata. We take the opposite approach by defining a very basic weighted logic, yet powerful enough to be expressively equivalent to weighted automata. Our logic has three layers: the Boolean fragment, which is the classical MSO logic over words; a step weighted fragment (step-wMSO) defining step functions (i.e., piecewise constant functions with a finite number of pieces); and the core weighted logic (core-wMSO), which has the full expressive power of weighted automata. We will show in Section 5 that core-wMSO is a fragment of the (full) weighted MSO logic (wMSO) defined in BIB001. Considering a Boolean fragment inside a weighted logic was originally done in BIB002 and followed in many articles, see, e.g., BIB003. The syntax of the core weighted logic core-wMSO(Σ, R) is given in Table 1, with a ∈ Σ, r ∈ R, x, y first-order variables, and X a second-order variable.
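The step layer can be read as a small abstract syntax tree of nested if-then-else over Boolean conditions. In the Python sketch below, Boolean MSO formulae are abstracted as predicates over (word, position), which is an illustrative stand-in for the actual syntax of Table 1, not the syntax itself.

    from dataclasses import dataclass
    from typing import Callable, Union

    @dataclass
    class Const:                 # constant weight r in R
        r: float

    @dataclass
    class Ite:                   # phi ? then : els, phi a Boolean condition
        cond: Callable[[str, int], bool]
        then: 'Step'
        els: 'Step'

    Step = Union[Const, Ite]

    def eval_step(f: Step, word: str, pos: int) -> float:
        # A step formula denotes a piecewise constant weight at each position.
        if isinstance(f, Const):
            return f.r
        return eval_step(f.then if f.cond(word, pos) else f.els, word, pos)

    # "current letter is 'a' ? 2 : 1"; evaluating it position-wise gives the
    # weight word produced by a product quantifier over positions.
    psi = Ite(lambda w, i: w[i] == 'a', Const(2.0), Const(1.0))
    print([eval_step(psi, 'aba', i) for i in range(3)])   # [2.0, 1.0, 2.0]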
A unifying survey on weighted logics and weighted automata <s> Actually, (3) holds for arbitrary multisets <s> A multioperator monoid $\mathcal{A}$ is a commutative monoid with additional operations on its carrier set. A weighted tree automaton over $\mathcal{A}$ is a finite state tree automaton of which each transition is equipped with an operation of $\mathcal{A}$. We define M-expressions over $\mathcal{A}$ in the spirit of formulas of weighted monadic second-order logics and, as our main result, we prove that if $\mathcal{A}$ is absorptive, then the class of tree series recognizable by weighted tree automata over $\mathcal{A}$ coincides with the class of tree series definable by M-expressions over $\mathcal{A}$. This result implies the known fact that for the series over semirings recognizability by weighted tree automata is equivalent with definability in syntactically restricted weighted monadic second-order logic. We prove this implication by providing two purely syntactical transformations, from M-expressions into formulas of syntactically restricted weighted monadic second-order logic, and vice versa. <s> BIB001 </s> A unifying survey on weighted logics and weighted automata <s> Actually, (3) holds for arbitrary multisets <s> Weighted timed automata (WTA) model quantitative aspects of real-time systems like continuous consumption of memory, power or financial resources. They accept quantitative timed languages where every timed word is mapped to a value, e.g., a real number. In this paper, we prove a Nivat theorem for WTA which states that recognizable quantitative timed languages are exactly those which can be obtained from recognizable boolean timed languages with the help of several simple operations. We also introduce a weighted extension of relative distance logic developed by Wilke, and we show that our weighted relative distance logic and WTA are equally expressive. The proof of this result can be derived from our Nivat theorem and Wilke’s theorem for relative distance logic. Since the proof of our Nivat theorem is constructive, the translation process from logic to automata and vice versa is also constructive. This leads to decidability results for weighted relative distance logic. <s> BIB002
For instance, in the weighted timed setting BIB002, Droste and Perevoshchikov translate weighted (timed) automata into weighted sentences where the Boolean formulae inside the universal quantification are of the form x ∈ X only. In our context, this means allowing only set-step-wMSO inside a product ∏x. Similarly, in the context of trees BIB001, Fülöp, Stüber, and Vogler use an operation H(ω) which renames every node of the input tree with an operator from some family ω (coming from a multioperator monoid). Again, the renaming is described by means of formulae of the form x ∈ X only, and not by more general MSO formulae.
A unifying survey on weighted logics and weighted automata <s> Lemma 18 <s> The formalism of regular expressions was introduced by S. C. Kleene [6] to obtain the following basic theorems. <s> BIB001 </s> A unifying survey on weighted logics and weighted automata <s> Lemma 18 <s> 1. Motivation. Many variants of the notion of automaton have appeared in the literature. We find it convenient here to adopt the notion of E. F. Moore [7]. Inasmuch as Rabin-Scott [9] adopt this notion, too, it is convenient to refer to [9] for various results presumed here. In particular, Kleene's theorem [5, Theorems 3, 5] is used in the form in which it appears in [9]. It is often perspicacious to view regular expressions, and this notion is used in the sense of [3]. In general, we are concerned with the problems of automatically designing an automaton from a specification of a relation which is to hold between the automaton's input sequences and determined output sequences. These "design requirements" are given via a formula of some kind. The problems with which we are concerned have been described in [1]. With respect to particular formalisms for expressing "design requirements" as well as the notion of automaton itself, the problems are briefly and informally these: (1) to produce an algorithm which when it operates on an automaton and a design requirement produces the correct answer to the question "Does this automaton satisfy this design requirement?", or else show no such algorithm exists; (2) to produce an algorithm which operates on a design requirement and produces the correct answer to the question "Does there exist an automaton which satisfies this design requirement?", or else show no such algorithm exists; (3) to produce an algorithm which operates on a design requirement and terminates with an automaton which satisfies the requirement when one exists and otherwise fails to terminate, or else show no such algorithm exists. Interrelationships among problems (1), (2), (3) will appear in the paper [1]. This paper will also indicate the close connection between problem (1) and decision problems for truth of sentences of certain arithmetics. The paper [1 ] will also make use of certain results concerning weak arithmetics already obtained in the literature to obtain answers to problems (1) and (3). Thus <s> BIB002
The expressive power of core-wMSO(Σ, R) does not change if we replace step-wMSO(Σ, R) formulae by set-step-wMSO(R) formulae. Proof. We start with a core-wMSO formula Φ = ∏x Ψ where Ψ is a step-wMSO(Σ, R) formula. Let ϕ_1, ..., ϕ_n be the MSO formulae occurring in Ψ as conditions of the if-then-else operator. We let X̄ = (X_1, ..., X_n) be a tuple of fresh second-order variables. Let also Ψ′ be the formula obtained from Ψ by replacing every occurrence of ϕ_i by x ∈ X_i, for all 1 ≤ i ≤ n. Notice that Ψ′ is a set-step-wMSO(R) formula. We claim that Φ = ∏x Ψ is equivalent to the formula

Φ′ = ⨁X_1 · · · ⨁X_n ((⋀_{1≤i≤n} ∀x (x ∈ X_i ↔ ϕ_i)) ? ∏x Ψ′ : 0).

Indeed, let V = free(Φ) = free(∏x Ψ) and V′ = V ∪ {X_1, ..., X_n}. For every valid (w, σ) ∈ Σ^+_V there is a unique (w, σ′) ∈ Σ^+_{V′} such that σ′|_V = σ and w, σ′ |= ⋀_i ∀x (x ∈ X_i ↔ ϕ_i). For all 1 ≤ i ≤ n, we have σ′(X_i) = {j ∈ pos(w) | w, σ[x → j] |= ϕ_i}. We obtain {|Φ′|}(w, σ) = {|∏x Ψ′|}(w, σ′). Then, it is easy to check by induction on Ψ that, for all j ∈ pos(w), the value of Ψ under (w, σ[x → j]) coincides with the value of Ψ′ under (w, σ′[x → j]), which proves the claim. Proof (of Theorem 9). Let A = (Q, ∆, wgt, I, F) be a weighted automaton. We use a set variable X_δ for each transition δ ∈ ∆ and we let X̄ = (X_δ)_{δ∈∆}. Intuitively, the tuple X̄ encodes a run of A over a word w when each set variable X_δ is interpreted as the set of positions at which transition δ is used in that run. We can easily write an MSO formula run(X̄) which evaluates to true on some word w if and only if X̄ encodes a run of A on w starting from I and ending in F. First, we state that X̄ is a partition of the positions of w. Then we request that if the first position of w is in X_δ then δ ∈ I × Σ × Q is initial. Similarly, the transition of the last position should be final. Finally, if δ = (p, a, q) and δ′ = (p′, a′, q′) are the transitions of two consecutive positions of w, then q = p′. It is routine to write all these requirements in MSO (even in FO³). Assuming that run(X̄) holds, we let weight(x, X̄) be the set-step-wMSO formula which evaluates to wgt(δ), where δ ∈ ∆ is the unique transition such that x ∈ X_δ. Formally, if ∆ = {δ_1, δ_2, ..., δ_n} then we define weight(x, X̄) as

x ∈ X_{δ_1} ? wgt(δ_1) : (· · · (x ∈ X_{δ_{n−1}} ? wgt(δ_{n−1}) : wgt(δ_n)) · · ·)

and Φ_A = ⨁X̄ (run(X̄) ? ∏x weight(x, X̄) : 0). We can easily check that for all words w ∈ Σ^+ we have {|Φ_A|}(w) = {|A|}(w). Conversely, we proceed by induction on Φ, hence we have to deal with free variables. So we construct for each formula Φ a weighted automaton A_Φ over the alphabet Σ_Φ := Σ_{free(Φ)}. It is folklore that we may increase the set of variables encoded in the alphabet whenever needed, e.g., to deal with sum or if-then-else. Formally, if V ⊆ V′ then we can lift an automaton A_V defined on the alphabet Σ_V to an equivalent automaton over Σ_{V′}. The automaton A_0 has a single state which is initial but not final and has no transitions. We recall the classical constructions for the additive operators of core-wMSO: +, ⨁x and ⨁X. If Φ = Φ_1 + Φ_2 then A_Φ is obtained as the disjoint union of A_{Φ_1} and A_{Φ_2}, both lifted to Σ_Φ. If Φ = ⨁X Φ_1 then A_Φ is obtained via a variant of the projection construction starting from A_{Φ_1}. Assume that A_{Φ_1} = (Q, ∆, wgt, I, F). We define A_Φ = (Q × {0, 1}, ∆′, wgt′, I × {0}, F × {0, 1}) over the alphabet Σ_{free(Φ)} by letting ((p, i), a, (q, j)) ∈ ∆′ iff (p, (a, j), q) ∈ ∆, where (a, j) denotes the letter in Σ_{free(Φ)∪{X}} whose X-component is given by j and whose remaining Σ_{free(Φ)}-components (different from X) are given by a. We also let wgt′((p, i), a, (q, j)) = wgt(p, (a, j), q).
This transfer of the alphabet component for X to the state of A_Φ allows us to define a bijection between the accepting runs of A_{Φ_1} and the accepting runs of A_Φ, preserving sequences of weights. Then, we deduce easily that {|A_Φ|} = {|Φ|} over the alphabet Σ_{free(Φ)}. If Φ = Σ_x Φ_1, the construction is almost the same. In the definition of A_Φ, the set of accepting states is F × {1} and the transitions are given by ((p, 0), a, (q, j)) ∈ ∆' iff (p, (a, j), q) ∈ ∆, and ((p, 1), a, (q, 1)) ∈ ∆' iff (p, (a, 0), q) ∈ ∆, with weights inherited as before: wgt'((p, 0), a, (q, j)) = wgt(p, (a, j), q) and wgt'((p, 1), a, (q, 1)) = wgt(p, (a, 0), q). We turn now to the more interesting cases: if-then-else and Π_x. Notice that ϕ ? Φ_1 : Φ_2 is equivalent to (ϕ ? Φ_1 : 0) + (¬ϕ ? Φ_2 : 0), hence we only need to construct an automaton for Φ = ϕ ? Φ_1 : 0. Let V = free(Φ) = free(ϕ) ∪ free(Φ_1). Since ϕ is a (Boolean) MSO formula, by BIB001 BIB002, we can construct a deterministic automaton A_ϕ over the alphabet Σ_V which accepts a word w ∈ Σ^+_V if and only if it is a valid encoding w = (w, σ) satisfying ϕ (we could also use an unambiguous automaton for A_ϕ). Now, by induction, we have an automaton A_{Φ_1} over the alphabet Σ_V such that {|A_{Φ_1}|} = {|Φ_1|}. The automaton A_Φ is obtained as the "intersection" of A_ϕ and A_{Φ_1} (see the formal construction below). Now, let w ∈ Σ^+_V. If w is not valid, or w = (w, σ) is valid and does not satisfy ϕ, then A_ϕ (hence also A_Φ) has no accepting run on w and we obtain {|A_Φ|}(w) = ∅ = {|Φ|}(w). On the other hand, assume that w = (w, σ) is valid and satisfies ϕ. Since A_ϕ is deterministic, there is a bijection between the accepting runs of A_Φ and the accepting runs of A_{Φ_1}. By construction of A_Φ, this bijection preserves the sequence of weights associated with a run. We deduce that {|A_Φ|}(w, σ) = {|A_{Φ_1}|}(w, σ) = {|Φ|}(w, σ). We give now the formal definition of A_Φ. Let A_{Φ_1} = (Q_1, ∆_1, wgt_1, I_1, F_1) be the weighted automaton over Σ_V given by induction, and let A_ϕ = (Q_2, ∆_2, I_2, F_2). We define A_Φ = (Q_1 × Q_2, ∆, wgt, I_1 × I_2, F_1 × F_2) where ∆ is the set of triples δ = ((p_1, p_2), a, (q_1, q_2)) such that δ_1 = (p_1, a, q_1) ∈ ∆_1 and (p_2, a, q_2) ∈ ∆_2, and wgt(δ) = wgt_1(δ_1). Finally, it remains to deal with the case Φ = Π_x Ψ. By Lemma 18, we may assume that Ψ is a formula in set-step-wMSO(R). So free(Ψ) = {x, X_1, ..., X_n} and the tests in Ψ are of the form x ∈ X_i for some i ∈ {1, ..., n}. Also, free(Φ) = {X_1, ..., X_n} consists of second-order variables only, so every word w ∈ Σ^+_{free(Φ)} is valid. For every τ ∈ {0, 1}^n, we define the evaluation Ψ(τ) ∈ R inductively as follows: r(τ) = r, and (x ∈ X_i ? Ψ_1 : Ψ_2)(τ) = Ψ_1(τ) if τ_i = 1, and Ψ_2(τ) otherwise. Let w = (a_1, τ_1) · · · (a_k, τ_k) ∈ Σ^+_{free(Φ)} with a_j ∈ Σ and τ_j ∈ {0, 1}^{free(Φ)} for all 1 ≤ j ≤ k. We can easily check that {|Φ|}(w) = {{Ψ(τ_1) · · · Ψ(τ_k)}}. Define A_Φ = (Q, ∆, wgt, I, F) with a single state which is both initial and final (Q = I = F = {q}), and for every a ∈ Σ and τ ∈ {0, 1}^{free(Φ)}, there is a transition δ = (q, (a, τ), q) ∈ ∆ with wgt(δ) = Ψ(τ). It is clear that for every word w = (a_1, τ_1) · · · (a_k, τ_k) ∈ Σ^+_{free(Φ)}, the automaton A_Φ has a single run on w whose sequence of weights is Ψ(τ_1) · · · Ψ(τ_k). Therefore, {|A_Φ|}(w) = {|Φ|}(w), which concludes the proof.
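To make the abstract semantics used throughout this proof more tangible, here is a small Python sketch (our own illustration, not part of the formal development; the automaton, word and weights below are hypothetical) computing {|A|}(w) as the multiset of weight sequences of accepting runs:

    from collections import Counter

    # A weighted word automaton A = (Q, Delta, wgt, I, F); the states are
    # implicit in the transitions. The abstract semantics {|A|}(w) is the
    # multiset of weight sequences of accepting runs, modelled as a Counter
    # over tuples of weights.
    class WA:
        def __init__(self, delta, wgt, initial, final):
            self.delta = set(delta)      # transitions (p, a, q)
            self.wgt = dict(wgt)         # transition -> weight in R
            self.initial = set(initial)  # initial states I
            self.final = set(final)      # final states F

        def abstract_semantics(self, word):
            # frontier: pairs (current state, weight sequence of the partial run)
            frontier = [(q, ()) for q in self.initial]
            for a in word:
                frontier = [(q, seq + (self.wgt[(p, b, q)],))
                            for (p, seq) in frontier
                            for (p2, b, q) in self.delta
                            if p2 == p and b == a]
            return Counter(seq for (q, seq) in frontier if q in self.final)

    # Two accepting runs on "ab", with weight sequences (1, 2) and (3, 4):
    A = WA(delta={("i", "a", "p"), ("i", "a", "q"),
                  ("p", "b", "f"), ("q", "b", "f")},
           wgt={("i", "a", "p"): 1, ("i", "a", "q"): 3,
                ("p", "b", "f"): 2, ("q", "b", "f"): 4},
           initial={"i"}, final={"f"})
    print(A.abstract_semantics("ab"))  # Counter({(1, 2): 1, (3, 4): 1})

Since the abstract semantics is just a multiset, the disjoint-union construction for + corresponds to multiset union of the results, and intersecting with a deterministic automaton for ϕ, as in the if-then-else case above, neither duplicates nor loses weight sequences.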
A unifying survey on weighted logics and weighted automata <s> Restricted weighted MSO logic <s> Weighted automata are used to describe quantitative properties in various areas such as probabilistic systems, image compression, speech-to-text processing. The behaviour of such an automaton is a mapping, called a formal power series, assigning to each word a weight in some semiring. We generalize Buchi's and Elgot's fundamental theorems to this quantitative setting. We introduce a weighted version of MSO logic and prove that, for commutative semirings, the behaviours of weighted automata are precisely the formal power series definable with particular sentences of our weighted logic. We also consider weighted first-order logic and show that aperiodic series coincide with the first-order definable ones, if the semiring is locally finite, commutative and has some aperiodicity property. <s> BIB001 </s> A unifying survey on weighted logics and weighted automata <s> Restricted weighted MSO logic <s> Weighted automata are non-deterministic automata where the transitions are equipped with weights. They can model quantitative aspects of systems like costs or energy consumption. The value of a run can be computed, for example, as the maximum, limit average, or discounted sum of transition weights. In multi-weighted automata, transitions carry several weights and can model, for example, the ratio between rewards and costs, or the efficiency of use of a primary resource under some upper bound constraint on a secondary resource. Here, we introduce a general model for multi-weighted automata as well as a multi-weighted MSO logic. In our main results, we show that this multi-weighted MSO logic and multi-weighted automata are expressively equivalent both for finite and infinite words. The translation process is effective, leading to decidability results for our multi-weighted MSO logic. <s> BIB002
We now present the syntax and semantics of the full wMSO logic that has been studied over semirings BIB001, valuation monoids and valuation structures BIB002. The syntax used in these previous works differs slightly from the one adopted here. Also, there is no separate semantics for the Boolean fragment; instead, it is obtained as a special case of the quantitative semantics. As we will see, this choice requires some additional conditions on the weight domain, called hypothesis (01) below. In order to obtain the same expressive power as weighted automata, we also have to restrict the usage of conjunction and universal quantification in wMSO. We present effective translations in both directions relating restricted wMSO with core-wMSO, and the conditions that the weight domain has to fulfil in the different settings. Using Corollary 11, we obtain a purely logical proof of the equivalence between restricted wMSO and weighted automata, using core-wMSO as an intermediary logical formalism that is both simple and elegant.
A unifying survey on weighted logics and weighted automata <s> Extensions to ranked and unranked trees <s> The formalism of regular expressions was introduced by S. C. Kleene [6] to obtain the following basic theorems. <s> BIB001 </s> A unifying survey on weighted logics and weighted automata <s> Extensions to ranked and unranked trees <s> 1. Motivation. Many variants of the notion of automaton have appeared in the literature. We find it convenient here to adopt the notion of E. F. Moore [7]. Inasmuch as Rabin-Scott [9] adopt this notion, too, it is convenient to refer to [9] for various results presumed here. In particular, Kleene's theorem [5, Theorems 3, 5] is used in the form in which it appears in [9]. It is often perspicacious to view regular expressions, and this notion is used in the sense of [3]. In general, we are concerned with the problems of automatically designing an automaton from a specification of a relation which is to hold between the automaton's input sequences and determined output sequences. These "design requirements" are given via a formula of some kind. The problems with which we are concerned have been described in [1]. With respect to particular formalisms for expressing "design requirements" as well as the notion of automaton itself, the problems are briefly and informally these: (1) to produce an algorithm which when it operates on an automaton and a design requirement produces the correct answer to the question "Does this automaton satisfy this design requirement?", or else show no such algorithm exists; (2) to produce an algorithm which operates on a design requirement and produces the correct answer to the question "Does there exist an automaton which satisfies this design requirement?", or else show no such algorithm exists; (3) to produce an algorithm which operates on a design requirement and terminates with an automaton which satisfies the requirement when one exists and otherwise fails to terminate, or else show no such algorithm exists. Interrelationships among problems (1), (2), (3) will appear in the paper [1]. This paper will also indicate the close connection between problem (1) and decision problems for truth of sentences of certain arithmetics. The paper [1 ] will also make use of certain results concerning weak arithmetics already obtained in the literature to obtain answers to problems (1) and (3). Thus <s> BIB002 </s> A unifying survey on weighted logics and weighted automata <s> Extensions to ranked and unranked trees <s> Many of the important concepts and results of conventional finite automata theory are developed for a generalization in which finite algebras take the place of finite automata. The standard closure theorems are proved for the class of sets “recognizable” by finite algebras, and a generalization of Kleene's regularity theory is presented. The theorems of the generalized theory are then applied to obtain a positive solution to a decision problem of second-order logic. <s> BIB003 </s> A unifying survey on weighted logics and weighted automata <s> Extensions to ranked and unranked trees <s> We define a weighted monadic second order logic for trees where the weights are taken from a commutative semiring. We prove that a restricted version of this logic characterizes the class of formal tree series which are accepted by weighted bottom-up finite state tree automata. The restriction on the logic can be dropped if additionally the semiring is locally finite. 
This generalizes corresponding classical results of Thatcher, Wright, and Doner for tree languages and it extends recent results of Droste and Gastin [Weighted automata and weighted logics, in: Automata, Languages and Programming--32nd International Colloquium, ICALP 2005, Lisbon, Portugal, 2005, Proceedings, Lecture Notes in Computer Science, Vol. 3580, Springer, Berlin, 2005, pp. 513-525, full version in Theoretical Computer Science, to appear.] from formal power series on words to formal tree series. <s> BIB004 </s> A unifying survey on weighted logics and weighted automata <s> Extensions to ranked and unranked trees <s> Quantitative aspects of systems can be modeled by weighted automata. Here, we deal with such automata running on finite trees. Usually, transitions are weighted with elements of a semiring and the behavior of the automaton is obtained by multiplying the weights along a run. We turn to a more general cost model: the weight of a run is now determined by a global valuation function. An example of such a valuation function is the average of the weights. We establish a characterization of the behaviors of these weighted finite tree automata by fragments of weighted monadic second-order logic. For bi-locally finite bimonoids, we show that weighted tree automata capture the expressive power of several semantics of full weighted MSO logic. Decision procedures follow as consequences. <s> BIB005 </s> A unifying survey on weighted logics and weighted automata <s> Extensions to ranked and unranked trees <s> We introduce a new behavior of weighted unranked tree automata. We prove a characterization of this behavior by two fragments of weighted MSO logic and thereby provide a solution of an open equivalence problem of Droste and Vogler. The characterization works for valuation monoids as weight structures; they include all semirings and, in addition, enable us to cope with average. <s> BIB006
In this section, we show how to extend the equivalence between weighted automata and core-wMSO to other structures, namely ranked and unranked trees. We will primarily use a semantics in multisets of weight trees (instead of weight sequences). Then, we may apply an aggregation operator to recover a more concrete semantics. This approach allows us to infer results for semirings BIB004 and also for tree valuation monoids BIB005. There are two main ingredients allowing us to prove the equivalence between core-wMSO and weighted automata. First, in the Boolean case, we should have an equivalence between unambiguous (or deterministic) automata and MSO logic. This equivalence is known for many structures such as words BIB001 BIB002, ranked trees BIB003, unranked trees, etc. Second, the computation of the weight of a run ρ of an automaton and the evaluation of a product formula Π_x Ψ should be based on the same mechanism. For words and valuation monoids (or valuation structures), it is the valuation of a sequence of weights. This is why we used an abstract semantics in the semiring of multisets of weight sequences. For trees and tree valuation monoids, the valuation takes a tree of weights as input and returns a value in the monoid. Hence, we use multisets of weight trees as abstract semantics. Note that multisets of weight trees form a monoid but not a semiring BIB006.
A unifying survey on weighted logics and weighted automata <s> Weighted automata over trees <s> We define a weighted monadic second order logic for trees where the weights are taken from a commutative semiring. We prove that a restricted version of this logic characterizes the class of formal tree series which are accepted by weighted bottom-up finite state tree automata. The restriction on the logic can be dropped if additionally the semiring is locally finite. This generalizes corresponding classical results of Thatcher, Wright, and Doner for tree languages and it extends recent results of Droste and Gastin [Weighted automata and weighted logics, in: Automata, Languages and Programming--32nd International Colloquium, ICALP 2005, Lisbon, Portugal, 2005, Proceedings, Lecture Notes in Computer Science, Vol. 3580, Springer, Berlin, 2005, pp. 513-525, full version in Theoretical Computer Science, to appear.] from formal power series on words to formal tree series. <s> BIB001 </s> A unifying survey on weighted logics and weighted automata <s> Weighted automata over trees <s> A multioperator monoid $\mathcal{A}$ is a commutative monoid with additional operations on its carrier set. A weighted tree automaton over $\mathcal{A}$ is a finite state tree automaton of which each transition is equipped with an operation of $\mathcal{A}$. We define M-expressions over $\mathcal{A}$ in the spirit of formulas of weighted monadic second-order logics and, as our main result, we prove that if $\mathcal{A}$ is absorptive, then the class of tree series recognizable by weighted tree automata over $\mathcal{A}$ coincides with the class of tree series definable by M-expressions over $\mathcal{A}$. This result implies the known fact that for the series over semirings recognizability by weighted tree automata is equivalent with definability in syntactically restricted weighted monadic second-order logic. We prove this implication by providing two purely syntactical transformations, from M-expressions into formulas of syntactically restricted weighted monadic second-order logic, and vice versa. <s> BIB002 </s> A unifying survey on weighted logics and weighted automata <s> Weighted automata over trees <s> Quantitative aspects of systems can be modeled by weighted automata. Here, we deal with such automata running on finite trees. Usually, transitions are weighted with elements of a semiring and the behavior of the automaton is obtained by multiplying the weights along a run. We turn to a more general cost model: the weight of a run is now determined by a global valuation function. An example of such a valuation function is the average of the weights. We establish a characterization of the behaviors of these weighted finite tree automata by fragments of weighted monadic second-order logic. For bi-locally finite bimonoids, we show that weighted tree automata capture the expressive power of several semantics of full weighted MSO logic. Decision procedures follow as consequences. <s> BIB003 </s> A unifying survey on weighted logics and weighted automata <s> Weighted automata over trees <s> We introduce a new behavior of weighted unranked tree automata. We prove a characterization of this behavior by two fragments of weighted MSO logic and thereby provide a solution of an open equivalence problem of Droste and Vogler. The characterization works for valuation monoids as weight structures; they include all semirings and, in addition, enable us to cope with average. <s> BIB004
An R-weighted (unranked) tree automaton over Σ is a tuple A = (Q, ∆, wgt, F) with (Q, ∆, F) a tree automaton and wgt : ∆ → R associating a weight to every transition. The weight tree arising from a run ρ of A over a Σ-tree t is the R-tree wgt ∘ ρ mapping each u ∈ dom(t) to wgt(ρ(u)) ∈ R. The abstract semantics of an R-weighted tree automaton A is a multiset of weight trees. For all trees t ∈ UT_Σ, we define {|A|}(t) = {{wgt ∘ ρ | ρ is an accepting run of A over t}}. Hence, our abstract semantics lives in the commutative monoid N⟨UT_R⟩ of multisets of R-trees. Then, we may use an aggregation operator aggr : N⟨UT_R⟩ → S to obtain a concrete semantics in a possibly different weight structure S: [[A]](t) = aggr({|A|}(t)). Example 28 (Weighted automata over semirings) In the classical setting, the set R of weights is a subset of a semiring (S, +, ×, 0, 1). The value of a run ρ of A over a Σ-tree t is the product of the weights in the R-tree wgt ∘ ρ. Since the semiring is not necessarily commutative, we have to specify the order in which this product is computed. Classically, we choose the postfix order. Formally, given an R-tree ν, the product π(ν) = Prod(ν, ε) is computed bottom-up: for all u ∈ dom(ν) we set Prod(ν, u) = Prod(ν, u·1) × · · · × Prod(ν, u·ar(u)) × ν(u). Note that if u is a leaf then Prod(ν, u) = ν(u). As for words, the mapping π : UT_R → S can be lifted to a mapping π : N⟨UT_R⟩ → N⟨S⟩. Then, the semantics is defined as always by summing the values of the accepting runs: [[A]](t) = Σ_ρ π(wgt ∘ ρ), where the sum ranges over accepting runs ρ of A over the Σ-tree t. Therefore, the classical case of semirings is obtained from the abstract semantics with the aggregation operator aggr_sr(A) = Σ_{ν∈A} π(ν). In the case of a ranked alphabet, we recover the definition of BIB001 of weighted tree automata. The comparison with the weighted unranked tree automata of Droste and Vogler is not as easy, at least over non-commutative semirings. We believe that over commutative semirings, our model is equivalent to the weighted unranked tree automata of Droste and Vogler. The situation is different over non-commutative semirings. Our definition is best motivated by considering words as special cases of trees. There are two ways to inject words in unranked trees, as shown in Fig. 2: either in a horizontal way (a root whose children represent the word from left to right), or in a vertical way (unary nodes followed by a leaf, the word being read from bottom to top). With some easy encodings, we may see that our model of weighted unranked tree automata is a conservative extension of weighted word automata, both for the horizontal and the vertical injections of words. Moreover, our approach allows us to obtain the equivalence between automata and logic for arbitrary semirings (even non-commutative ones), as stated in Theorem 30. In contrast, the model of Droste and Vogler is not a conservative extension of weighted word automata for the horizontal injection. This is witnessed by an example given in Theorem 6.10 of that work, which we now recall. In the (non-commutative) semiring (P({p, q}*), ∪, ·, ∅, {ε}), with two distinct letters p and q, we consider the tree series f : UT_Σ → P({p, q}*) mapping every tree t composed of a root directly followed by n children (n ∈ N) to the language {p^n q^n}, and every other tree to ∅. The model of weighted unranked tree automata we have chosen cannot recognise this tree series. However, the model of automata described by Droste and Vogler is able to recognise this tree series. The main difference between the two models, which explains this discrepancy, is the way weights are assigned during the computation of the automaton.
Whereas we have decided to assign weights to the transitions of the unranked tree automaton, keeping a Boolean regular (hedge) language to determine whether a transition is enabled, Droste and Vogler decided instead to use a weighted (hedge) automaton when reading the sequence of states of the children. Then, each position in the tree domain is associated with the weight of the (hedge) automaton reading the sequence of states of the children. The semantics over a tree is given by a depth-first left-to-right product of those weights (first the weights of the children from left to right, and then the weight of the parent). Example 29 (Tree valuation monoids) As for words, extensions of weighted automata to more general weight domains have been considered. Following BIB003, a tree valuation monoid is a tuple (S, +, 0, Val) where (S, +, 0) is a commutative monoid and Val : UT_S → S is a valuation function from S-trees to S. The value of a run ρ is now computed by applying this valuation function to the R-tree wgt ∘ ρ. The final semantics is obtained as above by summing the values of accepting runs. Therefore, the semantics in tree valuation monoids is obtained from the abstract semantics with the aggregation operator aggr_tvm(A) = Σ_{ν∈A} Val(ν). For instance, when (S, +, ×, 0, 1) is a semiring, we obtain a tree valuation monoid with the postfix product defined in Example 28. We refer to BIB003 for other examples of weighted ranked tree automata, including the interesting case of multi-operator monoids which is also studied in BIB002. A further extension for unranked trees has recently been considered in BIB004. Further extensions like tree valuation structures (where the sum in tree valuation monoids is replaced by a more general operator F, as for words) are also possible, though not considered in the literature so far. Our results will apply in this context as well.
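To illustrate Examples 28 and 29, the following Python sketch (our own illustration; the encoding of an R-tree as a pair (weight, tuple of subtrees) and the names prod and aggr_sr are assumptions made for the example) computes the postfix product π(ν) = Prod(ν, ε) bottom-up and aggregates a multiset of weight trees in the semiring case:

    from collections import Counter

    # An R-tree nu is encoded as a pair (weight, tuple of subtrees); tuples
    # keep the trees hashable, so multisets can be modelled by Counter.

    def prod(tree, mul=lambda x, y: x * y):
        # Postfix product: Prod(nu, u) = Prod(nu, u.1) x ... x Prod(nu, u.ar(u)) x nu(u)
        weight, children = tree
        result = None
        for child in children:            # children first, left to right
            value = prod(child, mul)
            result = value if result is None else mul(result, value)
        return weight if result is None else mul(result, weight)

    def aggr_sr(multiset, add=lambda x, y: x + y, mul=lambda x, y: x * y):
        # Semiring aggregation: sum of the postfix products of the weight trees.
        total = None
        for tree, multiplicity in multiset.items():
            for _ in range(multiplicity):
                value = prod(tree, mul)
                total = value if total is None else add(total, value)
        return total                      # None encodes the empty sum 0

    # Weight tree with root 2 and leaves 3 and 5: postfix product 3 * 5 * 2 = 30.
    nu = (2, ((3, ()), (5, ())))
    print(prod(nu))                   # 30
    print(aggr_sr(Counter({nu: 2})))  # 30 + 30 = 60

Replacing the call to prod by an arbitrary valuation function Val : UT_S → S yields exactly the aggregation operator aggr_tvm of Example 29.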
A unifying survey on weighted logics and weighted automata <s> Conclusion <s> Weighted automata are used to describe quantitative properties in various areas such as probabilistic systems, image compression, speech-to-text processing. The behaviour of such an automaton is a mapping, called a formal power series, assigning to each word a weight in some semiring. We generalize Buchi's and Elgot's fundamental theorems to this quantitative setting. We introduce a weighted version of MSO logic and prove that, for commutative semirings, the behaviours of weighted automata are precisely the formal power series definable with particular sentences of our weighted logic. We also consider weighted first-order logic and show that aperiodic series coincide with the first-order definable ones, if the semiring is locally finite, commutative and has some aperiodicity property. <s> BIB001 </s> A unifying survey on weighted logics and weighted automata <s> Conclusion <s> Weighted automata are non-deterministic automata where the transitions are equipped with weights. They can model quantitative aspects of systems like costs or energy consumption. The value of a run can be computed, for example, as the maximum, limit average, or discounted sum of transition weights. In multi-weighted automata, transitions carry several weights and can model, for example, the ratio between rewards and costs, or the efficiency of use of a primary resource under some upper bound constraint on a secondary resource. Here, we introduce a general model for multi-weighted automata as well as a multi-weighted MSO logic. In our main results, we show that this multi-weighted MSO logic and multi-weighted automata are expressively equivalent both for finite and infinite words. The translation process is effective, leading to decidability results for our multi-weighted MSO logic. <s> BIB002
We proved the meta-theorem relating weighted automata and core-wMSO at the level of multisets of weight structures for words and trees. However, the definitions and techniques developed in this article can easily be adapted to other structures like nested words, Mazurkiewicz traces, etc. The logical equivalence between restricted wMSO and core-wMSO at the concrete level is established for words in Section 5. An analogous result can be obtained for trees with similar logical reasoning. In particular, this allows for an extension to trees of the valuation structures of BIB002. In this article, our meta-theorem is only stated and proved for finite structures. At the level of the concrete semantics, equivalences between weighted automata and weighted logics have been extended to infinite structures, such as words or trees over semirings BIB001, valuation monoids or valuation structures BIB002. An extension of our meta-theorem to infinite structures capturing these results is a natural open problem. Finite multisets of weight structures are no longer adequate since an automaton may exhibit infinitely many runs on a given input structure. The abstract semantics should ideally distinguish between countably many and uncountably many runs.
An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> A definition of the concept 'intuitionistic fuzzy set' (IFS) is given, the latter being a generalization of the concept 'fuzzy set' and an example is described. Various properties are proved, which are connected to the operations and relations over sets, and with modal and topological operators, defined over the set of IFS's. <s> BIB001 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Abstract New results on intuitionistic fuzzy sets are introduced. Two news operators on intuitionistic fuzzy sets are defined and their basic properties are studied. <s> BIB002 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> We briefly describe the Ordered Weighted Averaging (OWA) operator and discuss a methodology for learning the associated weighting vector from observational data. We then introduce a more general type of OWA operator called the Induced Ordered Weighted Averaging (IOWA) Operator. These operators take as their argument pairs, called OWA pairs, in which one component is used to induce an ordering over the second components which are then aggregated. A number of different aggregation situations have been shown to be representable in this framework. We then show how this tool can be used to represent different types of aggregation models. <s> BIB003 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> In this paper, two uncertain linguistic aggregation operators called uncertain linguistic ordered weighted averaging (ULOWA) operator and uncertain linguistic hybrid aggregation (ULHA) operator are proposed. An approach to multiple attribute group decision making with uncertain linguistic information is developed based on the ULOWA and the ULHA operators. Finally, a practical application of the developed approach to the problem of evaluating university faculty for tenure and promotion is given. <s> BIB004 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> In this paper, we define various generalized induced linguistic aggregation operators, including generalized induced linguistic ordered weighted averaging (GILOWA) operator, generalized induced linguistic ordered weighted geometric (GILOWG) operator, generalized induced uncertain linguistic ordered weighted averaging (GIULOWA) operator, generalized induced uncertain linguistic ordered weighted geometric (GIULOWG) operator, etc. Each object processed by these operators consists of three components, where the first component represents the importance degree or character of the second component, and the second component is used to induce an ordering, through the first component, over the third components which are linguistic variables (or uncertain linguistic variables) and then aggregated. 
It is shown that the induced linguistic ordered weighted averaging (ILOWA) operator and linguistic ordered weighted averaging (LOWA) operator are the special cases of the GILOWA operator, induced linguistic ordered weighted geometric (ILOWG) operator and linguistic ordered weighted geometric (LOWG) operator are the special cases of the GILOWG operator, the induced uncertain linguistic ordered weighted averaging (IULOWA) operator and uncertain linguistic ordered weighted averaging (ULOWA) operator are the special cases of the GIULOWA operator, and that the induced uncertain linguistic ordered weighted geometric (IULOWG) operator and uncertain LOWG operator are the special cases of the GILOWG operator. <s> BIB005 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> The weighted geometric (WG) operator and the ordered weighted geometric (OWG) operator are two common aggregation operators in the field of information fusion. But these two aggregation operators are usually used in situations where the given arguments are expressed as crisp numbers or linguistic values. In this paper, we develop some new geometric aggregation operators, such as the intuitionistic fuzzy weighted geometric (IFWG) operator, the intuitionistic fuzzy ordered weighted geometric (IFOWG) operator, and the intuitionistic fuzzy hybrid geometric (IFHG) operator, which extend the WG and OWG operators to accommodate the environment in which the given arguments are intuitionistic fuzzy sets which are characterized by a membership function and a non-membership function. Some numerical examples are given to illustrate the developed operators. Finally, we give an application of the IFHG operator to multiple attribute decision making based on intuitionistic fuzzy sets. <s> BIB006 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> An intuitionistic fuzzy set, characterized by a membership function and a non-membership function, is a generalization of fuzzy set. In this paper, based on score function and accuracy function, we introduce a method for the comparison between two intuitionistic fuzzy values and then develop some aggregation operators, such as the intuitionistic fuzzy weighted averaging operator, intuitionistic fuzzy ordered weighted averaging operator, and intuitionistic fuzzy hybrid aggregation operator, for aggregating intuitionistic fuzzy values and establish various properties of these operators. <s> BIB007 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> This paper presents a new interpretation of intuitionistic fuzzy sets in the framework of the Dempster-Shafer theory of evidence (DST). This interpretation makes it possible to represent all mathematical operations on intuitionistic fuzzy values as the operations on belief intervals. Such approach allows us to use directly the Dempster's rule of combination to aggregate local criteria presented by intuitionistic fuzzy values in the decision making problem. The usefulness of the developed method is illustrated with the known example of multiple criteria decision making problem. The proposed approach and a new method for interval comparison based on DST, allow us to solve multiple criteria decision making problem without intermediate defuzzification when not only criteria, but their weights are intuitionistic fuzzy values. 
<s> BIB008 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> With respect to the multiple attribute group decision making problems in which the attribute weights are unknown and the attribute values take the form of the intuitionistic linguistic numbers, an expanded technique for order preference by similarity to ideal solution (TOPSIS) method is proposed. Firstly, the definition of intuitionistic linguistic number and the operational laws are given and distance between intuitionistic linguistic numbers is defined. Then, the attribute weights are determined based on the ‘maximizing deviation method’ and an extended TOPSIS method is developed to rank the alternatives. Finally, an illustrative example is given to verify the developed approach and to demonstrate its practicality and effectiveness. <s> BIB009 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> With respect to multiple attribute group decision making problems with linguistic information, some new decision analysis methods are proposed. Firstly, we develop three new aggregation operators: generalized 2-tuple weighted average (G-2TWA) operator, generalized 2-tuple ordered weighted average (G-2TOWA) operator and induced generalized 2-tuple ordered weighted average (IG-2TOWA) operator. Then, a method based on the IG-2TOWA and G-2TWA operators for multiple attribute group decision making is presented. In this approach, alternative appraisal values are calculated by the aggregation of 2-tuple linguistic information. Thus, the ranking of alternative or selection of the most desirable alternative(s) is obtained by the comparison of 2-tuple linguistic information. Finally, a numerical example is used to illustrate the applicability and effectiveness of the proposed method. <s> BIB010 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> We study the induced generalized aggregation operators under intuitionistic fuzzy environments. Choquet integral and Dempster-Shafer theory of evidence are applied to aggregate inuitionistic fuzzy information and some new types of aggregation operators are developed, including the induced generalized intuitionistic fuzzy Choquet integral operators and induced generalized intuitionistic fuzzy Dempster-Shafer operators. Then we investigate their various properties and some of their special cases. Additionally, we apply the developed operators to financial decision making under intuitionistic fuzzy environments. Some extensions in interval-valued intuitionistic fuzzy situations are also pointed out. <s> BIB011 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> With respect to multiple attribute group decision making (MAGDM) problems in which the attribute weights and the expert weights take the form of real numbers and the attribute values take the form of intuitionistic uncertain linguistic variables, new group decision making methods have been developed. First, operational laws, expected value definitions, score functions and accuracy functions of intuitionistic uncertain linguistic variables are introduced. Then, an intuitionistic uncertain linguistic weighted geometric average (IULWGA) operator and an intuitionistic uncertain linguistic ordered weighted geometric (IULOWG) operator are developed. 
Furthermore, some desirable properties of these operators, such as commutativity, idempotency, monotonicity and boundedness, have been studied, and an intuitionistic uncertain linguistic hybrid geometric (IULHG) operator, which generalizes both the IULWGA operator and the IULOWG operator, was developed. Based on these operators, two methods for multiple attribute group decision making problems with intuitionistic uncertain linguistic information have been proposed. Finally, an illustrative example is given to verify the developed approaches and demonstrate their practicality and effectiveness. <s> BIB012 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> We introduce a wide range of induced and linguistic generalized aggregation operators. First, we present the induced linguistic generalized ordered weighted averaging (ILGOWA) operator. It is a generalization of the OWA operator that uses linguistic variables, order inducing variables and generalized means in order to provide a more general formulation. One of its main results is that it includes a wide range of linguistic aggregation operators such as the induced linguistic OWA (ILOWA), the induced linguistic OWG (ILOWG) and the linguistic generalized OWA (LGOWA) operator. We further generalize the ILGOWA operator by using quasi-arithmetic means obtaining the induced linguistic quasi-arithmetic OWA (Quasi-ILOWA) operator and by using hybrid averages forming the induced linguistic generalized hybrid average (ILGHA) operator. We also present a further extension with Choquet integrals. We call it the induced linguistic generalized Choquet integral aggregation (ILGCIA). We end the paper with an application of the new approach in a linguistic group decision making problem. <s> BIB013 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> With respect to multiple attribute group decision making (MAGDM) problems in which both the attribute weights and the expert weights take the form of real numbers, attribute values take the form of intuitionistic linguistic numbers, the group decision making methods based on some generalized dependent aggregation operators are developed. Firstly, score function and accuracy function of intuitionistic linguistic numbers are introduced. Then, an intuitionistic linguistic generalized dependent ordered weighted average (ILGDOWA) operator and an intuitionistic linguistic generalized dependent hybrid weighted aggregation (ILGDHWA) operator are developed. Furthermore, some desirable properties of the ILGDOWA operator, such as commutativity, idempotency and monotonicity, etc. are studied. At the same time, some special cases of the generalized parameters in these operators are analyzed. Based on the ILGDOWA and ILGDHWA operators, the approach to multiple attribute group decision making with intuitionistic linguistic information is proposed. Finally, an illustrative example is given to verify the developed approaches and to demonstrate their practicality and effectiveness. <s> BIB014 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> In this paper, a new concept of interval-valued intuitionistic linguistic number IVILN, which is characterised by a linguistic term, an interval-valued membership degree and an interval-valued non-membership degree, is first introduced. 
Then, score function, accuracy function and some multiplicative operational laws of IVILNs are defined. Based on these two functions, a simple approach for the comparison between two IVILNs is presented. Based on these operational laws, some new geometric aggregation operators, such as the interval-valued intuitionistic linguistic weighted geometric IVILWG operator, interval-valued intuitionistic linguistic ordered weighted geometric IVILOWG operator and interval-valued intuitionistic linguistic hybrid geometric IVILHG operator, are proposed, and some desirable properties of these operators are established. Furthermore, by using the IVILWG operator and the IVILHG operator, a group decision making approach, in which the criterion values are IVILNs and the criterion weight information is known completely, is developed. Finally, an illustrative example is given to demonstrate the feasibility and effectiveness of the developed method. <s> BIB015 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Abstract With respect to multiple attribute group decision making (MAGDM) problems in which both the attribute weights and the expert weights take the form of crisp numbers, and attribute values take the form of interval-valued intuitionistic uncertain linguistic variables, some new group decision making analysis methods are developed. Firstly, some operational laws, expected value and accuracy function of interval-valued intuitionistic uncertain linguistic variables are introduced. Then, an interval-valued intuitionistic uncertain linguistic weighted geometric average (IVIULWGA) operator and an interval-valued intuitionistic uncertain linguistic ordered weighted geometric (IVIULOWG) operator have been developed. Furthermore, some desirable properties of the IVIULWGA operator and the IVIULOWG operator, such as commutativity, idempotency and monotonicity, have been studied, and an interval-valued intuitionistic uncertain linguistic hybrid geometric (IVIULHG) operator which generalizes both the IVIULWGA operator and the IVIULOWG operator, was developed. Based on these operators, an approach to multiple attribute group decision making with interval-valued intuitionistic uncertain linguistic information has been proposed. Finally, an illustrative example is given to verify the developed approaches and to demonstrate their practicality and effectiveness. <s> BIB016 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> With respect to multiple attribute decision making (MADM) problems, in which attribute values take the form of intuitionistic uncertain linguistic information, a new decision-making method based on the intuitionistic uncertain linguistic weighted Bonferroni OWA operator is developed. First, the score function, accuracy function, and comparative method of the intuitionistic uncertain linguistic numbers are introduced. Then, an intuitionistic uncertain linguistic Bonferroni OWA (IULBOWA) operator and an intuitionistic uncertain linguistic weighted Bonferroni OWA (IULWBOWA) operator are developed. Furthermore, some properties of the IULBOWA and IULWBOWA operators, such as commutativity, idempotency, monotonicity, and boundedness, are discussed. At the same time, some special cases of these operators are analyzed. Based on the IULWBOWA operator, the multiple attribute decision-making method with intuitionistic uncertain linguistic information is proposed. 
Finally, an illustrative example is given to illustrat... <s> BIB017 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Abstract With respect to multiple attribute group decision making (MAGDM) problems in which both the attribute weights and the expert weights take the form of crisp numbers, and attribute values take the form of intuitionistic uncertain linguistic variables, some new intuitionistic uncertain linguistic Heronian mean operators, such as intuitionistic uncertain linguistic arithmetic Heronian mean (IULAHM) operator, intuitionistic uncertain linguistic weighted arithmetic Heronian mean (IULWAHM) operator, intuitionistic uncertain linguistic geometric Heronian mean (IULGHM) operator, and intuitionistic uncertain linguistic weighted geometric Heronian mean (IULWGHM) operator, are proposed. Furthermore, we have studied some desired properties of these operators and discussed some special cases with respect to the different parameter values in these operators. Moreover, with respect to multiple attribute group decision making (MAGDM) problems in which both the attribute weights and the expert weights take the form of real numbers, attribute values take the form of intuitionistic uncertain linguistic variables, some approaches based on the developed operators are proposed. Finally, an illustrative example has been given to show the steps of the developed methods and to discuss the influences of different parameters on the decision-making results. <s> BIB018 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> With respect to multiple attribute group decision making (MADM) problems in which attribute values take the form of intuitionistic linguistic numbers, some new group decision making methods are developed. Firstly, some operational laws, expected value, score function and accuracy function of intuitionistic linguistic numbers are introduced. Then, an intuitionistic linguistic power generalized weighted average (ILPGWA) operator and an intuitionistic linguistic power generalized ordered weighted average (ILPGOWA) operator are developed. Furthermore, some desirable properties of the ILPGWA and ILPGOWA operators, such as commutativity, idempotency and monotonicity, etc. are studied. At the same time, some special cases of the generalized parameters in these operators are analyzed. Based on the ILPGWA and ILPGOWA operators, two approaches to multiple attribute group decision making with intuitionistic linguistic information are proposed. Finally, an illustrative example is given to verify the developed approaches and to demonstrate their practicality and effectiveness. <s> BIB019 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Abstract In this study a generated admissible order between interval-valued intuitionistic uncertain linguistic numbers using two continuous functions is introduced. Then, two interval-valued intuitionistic uncertain linguistic operators called the interval-valued intuitionistic uncertain linguistic Choquet averaging (IVIULCA) operator and the interval-valued intuitionistic uncertain linguistic Choquet geometric mean (IVIULCGM) operator are defined, which consider the interactive characteristics among elements in a set. 
In order to reflect the overall correlations between them, we further define the generalized Shapley interval-valued intuitionistic uncertain linguistic Choquet averaging (GS-IVIULCA) operator and the generalized Shapley interval-valued intuitionistic uncertain linguistic Choquet geometric mean (GS-IVIULCGM) operator. Moreover, if the information about the weights of experts and attributes is incompletely known, the models for the optimal fuzzy measures on the expert set and the attribute set are established, respectively. Finally, a method for multi-attribute group decision making under an interval-valued intuitionistic uncertain linguistic environment is developed, and an example is provided to show the specific application of the developed procedure. <s> BIB020 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Intuitionistic uncertain linguistic variables are good tools to express fuzzy information, and the TODIM (an acronym in Portuguese of Interactive and Multicriteria Decision Making) method can consider the bounded rationality of decision makers based on the prospect theory. However, the classical TODIM method can only process the multiple attribute decision making (MADM) problems where the attribute values take the form of crisp numbers. In this paper, we will extend the TODIM method to the multiple attribute group decision making (MAGDM) with intuitionistic uncertain linguistic information. Firstly, the definition, characteristics, expectation, comparison method and distance of intuitionistic uncertain linguistic variables are briefly introduced, and the steps of the classical TODIM method for MADM problems are presented. Then, on the basis of the classical TODIM method, the extended TODIM method is proposed to deal with MAGDM problems with intuitionistic uncertain linguistic variables, and its significant characteristic is that it can fully consider the decision makers' bounded rationality, which is a real action in decision making. Finally, an illustrative example is proposed to verify the developed approach. <s> BIB021 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> With respect to multiple attribute group decision making (MAGDM) problems, in which the attribute weights take the form of real numbers, and the attribute values take the form of intuitionistic fuzzy linguistic variables, a decision analysis approach is proposed. In this paper, we develop an intuitionistic fuzzy linguistic induced OWA (IFLIOWA) operator and analyze its properties by utilizing some operational laws of intuitionistic fuzzy linguistic variables. A new method based on the IFLIOWA operator for multiple attribute group decision making (MAGDM) is presented. Finally, a numerical example is used to illustrate the applicability and effectiveness of the proposed method. <s> BIB022 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> The intuitionistic uncertain fuzzy linguistic variable can easily express the fuzzy information, and the power average (PA) operator is a useful tool which provides more versatility in the information aggregation procedure. At the same time, Einstein operations are a kind of various t-norms and t-conorms families which can be used to perform the corresponding intersections and unions of intuitionistic fuzzy sets (IFSs).
In this paper, we will combine the PA operator and Einstein operations in the intuitionistic uncertain linguistic environment, and propose some new PA operators. Firstly, the definition and some basic operations of the intuitionistic uncertain linguistic number (IULN), the power aggregation (PA) operator and Einstein operations are introduced. Then, we propose the intuitionistic uncertain linguistic fuzzy powered Einstein averaging (IULFPEA) operator, the intuitionistic uncertain linguistic fuzzy powered Einstein weighted (IULFPEWA) operator, the intuitionistic uncertain linguistic fuzzy Einstein geometric (IULFPEG) operator and the intuitionistic uncertain linguistic fuzzy Einstein weighted geometric (IULFPEWG) operator, and discuss some properties of them in detail. Furthermore, we develop the decision making methods for multi-attribute group decision making (MAGDM) problems with intuitionistic uncertain linguistic information and give the detailed decision steps. At last, an illustrative example is given to show the process of decision making and the effectiveness of the proposed method. <s> BIB023 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> The problem of evaluating the design patterns of the Micro-Air vehicle is a multiple attribute decision making problem. In this paper, we introduce the concept of interval-valued intuitionistic uncertain linguistic sets and propose the induced interval-valued intuitionistic uncertain linguistic ordered weighted average (I-IVIULOWA) operator on the basis of the interval-valued intuitionistic uncertain linguistic ordered weighted average (IVIULOWA) operator and the IOWA operator. We also study some desirable properties of the proposed operator, such as commutativity, idempotency and monotonicity. Then, we utilize the induced interval-valued intuitionistic uncertain linguistic ordered weighted average (IIVIULOWA) operator to solve the multiple attribute decision making problems with interval-valued intuitionistic uncertain linguistic information. Finally, an illustrative example for evaluating the design patterns of the Micro-Air vehicle is given. <s> BIB024 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> We point out the issues of the operational laws on IIULSs in the reference. We define some new operational laws that eliminate the existing issues. The expected and accuracy functions are defined to rank IIULSs. Two operators on IIULSs are defined, and optimal models are established. An approach is developed, and the associated example is offered. Interval intuitionistic uncertain linguistic sets are an important generalization of fuzzy sets, which well cope with the experts' qualitative preferences as well as reflect the interval membership and non-membership degrees of the uncertain linguistic term. This paper first points out the issues of the operational laws on interval intuitionistic uncertain linguistic numbers in the literature, and then defines some alternative ones. To consider the relationship between interval intuitionistic uncertain linguistic sets, the expectation and accuracy functions are defined. To study the application of interval intuitionistic uncertain linguistic sets, two symmetrical interval intuitionistic uncertain linguistic hybrid aggregation operators are defined. Meanwhile, models for the optimal weight vectors are established, by which the optimal weighting vector can be obtained.
As a further development, an approach to multi-attribute decision making under an interval intuitionistic uncertain linguistic environment is developed, and the associated example is provided to demonstrate the effectiveness and practicality of the procedure. <s> BIB025 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> With respect to multiple attribute group decision making (MAGDM) problems in which the attributes are dependent and the attribute values take the forms of intuitionistic linguistic numbers and intuitionistic uncertain linguistic numbers, this paper investigates two novel MAGDM methods based on Maclaurin symmetric mean (MSM) aggregation operators. First, the Maclaurin symmetric mean is extended to the intuitionistic linguistic environment and two new aggregation operators are developed for aggregating the intuitionistic linguistic information, such as the intuitionistic linguistic Maclaurin symmetric mean (ILMSM) operator and the weighted intuitionistic linguistic Maclaurin symmetric mean (WILMSM) operator. Then, some desirable properties and special cases of these operators are discussed in detail. Furthermore, this paper also develops two new Maclaurin symmetric mean operators for aggregating the intuitionistic uncertain linguistic information, including the intuitionistic uncertain linguistic Maclaurin symmetric mean (IULMSM) operator and the weighted intuitionistic uncertain linguistic Maclaurin symmetric mean (WIULMSM) operator. Based on the WILMSM and WIULMSM operators, two approaches to MAGDM are proposed under the intuitionistic linguistic environment and the intuitionistic uncertain linguistic environment, respectively. Finally, two practical examples of investment alternative evaluation are given to illustrate the applications of the proposed methods. <s> BIB026 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Intuitionistic fuzzy set is capable of handling uncertainty with counterpart falsities which exist in nature. Proximity measure is a convenient way to demonstrate impractical significance of values of memberships in the intuitionistic fuzzy set. However, the related works of Pappis (Fuzzy Sets Syst 39(1):111–115, 1991), Hong and Hwang (Fuzzy Sets Syst 66(3):383–386, 1994), Virant (2000) and Cai (IEEE Trans Fuzzy Syst 9(5):738–750, 2001) did not model the measure in the context of the intuitionistic fuzzy set but in Zadeh's fuzzy set instead. In this paper, we examine this problem and propose new notions of δ-equalities for the intuitionistic fuzzy set and δ-equalities for intuitionistic fuzzy relations. Two fuzzy sets are said to be δ-equal if they are equal to an extent of δ. The applications of δ-equalities are important to fuzzy statistics and fuzzy reasoning. Several characteristics of δ-equalities that were not discussed in the previous works are also investigated. We apply the δ-equalities to the application of medical diagnosis to investigate a patient's diseases from symptoms. The idea is to use δ-equalities for intuitionistic fuzzy relations to find groups of intuitionistic fuzzified sets with certain equality or similarity degrees and then combine them. Numerical examples are given to illustrate the validity of the proposed algorithm. Further, we conduct experiments on real medical datasets to check the efficiency and applicability on real-world problems.
The results obtained are also better in comparison with 10 existing diagnosis methods, namely De et al. (Fuzzy Sets Syst 117:209–213, 2001), Samuel and Balamurugan (Appl Math Sci 6(35):1741–1746, 2012), Szmidt and Kacprzyk (2004), Zhang et al. (Procedia Eng 29:4336–4342, 2012), Hung and Yang (Pattern Recogn Lett 25:1603–1611, 2004), Wang and Xin (Pattern Recogn Lett 26:2063–2069, 2005), Vlachos and Sergiadis (Pattern Recogn Lett 28(2):197–206, 2007), Zhang and Jiang (Inf Sci 178(6):4184–4191, 2008), Maheshwari and Srivastava (J Appl Anal Comput 6(3):772–789, 2016) and Support Vector Machine (SVM). <s> BIB027 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Abstract In this paper, we propose a new method for multiattribute decision making (MADM) using multiplication operations of interval-valued intuitionistic fuzzy values (IVIFVs) and the linear programming (LP) methodology. It can overcome the shortcomings of Chen and Huang's MADM method (2017), which has two shortcomings, i.e., (1) it gets an infinite number of solutions of the optimal weights of attributes when the summation values of some columns in the transformed decision matrix (TDM) are the same, resulting in the case that it obtains different preference orders (POs) of the alternatives, and (2) the PO of alternatives cannot be distinguished in some situations. <s> BIB028 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Abstract Accuracy functions proposed by various researchers fail to compare some interval-valued intuitionistic fuzzy sets (IVIFSs) correctly. In the present research paper, we propose an improved accuracy function to compare all comparable IVIFSs correctly. The use of the proposed accuracy function is also demonstrated in a multi attribute group decision making (MAGDM) method with partially known attribute weights. Finally, the proposed MAGDM method is implemented on a real case study of evaluating teachers' performance. Sensitivity analysis of this method is also done to show the effectiveness of the proposed accuracy function in MAGDM. <s> BIB029 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Abstract In this paper, we propose a new autocratic multiattribute group decision making (AMAGDM) method for hotel location selection based on interval-valued intuitionistic fuzzy sets (IVIFSs), where the evaluating values of the attributes for alternatives and the weights of the attributes given by decision makers are represented by interval-valued intuitionistic fuzzy values (IVIFVs). The proposed method keeps adjusting the weights of the decision makers until the group consensus degree (GCD) of the decision makers is larger than or equal to a predefined threshold value. We also apply the proposed AMAGDM method to deal with the hotel location selection problem. The main contribution of this paper is that we propose a new AMAGDM method which is simpler than Wibowo's method (2013), where the drawback of Wibowo's method is that it is too complicated due to the fact that it adopts the concept of ideal solutions for determining the overall performance of each hotel location alternative with respect to all the selection criteria. The proposed AMAGDM method provides us with a very useful way for AMAGDM in interval-valued intuitionistic fuzzy environments.
<s> BIB030 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Abstract In the process of multi-criteria decision making (MCDM), decision makers or experts usually exploit quantitative or qualitative methods to evaluate the comprehensive performance of all alternatives on each criterion. How the decision makers or experts make the evaluations relies on their professional knowledge and the actual performances of the alternatives on the criteria. However, because of both the objective complexity of decision-making problems and the uncertainty of human subjective judgments, it is sometimes too hard to obtain accurate evaluation information. The intuitionistic fuzzy set (IFS) is a useful tool to deal with the uncertainty and fuzziness of complex problems. In this paper, we propose a new distance measure between IFSs and prove some of its useful properties. The experimental results show that the proposed distance measure between IFSs can overcome the drawbacks of some existing distance and similarity measures. Then, based on the proposed distance measure, an extended intuitionistic fuzzy TOPSIS approach is developed to handle MCDM problems. Finally, a practical application concerning the credit risk evaluation of potential strategic partners is provided to demonstrate the extended intuitionistic fuzzy TOPSIS approach, and it is then compared with other current methods to further explain its effectiveness. <s> BIB031 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> A great number of real-world problems can be associated with multi-criteria decision-making. These problems are often characterized by a high degree of uncertainty. Intuitionistic fuzzy sets (IFSs) are a generalized form of an ordinal fuzzy set to deal with this natural uncertainty. In this paper, we propose a hybrid version of the intuitionistic fuzzy ELECTRE based on the VIKOR method, which was never considered before. The advantages and strengths of the intuitionistic fuzzy ELECTRE based on the VIKOR method as a decision aid technique, and of IFSs as an uncertain framework, make the proposed method a suitable choice for solving practical problems. Finally, a numerical example of an engineering manager choice is given to illustrate the application of the proposed method. The paper also gives a special point of view on the research along IFSs: it can be viewed as a kind of factorial scalar theory in factor space, which helps the authors to complete the paper with clear ideas. <s> BIB032 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Abstract This paper presents a location selection problem for a military airport using multiple criteria decision making methods. A real-world decision problem is presented and the decision criteria to evaluate alternative locations are specified. The objective is to identify the best location among candidate locations. Nine main criteria and thirty-three sub-criteria are identified by taking into account not only requirements for a military airport such as climate, geography, infrastructure, security, and transportation but also its environmental and social effects. The criteria weights are determined using AHP. Ranking and selection processes of four alternatives are carried out using the PROMETHEE and VIKOR methods.
Furthermore, the results of the PROMETHEE and VIKOR methods are compared with the results of the COPRAS, MAIRCA and MABAC methods. All methods suggest the same alternative as the best and produce the same rankings of the location alternatives. A one-way sensitivity analysis is carried out on the main criteria weights for all methods. Statistically significant correlations are observed between the rankings of the methods. Therefore, it is concluded that the PROMETHEE, VIKOR, COPRAS, MAIRCA and MABAC methods can be successfully used for location selection problems and, in general, for other types of multi-criteria decision problems with a finite number of alternatives. <s> BIB033 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Edges of an image play an important role in the field of digital image processing and computer vision. Edges reduce the amount of data, extract useful information from the image and preserve significant structural properties of an input image. Further, these edges can be used for object and facial expression detection. In this paper, we propose new intuitionistic fuzzy divergence and entropy measures, with proofs of their validity, for intuitionistic fuzzy sets. A new and significant technique has been developed for edge detection. To check the robustness of the proposed method, the obtained results are compared with the Canny, Sobel and Chaira methods. Finally, the mean square error (MSE) and peak signal-to-noise ratio (PSNR) have been calculated, and the PSNR values of the proposed method are always equal to or greater than the PSNR values of the existing methods. The detected edges of the various sample images are found to be true, smooth and sharp. <s> BIB034 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> Abstract In this paper, a novel method is proposed to support the process of solving multi-objective nonlinear programming problems subject to strict or flexible constraints. This method assumes that the practical problems are expressed in the form of geometric programming problems. Integrating the concept of intuitionistic fuzzy sets into the solving procedure, a rich structure is provided which can include the inevitable uncertainties into the model regarding different objectives and constraints. Another important feature of the proposed method is that it continuously interacts with the decision maker. Thus, the decision maker can learn about the problem, so that a compromise solution satisfying his/her preferences can be obtained. Further, a new two-step geometric programming approach is introduced to determine Pareto-optimal compromise solutions for the problems defined during different iterative steps. Employing the compensatory operator of "weighted geometric mean", the first step concentrates on finding an intuitionistic fuzzy efficient compromise solution. In the cases where one or more intuitionistic fuzzy objectives are fully achieved, a second geometric programming model is developed to improve the resulting compromise solution. Otherwise, it is concluded that the resulting solution vectors simultaneously satisfy both of the conditions of intuitionistic fuzzy efficiency and Pareto-optimality. The models forming the proposed solving method are developed in a way such that the posynomiality of the defined problem is not affected. This property is of great importance when solving nonlinear programming problems.
A numerical example of a multi-objective nonlinear programming problem is also used to provide a better understanding of the proposed solving method. <s> BIB035 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> In the present paper we introduce the classes of sequences stcIFN, stc0IFN and st∞IFN of statistically convergent, statistically null and statistically bounded sequences of intuitionistic fuzzy numbers, based on a newly defined metric on the space of all intuitionistic fuzzy numbers (IFNs). We study some algebraic and topological properties of these spaces and prove some inclusion relations too. <s> BIB036 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Introduction <s> An interval-valued intuitionistic uncertain linguistic set (IVIULS) combines ambiguity, fuzziness as well as indeterminacy in real-life predicaments due to the intricacy of the subjective nature of human thoughts, and easily expresses fuzzy information. The technique for order preference by similarity to an ideal solution (TOPSIS) is one of the eminent traditional distance measure-based approaches for multi-criteria group decision-making (MCGDM) problems and has widespread applications. This study aims to develop the TOPSIS method for MCGDM problems under the IVIUL environment. Firstly, some basic operational laws and aggregation operators of IVIULS are discussed. A novel distance measure for IVIULEs is also investigated. An illustrative example of an evaluation problem is also taken to clarify the developed methodology and to reveal its efficiency through a comparative analysis of the proposed method.
Due to the increasing complexity of decision-making problems, it is generally difficult to express the criteria values of alternatives by exact numbers. Zadeh originally proposed the fuzzy set (FS) theory, which is an effective tool for dealing with fuzzy information; however, it is not suitable for handling information with non-membership. As a generalization of FS, the intuitionistic fuzzy set (IFS) introduced by BIB001 BIB002 has a membership degree (MD), a non-membership degree (NMD) and a hesitancy degree (HD), which can further overcome the drawbacks of FS. By now, a large number of methods based on IFS have been applied in many areas. To date, many contributions have concentrated on decision-making techniques based on IFSs, which come from three domains: first, the theory of foundations, for instance, operational rules BIB028 BIB008 BIB024 , comparative approaches BIB029 , distance and similarity measures BIB002 , likelihood, ranking functions, consensus degree BIB030 , proximity measure BIB027 and so on; second, the extended multicriteria decision-making (MCDM) approaches for IFS, such as TOPSIS BIB031 , ELECTRE BIB032 , VIKOR BIB033 , TODIM, entropy BIB034 and other methods, such as the Choquet integral (CI), multi-objective linear programming or multi-objective nonlinear programming (NLP) BIB035 , the Decision-Making Trial and Evaluation Laboratory, statistically convergent sequence spaces BIB036 and so on; and third, the MCDM techniques based on aggregation operators (AOs) of IFS, which are superior to the traditional MCDM techniques because they can acquire the comprehensive values of alternatives by aggregating all attribute values and then rank the alternatives. However, with increasing uncertainty and complexity, the IFS cannot depict uncertain information comprehensively and accurately in circumstances in which the MD and NMD cannot be expressed as real values. For the sake of adequately expressing fuzzy and uncertain information in real decision-making processes, Zadeh first proposed the concept of the linguistic variable (LV) and defined a discrete linguistic term set (LTS), that is, variables whose evaluation values are not real and exact numbers but linguistic terms, such as "very low," "low," "fair," "high," "very high," etc. Obviously, the decision maker can more easily express his/her opinions and preferences by selecting the matching linguistic terms from the LTS. So, based on the IFS and the LTS, a novel solution is to express the MD and NMD together with a linguistic term, which is called the intuitionistic linguistic fuzzy set (ILFS). As a generalization of IFS, LV and LTS, the ILFS can more adequately handle fuzzy and uncertain information than IFS, LV and LTS. Since its appearance, the ILFS has attracted more and more attention. Based on the ILFS, different forms of ILFS have been developed and some basic operational rules of ILFS have been defined, such as the intuitionistic uncertain linguistic set (IULS) BIB012 BIB014 , the interval-valued intuitionistic uncertain linguistic set (IVIULS) BIB015 BIB025 and the intuitionistic uncertain 2-tuple linguistic variable (IU2TLV) (Martínez, 2000a, b, 2012). AOs of ILFS are a new branch of ILFS research, which is a meaningful and significant research issue and has attracted more and more attention.
For example, some basic intuitionistic linguistic (IL) fuzzy AOs have been proposed, such as the intuitionistic uncertain linguistic weighted geometric mean (IULWGM) operator BIB012 , the ordered intuitionistic uncertain linguistic weighted geometric mean (OIULWGM) operator BIB012 , the interval-valued IULWGM (GIULWGM) operator BIB016 and the interval-valued OIULWGM (GOIULWGM) operator BIB016 ; the extended MCDM approaches for IUFS, such as the extended TOPSIS (ETOPSIS) approaches BIB009 BIB037 BIB010 , the extended TODIM (ETODIM) approaches BIB021 and the extended VIKOR (EVIKOR) approach; some IL fuzzy AOs considering the interrelationships between criteria, such as the IUL Bonferroni OWM (IULBOWM) operator BIB017 , the weighted IUL Bonferroni OWM (WIULBOWM) operator BIB017 , the IUL arithmetic Heronian mean (IULAHM) operator BIB018 , the IUL geometric Heronian mean (IULGHM) operator BIB018 , the weighted IUL arithmetic Heronian mean (WIULAHM) operator BIB018 , the weighted IUL geometric Heronian mean (WIULGHM) operator BIB018 , the IUL Maclaurin symmetric mean (IULMSM) operator BIB026 and the weighted ILMSM (WIULMSM) operator BIB026 ; generalized intuitionistic linguistic fuzzy aggregation operators, such as the generalized IL dependent ordered weighted mean (DOWM) (GILDOWM) operator BIB014 BIB019 and the generalized IL dependent hybrid weighted mean (DHWM) (GILDHWM) operator BIB014 BIB019 ; IL fuzzy AOs based on CI BIB020 ; and induced IL fuzzy AOs BIB019 BIB020 BIB022 BIB003 BIB005 BIB011 BIB013 , such as the IFL induced ordered weighted mean (IFLIOWM) operator BIB019 BIB020 and the IFL induced ordered weighted geometric mean (IFLIOWGM) operator BIB019 BIB020 . To understand and learn these AOs and decision-making methods better and more conveniently, it is necessary to make an overview of intuitionistic linguistic fuzzy information aggregation techniques and their applications. The rest of this paper is organized as follows: in Section 2, we review the basic concepts and operational rules of IFS, LTS, the intuitionistic linguistic set (ILS), IULS and IVIULS. In Section 3, we review, summarize, analyze and discuss several kinds of AOs for ILS, IULS and IVIULS, and divide the AOs into categories. In Section 4, we mainly review the applications dealing with a variety of real and practical MCDM or multicriteria group decision-making (MCGDM) problems. In Section 5, we point out some possible directions for future research. In Section 6, we give the conclusions. 2. Basic concepts and operations 2.1 The intuitionistic fuzzy set Definition 1. BIB007 Let E = {ε1, ε2, …, εn} be a nonempty set. An IFS R in E is given by R = {〈ε, uR(ε), vR(ε)〉 | ε∈E}, where uR: E→[0, 1] and vR: E→[0, 1], with the condition 0 ⩽ uR(ε)+vR(ε) ⩽ 1, ∀ε∈E. The numbers uR(ε) and vR(ε) denote, respectively, the MD and NMD of the element ε to E. For a given element ε, 〈uR(ε), vR(ε)〉 is called an intuitionistic fuzzy number (IFN), and for convenience, we can use r̃ = (ur, vr) to denote an IFN, which meets the conditions ur, vr ∈ [0, 1] and ur + vr ⩽ 1. Let r̃ = (ur, vr) and t̃ = (ut, vt) be two IFNs and δ ⩾ 0; then the operations of IFNs are defined as follows BIB007 : r̃ ⊕ t̃ = (ur + ut − ur·ut, vr·vt); r̃ ⊗ t̃ = (ur·ut, vr + vt − vr·vt); δ·r̃ = (1 − (1 − ur)^δ, vr^δ); r̃^δ = (ur^δ, 1 − (1 − vr)^δ). For relieving the information loss in decision making, Xu BIB005 BIB011 extended the discrete linguistic set S = {s0, s1, …, sm} to the continuous linguistic set S̄ = {sl | l ∈ [0, t]}. For any LVs sx, sy ∈ S̄ and λ ⩾ 0, the operations of LVs can be defined as follows: sx ⊕ sy = s(x+y); sx ⊗ sy = s(xy); λ·sx = s(λx); (sx)^λ = s(x^λ). Definition 2. BIB006 An ILS R in E is given by R = {〈ε, [sφ(ε), (uR(ε), vR(ε))]〉 | ε∈E}, where sφ(ε) ∈ S̄, uR(ε) ∈ [0, 1] and vR(ε) ∈ [0, 1], with 0 ⩽ uR(ε) + vR(ε) ⩽ 1, ∀ε∈E. The numbers uR(ε) and vR(ε) denote, respectively, the MD and NMD of the element ε to the linguistic index sφ(ε).
In addition, π(ε) = 1 − uR(ε) − vR(ε), ∀ε∈E, denotes the indeterminacy degree (ID) of the element ε to E. It is evident that 0 ⩽ π(ε) ⩽ 1, ∀ε∈E. For a given element ε, 〈sφ(ε), (uR(ε), vR(ε))〉 is called an intuitionistic linguistic fuzzy number (ILFN), and for convenience, we can use ẽ = 〈sφ(e), (u(e), v(e))〉 to denote an ILFN, which meets the conditions u(e), v(e) ∈ [0, 1] and u(e) + v(e) ⩽ 1. It is easy to see that the operation rules (15)–(18) have a limitation: it is not assured that the ULVs obtained by calculation stay below the maximum term st. For instance, in such a calculation the upper and lower limits obtained can both be greater than s6, which is the largest term of S. For the sake of overcoming the above limitation, several papers give new modified operational laws for ULVs. Let ṡ1 = [sj1, sk1] and ṡ2 = [sj2, sk2] be two ULVs; then the modified operations of ULVs are defined so that the results are guaranteed to remain within the LTS. Definition 4. BIB012 An IULS R in E attaches to each element ε an uncertain linguistic term [sφ(ε), sϑ(ε)] together with the degrees uR(ε) and vR(ε). The numbers uR(ε) and vR(ε) denote, respectively, the MD and NMD of the element ε to the linguistic index [sφ(ε), sϑ(ε)]. In addition, π(ε) = 1 − uR(ε) − vR(ε), ∀ε∈E, denotes the ID of the element ε to E. It is evident that 0 ⩽ π(ε) ⩽ 1, ∀ε∈E. From BIB012 , BIB014 and BIB004 , we can find, by working through examples, that there are similar shortcomings in the process of calculation: it is not assured that the IULVs obtained by calculating stay below the maximum term st. For filling this gap, some modified operational laws of IULVs are presented in the literature. Besides, BIB023 defined the operations of IULVs based on the Einstein t-norm (TN) and t-conorm (TC), which can be used to model the corresponding intersections and unions of IULVs.
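To make the operational laws above concrete, the following is a minimal Python sketch of IFNs under the standard algebraic laws quoted from BIB007 ; the class name, and the score function s(r̃) = ur − vr used for ranking, are common conventions in the literature rather than notation fixed by this survey.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IFN:
    u: float  # membership degree (MD)
    v: float  # non-membership degree (NMD)

    def __post_init__(self):
        # validity condition of an IFN: u, v in [0, 1] and u + v <= 1
        assert 0.0 <= self.u <= 1.0 and 0.0 <= self.v <= 1.0 and self.u + self.v <= 1.0

    @property
    def pi(self) -> float:
        # hesitancy (indeterminacy) degree
        return 1.0 - self.u - self.v

    def __add__(self, t: "IFN") -> "IFN":   # r (+) t
        return IFN(self.u + t.u - self.u * t.u, self.v * t.v)

    def __mul__(self, t: "IFN") -> "IFN":   # r (x) t
        return IFN(self.u * t.u, self.v + t.v - self.v * t.v)

    def scale(self, d: float) -> "IFN":     # d . r, d >= 0
        return IFN(1.0 - (1.0 - self.u) ** d, self.v ** d)

    def power(self, d: float) -> "IFN":     # r ^ d, d >= 0
        return IFN(self.u ** d, 1.0 - (1.0 - self.v) ** d)

    def score(self) -> float:               # s(r) = u - v
        return self.u - self.v
```

For example, IFN(0.6, 0.3) + IFN(0.5, 0.4) yields IFN(0.8, 0.12): the MD of the union-like ⊕ is higher than either argument, as expected.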
An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Interval-value intuitionistic uncertain linguistic set (IVIULS) <s> In this paper, a new concept of interval-valued intuitionistic linguistic number IVILN, which is characterised by a linguistic term, an interval-valued membership degree and an interval-valued non-membership degree, is first introduced. Then, score function, accuracy function and some multiplicative operational laws of IVILNs are defined. Based on these two functions, a simple approach for the comparison between two IVILNs is presented. Based on these operational laws, some new geometric aggregation operators, such as the interval-valued intuitionistic linguistic weighted geometric IVILWG operator, interval-valued intuitionistic linguistic ordered weighted geometric IVILOWG operator and interval-valued intuitionistic linguistic hybrid geometric IVILHG operator, are proposed, and some desirable properties of these operators are established. Furthermore, by using the IVILWG operator and the IVILHG operator, a group decision making approach, in which the criterion values are IVILNs and the criterion weight information is known completely, is developed. Finally, an illustrative example is given to demonstrate the feasibility and effectiveness of the developed method. <s> BIB001 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Interval-value intuitionistic uncertain linguistic set (IVIULS) <s> We point out the issues of the operational laws on IIULSs in the reference. We define some new operational laws that eliminate the existing issues. The expected and accuracy functions are defined to rank IIULSs. Two operators on IIULSs are defined, and optimal models are established. An approach is developed, and the associated example is offered. Interval intuitionistic uncertain linguistic sets are an important generalization of fuzzy sets, which cope well with the experts' qualitative preferences as well as reflect the interval membership and non-membership degrees of the uncertain linguistic term. This paper first points out the issues of the operational laws on interval intuitionistic uncertain linguistic numbers in the literature, and then defines some alternative ones. To consider the relationship between interval intuitionistic uncertain linguistic sets, the expectation and accuracy functions are defined. To study the application of interval intuitionistic uncertain linguistic sets, two symmetrical interval intuitionistic uncertain linguistic hybrid aggregation operators are defined. Meanwhile, models for the optimal weight vectors are established, by which the optimal weighting vector can be obtained. As a further development, an approach to multi-attribute decision making under an interval intuitionistic uncertain linguistic environment is developed, and the associated example is provided to demonstrate the effectiveness and practicality of the procedure.
Definition 5. BIB001 It is obvious that if ulR(ε) = uuR(ε) and vlR(ε) = vuR(ε) for each ε∈E, then the IVIULS reduces to the IULS. Furthermore, if sφ(ε) = sϑ(ε), then it reduces to the ILS. We know that if ẽ1 and ẽ2 are two IVIULVs, then they have the same properties as the IULVs above. Furthermore, two symmetrical interval intuitionistic uncertain linguistic hybrid aggregation operators are introduced by BIB002 .
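The reduction chain IVIULS → IULS → ILS described in Definition 5 can be captured with a small data structure. The sketch below is illustrative only: it assumes linguistic terms are referred to by their subscript indices, and the field names are invented.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class IVIULN:
    """Interval-valued intuitionistic uncertain linguistic number:
    <[s_phi, s_theta], ([u_l, u_u], [v_l, v_u])>."""
    phi: float                 # lower linguistic index
    theta: float               # upper linguistic index
    u: Tuple[float, float]     # interval-valued MD  [u_l, u_u]
    v: Tuple[float, float]     # interval-valued NMD [v_l, v_u]

    def is_iuln(self) -> bool:
        # degenerate membership intervals -> an IULN
        return self.u[0] == self.u[1] and self.v[0] == self.v[1]

    def is_iln(self) -> bool:
        # additionally a single (certain) linguistic term -> an ILN
        return self.is_iuln() and self.phi == self.theta
```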
An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Intuitionistic uncertain 2-tuple linguistic variable (IU2TLV ) <s> With respect to multiple attribute group decision making (MAGDM) problems in which the attribute weights and the expert weights take the form of real numbers and the attribute values take the form of intuitionistic uncertain linguistic variables, new group decision making methods have been developed. First, operational laws, expected value definitions, score functions and accuracy functions of intuitionistic uncertain linguistic variables are introduced. Then, an intuitionistic uncertain linguistic weighted geometric average (IULWGA) operator and an intuitionistic uncertain linguistic ordered weighted geometric (IULOWG) operator are developed. Furthermore, some desirable properties of these operators, such as commutativity, idempotency, monotonicity and boundedness, have been studied, and an intuitionistic uncertain linguistic hybrid geometric (IULHG) operator, which generalizes both the IULWGA operator and the IULOWG operator, was developed. Based on these operators, two methods for multiple attribute group decision making problems with intuitionistic uncertain linguistic information have been proposed. Finally, an illustrative example is given to verify the developed approaches and demonstrate their practicality and effectiveness. <s> BIB001 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Intuitionistic uncertain 2-tuple linguistic variable (IU2TLV ) <s> In this paper, we introduce Atanassov's intuitionistic linguistic ordered weighted averaging distance AILOWAD operator. It is a new aggregation operator that unifies distance measures and Atanassov's intuitionistic linguistic variables in the ordered weighted averaging OWA operator. The main advantage of this aggregation operator is that it is able to use the attitudinal character of the decision maker in the aggregation of the distance measures. Moreover, it is able to deal with uncertain situations where the information can be assessed with Atanassov's intuitionistic linguistic numbers. We study some of the main properties and different particular cases of the AILOWAD operator. We further generalize this approach by using quasi-arithmetic means, obtaining the quasi-arithmetic AILOWAD Quasi-AILOWAD operator. We also develop an application of the new approach to a multi-person decision making problem regarding the selection of strategies. Thus, we obtain the multi-person AILOWAD MP-AILOWAD operator. Finally, an illustrative example is given to verify the developed approach and to demonstrate its practicality and effectiveness. <s> BIB002 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Intuitionistic uncertain 2-tuple linguistic variable (IU2TLV ) <s> Dealing with uncertainty is always a challenging problem, and different tools have been proposed to deal with it. Fuzzy sets were presented to manage situations in which experts have some membership value to assess an alternative. The fuzzy linguistic approach has been applied successfully to many problems. Linguistic information is expressed by means of 2-tuples, which are composed of a linguistic term and a numeric value assessed in [−0.5, 0.5). Linguistic values are used to assess an alternative and a variable in qualitative settings.
Intuitionistic fuzzy sets were presented to manage situations in which experts have some membership and nonmembership value to assess an alternative. In this paper, the concept of an I2LI model is developed to provide a linguistic and computational basis to manage the situations in which experts assess an alternative in possible and impossible linguistic variable and their translation parameter. A method to solve the group decision making problem based on intuitionistic 2-tuple linguistic information I2LI by the group of experts is formulated. Some operational laws on I2LI are introduced. Based on these laws, new aggregation operators are introduced to aggregate the collective opinion of decision makers. An illustrative example is given to show the practicality and feasibility of our proposed aggregation operators and group decision making method. <s> BIB003
Definition 6. (Martínez, 2000a, b, 2012) Let S = {s0, s1, …, sm} be an ordered linguistic label set. The symbolic translation between the 2-tuple linguistic representation and numerical values can be defined by Δ(β) = (si, α), where i = round(β) and α = β − i ∈ [−0.5, 0.5), with the inverse Δ−1(si, α) = i + α. Definition 7. BIB003 The numbers uR(ε) and vR(ε) denote, respectively, the MD and NMD of the element ε to the corresponding 2-tuple linguistic index. For the aggregation operators below, the weighting vector of ẽ1, ẽ2, …, ẽn is w = (w1, w2, …, wn)T, with wi ∈ [0, 1] and ∑_{i=1}^{n} wi = 1. Definition 9. BIB001 Let ẽi (i = 1, 2, …, n) be a collection of IULVs. The value aggregated by the ordered weighted geometric mean (OWGM) operator is an IULV, where the weighting vector of ẽ1, ẽ2, …, ẽn is w = (w1, w2, …, wn)T and (y(1), y(2), …, y(n)) is a permutation of (1, 2, …, n) such that ẽy(i−1) ⩾ ẽy(i) for all i = 2, …, n. It is easy to prove that the above operators have the properties of commutativity, idempotency, boundedness and monotonicity. In addition, based on the IL weighted arithmetic mean operator, Wang et al. (2014) developed the intuitionistic linguistic ordered weighted mean (ILOWM) operator and the intuitionistic linguistic hybrid operator. BIB002 presented the intuitionistic linguistic ordered weighted mean distance operator, the quasi-arithmetic intuitionistic linguistic ordered weighted mean distance operator and the multi-person intuitionistic linguistic ordered weighted mean distance operator.
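As an illustration of the weighted and ordered weighted geometric aggregation just described, the sketch below aggregates simplified IULVs, each written as an uncertain linguistic index pair plus MD/NMD, assuming the standard geometric operational laws; the exact laws differ slightly across the cited papers, so this is indicative rather than a reproduction of any one definition.

```python
import math

def iulwgm(iulvs, weights):
    """Weighted geometric mean of IULVs given as ([a, b], u, v):
    linguistic indices combine as products of powers, the MD as a
    geometric mean and the NMD via its complement (standard laws)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    a = math.prod(x[0][0] ** w for x, w in zip(iulvs, weights))
    b = math.prod(x[0][1] ** w for x, w in zip(iulvs, weights))
    u = math.prod(x[1] ** w for x, w in zip(iulvs, weights))
    v = 1.0 - math.prod((1.0 - x[2]) ** w for x, w in zip(iulvs, weights))
    return ([a, b], u, v)

def oiulwgm(iulvs, weights, score):
    """Ordered variant: rank the arguments by a caller-supplied score
    (descending) before applying the positional weights."""
    return iulwgm(sorted(iulvs, key=score, reverse=True), weights)
```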
An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> The extended MCDM approaches for IUFS <s> With respect to the multiple attribute group decision making problems in which the attribute weights are unknown and the attribute values take the form of intuitionistic linguistic numbers, an extended technique for order preference by similarity to ideal solution (TOPSIS) is proposed. Firstly, the definition of the intuitionistic linguistic number and its operational laws are given, and a distance between intuitionistic linguistic numbers is defined. Then, the attribute weights are determined based on the 'maximizing deviation method' and an extended TOPSIS method is developed to rank the alternatives. Finally, an illustrative example is given to verify the developed approach and to demonstrate its practicality and effectiveness. <s> BIB001 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> The extended MCDM approaches for IUFS <s> An interval-valued intuitionistic uncertain linguistic set (IVIULS) combines ambiguity, fuzziness as well as indeterminacy in real-life predicaments due to the intricacy of the subjective nature of human thoughts, and easily expresses fuzzy information. The technique for order preference by similarity to an ideal solution (TOPSIS) is one of the eminent traditional distance measure-based approaches for multi-criteria group decision-making (MCGDM) problems and has widespread applications. This study aims to develop the TOPSIS method for MCGDM problems under the IVIUL environment. Firstly, some basic operational laws and aggregation operators of IVIULS are discussed. A novel distance measure for IVIULEs is also investigated. An illustrative example of an evaluation problem is also taken to clarify the developed methodology and to reveal its efficiency through a comparative analysis of the proposed method.
(1) The ETOPSIS approaches for IUFS. In general, the standard TOPSIS approach can only process real values and cannot deal with fuzzy information such as IUFS; an ETOPSIS approach was therefore introduced to process IUFS in real decision-making circumstances. BIB001 developed an extended TOPSIS technique in which the criteria values are in the form of IULVs and the criteria weights are unknown. BIB002 combined TOPSIS and IVIULVs by redefining the basic operation rules and the distance measure to solve MCGDM problems. Wei (2011) used the ETOPSIS approach to solve MAGDM problems with 2TIULVs. (2) The ETODIM approaches for IUFS. The TODIM approach can take into account the bounded rationality of experts, based on prospect theory, in MCDM. The classical TODIM can only
An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Intuitionistic linguistic fuzzy information <s> With respect to multiple attribute group decision making (MAGDM) problems in which both the attribute weights and the expert weights take the form of real numbers and attribute values take the form of intuitionistic linguistic numbers, group decision making methods based on some generalized dependent aggregation operators are developed. Firstly, the score function and accuracy function of intuitionistic linguistic numbers are introduced. Then, an intuitionistic linguistic generalized dependent ordered weighted average (ILGDOWA) operator and an intuitionistic linguistic generalized dependent hybrid weighted aggregation (ILGDHWA) operator are developed. Furthermore, some desirable properties of the ILGDOWA operator, such as commutativity, idempotency and monotonicity, etc. are studied. At the same time, some special cases of the generalized parameters in these operators are analyzed. Based on the ILGDOWA and ILGDHWA operators, an approach to multiple attribute group decision making with intuitionistic linguistic information is proposed. Finally, an illustrative example is given to verify the developed approaches and to demonstrate their practicality and effectiveness. <s> BIB001 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Intuitionistic linguistic fuzzy information <s> With respect to multiple attribute group decision making (MADM) problems in which attribute values take the form of intuitionistic linguistic numbers, some new group decision making methods are developed. Firstly, some operational laws, expected value, score function and accuracy function of intuitionistic linguistic numbers are introduced. Then, an intuitionistic linguistic power generalized weighted average (ILPGWA) operator and an intuitionistic linguistic power generalized ordered weighted average (ILPGOWA) operator are developed. Furthermore, some desirable properties of the ILPGWA and ILPGOWA operators, such as commutativity, idempotency and monotonicity, etc. are studied. At the same time, some special cases of the generalized parameters in these operators are analyzed. Based on the ILPGWA and ILPGOWA operators, two approaches to multiple attribute group decision making with intuitionistic linguistic information are proposed. Finally, an illustrative example is given to verify the developed approaches and to demonstrate their practicality and effectiveness. <s> BIB002 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Intuitionistic linguistic fuzzy information <s> Intuitionistic uncertain linguistic variables are good tools to express fuzzy information, and the TODIM (an acronym in Portuguese of Interactive and Multicriteria Decision Making) method can consider the bounded rationality of decision makers based on prospect theory. However, the classical TODIM method can only process the multiple attribute decision making (MADM) problems where the attribute values take the form of crisp numbers. In this paper, we will extend the TODIM method to the multiple attribute group decision making (MAGDM) with intuitionistic uncertain linguistic information.
Firstly, the definition, characteristics, expectation, comparison method and distance of intuitionistic uncertain linguistic variables are briefly introduced, and the steps of the classical TODIM method for MADM problems are presented. Then, on the basis of the classical TODIM method, the extended TODIM method is proposed to deal with MAGDM problems with intuitionistic uncertain linguistic variables; its significant characteristic is that it can fully consider the decision makers' bounded rationality, which reflects real behavior in decision making. Finally, an illustrative example is given to verify the developed approach. <s> BIB003 </s>
process the MCDM problems where the criteria values are exact numbers. Liu BIB003 developed an ETODIM to deal with MCDM problems with IULVs; an interactive MCDM approach based on TODIM and NLP with IULVs has been presented; and the TODIM for IL (ILTODIM) and TODIM for IUL (IULTODIM) approaches were proposed, improving the distance measure to deal with MADM problems in the forms of ILVs and IULVs. (3) The EVIKOR approach for IULVs. The VIKOR approach is a very useful tool for handling decision-making problems: it selects the best alternative based on maximizing the "group utility" and minimizing the "individual regret." At present, a number of researchers pay more and more attention to the VIKOR approach: the VIKOR approach has been extended to deal with IULVs, yielding the EVIKOR for MADM problems with IULVs; furthermore, the EVIKOR has been developed by using the Hamming distance to deal with IVIULVs, giving an EVIKOR approach for MADM problems with IVIULVs. Definition 20. BIB001 BIB002 s(ẽi, ẽ) is the similarity degree between ẽi and the mean ẽ. Definition 21. BIB001 BIB002 Let ẽi (i = 1, 2, …, n) be a collection of IULVs. The value aggregated by the GILDHWM operator is an IULV, where the weighting vector of ẽ1, ẽ2, …, ẽn is w = (w1, w2, …, wn)T. The IVIULCA, IVIULCGA, GSIVIULCA and GSIVIULCGA operators satisfy commutativity, idempotency and boundedness.
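The dependent-weighting idea behind the GILDOWM and GILDHWM operators can be sketched on scalar scores: each argument is weighted in proportion to its similarity to the mean, which damps the influence of biased (outlying) values. The similarity form used here is an assumption for illustration; the cited papers define s(ẽi, ẽ) on full IULVs rather than on scalars.

```python
def dependent_weights(scores):
    """Weights proportional to similarity with the mean score:
    sim_i = 1 - |x_i - mean| / max_j |x_j - mean|, normalized to sum
    to 1, so outliers receive smaller weights."""
    mean = sum(scores) / len(scores)
    dmax = max(abs(x - mean) for x in scores) or 1.0  # guard: all equal
    sims = [1.0 - abs(x - mean) / dmax for x in scores]
    total = sum(sims)
    if total == 0.0:            # e.g. n == 2: fall back to equal weights
        return [1.0 / len(scores)] * len(scores)
    return [s / total for s in sims]

# dependent_weights([0.5, 0.55, 0.95]) ~ [0.41, 0.59, 0.0]:
# the outlying score 0.95 is effectively discounted.
```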
An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Some intuitionistic linguistic fuzzy AOs considering the interrelationships between criteria <s> We introduce the power average to provide an aggregation operator which allows argument values to support each other in the aggregation process. The properties of this operator are described. We discuss the idea of a power median. We introduce some possible formulations for the support function used in the power average. We extend the supported aggregation facility of empowerment to a wider class of mean operators, such as the OWA (ordered weighted averaging) operator and the generalized mean operator. <s> BIB001 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Some intuitionistic linguistic fuzzy AOs considering the interrelationships between criteria <s> The power-average (PA) operator and the power-ordered-weighted-average (POWA) operator are the two nonlinear weighted-average aggregation tools whose weighting vectors depend on the input arguments. In this paper, we develop a power-geometric (PG) operator and its weighted form, which are on the basis of the PA operator and the geometric mean, and develop a power-ordered-geometric (POG) operator and a power-ordered-weighted-geometric (POWG) operator, which are on the basis of the POWA operator and the geometric mean, and study some of their properties. We also discuss the relationship between the PA and PG operators and the relationship between the POWA and POWG operators. Then, we extend the PG and POWG operators to uncertain environments, i.e., develop an uncertain PG (UPG) operator and its weighted form, and an uncertain power-ordered-weighted-geometric (UPOWG) operator to aggregate the input arguments taking the form of interval of numerical values. Furthermore, we utilize the weighted PG and POWG operators, respectively, to develop an approach to group decision making based on multiplicative preference relations and utilize the weighted UPG and UPOWG operators, respectively, to develop an approach to group decision making based on uncertain multiplicative preference relations. Finally, we apply both the developed approaches to broadband Internet-service selection. <s> BIB002 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Some intuitionistic linguistic fuzzy AOs considering the interrelationships between criteria <s> With respect to multiple attribute decision making (MADM) problems, in which attribute values take the form of intuitionistic uncertain linguistic information, a new decision-making method based on the intuitionistic uncertain linguistic weighted Bonferroni OWA operator is developed. First, the score function, accuracy function, and comparative method of the intuitionistic uncertain linguistic numbers are introduced. Then, an intuitionistic uncertain linguistic Bonferroni OWA (IULBOWA) operator and an intuitionistic uncertain linguistic weighted Bonferroni OWA (IULWBOWA) operator are developed. Furthermore, some properties of the IULBOWA and IULWBOWA operators, such as commutativity, idempotency, monotonicity, and boundedness, are discussed. At the same time, some special cases of these operators are analyzed. Based on the IULWBOWA operator, the multiple attribute decision-making method with intuitionistic uncertain linguistic information is proposed. Finally, an illustrative example is given to illustrat... 
<s> BIB003 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Some intuitionistic linguistic fuzzy AOs considering the interrelationships between criteria <s> Abstract With respect to multiple attribute group decision making (MAGDM) problems in which both the attribute weights and the expert weights take the form of crisp numbers, and attribute values take the form of intuitionistic uncertain linguistic variables, some new intuitionistic uncertain linguistic Heronian mean operators, such as intuitionistic uncertain linguistic arithmetic Heronian mean (IULAHM) operator, intuitionistic uncertain linguistic weighted arithmetic Heronian mean (IULWAHM) operator, intuitionistic uncertain linguistic geometric Heronian mean (IULGHM) operator, and intuitionistic uncertain linguistic weighted geometric Heronian mean (IULWGHM) operator, are proposed. Furthermore, we have studied some desired properties of these operators and discussed some special cases with respect to the different parameter values in these operators. Moreover, with respect to multiple attribute group decision making (MAGDM) problems in which both the attribute weights and the expert weights take the form of real numbers, attribute values take the form of intuitionistic uncertain linguistic variables, some approaches based on the developed operators are proposed. Finally, an illustrative example has been given to show the steps of the developed methods and to discuss the influences of different parameters on the decision-making results. <s> BIB004 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Some intuitionistic linguistic fuzzy AOs considering the interrelationships between criteria <s> With respect to multiple attribute group decision making (MAGDM) problems in which the attributes are dependent and the attribute values take the forms of intuitionistic linguistic numbers and intuitionistic uncertain linguistic numbers, this paper investigates two novel MAGDM methods based on Maclaurin symmetric mean (MSM) aggregation operators. First, the Maclaurin symmetric mean is extended to intuitionistic linguistic environment and two new aggregation operators are developed for aggregating the intuitionistic linguistic information, such as the intuitionistic linguistic Maclaurin symmetric mean (ILMSM) operator and the weighted intuitionistic linguistic Maclaurin symmetric mean (WILMSM) operator. Then, some desirable properties and special cases of these operators are discussed in detail. Furthermore, this paper also develops two new Maclaurin symmetric mean operators for aggregating the intuitionistic uncertain linguistic information, including the intuitionistic uncertain linguistic Maclaurin symmetric mean (IULMSM) operator and the weighted intuitionistic uncertain linguistic Maclaurin symmetric mean (WIULMSM) operator. Based on the WILMSM and WIULMSM operators, two approaches to MAGDM are proposed under intuitionistic linguistic environment and intuitionistic uncertain linguistic environment, respectively. Finally, two practical examples of investment alternative evaluation are given to illustrate the applications of the proposed methods. 
<s> BIB005 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Some intuitionistic linguistic fuzzy AOs considering the interrelationships between criteria <s> Coal mine safety has been a pressing issue for many years, and it is a constant and non-negligible problem that must be addressed during any coal mining process. This paper focuses on developing an innovative multi-criteria decision-making (MCDM) method to address coal mine safety evaluation problems. Because lots of uncertain and fuzzy information exists in the process of evaluating coal mine safety, linguistic intuitionistic fuzzy numbers (LIFNs) are introduced to depict the evaluation information necessary to the process. Furthermore, the handling of qualitative information requires the effective support of quantitative tools, and the linguistic scale function (LSF) is therefore employed to deal with linguistic intuitionistic information. First, the distance, a valid ranking method, and Frank operations are proposed for LIFNs. Subsequently, the linguistic intuitionistic fuzzy Frank improved weighted Heronian mean (LIFFIWHM) operator is developed. Then, a linguistic intuitionistic MCDM method for coal mine safety evaluation is constructed based on the developed operator. Finally, an illustrative example is provided to demonstrate the proposed method, and its feasibility and validity are further verified by a sensitivity analysis and comparison with other existing methods. <s> BIB006
In some real decision-making problems, we should take into account the interrelationships between criteria, because some criteria support one another. BIB003 presented the IULBOWM and WIULBOWM operators, and BIB004 proposed the IULAHM, IULGHM, WIULAHM and WIULGHM operators. In the definition from BIB005 , ξ(i) is the ith largest element of the tuple ẽ1, ẽ2, …, ẽn, and the OWA weighting vector of dimension n is w = (w1, w2, …, wn)T, with wi ∈ [0, 1] and ∑_{i=1}^{n} wi = 1. Definition 13. BIB003 Let ẽi (i = 1, 2, …, n) be a collection of IULVs. The value aggregated by the WIULBOWM operator is an IULV, where ξ(i) is the ith largest element of the tuple ẽ1, ẽ2, …, ẽn and the OWA weighting vector of dimension n is w = (w1, w2, …, wn)T, with wi ∈ [0, 1] and ∑_{i=1}^{n} wi = 1. Obviously, the above IULBOWM and WIULBOWM operators have the desirable properties of commutativity, idempotency, monotonicity and boundedness. Furthermore, the IUL partitioned BM (IULPBM) operator, the weighted IUL partitioned BM operator, the geometric IUL partitioned BM operator and the weighted geometric IUL partitioned BM operator were introduced, because interrelationships do not always exist among all criteria: the criteria can be divided into several parts based on their categories, such that interrelationships exist only between criteria in the same part. At the same time, the DOWM operator has the advantage of relieving the impact of biased criteria values. It is easy to see that the IULGHM operator has the properties of monotonicity, idempotency and boundedness. Definition 15. BIB004 It is easy to prove that the WIULAHM operator does not have the property of idempotency, but it does have the property of monotonicity. Definition 16. BIB004 Let ẽi (i = 1, 2, …, n) be a collection of IULVs. The value aggregated by the IULGHM operator is an IULV. It is easy to see that the IULGHM operator has the properties of monotonicity, idempotency and boundedness. Definition 17. BIB004 Let ẽi (i = 1, 2, …, n) be a collection of IULVs. The value aggregated by the WIULGHM operator is an IULV, where the weighting vector of ẽ1, ẽ2, …, ẽn is w = (w1, w2, …, wn)T, wi ∈ [0, 1] and ∑_{i=1}^{n} wi = 1, and n is a balance parameter. Obviously, the WIULGHM operator does not have the property of idempotency, but it does have the property of monotonicity. In addition, BIB006 proposed the weighted intuitionistic linguistic fuzzy Frank improved Heronian mean operator to construct a coal mine safety evaluation, and the generalized ILHM operator and weighted GILHM operator have also been investigated. Let ẽi (i = 1, 2, …, n) be a collection of IULVs and r = 1, 2, …, n. The value aggregated by the IULMSM operator is an IULV. It is easy to demonstrate that the IULMSM operator has the properties of idempotency, monotonicity, boundedness and commutativity. Let ẽi (i = 1, 2, …, n) be a collection of IULVs and r = 1, 2, …, n. The value aggregated by the WIULMSM operator is an IULV. The WIULMSM operator is monotonic with respect to the parameter r. Sometimes, for the sake of selecting the best alternative, we should not only take into account the criteria values but also consider the relationships among the aggregated arguments. The power average (PA) operator, first introduced by Yager BIB001 BIB002 , can overcome the above weakness by assigning different weights to the arguments. Recently, based on the PA and BM operators, the ILF power BM and weighted ILF power BM operators were presented.
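The pairwise coupling that lets the Bonferroni-mean family model interrelationships is easiest to see on real numbers. The sketch below implements the scalar Bonferroni mean; the IULBOWM-type operators cited above lift exactly this scheme to IULV arithmetic, so it is shown here only as an illustration of the mechanism.

```python
def bonferroni_mean(a, p=1.0, q=1.0):
    """BM^{p,q}(a_1,...,a_n) =
    ( (1/(n(n-1))) * sum over i != j of a_i^p * a_j^q )^(1/(p+q)).
    Every pair (i, j) with i != j enters the sum, which is how the BM
    captures mutual support between criteria."""
    n = len(a)
    s = sum(a[i] ** p * a[j] ** q
            for i in range(n) for j in range(n) if i != j)
    return (s / (n * (n - 1))) ** (1.0 / (p + q))

# bonferroni_mean([0.4, 0.6, 0.8]) ~ 0.59: close to the arithmetic
# mean, but sensitive to how the inputs reinforce one another.
```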
An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Induced IL fuzzy AOs <s> In this paper, we define various generalized induced linguistic aggregation operators, including generalized induced linguistic ordered weighted averaging (GILOWA) operator, generalized induced linguistic ordered weighted geometric (GILOWG) operator, generalized induced uncertain linguistic ordered weighted averaging (GIULOWA) operator, generalized induced uncertain linguistic ordered weighted geometric (GIULOWG) operator, etc. Each object processed by these operators consists of three components, where the first component represents the importance degree or character of the second component, and the second component is used to induce an ordering, through the first component, over the third components which are linguistic variables (or uncertain linguistic variables) and then aggregated. It is shown that the induced linguistic ordered weighted averaging (ILOWA) operator and linguistic ordered weighted averaging (LOWA) operator are the special cases of the GILOWA operator, induced linguistic ordered weighted geometric (ILOWG) operator and linguistic ordered weighted geometric (LOWG) operator are the special cases of the GILOWG operator, the induced uncertain linguistic ordered weighted averaging (IULOWA) operator and uncertain linguistic ordered weighted averaging (ULOWA) operator are the special cases of the GIULOWA operator, and that the induced uncertain linguistic ordered weighted geometric (IULOWG) operator and uncertain LOWG operator are the special cases of the GILOWG operator. <s> BIB001 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Induced IL fuzzy AOs <s> An intuitionistic fuzzy set, characterized by a membership function and a non-membership function, is a generalization of fuzzy set. In this paper, based on score function and accuracy function, we introduce a method for the comparison between two intuitionistic fuzzy values and then develop some aggregation operators, such as the intuitionistic fuzzy weighted averaging operator, intuitionistic fuzzy ordered weighted averaging operator, and intuitionistic fuzzy hybrid aggregation operator, for aggregating intuitionistic fuzzy values and establish various properties of these operators. <s> BIB002 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Induced IL fuzzy AOs <s> We study the induced generalized aggregation operators under intuitionistic fuzzy environments. Choquet integral and Dempster-Shafer theory of evidence are applied to aggregate inuitionistic fuzzy information and some new types of aggregation operators are developed, including the induced generalized intuitionistic fuzzy Choquet integral operators and induced generalized intuitionistic fuzzy Dempster-Shafer operators. Then we investigate their various properties and some of their special cases. Additionally, we apply the developed operators to financial decision making under intuitionistic fuzzy environments. Some extensions in interval-valued intuitionistic fuzzy situations are also pointed out. <s> BIB003 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Induced IL fuzzy AOs <s> We introduce a wide range of induced and linguistic generalized aggregation operators. First, we present the induced linguistic generalized ordered weighted averaging (ILGOWA) operator. 
It is a generalization of the OWA operator that uses linguistic variables, order inducing variables and generalized means in order to provide a more general formulation. One of its main results is that it includes a wide range of linguistic aggregation operators such as the induced linguistic OWA (ILOWA), the induced linguistic OWG (ILOWG) and the linguistic generalized OWA (LGOWA) operator. We further generalize the ILGOWA operator by using quasi-arithmetic means obtaining the induced linguistic quasi-arithmetic OWA (Quasi-ILOWA) operator and by using hybrid averages forming the induced linguistic generalized hybrid average (ILGHA) operator. We also present a further extension with Choquet integrals. We call it the induced linguistic generalized Choquet integral aggregation (ILGCIA). We end the paper with an application of the new approach in a linguistic group decision making problem. <s> BIB004 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Induced IL fuzzy AOs <s> With respect to multiple attribute group decision making (MAGDM) problems, in which the attribute weights take the form of real numbers, and the attribute values take the form of intuitionistic fuzzy linguistic variables, a decision analysis approach is proposed. In this paper, we develop an intuitionistic fuzzy linguistic induce OWA (IFLIOWA) operator and analyze the properties of it by utilizing some operational laws of intuitionistic fuzzy linguistic variables. A new method based on the IFLIOWA operator for multiple attribute group decision making (MAGDM) is presented. Finally, a numerical example is used to illustrate the applicability and effectiveness of the proposed method. <s> BIB005
Now, induced AOs have become a hot topic in the research literature. These operators take the arguments as pairs, in which the first element, called the order-inducing variable, is used to induce an ordering over the second element, which is the aggregated variable. Inspired by Xu's work BIB001 BIB003 BIB002 , BIB005 introduced the IFLIOWM and IFLIOWGM operators. Definition 26. Let ẽi (i = 1, 2, …, n) be a collection of IULVs. The value aggregated by the IFLIOWA operator is an IULV, where the weighting vector of ẽ1, ẽ2, …, ẽn is w = (w1, w2, …, wn)T, satisfying wi ∈ [0, 1] and ∑_{i=1}^{n} wi = 1. Definition 27. BIB005 BIB004 Let ẽi (i = 1, 2, …, n) be a collection of IULVs. The value aggregated by the IFLIOWGA operator is an IULV, with the weighting vector w = (w1, w2, …, wn)T satisfying wi ∈ [0, 1] and ∑_{i=1}^{n} wi = 1. The IFLIOWA and IFLIOWGA operators satisfy commutativity, idempotency, monotonicity and boundedness.
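The order-inducing mechanism shared by these operators can be sketched on real-valued arguments as follows; the IFLIOWA/IFLIOWGA operators apply the same reordering with IULV arguments and IULV arithmetic. The function and variable names are illustrative.

```python
def induced_owa(pairs, weights):
    """Induced OWA: 'pairs' is a list of (inducing_value, argument).
    Arguments are reordered by the order-inducing values (descending),
    not by their own magnitudes, and then combined with the weights."""
    reordered = [arg for _, arg in
                 sorted(pairs, key=lambda p: p[0], reverse=True)]
    return sum(w * a for w, a in zip(weights, reordered))

# induced_owa([(0.9, 3.0), (0.2, 8.0), (0.5, 5.0)], [0.5, 0.3, 0.2])
# = 0.5*3.0 + 0.3*5.0 + 0.2*8.0 = 4.6
```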
An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> The applications about the AOs of IULVs <s> With respect to multiple attribute group decision making (MAGDM) problems in which the attribute weights and the expert weights take the form of real numbers and the attribute values take the form of intuitionistic uncertain linguistic variables, new group decision making methods have been developed. First, operational laws, expected value definitions, score functions and accuracy functions of intuitionistic uncertain linguistic variables are introduced. Then, an intuitionistic uncertain linguistic weighted geometric average (IULWGA) operator and an intuitionistic uncertain linguistic ordered weighted geometric (IULOWG) operator are developed. Furthermore, some desirable properties of these operators, such as commutativity, idempotency, monotonicity and boundedness, have been studied, and an intuitionistic uncertain linguistic hybrid geometric (IULHG) operator, which generalizes both the IULWGA operator and the IULOWG operator, was developed. Based on these operators, two methods for multiple attribute group decision making problems with intuitionistic uncertain linguistic information have been proposed. Finally, an illustrative example is given to verify the developed approaches and demonstrate their practicality and effectiveness. <s> BIB001 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> The applications about the AOs of IULVs <s> With respect to multiple attribute group decision making (MADM) problems in which attribute values take the form of intuitionistic linguistic numbers, some new group decision making methods are developed. Firstly, some operational laws, expected value, score function and accuracy function of intuitionistic linguistic numbers are introduced. Then, an intuitionistic linguistic power generalized weighted average (ILPGWA) operator and an intuitionistic linguistic power generalized ordered weighted average (ILPGOWA) operator are developed. Furthermore, some desirable properties of the ILPGWA and ILPGOWA operators, such as commutativity, idempotency and monotonicity, etc. are studied. At the same time, some special cases of the generalized parameters in these operators are analyzed. Based on the ILPGWA and ILPGOWA operators, two approaches to multiple attribute group decision making with intuitionistic linguistic information are proposed. Finally, an illustrative example is given to verify the developed approaches and to demonstrate their practicality and effectiveness. <s> BIB002 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> The applications about the AOs of IULVs <s> For multi-criteria group decision making problems with intuitionistic linguistic information, we define a new score function and a new accuracy function of intuitionistic linguistic numbers, and propose a simple approach for the comparison between two intuitionistic linguistic numbers. Based on the intuitionistic linguistic weighted arithmetic averaging ILWAA operator, we define two new intuitionistic linguistic aggregation operators, such as the intuitionistic linguistic ordered weighted averaging ILOWA operator and the intuitionistic linguistic hybrid aggregation ILHA operator, and establish various properties of these operators. The ILOWA operator weights the ordered positions of the intuitionistic linguistic numbers instead of weighting the arguments themselves. 
The ILHA operator generalizes both the ILWAA operator and the ILOWA operator at the same time, and reflects the importance degrees of both the given intuitionistic linguistic numbers and the ordered positions of these arguments. Furthermore, based on the ILHA operator and the ILWAA operator, we develop a multi-criteria group decision making approach, in which the criteria values are intuitionistic linguistic numbers and the criteria weight information is known completely. Finally, an example is given to illustrate the feasibility and effectiveness of the developed method. <s> BIB003 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> The applications about the AOs of IULVs <s> Dealing with uncertainty is always a challenging problem, and different tools have been proposed to deal with it. Fuzzy sets were presented to manage situations in which experts have some membership value to assess an alternative. The fuzzy linguistic approach has been applied successfully to many problems. Linguistic information is expressed by means of 2-tuples, which are composed of a linguistic term and a numeric value assessed in [−0.5, 0.5). Intuitionistic fuzzy sets were presented to manage situations in which experts have some membership and nonmembership value to assess an alternative. In this paper, the concept of an I2LI model is developed to provide a linguistic and computational basis to manage the situations in which experts assess an alternative in possible and impossible linguistic variable and their translation parameter. A method to solve the group decision making problem based on intuitionistic 2-tuple linguistic information I2LI by the group of experts is formulated. Some operational laws on I2LI are introduced. Based on these laws, new aggregation operators are introduced to aggregate the collective opinion of decision makers. An illustrative example is given to show the practicality and feasibility of our proposed aggregation operators and group decision making method. <s> BIB004 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> The applications about the AOs of IULVs <s> In this paper, we first introduce some operations on interval-valued intuitionistic uncertain linguistic sets, and further develop the induced interval-valued intuitionistic uncertain linguistic ordered weighted geometric (I-IVIULOWG) operator. We also establish some desirable properties of this operator, such as commutativity, idempotency and monotonicity. Then, we apply the induced interval-valued intuitionistic uncertain linguistic ordered weighted geometric (I-IVIULOWG) operator to deal with the interval-valued intuitionistic uncertain linguistic multiple attribute decision making problems. Finally, an illustrative example for evaluating the knowledge management performance is given to verify the developed approach and to demonstrate its practicality and effectiveness. <s> BIB005 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> The applications about the AOs of IULVs <s> The problem of evaluating the design patterns of the Micro-Air vehicle is a multiple attribute decision making problem.
In this paper, we introduce the concept of interval-valued intuitionistic uncertain linguistic sets and propose the induced interval-valued intuitionistic uncertain linguistic ordered weighted average (I-IVIULOWA) operator on the basis of the interval-valued intuitionistic uncertain linguistic ordered weighted average (IVIULOWA) operator and IOWA operator. We also study some desirable properties of the proposed operator, such as commutativity, idempotency and monotonicity. Then, we utilize the induced interval-valued intuitionistic uncertain linguistic ordered weighted average (I-IVIULOWA) operator to solve the multiple attribute decision making problems with interval-valued intuitionistic uncertain linguistic information. Finally, an illustrative example for evaluating the design patterns of the Micro-Air vehicle is given. <s> BIB006 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> The applications about the AOs of IULVs <s> Abstract This paper presents a new two-tier decision making framework with linguistic preferences for scientific decision making. The major reason for adopting linguistic preference is to ease the process of rating of alternatives by allowing decision makers (DMs) to strongly emphasize their opinion on each alternative. In the first tier, aggregation is done using a newly proposed operator called linguistic based aggregation (LBA), which aggregates linguistic terms directly without making any conversion. The main motivation for this proposal is driven by the previous studies on aggregation theory which reveal that conversion leads to loss of information and formation of virtual sets which are no longer sensible and rational for the decision making process. Secondly, in the next tier, a new ranking method called IFSP (intuitionistic fuzzy set based PROMETHEE) is proposed, which is an extension to PROMETHEE (preference ranking organization method for enrichment evaluation) under the intuitionistic fuzzy set (IFS) context. Unlike previous ranking methods, this ranking method follows a new formulation by considering the personal choice of the DMs over each alternative. The main motivation for such formulation is derived from the notion of not just obtaining a suitable alternative but also coherently satisfying the DMs' viewpoint during the decision process. Finally, the practicality of the framework is tested by using a supplier selection (SS) problem for an automobile factory. The strength and weakness of the proposed LBA-IFSP framework are verified by comparing with other methods under the realm of theoretical and numerical analysis. The results from the analysis infer that the proposed LBA-IFSP framework is rationally coherent to the DMs' viewpoint, moderately consistent with other methods and highly stable and robust against the rank reversal issue. <s> BIB007 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> The applications about the AOs of IULVs <s> Abstract In this paper, the influence of external factors on the economic state, social consequences and government responses is analyzed by using intuitionistic linguistic fuzzy operators. The analysis is based on Azerbaijani and international data for 2010-2015. <s> BIB008
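Before turning to the applications, it is worth recalling how intuitionistic linguistic numbers are compared once they have been aggregated, since nearly every method below ranks alternatives through score and accuracy functions. As a point of reference only, one commonly used convention for an intuitionistic linguistic number is sketched below; the papers surveyed in this section define several variants (e.g., expected-value forms), so this should not be read as the definition used by every cited method.

```latex
% One common score/accuracy convention for an intuitionistic
% linguistic number \alpha = \langle s_{\theta}, (\mu, \nu) \rangle,
% where s_{\theta} is a linguistic term and (\mu, \nu) are the
% membership and non-membership degrees. Illustrative only; the
% surveyed papers use several variants of these definitions.
\begin{align}
S(\alpha) &= \theta \cdot (\mu - \nu) && \text{(score)} \\
H(\alpha) &= \theta \cdot (\mu + \nu) && \text{(accuracy)}
\end{align}
% Ranking rule: compare alternatives by S; break ties with H.
```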
In this section, we give an overview of some practical applications of the AOs of IULVs and the corresponding approaches in different types of MCDM and MCGDM problems. Based on the IULWGM, OIULWGM, GIULWGM, GOIULWGM, IULBOWM, WIULBOWM, IULAHM, IULGHM, WIULAHM, WIULGHM, IULMSM, WIULMSM, GILDOWM and GILDHWM operators, among others, the corresponding MCDM or MCGDM methods were developed to solve real MCDM or MCGDM problems, such as human resource management, supply-chain management, project investment (PI) and benefit evaluation: (1) PI. BIB001 applied the MCDM methods based on the IULHG, WIULGA and WIULOG operators to solve investment problems in which an investment company wants to invest a sum of money in the best option. BIB002 developed MCGDM methods based on the GWILPA and GWILPOA operators to deal with investment evaluation problems. BIB003 proposed a MCGDM approach based on the ILHA and WILAA operators to handle a MCGDM problem involving PI. One study proposed the weighted trapezium cloud arithmetic mean operator, the ordered weighted trapezium cloud arithmetic mean operator and the trapezium cloud hybrid arithmetic operator, and then used them to solve PI problems. Another gave a real example of selecting the best investment strategy for an investment company by applying the GIVIFLIHA operator to aggregate IVIFLVs; a further work gave an illustrative example of investment selection by developing the IU2TL continuous extended BM (IU2TLCEBM) operator, and another presented a novel IFL hybrid aggregation operator to deal with an investment risk evaluation problem in the circumstance of IFLI. (2) Supplier selection. In much of the literature, researchers have attempted to address supplier selection problems by using the AOs to aggregate intuitionistic linguistic fuzzy information (ILFI). For example, one study presented a MAGDM method based on I2LGA by extending the Archimedean TN and TC to select the best supplier for a manufacturing company's core competitiveness. BIB007 applied a novel approach based on IL AOs to select the best supplier from four potential suppliers. Other works developed an IVIFLI-MCGDM approach based on the IV2TLI and applied it to a practical problem in which a purchasing department wants to select the best supplier, and presented IL multiple attribute decision making methods with the ILWIOWA and ILGWIOWA operators together with their application to low-carbon supplier selection. (3) Some other applications. Two IL MCDM approaches based on HM were given, with application to the evaluation of scientific research capacity. BIB008 thoroughly analyzed the impact of external factors on the economic state, social consequences and government responses by applying IFLI. BIB004 built an I2TLI model to solve a problem in which a family wants to purchase a house in the best locality. BIB005 presented an approach based on the induced IVIULOWG operator for evaluating knowledge management performance with IVIULFI. BIB006 built a model for evaluating the design patterns of the Micro-Air vehicle under an interval-valued intuitionistic uncertain linguistic environment.
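As a concrete illustration of how such operator-based MCDM methods work in practice, the following minimal Python sketch aggregates criterion-wise intuitionistic linguistic values for each alternative and ranks by a score function. It assumes one common set of operational laws (an ILWAA-style weighted averaging over a linguistic index plus membership/non-membership degrees) and a simple score convention; the function names and all numbers are ours for illustration and do not reproduce any single cited operator.

```python
# Minimal sketch of an ILWAA-style aggregation for MCDM ranking.
# An intuitionistic linguistic value (ILV) is modeled as (theta, mu, nu):
# a linguistic term index s_theta plus membership/non-membership degrees.
# The operational laws and score function follow one common convention;
# the cited papers differ in details, so treat this as illustrative only.

def ilwaa(values, weights):
    """Aggregate ILVs (theta, mu, nu) with criterion weights summing to 1."""
    theta = sum(w * t for (t, _, _), w in zip(values, weights))
    mu_c, nu = 1.0, 1.0
    for (_, m, n), w in zip(values, weights):
        mu_c *= (1.0 - m) ** w   # complement-product law for membership
        nu *= n ** w             # product law for non-membership
    return theta, 1.0 - mu_c, nu

def score(ilv):
    """A simple score: linguistic index scaled by net membership."""
    t, m, n = ilv
    return t * (m - n)

# Example: rank two alternatives over three criteria.
w = [0.4, 0.35, 0.25]
a1 = [(4, 0.7, 0.2), (3, 0.6, 0.3), (5, 0.8, 0.1)]
a2 = [(3, 0.8, 0.1), (4, 0.5, 0.4), (4, 0.6, 0.2)]
ranked = sorted({"A1": a1, "A2": a2}.items(),
                key=lambda kv: score(ilwaa(kv[1], w)), reverse=True)
print([name for name, _ in ranked])
```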
An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Further research directions <s> Intuitionistic fuzzy information aggregation plays an important part in intuitionistic fuzzy set theory, which has emerged as a new research direction receiving more and more attention in recent years. In this paper, we investigate the multiple attribute decision making (MADM) problems with intuitionistic fuzzy numbers. Then, we first introduce some operations on intuitionistic fuzzy sets, such as Einstein sum, Einstein product, and Einstein exponentiation, and further develop some new Einstein hybrid aggregation operators, such as the intuitionistic fuzzy Einstein hybrid averaging (IFEHA) operator and intuitionistic fuzzy Einstein hybrid geometric (IFEHG) operator, which extend the hybrid averaging (HA) operator and the hybrid geometric (HG) operator to accommodate the environment in which the given arguments are intuitionistic fuzzy values. Then, we apply the intuitionistic fuzzy Einstein hybrid averaging (IFEHA) operator and intuitionistic fuzzy Einstein hybrid geometric (IFEHG) operator to deal with multiple attribute decision making under intuitionistic fuzzy environments. Finally, some illustrative examples are given to verify the developed approach and to demonstrate its practicality and effectiveness. <s> BIB001 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Further research directions <s> Since it was proposed in 1983, the intuitionistic fuzzy set (IFS) theory has grown immensely during the past decades and has wide application in machine learning, pattern recognition, management engineering and decision making. With the rapid development and widespread adoption of IFS, thousands of research results have appeared, focusing on both theory development and practical applications. Given that a large number of research materials exist, this paper intends to make a scientometric review on IFS studies to reveal the most cited papers, influential authors and influential journals in this domain based on the 1318 references retrieved from SCIE and SSCI databases via Web of Science. The research results of this paper are based on objective data analysis and they are less affected by subjective biases, which makes them more reliable. <s> BIB002 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Further research directions <s> Abstract Real-world decision-making problems are often complex and indeterminate. Thus, uncertainty and hesitancy are usually unavoidable issues being experienced by decision makers. Dual hesitant fuzzy sets (DHFSs), which are described in terms of two functions, namely the membership hesitancy function and the non-membership hesitancy function, have been developed. In light of their properties, they are considered as a powerful vehicle to express uncertain information in the process of multi-attribute decision-making (MADM). In accordance with the practical demand, this study proposes a new MADM approach with dual hesitant fuzzy (DHF) assessments based on Frank aggregation operators. First, original score and accuracy functions of DHFS are developed to construct a new comparison method of DHFSs. The properties of the developed score and accuracy functions are analyzed. Second, we investigate the generalized operations of DHFS based on Frank t-norm and t-conorm.
The generalized operations are then used to build the generalized arithmetic and geometric aggregation operators of DHF assessments in the context of fuzzy MADM. The monotonicity of arithmetic and geometric aggregated assessments with respect to a parameter in Frank t-norm and t-conorm and their relationship are also demonstrated. In particular, the monotonicity is employed to associate the parameter with the risk attitude of a decision maker, by which a method is designed to determine the parameter. A procedure of the proposed MADM method is presented. Finally, an investment evaluation problem is discussed by the proposed approach to demonstrate its applicability and validity. A detailed sensitivity analysis and a comparative study are also conducted to highlight the validity and advantages of the approach proposed in this paper. More importantly, we discuss the situations where Frank aggregation operators are replaced by Hamacher aggregation operators at the second step of the proposed approach, through re-considering the investment evaluation problem. <s> BIB003 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Further research directions <s> Abstract Fuzzy game theory has been applied in many decision-making problems. The matrix game with interval-valued intuitionistic fuzzy numbers (IVIFNs) is investigated based on Archimedean t-conorm and t-norm. The existing matrix games with IVIFNs are all based on Algebraic t-conorm and t-norm, which are special cases of Archimedean t-conorm and t-norm. In this paper, the intuitionistic fuzzy aggregation operators based on Archimedean t-conorm and t-norm are employed to aggregate the payoffs of players. To derive the solution of the matrix game with IVIFNs, several mathematical programming models are developed based on Archimedean t-conorm and t-norm. The proposed models can be transformed into a pair of primal-dual linear programming models, based on which the solution of the matrix game with IVIFNs is obtained. It is proved that the theorems that are valid in the existing matrix game with IVIFNs are still true when the general aggregation operator is used in the proposed matrix game with IVIFNs. The proposed method is an extension of the existing ones and can provide more choices for players. An example is given to illustrate the validity and the applicability of the proposed method. <s> BIB004 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Further research directions <s> Abstract Intuitionistic fuzzy soft set (IFSS) theory is one of the successful extensions of soft set theory to deal with uncertainty by introducing the parametrization factor during the analysis. Under this environment, the present paper develops two new scaled prioritized averaging aggregation operators by considering the interaction between the membership degrees. Further, some shortcomings of the existing operators have been highlighted and overcome by the proposed operators. The principal advantage of the operators is that they consider the priority relationships between the parameters as well as experts. Furthermore, some properties based on these operators are discussed in detail. Then, we utilized these operators to solve a decision-making problem and validate it with a numerical example.
<s> BIB005 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Further research directions <s> The theory of intuitionistic fuzzy sets (IFS) is widely used for dealing with vagueness, and the Dempster–Shafer (D-S) evidence theory has a widespread use in multiple criteria decision-making problems under uncertain situations. However, there are many methods to aggregate intuitionistic fuzzy numbers (IFNs), but aggregation operators to fuse basic probability assignments (BPAs) are rare. The power average (P-A) operator, as a powerful operator, is useful and important in information fusion. Motivated by the idea of the P-A operator, in this paper, a new operator based on the IFS and D-S evidence theory is proposed, which is named the intuitionistic fuzzy evidential power average (IFEPA) aggregation operator. First, an IFN is converted into a BPA, and the uncertainty is measured in D-S evidence theory. Second, the difference between BPAs is measured by the Jousselme distance, and a satisfying support function is proposed to get the support degree between each other effectively. Then the IFEPA operator is used for aggregating the original IFNs and making a more reasonable decision. The proposed method is objective and reasonable because it is completely driven by data once some parameters are required. At the same time, it is novel and interesting. Finally, an application of the developed models to the 'One Belt, One Road' investment decision-making problems is presented to illustrate the effectiveness and feasibility of the proposed operator. <s> BIB006 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Further research directions <s> Normal neutrosophic numbers (NNNs) are an important tool to describe decision making problems, and they are more appropriate to express the incompleteness, indeterminacy and inconsistency of the evaluation information. In this paper, we firstly introduce the definition, the properties, the score function, the accuracy function, and the operational laws of the NNNs. Then, some operators are proposed, such as the normal neutrosophic power averaging operator, the normal neutrosophic weighted power averaging operator, the normal neutrosophic power geometric operator, the normal neutrosophic weighted power geometric operator, the normal neutrosophic generalized power averaging operator, and the normal neutrosophic generalized weighted power averaging (NNGWPA) operator. Furthermore, some properties of them are discussed. Thirdly, we propose a multiple attribute decision making method based on the NNGWPA operator. Finally, we use an illustrative example to demonstrate the practicality and effectiveness of the proposed method. <s> BIB007 </s> An overview of intuitionistic linguistic fuzzy information aggregations and applications <s> Further research directions <s> An outranking method is developed within the environment of hesitant intuitionistic fuzzy linguistic term sets (HIFLTSs), where the membership degree and the non-membership degree of the element are subsets of a linguistic term set. The directional Hausdorff distance, which uses HIFLTSs, is proposed, and the dominance relations are subsequently defined using this distance. Moreover, some interesting characteristics of the proposed directional Hausdorff distance are further discussed in detail.
In this context, a collective decision matrix is obtained in the form of hesitant intuitionistic fuzzy linguistic elements, and the collective data are analyzed by using the proposed ELECTRE-based outranking method. The linguistic scale functions are employed in this paper to conduct the transformation between qualitative information and quantitative data. Furthermore, based on the proposed method, we also investigate the ranking of the alternatives based on a newly proposed definition of HIFLTS. The feasibility and applicability of the proposed method are illustrated with an example, and a comparative analysis is performed with other approaches to validate the effectiveness of the proposed methodology. <s> BIB008
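To illustrate the distance-based outranking idea described above, the sketch below computes a generic directed (one-sided) Hausdorff distance between two hesitant sets of linguistic term indices. This is the textbook form under our own simplifying assumptions; the exact directional variant proposed in BIB008 may differ in its weighting and normalization.

```python
# Illustrative sketch only: a generic directed Hausdorff distance
# between two sets of linguistic term indices, as might be used when
# outranking hesitant linguistic assessments. The "directional
# Hausdorff distance" of the cited work may differ from this form.

def directed_hausdorff(A, B):
    """Max over a in A of the distance from a to the closest b in B."""
    return max(min(abs(a - b) for b in B) for a in A)

def hausdorff(A, B):
    """Symmetric Hausdorff distance: the larger of the two directions."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# Hesitant linguistic assessments on a 7-term scale s0..s6:
h1 = {3, 4, 5}   # roughly "between medium and high"
h2 = {2, 3}      # roughly "between slightly low and medium"
print(directed_hausdorff(h1, h2), hausdorff(h1, h2))  # -> 2 2
```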
Although research on the theory and approaches of IUL has produced abundant achievements, a number of works on IUL fuzzy information remain to be done in the future. First, some new operational rules, such as the Einstein and interactive operational rules BIB001 , Schweizer–Sklar TC and TN, Dombi operations, Frank TC and TN BIB003 , Archimedean TC and TN BIB004 and so on, should be extended and applied in the process of aggregating ILFI. Moreover, some other AOs, such as cloud distance operators BIB002 , the prioritized weighted mean operator BIB005 , the geometric prioritized weighted mean operator, the power generalized AO, the evidential power AO BIB006 , the induced OWA Minkowski distance operator BIB007 , the continuous OWGA operator BIB008 , the Muirhead mean operator and so on, should be developed to aggregate ILFI. Finally, applications in some real and practical fields, such as online comment analysis, smart homes, the Internet of Things, precision medicine and Big Data, internet bots, unmanned aircraft, software robots, virtual reality and so on, are also very interesting, meaningful and significant directions for the future. After doing so, a much more complete and comprehensive theoretical knowledge system of ILFI can be established.
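For reference, the Einstein and Frank families named in the first research direction take the following standard forms (this is a sketch of the textbook definitions, with parameter λ > 0, λ ≠ 1 for Frank); extending ILFI aggregation with them amounts to replacing the algebraic product and probabilistic sum in the usual operational laws by these t-norms and t-conorms.

```latex
% Standard Einstein t-norm/t-conorm:
\begin{align}
T_E(x, y) &= \frac{xy}{1 + (1 - x)(1 - y)}, &
S_E(x, y) &= \frac{x + y}{1 + xy}.
\end{align}
% Standard Frank t-norm/t-conorm (\lambda > 0, \lambda \neq 1):
\begin{align}
T_F(x, y) &= \log_{\lambda}\!\left(1 + \frac{(\lambda^{x} - 1)(\lambda^{y} - 1)}{\lambda - 1}\right), \\
S_F(x, y) &= 1 - \log_{\lambda}\!\left(1 + \frac{(\lambda^{1-x} - 1)(\lambda^{1-y} - 1)}{\lambda - 1}\right).
\end{align}
```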
A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Epidemiologic and interventional studies have led to lower treatment targets for type 2 diabetes (formerly known as non-insulin-dependent diabetes), including a glycosylated hemoglobin level of 7 percent or less and a before-meal blood glucose level of 80 to 120 mg per dL (4.4 to 6.7 mmol per L). New oral medications make these targets easier to achieve, especially in patients with recently diagnosed diabetes. Acarbose, metformin, miglitol, pioglitazone, rosiglitazone and troglitazone help the patient's own insulin control glucose levels and allow early treatment with little risk of hypoglycemia. Two new long-acting sulfonylureas (glimepiride and extended-release glipizide) and a short-acting sulfonylurea-like agent (repaglinide) simply and reliably augment the patient's insulin supply. Combinations of agents have additive therapeutic effects and can restore glucose control when a single agent is no longer successful. Oral therapy for early type 2 diabetes can be relatively inexpensive, and evidence of its cost-effectiveness is accumulating. <s> BIB001 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> The developing world does not have access to many of the best medical diagnostic technologies; they were designed for air-conditioned laboratories, refrigerated storage of chemicals, a constant supply of calibrators and reagents, stable electrical power, highly trained personnel and rapid transportation of samples. Microfluidic systems allow miniaturization and integration of complex functions, which could move sophisticated diagnostic tools out of the developed-world laboratory. These systems must be inexpensive, but also accurate, reliable, rugged and well suited to the medical and social contexts of the developing world. <s> BIB002 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A high-performance monitoring system for human blood glucose levels was developed using microchip electrophoresis with a plastic chip. The combination of reductive amination as glucose labeling with fluorescent 2-aminoacridone (AMAC) and glucose-borate complex formation realized the highly selective detection of glucose even in a complex matrix such as a blood sample. The migration time of a single peak, observed on an electropherogram of AMAC-labeled plasma, closely resembled that of a glucose standard solution. The treatment of plasma with hexokinase or glucokinase for glucose phosphorylation resulted in a peak shift from approximately 145 to 70 s, corresponding to glucose and glucose-6-phosphate, respectively. A double-logarithm plot revealed a linear relationship between glucose concentration and fluorescence intensity in the range of 1-300 μM of glucose (r² = 0.9963; p < 0.01), and the detection limit was 0.92 μM. Furthermore, blood glucose concentrations estimated from the standard curves of three subjects were compared with results obtained by conventional colorimetric analysis using glucose dehydrogenase. Good correlation was observed between methods according to simple linear regression analysis (p < 0.05). The reproducibility of the assay was about 6.3-9.1% (RSD) and the within-days and between-days reproducibility were 1.6-8.4 and 5.2-7.2%, respectively. This system enables us to determine blood glucose with high sensitivity and accuracy, and will be applicable to clinical diagnosis.
<s> BIB003 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This communication describes a simple method for patterning paper to create well-defined, millimeter-sized channels, comprising hydrophilic paper bounded by hydrophobic polymer. We believe that this type of patterned paper will become the basis for low-cost, portable, and technically simple multiplexed bioassays. We demonstrate this capability by the simultaneous detection of glucose and protein in 5 μL of urine. The assay system is small, disposable, easy to use (and carry), and requires no external equipment, reagents, or power sources. We believe this kind of system is attractive for uses in less-industrialized countries, in the field, or as an inexpensive alternative to more advanced technologies already used in clinical settings.[1-4] The analysis of biological fluids is necessary for monitoring the health of populations,[2] but these measurements are difficult to implement in remote regions such as those found in less-industrialized countries, in emergency situations, or in home health-care settings.[3] Conventional laboratory instruments provide quantitative measurements of biological samples, but they are unsuitable for these situations since they are large, expensive, and require trained personnel and considerable volumes of biological samples.[2] Other bioassay platforms provide alternatives to more expensive instruments,[5-7] but the need remains for a platform that uses small volumes of sample and that is sufficiently inexpensive to be used widely for measuring samples from large populations. We believe that paper may serve as a particularly convenient platform for running bioassays in remote locations. As a prototype for a method we believe to be particularly promising, we patterned photoresist onto chromatography paper to form defined areas of hydrophilic paper separated by hydrophobic lines or "walls"; these patterns provide spatial control of biological fluids and enable fluid transport, without pumping, due to capillary action in the millimeter-sized channels produced. This method for patterning paper makes it possible to run multiple diagnostic assays on one strip of paper, while still using only small volumes of a single sample. In a fully developed technology, patterned photoresist would be replaced by an appropriate printing technology, but patterning paper with photoresist is: i) convenient for prototyping these devices, and ii) a useful new micropatterning technology in its own right. We patterned chromatography paper with SU-8 2010 photoresist as shown in Figure 1a and as described below: we soaked a 7.5-cm diameter piece of chromatography paper in 2 mL of SU-8 2010 for 30 s, spun it at 2000 rpm for 30 s, and then baked it at 95 °C for 5 min to remove the cyclopentanone in the SU-8 formula. We then exposed the photoresist and paper to 405 nm UV light (50 mW/cm²) for 10 s through a photo-mask (CAD/Art Services, Inc.) that was aligned using a mask aligner (OL-2 Mask Aligner, AB-M, Inc). After exposure, we baked the paper a second time at 95 °C for 5 min to cross-link the exposed portions of the resist. The unpolymerized photoresist was removed by soaking the paper in propylene glycol monomethyl ether acetate (PGMEA) (5 min), and by washing the pattern with propan-2-ol (3 × 10 mL).
The paper was more hydrophobic after it was patterned, presumably due to residual resist bound to the paper, so we exposed the entire surface to an oxygen plasma for 10 s at 600 millitorr (SPI Plasma-Prep II, Structure Probe, Inc) to increase the hydrophilicity of the paper (Figures 2a and 2b). (Figure 1: Chromatography paper patterned with photoresist. The darker lines are cured photoresist; the lighter areas are unexposed paper. (a) Patterned paper after absorbing 5 μL of Waterman red ink by capillary action. The central channel absorbs the sample ...) (Figure 2: Assays contaminated with (a) dirt, (b) plant pollen, and (c) graphite powder. The pictures were taken before and after running an artificial urine solution that contained 550 mM glucose and 75 μM BSA. The particulates do not move up the channels ...) The patterned paper can be derivatized for biological assays by adding appropriate reagents to the test areas (Figures 1b and 2b). In this communication, we demonstrate the method by detecting glucose and protein,[8] but the surface should be suitable for measuring many other analytes as well.[7] The glucose assay is based on the enzymatic oxidation of iodide to iodine,[9] where a color change from clear to brown is associated with the presence of glucose.[10] The protein assay is based on the color change of tetrabromophenol blue (TBPB) when it ionizes and binds to proteins;[11] a positive result in this case is indicated by a color change from yellow to blue. For the glucose assay, we spotted 0.3 μL of a 0.6 M solution of potassium iodide, followed by 0.3 μL of a 1:5 horseradish peroxidase/glucose oxidase solution (15 units of protein per mL of solution). For the protein assay, we spotted 0.3 μL of a 250-mM citrate buffer (pH 1.8) in a well separate from the glucose assay, and then layered 0.3 μL of a 3.3 mM solution of tetrabromophenol blue (TBPB) in 95% ethanol over the citrate buffer. The spotted reagents were allowed to air dry at room temperature. This pre-loaded paper gave consistent results for the protein assay regardless of storage temperature and time (when stored for 15 d both at 0 °C and at 23 °C, wrapped in aluminum foil). The glucose assay was sensitive to storage conditions, and showed decreased signal for assays run 24 h after spotting the reagents (when stored at 23 °C); when stored at 0 °C, however, the glucose assay was as sensitive after day 15 as it was on day 1. We measured artificial samples of glucose and protein in clinically relevant ranges (2.5-50 mM for glucose and 0.38-7.5 μM for bovine serum albumin (BSA))[12, 13] by dipping the bottom of each test strip in 5 μL of a pre-made test solution (Figure 2d). The fluid filled the entire pattern within ca. one minute, but the assays required 10-11 min for the paper to dry and for the color to fully develop.[14] In all cases, we observed color changes corresponding roughly in intensity to the amount of glucose and protein in the test samples, where the lowest concentrations define the lower limits to which these assays can be used (Figure 2e). For comparison, commercially-available dipsticks detect glucose at concentrations as low as 5 mM[7, 9] and protein as low as 0.75 μM;[6, 15] these limits indicate that these paper-based assays are comparable in sensitivity to commercial dipstick assays. Our assay format also allows for the measurement of multiple analytes.
This paper-based assay is suitable for measuring multiple samples in parallel and in a relatively short period of time. For example, in one trial, one researcher was able to run 20 different samples (all with 550 mM glucose and 75 μM BSA) within 7.5 min (followed by another 10.5 min for the color to fully develop). An 18-min assay of this type—one capable of measuring two analytes in 20 different samples—may be efficient enough to use in high-throughput screens of larger sample pools. In the field, samples will not be measured under sterile conditions, and dust and dirt may contaminate the assays. The combination of paper and capillary action provides a mechanism for separating particulates from a biological fluid. As a demonstration, we purposely contaminated the artificial urine samples with quantities of dirt, plant pollen, and graphite powder at levels higher than we might expect to see in the samples in the field. These particulates do not move up the channels under the action of capillary wicking, and do not interfere with the assay (Figure 3). Paper strips have been used in biomedical assays for decades because they offer an inexpensive platform for colorimetric chemical testing.[1] Patterned paper has characteristics that lead to miniaturized assays that run by capillary action (e.g., without external pumping), with small volumes of fluids. These methods suggest a path for the development of simple, inexpensive, and portable diagnostic assays that may be useful in remote settings, and in particular, in less-industrialized countries where simple assays are becoming increasingly important for detecting disease and monitoring health,[16, 17] for environmental monitoring, in veterinary and agricultural practice and for other applications. <s> BIB004 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This article describes FLASH (Fast Lithographic Activation of Sheets), a rapid method for laboratory prototyping of microfluidic devices in paper. Paper-based microfluidic devices are emerging as a new technology for applications in diagnostics for the developing world, where low cost and simplicity are essential. FLASH is based on photolithography, but requires only a UV lamp and a hotplate; no clean-room or special facilities are required (FLASH patterning can even be performed in sunlight if a UV lamp and hotplate are unavailable). The method provides channels in paper with dimensions as small as 200 µm in width and 70 µm in height; the height is defined by the thickness of the paper. Photomasks for patterning paper-based microfluidic devices can be printed using an ink-jet printer or photocopier, or drawn by hand using a waterproof black pen. FLASH provides a straightforward method for prototyping paper-based microfluidic devices in regions where the technological support for conventional photolithography is not available. <s> BIB005 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This article describes a method for fabricating 3D microfluidic devices by stacking layers of patterned paper and double-sided adhesive tape. Paper-based 3D microfluidic devices have capabilities in microfluidics that are difficult to achieve using conventional open-channel microsystems made from glass or polymers.
In particular, 3D paper-based devices wick fluids and distribute microliter volumes of samples from single inlet points into arrays of detection zones (with numbers up to thousands). This capability makes it possible to carry out a range of new analytical protocols simply and inexpensively (all on a piece of paper) without external pumps. We demonstrate a prototype 3D device that tests 4 different samples for up to 4 different analytes and displays the results of the assays in a side-by-side configuration for easy comparison. Three-dimensional paper-based microfluidic devices are especially appropriate for use in distributed healthcare in the developing world and in environmental monitoring and water analysis. <s> BIB006 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Paper-based microfluidic patterns have been demonstrated in recent literature to have a significant potential in developing low-cost analytical devices for telemedicine and general health monitoring. This study reports a new method for making microfluidic patterns on a paper surface using plasma treatment. Paper was first hydrophobized and then treated using plasma in conjunction with a mask. This formed well defined hydrophilic channels on the paper. Paper-based microfluidic systems produced in this way retained the flexibility of paper and a variety of patterns could be formed. A major advantage of this system is that simple functional elements such as switches and filters can be built into the patterns. Examples of these elements are given in this study. <s> BIB007 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> OBJECTIVE: To assess the effect of self-monitoring of blood glucose (SMBG) on glycaemic control in non-insulin treated patients with type 2 diabetes by means of a systematic review and meta-analysis. RESEARCH DESIGN AND METHODS: MEDLINE and the Cochrane Controlled Trials Register were searched from inception to January 2009 for randomised controlled trials comparing SMBG with non-SMBG or more frequent SMBG with less intensive SMBG. Electronic searches were supplemented by manual searching of reference lists and reviews. The comparison of SMBG with non-SMBG was the primary, the comparison of more frequent SMBG with less intensive SMBG the secondary analysis. Stratified analyses were performed to evaluate modifying factors. MAIN OUTCOME MEASURES: The primary endpoint was glycated haemoglobin A1c (HbA1c); secondary outcomes included fasting glucose and the occurrence of hypoglycaemia. Using random effects models a weighted mean difference (WMD) was calculated for HbA1c and a risk ratio (RR) was calculated for hypoglycaemia. Due to considerable heterogeneity, no combined estimate was computed for fasting glucose. RESULTS: Fifteen trials (3270 patients) were included in the analyses. SMBG was associated with a larger reduction in HbA1c compared with non-SMBG (WMD -0.31%, 95% confidence interval -0.44 to -0.17). The beneficial effect associated with SMBG was not attenuated over longer follow-up. SMBG significantly increased the probability of detecting a hypoglycaemia (RR 2.10, 1.37 to 3.22). More frequent SMBG did not result in significant changes of HbA1c compared with less intensive SMBG (WMD -0.21%, 95% CI -0.57 to 0.15).
CONCLUSIONS: SMBG compared with non-SMBG is associated with a significantly improved glycaemic control in non-insulin treated patients with type 2 diabetes. The added value of more frequent SMBG compared with less intensive SMBG remains uncertain. <s> BIB008 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Here we present a simple and low-cost production method to generate paper-based microfluidic devices with wax for portable bioassay. The wax patterning method we introduced here included three different ways: (i) painting with a wax pen, (ii) printing with an inkjet printer followed by painting with a wax pen, (iii) printing by a wax printer directly. The whole process was easy to operate and could be finished within 5-10 min without the use of a clean room, UV lamp, organic solvent, etc. Horseradish peroxidase, BSA and glucose assays were conducted to verify the performance of wax-patterned paper. <s> BIB009 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This technical note describes a detailed study on wax printing, a simple and inexpensive method for fabricating microfluidic devices in paper using a commercially available printer and hot plate. The printer prints patterns of solid wax on the surface of the paper, and the hot plate melts the wax so that it penetrates the full thickness of the paper. This process creates complete hydrophobic barriers in paper that define hydrophilic channels, fluid reservoirs, and reaction zones. The design of each device was based on a simple equation that accounts for the spreading of molten wax in paper. <s> BIB010 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Abstract Objectives The aim of this study was to assess serum ischemia modified albumin (IMA) in type 2 diabetes patients and determine its correlation with other risk factors for chronic complications such as inflammation and hyperglycemia. Design and methods Fasting glucose, glycated albumin, total cholesterol, HDL cholesterol, LDL cholesterol, triglycerides, creatinine, uric acid, albumin, lactic acid, high-sensitivity C-reactive protein (hs-CRP) and IMA were measured in 80 patients with type 2 diabetes and 26 controls. Results Fasting glucose, glycated albumin, triglycerides, creatinine, IMA and hs-CRP were significantly higher in patients with type 2 diabetes. Weak but significant correlations between IMA and fasting glucose, IMA and hs-CRP, hs-CRP and HDL cholesterol, and hs-CRP and fasting glucose were observed. Conclusions We have shown higher levels of IMA and hs-CRP in type 2 diabetes. Hyperglycemia and inflammation reduce the capacity of albumin to bind cobalt, resulting in higher IMA levels. <s> BIB011 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> BACKGROUND: Self-monitoring of Blood Glucose (SMBG) is purported to improve glycaemic control, measured by glycosylated haemoglobin (HbA1c). The effectiveness of SMBG in type 2 diabetes mellitus (T2DM) is well-documented, though no systematic review of the economic evidence surrounding the use of SMBG in T2DM has been performed. OBJECTIVES: To perform a systematic review of economic evaluations of SMBG in T2DM patients. INCLUSION CRITERIA: All adult patients suffering from T2DM were included.
Outcomes of differing treatment groups, where specified, were also recorded. Studies which examined SMBG as an intervention to control blood glucose were considered. To be included, studies must have made a formal attempt to relate cost to outcome data in a cost-effectiveness or cost utility analysis. The main outcomes were in terms of cost-effectiveness and cost-utility. SEARCH STRATEGY: Extensive electronic searches were conducted. Searching was carried out, for the time period 1990 to January 2009, for full text papers and conference abstracts. METHODOLOGICAL QUALITY: Methodological quality of included studies was assessed by two reviewers using the standard critical appraisal tools from the JBI-Actuari (Joanna Briggs Institute-Analysis of Cost, Technology and Utilisation Assessment and Review Instrument). Included modelling studies were also assessed using the review criteria of economic models set out by Phillips and colleagues. DATA COLLECTION: Data from included studies were extracted using the JBI-Actuari extraction tool. DATA SYNTHESIS: Studies were grouped by outcome measure and summarised using tabular and narrative formats. RESULTS: Five studies met the review criteria. Three were model-based analyses assessing long-term cost-effectiveness of SMBG, all of which concluded that SMBG was cost-effective. Two further primary economic evaluations assessed short-term cost-effectiveness. Their results found SMBG to be associated with increased cost and no significant reduction in HbA1c. The studies examined subgroups in terms of their treatment protocols, and SMBG was considered more likely to be cost-effective in drug and insulin treated groups compared to diet and exercise groups. CONCLUSIONS: Economic evidence surrounding SMBG in T2DM remains unclear. For the most part, included studies found SMBG to be cost-effective, though analyses are extremely sensitive to relative effects, time-frame of analyses and model assumptions. Whilst large uncertainty exists, SMBG may be cost-effective in certain subgroups, e.g. drug and insulin-treated patients. IMPLICATION FOR PRACTICE: No strong evidence to recommend the regular use of SMBG in well-controlled diabetes patients, treated only with diet and exercise programmes, exists. The evidence does offer support for SMBG in drug and insulin treated T2DM. It is recommended that clinicians select appropriate patients for SMBG, from these groups, based on their domain expertise. IMPLICATIONS FOR RESEARCH: Large-scale prospective RCTs of SMBG, particularly in drug and insulin treated patients, with well-conducted economic evaluations performed alongside them, will enable a more accurate estimation of the cost-effectiveness of SMBG. The optimal frequency and administration of SMBG is still unknown and is another area that warrants further research. <s> BIB012 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> The interest in low-cost microfluidic platforms as well as emerging microfabrication techniques has increased considerably over the last years. Toner- and paper-based techniques have appeared as two of the most promising platforms for the production of disposable devices for on-chip applications. This review focuses on recent advances in the fabrication techniques and in the analytical/bioanalytical applications of toner and paper-based devices.
The discussion is divided into two parts dealing with (i) toner and (ii) paper devices. Examples of miniaturized devices fabricated by using direct-printing or toner transfer masking in polyester-toner, glass, PDMS as well as conductive platforms such as recordable compact disks and printed circuit boards are presented. The construction and the use of paper-based devices for off-site diagnosis and bioassays are also described to cover this emerging platform for low-cost diagnostics. <s> BIB013 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> We report the use of paper-based microfluidic devices fabricated from a novel polymer blend for the monitoring of urinary ketones, glucose, and salivary nitrite. Paper-based devices were fabricated via photolithography in less than 3 min and were immediately ready for use for these diagnostically relevant assays. Patterned channels on filter paper as small as 90 μm wide with barriers as narrow as 250 μm could be reliably patterned to permit and block fluid wicking, respectively. Colorimetric assays for ketones and nitrite were adapted from the dipstick format to this paper microfluidic chip for the quantification of acetoacetate in artificial urine, as well as nitrite in artificial saliva. Glucose assays were based on those previously demonstrated (Martinez et al., Angew Chem Int Ed 8:1318-1320, 1; Martinez et al., Anal Chem 10:3699-3707, 2; Martinez et al., Proc Nat Acad Sci USA 50:19606-19611, 3; Lu et al., Electrophoresis 9:1497-1500, 4; Abe et al., Anal Chem 18:6928-6934, 5). Reagents were spotted on the detection pad of the paper device and allowed to dry prior to spotting of samples. The ketone test was a two-step reaction requiring a derivatization step between the sample spotting pad and the detection pad, thus, for the first time, confirming the ability of these paper devices to perform online multi-step chemical reactions. Following the spotting of the reagents and sample solution onto the paper device and subsequent drying, color images of the paper chips were recorded using a flatbed scanner, and images were converted to CMYK format in Adobe Photoshop CS4, where the intensity of the color change was quantified using the same software. The limit of detection (LOD) for acetoacetate in artificial urine was 0.5 mM, while the LOD for salivary nitrite was 5 μM, placing both of these analytes within the clinically relevant range for these assays. Calibration curves for urinary ketone (5 to 16 mM) and salivary nitrite (5 to 2,000 μM) were generated. The time of device fabrication to the time of test results was about 25 min. <s> BIB014 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This paper describes the fabrication and the performance of microfluidic paper-based electrochemical sensing devices (we call them microfluidic paper-based electrochemical devices, µPEDs). The µPEDs comprise paper-based microfluidic channels patterned by photolithography or wax printing, and electrodes screen-printed from conducting inks (e.g., carbon or Ag/AgCl). We demonstrated that the µPEDs are capable of quantifying the concentrations of various analytes (e.g., heavy-metal ions and glucose) in aqueous solutions. This low-cost analytical device should be useful for applications in public health, environmental monitoring, and the developing world.
<s> BIB015 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This article describes the use of microfluidic paper-based analytical devices (μPADs) to perform quantitative chemical assays with internal standards. μPADs are well-suited for colorimetric biochemical assays; however, errors can be introduced from the background color of the paper due to batch difference and age, and from color measurement devices. To reduce errors from these sources, a series of standard analyte solutions and the sample solution are assayed on a single device with multiple detection zones simultaneously; an analyte concentration calibration curve can thus be established from the standards. Since the μPAD design allows the colorimetric measurements of the standards and the sample to be conducted simultaneously and under the same condition, errors from the above sources can be minimized. The analytical approach reported in this work shows that μPADs can perform quantitative chemical analysis at very low cost. <s> BIB016 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This Technical Note demonstrates a simple method based on flexographic printing of polystyrene to form liquid guiding boundaries and layers on paper substrates. The method allows formation of hydrophobic barrier structures that partially or completely penetrate through the substrate. This unique property enables one to form very thin fluidic channels on paper, leading to reduced sample volumes required in point-of-care diagnostic devices. The described method is compatible with roll-to-roll flexography units found in many printing houses, making it an ideal method for large-scale production of paper-based fluidic structures. <s> BIB017 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Paper spray is developed as a direct sampling ionization method for mass spectrometric analysis of complex mixtures. Ions of analyte are generated by applying a high voltage to a paper triangle wetted with a small volume (<10 μL) of solution. Samples can be preloaded onto the paper, added with the wetting solution, or transferred from surfaces using the paper as a wipe. It is demonstrated that paper spray is applicable to the analysis of a wide variety of compounds, including small organic compounds, peptides, and proteins. Procedures are developed for analysis of dried biofluid spots and applied to therapeutic drug monitoring with whole blood samples and to illicit drug detection in raw urine samples. Limits of detection of 50 ng/mL (or 20 pg absolute) are achieved for atenolol in bovine blood. The combination of sample collection from surfaces and paper spray ionization also enables fast chemical screening at high sensitivity, for example 100 pg of heroin distributed on a surface and agrochemicals on fruit peels are detectable. Online derivatization with a preloaded reagent is demonstrated for analysis of cholesterol in human serum. The combination of paper spray with miniature mass spectrometers offers a powerful impetus to wide application of mass spectrometry in nonlaboratory environments.
<s> BIB018 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A novel, ultra low-cost surface enhanced Raman spectroscopy (SERS) substrate has been developed by modifying the surface chemistry of cellulose paper and patterning nanoparticle arrays, all with a consumer inkjet printer. Micro/nanofabrication of SERS substrates for on-chip chemical and biomolecular analysis has been under intense investigation. However, the high cost of producing these substrates and the limited shelf life severely limit their use, especially for routine laboratory analysis and for point-of-sample analysis in the field. Paper-based microfluidic biosensing systems have shown great potential as low-cost disposable analysis tools. In this work, this concept is extended to SERS-based detection. Using an inexpensive consumer inkjet printer, cellulose paper substrates are modified to be hydrophobic in the sensing regions. Synthesized silver nanoparticles are printed onto this hydrophobic paper substrate with microscale precision to form sensing arrays. The hydrophobic surface prevents the aque... <s> BIB019 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Abstract We describe the development of a highly stable and sensitive glucose biosensor based on the nanohybrid materials derived from gold nanoparticles (AuNPs) and multi-walled carbon nanotubes (MWCNT). The biosensing platform was developed by using layer-by-layer (LBL) self-assembly of the nanohybrid materials and the enzyme glucose oxidase (GOx). A high density of AuNPs and MWCNT nanocomposite materials was constructed by alternate self-assembly of thiol functionalized MWCNTs and AuNPs, followed by chemisorption of GOx. The surface morphology of the multilayered AuNPs/MWCNT structure was characterized by field emission-scanning electron microscope (FE-SEM), and the surface coverage of AuNPs was investigated by cyclic voltammetry (CV), showing that 5 layers of assembly achieves the maximum particle density on the electrode. The immobilization of GOx was monitored by electrochemical impedance spectroscopy (EIS). CV and amperometry methods were used to study the electrochemical oxidation of glucose at physiological pH 7.4. The Au electrode modified with five layers of AuNPs/MWCNT composites and GOx exhibited an excellent electrocatalytic activity towards oxidation of glucose, which presents a wide linear range from 20 μM to 10 mM, with a sensitivity of 19.27 μA mM⁻¹ cm⁻². The detection limit of the present modified electrode was found to be 2.3 μM (S/N = 3). In addition, the resulting biosensor showed a faster amperometric current response (within 3 s) and a low apparent Michaelis–Menten constant (K_m^app). Our present study shows that the high density of AuNPs decorated MWCNT is a promising nanohybrid material for the construction of enzyme based electrochemical biosensors. <s> BIB020 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Abstract Aim To assess whether self-monitoring of quantitative urine glucose or blood glucose is effective, convenient and safe for glycaemic control in non-insulin treated type 2 diabetes. Methods Adults with non-insulin treated type 2 diabetes were recruited and randomized into three groups: Group A, self-monitoring with a quantitative urine glucose meter (n = 38); Group B, self-monitoring with a blood glucose meter (n = 35); Group C, the control group without self-monitoring (n = 35).
All patients were followed up for six months, during which identical diabetes care was provided. Results: There was a significant decrease in HbA1c within each group. Conclusions: This study suggests that self-monitoring of urine glucose has comparable efficacy on glycaemic control, and facilitates better compliance than blood self-monitoring, without influencing the quality of life or risk of hypoglycaemia. <s> BIB021 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Wax screen-printing as a low-cost, simple, and rapid method for fabricating paper-based microfluidic devices (µPADs) is reported here. Solid wax was rubbed through a screen onto paper filters. The printed wax was then melted into the paper to form hydrophobic barriers using only a hot plate. We first studied the relationship between the width of a hydrophobic barrier and the width of the original design line. We also optimized the heating temperature and time and determined the resolution of structures fabricated using this technique. The minimum width of hydrophilic channel and hydrophobic barrier is 650 and 1300 µm, respectively. Next, our fabrication method was compared to a photolithographic method using the reaction between bicinchoninic acid (BCA) and Cu¹⁺ to demonstrate differences in background reactivity. Photolithographically defined channels exhibited a high background while wax printed channels showed a very low background. Finally, the utility of wax screen-printing was demonstrated for the simultaneous determination of glucose and total iron in control human serum samples using an electrochemical method with glucose oxidase and a colorimetric method with 1,10-phenanthroline. This study demonstrates that wax screen-printing is an easy-to-use and inexpensive alternative fabrication method for µPADs, which will be especially useful in developing countries. <s> BIB022 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Abstract A reaction plate mimicking the working principle of a conventional three-dimensional microplate was printed on various hydrophilic paper substrates. Planar well patterns with high wetting/non-wetting contrast were formed using hydrophobic polydimethylsiloxane (PDMS) based ink with fast curing time, which enables truly low cost roll-to-roll fabrication. The formation and functionality of the printed reaction arrays were verified by two proof-of-concept demonstrations. Firstly, a colorimetric glucose sensor, based on an enzymatic reaction sequence involving glucose oxidase, was screen-printed on the reaction plate. A detection limit of 0.1 mg/mL and a fairly linear sensor response was obtained on a logarithmic scale. Secondly, the employment of the reaction plate for electrical applications was demonstrated by modulating the resistance of a drop-casted polyaniline film as a function of pH. <s> BIB023 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> We report a method for fabricating inexpensive microfluidic platforms on paper using laser treatment. Any paper with a hydrophobic surface coating (e.g., parchment paper, wax paper, palette paper) can be used for this purpose. We were able to selectively modify the surface structure and property (hydrophobic to hydrophilic) of several such papers using a CO₂ laser. We created patterns down to a minimum feature size of 62±1 µm.
The modified surface exhibited a highly porous structure which helped to trap/localize chemical and biological aqueous reagents for analysis. The treated surfaces were stable over time and were used to self-assemble arrays of aqueous droplets. Furthermore, we selectively deposited silica microparticles on patterned areas to allow lateral diffusion from one end of a channel to the other. Finally, we demonstrated the applicability of this platform to perform chemical reactions using luminol-based hemoglobin detection. <s> BIB024 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> In this work, a chemiluminescence (CL) method was combined with a microfluidic paper-based analytical device (μPAD) to establish a novel CL μPAD biosensor for the first time. This novel CL μPAD biosensor was based on an enzyme reaction, which produced H₂O₂ while decomposing the substrate, and the CL reaction between a rhodanine derivative and the generated H₂O₂ in an acid medium. Microchannels in the μPAD were fabricated by a cutting method, and the possible CL assay principle of this CL μPAD biosensor was explained. The rhodanine derivative system was used to reach the purpose of high sensitivity and a well-defined signal for this CL μPAD biosensor, and the optimum reaction conditions were investigated. The quantitative determination of uric acid could be achieved by this CL μPAD biosensor with accurate and satisfactory results, and this biosensor could provide good reproducible results upon storage at 4 °C for at least 10 weeks. The successful integration of the μPAD and the CL reaction made the final biosensor inexpensive, easy-to-use, low-volume, and portable for uric acid determination, which also greatly reduces the cost and increases the efficiency required for an analysis. We believe this simple, practical CL μPAD biosensor will be of interest for use in areas such as disease diagnosis. <s> BIB025 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> In this study, a novel microfluidic paper-based chemiluminescence analytical device (μPCAD) with a simultaneous, rapid, sensitive and quantitative response for glucose and uric acid was designed. This novel lab-on-paper biosensor is based on oxidase enzyme reactions (glucose oxidase and urate oxidase, respectively) and the chemiluminescence reaction between a rhodanine derivative and generated hydrogen peroxide in an acid medium. The possible chemiluminescence assay principle of this μPCAD is explained. We found that the simultaneous determination of glucose and uric acid could be achieved by differing the distances that the glucose and uric acid samples traveled. This lab-on-paper biosensor could provide reproducible results upon storage at 4 °C for at least 10 weeks. The application test of our μPCAD was then successfully performed with the simultaneous determination of glucose and uric acid in artificial urine. This study shows the successful integration of the μPCAD and the chemiluminescence method will be an easy-to-use, inexpensive, and portable alternative for point-of-care monitoring. <s> BIB026 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This paper describes the first approach at combining paper microfluidics with electrochemiluminescent (ECL) detection.
Inkjet printing is used to produce paper microfluidic substrates which are combined with screen-printed electrodes (SPEs) to create simple, cheap, disposable sensors which can be read without a traditional photodetector. The sensing mechanism is based on the orange luminescence due to the ECL reaction of tris(2,2′-bipyridyl)ruthenium(II) (Ru(bpy)3(2+)) with certain analytes. Using a conventional photodetector, 2-(dibutylamino)ethanol (DBAE) and nicotinamide adenine dinucleotide (NADH) could be detected to levels of 0.9 μM and 72 μM, respectively. Significantly, a mobile camera phone can also be used to detect the luminescence from the sensors. By analyzing the red pixel intensity in digital images of the ECL emission, a calibration curve was constructed demonstrating that DBAE could be detected to levels of 250 μM using the phone. <s> BIB027 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A surface acoustic wave-based sample delivery and ionization method that requires minimal to no sample pretreatment and that can operate under ambient conditions is described. This miniaturized technology enables real-time, rapid, and high-throughput analysis of trace compounds in complex mixtures, especially high ionic strength and viscous samples that can be challenging for conventional ionization techniques such as electrospray ionization. This technique takes advantage of high order surface acoustic wave (SAW) vibrations that both manipulate small volumes of liquid mixtures containing trace analyte compounds and seamlessly transfers analytes from the liquid sample into gas phase ions for mass spectrometry (MS) analysis. Drugs in human whole blood and plasma and heavy metals in tap water have been successfully detected at nanomolar concentrations by coupling a SAW atomization and ionization device with an inexpensive, paper-based sample delivery system and mass spectrometer. The miniaturized SAW ioniza... <s> BIB028 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> In this work, we first employ a drying method combined with the bienzyme colorimetric detection of glucose and uric acid on microfluidic paper-based analysis devices (μPADs). The channels of 3D μPADs are also designed by us to get better results. The color results are recorded by both Gel Documentation systems and a common camera. By using Gel Documentation systems, the limits of detection (LOD) of glucose and uric acid are 3.81 × 10(-5) M and 4.31 × 10(-5) M, respectively, one order of magnitude lower than that of the reported methods on μPADs. By using a common camera, the limits of detection (LOD) of glucose and uric acid are 2.13 × 10(-4) M and 2.87 × 10(-4) M, respectively. Furthermore, the effects of detection conditions have been investigated and discussed comprehensively. Human serum samples are detected with satisfactory results, which are comparable with the clinical testing results. A low-cost, simple and rapid colorimetric method for the simultaneous detection of glucose and uric acid on the μPADs has been developed with enhanced sensitivity. <s> BIB029 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> In this work, a robust approach for highly sensitive point-of-care virus detection was established based on immunomagnetic nanobeads and fluorescent quantum dots (QDs).
Taking advantage of immunomagnetic nanobeads functionalized with the monoclonal antibody (mAb) to the surface protein hemagglutinin (HA) of avian influenza virus (AIV) H9N2 subtype, H9N2 viruses were efficiently captured through antibody affinity binding, without pretreatment of samples. The capture kinetics could be fitted well with a first-order bimolecular reaction with a high capturing rate constant kf of 4.25 × 10(9) (mol/L)−1 s−1, which suggested that the viruses could be quickly captured by the well-dispersed and comparable-size immunomagnetic nanobeads. In order to improve the sensitivity, high-luminance QDs conjugated with streptavidin (QDs-SA) were introduced to this assay through the high affinity biotin-streptavidin system by using the biotinylated mAb in an immuno sandwich mode. We ensured the selective binding of QDs-SA to the ... <s> BIB030 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A novel 3D microfluidic paper-based immunodevice, integrated with blood plasma separation from whole blood samples, automation of rinse steps, and multiplexed CL detections, was developed for the first time based on the principle of origami (denoted as origami-based device). This 3D origami-based device, comprised of one test pad surrounded by four folding tabs, could be patterned and fabricated by wax-printing on paper in bulk. In this work, a sandwich-type chemiluminescence (CL) immunoassay was introduced into this 3D origami-based immunodevice, which could separate the operational procedures into several steps including (i) folding pads above/below and (ii) addition of reagent/buffer under a specific sequence. The CL behavior, blood plasma separation, washing protocol, and incubation time were investigated in this work. The developed 3D origami-based CL immunodevice, combined with a typical luminol-H(2)O(2) CL system and catalyzed by Ag nanoparticles, showed excellent analytical performance for the simultaneous detection of four tumor markers. The whole blood samples were assayed and the results obtained were in agreement with the reference values from the parallel single-analyte test. This paper-based microfluidic origami CL detection system provides a new strategy for a low-cost, sensitive, simultaneous multiplex immunoassay and point-of-care diagnostics. <s> BIB031 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> In this review we discuss how nanomaterials can be integrated in diagnostic paper-based biosensors for the detection of proteins, nucleic acids and cells. In particular, first the different types and properties of paper-based nanobiosensors and nanomaterials are briefly explained. Then several examples of their application in diagnostics of several biomarkers are reported. Finally, our opinions regarding future trends in this field are discussed.
Recently, paper-based microfluidics has emerged as a multiplexable point-of-care platform which might transcend the capabilities of existing assays in resource-limited settings. However, paper-based microfluidics can enable fluid handling and quantitative analysis for potential applications in healthcare, veterinary medicine, environmental monitoring and food safety. Currently, in its early development stages, paper-based microfluidics is considered a low-cost, lightweight, and disposable technology. The aim of this review is to discuss: (1) fabrication of paper-based microfluidic devices, (2) functionalisation of microfluidic components to increase the capabilities and the performance, (3) introduction of existing detection techniques to the paper platform and (4) exploration of extracting quantitative readouts via handheld devices and camera phones. Additionally, this review includes challenges to scaling up, commercialisation and regulatory issues. The factors which limit paper-based microfluidic devices to become real world products and future directions are also identified. <s> BIB033 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Immediate response for disease control relies on simple, inexpensive, and sensitive diagnostic tests, highly sought after for timely and accurate test of various diseases, including infectious diseases. Composite Fe3O4/Au nanoparticles have attracted considerable interest in diagnostic applications due to their unique physical and chemical properties. Here, we developed a simple coating procedure for gold magnetic nanoparticles (GMNs) with poly(acrylic acid) (PAA). PAA-coated GMNs (PGMNs) were stable and monodispersed and characterized by Fourier transform-infrared spectroscopy (FT-IR), transmission electron microscopy, UV-visible scanning spectrophotometry, thermogravimetric analysis, and Zetasizer methodologies. For diagnostic application, we established a novel lateral flow immunoassay (LFIA) strip test system where recombinant Treponema pallidum antigens (r-Tp) were conjugated with PGMNs to construct a particle probe for detection of anti-Tp antibodies. Intriguingly, the particle probes specifically identified Tp antibodies with a detection limitation as low as 1 national clinical unit/mL (NCU/mL). An ample pool of 1020 sera samples from three independent hospitals were obtained to assess our PGMNs-based LFIA strips, which exhibited substantially high values of sensitivity and specificity for all clinical tests (higher than 97%) and, therefore, proved to be a suitable approach for syphilis screening at a point-of-care test manner. <s> BIB034 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> The impact of detecting multiple infectious diseases simultaneously at point-of-care with good sensitivity, specificity, and reproducibility would be enormous for containing the spread of diseases in both resource-limited and rich countries. Many barcoding technologies have been introduced for addressing this need as barcodes can be applied to detecting thousands of genetic and protein biomarkers simultaneously. However, the assay process is not automated and is tedious and requires skilled technicians. Barcoding technology is currently limited to use in resource-rich settings. Here we used magnetism and microfluidics technology to automate the multiple steps in a quantum dot barcode assay. 
The quantum dot-barcoded microbeads are sequentially (a) introduced into the chip, (b) magnetically moved to a stream containing target molecules, (c) moved back to the original stream containing secondary probes, (d) washed, and (e) finally aligned for detection. The assay requires 20 min, has a limit of detection of ... <s> BIB035 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Human exposure to particulate matter (PM) air pollution has been linked with respiratory, cardiovascular, and neurodegenerative diseases, in addition to various cancers. Consistent among all of these associations is the hypothesis that PM induces inflammation and oxidative stress in the affected tissue. Consequently, a variety of assays have been developed to quantify the oxidative activity of PM as a means to characterize its ability to induce oxidative stress. The vast majority of these assays rely on high-volume, fixed-location sampling methods due to limitations in assay sensitivity and detection limit. As a result, our understanding of how personal exposure contributes to the intake of oxidative air pollution is limited. To further this understanding, we present a microfluidic paper-based analytical device (μPAD) for measuring PM oxidative activity on filters collected by personal sampling. The μPAD is inexpensive to fabricate and provides fast and sensitive analysis of aerosol oxidative activity. T... <s> BIB036 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Paper-based analytical devices (PADs) represent a growing class of elegant, yet inexpensive chemical sensor technologies designed for point-of-use applications. Most PADs, however, still utilize some form of instrumentation such as a camera for quantitative detection. We describe here a simple technique to render PAD measurements more quantitative and straightforward using the distance of colour development as a detection motif. The so-called distance-based detection enables PAD chemistries that are more portable and less resource intensive compared to classical approaches that rely on the use of peripheral equipment for quantitative measurement. We demonstrate the utility and broad applicability of this technique with measurements of glucose, nickel, and glutathione using three different detection chemistries: enzymatic reactions, metal complexation, and nanoparticle aggregation, respectively. The results show excellent quantitative agreement with certified standards in complex sample matrices. This work provides the first demonstration of distance-based PAD detection with broad application as a class of new, inexpensive sensor technologies designed for point-of-use applications. <s> BIB037 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> In this work, we reported a simple, rapid and point-of-care magnetic immunofluorescence assay for avian influenza virus (AIV) and developed a portable experimental setup equipped with an optical fiber spectrometer and a microfluidic device. We achieved the integration of immunomagnetic target capture, concentration, and fluorescence detection in the microfluidic chip. By optimizing flow rate and incubation time, we could get a limit of detection as low as 3.7 × 10(4) copies/μL with a sample consumption of 2 μL and a total assay time of less than 55 min.
This approach had proved to possess high portability, fast analysis, high specificity, high precision, and reproducibility with an intra-assay variability of 2.87% and an interassay variability of 4.36%. As a whole, this microfluidic system may provide a powerful platform for the rapid detection of AIV and may be extended for detection of other viral pathogens; in addition, this portable experimental setup enables the development of point-of-care diagnostic sys... <s> BIB038 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A new technique for the detection of explosives has been developed based on fluorescence quenching of pyrene on paper-based analytical devices (μPADs). Wax barriers were generated (150 °C, 5 min) using ten different colours. Magenta was found as the most suitable wax colour for the generation of the hydrophobic barriers with a nominal width of 120 μm resulting in fully functioning hydrophobic barriers. One microliter of 0.5 mg mL(-1) pyrene dissolved in an 80:20 methanol-water solution was deposited on the hydrophobic circle (5 mm diameter) to produce the active microchip device. Under ultra-violet (UV) illumination, ten different organic explosives were detected using the μPAD, with limits of detection ranging from 100-600 ppm. A prototype of a portable battery operated instrument using a 3 W power UV light-emitting-diode (LED) (365 nm) and a photodiode sensor was also built and evaluated for the successful automatic detection of explosives and potential application for field-based screening. <s> BIB039 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This work presents a novel and facile method for fabricating paper-based microfluidic devices by means of coupling of hydrophobic silane to paper fibers followed by deep UV-lithography. After filter paper being simply immersed in an octadecyltrichlorosilane (OTS) solution in n-hexane for 5 min, the hydrophilic paper became highly hydrophobic (water contact angle of about 125°) due to the hydrophobic OTS molecules were coupled to paper's cellulose fibers. The hydrophobized paper was then exposed to deep UV-lights through a quartz mask that had the pattern of the to-be-prepared channel network. Thus, the UV-exposed regions turned highly hydrophilic whereas the masked regions remained highly hydrophobic, generating hydrophilic channels, reservoirs and reaction zones that were well-defined by the hydrophobic regions. The resolution for hydrophilic channels was 233 ± 30 μm and that for between-channel hydrophobic barrier was 137 ± 21 μm. Contact angle measurement, X-ray photoelectron spectroscopy (XPS) and attenuated total reflectance Fourier transform-infrared (ATR-FT-IR) spectroscopy were employed to characterize the surface chemistry of the OTS-coated and UV/O(3)-treated paper, and the related mechanism was discussed. Colorimetric assays of nitrite are demonstrated with the developed paper-based microfluidic devices. <s> BIB040 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> An electrode platform printed on a recyclable low-cost paper substrate was characterized using cyclic voltammetry. The working and counter electrodes were directly printed gold-stripes, while the reference electrode was a printed silver stripe onto which an AgCl layer was deposited electrochemically. 
The novel paper-based chips showed comparable performance to conventional electrochemical cells. Different types of electrode modifications were carried out to demonstrate that the printed electrodes behave similarly to conventional electrodes. Firstly, a self-assembled monolayer (SAM) of alkanethiols was successfully formed on the Au electrode surface. As a consequence, the peak currents were suppressed and no longer showed a clear increase as a function of the scan rate. Such modified electrodes have potential in various sensor applications when terminally substituted thiols are used. Secondly, a polyaniline film was electropolymerized on the working electrode by cyclic voltammetry and used for potentiometric pH sensing. The calibration curve showed a near-Nernstian response. Thirdly, a poly(3,4-ethylenedioxythiophene) (PEDOT) layer was electropolymerized both by galvanostatic and cyclic potential sweep methods on the working electrode using two different dopants: Cl− to study ion-to-electron transduction on the paper-Au/PEDOT system, and glucose oxidase in order to fabricate a glucose biosensor. The planar paper-based electrochemical cell is a user-friendly platform that functions with low sample volume and allows the sample to be applied and changed by e.g. pipetting. Low unit cost is achieved with mask- and mesh-free inkjet-printing technology. <s> BIB041 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Microfluidic devices fabricated out of paper (and paper and tape) have emerged as promising platforms for conducting multiple diagnostic assays simultaneously in resource-limited settings. Certain types of assays in these devices, however, require a source of power to function. Lithium ion, nickel-cadmium, and other types of batteries have been used to power these devices, but these traditional batteries are too expensive and pose too much of a disposal hazard for diagnostic applications in resource-limited settings. To circumvent this problem, we previously designed a “fluidic battery” that is composed of multiple galvanic cells, incorporated directly into a multilayer paper-based microfluidic device. We now show that multiple cells of these fluidic batteries can be connected in series and/or in parallel in a predictable way to obtain desired values of current and potential, and that the batteries can be optimized to last for a short period of time (<1 min) or for up to 10–15 min. This paper also (i) outlines and quantifies the parameters that can be adjusted to maximize the current and potential of fluidic batteries, (ii) describes two general configurations for fluidic batteries, and (iii) provides equations that enable prediction of the current and potential that can be obtained when these two general designs are varied. This work provides the foundation upon which future applications of fluidic batteries will be based.
The established method was found to be favorable for obtaining good sensitivity and reproducible results. The RSDs of Raman intensity of randomly analyzing 20 spots on the same paper or different filter papers depositing AgNPs are both below 15%. The SERS enhancement factor is approximately 2 × 10(7) . The whole fabrication is very rapid, robust, and does not require specific instruments. Furthermore, the total cost for 1000 pieces of chip is less than $20. These advantages demonstrated the potential for growing SERS applications in the area of environmental monitoring, food safety, and bioanalysis in the future. <s> BIB043 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> One of the goals of point-of-care (POC) is a chip-based, miniaturized, portable, self-containing system that allows the assay of proteins, nucleic acids, and cells in complex samples. The integration of nanomaterials and microfluidics can help achieve this goal. This tutorial review outlines the mechanism of assaying biomarkers by gold nanoparticles (AuNPs), and the implementation of AuNPs for microfluidic POC devices. In line with this, we discuss some recent advances in AuNP-coupled microfluidic sensors with enhanced performance. Portable and automated instruments for device operation and signal readout are also included for practical applications of these AuNP-combined microfluidic chips. <s> BIB044 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This report demonstrates a straightforward, robust, multiplexed and point-of-care microcapillary-based loop-mediated isothermal amplification (cLAMP) for assaying nucleic acids. This assay integrates capillaries (glass or plastic) to introduce and house sample/reagents, segments of water droplets to prevent contamination, pocket warmers to provide heat, and a hand-held flashlight for a visual readout of the fluorescent signal. The cLAMP system allows the simultaneous detection of two RNA targets of human immunodeficiency virus (HIV) from multiple plasma samples, and achieves a high sensitivity of two copies of standard plasmid. As few nucleic acid detection methods can be wholly independent of external power supply and equipment, our cLAMP holds great promise for point-of-care applications in resource-poor settings. <s> BIB045 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Microbial pathogens pose serious threats to public health and safety, and results in millions of illnesses and deaths as well as huge economic losses annually. Laborious and expensive pathogen tests often represent a significant hindrance to implementing effective front-line preventative care, particularly in resource-limited regions. Thus, there is a significant need to develop low-cost and easy-to-use methods for pathogen detection. Herein, we present a simple and inexpensive litmus test for bacterial detection. The method takes advantage of a bacteria-specific RNA-cleaving DNAzyme probe as the molecular recognition element and the ability of urease to hydrolyze urea and elevate the pH value of the test solution. By coupling urease to the DNAzyme on magnetic beads, the detection of bacteria is translated into a pH increase, which can be readily detected using a litmus dye or pH paper. The simplicity, low cost, and broad adaptability make this litmus test attractive for field applications, particularly in the developing world. 
<s> BIB046 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Preconcentration of pathogens from patient samples represents a great challenge in point-of-care (POC) diagnostics. Here, a low-cost, rapid, and portable agarose-based microfluidic device was developed to concentrate biological fluid from micro- to picoliter volume. The microfluidic concentrator consisted of a glass slide simply covered by an agarose layer with a binary tree-shaped microchannel, in which pathogens could be concentrated at the end of the microchannel due to the capillary effect and the strong water permeability of the agarose gel. The fluorescent Escherichia coli strain OP50 was used to demonstrate the capacity of the agarose-based device. Results showed that 90% recovery efficiency could be achieved with a million-fold volume reduction from 400 μL to 400 pL. For concentration of 1 × 10(3) cells mL(-1) bacteria, approximately ten million-fold enrichment in cell density was realized with volume reduction from 100 μL to 1.6 pL. Urine and blood plasma samples were further tested to validate the developed method. In conjugation with fluorescence immunoassay, we successfully applied the method to the concentration and detection of infectious Staphylococcus aureus in clinics. The agarose-based microfluidic concentrator provided an efficient approach for POC detection of pathogens. <s> BIB047 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A paper microfluidic chip was designed and fabricated to evaluate the taste of 10 different red wines using a set of chemical dyes. The digital camera of a smartphone captured the images, and its red-green-blue (RGB) pixel intensities were analyzed by principal component analysis (PCA). Using 8 dyes and 2 principal components (PCs), we were able to distinguish each wine by the grape variety and the oxidation status. Through comparing with the flavor map by human evaluation, PC1 seemed to represent the sweetness and PC2 the bodyness of red wine. This superior performance is attributed to: (1) careful selection of commercially available dyes through a series of linear correlation studies with the taste chemicals in red wines, (2) minimization of sample-to-sample variation by splitting a single sample into multiple wells on the paper microfluidics, and (3) filtration of particulate matter through paper fibers. The image processing and PCA procedure can eventually be implemented as a stand-alone smartphone application and can be adopted as an extremely low-cost, disposable, fully handheld, easy-to-use, yet sensitive and specific quality control method for appraising red wine or similar beverage products in resource-limited environments. <s> BIB048 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> In this paper we describe a method for three-dimensional wax patterning of microfluidic paper-based analytical devices (μPADs). The method is rooted in the fundamental details of wax transport in paper and provides a simple way to fabricate complex channel architectures such as hemichannels and fully enclosed channels. We show that three-dimensional μPADs can be fabricated with half as much paper by using hemichannels rather than ordinary open channels. 
We also provide evidence that fully enclosed channels are efficiently isolated from the exterior environment, decreasing contamination risks, simplifying the handling of the device, and slowing evaporation of solvents. <s> BIB049 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> We report a simple, low-cost, one-step fabrication method for microfluidic paper-based analytical devices (μPAD) using only polystyrene and a patterned screen. The polystyrene solution applied through the screen penetrates through the paper, forming a three-dimensional hydrophobic barrier, defining a hydrophilic analysis zone. The optimal polystyrene concentration and paper types were first investigated. Adjusting polystyrene concentration allows for various types of paper to be used for successful device fabrication. Using an optimized polystyrene concentration with Whatman#4 filter paper, a linear relationship was found to exist between the design width and the printed width. The smallest hydrophilic channel and hydrophobic barrier that can be obtained are 670 ± 50 μm and 380 ± 40 μm, respectively. High device-to-device fabrication reproducibility was achieved yielding a relative standard deviation (%RSD) in the range of 1.12–2.54% (n = 64) of the measured diameter of the well-shaped fabricated test zones with a designed diameter of 5 and 7 mm. To demonstrate the significance of the fabricated μPAD, distance-based and well-based paper devices were constructed for the analysis of H2O2 and antioxidant activity, respectively. The analysis of H2O2 in real samples using distance-based measurement with CeO2 nanoparticles as the colorimetric agent produced the same results at 95% confidence level, as those obtained using KMnO4 titration. A proof-of-concept antioxidant activity determination based on the 2,2-diphenyl-1-picrylhydrazyl (DPPH) assay was also demonstrated. The results verify that the polymer screen-printing method can be used as an alternative method for μPAD fabrication. <s> BIB050 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> We developed a novel, low-cost and simple method for the fabrication of microfluidic paper-based analytical devices (μPADs) by silanization of filter cellulose using a paper mask having a specific pattern. The paper mask was penetrated with trimethoxyoctadecylsilane (TMOS) by immersing into TMOS-heptane solution. By heating the filter paper sandwiched between the paper mask and glass slides, TMOS was immobilized onto the filter cellulose via the reaction between cellulose OH and TMOS, while the hydrophilic area was not silanized because it was not in contact with the paper mask penetrated with TMOS. The effects of some factors including TMOS concentration, heating temperature and time on the fabrication of μPADs were studied. This method is free of any expensive equipment and metal masks, and could be performed by untrained personnel. These features are very attractive for the fabrication and applications of μPADs in developing countries or resource-limited settings. A flower-shaped μPAD was fabricated and used to determine glucose in human serum samples. The contents determined by this method agreed well with those determined by a standard method. 
<s> BIB051 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Paper microfluidic devices are a promising technology in developing analytical devices for point-of-care diagnosis in the developing world. This article describes a simple method for paper microfluidic devices based on a PZT drop-on-demand droplet generator. Wax was jetted in the form of droplets, linked with each other and formed into a wax pattern on filter paper with a PZT actuator and a glass nozzle. The heated wax pattern became a hydrophobic barrier for reagents used in bio-assays. The glass nozzle, fabricated by a home-made micronozzle puller without complicated fabrication technology, was low cost, simple and easily made. The coefficient of variation of the jetted wax droplet diameter was 4.0%, which showed good reproducibility. The width of the wax line was experimentally studied by changing the driving voltage, nozzle diameters and degree of overlapping. Wax lines with widths of 700–1700 μm were prepared for paper-based microfluidic devices. Multi-assays of glucose, protein and pH and 3 × 3 arrays of glucose, protein and pH assays were realized with the prepared paper microfluidic devices. The wax droplet generating system supplied a low-cost, simple, easy-to-use and fast fabrication method for paper microfluidic devices. <s> BIB052 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Abstract A paper-based colorimetric biosensing platform utilizing cross-linked siloxane 3-aminopropyltriethoxysilane (APTMS) as probe was developed for the detection of a broad range of targets including H2O2, glucose and protein biomarkers. APTMS was extensively used for the modification of filter papers to develop paper-based analytical devices. We discovered that when APTMS was cross-linked with glutaraldehyde (GA), the resulting complex (APTMS–GA) displays a brick-red color, and a visual color change was observed when the complex reacted with H2O2. By integrating the APTMS–GA complex with filter paper, the modified paper enables quantitative detection of H2O2 through monitoring of the color intensity change of the paper via the software ImageJ. Then, with the immobilization of glucose oxidase (GOx) onto the modified paper, glucose can be detected through the detection of enzymatically generated H2O2. For the protein biomarker prostate specific antigen (PSA) assay, we immobilized the capture anti-PSA antibody (Ab1) onto the paper surface and used GOx-modified gold nanorods (GNR) as the detection anti-PSA antibody (Ab2) label. The detection of PSA was also achieved via the liberated H2O2 when the GOx label reacted with glucose. The results demonstrated the possibility of this paper-based sensor for the detection of different analytes with a wide linear range. The low cost and simplicity of this paper-based sensor could be developed for “point-of-care” analysis and find wide application in different areas. <s> BIB053 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> This paper describes the development and use of a handheld and lightweight stamp for the production of microfluidic paper-based analytical devices (μPADs). We also chemically modified the paper surface for improved colorimetric measurements. The design of the microfluidic structure has been patterned in a stamp, machined in stainless steel.
Prior to stamping, the paper surface was oxidized to promote the conversion of hydroxyl into aldehyde groups, which were then chemically activated for covalent coupling of enzymes. Then, a filter paper sheet was impregnated with paraffin and sandwiched with a native paper (n-paper) sheet, previously oxidized. The metal stamp was preheated at 150 °C and then brought in contact with the paraffined paper (p-paper) to enable the thermal transfer of the paraffin to the n-paper, thus forming the hydrophobic barriers under the application of a pressure of ca. 0.1 MPa for 2 s. The channel and barrier widths measured in 50 independent μPADs exhibited values of 2.6 ± 0.1 and 1.4 ± 0.1 mm, respectively. The chemical modification for covalent coupling of enzymes on the paper surface also led to improvements in the colour uniformity generated inside the sensing area, a known bottleneck in this technology. The relative standard deviation (RSD) values for glucose and uric acid (UA) assays decreased from 40 to 10% and from 20 to 8%, respectively. Bioassays related to the detection of glucose, UA, bovine serum albumin (BSA), and nitrite were successfully performed in concentration ranges useful for clinical assays. The semi-quantitative analysis of all four analytes in artificial urine samples revealed an error smaller than 4%. The disposability of μPADs, the low instrumental requirements of the stamp-based fabrication, and the improved colour uniformity enable the use of the proposed devices for point-of-care diagnostics or in resource-limited settings. <s> BIB054 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> We report a method for the bottom-up fabrication of paper-based capillary microchips by the blade coating of cellulose microfibers on a patterned surface. The fabrication process is similar to the paper-making process in which an aqueous suspension of cellulose microfibers is used as the starting material and is blade-coated onto a polypropylene substrate patterned using an inkjet printer. After water evaporation, the cellulose microfibers form a porous, hydrophilic, paperlike pattern that wicks aqueous solution by capillary action. This method enables simple, fast, inexpensive fabrication of paper-based capillary channels with both width and height down to about 10 μm. When this method is used, the capillary microfluidic chip for the colorimetric detection of glucose and total protein is fabricated, and the assay requires only 0.30 μL of sample, which is 240 times smaller than for paper devices fabricated using photolithography. <s> BIB055 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Abstract A miniaturized paper-based microfluidic electrochemical enzymatic biosensing platform was developed and the effects of fluidic behaviors in the paper substrate on electrochemical sensing were systematically investigated. The biosensor is composed of an enzyme-immobilized pure cellulose paper pad, an enzymeless screen-printed electrode (SPE) modified with platinum nanoparticles (PtNPs), and a pair of clamped acrylonitrile butadiene styrene (ABS) plastic holders to provide good alignment for stable signal sensing. The wicking rate of the liquid sample in paper was predicted, using a two-dimensional Fickian-diffusion model, to be 1.0 × 10(-2) cm2/s, and was verified experimentally. Dip-coating was used to prepare the enzyme-modified paper pad (EPP), which is amenable to mass manufacturing.
The EPP retained excellent hydrophilicity and mechanical properties, with even slightly improved tensile strength and break strain. No significant difference in voltammetric behaviors was observed between measurements made in bulk buffer solution and with different sample volumes applied to the EPP beyond its saturation wicking volume. Glucose oxidase (GOx), an enzyme specific for the glucose (Glc) substrate, was used as a model enzyme and its enzymatic reaction product H2O2 was detected by the enzymeless PtNPs-SPE in the presence of the ambient electron mediator O2. Consequently, Glc was detected with its concentration linearly depending on the H2O2 oxidation current, with a sensitivity of 10.5 μA mM−1 cm−2 and a detection limit of 9.3 μM (at S/N = 3). The biosensor can be quickly regenerated with memory effects removed by buffer additions for continuous real-time detection of multiple samples in one run for point-of-care purposes. This integrated platform is also inexpensive since the EPP is easily stored, and enzymeless PtNPs-SPEs can be used multiple times with different EPPs. The green and facile preparation in bulk, excellent mechanical strength, well-maintained enzyme activity, disposability, and good reproducibility and stability make our paper-fluidic biosensor platform suitable for various real-time electrochemical bioassays without any external power for mixing, especially in resource-limited conditions. <s> BIB056 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> beta-Agonists are a group of illegal but widely used feed additives in the stockbreeding industry. In order to achieve simple-to-use, fast and high-throughput testing of this banned chemical, herein we suggest a paper-based analytical device on which a chemiluminescence diminishment method was performed. In this approach, extracts from swine hair samples as well as luminescent reagents, such as luminol and potassium periodate solution, in a low volume were applied to our device. It was found that the light emission was diminished by the beta-agonists extracted from the swine hair samples. The degree of diminishment is proportional to the concentration of the beta-agonists from 1.0 x 10(-5) to 1.0 x 10(-8) mol L-1. Also, the concentrations of solutions for chemiluminescence were optimized. The mechanism and reaction kinetics of chemiluminescence were discussed as well. The detection limit was obtained as 1.0 x 10(-9) mol L-1, and recoveries from 96% to 110% were achieved, both of which suggested that our method will be favourable in field applications for swine hair samples. <s> BIB057 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Fluorescence assays often require specialized equipment and, therefore, are not easily implemented in resource-limited environments. Herein we describe a point-of-care assay strategy in which fluorescence in the visible region is used as a readout, while a camera-equipped cellular phone is used to capture the fluorescent response and quantify the assay. The fluorescence assay is made possible using a paper-based microfluidic device that contains an internal fluidic battery, a surface-mount LED, a 2 mm section of a clear straw as a cuvette, and an appropriately designed small molecule reagent that transforms from weakly fluorescent to highly fluorescent when exposed to a specific enzyme biomarker.
The resulting visible fluorescence is digitized by photographing the assay region using a camera-equipped cellular phone. The digital images are then quantified using image processing software to provide sensitive as well as quantitative results. In a model 30 min assay, the enzyme β-D-galactosidase was measured quantitatively down to 700 pM levels. This communication describes the design of these types of assays in paper-based microfluidic devices and characterizes the key parameters that affect the sensitivity and reproducibility of the technique. <s> BIB058 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> The capacity to achieve rapid, sensitive, specific, quantitative, and multiplexed genetic detection of pathogens via a robust, portable, point-of-care platform could transform many diagnostic applications. And while contemporary technologies have yet to effectively achieve this goal, the advent of microfluidics provides a potentially viable approach to this end by enabling the integration of sophisticated multistep biochemical assays (e.g., sample preparation, genetic amplification, and quantitative detection) in a monolithic, portable device from relatively small biological samples. Integrated electrochemical sensors offer a particularly promising solution to genetic detection because they do not require optical instrumentation and are readily compatible with both integrated circuit and microfluidic technologies. Nevertheless, the development of generalizable microfluidic electrochemical platforms that integrate sample preparation and amplification as well as quantitative and multiplexed detection remains a challenging and unsolved technical problem. Recognizing this unmet need, we have developed a series of microfluidic electrochemical DNA sensors that have progressively evolved to encompass each of these critical functionalities. For DNA detection, our platforms employ label-free, single-step, and sequence-specific electrochemical DNA (E-DNA) sensors, in which an electrode-bound, redox-reporter-modified DNA "probe" generates a current change after undergoing a hybridization-induced conformational change. After successfully integrating E-DNA sensors into a microfluidic chip format, we subsequently incorporated on-chip genetic amplification techniques including polymerase chain reaction (PCR) and loop-mediated isothermal amplification (LAMP) to enable genetic detection at clinically relevant target concentrations. To maximize the potential point-of-care utility of our platforms, we have further integrated sample preparation via immunomagnetic separation, which allowed the detection of influenza virus directly from throat swabs and developed strategies for the multiplexed detection of related bacterial strains from the blood of septic mice. Finally, we developed an alternative electrochemical detection platform based on real-time LAMP, which not is only capable of detecting across a broad dynamic range of target concentrations, but also greatly simplifies quantitative measurement of nucleic acids. These efforts represent considerable progress toward the development of a true sample-in-answer-out platform for genetic detection of pathogens at the point of care. Given the many advantages of these systems, and the growing interest and innovative contributions from researchers in this field, we are optimistic that iterations of these systems will arrive in clinical settings in the foreseeable future. 
<s> BIB059 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A thin and flexible paper-based skin patch was developed for the diagnostic screening of cystic fibrosis. It utilized a unique combination of both anion exchange and pH test papers to enable the quantitative, colorimetric and on-skin detection of sweat anions. <s> BIB060 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A rapid and highly sensitive point-of-care (PoC) lateral flow assay for phospholipase A2 (PLA2) is demonstrated in serum through the enzyme-triggered release of a new class of biotinylated multiarmed polymers from a liposome substrate. Signal from the enzyme activity is generated by the adhesion of polystreptavidin-coated gold nanoparticle networks to the lateral flow device, which leads to the appearance of a red test line due to the localized surface plasmon resonance effect of the gold. The use of a liposome as the enzyme substrate and multivalent linkers to link the nanoparticles leads to amplification of the signal, as the cleavage of a small amount of lipids is able to release a large amount of polymer linker and adhesion of an even larger amount of gold nanoparticles. By optimizing the molecular weight and multivalency of these biotinylated polymer linkers, the sensitivity of the device can be tuned to enable naked-eye detection of 1 nM human PLA2 in serum within 10 min. This high sensitivity enabled the correct diagnosis of pancreatitis in diseased clinical samples against a set of healthy controls using PLA2 activity in a point-of-care device for the first time. <s> BIB061 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Early and timely detection of disease biomarkers can prevent the spread of infectious diseases, and drastically decrease the death rate of people suffering from different diseases such as cancer and infectious diseases. Because conventional diagnostic methods have limited application in low-resource settings due to the use of bulky and expensive instrumentation, simple and low-cost point-of-care diagnostic devices for timely and early biomarker diagnosis is the need of the hour, especially in rural areas and developing nations. The microfluidics technology possesses remarkable features for simple, low-cost, and rapid disease diagnosis. There have been significant advances in the development of microfluidic platforms for biomarker detection of diseases. This article reviews recent advances in biomarker detection using cost-effective microfluidic devices for disease diagnosis, with the emphasis on infectious disease and cancer diagnosis in low-resource settings. This review first introduces different microfluidic platforms (e.g. polymer and paper-based microfluidics) used for disease diagnosis, with a brief description of their common fabrication techniques. Then, it highlights various detection strategies for disease biomarker detection using microfluidic platforms, including colorimetric, fluorescence, chemiluminescence, electrochemiluminescence (ECL), and electrochemical detection. Finally, it discusses the current limitations of microfluidic devices for disease biomarker detection and future prospects. 
<s> BIB062 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Low-cost assays have broad applications ranging from human health diagnostics and food safety inspection to environmental analysis. Hence, low-cost assays are especially attractive for rural areas and developing countries, where financial resources are limited. Recently, paper-based microfluidic devices have emerged as a low-cost platform which greatly accelerates the point of care (POC) analysis in low-resource settings. This paper reviews recent advances of low-cost bioanalysis on paper-based microfluidic platforms, including fully paper-based and paper hybrid microfluidic platforms. In this review paper, we first summarized the fabrication techniques of fully paper-based microfluidic platforms, followed with their applications in human health diagnostics and food safety analysis. Then we highlighted paper hybrid microfluidic platforms and their applications, because hybrid platforms could draw benefits from multiple device substrates. Finally, we discussed the current limitations and perspective trends of paper-based microfluidic platforms for low-cost assays. <s> BIB063 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Microfluidic paper-based analytical devices (μPADs) attract tremendous attention as an economical tool for in-field diagnosis, food safety and environmental monitoring. We innovatively fabricated 2D and 3D μPADs by photolithography-patterning microchannels on a Parafilm® and subsequently embossing them to paper. This truly low-cost, wax printer and cutter plotter independent approach offers the opportunity for researchers from resource-limited laboratories to work on paper-based analytical devices. <s> BIB064 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A novel, highly selective and sensitive paper-based colorimetric sensor for trace determination of copper (Cu(2+)) ions was developed. The measurement is based on the catalytic etching of silver nanoplates (AgNPls) by thiosulfate (S2O3(2-)). Upon the addition of Cu(2+) to the ammonium buffer at pH 11, the absorption peak intensity of AuNPls/S2O3(2-) at 522 nm decreased and the pinkish violet AuNPls became clear in color as visible to the naked eye. This assay provides highly sensitive and selective detection of Cu(2+) over other metal ions (K(+), Cr(3+), Cd(2+), Zn(2+), As(3+), Mn(2+), Co(2+), Pb(2+), Al(3+), Ni(2+), Fe(3+), Mg(2+), Hg(2+) and Bi(3+)). A paper-based colorimetric sensor was then developed for the simple and rapid determination of Cu(2+) using the catalytic etching of AgNPls. Under optimized conditions, the modified AgNPls coated at the test zone of the devices immediately changes in color in the presence of Cu(2+). The limit of detection (LOD) was found to be 1.0 ng mL(-1) by visual detection. For semi-quantitative measurement with image processing, the method detected Cu(2+) in the range of 0.5-200 ng mL(-1)(R(2)=0.9974) with an LOD of 0.3 ng mL(-1). The proposed method was successfully applied to detect Cu(2+) in the wide range of real samples including water, food, and blood. The results were in good agreement according to a paired t-test with results from inductively coupled plasma-optical emission spectrometry (ICP-OES). 
<s> BIB065 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A new water-soluble conjugated polyelectrolyte containing triphenylamine groups with aldehyde pendants was synthesized, which featured distinctly different emission colors according to its states, in aqueous solution and in the solid. Paper-based strips containing the polymer were prepared by simple immersion of filter paper in the polyelectrolyte solution for practical and efficient detection of biothiols including cysteine and homocysteine. The presence of aldehyde groups enables us to demonstrate noticeable fluorescence emission color changes (green-to-blue) because of the alterations in electron push–pull structure in the polymer via a reaction between the aldehyde group of the polymer and the aminothiol moiety in biothiol compounds. The presence of an aldehyde group and a sulfonate side chain was found to be indispensable for the cysteine reaction site and for a hydrophilic environment allowing the easy approach of cysteine, respectively, resulting in a simple and easy detection protocol for biothiol compounds. <s> BIB066 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Abstract Background After Roux-en-Y gastric bypass (RYGB), hypoglycemia can occur and be associated with adverse events such as intense malaise and impaired quality of life. Objective To compare insulin secretion, sensitivity, and clearance between two groups of patients, with or without hypoglycemia, after an oral glucose tolerance test (OGTT 75-g), and also to compare real-life glucose profiles within these two groups. Setting Bariatric surgery referral center. Methods This study involves a prospective cohort of 46 consecutive patients who complained of malaise compatible with hypoglycemia after RYGB, in whom an OGTT 75-g was performed. A plasma glucose value of lower than 2.8 mmol/L (50 mg/dl) between 90 and 120 min after the load was considered to be a significant hypoglycemia. The main outcome measures were insulin sensitivity, beta-cell function, and glycemic profiles during the test. Glucose parameters were also evaluated by continuous glucose monitoring (CGM) in a real-life setting in 43 patients. Results Twenty-five patients had plasma glucose that was lower than 2.8 mmol/L between 90 and 120 from the load (HYPO group). Twenty-one had plasma glucose that was higher than 2.8 mmol/L (NONHYPO group). The HYPO patients were younger, had lost more weight after RYGB, were less frequently diabetic before surgery, and displayed higher early insulin secretion rates compared with the NONHYPO patients after the 75-g OGTT, and they had lower late insulin secretion rates. The HYPO patients had lower interstitial glucose values in real life, which suggests that a continuum exists between observations with an oral glucose load and real-life interstitial glucose concentrations. Conclusions This study suggests that HYPO patients after RYGB display an early increased insulin secretion rate when tested with an OGTT. CGM shows that HYPO patients spend more time below 3.3 mmol/L when compared with NONHYPO patients. This phenotype of patients should be monitored carefully after RYGB. 
<s> BIB067 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> A disposable, equipment-free, versatile point-of-care testing platform, microfluidic distance readout sweet hydrogel integrated paper-based analytical device (μDiSH-PAD), was developed for portable quantitative detection of different types of targets. The platform relies on a target-responsive aptamer cross-linked hydrogel for target recognition, cascade enzymatic reactions for signal amplification, and microfluidic paper-based analytic devices (μPADs) for visual distance-based quantitative readout. A “sweet” hydrogel with trapped glucoamylase (GA) was synthesized using an aptamer as a cross-linker. When target is present in the sample, the “sweet” hydrogel collapses and releases enzyme GA into the sample, generating glucose by amylolysis. A hydrophilic channel on the μPADs is modified with glucose oxidase (GOx) and colorless 3,3′-diaminobenzidine (DAB) as the substrate. When glucose travels along the channel by capillary action, it is converted to H2O2 by GOx. In addition, DAB is converted into brown ins... <s> BIB068 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Introduction <s> Abstract In this work, an origami paper-based analytical device for glucose biosensor by employing fully-drawn pencil electrodes has been reported. The three-electrode system was prepared on paper directly by drawing with nothing more than pencils. By simple printing, two separated zones on paper were designed for the immobilization of the mediator and glucose oxidase (GOx), respectively. The used paper provides a favorable and biocompatible support for maintaining the bioactivities of GOx. With a sandwich-type scheme, the origami biosensor exhibited great analytical performance for glucose sensing including acceptable reproducibility and favorable selectivity against common interferents in physiological fluids. The limit of detection and linear range achieved with the approach was 0.05 mM and 1–12 mM, respectively. Its analytical performance was also demonstrated in the analysis of human blood samples. Such fully-drawn paper-based device is cheap, flexible, portable, disposable, and environmentally friendly, affording great convenience for practical use under resource-limited conditions. We therefore envision that this approach can be extended to generate other functional paper-based devices. <s> BIB069
Glucose, one of the essential metabolic intermediates, is an important medical analyte and an indicator of various diseases, such as glucose metabolism disorders and islet cell carcinoma BIB011 BIB020 BIB021 BIB029 . Normally, the concentration of glucose in the human blood stream lies in the range of 3.8-6.9 mM. A level below 2.8 mM after fasting or following exercise is considered hypoglycemia BIB067 . For diabetics, the blood glucose concentration should be strictly controlled below 10 mM according to the American Diabetes Association . Frequent and convenient monitoring of the blood glucose concentration is a key endeavor in medical diagnosis BIB003 BIB001 and of critical importance to diabetics for the prevention of hyperglycemia complications BIB008 BIB012 . The acronym "ASSURED", standing for "affordable, sensitive, specific, user-friendly, rapid and robust, equipment-free and delivered to those in need", was put forward by the World Health Organization (WHO) as the guideline for diagnostic point-of-care tests (POCTs) . These diagnostic tests are emerging for applications in the underdeveloped and developing world, where cost-effectiveness and simplicity are major concerns BIB004 BIB013 . As the most abundant biopolymer on Earth, cellulose is mostly used to produce paper for industrial use. Being composed of a network of hydrophilic cellulose fibers, paper has a naturally porous microstructure, which supports lateral flow via capillary action and thereby enables on-site analysis without external driving forces such as pumps BIB004 BIB002 . Microfluidic paper-based analytical devices (µPADs), as a promising and powerful platform, have shown great potential in the development of POCTs BIB032 BIB044 BIB059 BIB033 . The concept was first proposed by the Whitesides group in 2007 BIB004 , where photoresist-patterned paper was used to fabricate microfluidic devices in which liquid is transported by capillary force without external equipment. Since then, µPADs have become popular in a variety of applications, such as clinical diagnostics BIB004 BIB034 BIB035 BIB045 BIB060 BIB061 , food safety BIB046 , environmental monitoring BIB062 BIB036 BIB037 and bioterrorism BIB038 BIB030 BIB063 BIB047 BIB039 , owing to their portability, simplicity, economic affordability and minimal sample consumption. The paper substrate is hydrophilic by nature. Therefore, to fabricate µPADs, hydrophobic barriers are usually created to confine the fluid within a desired location or to direct it along desired paths. A number of techniques, including photolithography BIB004 BIB040 BIB014 BIB064 BIB048 BIB005 BIB006 , wax printing BIB009 BIB010 BIB049 BIB065 , screen-printing BIB022 BIB015 , plasma treating BIB007 BIB016 , flexography BIB023 BIB050 BIB017 and laser treating BIB024 , have been developed for the manufacture of hydrophobic barriers. In the photolithography process, the photoresists used to fabricate µPADs, e.g., octadecyltrichlorosilane (OTS), poly(o-nitrobenzylmethacrylate) (PoNBMA) and SU-8, are costly, and expensive photolithography equipment is also required. Patterning paper by wax printing offers relatively high speed, a facile process and high resolution for fabricating µPADs, while the high running costs of commercial wax printers and the low melting point of wax restrict its use in batch production.
The screen-printing method achieves slightly higher resolution than wax printing, but it is limited by the need for a different printing screen whenever the pattern is changed. Although plasma treating produces patterns without affecting the flexibility or surface topography of the paper, it is difficult to scale up for mass production. Flexographic printing is considered a suitable technique for mass production; however, it requires two successive prints of polystyrene as well as a different printing plate for each pattern. High resolution can be achieved when fabricating µPADs by laser treating, but laser-treated devices are difficult to fold or store BIB051 BIB052 . Though each fabrication method has its own advantages and limits, the economics of µPAD mass production is the principal issue of concern, especially for widespread use in glucose detection. Balancing cost against performance may rely on the development of unique process technologies and new materials. Alongside the development of µPADs, multiple conventional detection techniques, such as colorimetric detection BIB051 BIB053 BIB054 BIB068 BIB055 , electrochemical detection BIB056 BIB069 BIB041 , chemiluminescence (CL) BIB025 BIB026 BIB027 BIB057 BIB031 , fluorescence BIB058 BIB066 BIB042 , mass spectrometry (MS) BIB028 BIB018 and surface-enhanced Raman spectroscopy (SERS) BIB043 BIB019 , have been applied to paper-based devices for rapid diagnostics. In this article, colorimetric and electrochemical µPADs for glucose detection reported in the past five years are summarized and reviewed. With the development of microfabrication and nanomaterials, glucose detection µPADs with high sensitivity and stability should become commercially accessible in the near future.
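For readers cross-checking the clinical thresholds quoted above against meter readings reported in mg/dL, the short sketch below converts between the two units (glucose has a molar mass of about 180.2 g/mol, so 1 mM ≈ 18 mg/dL) and bands a reading against the ranges cited in this introduction; the function names and band labels are our own illustration, not taken from any reviewed work.

```python
# Illustrative helper (not from the reviewed papers): unit conversion and
# banding of blood glucose readings against the thresholds cited above.

MG_DL_PER_MM = 18.02  # glucose molar mass ~180.2 g/mol -> 1 mM = 18.02 mg/dL


def mm_to_mg_dl(conc_mm: float) -> float:
    """Convert a glucose concentration from mmol/L (mM) to mg/dL."""
    return conc_mm * MG_DL_PER_MM


def classify_glucose(conc_mm: float) -> str:
    """Band a reading using the thresholds quoted in the introduction."""
    if conc_mm < 2.8:
        return "hypoglycemia (<2.8 mM)"
    if conc_mm < 3.8:
        return "below the normal 3.8-6.9 mM range"
    if conc_mm <= 6.9:
        return "normal (3.8-6.9 mM)"
    if conc_mm < 10.0:
        return "elevated, still below the 10 mM target for diabetics"
    return "above the 10 mM target for diabetics"


if __name__ == "__main__":
    for c in (2.5, 5.0, 8.2, 11.4):
        print(f"{c:4.1f} mM = {mm_to_mg_dl(c):5.1f} mg/dL -> {classify_glucose(c)}")
```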
A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> Metabolic engineering for the overproduction of high-value small molecules is dependent upon techniques in directed evolution to improve production titers. The majority of small molecules targeted for overproduction are inconspicuous and cannot be readily obtained by screening. We provide a review on the development of high-throughput colorimetric, fluorescent, and growth-coupled screening techniques, enabling inconspicuous small-molecule detection. We first outline constraints on throughput imposed during the standard directed evolution workflow (library construction, transformation, and screening) and establish a screening and selection ladder on the basis of small-molecule assay throughput and sensitivity. An in-depth analysis of demonstrated screening and selection approaches for small-molecule detection is provided. Particular focus is placed on in vivo biosensor-based detection methods that reduce or eliminate in vitro assay manipulations and increase throughput. We conclude by providing our prospec... <s> BIB001 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> Light scattering phenomena in periodic systems have been investigated for decades in optics and photonics. Their classical description relies on Bragg scattering, which gives rise to constructive interference at specific wavelengths along well defined propagation directions, depending on illumination conditions, structural periodicity, and the refractive index of the surrounding medium. In this paper, by engineering multifrequency colorimetric responses in deterministic aperiodic arrays of nanoparticles, we demonstrate significantly enhanced sensitivity to the presence of a single protein monolayer. These structures, which can be readily fabricated by conventional Electron Beam Lithography, sustain highly complex structural resonances that enable a unique optical sensing approach beyond the traditional Bragg scattering with periodic structures. By combining conventional dark-field scattering micro-spectroscopy and simple image correlation analysis, we experimentally demonstrate that deterministic aperiodic surfaces with engineered structural color are capable of detecting, in the visible spectral range, protein layers with thickness of a few tens of Angstroms. <s> BIB002 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> A number of analogues of phenethylamine and tryptamine, which are prepared by modification of the chemical structures, are being developed for circulation on the black market. Often called “designer drugs,” they are abused in many countries, and cause serious social problems in many parts of the world. Acute deaths have been reported after overdoses of designer drugs. Various methods are required for screening and routine analysis of designer drugs in biological materials for forensic and clinical purposes. Many sample preparation and chromatographic methods for analysis of these drugs in biological materials and seized items have been published. This review presents various colorimetric detections, gas chromatographic (GC)–mass spectrometric, and liquid chromatographic (LC)–mass spectrometric methods proposed for designer drug analyses. 
Basic information on extractions, derivatizations, GC columns, LC columns, detection limits, and linear ranges is also summarized. <s> BIB003 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> This paper describes the use of a printed circuit technology to generate hydrophilic channels in a filter paper. Patterns of channels were designed using Protel soft, and printed on a blank paper. Then, the patterns were transferred to a sheet copper using a thermal transfer printer. The sheet copper with patterns was dipped into ferric chloride solution to etch the whole patterns of the sheet copper. At last, the etched sheet copper was coated with a film of paraffin and then a filter paper. An electric iron was used to heat the other side of the sheet copper. The melting paraffin penetrated full thickness of the filter paper and formed a hydrophobic “wall”. Colorimetric assays for the presence of protein and glucose were demonstrated using the paper-based device. The work is helpful to researchers to fabricate paper-based microfluidic devices for monitoring health and detecting disease. <s> BIB004 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> Many diagnostic tests in a conventional clinical laboratory are performed on blood plasma because changes in its composition often reflect the current status of pathological processes throughout the body. Recently, a significant research effort has been invested into the development of microfluidic paper-based analytical devices (μPADs) implementing these conventional laboratory tests for point-of-care diagnostics in resource-limited settings. This paper describes the use of red blood cell (RBC) agglutination for separating plasma from finger-prick volumes of whole blood directly in paper, and demonstrates the utility of this approach by integrating plasma separation and a colorimetric assay in a single μPAD. The μPAD was fabricated by printing its pattern onto chromatography paper with a solid ink (wax) printer and melting the ink to create hydrophobic barriers spanning through the entire thickness of the paper substrate. The μPAD was functionalized by spotting agglutinating antibodies onto the plasma separation zone in the center and the reagents of the colorimetric assay onto the test readout zones on the periphery of the device. To operate the μPAD, a drop of whole blood was placed directly onto the plasma separation zone of the device. RBCs in the whole blood sample agglutinated and remained in the central zone, while separated plasma wicked through the paper substrate into the test readout zones where analyte in plasma reacted with the reagents of the colorimetric assay to produce a visible color change. The color change was digitized with a portable scanner and converted to concentration values using a calibration curve. The purity and yield of separated plasma was sufficient for successful operation of the μPAD. This approach to plasma separation based on RBC agglutination will be particularly useful for designing fully integrated μPADs operating directly on small samples of whole blood. 
<s> BIB005 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> We developed a novel, low-cost and simple method for the fabrication of microfluidic paper-based analytical devices (μPADs) by silanization of filter cellulose using a paper mask having a specific pattern. The paper mask was penetrated with trimethoxyoctadecylsilane (TMOS) by immersing into TMOS-heptane solution. By heating the filter paper sandwiched between the paper mask and glass slides, TMOS was immobilized onto the filter cellulose via the reaction between cellulose OH and TMOS, while the hydrophilic area was not silanized because it was not in contact with the paper mask penetrated with TMOS. The effects of some factors including TMOS concentration, heating temperature and time on the fabrication of μPADs were studied. This method is free of any expensive equipment and metal masks, and could be performed by untrained personnel. These features are very attractive for the fabrication and applications of μPADs in developing countries or resource-limited settings. A flower-shaped μPAD was fabricated and used to determine glucose in human serum samples. The contents determined by this method agreed well with those determined by a standard method. <s> BIB006 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> Paper microfluidic devices are a promising technology in developing analytical devices for point-of-care diagnosis in the developing world. This article describes a simple method for paper microfluidic devices based on a PZT drop-on-demand droplet generator. Wax was jetted in the form of droplet, linked with each other and formed into wax pattern on filter paper with a PZT actuator and a glass nozzle. The heated wax pattern became a hydrophobic barrier for reagent used in bio-assay. The glass nozzle fabricated by a home-made micronozzle puller without complicated fabrication technology was low cost, simple and easily made. Coefficient of variation of the jetted wax droplet diameter was 4.0% which showed good reproducibility. The width of wax line was experimentally studied by changing the driving voltage, nozzle diameters and degree of overlapping. The wax line with width of 700–1700 μm was prepared for paper based microfluidic devices. Multi-assay of glucose, protein and pH and 3 × 3 arrays of glucose, protein and pH assay were realized with the prepared paper microfluidic devices. The wax droplet generating system supplied a low-cost, simple, easy-to-use and fast fabrication method for paper microfluidic devices. <s> BIB007 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> This paper describes the development and use of a handheld and lightweight stamp for the production of microfluidic paper-based analytical devices (μPADs). We also chemically modified the paper surface for improved colorimetric measurements. The design of the microfluidic structure has been patterned in a stamp, machined in stainless steel. Prior to stamping, the paper surface was oxidized to promote the conversion of hydroxyl into aldehyde groups, which were then chemically activated for covalent coupling of enzymes. Then, a filter paper sheet was impregnated with paraffin and sandwiched with a native paper (n-paper) sheet, previously oxidized. 
The metal stamp was preheated at 150 °C and then brought in contact with the paraffined paper (p-paper) to enable the thermal transfer of the paraffin to the n-paper, thus forming the hydrophobic barriers under the application of a pressure of ca. 0.1 MPa for 2 s. The channel and barrier widths measured in 50 independent μPADs exhibited values of 2.6 ± 0.1 and 1.4 ± 0.1 mm, respectively. The chemical modification for covalent coupling of enzymes on the paper surface also led to improvements in the colour uniformity generated inside the sensing area, a known bottleneck in this technology. The relative standard deviation (RSD) values for glucose and uric acid (UA) assays decreased from 40 to 10% and from 20 to 8%, respectively. Bioassays related to the detection of glucose, UA, bovine serum albumin (BSA), and nitrite were successfully performed in concentration ranges useful for clinical assays. The semi-quantitative analysis of all four analytes in artificial urine samples revealed an error smaller than 4%. The disposability of μPADs, the low instrumental requirements of the stamp-based fabrication, and the improved colour uniformity enable the use of the proposed devices for the point-of-care diagnostics or in limited resources settlements. <s> BIB008 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> This paper presents a novel paper-based analytical device based on the colorimetric paper assays through its light reflectance. The device is portable, low cost (<20 dollars), and lightweight (only 176 g) that is available to assess the cost-effectiveness and appropriateness of the original health care or on-site detection information. Based on the light reflectance principle, the signal can be obtained directly, stably and user-friendly in our device. We demonstrated the utility and broad applicability of this technique with measurements of different biological and pollution target samples (BSA, glucose, Fe, and nitrite). Moreover, the real samples of Fe (II) and nitrite in the local tap water were successfully analyzed, and compared with the standard UV absorption method, the quantitative results showed good performance, reproducibility, and reliability. This device could provide quantitative information very conveniently and show great potential to broad fields of resource-limited analysis, medical diagnostics, and on-site environmental detection. <s> BIB009 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> Paper-based microfluidics is a rapidly progressing inter-disciplinary technology driven by the need for low-cost alternatives to conventional point-of-care diagnostic tools. For transport of reagents/analytes, such devices often consist of interconnected hydrophilic fluid-flow channels that are demarcated by hydrophobic barrier walls that extend through the thickness of the paper. Here, we present a laser-based fabrication procedure that uses polymerisation of a photopolymer to produce the required fluidic channels in paper. Experimental results showed that the structures successfully guide the flow of fluids and allow containment of fluids in wells, and hence the technique is suitable for fabrication of paper-based microfluidic devices. 
The minimum width for the hydrophobic barriers that successfully prevented fluid leakage was ~120 μm and the minimum width for the fluidic channels that can be formed was ~80 μm, the smallest reported so far for paper-based fluidic patterns. <s> BIB010 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> We report a method for the bottom-up fabrication of paper-based capillary microchips by the blade coating of cellulose microfibers on a patterned surface. The fabrication process is similar to the paper-making process in which an aqueous suspension of cellulose microfibers is used as the starting material and is blade-coated onto a polypropylene substrate patterned using an inkjet printer. After water evaporation, the cellulose microfibers form a porous, hydrophilic, paperlike pattern that wicks aqueous solution by capillary action. This method enables simple, fast, inexpensive fabrication of paper-based capillary channels with both width and height down to about 10 μm. When this method is used, the capillary microfluidic chip for the colorimetric detection of glucose and total protein is fabricated, and the assay requires only 0.30 μL of sample, which is 240 times smaller than for paper devices fabricated using photolithography. <s> BIB011 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> This paper describes a simple and instrument-free screen-printing method to fabricate hydrophilic channels by patterning polydimethylsiloxane (PDMS) onto chromatography paper. Clearly recognizable border lines were formed between hydrophilic and hydrophobic areas. The minimum width of the printed channel to deliver an aqueous sample was 600 μm, as obtained by this method. Fabricated microfluidic paper-based analytical devices (μPADs) were tested for several colorimetric assays of pH, glucose, and protein in both buffer and artificial urine samples and results were obtained in less than 30 min. The limits of detection (LODs) for glucose and bovine serum albumin (BSA) were 5 mM and 8 μM, respectively. Furthermore, the pH values of different solutions were visually recognised with the naked eye by using a sensitive ink. Ultimately, it is expected that this PDMS-screen-printing (PSP) methodology for μPADs can be readily translated to other colorimetric detection and hydrophilic channels surrounded by a hydrophobic polymer can be formed to transport fluids toward target zones. <s> BIB012 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> A simple and low-cost fabrication method for paper-based diagnostic devices (PBDDs) is described in this study. Street-available polymer solutions were screen printed onto filter papers to create hydrophobic patterns for fluidic channels. In order to obtain fully functional hydrophobic patterns for fluids, the original polymer solutions were diluted with butyl acetate to yield a suitable viscosity range between 30-200 cP for complete patterning on paper. Typical pH and glucose tests with color indicators were performed on the screen printed PBDDs. Images of the PBDDs were analyzed by computers to obtain calibration curves for pH between 2 and 12 and glucose concentration ranging from 10-1000 mmol dm(-3). 
Detection of formaldehyde in acetone was also carried out to show the possibility of using this PBDD for analytical detection with organic solvents. An exemplar PBDD with simultaneous pH and glucose detection was also used to demonstrate the feasibility of applying this technique for realistic diagnostic applications. <s> BIB013 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> Interest in low-cost diagnostic devices has recently gained attention, in part due to the rising cost of healthcare and the need to serve populations in resource-limited settings. A major challenge in the development of such devices is the need for hydrophobic barriers to contain polar bio-fluid analytes. Key approaches in lowering the cost in diagnostics have centered on (i) development of low-cost fabrication techniques/processes, (ii) use of affordable materials, or, (iii) minimizing the need for high-tech tools. This communication describes a simple, low-cost, adaptable, and portable method for patterning paper and subsequent use of the patterned paper in diagnostic tests. Our approach generates hydrophobic regions using a ball-point pen filled with a hydrophobizing molecule suspended in a solvent carrier. An empty ball-point pen was filled with a solution of trichloro perfluoroalkyl silane in hexanes (or hexadecane), and the pen used to draw lines on Whatman® chromatography 1 paper. The drawn regions defined the test zones since the trichloro silane reacts with the paper to give a hydrophobic barrier. The formation of the hydrophobic barriers is reaction kinetic and diffusion-limited, ensuring well defined narrow barriers. We performed colorimetric glucose assays and enzyme-linked immuno-sorbent assay (ELISA) using the created test zones. To demonstrate the versatility of this approach, we fabricated multiple devices on a single piece of paper and demonstrated the reproducibility of assays on these devices. The overall cost of devices fabricated by drawing is relatively lower ( <s> BIB014 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Fabrication Process of Colorimetric Glucose µPADs <s> Paper microfluidic devices are a promising technology in developing analytical devices for point-of-care diagnosis in the developing world. This article describes a novel method of wax jetting with a PZT (piezoelectric ceramic transducer) actuator and glass nozzle for the fabrication of paper microfluidic devices. The hydrophobic fluid pattern was formed by the permeation of filter paper with wax droplets. Results showed that the size of the wax droplet, which was determined by the voltage of the driving signal and nozzle diameter, ranged from 150 μm to 380 μm, and the coefficient of variation of the droplet diameter was under 4.0%. The smallest width of the fluid channel was 600 μm frontside and 750 μm backside. The patterned filter paper was without any leakage, and multi-assay of glucose, protein, and pH on the paper microfluidic device, and laminar diffusion flow with blue and yellow dye were realized. The wax jetting system supplied a low-cost, simple, easy-to-use and fast fabrication method for paper microfluidic devices. <s> BIB015
Colorimetric detection has been the most widely employed technique for paper-based analytical devices due to the advantages of visual readout, straightforward operation and superior stability BIB001 BIB002 BIB003 . Glucose oxidase (GOx) and horseradish peroxidase (HRP) are the commonly used bienzyme system to catalyze the reaction between glucose and the color indicator in µPADs. The oxidation of glucose catalyzed by glucose oxidase yields hydrogen peroxide (H2O2) and gluconic acid. The peroxidase then catalyzes the reaction of H2O2 with the color indicator and generates a visual color change. Identifying an appropriate color indicator is one of the crucial steps in the advancement of µPADs for glucose concentration determination. Potassium iodide (KI) is one of the commonly used color indicators: HRP catalyzes the oxidation of iodide to iodine by hydrogen peroxide, leading to a change from colorless to a visible brown color BIB006 BIB007 BIB008 BIB015 BIB009 BIB012 BIB004 BIB013 BIB005 . Garcia et al. BIB008 proposed a production method for µPADs using a handheld metal stamp (Figure 1 ). The channel and barrier widths of the fabricated µPAD were 2.6 ± 0.1 and 1.4 ± 0.1 mm, respectively. The color uniformity was improved by the covalent coupling of enzymes on the paper surface, and the linear response ranged from 0 to 12 mM. Cai et al. BIB006 developed a µPAD fabrication route free of metal masks or expensive equipment. A mask impregnated with trimethoxyoctadecylsilane (TMOS) was used to silanize the cellulose paper substrate by heating the paper sandwiched between the mask and glass slides. TMOS adsorbed on the mask evaporated and penetrated into the regions of the cellulose paper aligned with the mask, while the other parts remained hydrophilic owing to the absence of reaction between cellulose OH groups and TMOS (Figure 2) . Li et al. BIB007 BIB015 developed a piezoelectric ceramic transducer (PZT) drop-on-demand wax droplet generating system for µPADs, in which wax was jetted as droplets by a PZT actuator to form the hydrophobic fluid pattern on a piece of filter paper. Mohammadi et al. BIB012 proposed a screen-printing method to fabricate µPADs by patterning polydimethylsiloxane (PDMS) instead of wax onto paper to construct hydrophilic channels. Oyola-Reynoso et al. BIB014 used a ball-point pen filled with a solution of trichloro perfluoroalkyl silane in hexanes to draw hydrophobic regions on paper, so that a glucose diagnostic device could be made simply by drawing with the silane/hexane ink, without any complex equipment. To investigate the glucose concentration in blood plasma, Yang et al. BIB005 developed a µPAD with immobilized agglutinating antibodies for separating plasma from red blood cells in whole blood ( Figure 3 ). Furthermore, laser-induced photo-polymerisation BIB010 and blade coating BIB011 were also used to create µPADs relying on the GOx/HRP bienzyme reaction.
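The two catalytic steps just described can be written compactly. The scheme below is a sketch of the overall reactions commonly given for the GOx/HRP cascade with iodide as the indicator; the stoichiometry follows standard enzymology rather than any single paper reviewed here.

```latex
% GOx/HRP bienzyme cascade with iodide as the chromogenic indicator (sketch)
\begin{align*}
\text{glucose} + \mathrm{O_2} + \mathrm{H_2O}
  &\xrightarrow{\;\text{GOx}\;} \text{gluconic acid} + \mathrm{H_2O_2}\\
\mathrm{H_2O_2} + 2\,\mathrm{I^-} + 2\,\mathrm{H^+}
  &\xrightarrow{\;\text{HRP}\;} \underbrace{\mathrm{I_2}}_{\text{brown}} + 2\,\mathrm{H_2O}
\end{align*}
```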
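Several of the devices above digitize the readout zone with a scanner or camera and convert color intensity to concentration through a calibration curve. The sketch below shows a minimal version of that workflow, assuming Pillow and NumPy are installed; the file names, crop box and the linearity of the fit are illustrative assumptions, not details from the cited papers.

```python
# Minimal intensity-readout sketch (hypothetical file names and crop box):
# estimate glucose from the mean color signal of a scanned detection zone.
import numpy as np
from PIL import Image


def zone_signal(path: str, box: tuple) -> float:
    """Mean color signal of the detection zone; darker color -> larger value."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    x0, y0, x1, y1 = box
    # Invert the mean intensity so that stronger color development
    # (a darker zone) gives a higher signal.
    return 255.0 - rgb[y0:y1, x0:x1].mean()


BOX = (20, 20, 80, 80)  # pixel region covering the readout zone (assumed)

# Calibration from standards of known concentration (illustrative numbers).
standards_mm = np.array([0.0, 2.0, 4.0, 8.0, 12.0])
signals = np.array([zone_signal(f"standard_{c:g}mM.png", BOX)
                    for c in standards_mm])
slope, intercept = np.polyfit(standards_mm, signals, 1)  # linear fit

# Invert the calibration for an unknown sample.
unknown = zone_signal("sample.png", BOX)
print(f"Estimated glucose: {(unknown - intercept) / slope:.2f} mM")
```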
A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose μPADs <s> Abstract Cellulose paper based glucose test strips were successfully prepared using 2,4,6-tribromo-3-hydroxy benzoic acid (TBHBA) as the chromogen agent. Cellulose paper is a good substrate for carrying chromogen agents and other chemicals so that the quantitative analysis can be done based on the colorimetric chemistry. The color intensity of the developed compounds, which was measured as the differential diffusive reflectance of the test strip at 510 nm, was correlated to the glucose concentration of the sample solutions in the range of 0.18–9.91 mg/ml. These colorimetric test strips could be conveniently used, do not have to use an electronic device, and would have potential applications in the home monitoring of blood glucose for people with diabetes. <s> BIB001 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose μPADs <s> In this work, we first employ a drying method combining with the bienzyme colorimetric detection of glucose and uric acid on microfluidic paper-based analysis devices (μPADs). The channels of 3D μPADs are also designed by us to get better results. The color results are recorded by both Gel Documentation systems and a common camera. By using Gel Documentation systems, the limits of detection (LOD) of glucose and uric acid are 3.81 × 10(-5)M and 4.31 × 10(-5)M, respectively one order of magnitude lower than that of the reported methods on μPADs. By using a common camera, the limits of detection (LOD) of glucose and uric acid are 2.13 × 10(-4)M and 2.87 × 10(-4)M, respectively. Furthermore, the effects of detection conditions have been investigated and discussed comprehensively. Human serum samples are detected with satisfactory results, which are comparable with the clinical testing results. A low-cost, simple and rapid colorimetric method for the simultaneous detection of glucose and uric acid on the μPADs has been developed with enhanced sensitivity. <s> BIB002 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose μPADs <s> Many diagnostic tests in a conventional clinical laboratory are performed on blood plasma because changes in its composition often reflect the current status of pathological processes throughout the body. Recently, a significant research effort has been invested into the development of microfluidic paper-based analytical devices (μPADs) implementing these conventional laboratory tests for point-of-care diagnostics in resource-limited settings. This paper describes the use of red blood cell (RBC) agglutination for separating plasma from finger-prick volumes of whole blood directly in paper, and demonstrates the utility of this approach by integrating plasma separation and a colorimetric assay in a single μPAD. The μPAD was fabricated by printing its pattern onto chromatography paper with a solid ink (wax) printer and melting the ink to create hydrophobic barriers spanning through the entire thickness of the paper substrate. The μPAD was functionalized by spotting agglutinating antibodies onto the plasma separation zone in the center and the reagents of the colorimetric assay onto the test readout zones on the periphery of the device. To operate the μPAD, a drop of whole blood was placed directly onto the plasma separation zone of the device. 
RBCs in the whole blood sample agglutinated and remained in the central zone, while separated plasma wicked through the paper substrate into the test readout zones where analyte in plasma reacted with the reagents of the colorimetric assay to produce a visible color change. The color change was digitized with a portable scanner and converted to concentration values using a calibration curve. The purity and yield of separated plasma was sufficient for successful operation of the μPAD. This approach to plasma separation based on RBC agglutination will be particularly useful for designing fully integrated μPADs operating directly on small samples of whole blood. <s> BIB003 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose μPADs <s> Abstract In this contribution, we first developed a semiquantitative method for the detection of glucose with self-calibration based on bienzyme colorimetry by using tree-shaped paper strip. The GOD/HRP bienzyme system was utilized to amplify the color signal in the aqueous phase. Moreover, we employed a paper as microfluidic media for running colorimetric assay, while tree-shaped paper strip was designed to ensure uniform microfluidic flow for multiple branches. Our proposed method gives direct outcomes which can be observed by the naked eye or recorded by a simple camera. The linear range is from 1.0 × 10 −3 to 11.0 × 10 −3 M, with a detection limit of 3 × 10 −4 M. Furthermore, the effect of detection condition has been investigated and discussed comprehensively. The result of determining glucose in human serum is consistent with that of detecting standard glucose solution by using our developed approach. A low-cost, simple, and rapid colorimetric method for the simultaneous detection of glucose with self-calibration on the tree-shaped paper has been proposed. <s> BIB004 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose μPADs <s> We developed a novel, low-cost and simple method for the fabrication of microfluidic paper-based analytical devices (μPADs) by silanization of filter cellulose using a paper mask having a specific pattern. The paper mask was penetrated with trimethoxyoctadecylsilane (TMOS) by immersing into TMOS-heptane solution. By heating the filter paper sandwiched between the paper mask and glass slides, TMOS was immobilized onto the filter cellulose via the reaction between cellulose OH and TMOS, while the hydrophilic area was not silanized because it was not in contact with the paper mask penetrated with TMOS. The effects of some factors including TMOS concentration, heating temperature and time on the fabrication of μPADs were studied. This method is free of any expensive equipment and metal masks, and could be performed by untrained personnel. These features are very attractive for the fabrication and applications of μPADs in developing countries or resource-limited settings. A flower-shaped μPAD was fabricated and used to determine glucose in human serum samples. The contents determined by this method agreed well with those determined by a standard method. <s> BIB005
Figure 2. Scheme of the µPAD fabrication in BIB005 : A filter paper mask (b) was obtained by cutting a native filter paper (a), and was immersed in TMOS solution (c); the TMOS-adsorbed mask and a native filter paper were packed between two glass slides (d); TMOS molecules were assembled on the native filter paper by heating (e); and the fabricated µPAD with hydrophilic-hydrophobic contrast (f) and its photograph (g) obtained by spraying water on it. With permission from BIB005 ; Copyright 2014, The Royal Society of Chemistry.
Figure 3. Fabrication scheme of the µPAD designed in BIB003 . The central plasma separation zone (a) and the four test readout zones (b) were patterned on chromatography paper by a wax printer (c); (d) agglutinating antibodies were immobilized at the central part while the reagents for the colorimetric assay were placed at the periphery zones; (e) to perform a diagnostic test with the developed µPAD, the whole blood sample was dropped onto the plasma separation zone; (f) the red blood cells were agglutinated in the central zone, while the separated plasma wicked into the test readout zones and reacted with the reagents of the colorimetric assay. With permission from BIB003 ; Copyright 2012, The Royal Society of Chemistry.
A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> An enzymatic synthesis route to protein-wrapped gold nanoparticles is developed. Glucose oxidase (GOD) reduces Au(III) ion in the presence of β-D-glucose, and stable gold nanoparticles with average diameter of 14.5 nm are formed. FT-IR spectra, zeta potential and CD spectra of purified nanoparticles indicate that they are stabilized by the adsorbed protein layer. <s> BIB001 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> Abstract The control of size and shape of metallic nanoparticles is a fundamental goal in nanochemistry, and crucial for applications exploiting nanoscale properties of materials. We present here an approach to the synthesis of gold nanoparticles mediated by glucose oxidase (GOD) immobilized on solid substrates using the Layer-by-Layer (LbL) technique. The LbL films contained four alternated layers of chitosan and poly(styrene sulfonate) (PSS), with GOD in the uppermost bilayer adsorbed on a fifth chitosan layer: (chitosan/PSS)4/(chitosan/GOD). The films were inserted into a solution containing gold salt and glucose, at various pHs. Optimum conditions were achieved at pH 9, producing gold nanoparticles of ca. 30 nm according to transmission electron microscopy. A comparative study with the enzyme in solution demonstrated that the synthesis of gold nanoparticles is more efficient using immobilized GOD. <s> BIB002 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> Carboxyl-modified graphene oxide (GO-COOH) possesses intrinsic peroxidase-like activity that can catalyze the reaction of peroxidase substrate 3,3′,5,5′-tetramethylbenzidine (TMB) in the presence of H2O2 to produce a blue color reaction. A simple, cheap, and highly sensitive and selective colorimetric method for glucose detection has been developed and will facilitate the utilization of GO-COOH intrinsic peroxidase activity in medical diagnostics and biotechnology. <s> BIB003 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> We report the first use of redox nanoparticles of cerium oxide as colorimetric probes in bioanalysis. The method is based on changes in the physicochemical properties of ceria nanoparticles, used here as chromogenic indicators, in response to the analyte. We show that these particles can be fully integrated in a paper-based bioassay. To construct the sensor, ceria nanoparticles and glucose oxidase were coimmobilized onto filter paper using a silanization procedure. In the presence of glucose, the enzymatically generated hydrogen peroxide induces a visual color change of the ceria nanoparticles immobilized onto the bioactive sensing paper, from white-yellowish to dark orange, in a concentration-dependent manner. A detection limit of 0.5 mM glucose with a linear range up to 100 mM and a reproducibility of 4.3% for n = 11 ceria paper strips were obtained. The assay is fully reversible and can be reused for at least 10 consecutive measurement cycles, without significant loss of activity. Another unique featur...
<s> BIB004 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> Abstract Cellulose paper based glucose test strips were successfully prepared using 2,4,6-tribromo-3-hydroxy benzoic acid (TBHBA) as the chromogen agent. Cellulose paper is a good substrate for carrying chromogen agents and other chemicals so that the quantitative analysis can be done based on the colorimetric chemistry. The color intensity of the developed compounds, which was measured as the differential diffusive reflectance of the test strip at 510 nm, was correlated to the glucose concentration of the sample solutions in the range of 0.18–9.91 mg/ml. These colorimetric test strips could be conveniently used, do not have to use an electronic device, and would have potential applications in the home monitoring of blood glucose for people with diabetes. <s> BIB005 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> In this work, we first employ a drying method combining with the bienzyme colorimetric detection of glucose and uric acid on microfluidic paper-based analysis devices (μPADs). The channels of 3D μPADs are also designed by us to get better results. The color results are recorded by both Gel Documentation systems and a common camera. By using Gel Documentation systems, the limits of detection (LOD) of glucose and uric acid are 3.81 × 10(-5)M and 4.31 × 10(-5)M, respectively one order of magnitude lower than that of the reported methods on μPADs. By using a common camera, the limits of detection (LOD) of glucose and uric acid are 2.13 × 10(-4)M and 2.87 × 10(-4)M, respectively. Furthermore, the effects of detection conditions have been investigated and discussed comprehensively. Human serum samples are detected with satisfactory results, which are comparable with the clinical testing results. A low-cost, simple and rapid colorimetric method for the simultaneous detection of glucose and uric acid on the μPADs has been developed with enhanced sensitivity. <s> BIB006 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> Abstract In this paper gold nanoparticles (Au-NPs) have been used as colorimetric reporters for the detection of sugars. The synthesis of Au-NPs has been obtained in presence of glucose as reducing agent in different conditions, allowing the formation of pink or blue coloured NPs, and has been employed in the design of two colorimetric assays. Both assays rely on the analyte induced intensity increase (without any shift) of the NPs plasmon band absorption. The “pink assay” is based on the sugar assisted chemical synthesis of NPs and it represents a simple one-step colorimetric approach to the quantification of all potentially reducing sugars (sucrose included) with a LOD of 10 μM. The “blue assay” is based on the Au-NP synthesis catalysed by the enzyme glucose oxidase and it is specific for glucose, with a LOD of 5 μM. Compared to the classical bi-enzymatic (glucose oxidase/peroxidase) optical assay, it uses only one enzyme and does not suffer of the bleaching of the final colour because the reporter Au-NPs are very stable. 
<s> BIB007 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> Paper-based analytical devices (PADs) represent a growing class of elegant, yet inexpensive chemical sensor technologies designed for point-of-use applications. Most PADs, however, still utilize some form of instrumentation such as a camera for quantitative detection. We describe here a simple technique to render PAD measurements more quantitative and straightforward using the distance of colour development as a detection motif. The so-called distance-based detection enables PAD chemistries that are more portable and less resource intensive compared to classical approaches that rely on the use of peripheral equipment for quantitative measurement. We demonstrate the utility and broad applicability of this technique with measurements of glucose, nickel, and glutathione using three different detection chemistries: enzymatic reactions, metal complexation, and nanoparticle aggregation, respectively. The results show excellent quantitative agreement with certified standards in complex sample matrices. This work provides the first demonstration of distance-based PAD detection with broad application as a class of new, inexpensive sensor technologies designed for point-of-use applications. <s> BIB008 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> Abstract In this contribution, we first developed a semiquantitative method for the detection of glucose with self-calibration based on bienzyme colorimetry by using tree-shaped paper strip. The GOD/HRP bienzyme system was utilized to amplify the color signal in the aqueous phase. Moreover, we employed a paper as microfluidic media for running colorimetric assay, while tree-shaped paper strip was designed to ensure uniform microfluidic flow for multiple branches. Our proposed method gives direct outcomes which can be observed by the naked eye or recorded by a simple camera. The linear range is from 1.0 × 10 −3 to 11.0 × 10 −3 M, with a detection limit of 3 × 10 −4 M. Furthermore, the effect of detection condition has been investigated and discussed comprehensively. The result of determining glucose in human serum is consistent with that of detecting standard glucose solution by using our developed approach. A low-cost, simple, and rapid colorimetric method for the simultaneous detection of glucose with self-calibration on the tree-shaped paper has been proposed. <s> BIB009 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> Abstract Paper based colorimetric biosensing platform utilizing cross-linked siloxane 3-aminopropyltriethoxysilane (APTMS) as probe was developed for the detection of a broad range of targets including H 2 O 2 , glucose and protein biomarker. APTMS was extensively used for the modification of filter papers to develop paper based analytical devices. We discovered when APTMS was cross-linked with glutaraldehyde (GA), the resulting complex (APTMS–GA) displays brick-red color, and a visual color change was observed when the complex reacted with H 2 O 2 . By integrating the APTMS–GA complex with filter paper, the modified paper enables quantitative detection of H 2 O 2 through the monitoring of the color intensity change of the paper via software Image J. 
Then, with the immobilization of glucose oxidase (GOx) onto the modified paper, glucose can be detected through the detection of enzymatically generated H2O2. For protein biomarker prostate specific antigen (PSA) assay, we immobilized capture anti-PSA antibody (Ab1) onto the paper surface and used a GOx-modified gold nanorod (GNR) as the detection anti-PSA antibody (Ab2) label. The detection of PSA was also achieved via the liberated H2O2 when the GOx label reacted with glucose. The results demonstrated the possibility of this paper based sensor for the detection of different analytes with wide linear range. The low cost and simplicity of this paper based sensor could be developed for "point-of-care" analysis and find wide application in different areas. <s> BIB010 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> This paper describes a silica nanoparticle-modified microfluidic paper-based analytical device (μPAD) with improved color intensity and uniformity for three different enzymatic reactions with clinical relevance (lactate, glucose, and glutamate). The μPADs were produced on a Whatman grade 1 filter paper and using a CO2 laser engraver. Silica nanoparticles modified with 3-aminopropyltriethoxysilane were then added to the paper devices to facilitate the adsorption of selected enzymes and prevent the washing away effect that creates color gradients in the colorimetric measurements. According to the results herein described, the addition of silica nanoparticles yielded significant improvements in color intensity and uniformity. The resulting μPADs allowed for the detection of the three analytes in clinically relevant concentration ranges with limits of detection (LODs) of 0.63 mM, 0.50 mM, and 0.25 mM for lactate, glucose, and glutamate, respectively. An example of an analytical application has been demonstrated for the semi-quantitative detection of all three analytes in artificial urine. The results demonstrate the potential of silica nanoparticles to avoid the washing away effect and improve the color uniformity and intensity in colorimetric bioassays performed on μPADs. <s> BIB011 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> Abstract In this paper, graphene oxide@SiO2@CeO2 hybrid nanosheets (GSCs) have been successfully synthesized by the wet-chemical strategy. TEM, FTIR and XPS were applied to characterize the morphology and composition of the nanosheets. The colorimetric assay of these nanosheets indicated that they possessed high intrinsic peroxidase activity, which should be ascribed to the combination of graphene oxide and CeO2. A fully integrated reagentless bioactive paper based on GSCs was fabricated, which was able to simultaneously detect glucose, lactate, uric acid and cholesterol. The results demonstrated that GSCs have great potential as an alternative to the commonly employed peroxidase in daily nursing and general physical examination.
<s> BIB012 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> In our present study, we developed an optical biosensor for direct determination of salivary glucose by using immobilized glucose oxidase enzyme on filter paper strip (specific activity 1.4 U/strip) and then reacting it with synthetic glucose samples in presence of co-immobilized color pH indicator. The filter paper changed color based on concentration of glucose in reaction media and hence, by scanning this color change (using RGB profiling) through an office scanner and open source image processing software (GIMP) the concentration of glucose in the reaction medium could be deduced. Once the biosensor was standardized, the synthetic glucose sample was replaced with human saliva from donors. The individual's blood glucose level at the time of obtaining saliva was also measured using an Accuchek(™) active glucometer (Roche Inc.). In this preliminary study, a correlation of nearly 0.64 was found between glucose levels in saliva and blood of healthy individuals and in diabetic patients it was nearly in the order of 0.95, thereby validating the importance of salivary analysis. The RGB profiling method obtained a detection range of 9-1350 mg/dL glucose at a response time of 45 s and LOD of 22.2 mg/dL. <s> BIB013 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> Custom-made pencils containing reagents dispersed in a solid matrix were developed to enable rapid and solvent-free deposition of reagents onto membrane-based fluidic devices. The technique is as simple as drawing with the reagent pencils on a device. When aqueous samples are added to the device, the reagents dissolve from the pencil matrix and become available to react with analytes in the sample. Colorimetric glucose assays conducted on devices prepared using reagent pencils had comparable accuracy and precision to assays conducted on conventional devices prepared with reagents deposited from solution. Most importantly, sensitive reagents, such as enzymes, are stable in the pencils under ambient conditions, and no significant decrease in the activity of the enzyme horseradish peroxidase stored in a pencil was observed after 63 days. Reagent pencils offer a new option for preparing and customizing diagnostic tests at the point of care without the need for specialized equipment. <s> BIB014 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> This paper describes the modification of microfluidic paper-based analytical devices (μPADs) with chitosan to improve the analytical performance of colorimetric measurements associated with enzymatic bioassays. Chitosan is a natural biopolymer extensively used to modify biosensing surfaces due to its capability of providing a suitable microenvironment for the direct electron transfer between an enzyme and a reactive surface. This hypothesis was investigated using glucose and uric acid (UA) colorimetric assays as model systems. The best colorimetric sensitivity for glucose and UA was achieved using a chromogenic solution composed of 4-aminoantipyrine and sodium 3,5-dichloro-2-hydroxy-benzenesulfonate (4-AAP/DHBS), which provided a linear response for a concentration range between 0.1 and 1.0 mM. 
Glucose and UA were successfully determined in artificial serum samples with accuracies between 87 and 114%. The limits of detection (LODs) found for glucose and UA assays were 23 and 37 μM, respectively. The enhanced analytical performance of chitosan-modified μPADs allowed the colorimetric detection of glucose in tear samples from four nondiabetic patients. The achieved concentration levels ranged from 130 to 380 μM. The modified μPADs offered analytical reliability and accuracy as well as no statistical difference from the values achieved through a reference method. Based on the presented results, the proposed μPAD can be a powerful alternative tool for non-invasive glucose analysis. <s> BIB015 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Alternative Color Indicators for Glucose µPADs <s> A disposable, equipment-free, versatile point-of-care testing platform, microfluidic distance readout sweet hydrogel integrated paper-based analytical device (μDiSH-PAD), was developed for portable quantitative detection of different types of targets. The platform relies on a target-responsive aptamer cross-linked hydrogel for target recognition, cascade enzymatic reactions for signal amplification, and microfluidic paper-based analytic devices (μPADs) for visual distance-based quantitative readout. A “sweet” hydrogel with trapped glucoamylase (GA) was synthesized using an aptamer as a cross-linker. When target is present in the sample, the “sweet” hydrogel collapses and releases enzyme GA into the sample, generating glucose by amylolysis. A hydrophilic channel on the μPADs is modified with glucose oxidase (GOx) and colorless 3,3′-diaminobenzidine (DAB) as the substrate. When glucose travels along the channel by capillary action, it is converted to H2O2 by GOx. In addition, DAB is converted into brown ins... <s> BIB016
Due to the weaker color signal produced by potassium iodide, various organic dyes and nanoparticles have been used as alternative color indicators in glucose µPADs. 2,4,6-tribromo-3-hydroxybenzoic acid (TBHBA) and 4-aminoantipyrine (4-AAP) were used as substrates catalyzed by HRP to generate the color signal for glucose detection, owing to the superior water solubility of TBHBA and the positive charges of TBHBA/4-AAP, which allow them to attach firmly to the negatively charged paper substrate BIB009 BIB005 . Chen et al. BIB006 replaced TBHBA with N-ethyl-N-(3-sulfopropyl)-3-methylaniline sodium salt (TOPS) and used TOPS/4-AAP in a µPAD for glucose detection, achieving a limit of detection (LOD) of 38.1 µM. Gabriel et al. BIB015 used 4-AAP and sodium 3,5-dichloro-2-hydroxy-benzenesulfonate (DHBS) as the chromogenic solution; chitosan was added to improve the sensing performance for glucose in tear samples, and the detection limit was 0.023 mM. Zhou et al. BIB010 used the cross-linked siloxane 3-aminopropyltriethoxysilane (APTMS) as the probe for a colorimetric µPAD. Only glucose oxidase needs to be immobilized on this µPAD, because a visible color change occurs when the APTMS/glutaraldehyde (GA) complex reacts with H2O2. The µPAD exhibited good linearity over the concentration range from 0.5 to 30 mM, covering the clinical range of normal blood glucose levels . Similarly, Soni et al. BIB013 used a co-immobilized pH color indicator for the direct determination of salivary glucose, with no need for a peroxidase. While most conventional intensity-based colorimetric µPADs are still constrained by the requirement of a camera for quantitative detection, Cate et al. BIB008 and Wei et al. BIB016 developed visual distance-based µPADs that use the distance of color development as the readout. GOx and colorless 3,3′-diaminobenzidine (DAB) were immobilized in a hydrophilic channel as the substrate on these µPADs. H2O2 was generated by GOx as the sample solution travelled along the channel by capillary action, and then reacted with DAB to form a visible brown, insoluble product (poly(DAB)) in the presence of peroxidase (Figure 4). The length of the brown precipitate was positively correlated with the glucose concentration.
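All of the chromogenic systems above transduce glucose through the same two-step enzymatic cascade, summarized generically below (a textbook scheme restating the mechanism described in this section, not an equation taken from any single cited paper):

```latex
\begin{align*}
\text{glucose} + \mathrm{O_2} + \mathrm{H_2O}
  &\xrightarrow{\ \mathrm{GOx}\ } \text{gluconic acid} + \mathrm{H_2O_2} \\
\mathrm{H_2O_2} + \text{chromogen (colorless, reduced)}
  &\xrightarrow{\ \mathrm{HRP}\ } \mathrm{H_2O} + \text{chromogen (colored, oxidized)}
\end{align*}
```

Because the oxidized chromogen accumulates in proportion to the H2O2 produced, either the color intensity at a fixed detection zone or the length of colored channel (in distance-based devices) maps back to the glucose concentration.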
Nanoparticles have been used in lateral flow assays associated with colorimetric detection to improve the analytical performance and minimize washing effects BIB007 BIB011 . Figueredo et al. applied three different types of nanomaterials, namely Fe3O4 nanoparticles (MNPs), multiwalled carbon nanotubes (MWCNTs), and graphene oxide (GO), in paper-based analytical devices to improve the homogeneity of the color measurements. Instead of constructing hydrophobic barriers on the paper surface as described above, a layer of hydrophilic paper channels was built directly on the surface of a hydrophobic substrate. With the assistance of glucose oxidase and HRP, the LODs of the µPADs treated with MNPs, MWCNTs, and GO were 43, 62, and 18 µM, respectively. Evans et al. BIB011 also aimed at improving color intensity and uniformity by using silica nanoparticles (Figure 5). The PAD modified with silica nanoparticles prevented the color gradients in colorimetric detection caused by the washing-away effect, and the LOD was 0.5 mM. Exploiting the ability of glucose oxidase to reduce Au3+ ions to Au0 in the presence of glucose BIB001 BIB002 , Palazzo et al. BIB007 used gold nanoparticles (AuNPs) as colorimetric reporters to detect glucose. This µPAD used only glucose oxidase instead of the conventional bienzymatic (GOx/peroxidase) format and avoided bleaching of the final color, with an LOD of 5 µM. Some nanoparticles, such as graphene oxide (GO) and cerium oxide (CeO2), possess high intrinsic peroxidase-like catalytic activity BIB003 BIB004 . Deng et al. BIB012 synthesized GO@SiO2@CeO2 hybrid nanosheets (GSCs) as an alternative to the commonly employed peroxidase. 2,2′-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS), used as the electron-donor dye substrate, was converted from a colorless reduced form to a blue-green oxidized form by the GSCs instead of HRP BIB014 , with an LOD of 9 nM.
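For intensity-based readout with a scanner or smartphone camera, as in the RGB-profiling approach of Soni et al. BIB013 and the image analyses cited above, quantification reduces to extracting a mean color signal from the detection zone and inverting a linear calibration. Below is a minimal sketch of that workflow; the file names, region-of-interest coordinates, and calibration values are illustrative assumptions, not data from the cited works.

```python
# Minimal sketch of intensity-based colorimetric quantification for a uPAD
# detection zone. Assumes scanned or phone images of the zones; ROI box,
# file names, and standards are illustrative placeholders.
import numpy as np
from PIL import Image

def zone_intensity(path, box):
    """Mean color signal in a detection zone.

    box = (left, upper, right, lower) pixel ROI of the zone.
    Signal is 255 minus the mean green-channel value, so darker
    (more oxidized chromogen) gives a larger number.
    """
    rgb = np.asarray(Image.open(path).convert("RGB").crop(box), dtype=float)
    return 255.0 - rgb[..., 1].mean()

# Hypothetical calibration: zones imaged for known glucose standards (mM).
standards_mM = np.array([0.0, 1.0, 2.5, 5.0, 10.0])
signals = np.array([zone_intensity(f"std_{c}.png", (120, 120, 180, 180))
                    for c in standards_mM])

slope, intercept = np.polyfit(standards_mM, signals, 1)  # linear fit

# Unknown sample: invert the calibration line.
s = zone_intensity("sample.png", (120, 120, 180, 180))
print(f"glucose = {(s - intercept) / slope:.2f} mM")
```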
A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-µPADs <s> This paper describes an efficient and high throughput method for fabricating three-dimensional (3D) paper-based microfluidic devices. The method avoids tedious alignment and assembly steps and eliminates a major bottleneck that has hindered the development of these types of devices. A single researcher now can prepare hundreds of devices within 1 h. <s> BIB001 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-µPADs <s> We present a new method for fabricating three-dimensional paper-based fluidic devices that uses toner as a thermal adhesive to bond multiple layers of patterned paper together. The fabrication process is rapid, involves minimal equipment (a laser printer and a laminator) and produces complex channel networks with dimensions down to 1 mm. The devices can run multiple diagnostic assays on one or more samples simultaneously, can incorporate positive and negative controls and can be programmed to display the results of the assays in a variety of patterns. The patterns of the results can encode information, which could be used to identify counterfeit devices, identify samples, encrypt the results for patient privacy or monitor patient compliance. <s> BIB002 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-µPADs <s> The first step in curing a disease is being able to detect the disease effectively. Paper-based microfluidic devices are biodegradable and can make diagnosing diseases cost-effective and easy in almost all environments. We created a three-dimesnional (3D) paper device using wax printing fabrication technique and basic principles of origami. This design allows for a versatile fabrication technique over previously reported patterning of SU-8 photoresist on chromatography paper by employing a readily available wax printer. The design also utilizes multiple colorimetric assays that can accommodate one or more analytes including urine, blood, and saliva. In this case to demonstrate the functionality of the 3D paper-based microfluidic system, a urinalysis of protein and glucose assays is conducted. The amounts of glucose and protein introduced to the device are found to be proportional to the color change of each assay. This color change was quantified by use of Adobe Photoshop. Urine samples from participants with no pre-existing health conditions and one person with diabetes were collected and compared against synthetic urine samples with predetermined glucose and protein levels. Utilizing this method, we were able to confirm that both protein and glucose levels were in fact within healthy ranges for healthy participants. For the participant with diabetes, glucose was found to be above the healthy range while the protein level was in the healthy range. <s> BIB003 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-µPADs <s> A simple paper-based optical biosensor for glucose monitoring was developed. As a glucose biosensing principle, a colorimetric glucose assay, using glucose oxidase (GOx) and horseradish peroxidase (HRP), was chosen. The enzymatic glucose assay was implanted on the analytical paper-based device, which is fabricated by the wax printing method. The fabricated device consists of two paper layers. The top layer has a sample loading zone and a detection zone, which are modified with enzymes and chromogens. 
The bottom layer contains a fluidic channel to convey the solution from the loading zone to the detection zone. Double-sided adhesive tape is used to attach these two layers. In this system, when a glucose solution is dropped onto the loading zone, the solution is transferred to the detection zone, which is modified with GOx, HRP, and chromogenic compounds through the connected fluidic channel. In the presence of GOx-generated H2O2, HRP converts chromogenic compounds into the final product exhibiting a blue color, inducing color change in the detection zone. To confirm the changes in signal intensity in the detection zone, the resulting image was registered by a digital camera from a smartphone. To minimize signal interference from external light, the experiment was performed in a specifically designed light-tight box, which was suited to the smartphone. By using the developed biosensing system, various concentrations of glucose samples (0–20 mM) and human serum (5–17 mM) were precisely analyzed within a few minutes. With the developed system, we could expand the applicability of a smartphone to bioanalytical health care. <s> BIB004 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-µPADs <s> There is a strong interest in the use of biopolymers in the electronic and biomedical industries, mainly towards low-cost applications. The possibility of developing entirely new kinds of products based on cellulose is of current interest, in order to enhance and to add new functionalities to conventional paper-based products. We present our results towards the development of paper-based microfluidics for molecular diagnostic testing. Paper properties were evaluated and compared to nitrocellulose, the most commonly used material in lateral flow and other rapid tests. Focusing on the use of paper as a substrate for microfluidic applications, through an eco-friendly wax-printing technology, we present three main and distinct colorimetric approaches: (i) enzymatic reactions (glucose detection); (ii) immunoassays (antibodies anti-Leishmania detection); (iii) nucleic acid sequence identification (Mycobacterium tuberculosis complex detection). Colorimetric glucose quantification was achieved through enzymatic reactions performed within specific zones of the paper-based device. The colouration achieved increased with growing glucose concentration and was highly homogeneous, covering all the surface of the paper reaction zones in a 3D sensor format. These devices showed a major advantage when compared to the 2D lateral flow glucose sensors, where some carryover of the coloured products usually occurs. The detection of anti-Leishmania antibodies in canine sera was conceptually achieved using a paper-based 96-well enzyme-linked immunosorbent assay format. However, optimization is still needed for this test, regarding the efficiency of the immobilization of antigens on the cellulose fibres. The detection of Mycobacterium tuberculosis nucleic acids integrated with a non-cross-linking gold nanoprobe detection scheme was also achieved in a wax-printed 384-well paper-based microplate, by the hybridization with a species-specific probe. The obtained results with the above-mentioned proof-of-concept sensors are thus promising towards the future development of simple and cost-effective paper-based diagnostic devices. 
(Some figures may appear in colour only in the online journal) <s> BIB005 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-µPADs <s> The development of real-time innocuous blood diagnosis has been a long-standing goal in healthcare; an improved, miniature, all-in-one point-of-care testing (POCT) system with low cost and simplified operation is highly desired. Here, we present a one-touch-activated blood multidiagnostic system (OBMS) involving the synergistic integration of a hollow microneedle and paper-based sensor, providing a number of unique characteristics for simplifying the design of microsystems and enhancing user performance. In this OBMS, all functions of blood collection, serum separation, and detection were sequentially automated in one single device that only required one-touch activation by finger-power without additional operations. For the first time, we successfully demonstrated the operation of this system in vivo in glucose and cholesterol diagnosis, showing a great possibility for human clinical application and commercialization. Additionally, this novel system offers a new approach for the use of microneedles and paper sensors as promising intelligent elements in future real-time healthcare monitoring devices. <s> BIB006 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-µPADs <s> Abstract This study investigates a new paper-based 3D microfluidic analytical device for analyzing multiple biological fluids. A wax-printed and -impregnated device was operated using tip-pinch manipulation of the thumb and index fingers and applied the chemical reaction of a preloaded colorimetric indicator and biological solutions. Chemical sensing of protein and glucose concentrations was quantitatively analyzed by changes in the color intensity of the image taken from three image readout devices including scanner (Epson Perfection V700), microscope (USB-embedded handheld digital microscope), and smartphone (LG Optimus Vu). Paper-based 3D microfluidic analytic device with three image analyzers successfully quantified 1.5–75 μM protein concentrations and 0–900 mg/dL glucose concentrations. Paper-based 3D microfluidic device combined with the smartphone showed the performance in protein bioassay (1.5–75 μM) and glucose bioassay (0–50 mM) including clinically relevant ranges comparable to other devices. An origami-driven paper-based 3D microfluidic analytic is a useful platform with great potential for application in point-of-care diagnostics. <s> BIB007 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-µPADs <s> This study demonstrates a simple approach for fabricating a 3D-μPAD from a single sheet of paper by double-sided printing and lamination. First, a wax printer prints vertically symmetrical and asymmetrical wax patterns onto a double-sided paper surface. Then, a laminator melts the printed wax patterns to form microfluidic channels in the paper sheet. The vertically symmetrical wax patterns form vertical channels when the melted wax patterns make contact with each other. The asymmetrical wax patterns form lateral and vertical channels at the cross section of the paper when the printed wax patterns are melted to a lower height than the thickness of the single sheet of paper. Finally, the two types of wax patterns form a 3D microfluidic network to move fluid laterally and vertically in the single sheet of paper. 
This method eliminates major technical hurdles related to the complicated and tedious alignment, assembly, bonding, and punching process. This 3D-μPAD can be used in a multiplex digital assay to measure the concentration of a target analyte in a sample solution simply by counting the number of colored bars at a fixed time. It does not require any external instruments to perform digital measurements. Therefore, we expect that this approach could be an instrument-free assay format for use in developing countries. <s> BIB008 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-µPADs <s> Abstract We developed a simple and low-cost cell culture monitoring system utilizing a paper-based analytical device (PAD) and a smartphone. The PAD simultaneously analyses glucose and lactate concentrations in the cell culture medium. Focusing on the fact that animal cells consume glucose and produce lactate under anaerobic conditions, oxidase- and horseradish peroxidase (HRP) enzyme-mediated colorimetric assays were integrated into the PAD. The PAD was designed to have three laminated layers. By using a double-sided adhesive tape as the middle layer and wax coating, a bifurcated fluidic channel was prepared to manipulate sample flow. At the inlet and the outlets of the channel, a sample drop zone and two detection zones for glucose and lactate, respectively, were positioned. When sample solution is loaded onto the drop zone, it flows to the detection zone through the hydrophilic fluidic channel via capillary force. Upon reaching the detection zone, the sample reacts with glucose and lactate oxidases (GOx and LOx) and HRP, immobilized on the detection zone along with colorless chromophores. By the Trinder’s reaction, the colorless chromophore is converted to a blue-colored product, generating concentration-dependent signal. With a gadget designed to aid the image acquisition, the PAD was positioned to the smartphone-embedded camera. Images of the detection zones were acquired using a mobile application and the color intensities were quantified as sensor signals. For the glucose assay using GOx/HRP format, we obtained the limit of detection (LOD ∼0.3 mM) and the limit of quantification (LOQ ∼0.9 mM) values in the dynamic detection range from 0.3 to 8.0 mM of glucose. For lactate assay using LOx/HRP, the LOD (0.02 mM) and the LOQ (0.06 mM) values were registered in the dynamic detection range from 0.02 to 0.50 mM of lactate. With the device, simultaneous analyses of glucose and lactate in cell culture media were conducted, exhibiting highly accurate and reproducible results. Based on the results, we propose that the optical sensing system developed is feasible for practical monitoring of animal cell culture. <s> BIB009 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> 3D-µPADs <s> This paper describes the modification of microfluidic paper-based analytical devices (μPADs) with chitosan to improve the analytical performance of colorimetric measurements associated with enzymatic bioassays. Chitosan is a natural biopolymer extensively used to modify biosensing surfaces due to its capability of providing a suitable microenvironment for the direct electron transfer between an enzyme and a reactive surface. This hypothesis was investigated using glucose and uric acid (UA) colorimetric assays as model systems. 
The best colorimetric sensitivity for glucose and UA was achieved using a chromogenic solution composed of 4-aminoantipyrine and sodium 3,5-dichloro-2-hydroxy-benzenesulfonate (4-AAP/DHBS), which provided a linear response for a concentration range between 0.1 and 1.0 mM. Glucose and UA were successfully determined in artificial serum samples with accuracies between 87 and 114%. The limits of detection (LODs) found for glucose and UA assays were 23 and 37 μM, respectively. The enhanced analytical performance of chitosan-modified μPADs allowed the colorimetric detection of glucose in tear samples from four nondiabetic patients. The achieved concentration levels ranged from 130 to 380 μM. The modified μPADs offered analytical reliability and accuracy as well as no statistical difference from the values achieved through a reference method. Based on the presented results, the proposed μPAD can be a powerful alternative tool for non-invasive glucose analysis. <s> BIB010
Three-dimensional microfluidic paper-based analytical devices (3D-µPADs) represent an emerging platform trend owing to their high throughput, complex fluid manipulation, multiplexed analytical tests, and parallel sample distribution. Compared with 2D µPADs, 3D-µPADs offer highly homogeneous coloration that covers the whole surface of the paper reaction zones. Fluid can move freely in both the horizontal and vertical directions in a 3D-µPAD. The Yoon group BIB004 BIB009 , Costa et al. BIB005 and Lewis et al. BIB001 fabricated 3D-µPADs by stacking alternating layers of patterned paper and double-sided adhesive tape with holes. In the presence of H2O2 generated by GOx, HRP converts 4-AAP and N-ethyl-N-(2-hydroxy-3-sulfopropyl)-3,5-dimethylaniline sodium salt monohydrate (MAOS) from colorless compounds into a blue product, which can be visualized in the detection zone. A smartphone camera was used to read the signal, and the dynamic detection range was 0.3 to 8.0 mM BIB009 . Li et al. BIB006 integrated a minimally invasive hollow microneedle with a 3D-µPAD to create a one-touch-activated blood diagnostic system, which shows great potential for clinical application. 3D-µPADs can also be converted from 2D structures by origami BIB002 BIB007 BIB003 . Choi et al. BIB007 separated the 3D-µPAD into two layers: reservoirs on the top layer were preloaded with the reagents for glucose detection, and the test solutions were loaded into each injection zone on the bottom layer. The device was operated by tip-pinch manipulation with the thumb and index fingers to bring the preloaded reagents and test solutions into reaction. Sechi et al. BIB003 used a 3D origami technique to fold the 3D-µPAD, in which the sample flows from the x, y, and z directions toward the detection points along channels bounded by wax-printed hydrophobic barriers (Figure 6). Traditional fabrication techniques for 3D-µPADs involve stacking layers of patterned paper or origami clamping, which are complicated and inefficient. Li et al. and Jeong et al. BIB008 proposed a method to fabricate a 3D-µPAD in a single layer of paper by double-sided printing and lamination (Figure 7). By adjusting the density of the printed wax and the heating time, the penetration depth of the melted wax can be controlled. This method eliminates major technical hurdles related to the complicated and tedious stacking, alignment, bonding, and punching processes. The LODs achieved with the various colorimetric indicators through enzymatic reactions, together with the kinds of barriers explored, are summarized in Table 1.
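Fluid transport in all of these stacked and folded formats is driven purely by capillary wicking. To first order, the wetted length L of a paper channel follows the Lucas-Washburn relation (an idealized model that neglects evaporation and the finite channel geometry):

```latex
L(t) = \sqrt{\frac{\gamma \, r \cos\theta}{2\eta}\; t}
```

where $\gamma$ is the surface tension of the sample, $r$ the effective pore radius of the paper, $\theta$ the liquid-fiber contact angle, and $\eta$ the viscosity. This $\sqrt{t}$ scaling is one reason distance- and bar-based readouts, such as the device of BIB008 , are specified at a fixed development time.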
Table 1 (surviving row): 4-AAP/DHBS indicator, paraffin barrier, LOD 0.023 mM BIB010 .
Figure 7. Scheme of the formation of a 3D-µPAD on a single sheet of paper in BIB008 . Before (a) and after (b) loading the red dye solution, the front, backside, and cross-section images of each part indicate that the red dye solution flowed smoothly from the inlet to the outlet via the alternating lower and upper channels. Reproduced with permission from BIB008 ; Copyright 2015, The Royal Society of Chemistry.
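A readout such as the bar-counting assay of BIB008 , or the distance-based channels described earlier, can in principle be automated by locating how far the colored product extends along a scanned channel image. The sketch below makes illustrative assumptions about image orientation (inlet at the top), scanner resolution, and signal threshold; it is not code from the cited work.

```python
# Sketch: estimate the developed color length along a straight uPAD channel
# from a scanned image. Assumes the channel runs top (inlet) to bottom and
# that MM_PER_PIXEL and THRESHOLD were calibrated for the scanner used.
import numpy as np
from PIL import Image

MM_PER_PIXEL = 0.05   # assumed scan resolution (about 508 dpi)
THRESHOLD = 40.0      # assumed signal cutoff above blank paper

strip = np.asarray(Image.open("channel.png").convert("RGB"), dtype=float)
signal = 255.0 - strip[..., 1]            # darker zones -> larger signal
profile = signal.mean(axis=1)             # average across channel width
profile -= profile[-20:].mean()           # blank-correct on the dry far end

colored = np.nonzero(profile > THRESHOLD)[0]
length_mm = (colored.max() + 1) * MM_PER_PIXEL if colored.size else 0.0
print(f"color developed over {length_mm:.1f} mm of channel")
# Concentration is then read from a length-vs-concentration calibration
# established with glucose standards at the same fixed read time.
```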
A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Advanced Fabrications of Electrochemical Glucose µPADs <s> Abstract Electrochemical paper-based analytical devices (ePADs) with integrated plasma isolation for determination of glucose from whole blood samples have been developed. A dumbbell shaped ePAD containing two blood separation zones (VF2 membranes) with a middle detection zone was fabricated using the wax dipping method. The dumbbell shaped device was designed to separate plasma while generating homogeneous flow to the middle detection zone of the ePAD. The proposed ePADs work with whole blood samples with 24–60% hematocrit without dilution, and the plasma was completely separated within 4 min. Glucose in isolated plasma separated was detected using glucose oxidase immobilized on the middle of the paper device. The hydrogen peroxide generated from the reaction between glucose and the enzyme pass through to a Prussian blue modified screen printed electrode (PB-SPEs). The currents measured using chronoamperometry at the optimal detection potential for H 2 O 2 (−0.1 V versus Ag/AgCl reference electrode) were proportional to glucose concentrations in the whole blood. The linear range for glucose assay was in the range 0–33.1 mM ( r 2 = 0.987). The coefficients of variation (CVs) of currents were 6.5%, 9.0% and 8.0% when assay whole blood sample containing glucose concentration at 3.4, 6.3, and 15.6 mM, respectively. Because each sample displayed intra-individual variation of electrochemical signal, glucose assay in whole blood samples were measured using the standard addition method. Results demonstrate that the ePAD glucose assay was not significantly different from the spectrophotometric method ( p = 0.376, paired sample t -test, n = 10). <s> BIB001 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Advanced Fabrications of Electrochemical Glucose µPADs <s> a b s t r a c t This paper describes a simple inexpensive paper-based amperometric glucose biosensor developed based on Prussian Blue (PB)-modified screen-printed carbon electrodes (SPCEs). The use of cellulose paper proved to be a simple, "ideal" and green biocompatible immobilization matrix for glucose oxidase (GOx) as it was successfully embedded within the fibre matrix of paper via physical adsorption. The glucose biosensor allowed a small amount (0.5 L) of sample solution for glucose analysis. The biosensor had a linear calibration range between 0.25 mM and 2.00 (R2 = 0.987) and a detection limit of 0.01 mM glucose (S/N = 3). Interference study of selected potential interfering compounds on the biosensor response was investigated. Its analytical performance was demonstrated in the analysis of selected commercial glucose beverages. Despite the simplicity of the immobilization method, the biosensor retained ca. 72% of its activity after a storage period of 45 days. © 2014 Elsevier B.V. All rights reserved. <s> BIB002 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Advanced Fabrications of Electrochemical Glucose µPADs <s> Abstract A simple low cost “green” biosensor configuration comprising of a hydrophilic cellulose paper disk with immobilised glucose oxidase (GOx) via adsorption step, placed on top of a screen printed carbon electrode (SPCE) was developed. This biosensor configuration allowed for low volume of glucose sample (5 μL) to be analysed. 
Cellulose paper was also used as the pre-storage reagent matrix for 0.1 M phosphate buffer solution (PBS, pH 7.0) and 10 mM soluble ferrocene monocarboxylic acid mediator. This biosensor exhibited a linear dynamic calibration range of 1 to 5 mM glucose ( r 2 = 0.971), with a limit of detection of 0.18 mM and retained 98% of its signal after a period of four months. In addition, its performance was demonstrated in the analysis of selected commercial soda beverages. The glucose concentrations obtained by the biosensor corroborated well with an independent high performance liquid chromatographic (HPLC) method. <s> BIB003 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Advanced Fabrications of Electrochemical Glucose µPADs <s> Abstract A miniaturized paper-based microfluidic electrochemical enzymatic biosensing platform was developed and the effects of fluidic behaviors in paper substrate on electrochemical sensing were systemically investigated. The biosensor is composed of an enzyme-immobilized pure cellulose paper pad, an enzymeless screen-printed electrode (SPE) modified with platinum nanoparticles (PtNPs), and a pair of clamped acrylonitrile butadiene styrene (ABS) plastic holders to provide good alignment for stable signal sensing. The wicking rate of liquid sample in paper was predicted, using a two-dimensional Fickian-diffusion model, to be 1.0 × 10 −2 cm 2 /s, and was verified experimentally. Dip-coating was used to prepare the enzyme-modified paper pad (EPP), which is amenable for mass manufacturing. The EPP retained excellent hydrophilicity and mechanical properties, with even slightly improved tensile strength and break strain. No significant difference in voltammetric behaviors was observed between measurements made in bulk buffer solution and with different sample volumes applied to EPP beyond its saturation wicking volume. Glucose oxidase (GO x ), an enzyme specific for glucose (Glc) substrate, was used as a model enzyme and its enzymatic reaction product H 2 O 2 was detected by the enzymeless PtNPs-SPE in the presence of ambient electron mediator O 2 . Consequently, Glc was detected with its concentration linearly depending on H 2 O 2 oxidation current with sensitivity of 10.5 μA mM -1 cm -2 and detection limit of 9.3 μM (at S / N = 3). The biosensor can be quickly regenerated with memory effects removed by buffer additions for continuous real-time detection of multiple samples in one run for point-of-care purposes. This integrated platform is also inexpensive since the EPP is easily stored, and enzymeless PtNPs-SPEs can be used multiple times with different EPPs. The green and facile preparation in bulk, excellent mechanical strength, well-maintained enzyme activity, disposability, and good reproducibility and stability make our paper-fluidic biosensor platform suitable for various real-time electrochemical bioassays without any external power for mixing, especially in resource-limited conditions. <s> BIB004 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Advanced Fabrications of Electrochemical Glucose µPADs <s> Enzymatic sensors on complementary metal–oxide–semiconductor (CMOS) chips are realized using carbon ink and chromatography paper (ChrPr). Electrodes are fabricated from carbon ink on CMOS chips. The carbon ink electrodes work as well-behaving electrochemical electrodes. 
Enzyme electrodes are realized by covering the carbon ink electrodes on the CMOS chip with ChrPr supporting enzymes and electron mediators. Such enzyme electrodes successfully give anodic currents proportional to the glucose concentration. Good linearity is observed up to 10 mM glucose concentration, which is sufficient for blood glucose testing applications. <s> BIB005 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Advanced Fabrications of Electrochemical Glucose µPADs <s> Abstract This report describes for the first time the development of paper-based enzymatic reactors (PERs) for the detection of glucose (Glu) in an artificial serum sample using a 3D printed batch injection analysis (BIA) cell coupled with electrochemical detection. The fabrication of the PERs involved firstly the oxidation of the paper surface with a sodium periodate solution. The oxidized paper was then perforated with a paper punch to create microdisks and activated with a solution containing N-hydroxysuccinimide (NHS) and N-(3-dimethylaminopropyl)-N′-ethylcarbodiimide hydrochloride (EDC). Glucose oxidase (GOx) enzyme was then covalently immobilized on the paper surface to promote the enzymatic assay for the detection of Glu in the serum sample. After the addition of Glu on the PER surface placed inside a plastic syringe, the analyte penetrated through the paper surface under vertical flow, promoting the enzymatic assay. The reaction product (H2O2) was collected with an electronic micropipette in a microtube and analyzed in the 3D BIA cell coupled with screen-printed electrodes (SPEs). The overall preparation time and the estimated cost per PER were 2.5 h and $0.02, respectively. Like the PERs, the use of a 3D printer allowed the fabrication of a BIA cell within 4 h at a cost of $5. The coupling of the SPE with the 3D printed cell exhibited great analytical performance, including repeatability and reproducibility lower than 2% as well as a high sampling rate (30 injections h−1) under low injection volume (10 μL). The limit of detection (LD) and linear range achieved with the proposed approach were 0.11 mmol L−1 and 1–10 mmol L−1, respectively. Lastly, the glucose concentration level was successfully determined using the proposed method and the values found were not statistically different from the data achieved by a reference method at a confidence level of 95%. <s> BIB006
Electrochemical detection integrated with a paper-based analytical device plays an important role in glucose detection due to the advantages of low cost, high sensitivity and selectivity, minimal sample preparation and a short response time. Screen-printed electrodes (SPEs) have been used for glucose detection in many paper-based analytical devices due to their flexible design and easy modification with chemicals. The research group of Swee Ngin Tan developed a paper-based amperometric glucose biosensor by placing a paper disk immobilized with glucose oxidase (GOx) on top of the SPE and used Fc-COOH or Prussian Blue (PB) as the mediator BIB002 BIB003 . The linear response range was 1–5 mM with a correlation coefficient of 0.971, and the PAD showed a LOD of 0.18 mM. Yang et al. BIB004 modified the SPE with platinum nanoparticles (PtNPs) and used the enzymeless PtNPs-SPE to detect the glucose oxidase reaction product H2O2; the detection limit was lowered to 9.3 µM. Noiphung et al. BIB001 added a plasma isolation part and used the PAD to detect glucose from whole blood. A polyvinyl alcohol-bound glass fiber was used to separate whole blood, and the linear calibration range was from 0 up to 33.1 mM with a correlation coefficient of 0.987. Dias et al. BIB006 developed a paper-based enzymatic device to detect glucose in a 3D batch injection analysis (BIA) cell coupled with SPEs; the LOD was 0.11 mM and the linear range was 1–10 mM. Miki et al. BIB005 replaced the screen-printed electrode with complementary metal-oxide-semiconductor (CMOS) chips for electrochemical paper-based glucose detection. Electrodes were fabricated on the CMOS chips: the working electrode (WE) and counter electrode (CE) were formed by dropping carbon ink, and the reference electrode (RE) was formed using Ag/AgCl ink. Glucose oxidase and the electron mediator K3[Fe(CN)6] were immobilized on chromatography paper. The anodic currents given by the electrodes were proportional to the glucose concentration, with linearity up to 10 mM, which is sufficient for clinical applications.
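All of the amperometric µPADs above share the same underlying sensing chemistry, which can be summarized by the following scheme (a generic textbook formulation, not equations quoted from any of the cited papers): the GOx-catalyzed oxidation of glucose produces H2O2, and the electro-oxidation of H2O2 at the (modified) electrode yields an anodic current that is linear in the glucose concentration within the working range.

```latex
% GOx-catalyzed recognition step followed by amperometric transduction
% (requires amsmath for \xrightarrow).
\begin{align}
  \text{glucose} + \text{O}_2 &\xrightarrow{\;\text{GOx}\;}
      \text{gluconic acid} + \text{H}_2\text{O}_2 \\
  \text{H}_2\text{O}_2 &\xrightarrow{\;\text{electrode}\;}
      \text{O}_2 + 2\text{H}^{+} + 2e^{-} \\
  i_{\mathrm{anodic}} &= S\,[\text{glucose}] + b
      \quad \text{(within the linear range)}
\end{align}
```

Here $S$ is the sensitivity (e.g., the 10.5 μA mM−1 cm−2 reported in BIB004 ) and $b$ the background current; mediators such as PB or K3[Fe(CN)6] simply shuttle electrons so that detection can run at a lower potential with fewer interferences.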
A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Electrochemical Glucose µPADs with Printed Electrodes <s> An electrode platform printed on a recyclable low-cost paper substrate was characterized using cyclic voltammetry. The working and counter electrodes were directly printed gold-stripes, while the reference electrode was a printed silver stripe onto which an AgCl layer was deposited electrochemically. The novel paper-based chips showed comparable performance to conventional electrochemical cells. Different types of electrode modifications were carried out to demonstrate that the printed electrodes behave similarly to conventional electrodes. Firstly, a self-assembled monolayer (SAM) of alkanethiols was successfully formed on the Au electrode surface. As a consequence, the peak currents were suppressed and no longer showed a clear increase as a function of the scan rate. Such modified electrodes have potential in various sensor applications when terminally substituted thiols are used. Secondly, a polyaniline film was electropolymerized on the working electrode by cyclic voltammetry and used for potentiometric pH sensing. The calibration curve showed a close-to-Nernstian response. Thirdly, a poly(3,4-ethylenedioxythiophene) (PEDOT) layer was electropolymerized both by galvanostatic and cyclic potential sweep methods on the working electrode using two different dopants: Cl− to study ion-to-electron transduction on the paper-Au/PEDOT system and glucose oxidase in order to fabricate a glucose biosensor. The planar paper-based electrochemical cell is a user-friendly platform that functions with low sample volume and allows the sample to be applied and changed by e.g. pipetting. Low unit cost is achieved with mask- and mesh-free inkjet-printing technology. <s> BIB001 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Electrochemical Glucose µPADs with Printed Electrodes <s> Abstract The present work describes for the first time the coupling of graphite pencil electrodes with paper-based analytical devices (μPADs) for glucose biosensing. Electrochemical measurement for μPADs using a two-electrode system was also developed. This dual-electrode configuration on paper provides electrochemical responses similar to those recorded by conventional electrochemical systems (three electrode systems). A wax printing process was used to define hydrophilic circular microzones by inserting hydrophobic patterns on paper. The microzones were employed as follows: one for filtration, one for the enzymatic reaction and one for electrochemical detection. By adding 4-aminophenylboronic acid as redox mediator and glucose oxidase to the reaction microzone, it was possible to reach low limits of detection for glucose with graphite pencil electrodes without modifying the electrode. The limit of detection of the proposed μPAD was found to be 0.38 μmol L−1 for glucose. Low sample consumption (40 μL) and fast analysis time (less than 5 min) combined with low cost electrodes and paper-based analytical platforms are attractive properties of the proposed μPAD with electrochemical detection. Artificial blood serum samples containing glucose were analyzed with the proposed device as proof of concept.
<s> BIB002 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Electrochemical Glucose µPADs with Printed Electrodes <s> Abstract The development of a miniaturized and low-cost platform for the highly sensitive, selective and rapid detection of multiplexed metabolites is of great interest for healthcare, pharmaceuticals, food science, and environmental monitoring. Graphene is a delicate single-layer, two-dimensional network of carbon atoms with extraordinary electrical sensing capability. Microfluidic paper with printing technique is a low cost matrix. Here, we demonstrated the development of graphene-ink based biosensor arrays on a microfluidic paper for the multiplexed detection of different metabolites, such as glucose, lactate, xanthine and cholesterol. Our results show that the graphene biosensor arrays can detect multiple metabolites on a microfluidic paper sensitively, rapidly and simultaneously. The device exhibits a fast measuring time of less than 2 min, a low detection limit of 0.3 μM, and a dynamic detection range of 0.3–15 μM. The process is simple and inexpensive to operate and requires a low consumption of sample volume. We anticipate that these results could open exciting opportunities for a variety of applications. <s> BIB003 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Electrochemical Glucose µPADs with Printed Electrodes <s> The integration of paper with an electrochemical device has attracted growing attention for point-of-care testing, where it is of great importance to fabricate electrodes on paper in a low-cost, easy and versatile way. In this work, we report a simple strategy for directly writing electrodes on paper using a pressure-assisted ball pen to form a paper-based electrochemical device (PED). This method is demonstrated to be capable of fabricating electrodes on paper with good electrical conductivity and electrochemical performance, holding great potential to be employed in point-of-care applications, such as in human health diagnostics and food safety detection. As examples, the PEDs fabricated using the developed method are applied for detection of glucose in artificial urine and melamine in sample solutions. Furthermore, our developed strategy is also extended to fabricate PEDs with multi-electrode arrays and write electrodes on non-planar surfaces (e.g., paper cup, human skin), indicating the potential application of our method in other fields, such as fabricating biosensors, paper electronics etc. <s> BIB004 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Electrochemical Glucose µPADs with Printed Electrodes <s> Abstract In this work, an origami paper-based analytical device for glucose biosensor by employing fully-drawn pencil electrodes has been reported. The three-electrode system was prepared on paper directly by drawing with nothing more than pencils. By simple printing, two separated zones on paper were designed for the immobilization of the mediator and glucose oxidase (GOx), respectively. The used paper provides a favorable and biocompatible support for maintaining the bioactivities of GOx. With a sandwich-type scheme, the origami biosensor exhibited great analytical performance for glucose sensing including acceptable reproducibility and favorable selectivity against common interferents in physiological fluids. The limit of detection and linear range achieved with the approach was 0.05 mM and 1–12 mM, respectively. 
Its analytical performance was also demonstrated in the analysis of human blood samples. Such a fully-drawn paper-based device is cheap, flexible, portable, disposable, and environmentally friendly, affording great convenience for practical use under resource-limited conditions. We therefore envision that this approach can be extended to generate other functional paper-based devices. <s> BIB005 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Electrochemical Glucose µPADs with Printed Electrodes <s> The present work describes the fabrication of paper-based analytical devices (μPADs) by immobilization of glucose oxidase onto screen-printed carbon electrodes (SPCEs) for electrochemical glucose detection. The sensitivity towards glucose was improved by using an SPCE prepared from homemade carbon ink mixed with cellulose acetate. In addition, 4-aminophenylboronic acid (4-APBA) was used as a redox mediator, giving a lower detection potential for improved selectivity. Under optimized conditions, the detection limit was 0.86 mM. The proposed device was applied to real samples. This μPAD has many advantages, including low sample consumption, rapid analysis, and low device cost. <s> BIB006
An electrochemical sensor is composed of a substrate and electrodes, so it is important to fabricate electrodes on paper using an easy and versatile method. Some researchers directly printed electrodes on the paper substrate instead of using commercial screen-printed electrodes BIB005 BIB001 BIB006 BIB004 BIB003 . Rungsawang et al. BIB006 used 4-aminophenylboronic acid (4-APBA) as a redox mediator to improve the selectivity of a homemade screen-printed carbon electrode thanks to its low detection potential; the detection limit was 0.86 mM. Määttänen et al. BIB001 used an inkjet-printed paper-based device, whose working and counter electrodes were printed gold stripes, while the reference electrode was a printed silver stripe onto which an AgCl layer was electrochemically deposited. Several modifications were carried out to demonstrate that the inkjet-printed electrodes behave similarly to conventional electrodes. Li et al. BIB004 proposed a direct writing method using a pressure-assisted ball pen to fabricate electrodes on paper (Figure 8). The electrodes fabricated on paper demonstrated good electrical conductivity and electrochemical performance, and could be used with artificial urine samples, which exhibits their potential for practical application. Li et al. BIB005 developed a three-electrode system prepared directly on paper by drawing with graphite pencils. The µPAD was designed with a sandwich-type structure in which the mediator and glucose oxidase were immobilized on separate zones. This origami µPAD showed acceptable reproducibility and high selectivity against interferents in physiological fluids. The linear calibration range was from 1 up to 12 mM and the LOD was 0.05 mM. Santhiago et al. BIB002 developed a dual-electrode system to replace the conventional three-electrode system. A graphite pencil was directly used as the working electrode instead of drawing on the paper. 4-aminophenylboronic acid was added as a redox mediator to reach low limits of glucose detection, with a LOD of 0.38 µM.
The LODs achieved versus the electrochemical mediators used in the enzymatic reactions and the kinds of electrodes explored are summarized in Table 2 (e.g., BIB002 : 4-APBA mediator, graphite dual-electrode, LOD of 0.38 µM).
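The LOD figures collected in Table 2 are typically derived from calibration data using the 3σ/S (i.e., S/N = 3) criterion that several of the cited papers mention. The sketch below illustrates that computation; the calibration points, blank readings and variable names are hypothetical and not taken from any cited dataset:

```python
import numpy as np

# Hypothetical calibration data: glucose standards (mM) vs. measured currents (µA)
conc = np.array([1.0, 2.0, 4.0, 8.0, 12.0])
current = np.array([0.9, 2.1, 4.2, 8.0, 12.3])

# Least-squares fit of the linear range: i = S * c + b
S, b = np.polyfit(conc, current, 1)          # S: sensitivity (µA/mM), b: intercept
r2 = np.corrcoef(conc, current)[0, 1] ** 2   # correlation coefficient r²

# Standard deviation of repeated blank measurements (µA, hypothetical values)
sigma_blank = np.std([0.05, 0.07, 0.04, 0.06, 0.05], ddof=1)

# S/N = 3 criterion used throughout the reviewed literature
lod = 3 * sigma_blank / S
print(f"sensitivity = {S:.3f} µA/mM, r² = {r2:.4f}, LOD = {lod:.4f} mM")
```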
A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Other Glucose Detection Platforms <s> In this report, a paper-based micro-calorimetric biochemical detection method is presented. Calorimetric detection of biochemical reactions is demonstrated as an extension of current colorimetric and electrochemical detection mechanisms of paper-based biochemical analytical systems. Reaction and/or binding temperatures of glucose/glucose oxidase, DNA/hydrogen peroxide, and biotin/streptavidin are measured by the paper-based micro-calorimeter. Commercially available glucose calibration samples of 0.05, 0.15 and 0.3% wt/vol concentration are used for comparing the device performance with a commercially available glucose meter (electrochemical detection). The calorimetric glucose detection demonstrates a measurement error of less than 2%. The calorimetric detection results of DNA concentrations from 0.9 to 7.3 mg/mL and temperature changes in the biotin and streptavidin reaction are presented to demonstrate the feasibility of integrating the calorimetric detection method with paper based microfluidic devices. <s> BIB001 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Other Glucose Detection Platforms <s> A simple and inexpensive method to fabricate a colloidal CdSe/ZnS quantum dots-modified paper-based assay for glucose is herein reported. The circular paper sheets were uniformly loaded and displayed strong fluorescence under a conventional hand-held UV lamp (365 nm). The assay is based on the use of glucose oxidase enzyme (GOx), which impregnated the paper sheets, producing H2O2 upon the reaction with the glucose contained in the samples. After 20 min of exposure, the fluorescence intensity changed due to the quenching caused by H2O2. To obtain a reading, the paper sheets were photographed under 365 nm excitation using a digital camera. Several parameters, including the amount of QD, sample pH, and amount of GOx were optimized to maximize the response to glucose. The paper-based assay showed a sigmoidal-shaped response with respect to the glucose concentration in the 5–200 mg·dL−1 range (limit of detection of 5 μg·dL−1), demonstrating its potential use for biomedical applications. <s> BIB002 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Other Glucose Detection Platforms <s> In this report, we present a paper membrane-based surface-enhanced Raman scattering (SERS) platform for the determination of blood glucose level using a nitrocellulose membrane as substrate paper, and the microfluidic channel was simply constructed by the wax-printing method. The gold nanorod particles were modified with 4-mercaptophenylboronic acid (4-MBA) and 1-decanethiol (1-DT) molecules and used as an embedded SERS probe for paper-based microfluidics. The SERS measurement area was simply constructed by dropping gold nanoparticles on the nitrocellulose membrane, and the blood sample was dropped on the membrane hydrophilic channel. While the blood cells and proteins were held on the nitrocellulose membrane, glucose molecules were moved through the channel toward the SERS measurement area. Scanning electron microscopy (SEM) was used to confirm the effective separation of the blood matrix, and the total analysis is completed in 5 min. In SERS measurements, the intensity of the band at 1070 cm−1, which is attributed to the B–OH vibration, decreased depending on the rise in glucose concentration in the blood sample.
The glucose concentration was found to be 5.43 ± 0.51 mM in the reference blood sample by using a calibration equation, and the certified value for glucose was 6.17 ± 0.11 mM. The recovery of the glucose in the reference blood sample was about 88%. According to these results, the developed paper-based microfluidic SERS platform has been found to be suitable for use for the detection of glucose in blood samples without any pretreatment procedure. We believe that paper-based microfluidic systems may provide a wide field of usage for paper-based applications. <s> BIB003 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Other Glucose Detection Platforms <s> In this study, a turn-on paper-based optical analytical system with a rapid, sensitive and quantitative response for glucose was developed. The luminescence sensing material, crystalline iridium(III)-Zn(II) coordination polymers, or Ir-Zne, was grown electrochemically on stainless steel mesh and then deposited on filter paper. This sensing substrate was subsequently built up under glucose oxidase encapsulated in hydrogel and then immobilized on egg membrane with the layer-by-layer method. Once the glucose solution was dropped onto the paper, the oxygen content was depleted simultaneously with a concomitant increase in the phosphorescence of Ir-Zne. The detection limit for glucose was 0.05 mM. The linear dynamic range for the determination of glucose was 0.05–8.0 mM with a correlation coefficient (R2) of 0.9956 (y = 68.11[glucose] − 14.72). The response time was about 0.12 s, and the sample volume was less than 5 μL. The effects of mesh size, buffer concentration, pH, enzyme concentration, temperature, and interference, and the stability of the biosensor, have also been studied in detail. Finally, the biosensor was successfully applied to the determination of glucose in human serum. <s> BIB004 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Other Glucose Detection Platforms <s> The analytical performance for paper spray (PS) using a new insert sample approach based on paper with paraffin barriers (PS-PB) is presented. The paraffin barrier is made using a simple, fast and cheap method based on the stamping of paraffin onto a paper surface. Typical operating conditions of paper spray, such as the solvent volume applied on the paper surface and the paper substrate type, are evaluated. A paper substrate with paraffin barriers shows better performance in the analysis of a range of typical analytes when compared to the conventional PS-MS using normal paper (PS-NP) and PS-MS using paper with two rounded corners (PS-RC). PS-PB was applied to detect sugars and their inhibitors in sugarcane bagasse liquors from a second generation ethanol process. Moreover, the PS-PB proved to be excellent, showing results for the quantification of glucose in hydrolysis liquors with excellent linearity (R2 = 0.99), limits of detection (2.77 mmol L−1) and quantification (9.27 mmol L−1). The results are better than for PS-NP and PS-RC. The PS-PB was also excellent in performance when compared with the HPLC-UV method for glucose quantification on hydrolysis of liquor samples. <s> BIB005
Apart from the conventional colorimetric and electrochemical techniques for glucose detection, other techniques, such as luminescence BIB004 , fluorescence BIB002 , calorimetry BIB001 , mass spectrometry (MS) BIB005 and surface-enhanced Raman spectroscopy (SERS) BIB003 , have been applied to µPADs for rapid glucose diagnostics. Chen et al. BIB004 developed a turn-on paper-based phosphorescence device using Ir-Zne, a luminescent sensing material, combined with GOx via the layer-by-layer technique. When glucose was present, the oxygen content was depleted and the phosphorescence of Ir-Zne increased concomitantly. The linear calibration range was from 0.05 to 8.0 mM with a correlation coefficient of 0.9956 and the LOD was 0.05 mM. Durán et al. BIB002 utilized colloidal CdSe/ZnS quantum dots (Q-dots) to produce an optical paper-based device for glucose detection. Paper loaded with Q-dots displays strong fluorescence under a UV lamp, and the H2O2 generated by GOx quenched the fluorescence intensity after a 20 min exposure. Calorimetric detection has been demonstrated as an extension of the current detection mechanisms of colorimetric and electrochemical µPADs: Davaji et al. BIB001 developed a calorimetric µPAD that detects glucose through the heat change of the glucose/GOx reaction. Colletes et al. BIB005 presented a new sample-insertion method based on paper with paraffin barriers (PS-PB) and employed it for glucose detection with a LOD of 2.77 mM. A paper membrane-based SERS platform was developed by Torul et al. BIB003 for glucose determination in blood using a nitrocellulose membrane and a wax-printed microfluidic channel. Gold nanoparticles modified with 4-mercaptophenylboronic acid (4-MBA) and 1-decanethiol (1-DT) molecules were used as the SERS probe. Glucose molecules moved through the channel toward the measuring area constructed by dropping AuNPs on the membrane. The glucose concentration found in the reference blood sample was 5.43 ± 0.51 mM against a certified value of 6.17 ± 0.11 mM, and the device may provide a wide range of applications in daily life.
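The fluorescence quenching that underlies the Q-dot assay of BIB002 is commonly modeled by the Stern-Volmer relation, a general photophysics result (not a fit reported by the cited paper) that links the measured intensity drop to the quencher concentration, and hence, through the GOx reaction, to the glucose concentration:

```latex
% Stern-Volmer quenching model: F0 and F are the fluorescence
% intensities without and with the quencher Q (here H2O2), and
% K_SV is the Stern-Volmer quenching constant.
\begin{equation}
  \frac{F_0}{F} = 1 + K_{\mathrm{SV}}\,[\mathrm{Q}],
  \qquad [\mathrm{Q}] \propto [\text{glucose}]
\end{equation}
```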
A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Conclusions <s> The fabrication of toner-based microfluidic devices to perform clinical diagnostics with capillary action and colorimetric detection is described in this report. Test zones and microfluidic channels were drawn in a graphic software package and laser printed on a polyester film. The printed layout and its mirror image were aligned with an intermediary cut-through polyester film and then thermally laminated together at 150 °C at 60 cm/min to obtain a channel with ca. 100-μm depth. Colorimetric assays for glucose, protein, and cholesterol were successfully performed using a desktop scanner. The limit of detection (LD) values found for protein, cholesterol, and glucose were 8, 0.2, and 0.3 mg/mL, respectively. The relative standard deviation (RSD) values for an interdevices comparison were 6%, 1%, and 3% for protein, cholesterol, and glucose, respectively. Bioassays were successfully performed on toner-based devices stored at different temperatures during five consecutive days without loss of activity. <s> BIB001 </s> A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection <s> Conclusions <s> Here we report development of a smartphone app (application) that digitizes the colours of a colorimetric sensor array. A conventional colorimetric sensor array consists of multiple paper-based sensors, and reports the detection results in terms of colour change. Evaluation of the colour changes is normally done by the naked eye, which may cause uncertainties due to personal subjectivity and the surrounding conditions. Solutions have been particularly sought in smartphones as they are capable of spectrometric functions. Our report specifically focuses on development of a practical app for immediate point-of-care (POC) multi-analyte sensing without additional devices. First, the individual positions of the sensors are automatically identified by the smartphone; second, the colours measured at each sensor are digitized based on a correction algorithm; and third, the corrected colours are converted to concentration values by pre-loaded calibration curves. All through these sequential processes, the sensor array taken in a smartphone snapshot undergoes laboratory-level spectrometry. The advantages of inexpensive and convenient paper-based colorimetry and the ubiquitous smartphone are tied to achieve a ready-to-go POC diagnosis. <s> BIB002
Rapid and convenient tests for glucose have become essential in underdeveloped and developing countries, as glucose is an important indicator of metabolic activity. Since the microfluidic paper-based analytical device was first proposed by the Harvard group in 2007, it has attracted extensive attention in a wide range of applications. Numerous methods have been developed to fabricate µPADs and multiple detection techniques have been applied to glucose diagnostics. Colorimetric and electrochemical detection are undoubtedly the most important techniques. Colorimetric detection is more widely used than electrochemical detection, although its sensitivity is lower. With the development of point-of-care testing (POCT), carry-on paper-based analytical devices are expected to emerge. The devices tend toward miniaturization, and spectrometric functions or electronic measurements can be integrated into smartphones BIB002 . Alternative materials like toner BIB001 have also been investigated for clinical glucose diagnostics, dispensing with the cumbersome fabrication process. Besides, the exploration of the biocompatibility and toxicity of papers offers the potential for developing minimally invasive or non-invasive µPADs for real-time glucose detection. Improvements in the stability and accuracy of glucose detection will make the devices commercially available in the future.
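To make concrete how a smartphone app like that of BIB002 can convert a colorimetric test zone into a concentration value, the sketch below averages a color channel over the detection zone and inverts a pre-loaded calibration curve. The ROI coordinates, calibration constants and file name are hypothetical, and real apps additionally correct for illumination, as BIB002 describes:

```python
import numpy as np
from PIL import Image

# Load a snapshot of the µPAD (hypothetical file name)
img = np.asarray(Image.open("upad_snapshot.png").convert("RGB"), dtype=float)

# Detection-zone ROI in pixel coordinates (hypothetical position)
top, bottom, left, right = 120, 180, 200, 260
roi = img[top:bottom, left:right]

# Many colorimetric glucose assays darken the zone as the chromogen
# develops; use the drop in mean green-channel intensity as the signal.
signal = 255.0 - roi[:, :, 1].mean()

# Pre-loaded linear calibration, signal = S * conc + b
# (hypothetical constants obtained beforehand from glucose standards)
S, b = 12.4, 3.1
conc_mM = (signal - b) / S
print(f"estimated glucose concentration: {conc_mM:.2f} mM")
```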
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Introduction <s> The ramifications of assistive technology for both current and future service provision are wide. In recent years, policy makers have become increasingly aware of the potential of these services to maintain older and disabled people in their own homes. The purpose of this paper is to report on a literature review and provide illustrations of how the evidence can be used to underpin the development of assistive technology services for older and disabled people and disabled children. The aim is to support the development of user‐focused, accessible services. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Introduction <s> The potential of technology to connect people and provide access to education, commerce, employment and entertainment has never been greater or more rapidly changing. Communication technologies and new media promise to ‘revolutionize our lives’ by breaking down barriers and expanding access for disabled people. Yet, it is also true that technology can create unexpected and undercritiqued forms of social exclusion for disabled people. In addition to exploring some of the ways that even (or especially) assistive technology can result in new forms of social exclusion, we also propose alternative ways of thinking about inclusive and accessible (as opposed to assistive) technology and provide some very practical ways that accessible technologies would promote greater access and flexibility for disabled students and adults. We contend that technology should be conceived of as a global, accessible and inclusive concept, not one that requires a qualifier based on who it is for. <s> BIB002 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Introduction <s> There are around 285 million visually impaired people worldwide, and around 370,000 people are registered as blind or partially sighted in the UK. Ongoing advances in information technology (IT) are increasing the scope for IT-based mobile assistive technologies to facilitate the independence, safety, and improved quality of life of the visually impaired. Research is being directed at making mobile phones and other handheld devices accessible via our haptic (touch) and audio sensory channels. We review research and innovation within the field of mobile assistive technology for the visually impaired and, in so doing, highlight the need for successful collaboration between clinical expertise, computer science, and domain users to realize fully the potential benefits of such technologies. We initially reflect on research that has been conducted to make mobile phones more accessible to people with vision loss. We then discuss innovative assistive applications designed for the visually impaired that are either delivered via mainstream devices and can be used while in motion (e.g., mobile phones) or are embedded within an environment that may be in motion (e.g., public transport) or within which the user may be in motion (e.g., smart homes). <s> BIB003 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Introduction <s> This paper proposes a novel concept for helping the visually impaired know what kind of object there is in an environment. 
This concept is implemented as a cane system that selects a target object based on a user's demand, recognizes the object from depth data obtained by a Microsoft Kinect sensor, and returns the recognition results via a tactile device. The proposed system is evaluated through a user study where one blindfolded subject actually uses the system to find chairs in an experimental environment. The experimental results indicate that the system is promising as means of helping the visually impaired recognize objects. <s> BIB004
The World Health Organization (WHO) reported in 2013 that 285 million people are estimated to be visually impaired worldwide: 39 million are blind and 246 million suffer from low vision. Of the overall population with visual impairment, about 90% live in developing countries and 82% of people living with blindness are aged 50 and above. Regrettably, this percentage is expected to increase in the coming decades. Visual impairment has a significant impact on individuals' quality of life, including their ability to work and to develop personal relationships. Almost half (48%) of the visually impaired feel "moderately" or "completely" cut off from people and things around them BIB003 . There are four levels of visual function, according to the International Classification of Diseases (ICD-10, Update and Revision 2006): normal vision, moderate visual impairment, severe visual impairment and blindness. Moderate visual impairment combined with severe visual impairment may be grouped under the term "low vision"; low vision combined with blindness represents all forms of visual impairment. In order to overcome or lessen the difficulties imposed by visual impairment, extensive research has been dedicated to building assistive systems. The need for assistive technologies has long been a constant in the daily lives of people with visual impairment and will remain so in future years. There are various definitions for assistive technology in general. Common to all of them, however, is the concept of an item or piece of equipment that enables individuals with disabilities to enjoy full inclusion and integration into society BIB002 BIB001 . Traditional assistive technologies for the blind include white canes, guide dogs, screen readers, and so forth. However, the detectable range of white canes is very short (at most 1.5 m) and, consequently, the visually impaired can only immediately detect nearby obstacles at ground level. Guide dogs are also used by the visually impaired to navigate to their destinations while avoiding the dangers they may encounter along their path. However, it is difficult to provide a sufficient number of guide dogs because of the long periods needed to train them, as well as the high costs associated with their training. Furthermore, it is also quite difficult for the visually impaired to take proper care of the dogs BIB004 . Modern mobile assistive technologies are becoming more discreet and include (or are delivered via) a wide range of mobile computerized devices, including ubiquitous technologies like mobile phones. Such discreet technologies can help alleviate the cultural stigma associated with the more traditional (and noticeable) assistive devices. Visual impairment imposes many restrictions and specific requirements on human mobility. The overall objective of this work is to review the assistive technologies that have been proposed by researchers in recent years to address the limitations in user mobility resulting from visual impairment. This work does not focus on the analysis and description of individual systems. Instead, it will review how technology has been used in recent years to individually address the different tasks related to assistive human navigation and how the components of traditional navigation systems can be adapted to address the limitations and requirements of visually impaired users.
Human navigation, in general, requires estimating the user's location, relating it to the surrounding context, and finding a way to a specific destination. This work will cover these topics in dedicated sections. In this work, the term "visual impairment" incorporates any condition that impedes an individual's ability to execute typical daily activities due to visual loss. Because the aim of this work is to present a general review of navigation and orientation assistive technologies for the visually impaired, low vision is not separated from total blindness and so these terms are used interchangeably.
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Understanding human navigation <s> This article reports on an experiment undertaken to test the spatiocognitive competence of the visually impaired population in regard to wayfinding. The test consisted of eight basic wayfinding tasks, each representing a particular spatio-cognitive operation. The tasks were executed in a labyrinthian layout allowing for control of the difficulty level of the tasks and limiting extraneous perceptual factors, which tended to interfere with the measure of spatio-cognitive abilities. The experimental groups were composed of congenitally totally blind, adventitiously totally blind, and subjects with a weak visual residue; the control was established by a sighted and a sighted blindfolded group. The sample's 18 subjects per group were matched in terms of age, education, and sex. The performance results of the visually impaired groups in all eight tasks led to rejection of any spatio-cognitive deficiency theory. The performance of the congenitally totally blind group, in particular, shows that spatio-cognitive c... <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Understanding human navigation <s> This paper illustrates the application of cognitive mapping to people with visual impairments and blindness. It gives perspectives on past research, outlines ongoing research, highlights some of the methodological and validity issues arising from this research, and discusses the movement of theory into practice. The findings of three small preliminary studies have been reported, as part of continuing research into the cognitive mapping abilities of blind or visually impaired people. These studies have highlighted the need to use multiple, mutually supportive tests to assess cognitive map knowledge. In light of these findings and the need to move theory into practice, a current research project is outlined. This project seeks to use the knowledge gained from the three projects to design and implement an auditory hypermap system to aid wayfinding and the spatial learning of an area. Finally an agenda for applied research is presented. <s> BIB002 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Understanding human navigation <s> This chapter first presents a review of existing locomotion assistance devices for the blind. These devices are merely proximeters, that measure the distance to the closest obstacles, and convey this information to their users. We introduce the measurement methods (infrared sensors, ultrasonic sensors, laser telemeters) and the user interfaces (sounds and tactile vibrations). Then, we analyse the shortcomings of these systems, and thus explain what additional features new devices could offer. To study the feasibility of such systems, we tackle the different issues raised in the process: localizing users, modeling their environment and adding semantic annotations. Finally, we explain how such devices could fit into a view of ambient intelligence, and how the problems raised extend beyond the field of assistance to blind people. 
<s> BIB003 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Understanding human navigation <s> In this article, recent achievements of cognitive research in geographic information science (GIScience) are reviewed and prospects for future directions discussed. Cognitive research in GIScience concerns human knowledge and knowing involving geographic information and geographic information systems (GIS). It includes both internal mental and external symbolic structures and processes, and is practically motivated by the desire to improve the usability, efficiency, equity, and profitability of geographic information and GIS. Taking 1992 as the start of modern GIScience, recent cognitive research falls into six areas: human factors of GIS, geovisualization, navigation systems, cognitive geo-ontologies, geographic and environmental spatial thinking and memory, and cognitive aspects of geographic education. Future prospects for cognitive GIScience research include recommendations for methods, including eye-movement recordings and fMRI; theoretical approaches, including situated cognition, evolutionary cognition, and cognitive neuroscience; and specific problems, including how users incorporate uncertainty metadata in reasoning and decision making, the role of GIS in teaching K-12 students to think spatially, and the potential detrimental effects of over-reliance on digital navigation systems. <s> BIB004 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Understanding human navigation <s> Haptic Sight is a new interface idea providing immediate spatial information to visually impaired people in order to assist independent walking. The interface idea stems from a thorough investigation in which we studied visually impaired people's indoor walking behavior, decision making process, their unique concept of space, and information needs. The aim of this study is to identify an interface design and investigate an appropriate means of spatial information delivery. <s> BIB005
Human beings have the ability to acquire and use information obtained from the surrounding environment using their natural sensors. They have developed a number of evolutionary mechanisms that enable the distinction between different objects and the triggering of events and complex processes based on their perception of reality. Cognition concerns knowledge and knowing in intelligent entities, especially by human beings, but also nonhuman animals and synthetic computational entities such as robots BIB004 . Cognition includes the mental structures and processes involved in perception, attention, thinking and reasoning, learning, memory, linguistic and non-linguistic communication. It also includes external symbolic structures and processes, such as maps or written procedures for carrying out formal spatial analysis, which assist internal cognition. Similarly, cognition is often about space, place, or environment, so cognitive acts are quite often of geographic nature BIB004 . Cognitive mapping BIB002 is of extreme importance for individuals in terms of creating a conceptual model of the surrounding space and objects around them, thereby supporting their interaction with the physical environment BIB003 . In new environments, finding your way can be time consuming and may require a considerable amount of attention. In these types of scenario, visual impairment is a major limitation to user mobility. On the one hand, individuals with visual impairments often need the help of sighted people to navigate and cognitively map new environments, which is time consuming, not always available and leads to lower mobility BIB001 . On the other hand, individuals with cognitive impairment may experience difficulty in learning new environments and following directions. Assistive systems for human navigation generally aim to allow their users to safely and efficiently navigate in unfamiliar environments, without getting lost, by dynamically planning the path based on the user's location, respecting the constraints posed by their special needs. Collecting the specific needs or specificities of any impairment is a key point for the development of any assistive system. Using direct observational and interviewbased knowledge elicitation methods, researchers of The Haptic Sight study BIB005 tried to gain a better understanding of a visually impaired person's indoor walking behavior and the information required for him to walk independently. They found that the visually impaired need to be aware of their current location, the direction they are heading, the direction they need to go and the path to their destination. Only after the research team had identified these parameters did they develop a handheld device-based application. In other words, users with visual impairment must be aware of their physical location, their relation to the surrounding environment (context) and the route they must follow to navigate to a desired destination. When designing an assistive system for human navigation, separate processing units (or modules) can address these identified tasks, namely location, orientation, navigation and interface, as shown in Fig. 1 . This work reviews different ways with which different researchers addressed the use of technology to fill the gaps and needs presented by visual impairment in each of these topics. As with the design of any assistive system, the interface with the user must be adequate to the user's limitations. This work will cover this topic in a dedicated section as well.
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Location <s> We present a study focusing on the usability of a wayfinding and localization system for persons with visual impairment. This system uses special color markers, placed at key locations in the environment, that can be detected by a regular camera phone. Three blind participants tested the system in various indoor locations and under different system settings. Quantitative performance results are reported. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Location <s> Whereas outdoor navigation systems typically rely upon GPS, indoor systems have to rely upon dierent techniques for localizing the user, as GPS signals cannot be received indoors. Over the past decade various indoor navigation systems have been developed. This paper provides a comprehensive overview of existing indoor navigation systems and analyzes the dierent techniques used for: (1) locating the user; (2) planning a path; (3) representing the environment; and (4) interacting with the user. Our survey identies a number of research issues that could facilitate large scale deployment of indoor navigation systems. <s> BIB002
All guidance/navigation systems must include a basic form of localization, i.e., the determination of a user's location and/or pose. The estimation of the user's location is sometimes referred to as "positioning" BIB001 . The most common localization methods can be grouped into four different categories: (1) direct sensing, (2) dead reckoning, (3) triangulation and (4) pattern recognition BIB002 . It is important to understand that, depending on the technology used, the user location may be estimated by the direct application of techniques, or by using computational methods to process data that can indirectly contribute to estimating the location. It is also important to distinguish between the two. While direct-sensing techniques can almost directly provide an indication of the user's location, other methods, such as dead reckoning, use the components of locomotion (heading, acceleration, speed, etc.) to computationally estimate the displacement from a known location. The same applies to triangulation and pattern recognition. In the case of pattern recognition, it is not the actual detection of the visual pattern that provides an estimation of the location. Instead, some of the metrics and data output by the detection (such as the pose and distance from the detected pattern) can be used to computationally make the estimation. The location can be used for both planning the path (navigation) and providing surrounding (contextual) information (orientation). If the user's location is known, the system can also find a new path in case the user gets lost or calculate an alternative path, if needed. The planned path is then used to generate and provide guiding directions to a user-specified destination.
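As an illustration of the computational (rather than direct) estimation just described, the snippet below sketches a basic pedestrian dead-reckoning update. Step detection and sensor calibration are assumed to be handled elsewhere, and the step lengths and headings are hypothetical values:

```python
import math

def dead_reckon(x, y, heading_deg, step_length_m):
    """Advance a known (x, y) position by one detected step.

    heading_deg: compass heading from a magnetometer/IMU, in degrees
                 (0 = north, i.e. +y; 90 = east, i.e. +x).
    step_length_m: estimated stride length of the user, in meters.
    """
    heading = math.radians(heading_deg)
    return (x + step_length_m * math.sin(heading),
            y + step_length_m * math.cos(heading))

# Starting from a known location (e.g., a direct-sensing fix at a doorway),
# accumulate the displacement of successive steps; the error grows with
# every update, which is why dead reckoning is periodically re-anchored
# against direct-sensing fixes in practice.
x, y = 0.0, 0.0
for heading_deg in (90.0, 90.0, 45.0, 0.0):   # hypothetical step headings
    x, y = dead_reckon(x, y, heading_deg, step_length_m=0.7)
print(f"estimated position: ({x:.2f} m, {y:.2f} m)")
```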
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Direct-sensing techniques <s> Metronaut is a novel wearable computer which captures information, senses position, provides wide range communications, consumes less than one watt of power, and weighs less than one pound. Metronaut employs a bar code reader for information input and position location, a two-way pager for communications, and an ARM processor for computation. Metronaut's application is schedule negotiation and guidance instructions for a visitor to the CMU campus. The visitor's position is determined from reading bar codes at information signs around campus. Modifications to the schedule are negotiated using the two-way pager for communications with the campus computing infrastructure. Metronaut is alternatively powered by a mechanical flywheel converting kinetic energy to electrical energy. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Direct-sensing techniques <s> The design of mobile navigation systems adapting to limited resources will be an important future challenge. Since typically several different means of transportation have to be combined in order to reach a destination, the user interface of such a system has to adapt to the user's changing situation. This applies especially to the alternating use of different technologies to detect the user's position, which should be as seamless as possible. This article presents a hybrid navigation system that relies on different technologies to determine the user's location and that adapts the presentation of route directions to the limited technical resources of the output device and the limited cognitive resources of the user. <s> BIB002 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Direct-sensing techniques <s> We describe a navigation and location determination system for the blind using an RFID tag grid. Each RFID tag is programmed upon installation with spatial coordinates and information describing the surroundings. This allows for a self-describing, localized information system with no dependency on a centralized database or wireless infrastructure for communications. We describe the system and report on its characteristic performance, limitations, and lessons learned. <s> BIB003 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Direct-sensing techniques <s> We present a robot-assisted wayfinding system for the visually impaired in structured indoor environments. The system consists of a mobile robotic guide and small passive RFID sensors embedded in the environment. The system is intended for use in indoor environments, such as office buildings, supermarkets and airports. We describe how the system was deployed in two indoor environments and evaluated by visually impaired participants in a series of pilot experiments. We analyze the system's successes and failures and outline our plans for future research and development. <s> BIB004 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Direct-sensing techniques <s> Location-based mobile services have been in use, and studied, for a long time. 
With the proliferation of wireless networking technologies, users are mostly interested in advanced services that render the surrounding environment (i.e., the building) highly intelligent and significantly facilitate their activities. In this paper our focus is on indoor navigation, one of the most important location services. Existing approaches for indoor navigation are driven by geometric information and neglect important aspects, such as the semantics of space and user capabilities and context. The derived applications are not intelligent enough to catalytically contribute to the pervasive computing vision. In this paper, a novel navigation mechanism is introduced. Such a navigation scheme is enriched with user profiles and the adoption of an ontological framework. These enhancements introduce a series of technical challenges that are extensively discussed throughout the paper. <s> BIB005 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Direct-sensing techniques <s> Blind people need to become as independent as possible in their daily life in order to guarantee a fully social inclusion. Mobility means the possibility of freely moving, without support of any accompanying person, at home, in public and private buildings, and in open spaces, as the streets of the town. Mobile and wireless technologies, and in particular the ones used to locate persons or objects, can be used to realize navigation systems in an intelligent environment. Such systems open new opportunities to improve the speed, easiness, and safety of the visually impaired persons' mobility. Using these technologies together with Text To Speech systems and a mobile-based database the authors developed a cost effective, easy-to-use orientation and navigation system: RadioVirgilio/SesamoNet. The cost effectiveness is due to the recovery of RFID identity tags from cattle slaughtering: these tags are then borrowed to create a grid used for navigation. In this paper the results of a usability analysis of this guide system are presented. A preliminary experiment involving a small group of experts and a blind person is described. In order to evaluate the usability, three cognitive walkthrough sessions have been done to discuss the system's basic functionality and to highlight the most critical aspects to be modified. <s> BIB006 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Direct-sensing techniques <s> A location and tracking system becomes very important to our future world of pervasive computing, where information is all around us. Location is one of the most needed pieces of information for emerging and future applications. Since the public use of GPS satellite is allowed, several state-of-the-art devices become part of our life, e.g. a car navigator and a mobile phone with a built-in GPS receiver. However, location information for indoor environments is still very limited. Several techniques are proposed to get location information in buildings such as using a radio signal triangulation, a radio signal (beacon) emitter, or signal fingerprinting. Using radio frequency identification (RFID) tags is a new way of giving location information to users. Due to their passive communication circuit, RFID tags can be embedded almost anywhere without an energy source. The tags store location information and give it to any reader that is within a proximity range which can be up to 10-15 meters for UHF RFID systems.
We propose an RFID-based system for navigation in a building for blind or visually impaired people. The system relies on the location information on the tag, a user's destination, and a routing server where the shortest route from the user's current location to the destination is computed. The navigation device communicates with the routing server using GPRS networks. We build a prototype based on our design and show some results. We found that there are some delay problems in the devices, namely the communication delay due to the cold start cycle of a GPRS modem and the voice delay due to the file transfer delay from a MMC module. <s> BIB007 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Direct-sensing techniques <s> Mobile navigation service is one of the most important Location Based Services. With the rapid advances in enabling technologies for ubiquitous computing, more and more active or passive devices/sensors are augmented in the indoor environment, and the indoor environment has become smarter. This paper proposes that by introducing the notions of Smart Environment and Ambient Intelligence, a ubiquitous indoor navigation service can be built to provide an adaptive smart wayfinding support and enhance users with a new experience during indoor navigation. In this paper, we set up a smart environment with a positioning module and a wireless module. Based on this smart environment, we design a ubiquitous indoor navigation system with interaction and annotation module (for user generated content), user tracking module (for collaborative filtering) and context-aware adaptation to illustrate some potential benefits of combining indoor navigation and Smart Environment. <s> BIB008 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Direct-sensing techniques <s> The autonomy of blind people in their daily life depends on their knowledge of the surrounding world, and they are aided by keen senses and assistive devices that help them to deduce their surroundings. Existing solutions require that users carry a wide range of devices and, mostly, do not include mechanisms to ensure the autonomy of users in the event of system failure. This paper presents the nav4b system that combines guidance and navigation with object recognition, extending traditional aids (white cane and smartphone). A working prototype was installed on the UTAD campus to perform experiments with blind users. <s> BIB009 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Direct-sensing techniques <s> Nowadays, navigation systems are widely used to find the correct path, or the quickest, between two places. These systems use the Global Positioning System (GPS) and only work well in outdoor environments since GPS signals cannot easily penetrate and/or are greatly degraded inside of buildings. Several technologies have been proposed to make navigation inside of buildings possible. One such technology is Radio-Frequency Identification (RFID). In the case of outside environments, some hybrid systems have been proposed that use GPS as main information source and RFID for corrections and location error minimization.
In this article we propose a navigation system that uses RFID as the main technology to guide people with visual impairment in unfamiliar environments, both indoor and outdoor, complementing the traditional white cane and providing information about the user's geographical context. <s> BIB010
Localization techniques based on direct sensing determine the location of the user through the sensing of identifiers (or tags), which have been installed in the environment. Typical direct-sensing technologies include the use of radio-frequency identification (RFID) tags that can either be passive BIB006 BIB004 BIB003 BIB010 or active (some systems use both active and passive tags BIB007 BIB009 ), infrared (IR) transmitters that are installed in known positions where each transmitter broadcasts a unique ID BIB002 BIB005 , Bluetooth beacons BIB008 or visual barcodes BIB001 . All of these technologies require the user to carry extra equipment to sense the identifiers. In the case of radio-frequency identification, though single RFID tags are quite inexpensive, installing them at scale in large environments may become costly. Another disadvantage is the range of detection. In the case of passive tags, the range is very short. In the case of active tags, the range is longer, but they require an individual power supply (and the associated maintenance). Infrared emitters require the user to be in the line of sight and, even so, they are strongly affected by sunlight interference. Bluetooth beacons, when used for localization, require the user to walk more slowly than with other sensing techniques because of the communication/pairing delay. Barcodes are, in a way, very similar to radio-frequency identification. This approach is low cost, easy to install and to maintain. The main limitation is that the user has to find each barcode and scan it, which may be cumbersome and will slow down navigation. In the case of blind users, using a system that searches for printed barcodes that they cannot see is also very demanding and prone to reading failure.
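As a simple illustration of direct sensing, the sketch below shows the core of a tag-based localizer in the spirit of the self-describing RFID grids cited above: each installed identifier maps to known coordinates and a short description of the surroundings, so a single read yields a position fix without a wireless infrastructure. The tag IDs, coordinates and descriptions are invented for the example.

```python
# Hypothetical tag database built at installation time: each identifier is
# georeferenced and "self-describing" (cf. the RFID grid systems above).
TAG_DB = {
    "E200-0001": {"pos": (12.4, 3.1), "info": "library entrance"},
    "E200-0002": {"pos": (15.0, 3.1), "info": "stairs to first floor"},
}

def locate_from_tag(tag_id):
    """Return (position, description) the moment a known tag is sensed."""
    entry = TAG_DB.get(tag_id)
    return (entry["pos"], entry["info"]) if entry else None

print(locate_from_tag("E200-0001"))  # ((12.4, 3.1), 'library entrance')
```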
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Dead reckoning <s> The position-tracking accuracy of a location-aware mobile system can change dynamically as a function of the user’s location and other variables specific to the tracker technology used. This is especially problematic for mobile augmented reality systems, which ideally require extremely precise position tracking for the user’s head, but which may not always be able to achieve the necessary level of accuracy. While it is possible to ignore variable positional accuracy in an augmented reality user interface, this can make for a confusing system; for example, when accuracy is low, virtual objects that are nominally registered with real ones may be too far off to be of use. To address this problem, we describe the early stages of an experimental mobile augmented reality system that adapts its user interface automatically to accommodate changes in tracking accuracy. Our system employs different technologies for tracking a user’s position, resulting in a wide variation in positional accuracy: an indoor ultrasonic tracker and an outdoor real-time kinematic GPS system. For areas outside the range of both, we introduce a dead-reckoning approach that combines a pedometer and orientation tracker with environmental knowledge expressed in spatial maps and accessibility graphs. We present preliminary results from this approach in the context of a navigational guidance system that helps users to orient themselves in an unfamiliar environment. Our system uses inferencing and path planning to guide users toward targets that they choose. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Dead reckoning <s> Pedestrians must often find their way in unfamiliar urban environments or complex buildings. In these cases they need guidance to reach their desired destination, for example a specific room in a local authorities' building, a counter, or a department at an university. The goal of location-based mobile services is to provide such guidance on demand (anywhere, anytime), individually tailored to the actual information needs and presented in preferred forms. Thereby the navigation service requires positioning and tracking capabilities of a mobile user with a certain positioning accuracy and reliability. In particular, navigating in urban areas is a very challenging task as pedestrians move in spaces where none of the known positioning techniques works continuously in standalone mode and the movement is in a much more complex space than 2D networks (i.e. on pedestrian paths and along roads, outdoor and indoor, through underground passages, etc.). To solve this challenging task of continuous position determination, a combination of different location technologies is required. The integration of the sensors should be performed such that all the sensors are tightly coupled in the sense of a so-called multi-sensor system. In a new research project in our University entitled "Pedestrian Navigation Systems in Combined Indoor/Outdoor Environments (NAVIO)" we are working on the improvement of such navigation services. The project is mainly focusing on the information aspect of location-based services, i.e. on the user's task at hand and support of the user's decisions by information provided by such a service. 
Specifications will allow the selection of appropriate sensor data and the integration of data when and where needed, to propose context-dependent routes fitting to partly conflicting interests and goals as well as to select appropriate communication methods in terms of supporting user guidance by various multimedia cartography forms. To test and to demonstrate our approach and results, the project takes a use case scenario into account, i.e. the guidance of visitors to departments of the Vienna University of Technology. First results of our project are presented in this paper. <s> BIB002 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Dead reckoning <s> Walking is the most fundamental means of human transportation. Unlike travel by car, walking is not planar, but rather stereoscopic. We therefore developed a real navigation system for pedestrian point-to-point navigation. We propose herein a method of 3D pedestrian navigation, in which position detection is driven mainly by dead reckoning. The proposed method enables ubiquitous round-the-clock 3D positioning, even inside buildings or between tall buildings. In addition, pedestrian navigation is customized by changing the costs of the road network links. Finally, a positioning data accumulation system is implemented so that we can log tracks and easily incorporate new roads or attributes in the future. <s> BIB003 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Dead reckoning <s> This paper describes path planning and following algorithms for use in indoor navigation for the blind and visually impaired. Providing indoor navigational assistance for this type of users presents additional challenges not faced by conventional guidance systems, due to the personal nature of the interactions. The algorithms are part of an overall Indoor Navigation Model that is used to provide assistance and guidance in unfamiliar indoor environments. Path planning uses the A* and Dijkstra's shortest path algorithms, to operate on an "Intelligent Map", that is based on a new data structure termed "cactus tree" which is predicated on the relationships between the different objects that represent an indoor environment. The paths produced are termed "virtual hand rails", which can be used to dynamically plan a path for a user within a region. The path following algorithm is based on dead reckoning, but incorporates human factors as well as information about the flooring and furnishing structures along the intended planned path. Experimental and simulation results show that the guiding/navigation problem becomes a divergent mathematical problem if the positional information offered by the positioning and tracking systems does not reach a certain requirement. This research explores the potential to design an application for the visually impaired even when to-date 'positioning and tracking' systems cannot offer the reliable position information that is highly required by this type of application. <s> BIB004 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Dead reckoning <s> Ad hoc solutions for tracking and providing navigation support to emergency response teams is an important and safety-critical challenge. We propose a navigation system based on a combination of foot-mounted inertial sensors and ultrasound beacons.
We evaluate experimentally the performance of our dead reckoning system in different environments and for different trail topologies. The inherent drift observed in dead reckoning is addressed by deploying ultrasound beacons as landmarks. We study through simulations the use of the proposed approach in guiding a person along a defined path. Simulation results show that satisfactory guidance performance is achieved despite noisy ultrasound measurements, magnetic interference and uncertainty in ultrasound node locations. The models used for the simulations are based on experimental data and the authors' experience with actual sensors. The simulation results will be used to inform future development of a full real time system. <s> BIB005 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Dead reckoning <s> Whereas outdoor navigation systems typically rely upon GPS, indoor systems have to rely upon different techniques for localizing the user, as GPS signals cannot be received indoors. Over the past decade various indoor navigation systems have been developed. This paper provides a comprehensive overview of existing indoor navigation systems and analyzes the different techniques used for: (1) locating the user; (2) planning a path; (3) representing the environment; and (4) interacting with the user. Our survey identifies a number of research issues that could facilitate large scale deployment of indoor navigation systems. <s> BIB006
Humans maintain (update) their sense of orientation as they move around via a combination of two processes, i.e. landmark-based and dead-reckoning processes. Landmark-based updating involves recognizing specific features in the world that may be associated with known places. Dead-reckoning updating involves keeping track of the components of locomotion (including heading, velocity or acceleration) and travel duration. Dead reckoning is sometimes referred to as ''path integration'' BIB006 . While the user is moving, a dead-reckoning system estimates the user's location through a combination of odometry readings. Odometry readings can be acquired through a combination of sensors such as accelerometers, magnetometers, compasses, and gyroscopes BIB005 BIB001 BIB003 BIB002 or using a user's specific walking pattern (such as the user's average walking speed) BIB004 . An initial location is typically determined using a global navigation satellite system (GNSS) like the Global Positioning System (GPS) BIB001 , radio-frequency identification (RFID) tags BIB003 , or cellular phone positioning (GSM broadcasting stations) BIB002 . Since the location estimation is a recursive process, inaccuracies in the location estimates translate into errors that accumulate over time. The accumulated error can be corrected using environmental knowledge. The user's position can be synchronized using periodic updates from direct-sensing localization techniques such as RFID tags, or pattern-matching localization methods such as the use of data extracted from the recognition of known visual landmarks. A benefit of processing data from pattern matching over direct-sensing techniques is a lower installation cost, as a smaller number of identifiers must be installed.
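The core of a pedestrian dead-reckoning estimator can be written in a few lines: each detected step advances the previous fix by a step length along the measured heading, which is also why heading and step-length errors accumulate until the next absolute fix. The sketch below assumes a step detector and a heading source (e.g., accelerometer and compass) are available; the numeric values are illustrative only.

```python
import math

def dead_reckon(pos, heading_rad, step_m):
    """Advance a 2-D position by one step; heading is clockwise from north,
    so x grows to the east (sin) and y to the north (cos)."""
    x, y = pos
    return (x + step_m * math.sin(heading_rad),
            y + step_m * math.cos(heading_rad))

pose = (0.0, 0.0)  # initial absolute fix, e.g. from GPS or an RFID tag
for heading, step in [(0.0, 0.7), (math.pi / 2, 0.7), (math.pi / 2, 0.65)]:
    pose = dead_reckon(pose, heading, step)  # drift accumulates here
print(pose)  # periodically re-synchronized against tags or visual landmarks
```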
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> This paper presents the design, implementation, and evaluation of Cricket , a location-support system for in-building, mobile, location-dependent applications. It allows applications running on mobile and static nodes to learn their physical location by using listeners that hear and analyze information from beacons spread throughout the building. Cricket is the result of several design goals, including user privacy, decentralized administration, network heterogeneity, and low cost. Rather than explicitly tracking user location, Cricket helps devices learn where they are and lets them decide whom to advertise this information to; it does not rely on any centralized management or control and there is no explicit coordination between beacons; it provides information to devices regardless of their type of network connectivity; and each Cricket device is made from off-the-shelf components and costs less than U.S. $10. We describe the randomized algorithm used by beacons to transmit information, the use of concurrent radio and ultrasonic signals to infer distance, the listener inference algorithms to overcome multipath and interference, and practical beacon configuration and positioning techniques that improve accuracy. Our experience with Cricket shows that several location-dependent applications such as in-building active maps and device control can be developed with little effort or manual configuration. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> Drishti is a wireless pedestrian navigation system. It integrates several technologies including wearable computers, voice recognition and synthesis, wireless networks, Geographic Information System (GIS) and Global positioning system (GPS). Drishti augments contextual information to the visually impaired and computes optimized routes based on user preference, temporal constraints (e.g. traffic congestion), and dynamic obstacles (e.g. ongoing ground work, road blockade for special events). The system constantly guides the blind user to navigate based on static and dynamic data. Environmental conditions and landmark information queried from a spatial database along their route are provided on the fly through detailed explanatory voice cues. The system also provides capability for the user to add intelligence, as perceived by, the blind user, to the central server hosting the spatial database. Our system is supplementary to other navigational aids such as canes, blind guide dogs and wheel chairs. <s> BIB002 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> The design of mobile navigation systems adapting to limited resources will be an important future challenge. Since typically several different means of transportation have to be combined in order to reach a destination, the user interface of such a system has to adapt to the user's changing situation. This applies especially to the alternating use of different technologies to detect the user's position, which should be as seamless as possible. This article presents a hybrid navigation system that relies on different technologies to determine the user's location and that adapts the presentation of route directions to the limited technical resources of the output device and the limited cognitive resources of the user. 
<s> BIB003 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> In this paper, we discuss application possibilities of augmented reality technologies in the field of mobility support for the deaf-blind. We propose the navigation system called virtual leading blocks for the deaf-blind, which consists of a wearable interface for Finger-Braille, one of the commonly used communication methods among deaf-blind people in Japan, and a ubiquitous environment for barrier-free application, which consists of floor-embedded active radio-frequency identification (RFID) tags. The wearable Finger-Braille interface using two Linux-based wristwatch computers has been developed as a hybrid interface of verbal and nonverbal communication in order to inform users of their direction and position through the tactile sensation. We propose the metaphor of "watermelon splitting" for navigation by this system and verify the feasibility of the proposed system through experiments. <s> BIB004 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> There are many navigation systems for visually impaired people but few can provide dynamic interactions and adaptability to changes. None of these systems work seamlessly both indoors and outdoors. Drishti uses a precise position measurement system, a wireless connection, a wearable computer, and a vocal communication interface to guide blind users and help them travel in familiar and unfamiliar environments independently and safely. Outdoors, it uses DGPS as its location system to keep the user as close as possible to the central line of sidewalks of campus and downtown areas; it provides the user with an optimal route by means of its dynamic routing and rerouting ability. The user can switch the system from an outdoor to an indoor environment with a simple vocal command. An OEM ultrasound positioning system is used to provide precise indoor location measurements. Experiments show an indoor accuracy of 22 cm. The user can get vocal prompts to avoid possible obstacles and step-by-step walking guidance to move about in an indoor environment. This paper describes the Drishti system and focuses on the indoor navigation design and lessons learned in integrating the indoor with the outdoor system. <s> BIB005 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> A location-aware navigation system has been developed and implemented for the visually disabled or visually impaired; the system is designed to improve individuals' independent mobility. This self-contained, portable system integrates several technologies, including mobile personal digital assistants, voice synthesis, a geographic information system (GIS), and a differential Global Positioning System (DGPS). The system is meant to augment the various sensory inputs available to the visually impaired user. It provides the user with navigation assistance, making use of voice cues iterating contextual building and feature information at regular intervals, through automatic GPS readings and a GIS database. To improve the efficiency of the retrieval of contextual information, an indexing method based on road segmentation was developed to replace the exhaustive search method.
Experimental results show that the performance of the system on searching the buildings, landmarks, and other features around a road has been significantly improved by using this indexing method. <s> BIB006 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> In the research project NAVIO (Pedestrian Navigation Systems in Combined Indoor/Outdoor Environments) at our University we are working on the improvement of navigation services for pedestrians. Thereby we are mainly focusing on the information aspect of location-based services, i.e., on the user's task at hand and the support of the user's decisions by information provided by such a service. Specifications will allow us to select appropriate sensor data and to integrate data when and where needed, to propose context-dependent routes fitting to partly conflicting interests and goals as well as to select appropriate communication methods in terms of supporting the user guidance by various multimedia cartography forms. These tasks are addressed in the project in three different work packages, i.e., the first on "Integrated positioning", the second on "Pedestrian route modeling" and the third on "Multimedia route communication". In this paper we will concentrate on the research work and findings in the first work package. For continuous positioning of a pedestrian suitable location technologies include GNSS and indoor location techniques, cellular phone positioning, dead reckoning sensors (e.g. magnetic compass, gyro and accelerometers) for measurement of heading and travelled distance as well as barometric pressure sensors for height determination. The integration of these sensors in a modern multi-sensor system can be performed using an adapted Kalman filter. To test and to demonstrate our approach, we take a use case scenario into account, i.e., the guidance of visitors to departments of the Vienna University of Technology. The results of simulation studies and practical tests could confirm that such a service can achieve a high level of performance for the guidance of a pedestrian in urban areas and mixed indoor and outdoor environments. <s> BIB007 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> In this paper the design, development and evaluation of a GPS-based auditory navigation system is presented that implicitly guides a user by a contextualized rendering of personal audio files. The benefit of this navigation system is that the user can listen to his own audio contents while being navigated. Wearing headphones, the user listens to audio contents which are located in a virtual environment. The user simply walks in the direction where the sound seems to have its origin. A formal evaluation under field conditions proved that navigation with contextualized audio contents is efficient and intuitive and that users are highly satisfied with the navigation support given by the evaluated auditory display. <s> BIB008 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> Location-based mobile services have been in use, and studied, for a long time. With the proliferation of wireless networking technologies, users are mostly interested in advanced services that render the surrounding environment (i.e., the building) highly intelligent and significantly facilitate their activities.
In this paper our focus is on indoor navigation, one of the most important location services. Existing approaches for indoor navigation are driven by geometric information and neglect important aspects, such as the semantics of space and user capabilities and context. The derived applications are not intelligent enough to catalytically contribute to the pervasive computing vision. In this paper, a novel navigation mechanism is introduced. Such navigation scheme is enriched with user profiles and the adoption of an ontological framework. These enhancements introduce a series of technical challenges that are extensively discussed throughout the paper. <s> BIB009 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> Almost 2 million Japanese citizens use Navitime, a mobile phone-based navigation service that incorporates various modes of transportation. User experiences reveal implications for designing urban-computing services. Location-based services are a key pervasive computing application that could deeply influence urban spaces and their inhabitants. Recent advances in mobile phones, GPS, and wireless networking infrastructures are making it possible to implement and operate large-scale location- based services in the real world. <s> BIB010 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> Many applications in the area of location-based services and personal navigation require nowadays the location determination of a user not only in an outdoor environment but also an indoor. Typical applications of location-based services (LBS) mainly in outdoor environments are fleet management, travel aids, location identification, emergency services and vehicle navigation. LBS applications can be further extended if reliable and reasonably accurate three-dimensional positional information of a mobile device can be determined seamlessly in both indoor and outdoor environments. Current geolocation methods for LBS may be classified as GNSS-based, cellular network-based or their combinations. GNSS-based methods rely very much on the satellite visibility and the receiver-satellite geometry. This can be very problematic in dense high-rise urban environments and when transferring to an indoor environment. Especially, in cities with many high-rise buildings, the urban canyon will greatly affect the reception of the GNSS signals. Moreover, positioning in the indoor/outdoor transition areas would experience signal quality and signal reception problems, if GNSS systems alone are employed. The authors have proposed the integration of GNSS with wireless positioning techniques such as WiFi and UWB. In the case of WiFi positioning, the so-called fingerprinting method based on WiFi signal strength observations is usually employed. In this article, the underlying technology is briefly reviewed, followed by an investigation of two WiFi-positioning systems. Testing of the system is performed in two localisation test beds, one at the Vienna University of Technology and another one at the Hong Kong Polytechnic University. The first test showed that the trajectory of a moving user could be obtained with a standard deviation of about ±3-5 m. The main disadvantage of WiFi fingerprinting, however, is the required time consuming and costly signal strength system calibration in the beginning. 
Therefore, the authors have investigated if the measured signal strength values can be converted to the corresponding range to the access point. A new approach for this conversion is presented and analysed in typical test scenarios. <s> BIB011 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Triangulation <s> The complexity of indoor radio propagation has resulted in location-awareness being derived from empirical fingerprinting techniques, where positioning is performed via a previously-constructed radio map, usually of WiFi signals. The recent introduction of the Bluetooth Low Energy (BLE) radio protocol provides new opportunities for indoor location. It supports portable battery-powered beacons that can be easily distributed at low cost, giving it distinct advantages over WiFi. However, its differing use of the radio band brings new challenges too. In this work, we provide a detailed study of BLE fingerprinting using 19 beacons distributed around a $\sim\! 600\ \mbox{m}^2$ testbed to position a consumer device. We demonstrate the high susceptibility of BLE to fast fading, show how to mitigate this, and quantify the true power cost of continuous BLE scanning. We further investigate the choice of key parameters in a BLE positioning system, including beacon density, transmit power, and transmit frequency. We also provide quantitative comparison with WiFi fingerprinting. Our results show advantages to the use of BLE beacons for positioning. For one-shot (push-to-fix) positioning, the reported error increases from a dense BLE deployment (one beacon per $\sim 30\ \mbox{m}^2$), to a sparse one (one beacon per $\sim 100\ \mbox{m}^2$), to < 8.5 m for an established WiFi network in the same area. <s> BIB012
Though most direct-sensing techniques try to locate the user by sensing one unique identifier, several systems employ multiple identifiers and use triangulation computational methods to locate the user. These methods locate the user by triangulating the sensed tags installed in known locations. The tags that have been frequently used for indoor or outdoor localization include RFID BIB004 , infrared (IR) BIB003 , and ultrasound BIB001 BIB005 . Lateration uses the distance between the user and at least three known points, whereas angulation uses the angular measurements from at least three known points to the user to determine the user's location. The Global Positioning System (GPS) is the most commonly used system for outdoor localization BIB008 BIB002 BIB006 and uses a trilateration computational method to locate the user, based on known satellite positions. GPS receivers analyze a periodic signal sent out by each satellite to compute the latitude, longitude and altitude at the user's position. For outdoor navigation, GPS has become the standard as it is free, reliable, and available any place on Earth in any weather condition. The main disadvantage of GPS localization is that the GPS signal strongly degrades inside buildings, between tall buildings or in dense forest areas (such as parks). There are two alternative triangulation-based techniques, which are available in contexts where GPS signals cannot be sensed or are unavailable. Cell-tower positioning BIB010 uses the triangulation of the known locations of cell towers with the provided signal strength of each cell phone tower, whereas wireless local area network (WLAN) positioning BIB009 BIB007 triangulates the location of wireless base stations using the signal of each emitting station. Both techniques have a lower precision than GPS due to multi-path reflection problems. Another way of using the signal from wireless emitting stations, such as Wi-Fi, is signal fingerprinting. This approach is based on signal strength observations at previously known locations. An estimate of the location is obtained based on these measurements and a signal propagation model. The propagation model can be obtained by simulation or with prior calibration measurements in certain locations. In this last case, the measured signal strength values at a certain location are compared with the signal strength values of pre-calibrated points stored in a database. This approach, with proper calibration, can provide extremely high accuracy in comparison with GNSS-based approaches and has been successfully adopted in the field of robotics and unmanned vehicle applications. The major limitation to its application in the blind user case is the cost-over-benefit ratio: the initial signal strength calibration of the system is very time consuming and costly BIB011 BIB012 .
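The lateration idea described above reduces to solving a small system of equations. A common trick is to subtract one range equation from the others, which linearizes the circles and allows a least-squares solution; the sketch below applies it in 2-D with NumPy. The beacon positions and ranges are made-up values, and a real system would additionally weight measurements by their confidence.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares 2-D position from >= 3 known points and distances.
    Subtracting the first circle equation from the rest yields a linear
    system A @ [x, y] = b, solved here in one shot."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(ranges, dtype=float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
         + d[0] ** 2 - d[1:] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three beacons and slightly noisy ranges to a user standing near (2, 3).
print(trilaterate([(0, 0), (10, 0), (0, 10)], [3.6, 8.5, 7.3]))
```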
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Pattern recognition <s> This article presents a short but detailed description of the optophone: its origins as a reading device for the blind, the various stages of its development, and the possibility of its use as a mobility aid for the blind. Research into the use of stereo vision is described as an aid to information reduction, in the hope of remedying the problems of information overload that commonly plague electronic blind aids. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Pattern recognition <s> In an easel, a clamp for suspending sheets or other objects is formed with an elongated plate and a pair of brackets that support a bar. The brackets are particularly formed so that they incline downwardly towards the plate upon which they are mounted and the bar is arranged to be slidingly affixed to the brackets. The bar slides up and down and may grip objects placed between it and the plate. Ideally, the bar is provided with cushion means which provide the actual gripping action against the plate. <s> BIB002
Recently, systems have been developed which use computer vision techniques, like pattern matching, to sense the surrounding environment and detect visual landmarks. Although at first glance it may be quite obvious that pattern recognition alone cannot provide an indication of the user location, an estimation can be extracted indirectly from the data output by the detection, such as the pose and distance of the detected pattern. The most common artificial vision systems developed to support the guidance of blind users extract this type of information by analyzing the characteristics of the objects detected in the captured image using classical image processing techniques BIB001 BIB002 . Some systems go further by combining vision sensors with positioning sensors, or even by combining multiple vision sensors to obtain a 3D representation of the scene (and thus depth information).
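A minimal example of the pattern-matching step is normalized cross-correlation template matching, sketched below with OpenCV; a real guidance system would then convert the match location and apparent size into the pose and distance estimates discussed above. The score threshold is an assumption for illustration, not a value taken from any surveyed system.

```python
import cv2

def find_landmark(frame_gray, template_gray, threshold=0.8):
    """Search a grayscale camera frame for a known landmark template and
    return the top-left corner of the best match, or None if the
    normalized correlation score falls below the threshold."""
    scores = cv2.matchTemplate(frame_gray, template_gray,
                               cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_loc = cv2.minMaxLoc(scores)
    return best_loc if best_score >= threshold else None
```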
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Motion and pose estimation <s> Context awareness is an important functionality for wearable computers. In particular, the computer should know where the person is in the environment. This paper proposes an image sequence matching technique for the recognition of locations and previously visited places. As in single word recognition in speech recognition, a dynamic programming algorithm is proposed for the calculation of the similarity of different locations. The system runs on a standalone wearable computer, such as a Libretto PC. Using a training sequence, a dictionary of locations is created automatically. These locations are then recognized by the system in real time using a hat-mounted camera. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Motion and pose estimation <s> Increasingly, cell phones are used to browse for information while location systems assist in gathering information that is most appropriate to the user's current location. We seek to take this one step further and actually overlay information on to the physical world using the cell phone's camera and thereby minimize a user's cognitive effort. This "magic lens" approach has many applications of which we are exploring two: indoor building navigation and dynamic directory assistance. In essence, we match "landmarks" identified in the camera image with those stored in a building database. We use two different types of features - floor corners that can be matched against a floorplan and SIFT features that can be matched to a database constructed from other images. The camera's pose can be determined exactly from a match and information can be properly aligned so that it can overlay directly onto the phone's image display. In this paper, we present early results that demonstrate it is possible to realize this capability for a variety of indoor environments. Latency is shown to already be reasonable and likely to be improved by further optimizations. Our goal is to further explore the computational tradeoff between the server and phone client so as to achieve an acceptable latency of a few seconds. <s> BIB002 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Motion and pose estimation <s> Two major limitations of real-time visual SLAM algorithms are the restricted range of views over which they can operate and their lack of robustness when faced with erratic camera motion or severe visual occlusion. In this paper we describe a visual SLAM algorithm which addresses both of these problems. The key component is a novel feature description method which is both fast and capable of repeat-able correspondence matching over a wide range of viewing angles and scales. This is achieved in real-time by using a SIFT-like spatial gradient descriptor in conjunction with efficient scale prediction and exemplar based feature representation. Results are presented illustrating robust realtime SLAM operation within an office environment. <s> BIB003 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Motion and pose estimation <s> This low-cost indoor navigation system runs on off-the-shelf camera phones. More than 2,000 users at four different large-scale events have already used it. 
The system uses built-in cameras to determine user location in real time by detecting unobtrusive fiduciary markers. The required infrastructure is limited to paper markers and static digital maps, and common devices are used, facilitating quick deployment in new environments. The authors have studied the application quantitatively in a controlled environment and qualitatively during deployment at four large international events. According to test users, marker-based navigation is easier to use than conventional mobile digital maps. Moreover, the users' location awareness in navigation tasks improved. Experiences drawn from questionnaires, usage log data, and user interviews further highlight the benefits of this approach. <s> BIB004 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Motion and pose estimation <s> This report describes an efficient algorithm to accurately determine the position and orientation of a camera in an outdoor urban environment using camera imagery acquired from a single location on the ground. The requirement to operate using imagery from a single location allows a system using our algorithms to generate instant position estimates and ensures that the approach may be applied to both mobile and immobile ground sensors. Localization is accomplished by registering visible ground images to urban terrain models that are easily generated offline from aerial imagery. Provided there are a sufficient number of buildings in view of the sensor, our approach provides accurate position and orientation estimates, with position estimates that are more accurate than those typically produced by a global positioning system (GPS). <s> BIB005 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Motion and pose estimation <s> Independent travel is a well known challenge for blind and visually impaired persons. In this paper, we propose a proof-of-concept computer vision-based wayfinding aid for blind people to independently access unfamiliar indoor environments. In order to find different rooms (e.g. an office, a lab, or a bathroom) and other building amenities (e.g. an exit or an elevator), we incorporate object detection with text recognition. First we develop a robust and efficient algorithm to detect doors, elevators, and cabinets based on their general geometric shape, by combining edges and corners. The algorithm is general enough to handle large intra-class variations of objects with different appearances among different indoor environments, as well as small inter-class differences between different objects such as doors and door-like cabinets. Next, in order to distinguish intra-class objects (e.g. an office door from a bathroom door), we extract and recognize text information associated with the detected objects. For text recognition, we first extract text regions from signs with multiple colors and possibly complex backgrounds, and then apply character localization and topological analysis to filter out background interference. The extracted text is recognized using off-the-shelf optical character recognition (OCR) software products. The object type, orientation, location, and text information are presented to the blind traveler as speech.
<s> BIB006 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Motion and pose estimation <s> In this paper we introduce Audible Vision, a system that can help blind and visually impaired users navigate in large indoor open spaces. The system uses computer vision to estimate the location and orientation of the user, and enables the user to perceive his/her relative position to a landmark through 3D audio. Testing shows that Audible Vision can work reliably in real-life ever-changing environment crowded with people. <s> BIB007 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Motion and pose estimation <s> This paper presents a sensor fusion strategy applied for Simultaneous Localization and Mapping (SLAM) in dynamic environments. The designed approach consists of two features: (i) the first one is a fusion module which synthesizes line segments obtained from laser rangefinder and line features extracted from monocular camera. This policy eliminates any pseudo segments that appear from any momentary pause of dynamic objects in laser data. (ii) The second characteristic is a modified multi-sensor point estimation fusion SLAM (MPEF-SLAM) that incorporates two individual Extended Kalman Filter (EKF) based SLAM algorithms: monocular and laser SLAM. The error of the localization in fused SLAM is reduced compared with those of individual SLAM. Additionally, a new data association technique based on the homography transformation matrix is developed for monocular SLAM. This data association method relaxes the pleonastic computation. The experimental results validate the performance of the proposed sensor fusion and data association method. <s> BIB008 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Motion and pose estimation <s> Abstract Assisting the visually impaired along their navigation path is a challenging task which drew the attention of several researchers. A lot of techniques based on RFID, GPS and computer vision modules are available for blind navigation assistance. In this paper, we proposed a depth estimation technique from a single image based on local depth hypothesis devoid of any user intervention and its application to assist the visually impaired people. The ambient space ahead of the user is captured by a camera and the captured image is resized for computational efficiency. The obstacles in the foreground of the image are segregated using edge detection followed by morphological operations. Then depth is estimated for each obstacle based on local depth hypothesis. The estimated depth map is then compared with the reference depth map of the corresponding depth hypothesis and the deviation of the estimated depth map from the reference depth map is used to retrieve the spatial information about the obstacles ahead of the user. <s> BIB009
Systems that use computer vision to estimate the location and orientation of the user enable them to perceive their position relative to a detected georeferenced visual landmark BIB009 BIB006 BIB007 . When the user is carrying a camera whose position and orientation relative to the user's body are known, the motion of the features detected in the captured images may be used to assess information about the carrier's pose and motion. Visual motion information is not affected by the same error sources as global navigation satellite systems or self-contained sensors (like inertial sensors) and is therefore a complementary information source for increasing the accuracy of the positioning measurements. Research related to visual positioning methods has been mainly focused on the autonomous navigation of vehicles and mobile robots. The first papers related to the use of computer vision assistance in pedestrian navigation were published in the late 1990s BIB001 . They described the use of databases preloaded with sample images of the expected surroundings, which were tagged with information about their geographic location. The position of the pedestrian was provided when a match was found between an image taken by the pedestrian and an image stored in the database BIB005 . The database and the image processing could be hosted locally or remotely on a server, depending on processing power requirements BIB002 . A visual pedestrian navigation system independent of a server and of pre-existing databases usually needs integration with other positioning sensors to be functional. In such a system, monitoring the motion of features in consecutive images taken by the user device, and integrating the information with measurements obtained from other sensors or a Global Navigation Satellite System (GNSS) receiver, can be used to obtain the relative position of the user. Initial absolute position information can be used to reduce drift and other errors, as without an initial position the visual perception only provides information about the user's motion. Such server-independent systems have been proposed using vision-aided Inertial Measurement Unit (IMU) measurements. Other techniques, like the ones used in Simultaneous Localization and Mapping (SLAM) systems, produce a map of the unknown environment while simultaneously locating the user. Traditionally, mapping has been done using inertial sensors, though in recent years SLAM systems that also integrate a camera (visual SLAM systems) have been developed BIB003 . The magnitude of the motion of a figure in an image is dependent on the relative depth of the object within the captured scene, i.e. the distance of the object from the camera. Because the distance of objects from the camera in the environment is usually unknown, a scale problem arises, and different methods for overcoming it have been used. Tools for resolving the distance, like laser rangefinders, have been integrated with a camera BIB008 . The requirement for carrying special equipment reduces the applicability of this method for pedestrian navigation, especially for blind users. Another approach is the use of computer vision algorithms to detect artificial landmarks with known indoor locations (georeferenced landmarks). Recently, indoor navigation systems have been proposed which use computer vision to detect and decode fiduciary markers in real time, using standard camera phones. Among the most commonly used markers are 2-D barcodes.
The barcode provides a unique ID and a fixed-size template, which may be used to estimate the pose of the viewer BIB004 . With such markers, a standard smartphone suffices for these kinds of systems, without the need to carry any extra equipment. Once the marker is in the camera's field of vision, the user can receive a warning about their relative bearing to the marker, as well as an approximate distance.
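Because a fiduciary marker has a known physical size, a single calibrated camera is enough to approximate both the distance and the relative bearing that are announced to the user. The sketch below uses the pinhole camera model (apparent size is inversely proportional to range); the focal length and marker size are assumed calibration inputs, and the numbers are illustrative rather than taken from the cited systems.

```python
import math

def marker_range_bearing(marker_px_w, marker_cx_px, image_w_px,
                         focal_px, marker_w_m):
    """Pinhole-model estimate of range and bearing to a detected marker."""
    distance_m = focal_px * marker_w_m / marker_px_w   # size ~ 1/range
    offset_px = marker_cx_px - image_w_px / 2.0        # off-centre shift
    bearing_rad = math.atan2(offset_px, focal_px)      # + = to the right
    return distance_m, bearing_rad

# A 15 cm marker imaged 60 px wide, centred at x = 400 in a 640 px frame
# taken with an (assumed) focal length of 500 px: ~1.25 m, ~9 deg right.
print(marker_range_bearing(60, 400, 640, 500, 0.15))
```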
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> 3D sensing <s> This paper presents a visual odometer system using stereo cameras for pedestrian navigation. Corner detection, stereo matching, triangulation, tracking, and robust ego-motion estimation are used for data processing. The outcome is the estimated incremental egomotion of the stereo cameras. The problems of implementing the system on a pedestrian are stated. The first problem is image feature motion. The motion of image features is the result of the motion of stereo cameras. In the case that the feature belongs to an independent moving object, the movement of the feature is the result of the motion of the cameras together with the motion of the feature itself. Hence, a novel robust ego-motion estimation algorithm must be utilized to eliminate outliers, which are independent moving features, mismatched features in the stereo matching step and incorrectly assigned features in the tracking step. Secondly, the feature, which is collected on a pedestrian, results in a winding trajectory, which may easily fail the tracking algorithm. In this paper, we introduce a new method based on the knowledge of gait analysis to capture images at the same stage of the walking cycle. This leads to a less winding trajectory, which can be tracked without increasing the order and computational cost of the tracker. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> 3D sensing <s> In order to supplement the traditional aids for the visually impaired, many different technologies are being explored to provide more accurate and useful information. In particular, vision systems generate visual information that can be used to provide guidance to the visually impaired. This paper presents a 1D signal matching algorithm for stereo vision correlation as well as an embedded system that provides obstacle distance estimation to the user. <s> BIB002 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> 3D sensing <s> In this paper, we present a walking guidance system for visually impaired pedestrians. The system has been designed to help the visually impaired by responding intelligently to various situations that can occur in unrestricted natural outdoor environments when walking and finding the destinations. It involves the main functions of people detection, text recognition, and face recognition. In addition, sophisticated functions of walking path guidance using a Differential Global Positioning System, obstacle detection using a stereo camera, and a voice user interface are included. In order to operate all functions concurrently, we develop approaches in real situations and integrate them. Finally, we experiment on a prototype system under natural environments in order to verify our approaches. The results show that our approaches are applicable to real situations. <s> BIB003 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> 3D sensing <s> RGB-D cameras are novel sensing systems that capture RGB images along with per-pixel depth information. In this paper we investigate how such cameras can be used in the context of robotics, specifically for building dense 3D maps of indoor environments. Such maps have applications in robot navigation, manipulation, semantic mapping, and telepresence.
We present RGB-D Mapping, a full 3D mapping system that utilizes a novel joint optimization algorithm combining visual features and shape-based alignment. Visual and depth information are also combined for view-based loop closure detection, followed by pose optimization to achieve globally consistent maps. We evaluate RGB-D Mapping on two large indoor environments, and show that it effectively combines the visual and shape information available from RGB-D cameras. <s> BIB004 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> 3D sensing <s> Computer stereo vision is an important technique for robotic navigation and other mobile scenarios where depth perception is needed, but it usually requires two cameras with a known horizontal displacement. In this paper, we present a solution for mobile devices with just one camera, which is a first step towards making computer stereo vision available to a wide range of devices that are not equipped with stereo cameras. We have built a prototype using a state-of-the-art mobile phone, which has to be manually displaced in order to record images from different lines of sight. Since the displacement between the two images is not known in advance, it is measured using the phone's inertial sensors. We evaluated the accuracy of our single-camera approach by performing distance calculations to everyday objects in different indoor and outdoor scenarios, and compared the results with those of a stereo camera phone. As the main advantage of a single moving camera is the possibility of varying its relative position between taking the two pictures, we investigated the effect of different camera displacements on the accuracy of distance measurements. <s> BIB005 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> 3D sensing <s> The sheer volume of data generated by depth cameras provides a challenge to process in real time, in particular when used for indoor mobile robot localization and navigation. We introduce the Fast Sampling Plane Filtering (FSPF) algorithm to reduce the volume of the 3D point cloud by sampling points from the depth image, and classifying local grouped sets of points as belonging to planes in 3D (the "plane filtered" points) or points that do not correspond to planes within a specified error margin (the "outlier" points). We then introduce a localization algorithm based on an observation model that down-projects the plane filtered points onto 2D, and assigns correspondences for each point to lines in the 2D map. The full sampled point cloud (consisting of both plane filtered as well as outlier points) is processed for obstacle avoidance for autonomous navigation. All our algorithms process only the depth information, and do not require additional RGB data. The FSPF, localization and obstacle avoidance algorithms run in real time at full camera frame rates (30Hz) with low CPU requirements (16%). We provide experimental results demonstrating the effectiveness of our approach for indoor mobile robot localization and navigation. We further compare the accuracy and robustness in localization using depth cameras with FSPF vs. alternative approaches that simulate laser rangefinder scans from the 3D data.
<s> BIB006 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> 3D sensing <s> Consumer-grade range cameras such as the Kinect sensor have the potential to be used in mapping applications where accuracy requirements are less strict. To realize this potential, insight into the geometric quality of the data acquired by the sensor is essential. In this paper we discuss the calibration of the Kinect sensor, and provide an analysis of the accuracy and resolution of its depth data. Based on a mathematical model of depth measurement from disparity, a theoretical error analysis is presented, which provides an insight into the factors influencing the accuracy of the data. Experimental results show that the random error of depth measurement increases with increasing distance to the sensor, and ranges from a few millimeters up to about 4 cm at the maximum range of the sensor. The quality of the data is also found to be influenced by the low resolution of the depth measurements. <s> BIB007 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> 3D sensing <s> This paper presents a system which extends the use of the traditional white cane by the blind for navigation purposes in indoor environments. Depth data of the scene in front of the user is acquired using the Microsoft Kinect sensor and is then mapped into a pattern representation. Using neural networks, the proposed system uses this information to extract relevant features from the scene, enabling the detection of possible obstacles along the way. The results show that the neural network is able to correctly classify the type of pattern presented as input. <s> BIB008 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> 3D sensing <s> In this paper, we present a novel approach for aerial obstacle detection (e.g., branches or awnings) using a 3-D smartphone in the context of assisting visually impaired (VI) people. This kind of obstacle is especially challenging because it cannot be detected by the walking stick or the guide dog. The algorithm captures the 3-D data of the scene through stereo vision. To our knowledge, this is the first work that presents a technology able to obtain real 3-D measures with smartphones in real time. The orientation sensors of the device (magnetometer and accelerometer) are used to approximate the walking direction of the user, in order to look for obstacles only in that direction. The obtained 3-D data are compressed and then linearized for detecting the potential obstacles. Potential obstacles are tracked in order to accumulate enough evidence to alert the user only when a real obstacle is found. In the experimental section, we show the results of the algorithm in several situations using real data and with the help of VI users. <s> BIB009
Distance is one of the most important aspects of navigation, as it is used to avoid collisions or recognize nearby objects. The way human vision uses different perspectives of the same scene to create a three-dimensional perception of the world inspired the use of multiple cameras to model/recognize the world in three dimensions. When a stereo camera is used, the distance to objects may be estimated using triangulation BIB001 . In the case of stereovision, the distance between the two cameras, called the baseline, affects the accuracy of the motion obtained from the images. The farther the two cameras are from each other, the better the accuracy will be BIB005 . Stereovision may be used to obtain 3D range information, and area correlation methods can be used for approximate depth information. This information has been successfully used in combination with pedestrian detection models BIB003 . Genetic algorithms have also been used to perform stereovision correlation and generate dense disparity maps. These disparity maps, in turn, provide rough distance estimates to the user, allowing them to navigate through the environment BIB002 . Simpler approaches use one relative view (right or left camera) and a depth map (from the stereovision equipment) to perform fuzzy-based clustering segmentation of the scenario into object clusters. Afterwards, knowing the clusters' locations, it is possible to detect near and far obstacles and feed this information to the user. The detection of changes in a 3D space based on fusing range data and image data captured by the cameras may also be used to create a 3D representation of the surrounding space that can be transmitted to the user through an appropriate interface, namely a haptic one. A short-term depth map computed for the user's immediate environment may be used to classify the path/scene as having (or not) any immediate obstacles, whether ground-based or aerial, along with their relative position (left/right). Recently, 3D vision sensors have evolved considerably and have been applied in several popular devices such as smartphones and game consoles, greatly reducing their cost. Stereovision has been successfully applied to mobile devices (smartphones), allowing the structure of the environment to be estimated and some obstacle classification to be performed BIB009 . Theoretically, stereovision camera phones can work and be used to extract the same type of information as other standard stereovision systems. In the case of smartphones, the main limitation is their low processing power for real-time execution, although it has increased significantly in recent years. However, real-world scenes are usually very structured, and real-time obstacle classification is still only used to work as a virtual stick, or white cane (not replacing it entirely, for safety reasons). In many cases, stereoscopic vision has been replaced by the Microsoft Kinect sensor. This led to the mass use of these sensors in scientific research, with good results BIB004 BIB006 BIB007 . The Kinect sensor includes a depth sensor and an RGB camera. The depth sensor is composed of an infrared laser source that projects non-visible light with a coded pattern, combined with a monochromatic CMOS image sensor that captures the reflected light. The pattern received by the CMOS sensor is a deformed version of the original pattern, projected by the laser source and deformed by the objects on the scene.
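Both classic stereo rigs and structured-light sensors such as the Kinect ultimately recover depth by triangulation. The following minimal sketch illustrates the stereo variant discussed above: block-matching correlation produces a disparity map, and depth follows from Z = f * B / d. The focal length, baseline, thresholds, and file names are illustrative assumptions, not parameters of any of the surveyed systems.

```python
import cv2
import numpy as np

FOCAL_PX = 700.0     # focal length in pixels (assumed)
BASELINE_M = 0.12    # distance between the two cameras in metres (assumed)

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching correlation produces a disparity map (fixed point, scaled by 16).
matcher = cv2.StereoBM_create(numDisparities=96, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Triangulation: depth is inversely proportional to disparity (Z = f * B / d).
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]

# Warn about any obstacle closer than one metre in the central field of view.
h, w = depth.shape
center = depth[h // 3: 2 * h // 3, w // 3: 2 * w // 3]
near = center[(center > 0) & (center < 1.0)]
if near.size:
    print(f"Obstacle ahead at ~{near.min():.2f} m")
```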
The algorithm that deciphers the light coding generates a depth image representing the scene. Using machine learning techniques, such as neural networks, to analyze depth images obtained from the Microsoft Kinect sensor enables the recognition of pre-defined features/patterns of the surrounding environment BIB008 . Generally, regarding the contribution that computer vision pattern recognition can make to location systems, whether using stereovision or other image-based sensors like the Kinect, distance can be estimated and, in combination with pattern/feature detection data and an appropriate geographic information system, can contribute to assessing the location of the user. In this context, data for vision-based localization must also be present in the geographic information system used. The geographic information system is a central element in providing any type of location-based service, and its importance is discussed further in this paper.
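As a rough sketch of the depth-pattern classification idea described above (in the spirit of BIB008 , though not its actual network), the code below downsamples a Kinect-style depth image into a coarse occupancy pattern and feeds it to a small neural network. The grid size, class labels, and training data are invented for illustration; a real system would train on labelled recordings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical labels for the walkable-space patterns described above.
LABELS = ["clear path", "obstacle left", "obstacle right", "obstacle ahead"]

def depth_to_pattern(depth_mm, grid=(8, 8), near_mm=1500):
    """Downsample a depth image into a coarse occupancy pattern.

    Each grid cell becomes 1 if its median depth is closer than `near_mm`
    (i.e. a likely obstacle), else 0. Zero depth means "no reading".
    """
    h, w = depth_mm.shape
    gh, gw = grid
    cells = depth_mm[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
    med = np.median(cells, axis=(1, 3))
    return ((med < near_mm) & (med > 0)).astype(np.float32).ravel()

# Training data would come from labelled recordings; random placeholders here.
rng = np.random.default_rng(0)
frames = rng.integers(0, 4000, size=(200, 120, 160)).astype(np.float32)
X_train = np.stack([depth_to_pattern(d) for d in frames])
y_train = rng.integers(0, len(LABELS), size=200)   # placeholder labels

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

new_frame = rng.integers(0, 4000, size=(120, 160)).astype(np.float32)
print(LABELS[int(clf.predict([depth_to_pattern(new_frame)])[0])])
```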
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> This paper presents the incorporation of certain human vision properties into the image processing methodologies applied in a vision substitutive system for the blind. The prototype of the system has a digital video camera fixed in a headgear, stereo earphones, and a laptop computer, interconnected. The processing of the captured image is designed to mimic human vision. It involves lateral inhibition, which is developed using a Feed-Forward Neural Network (FFNN), and domination of the object properties with suppression of the background by means of a Fuzzy-based Image Processing System (FLIPS). The processed image is mapped to stereo acoustic signals sent to the earphones. The sound is generated using a non-linear frequency-incremental sine wave. The scanning sequence used to construct the acoustic signal is designed to produce stereo signals, which aid in locating the object on the horizontal axis. Frequency variation indicates the location of the object on the vertical axis. The system was tested with a blind volunteer, and his suggestions on the formatting, pleasantness, and discrimination of the sound patterns were also considered. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> People with severe visual impairment need a means of remaining oriented to their environment as they move through it. A series of indoor and outdoor trials using a variety of technologies and interfaces led to the development and evaluation of three promising wearable orientation interfaces: a virtual sonic beacon, speech output, and a shoulder-tapping system. Street crossing was used as a critical test situation in which to evaluate these interfaces. The shoulder-tapping system was found most universally usable. Results indicated that, given the great variety of co-morbidities within this population, which comprises mostly older persons, optimal performance and flexibility may best be obtained in a design that combines the best elements of both the speech and shoulder-tapping interfaces. <s> BIB002 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> In this paper, we present an object detection and classification method for OpenEyes-II. OpenEyes-II is a walking guidance system that helps the visually impaired to respond naturally to various situations that can occur in unrestricted natural outdoor environments while walking and reaching the destination. Object detection and classification are requisite for implementing obstacle and face detection, which are major parts of a walking guidance system. It can discriminate pedestrians from obstacles, and extract candidate regions for face detection and recognition. We have used stereo-based segmentation and SVM (Support Vector Machines), which has superior classification performance in binary classification cases such as object detection. The experiments on a large number of street scenes demonstrate the effectiveness of the proposed method. <s> BIB003 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> In this paper, we present a real-time pedestrian detection method in outdoor environments.
Pedestrian detection is necessary for implementing obstacle and face detection, which are major parts of a walking guidance system for the visually impaired. It detects foreground objects on the ground, discriminates pedestrians from other non-interest objects, and extracts candidate regions for face detection and recognition. For effective real-time pedestrian detection, we have developed a method using stereo-based segmentation and the SVM (Support Vector Machines), which works particularly well in binary classification problems (e.g., object detection). We used vertical edge features extracted from arms, legs, and torso. In our experiments, test results on a large number of outdoor scenes demonstrated the effectiveness of the proposed pedestrian detection method. <s> BIB004 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> In spite of the impressive advances related to retinal prostheses, there is no imminent promise of making them available soon with realistic performance to help navigating blind persons. In our new project, we are designing a Bionic Eyeglass that provides wearable TeraOps visual computing power to guide visually impaired people in their daily life. Detection and recognition of signs and displays in real, noisy environments is a key element in many functions of the Bionic Eyeglass. This paper describes spatial-temporal analogic cellular algorithms used for localizing signs and displays, and for recognizing the numbers they contain. <s> BIB005 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> Urban intersections are the most dangerous parts of a blind or visually impaired person's travel. To address this problem, this paper describes the novel "Crosswatch" system, which uses computer vision to provide information about the location and orientation of crosswalks to a blind or visually impaired pedestrian holding a camera cell phone. A prototype of the system runs on an off-the-shelf Nokia camera phone in real time, which automatically takes a few images per second, uses the cell phone's built-in computer to analyze each image in a fraction of a second and sounds an audio tone when it detects a crosswalk. Tests with blind subjects demonstrate the feasibility of the system and its ability to provide useful crosswalk alignment information under real-world conditions. <s> BIB006 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> Urban intersections are the most dangerous parts of a blind or visually impaired person's travel. To address this problem, this paper describes the novel "Crosswatch" system, which uses computer vision to provide information about the location and orientation of crosswalks to a blind or visually impaired pedestrian holding a camera cell phone. A prototype of the system runs on an off-the-shelf Nokia N95 camera phone in real time, which automatically takes a few images per second, analyzes each image in a fraction of a second and sounds an audio tone when it detects a crosswalk. Real-time performance on the cell phone, whose computational resources are limited compared to the type of desktop platform usually used in computer vision, is made possible by coding in Symbian C++. Tests with blind subjects demonstrate the feasibility of the system.
<s> BIB007 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> We describe a wayfinding system for blind and visually impaired persons that uses a camera phone to determine the user's location with respect to color markers, posted at locations of interest (such as offices), which are automatically detected by the phone. The color marker signs are specially designed to be detected in real time in cluttered environments using computer vision software running on the phone; a novel segmentation algorithm quickly locates the borders of the color marker in each image, which allows the system to calculate how far the marker is from the phone. We present a model of how the user's scanning strategy (i.e. how he/she pans the phone left and right to find color markers) affects the system's ability to detect color markers given the limitations imposed by motion blur, which is always a possibility whenever a camera is in motion. Finally, we describe experiments with our system tested by blind and visually impaired volunteers, demonstrating their ability to reliably use the system to find locations designated by color markers in a variety of indoor and outdoor environments, and elucidating which search strategies were most effective for users. <s> BIB008 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> The ability to gain visual information from the environment can be of utmost importance for visually impaired and blind people. Our experimental system, consisting of a cell phone and a compact cellular visual computer, is able to detect and recognize objects and understand basic events around the user in predefined situations to help them in everyday tasks. We developed algorithms for two important new tasks: pedestrian crosswalk detection and identification of gender pictograms. <s> BIB009 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> Over the last decades, a variety of portable or wearable navigation systems have been developed to assist visually impaired people during navigation in known or unknown, indoor or outdoor environments. There are three main categories of these systems: electronic travel aids (ETAs), electronic orientation aids (EOAs), and position locator devices (PLDs). This paper presents a comparative survey among portable/wearable obstacle detection/avoidance systems (a subcategory of ETAs) in an effort to inform the research community and users about the capabilities of these systems and about the progress in assistive technology for visually impaired people. The survey is based on various features and performance parameters of the systems that classify them into categories, giving qualitative-quantitative measures. Finally, it offers a ranking, which will serve only as a reference point and not as a critique of these systems. <s> BIB010 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> We have built an arm-navigation assisting system for a visually impaired person (user) to reach an object on the table, where optical tracking of marks attached both to the objects and to his arm is used in order to augment his sight. The system helps him by giving spatial information about the workspace so that he can create a cognitive map of the workspace.
For this purpose, the degrees of congestion of the workspace must be conveyed to the user. There are five of them, from "narrow" to "broad," which are determined by using well-established Neural Network techniques on the basis of the spatial data obtained from the Distance Field Model (DFM) representation of the workspace. Defining spaciousness by an entropy-like measure based on the DFM data is also proposed separately. <s> BIB011 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> Orientation and mobility are tremendous problems for Blind people. Assistive technologies based on the Global Positioning System (GPS) could provide them with a remarkable autonomy. Unfortunately, GPS accuracy, Geographical Information System (GIS) data and map-matching techniques are adapted to vehicle navigation only, and fail in assisting pedestrian navigation, especially for the Blind. In this paper, we designed an assistive device for the Blind based on an adapted GIS, and fusion of GPS and vision-based positioning. The proposed assistive device may improve user positioning, even in urban environments where GPS signals are degraded. The estimated position would then be compatible with assisted navigation for the Blind. Interestingly, the vision module may also answer the Blind's needs by providing them with situational awareness (localizing objects of interest) along the path. Note that the solution proposed for positioning could also enhance autonomous robot or vehicle localization. <s> BIB012 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> In this paper, machine learning and geometric computer vision are combined for the purpose of automatically reading bus line numbers with a smartphone. This can prove very useful to improve the autonomy of visually impaired people in urban scenarios. The problem is a challenging one, since standard geometric image matching methods fail due to the abundance of distractors, occlusions, illumination changes, highlights and specularities, shadows, and perspective distortions. The problem is solved by locating the main geometric entities of the bus facade through a cascade of classifiers, and then refining the matching with robust geometric matching. The method works in real time and, as experimental results show, has a good performance in terms of recognition rate and reliability. <s> BIB013 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> Microsoft's Kinect 3-D motion sensor is a low-cost 3D camera that provides color and depth information of indoor environments. In this demonstration, the functionality of this fun-only camera accompanied by an iPad's tangible interface is targeted to the benefit of the visually impaired. A computer-vision-based framework for real-time object localization and audio description is introduced. Firstly, objects are extracted from the scene and recognized using feature descriptors and machine learning. Secondly, the recognized objects are labeled by instrument sounds, whereas their position in 3D space is described by virtual space sources of sound.
As a result, the scene can be heard and explored while finger-triggering the sounds within the iPad, on which a top-view of the objects is mapped. This enables blindfolded users to build a mental occupancy grid of the environment. The approach presented here brings the promise of efficient assistance and could be adapted as an electronic travel aid for the visually impaired in the near future. <s> BIB014 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> A vibrotactile array is a promising human-computer interface which could display graphical information to users in a tactile form. This paper presents the design and testing of an image contour display system with a vibrotactile array. The tactile image display system is attached to the back of the user. It converts visual graphics into 2D tactile images and allows subjects to feel the contours of objects through vibration stimuli. The system consists of a USB camera, 48 (6×8) vibrating motors and an embedded control system. The image is captured by the camera and the 2D contour is extracted and transformed into vibrotactile stimuli using a temporal-spatial dynamic coding method. Preliminary experiments were carried out and the optimal parameters of the vibrating time and duration were explored. To evaluate the feasibility and robustness of this vibration mode, letters were also tactilely displayed and the recognition rate for the alphabet letter display was investigated. It was shown that under the condition of no pre-training for the subjects, the recognition rate was 82%. Such a recognition rate is higher than that of the scanning mode (47.5%) and the improved handwriting mode (76.8%). The results indicated that the proposed method was efficient in conveying the contour information to the visually impaired by means of vibrations. <s> BIB015 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> In this paper we introduce Audible Vision, a system that can help blind and visually impaired users navigate in large indoor open spaces. The system uses computer vision to estimate the location and orientation of the user, and enables the user to perceive his/her relative position to a landmark through 3D audio. Testing shows that Audible Vision can work reliably in real-life, ever-changing environments crowded with people. <s> BIB016 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> Independent travel is a well-known challenge for blind and visually impaired persons. In this paper, we propose a proof-of-concept computer vision-based wayfinding aid for blind people to independently access unfamiliar indoor environments. In order to find different rooms (e.g. an office, a lab, or a bathroom) and other building amenities (e.g. an exit or an elevator), we incorporate object detection with text recognition. First we develop a robust and efficient algorithm to detect doors, elevators, and cabinets based on their general geometric shape, by combining edges and corners. The algorithm is general enough to handle large intra-class variations of objects with different appearances among different indoor environments, as well as small inter-class differences between different objects such as doors and door-like cabinets. Next, in order to distinguish intra-class objects (e.g.
an office door from a bathroom door), we extract and recognize text information associated with the detected objects. For text recognition, we first extract text regions from signs with multiple colors and possibly complex backgrounds, and then apply character localization and topological analysis to filter out background interference. The extracted text is recognized using off-the-shelf optical character recognition (OCR) software products. The object type, orientation, location, and text information are presented to the blind traveler as speech. <s> BIB017 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> We develop a novel camera-based computer vision technology to automatically recognize banknotes to assist visually impaired people. Our banknote recognition system is robust and effective with the following features: 1) high accuracy: high true recognition rate and low false recognition rate; 2) robustness: handles a variety of currency designs and bills in various conditions; 3) high efficiency: recognizes banknotes quickly; and 4) ease of use: helps blind users to aim the target for image capture. To make the system robust to a variety of conditions including occlusion, rotation, scaling, cluttered background, illumination change, viewpoint variation, and worn or wrinkled bills, we propose a component-based framework using speeded-up robust features (SURF). Furthermore, we employ the spatial relationship of matched SURF features to detect if there is a bill in the camera view. This process largely alleviates false recognition and can guide the user to correctly aim at the bill to be recognized. The robustness and generalizability of the proposed system are evaluated on a dataset including both positive images (with U.S. banknotes) and negative images (no U.S. banknotes) collected under a variety of conditions. The proposed algorithm achieves a 100% true recognition rate and a 0% false recognition rate. Our banknote recognition system is also tested by blind users. <s> BIB018 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> This paper proposes a novel assistive system for the visually impaired. The system is composed of a Microsoft Kinect sensor, a keypad-type controller, a tactile device, a laptop computer, and so on. The system can recognize three-dimensional objects from depth data generated by the Kinect sensor, and inform visually impaired users not only about the existence of objects, but also about their classes, such as chairs and upward stairs. Ordinarily, the system works as a conventional white cane. When a user instructs the system to find an object of a particular class, the system executes the recognition scheme that is designed to find the instructed object. If the object is found in the field of view of the Kinect sensor, the tactile device provides vibration feedback. The recognition schemes are applied to actual scenes. The experimental results indicate that the system is promising as a means of helping the visually impaired find the desired objects. <s> BIB019 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> The aim of this paper is to present a service for the blind and people with low vision to assist them in crossing the street independently.
The presented approach provides the user with significant information, such as the detection of the pedestrian crossing signal from any point of view, when the pedestrian crossing signal light is green, the detection of dynamic and fixed obstacles, predictions of the movement of fellow pedestrians, and information on objects which may intersect his path. Our approach is based on capturing multiple frames using a depth camera which is attached to the user's headgear. Currently, a testbed system is built on a helmet and is connected to a laptop in the user's backpack. In this paper, we discuss the efficiency of using the Speeded-Up Robust Features (SURF) algorithm for object recognition for the purpose of assisting blind people. The system predicts the movement of objects of interest to provide the user with information on the safest path to navigate and information on the surrounding area. Evaluation of this approach on real video frame sequences provides 90% human detection and more than 80% recognition of other related objects. <s> BIB020 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> Object detection and recognition, along with interpretation, are considered minimum survival requirements for all creatures in the world. Human beings, in particular, rule the world because of interpretation and survival tactics superior to those of other animals. Automatic interpretation of objects, and eventually reaction to events, makes an environment a better place to live. This paper implements a method to track and recognize objects in the surveillance area of visually impaired people. In order to survive in the real world, visually impaired people have to be aware of the environment. Visually impaired people need some assistance in order to move from one place to another in day-to-day life. It might be in a dependent manner, with the help of others, or in an independent manner, with the help of canes or trained dogs to guide them. In both cases, the main objective is to detect the obstacle in front of them and avoid it while moving. With the advent of electronic technologies, self-assistive devices have been made to help them. The system should be able to report the location, distance, and direction of items in the room such as equipment, furniture, doors, and even other users. It must be a reliable system that minimizes the impact of installation and maintenance. A great number of benefits are realized from the implementation of such systems, such as greater safety and, eventually, an enhanced quality of life. <s> BIB021 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> Assisting the visually impaired along their navigation path is a challenging task which has drawn the attention of several researchers. Many techniques based on RFID, GPS, and computer vision modules are available for blind navigation assistance. In this paper, we propose a depth estimation technique from a single image based on a local depth hypothesis, devoid of any user intervention, and its application to assist visually impaired people. The ambient space ahead of the user is captured by a camera and the captured image is resized for computational efficiency. The obstacles in the foreground of the image are segregated using edge detection followed by morphological operations.
Then depth is estimated for each obstacle based on the local depth hypothesis. The estimated depth map is then compared with the reference depth map of the corresponding depth hypothesis, and the deviation of the estimated depth map from the reference depth map is used to retrieve the spatial information about the obstacles ahead of the user. <s> BIB022 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> In this work we describe the main features of software modules developed for Android smartphones that are dedicated to blind users. The main module can recognise and match scanned objects to a database of objects, e.g. food or medicine containers. The two other modules are capable of detecting major colours and locating the direction of the maximum-brightness regions in the captured scenes. We conclude the paper with a short summary of the tests of the software aiding activities of daily living of a blind user. <s> BIB023 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> This paper proposes a novel concept for helping the visually impaired know what kind of object there is in an environment. This concept is implemented as a cane system that selects a target object based on a user's demand, recognizes the object from depth data obtained by a Microsoft Kinect sensor, and returns the recognition results via a tactile device. The proposed system is evaluated through a user study where one blindfolded subject actually uses the system to find chairs in an experimental environment. The experimental results indicate that the system is promising as a means of helping the visually impaired recognize objects. <s> BIB024 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Spatial orientation <s> A computer vision-based wayfinding and navigation aid can improve the mobility of blind and visually impaired people to travel independently. In this paper, we develop a new framework to detect and recognize stairs, pedestrian crosswalks, and traffic signals based on RGB-D (Red, Green, Blue, and Depth) images. Since both stairs and pedestrian crosswalks are characterized by groups of parallel lines, we first apply the Hough transform to extract the concurrent parallel lines based on the RGB (Red, Green, and Blue) channels. Then, the Depth channel is employed to recognize pedestrian crosswalks and stairs. The detected stairs are further identified as stairs going up (upstairs) and stairs going down (downstairs). The distance between the camera and stairs is also estimated for blind users. Furthermore, the traffic signs of pedestrian crosswalks are recognized. The detection and recognition results on our collected datasets demonstrate the effectiveness and efficiency of our proposed framework. <s> BIB025
Visually impaired people often want more than just information about their location, having the need to relate their current location to the features existing in the surrounding environment. Orientation and mobility are essential skills for performing proper navigation. In this process, mobility, or micro-navigation, relates to obstacle detection and avoidance in the immediate physical environment. Orientation, or macro-navigation, translates into the ability to create and maintain awareness of one's position in the physical space relative to both the landmarks in the surrounding environment, whether they are points of interest (POI) or obstacles (from micro-navigation), and to the user's desired destination BIB002 . A wide range of systems and tools is available for enhancing the mobility of visually impaired individuals. The white cane and the guide dog are the most popular. The white cane is the simplest, cheapest, most reliable, and most popular. However, it does not provide all the necessary contextual information, such as speed, volume, and distance. This information is usually gathered by the eyes and is necessary for the perception and control of locomotion BIB010 . Several approaches have been pursued over the last decades to address problems relevant to blind mobility and context awareness. They can be classified into two main categories. "Electronic Travel Aids" (ETAs) are designed to improve mobility by detecting obstacles in the user's surroundings. In order to improve the blind user's autonomy, "Electronic Orientation Aids" (EOAs) provide the blind with some degree of situational awareness and guidance in unknown environments BIB012 . Apart from a few implementations that use some of the location techniques described in the previous section, up to now, EOAs have mainly been based on GNSS and location-based services. However, in recent years, computer vision techniques have successfully been used to provide contextual awareness and orientation indications. In general, these assistive orientation systems use computer vision techniques to provide information ranging from the simple presence of obstacles, or the distinction between fixed and moving obstacles, to the recognition of specific objects in the captured image. In some cases, even the distance and relative displacement of the detected objects to the user are provided, using depth information. Although very simple in their purpose, systems designed to provide the blind user with information about the existence of objects in his path (through the use of artificial vision sensors) use a wide range of techniques to analyze the image. Traditional image processing techniques can be used to detect the contours of objects in the scene BIB022 BIB015 . More advanced approaches use artificial intelligence techniques to detect obstacles in the captured image BIB001 and even to classify the scene, presenting basic forms of characterization/description of the environment as being very cluttered or relatively broad BIB011 . Other classification methods may provide information regarding the spatial distribution of the obstacles/objects in the scene BIB016 , achieving the overall objective of providing direct, specific orientation instructions and simple contextual awareness. More advanced systems, which apply object recognition algorithms to detect and recognize specific objects in the scene, go even further, trying to reduce the gap between sighted and non-sighted people.
Using their natural sensors, sighted users not only detect the existence of objects and obstacles in their immediate surroundings, but they are also able to recognize them and their attributes, such as color, shape, and relative spatial orientation. The simplest approaches use markers placed at specific points of interest BIB008 . When detected, these markers are used to estimate the user's location and, subsequently, the objects that are expected to be found in the scene. Additionally, it is also possible to inform the user about the distance and relative position to the marker (pose). However, the most common systems that use object recognition to provide contextual information try to locate and recognize natural objects in the scene without the need to use artificial markers placed in the infrastructure. As discussed in earlier subsections, the placement of markers/sensors in the infrastructure is costly and requires a lot of maintenance. Given this fact, many assistive systems nowadays try to give the user information about the presence and orientation of natural objects in the scene, such as crosswalks BIB006 BIB007 BIB009 or text commonly found in places like buses or office doors BIB017 BIB005 BIB013 . Even the distinction between similar objects used in everyday life that may be easily confused by blind users, like different bank notes BIB018 or food and medicine containers BIB023 , can be incorporated in spatial orientation systems which use advanced computer vision techniques to provide spatial awareness through the recognition of natural objects. Although not specifically related to spatial orientation, the techniques used in these examples provide awareness about the presence of physical items in the context of the user, and the same techniques may be extended to the purpose of spatial awareness. Table 2 summarizes the features provided by the most common spatial orientation devices, as well as their availability in terms of indoor vs. outdoor scenarios. With the recent advances in 3D vision and depth sensors, an all-new kind of contextual input may be used in assistive systems for the visually impaired: depth information. Using feature descriptors and machine learning techniques, different objects can be extracted and classified BIB014 . These types of systems can recognize three-dimensional objects from the depth data and inform visually impaired users not only about the existence of objects but also their class, such as chairs and upward stairs BIB024 BIB019 BIB025 , working similarly to a conventional white cane with an extended range. Some systems even incorporate the detection and distinction between fixed and moving obstacles and object recognition in one global solution, mostly for pedestrian detection and avoidance BIB003 BIB004 BIB020 BIB021 .
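Since several of the object classes mentioned above, such as crosswalks and stairs, are characterized by groups of parallel lines, the Hough transform is a recurring building block in these systems. The sketch below, loosely in the spirit of BIB025 but not a reproduction of it, flags clusters of near-parallel line segments in a camera frame; all thresholds and the file name are illustrative assumptions.

```python
import cv2
import numpy as np

# Detect groups of near-parallel lines (a crosswalk/stairs cue) with the
# probabilistic Hough transform. All thresholds are illustrative.
frame = cv2.imread("street.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 80, 160)

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=60, minLineLength=80, maxLineGap=10)

angles = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angles.append(np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180)

# Histogram the line orientations; a dominant bin suggests a parallel group.
if angles:
    hist, bin_edges = np.histogram(angles, bins=18, range=(0, 180))
    k = int(np.argmax(hist))
    if hist[k] >= 5:   # at least five roughly parallel lines
        print(f"Parallel structure near {bin_edges[k]:.0f}-{bin_edges[k+1]:.0f} "
              f"deg ({hist[k]} lines) - possible crosswalk or stairs")
```

In a complete system of the kind surveyed, the depth channel would then disambiguate a flat crosswalk from ascending or descending stairs.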
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Navigation <s> This paper presents the research for the development of a new travel aid to increase the independent mobility of blind and elderly travellers. This aid will build on the technologies of geographical information systems (GIS) and the Global Positioning System (GPS). The MOBIC Travel Aid (MOTA) consists of two interrelated components: the MOBIC Pre-journey System (MOPS) to assist users in planning journeys and the MOBIC Outdoor System (MooDs) to execute these plans by providing users with orientation and navigation assistance during journeys. The MOBIC travel aid is complementary to primary mobility aids such as the long cane or guide dog. Results of a study of user requirements are presented and their implications for the initial design of the system are discussed. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Navigation <s> We describe some of the results of our program of basic and applied research on navigating without vision. One basic research topic that we have studied extensively is path integration, a form of navigation in which perceived self-motion is integrated over time to obtain an estimate of current position and orientation. In experiments on pathway completion, one test of path integration ability, we have found that subjects who are passively guided over the outbound path without vision exhibit significant errors when attempting to return to the origin but are nevertheless sensitive to turns and segment lengths in the stimulus path. We have also found no major differences in path integration ability among blind and sighted populations. A model we have developed that attributes errors in path integration to errors in encoding the stimulus path is a good beginning toward understanding path integration performance. In other research on path integration, in which optic flow information was manipulated in addition to the proprioceptive and vestibular information of nonvisual locomotion, we have found that optic flow is a weak input to the path integration process. In other basic research, our studies of auditory distance perception in outdoor environments show systematic underestimation of sound source distance. Our applied research has been concerned with developing and evaluating a navigation system for the visually impaired that uses three recent technologies: the Global Positioning System, Geographic Information Systems, and virtual acoustics. Our work shows the considerable promise of these three technologies in allowing visually impaired individuals to navigate and learn about unfamiliar environments without the assistance of human guides. <s> BIB002 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Navigation <s> The position-tracking accuracy of a location-aware mobile system can change dynamically as a function of the user's location and other variables specific to the tracker technology used. This is especially problematic for mobile augmented reality systems, which ideally require extremely precise position tracking for the user's head, but which may not always be able to achieve the necessary level of accuracy.
While it is possible to ignore variable positional accuracy in an augmented reality user interface, this can make for a confusing system; for example, when accuracy is low, virtual objects that are nominally registered with real ones may be too far off to be of use. To address this problem, we describe the early stages of an experimental mobile augmented reality system that adapts its user interface automatically to accommodate changes in tracking accuracy. Our system employs different technologies for tracking a user's position, resulting in a wide variation in positional accuracy: an indoor ultrasonic tracker and an outdoor real-time kinematic GPS system. For areas outside the range of both, we introduce a dead-reckoning approach that combines a pedometer and orientation tracker with environmental knowledge expressed in spatial maps and accessibility graphs. We present preliminary results from this approach in the context of a navigational guidance system that helps users to orient themselves in an unfamiliar environment. Our system uses inferencing and path planning to guide users toward targets that they choose. <s> BIB003 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Navigation <s> Walking is the most fundamental means of human transportation. Unlike travel by car, walking is not planar, but rather stereoscopic. We therefore developed a real navigation system for pedestrian point-to-point navigation. We propose herein a method of 3D pedestrian navigation, in which position detection is driven mainly by dead reckoning. The proposed method enables ubiquitous round-the-clock 3D positioning, even inside buildings or between tall buildings. In addition, pedestrian navigation is customized by changing the costs of the road network links. Finally, a positioning data accumulation system is implemented so that we can log tracks and easily incorporate new roads or attributes in the future. <s> BIB004 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Navigation <s> The Context sensitive Indoor Navigation System (CoINS) implements an architecture to develop context-aware indoor user guidance services and applications. This paper presents a detailed discussion on algorithms and architectural issues in building an indoor guidance system. We first start with the World Model and the required mapping to 2D for the process of path calculation and simplification. We also compare several algorithm optimizations applied in this particular context. The system provides the infrastructure to support different techniques of presenting the path and supporting user orientation to reach a certain destination in indoor premises. <s> BIB005 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Navigation <s> This paper describes path planning and following algorithms for use in indoor navigation for the blind and visually impaired. Providing indoor navigational assistance for this type of user presents additional challenges not faced by conventional guidance systems, due to the personal nature of the interactions. The algorithms are part of an overall Indoor Navigation Model that is used to provide assistance and guidance in unfamiliar indoor environments.
Path planning uses the A* and Dijkstra shortest-path algorithms to operate on an "Intelligent Map", which is based on a new data structure termed a "cactus tree", predicated on the relationships between the different objects that represent an indoor environment. The paths produced are termed "virtual hand rails", which can be used to dynamically plan a path for a user within a region. The path following algorithm is based on dead reckoning, but incorporates human factors as well as information about the flooring and furnishing structures along the intended planned path. Experimental and simulation results show that the guiding/navigation problem becomes a divergent mathematical problem if the positional information offered by the positioning and tracking systems does not reach a certain requirement. This research explores the potential to design an application for the visually impaired even when current 'positioning and tracking' systems cannot offer the reliable position information that is highly required by this type of application. <s> BIB006 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Navigation <s> Indoor navigation technology is needed to support seamless mobility for the visually impaired. A small portable personal navigation device that provides the current position, useful contextual wayfinding information about the indoor environment, and directions to a destination would greatly improve access and independence for people with low vision. This paper describes the construction of such a device, which utilizes a commercial Ultra-Wideband (UWB) asset tracking system to support real-time location and navigation information. Human trials were conducted to assess the efficacy of the system by comparing target-finding performance between blindfolded subjects using the navigation system for real-time guidance, and blindfolded subjects who only received speech information about their local surroundings but no route guidance information (similar to that available from a long cane or guide dog). A normal-vision control condition was also run. The time and distance traveled were measured in each trial and a point-back test was performed after goal completion to assess cognitive map development. Statistically significant differences were observed between the three conditions in time and distance traveled, with the navigation system and the visual condition yielding the best results, and the navigation system dramatically outperforming the non-guided condition. <s> BIB007 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Navigation <s> This paper proposes a framework to enable intuitive navigation guidance for complex buildings which are huge in size and whose space boundaries are non-convex, including non-navigable areas inside. Our approach utilizes a 'topological' way-finding method to generate paths. This can be done by means of the integration of a building information model (BIM) with our new algorithm to subdivide the spaces. The second main principle is to improve the visual information by using a new method to render all three-dimensional views possibly observed in a building beforehand. This has been realized by imaging services using a client-server architecture with supercomputer computation power.
<s> BIB008 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Navigation <s> This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision only rely on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information. <s> BIB009 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Navigation <s> Whereas outdoor navigation systems typically rely upon GPS, indoor systems have to rely upon different techniques for localizing the user, as GPS signals cannot be received indoors. Over the past decade various indoor navigation systems have been developed. This paper provides a comprehensive overview of existing indoor navigation systems and analyzes the different techniques used for: (1) locating the user; (2) planning a path; (3) representing the environment; and (4) interacting with the user. Our survey identifies a number of research issues that could facilitate large-scale deployment of indoor navigation systems. <s> BIB010
The term "navigation" defines the behavior of moving toward a destination, with all the motor, sensory, and cognitive processes that it implies. Downs and Stea define navigation as "the process of solving one class of spatial problems, the movement of a person from one location on the earth's surface to another". They divided the process into four tasks: orienting oneself in the environment, choosing the route, keeping on track, and recognizing that the destination has been reached. Human navigation is performed using a combination of mobility and orientation. In general, human navigation in indoor and outdoor environments is performed by measuring the distance and relative orientation to one, or multiple, reference points (context). People employ either path integration, orienting themselves relative to a starting position, or landmark-based navigation, where they rely upon perceptual cues together with an external or cognitive map. Humans may also use a combination of both path integration and landmark-based navigation BIB002 . A number of features in the environment can be used to help determine the location. When such features cannot be perceived, humans maintain a sense of where they are by relying on estimates of the direction and velocity of movement obtained from their vestibular, proprioceptive, and kinesthetic senses, here referred to as path integration BIB009 . In the case of path integration, a single reference point is used throughout the navigation, and the location is estimated based on the addition of all the changes in position and orientation. In the case of landmark-based navigation, users move from reference point to reference point (landmarks) as they navigate in the environment, considering the relative position of the landmarks. In this case, a physical or cognitive map of the environment is used. By periodically measuring the displacement and changes in orientation (based on heading and motion) and combining them with the distance and orientation relative to a reference point, such as a landmark, users can estimate their new location and orientation while navigating in an environment. A powerful assistive device combines both micro-navigation (sensing the immediate environment) and macro-navigation (reaching a remote destination) functionalities. The micro-navigation functions serve to restore a set of sensorimotor behaviors based on visual object localization (context). The macro-navigation functions provide the user with global orientation and navigation skills. All navigation systems have three functional components: an input module to determine the location and orientation in space, a spatial database of the environment, and an interface that delivers information to the user. Location information is usually obtained using the individual location technologies discussed in a previous section, or by a fusion or combination of different inputs, including computer vision. These three components are used as well in navigation systems designed for the visually impaired BIB007 . The location can be used for both planning the path and providing contextual information. The advantage of using a navigation system to plan a route is that the path can be optimized among different candidate paths, based on specific user requirements such as the shortest or safest path. In the case of individuals with vision impairments, a path that goes along walls reduces the chance of the user getting lost, and a path that avoids low ceilings is much safer BIB010 .
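A minimal sketch of the path integration idea described above: successive heading and step-length estimates, which in a wearable system would come from inertial sensors, are summed to track the user's position relative to the starting point. The step data here are invented placeholders.

```python
import math

def integrate_path(steps):
    """Dead-reckon a 2D position from (heading_deg, step_length_m) pairs.

    Heading is measured clockwise from north; position starts at the origin.
    """
    x = y = 0.0
    for heading_deg, length_m in steps:
        rad = math.radians(heading_deg)
        x += length_m * math.sin(rad)   # east component
        y += length_m * math.cos(rad)   # north component
    return x, y

# Hypothetical walk: 10 steps north, turn right, 5 steps east.
walk = [(0.0, 0.7)] * 10 + [(90.0, 0.7)] * 5
east, north = integrate_path(walk)
print(f"Estimated offset: {east:.1f} m east, {north:.1f} m north")
```

As the abstracts above note, small heading and step-length errors accumulate over time, which is why practical systems periodically re-anchor the estimate against landmarks or radio beacons.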
In this context, a geographic information system (GIS) designed to enable all these assistive features must provide ways to store and deliver data of much broader extent than simple points of interest (POIs) and POI categorization. An appropriate geographic system is a core element in any navigation system. Path planning algorithms use graphs or grids to represent the environment, and these elements must also be stored in the GIS. To plan a path using graph-based approaches, the environment is divided into sets of nodes and edges connecting these nodes. Edges connect nodes based on the environment map and on whether one node is accessible from the other. Each edge may have a weight assigned to it based on different criteria for the path planning. A graph-based approach has the advantage of creating nodes only where objects exist, and edges only where objects are accessible from each other. In complicated environments with many objects, however, the graph may become large and degrade the performance of the path planning algorithm. The weight associated with edges or cells plays an important role when customizing a path: for example, edges containing stairs receive higher weights for a path that should avoid stairs, as do edges with low ceilings when planning a path for individuals with visual impairments. Most current navigation systems use either Dijkstra BIB003 BIB004 BIB008 BIB005 or A* BIB003 BIB006 BIB001 for path planning, as sketched below.
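To illustrate the weighted-graph formulation described above, the following minimal sketch runs Dijkstra's algorithm over an adjacency list whose edges carry accessibility tags; the tag names, penalty values, and graph format are illustrative assumptions, not details of any cited system.

    import heapq

    # Penalty multipliers for edges a visually impaired user may want to avoid
    PENALTIES = {"stairs": 10.0, "low_ceiling": 5.0}  # illustrative values

    def dijkstra(graph, start, goal):
        # graph: {node: [(neighbor, base_cost, tags), ...]}; assumes goal is reachable
        dist = {start: 0.0}
        prev = {}
        queue = [(0.0, start)]
        while queue:
            d, node = heapq.heappop(queue)
            if node == goal:
                break
            if d > dist.get(node, float("inf")):
                continue  # stale queue entry
            for nbr, cost, tags in graph[node]:
                # Inflate the edge weight by the largest applicable penalty
                factor = max([PENALTIES.get(t, 1.0) for t in tags] + [1.0])
                nd = d + cost * factor
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    prev[nbr] = node
                    heapq.heappush(queue, (nd, nbr))
        path = [goal]
        while path[-1] != start:  # walk predecessors back to the start
            path.append(prev[path[-1]])
        return path[::-1]

Replacing the priority d with d plus an admissible estimate of the remaining distance to the goal turns the same loop into A*.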
A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Interface <s> This paper presents incorporation of certain human vision properties in the image processing methodologies, applied in the vision substitutive system for human blind. The prototype of the system has digital video camera fixed in a headgear, stereo earphone and a laptop computer, interconnected. The processing of the captured image is designed as human vision. It involves lateral inhibition, which is developed using Feed Forward Neural Network (FFNN) and domination of the object properties with suppression of background by means of Fuzzy based Image Processing System (FLIPS). The processed image is mapped to stereo acoustic signals to the earphone. The sound is generated using non-linear frequency incremental sine wave. The sequence of the scanning to construct the acoustic signal is designed to produce stereo signals, which aids to locate the object in horizontal axis. Frequency variation implies the location of object in the vertical axis. The system is tested with blind volunteer and his suggestion in formatting, pleasantness and discrimination of sound pattern were also considered. <s> BIB001 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Interface <s> Our work is dealing with alternative interaction modes for visually impaired and blind people to use computers. The aim of the proposed approach is to exploit the human hearing capabilities to a better degree than this is done by customary screen-readers. A surrounding, three-dimensional audio interface is potentially increasing the information flow between a computer and the user. This paper presents a virtual audio reality (VAR) system which allows computer users to explore a virtual environment only by their sense of hearing. The used binaural audio rendering implements directional hearing and room acoustics via headphones to provide an authentic simulation of a real room. Users can freely move around using a joystick. The proposed application programming interface (API) is intended to ease the development of user applications for this VAR system. It provides an easy to use C++ interface to the audio rendering layer. The signal processing is performed by a digital signal processor (DSP). Besides the details of the technical realisation, this paper also investigates the user requirements for the target group. <s> BIB002 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Interface <s> Audio navigation interfaces have traditionally been studied (and implemented) using headphones. However, many potential users (especially those with visual impairments) are hesitant to adopt these emerging wayfinding technologies if doing so requires them to reduce their ability to hear environmental sounds by wearing headphones. In this study we examined the performance of the SWAN audio navigation interface using bone-conduction headphones (“bonephones”), which do not cover the ear. Bonephones enabled all participants to complete the navigation tasks with good efficiencies, though not immediately as effective as regular headphones. 
Given the functional success here, and considering that the spatialization routines were not optimized for bonephones (this essentially represents a worst-case scenario), the prospects are excellent for more widespread usage of bone conduction for auditory navigation, and likely for many other auditory displays. <s> BIB003 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Interface <s> The objective of this study is to improve the quality of life for the visually impaired by restoring their ability to self-navigate. In this paper we describe a compact, wearable device that converts visual information into a tactile signal. This device, constructed entirely from commercially available parts, enables the user to perceive distant objects via a different sensory modality. Preliminary data suggest that this device is useful for object avoidance in simple environments. <s> BIB004 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Interface <s> In an easel, a clamp for suspending sheets or other objects is formed with an elongated plate and a pair of brackets that support a bar. The brackets are particularly formed so that they incline downwardly towards the plate upon which they are mounted and the bar is arranged to be slidingly affixed to the brackets. The bar slides up and down and may grip objects placed between it and the plate. Ideally, the bar is provided with cushion means which provide the actual gripping action against the plate. <s> BIB005 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Interface <s> This keynote paper discusses the problem of outdoor mobility of the visually impaired and reviews key assistive technologies aiding the blind in independent travel. Space perception abilities important for mobility of the visually impaired are discussed first and related definitions and basic concepts such as: cognitive mapping, wayfinding and navigation are explained. The main mobility barriers the visually impaired encounter in every day life are pointed out. In this respect special attention is given to the information the blind traveller needs to be safer and more skilful in mobility. Also sensory substitution methods and interfaces for nonvisual presentation of the obstacles and communicating navigational data are addressed. Finally, the current projects under way and available technologies aiding the blind in key mobility tasks such as: obstacle avoidance, orientation, navigation and travel in urban environments are reviewed and discussed. <s> BIB006 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Interface <s> In this paper we introduce Audible Vision, a system that can help blind and visually impaired users navigate in large indoor open spaces. The system uses computer vision to estimate the location and orientation of the user, and enables the user to perceive his/her relative position to a landmark through 3D audio. Testing shows that Audible Vision can work reliably in real-life ever-changing environment crowded with people. <s> BIB007 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Interface <s> A vibrotactile array is a promising human computer interface which could display graphical information to users in a tactile form. 
This paper presents the design and testing of an image contour display system with a vibrotactile array. The tactile image display system is attached to the back of the user. It converts visual graphics into 2D tactile images and allows subjects to feel the contours of objects through vibration stimulus. The system consists of a USB camera, 48 (6×8) vibrating motors and an embedded control system. The image is captured by the camera and the 2D contour is extracted and transformed into vibrotactile stimuli using a temporal-spatial dynamic coding method. Preliminary experiments were carried out and the optimal parameters of the vibrating time and duration were explored. To evaluate the feasibility and robustness of this vibration mode, letters were also tactilely displayed and the recognition rate about the alphabet letter display was investigated. It was shown that under the condition of no pre-training for the subjects, the recognition rate was 82%. Such a recognition rate is higher than that of the scanning mode (47.5%) and the improved handwriting mode (76.8%). The results indicated that the proposed method was efficient in conveying the contour information to the visually impaired by means of vibrations. <s> BIB008 </s> A review of assistive spatial orientation and navigation technologies for the visually impaired <s> Interface <s> This paper proposes a novel concept for helping the visually impaired know what kind of object there is in an environment. This concept is implemented as a cane system that selects a target object based on a user's demand, recognizes the object from depth data obtained by a Microsoft Kinect sensor, and returns the recognition results via a tactile device. The proposed system is evaluated through a user study where one blindfolded subject actually uses the system to find chairs in an experimental environment. The experimental results indicate that the system is promising as means of helping the visually impaired recognize objects. <s> BIB009
Accurate recognition and distinction between the contextual elements found in the environment, whether obtained through computer vision or any other form of input, is of the highest importance in an EOA device. Interfacing with the user to provide information about the elements found in the scene is equally crucial, as the interpretation of the reality around users directly influences their safety and, ultimately, the adoption of this kind of assistive technology. The most common ways of interfacing with the user are sonification, audio description and haptic interfaces. These channels are used both to deliver alerts about the physical elements detected and to convey and explain wayfinding instructions. Sonification is, by definition, the use of non-speech audio to convey information or perceptualize data. The use of acoustic (sound/sonification) patterns to provide this information to the user is very common among EOAs for the visually impaired BIB005 BIB001 . In some cases, sonification is even used to provide the relative position of the detected obstacles BIB007 . These systems use 3D audio (Fig. 2) to provide audio cues that can be perceived as if they were being generated by the detected landmark. The concept behind 3D audio is the use of virtual sound sources placed at different locations to provide the feeling of directional hearing. The most obvious advantage of adding spatial sound modeling to audio interfaces over sequential techniques is the natural perception: individuals without hearing impairment use their directional hearing for orientation at all times BIB002 . This kind of interface can be used to provide simple, yet immediately perceivable cues about bearing or relative position (pose) to an obstacle. The fact that blind people often rely on audio cues from the environment for orientation constrains the use of headphones for acoustic feedback; alternatives like bonephones are viable BIB003 . Audio description raises the same considerations as the sonification methods; one major issue to be considered in the design of an interface is whether a rich description of the scene, or only highly symbolic information, should be provided to the user . Another approach is to present the information about the obstacles detected in the image through haptic interfaces BIB009 BIB008 BIB004 . 3D range data may be converted into stimuli delivered by a 2D vibrating array attached to the user's body . With appropriate signal coding, such 2D vibrating patterns can reproduce depth information. Haptic interfaces are also used with an array of pins that works in a similar way to a Braille display . Other, less usual forms of interface exist only in investigational devices that are not available for commercial use. One example consists of a camera (mounted in sunglasses), one transducer and a postage-stamp-size electrode array that is positioned on the top surface of the tongue. The recorded images are translated into gentle electrical signals and displayed on the tongue. The generated stimulation patterns reflect key features of the recorded images, like high-contrast objects and their movements. As a general consideration, any of the user's remaining sensory channels (tactile or acoustic) can be used. However, their use should be carefully considered, as it may interfere with performing other tasks that the blind users cannot do without.
The amount of information to be presented to the user should be carefully considered as well, since the information capacity of the non-visual senses is much smaller than that of vision BIB006 . The cues provided by these interface channels represent the most common ways of interfacing with assistive devices for the blind, and they provide the means to understand the generated information, whether for context description or wayfinding; the brief sketch below illustrates the sonification case.
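As a concrete illustration of the sonification scheme discussed above (stereo panning encodes the horizontal axis, pitch encodes the vertical axis), the sketch below renders a short stereo cue for a detected obstacle. The mapping ranges and parameter values are assumptions for illustration, not those of any reviewed system.

    import numpy as np

    def obstacle_cue(azimuth, elevation, sr=44100, dur=0.3):
        # azimuth in [-1 (left), +1 (right)]; elevation in [0 (low), 1 (high)]
        freq = 300.0 + 900.0 * elevation            # higher object -> higher pitch
        t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
        tone = 0.5 * np.sin(2.0 * np.pi * freq * t)
        left = tone * (1.0 - azimuth) / 2.0         # simple constant-sum pan law
        right = tone * (1.0 + azimuth) / 2.0
        return np.stack([left, right], axis=1)      # (samples, 2) buffer for playback

A full 3D-audio implementation would instead convolve the tone with head-related transfer functions (HRTFs) so that the cue appears to originate from the landmark itself.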
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership and then the user is asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee, large margin, and posterior probability-based. For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing the good architecture are provided for new and/or unexperienced user. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> Deep learning has been shown to achieve outstanding performance in a number of challenging real-world applications. However, most of the existing works assume a fixed set of labeled data, which is not necessarily true in real-world applications. Getting labeled data is usually expensive and time consuming. Active labelling in deep learning aims at achieving the best learning result with a limited labeled data set, i.e., choosing the most appropriate unlabeled data to get labeled. This paper presents a new active labeling method, AL-DL, for cost-effective selection of data to be labeled. AL-DL uses one of three metrics for data selection: least confidence, margin sampling, and entropy. The method is applied to deep learning networks based on stacked restricted Boltzmann machines, as well as stacked autoencoders. In experiments on the MNIST benchmark dataset, the method outperforms random labeling consistently by a significant margin. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> Recent advances in microscopy imaging and genomics have created an explosion of patient data in the pathology domain. Whole-slide images (WSIs) of tissues can now capture disease processes as they unfold in high resolution, recording the visual cues that have been the basis of pathologic diagnosis for over a century. Each WSI contains billions of pixels and up to a million or more microanatomic objects whose appearances hold important prognostic information. Computational image analysis enables the mining of massive WSI datasets to extract quantitative morphologic features describing the visual qualities of patient tissues. When combined with genomic and clinical variables, this quantitative information provides scientists and clinicians with insights into disease biology and patient outcomes. To facilitate interaction with this rich resource, we have developed a web-based machine-learning framework that enables users to rapidly build classifiers using an intuitive active learning process that minimizes data labeling effort. 
In this paper we describe the architecture and design of this system, and demonstrate its effectiveness through quantification of glioma brain tumors. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> We present a simple and yet effective approach that can incorporate rationales elicited from annotators into the training of any off-the-shelf classifier. We show that our simple approach is effective for multinomial naïve Bayes, logistic regression, and support vector machines. We additionally present an active learning method tailored specifically for the learning with rationales framework. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> "Wonderfully erudite, humorous, and easy to read." --KDNuggets In the world's top research labs and universities, the race is on to invent the ultimate learning algorithm: one capable of discovering any knowledge from data, and doing anything we want, before we even ask. In The Master Algorithm, Pedro Domingos lifts the veil to give us a peek inside the learning machines that power Google, Amazon, and your smartphone. He assembles a blueprint for the future universal learner--the Master Algorithm--and discusses what it will mean for business, science, and society. If data-ism is today's philosophy, this book is its bible. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> Interactive model analysis, the process of understanding, diagnosing, and refining a machine learning model with the help of interactive visualization, is very important for users to efficiently solve real-world artificial intelligence and data mining problems. Dramatic advances in big data analytics have led to a wide variety of interactive model analysis tasks. In this paper, we present a comprehensive analysis and interpretation of this rapidly developing area. Specifically, we classify the relevant work into three categories: understanding, diagnosis, and refinement. Each category is exemplified by recent influential work. Possible future research opportunities are also explored and discussed. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven as an extremely powerful tool in many fields. Shall we embrace deep learning as the key to all? Or, should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate remote sensing scientists to bring their expertise into deep learning, and use it as an implicit general model to tackle unprecedented large-scale influential challenges, such as climate change and urbanization.
<s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> The automatic detection and classification of stance (e.g., certainty or agreement) in text data using natural language processing and machine-learning methods creates an opportunity to gain insight into the speakers’ attitudes toward their own and other people’s utterances. However, identifying stance in text presents many challenges related to training data collection and classifier training. To facilitate the entire process of training a stance classifier, we propose a visual analytics approach, called ALVA, for text data annotation and visualization. ALVA’s interplay with the stance classifier follows an active learning strategy to select suitable candidate utterances for manual annotation. Our approach supports annotation process management and provides the annotators with a clean user interface for labeling utterances with multiple stance categories. ALVA also contains a visualization method to help analysts of the annotation and training process gain a better understanding of the categories used by the annotators. The visualization uses a novel visual representation, called CatCombos, which groups individual annotation items by the combination of stance categories. Additionally, our system makes a visualization of a vector space model available that is itself based on utterances. ALVA is already being used by our domain experts in linguistics and computational linguistics to improve the understanding of stance phenomena and to build a stance classifier for applications such as social media monitoring. <s> BIB008 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> Assigning labels to data instances is a prerequisite for many machine learning tasks. Similarly, labeling is applied in visual-interactive analysis approaches. However, the strategies for creating labels often differ in the two fields. In this paper, we study the process of labeling data instances with the user in the loop, from both the machine learning and visual-interactive perspective. Based on a review of differences and commonalities, we propose the 'Visual-Interactive Labeling' (VIAL) process, conflating the strengths of both. We describe the six major steps of the process and highlight their related challenges. <s> BIB009 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> Labeled datasets are always limited, and oftentimes the quantity of labeled data is a bottleneck for data analytics. This especially affects supervised machine learning methods, which require labels for models to learn from the labeled data. Active learning algorithms have been proposed to help achieve good analytic models with limited labeling efforts, by determining which additional instance labels will be most beneficial for learning for a given model. Active learning is consistent with interactive analytics as it proceeds in a cycle in which the unlabeled data is automatically explored. However, in active learning users have no control of the instances to be labeled, and for text data, the annotation interface is usually document only. Both of these constraints seem to affect the performance of an active learning model. We hypothesize that visualization techniques, particularly interactive ones, will help to address these constraints.
In this paper, we implement a pilot study of visualization in active learning for text classification, with an interactive labeling interface. We compare the results of three experiments. Early results indicate that visualization improves high-performance machine learning model building with an active learning algorithm. <s> BIB010 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Introduction <s> Labeling data instances is an important task in machine learning and visual analytics. Both fields provide a broad set of labeling strategies, whereby machine learning (and in particular active learning) follows a rather model-centered approach and visual analytics employs rather user-centered approaches (visual-interactive labeling). Both approaches have individual strengths and weaknesses. In this work, we conduct an experiment with three parts to assess and compare the performance of these different labeling strategies. In our study, we (1) identify different visual labeling strategies for user-centered labeling, (2) investigate strengths and weaknesses of labeling strategies for different labeling tasks and task complexities, and (3) shed light on the effect of using different visual encodings to guide the visual-interactive labeling process. We further compare labeling of single versus multiple instances at a time, and quantify the impact on efficiency. We systematically compare the performance of visual interactive labeling with that of active learning. Our main findings are that visual-interactive labeling can outperform active learning, given the condition that dimension reduction separates well the class distributions. Moreover, using dimension reduction in combination with additional visual encodings that expose the internal state of the learning model turns out to improve the performance of visual-interactive labeling. <s> BIB011
Big data are leading to dramatic changes in science (with the advent of data-driven science) and in society (with potential to support economic, public health, and other advances). Machine learning and deep learning technologies are central to leveraging big data for applications in both domains. Recent advances in machine learning and especially in deep learning, coupled with the release of many open source tools (e.g., Google TensorFlow, an open-source software library for machine intelligence), create the potential to leverage big data to address GIScience and Remote Sensing (RS) research and application challenges. But doing so requires an in-depth understanding of the methods, their limitations, and strategies for overcoming those limitations. Two primary goals for this paper are: (1) to synthesize ideas and results from machine learning and deep learning, plus visual analytics, and (2) to provide a base from which new GIScience and RS advances can be initiated. Machine learning (ML) and deep learning (DL), where DL is a sub-domain of ML, are increasingly successful in extracting information from big data (when mentioned together subsequently, we use the abbreviation M&DL). The primary focus of research in M&DL has thus far been accurate results, often at the expense of human understanding of how the results were achieved BIB005 BIB006 . However, accurate results often depend on building large human-generated training data sets that can be expensive, in both financial and human cost, to create BIB001 BIB003 BIB004 BIB011 BIB007 BIB008 . As a result, there remain several impediments to broader adoption of M&DL, along with a range of concerns about potential negative outcomes related to the explainability of results produced. We agree here with a range of authors who have pointed to the need for human-in-the-loop strategies to both improve performance of the methods for complex problems and to increase explainability of the methods and their results BIB005 BIB006 BIB011 BIB008 BIB009 . There is a clear need for methods that allow human decision-makers to assess when to accept those results and when to treat them with caution or even skepticism. Further, we contend that advances in visual analytics offer a broad framework for addressing both the performance and explainability needs cited above. Visual analytics provides systems that enable analytical reasoning about complex problems . They accomplish this through close coupling of computational data processing methods with visual interfaces designed to help users make efficient choices: in building training data, in parameterizing and steering computational methods, and in understanding the results of those methods and how they were derived (further details about why and how visual analytics can aid M&DL are elaborated in Section 3.2). One rapidly developing ML method, active learning (Section 3.1), aims at achieving good learning results with a limited labeled data set, by choosing the most beneficial unlabeled data to be labeled by annotators (human or machine), in order to train and thus improve ML model performance BIB002 BIB010 . Active deep learning (Section 3.4) is a method introduced to help cope with the tension between the typical DL requirement for a very large gold standard training set and the impracticality of building such a big training set initially in domains that require expertise to label training data.
As we elaborate below, recent developments in visual analytics offer strategies to enable productive human-in-the-loop active learning. In this paper, we argue specifically for taking a visual analytics approach to empowering active deep learning for (geo) text and image classification; we review a range of recent developments in the relevant fields that can be leveraged to support this approach. Our contention is that visual analytics interfaces can reduce the time that domain experts need to devote to labeling data for text (or image) classification, by applying an iterative, active learning process. We also contextualize the potential of integrating active learning, visual analytics, and active deep learning methods in GIScience and RS through discussion of recent work. Here, we provide a road map to the rest of the paper. Section 2 outlines the scope of this review and our intended audience. Section 3 is the core of the paper, focused on synthesizing important and recent developments and their implications and applications. Here, we focus on recent advances in several subfields of Computer Science that GIScience and RS can leverage. Specifically, we examine and appraise key components of influential work in active learning (Section 3.1), visual analytics (Section 3.2), active learning with visual analytics (Section 3.3), and active deep learning (Section 3.4). In Section 4, we review recent GIScience and RS applications in (geo) text and image classification that take advantage of the methods from one or a combination of different fields covered in Section 3. The paper concludes in Section 5 with discussion of key challenges and opportunities, from both technical (Section 5.2.1) and application (Section 5.2.2, particularly for GIScience and RS) perspectives. The paper covers a wide array of recent research from multiple domains with many cross-connections. Given that text must present the sub-domains linearly, we start with a diagrammatic depiction of the domains and their relations to preview the overall structure of the review and the key connections. Specifically, Figure 1 illustrates the links between different fields covered in this paper and the flows that can guide the reader through the core part of this review. To provide background for readers (particularly those from GIScience and RS) who are new to M&DL, Appendix A introduces essential terms and concepts in machine learning (ML) and deep learning (DL), along with the main types of classification tasks in M&DL, as needed to understand the core part of the review (i.e., Section 3).
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Scope and Intended Audience <s> Visual data mining techniques have proven to be of high value in exploratory data analysis, and they also have a high potential for mining large databases. In this article, we describe and evaluate a new visualization-based approach to mining large databases. The basic idea of our visual data mining techniques is to represent as many data items as possible on the screen at the same time by mapping each data value to a pixel of the screen and arranging the pixels adequately. The major goal of this article is to evaluate our visual data mining techniques and to compare them to other well-known visualization techniques for multidimensional data: the parallel coordinate and stick-figure visualization techniques. For the evaluation of visual data mining techniques, the perception of data properties counts most, while the CPU time and the number of secondary storage accesses are only of secondary importance. In addition to testing the visualization techniques using real data, we developed a testing environment for database visualizations similar to the benchmark approach used for comparing the performance of database systems. The testing environment allows the generation of test data sets with predefined data characteristics which are important for comparing the perceptual abilities of visual data mining techniques. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Scope and Intended Audience <s> We present an approach to the process of constructing knowledge through structured exploration of large spatiotemporal data sets. First, we introduce our problem context and define both Geographic Visualization (GVis) and Knowledge Discovery in Databases (KDD), the source domains for methods being integrated. Next, we review and compare recent GVis and KDD developments and consider the potential for their integration, emphasizing that an iterative process with user interaction is a central focus for uncovering interesting and meaningful patterns through each. We then introduce an approach to design of an integrated GVis-KDD environment directed to exploration and discovery in the context of spatiotemporal environmental data. The approach emphasizes a matching of GVis and KDD meta-operations. Following description of the GVis and KDD methods that are linked in our prototype system, we present a demonstration of the prototype applied to a typical spatiotemporal dataset. We conclude by outlining, briefly, resea... <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Scope and Intended Audience <s> Voluminous geographic data have been, and continue to be, collected with modern data acquisition techniques such as global positioning systems (GPS), high-resolution remote sensing, location-aware services and surveys, and internet-based volunteered geographic information. There is an urgent need for effective and efficient methods to extract unknown and unexpected information from spatial data sets of unprecedentedly large size, high dimensionality, and complexity. To address these challenges, spatial data mining and geographic knowledge discovery has emerged as an active research field, focusing on the development of theory, methodology, and practice for the extraction of useful information and knowledge from massive and complex spatial databases.
This paper highlights recent theoretical and applied research in spatial data mining and knowledge discovery. We first briefly review the literature on several common spatial data-mining tasks, including spatial classification and prediction; spatial association rule mining; spatial cluster analysis; and geovisualization. The articles included in this special issue contribute to spatial data mining research by developing new techniques for point pattern analysis, prediction in space–time data, and analysis of moving object data, as well as by demonstrating applications of genetic algorithms for optimization in the context of image classification and spatial interpolation. The paper concludes with some thoughts on the contribution of spatial data mining and geographic knowledge discovery to geographic information sciences. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Scope and Intended Audience <s> Active learning is a supervised machine learning technique in which the learner is in control of the data used for learning. That control is utilized by the learner to ask an oracle, typically a human with extensive knowledge of the domain at hand, about the classes of the instances for which the model learned so far makes unreliable predictions. The active learning process takes as input a set of labeled examples, as well as a larger set of unlabeled examples, and produces a classifier and a relatively small set of newly labeled data. The overall goal is to create as good a classifier as possible, without having to mark-up and supply the learner with more data than necessary. The learning process aims at keeping the human annotation effort to a minimum, only asking for advice where the training utility of the result of such a query is high. Active learning has been successfully applied to a number of natural language processing tasks, such as information extraction, named entity recognition, text categorization, part-of-speech tagging, parsing, and word sense disambiguation. This report is a literature survey of active learning from the perspective of natural language processing. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Scope and Intended Audience <s> Understand the need for analyses of large, complex, information-rich data sets. Identify the goals and primary tasks of the data-mining process. Describe the roots of data-mining technology. Recognize the iterative character of a data-mining process and specify its basic steps. Explain the influence of data quality on a data-mining process. Establish the relation between data warehousing and data mining. Data mining is an iterative process within which progress is defined by discovery, through either automatic or manual methods. Data mining is most useful in an exploratory analysis scenario in which there are no predetermined notions about what will constitute an "interesting" outcome. Data mining is the search for new, valuable, and nontrivial information in large volumes of data. It is a cooperative effort of humans and computers. Best results are achieved by balancing the knowledge of human experts in describing problems and goals with the search capabilities of computers. In practice, the two primary goals of data mining tend to be prediction and description. Prediction involves using some variables or fields in the data set to predict unknown or future values of other variables of interest.
Description, on the other hand, focuses on finding patterns describing the data that can be interpreted by humans. Therefore, it is possible to put data-mining activities into one of two categories: Predictive data mining, which produces the model of the system described by the given data set, or Descriptive data mining, which produces new, nontrivial information based on the available data set. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Scope and Intended Audience <s> Active learning is a machine learning technique that selects the most informative samples for labeling and uses them as training data. It has been widely explored in multimedia research community for its capability of reducing human annotation effort. In this article, we provide a survey on the efforts of leveraging active learning in multimedia annotation and retrieval. We mainly focus on two application domains: image/video annotation and content-based image retrieval. We first briefly introduce the principle of active learning and then we analyze the sample selection criteria. We categorize the existing sample selection strategies used in multimedia annotation and retrieval into five criteria: risk reduction, uncertainty, diversity, density and relevance. We then introduce several classification models used in active learning-based multimedia annotation and retrieval, including semi-supervised learning, multilabel learning and multiple instance learning. We also provide a discussion on several future trends in this research direction. In particular, we discuss cost analysis of human annotation and large-scale interactive multimedia annotation. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Scope and Intended Audience <s> The key idea behind active learning is that a machine learning algorithm can perform better with less training if it is allowed to choose the data from which it learns. An active learner may pose "queries," usually in the form of unlabeled data instances to be labeled by an "oracle" (e.g., a human annotator) that already understands the nature of the problem. This sort of approach is well-motivated in many modern machine learning and data mining applications, where unlabeled data may be abundant or easy to come by, but training labels are difficult, time-consuming, or expensive to obtain. This book is a general introduction to active learning. It outlines several scenarios in which queries might be formulated, and details many query selection algorithms which have been organized into four broad categories, or "query selection frameworks." We also touch on some of the theoretical foundations of active learning, and conclude with an overview of the strengths and weaknesses of these approaches in practice, including a summary of ongoing work to address these open challenges and opportunities. <s> BIB007
The potential to bring the advances in M&DL to GIScience is reflected in a fairly long history of work on spatial and spatio-temporal data mining. In 2001, for example, Han and Miller provided a broad introduction to data mining and knowledge discovery methods for geographic data. In a second edition in 2009, with reversed authorship, multivariate spatial clustering was discussed and visual exploration and explanation in geospatial analysis was touched upon. Directed to a broader audience, Han et al. BIB005 provided one of the most highly cited introductions to data mining; the third edition includes an introduction to active learning (Section 3.1) and briefly introduces neural networks (the core technology of DL), but visual analytics (Section 3.2) is not mentioned. Even though they include an introduction to data visualization and visual data mining, Han and colleagues' focus is on traditional data visualization methods for understanding data prior to making decisions on data mining methods and for understanding outcomes of data mining, not on the more integrated visual-computational approaches that characterize advances in visual analytics. Thus, their visual data mining approach, while it does propose leveraging visualization advances in productive ways, is comparable to ideas introduced in the late 1990s (e.g., BIB001 BIB002 ); it does not focus on visual interfaces to enable human input to the data mining process or on support of human reasoning about that process. In work that complements that cited above, Guo and Mennis BIB003 also investigated spatial data mining and geographic knowledge discovery, where they briefly reviewed several common spatial data mining tasks, including spatial classification and prediction, spatial cluster analysis, and geovisualization. The authors argued that data mining is data-driven, but more importantly, human-centered, with users controlling the selection and integration of data, choosing analysis methods, and interpreting results; it is an iterative and inductive learning process. Guo and Mennis pointed out that handling big and complex spatial data and understanding (hidden) complex structure are two major challenges for spatial data mining. To address these challenges, both efficient computational algorithms to process large data sets and effective visualization techniques to present and explore complex patterns from big spatial data are required. In earlier work outside the GIScience context, Fayyad et al. emphasized the potential role of information visualization in data mining and knowledge discovery. They proposed that the next breakthroughs will come from integrated solutions that allow (domain) end users to explore their data using a visual interface, with the goal being to unify data mining algorithms and visual interfaces , and thereby to enable human analysts to explore and discover patterns hidden in big data sets. The main goals of this review paper, building on the long-term GIScience interest in ML, are to: (1) survey recent work on active learning, DL, and active DL to provide suggestions for new directions built upon these evolving methods, and (2) bring active learning, DL, active DL, and complementary developments in visual analytics to GIScience, and by doing so extend the current GIScience "toolbox".
Through the synthesis of multiple rapidly developing research areas, this systematic review is relevant to multiple research domains, including but not limited to GIScience, computer science, data science, information science, visual analytics, information visualization, image analysis, and computational linguistics. This paper does not attempt to review pure/traditional active learning (see Figure 2, which illustrates a typical pool-based active learning cycle); for classic and recent reviews of these topics, see BIB007 . A survey aimed at making active learning more practical for real-world use can be found in ; a survey from the perspective of natural language processing (NLP) can be found in BIB004 ; and a survey of active learning in multimedia annotation and retrieval can be found in BIB006 . Our review focuses on investigating methods that extend and/or integrate active learning with visual analytics and DL for (geo) text and image classification, specifically on the two parts of the active learning cycle highlighted in Figure 3.
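For orientation, the pool-based cycle of Figure 2 can be summarized in a few lines of code. The sketch below assumes a scikit-learn-style classifier exposing predict_proba and a human annotate function; it is a generic illustration of pool-based uncertainty sampling, not the implementation of any system reviewed here.

    import numpy as np

    def pool_based_al(model, X_lab, y_lab, X_pool, annotate, rounds=10, batch=20):
        for _ in range(rounds):
            model.fit(X_lab, y_lab)                      # retrain on current labels
            proba = model.predict_proba(X_pool)
            uncertainty = 1.0 - proba.max(axis=1)        # least-confidence heuristic
            query = np.argsort(-uncertainty)[:batch]     # most uncertain instances
            y_new = annotate(X_pool[query])              # oracle (human) labels them
            X_lab = np.vstack([X_lab, X_pool[query]])
            y_lab = np.concatenate([y_lab, y_new])
            X_pool = np.delete(X_pool, query, axis=0)    # remove from the pool
        return model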
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> The State of the Art: Active Learning, Visual Analytics, and Deep Learning <s> User studies are important for many aspects of the design process and involve techniques ranging from informal surveys to rigorous laboratory studies. However, the costs involved in engaging users often requires practitioners to trade off between sample size, time requirements, and monetary costs. Micro-task markets, such as Amazon's Mechanical Turk, offer a potential paradigm for engaging a large number of users for low time and monetary costs. Here we investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks. Although micro-task markets have great potential for rapidly collecting user measurements at low costs, we found that special care is needed in formulating tasks in order to harness the capabilities of the approach. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> The State of the Art: Active Learning, Visual Analytics, and Deep Learning <s> For large, real-world inductive learning problems, the number of training examples often must be limited due to the costs associated with procuring, preparing, and storing the training examples and/or the computational costs associated with learning from them. In such circumstances, one question of practical importance is: if only n training examples can be selected, in what proportion should the classes be represented? In this article we help to answer this question by analyzing, for a fixed training-set size, the relationship between the class distribution of the training data and the performance of classification trees induced from these data. We study twenty-six data sets and, for each, determine the best class distribution for learning. The naturally occurring class distribution is shown to generally perform well when classifier performance is evaluated using undifferentiated error rate (0/1 loss). However, when the area under the ROC curve is used to evaluate classifier performance, a balanced distribution is shown to perform well. Since neither of these choices for class distribution always generates the best-performing classifier, we introduce a "budget-sensitive" progressive sampling algorithm for selecting training examples based on the class associated with each example. An empirical analysis of this algorithm shows that the class distribution of the resulting training set yields classifiers with good (nearly-optimal) classification performance. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> The State of the Art: Active Learning, Visual Analytics, and Deep Learning <s> Large-scale supervised datasets are crucial to train convolutional neural networks (CNNs) for various computer vision problems. However, obtaining a massive amount of well-labeled data is usually very expensive and time consuming. In this paper, we introduce a general framework to train CNNs with only a limited number of clean labels and millions of easily obtained noisy labels. We model the relationships between images, class labels and label noises with a probabilistic graphical model and further integrate it into an end-to-end deep learning system. 
To demonstrate the effectiveness of our approach, we collect a large-scale real-world clothing classification dataset with both noisy and clean labels. Experiments on this dataset indicate that our approach can better correct the noisy labels and improves the performance of trained CNNs. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> The State of the Art: Active Learning, Visual Analytics, and Deep Learning <s> Labeled datasets are always limited, and oftentimes the quantity of labeled data is a bottleneck for data analytics. This especially affects supervised machine learning methods, which require labels for models to learn from the labeled data. Active learning algorithms have been proposed to help achieve good analytic models with limited labeling efforts, by determining which additional instance labels will be most beneficial for learning for a given model. Active learning is consistent with interactive analytics as it proceeds in a cycle in which the unlabeled data is automatically explored. However, in active learning users have no control of the instances to be labeled, and for text data, the annotation interface is usually document only. Both of these constraints seem to affect the performance of an active learning model. We hypothesize that visualization techniques, particularly interactive ones, will help to address these constraints. In this paper, we implement a pilot study of visualization in active learning for text classification, with an interactive labeling interface. We compare the results of three experiments. Early results indicate that visualization improves high-performance machine learning model building with an active learning algorithm. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> The State of the Art: Active Learning, Visual Analytics, and Deep Learning <s> Providing accurate predictions is challenging for machine learning algorithms when the number of features is larger than the number of samples in the data. Prior knowledge can improve machine learning models by indicating relevant variables and parameter values. Yet, this prior knowledge is often tacit and only available from domain experts. We present a novel approach that uses interactive visualization to elicit the tacit prior knowledge and uses it to improve the accuracy of prediction models. The main component of our approach is a user model that models the domain expert's knowledge of the relevance of different features for a prediction task. In particular, based on the expert's earlier input, the user model guides the selection of the features on which to elicit user's knowledge next. The results of a controlled user study show that the user model significantly improves prior knowledge elicitation and prediction accuracy, when predicting the relative citation counts of scientific documents in a specific domain. <s> BIB005
As outlined above, leveraging the potential of DL to increase classification accuracy (for images or text) requires extensive amounts of manually labeled data. This is particularly challenging in domains requiring experts with prior knowledge that is often tacit BIB004 BIB005 BIB003 BIB002 ; in such cases, even crowdsourcing BIB001 , such as Amazon Mechanical Turk , will not help much. In this section, we review several techniques that are central to addressing this challenge: in particular, active learning (Section 3.1), visual analytics (Section 3.2), active learning with visual analytics (Section 3.3), and active deep learning (Section 3.4).
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> What's AL and Why AL? <s> Active learning differs from “learning from examples” in that the learning algorithm assumes at least some control over what part of the input domain it receives information about. In some situations, active learning is provably more powerful than learning from examples alone, giving better generalization for a fixed number of training examples.In this article, we consider the problem of learning a binary concept in the absence of noise. We describe a formalism for active concept learning called selective sampling and show how it may be approximately implemented by a neural network. In selective sampling, a learner receives distribution information from the environment and queries an oracle on parts of the domain it considers “useful.” We test our implementation, called an SG-network, on three domains and observe significant improvement in generalization. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> What's AL and Why AL? <s> Active learning is a supervised machine learning technique in which the learner is in control of the data used for learning. That control is utilized by the learner to ask an oracle, typically a human with extensive knowledge of the domain at hand, about the classes of the instances for which the model learned so far makes unreliable predictions. The active learning process takes as input a set of labeled examples, as well as a larger set of unlabeled examples, and produces a classifier and a relatively small set of newly labeled data. The overall goal is to create as good a classifier as possible, without having to mark-up and supply the learner with more data than necessary. The learning process aims at keeping the human annotation effort to a minimum, only asking for advice where the training utility of the result of such a query is high. Active learning has been successfully applied to a number of natural language processing tasks, such as, information extraction, named entity recognition, text categorization, part-of-speech tagging, parsing, and word sense disambiguation. This report is a literature survey of active learning from the perspective of natural language processing. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> What's AL and Why AL? <s> One of the principal bottlenecks in applying learning techniques to classification problems is the large amount of labeled training data required. Especially for images and video, providing training data is very expensive in terms of human time and effort. In this paper we propose an active learning approach to tackle the problem. Instead of passively accepting random training examples, the active learning algorithm iteratively selects unlabeled examples for the user to label, so that human effort is focused on labeling the most “useful” examples. Our method relies on the idea of uncertainty sampling, in which the algorithm selects unlabeled examples that it finds hardest to classify. Specifically, we propose an uncertainty measure that generalizes margin-based uncertainty to the multi-class case and is easy to compute, so that active learning can handle a large number of classes and large data sizes efficiently. 
We demonstrate results for letter and digit recognition on datasets from the UCI repository, object recognition results on the Caltech-101 dataset, and scene categorization results on a dataset of 13 natural scene categories. The proposed method gives large reductions in the number of training examples required over random selection to achieve similar classification accuracy, with little computational overhead. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> What's AL and Why AL? <s> Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership and then the user is asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee, large margin, and posterior probability-based. For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing the good architecture are provided for new and/or unexperienced user. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> What's AL and Why AL? <s> Crowd sourcing has become an popular approach for annotating the large quantities of data required to train machine learning algorithms. However, obtaining labels in this manner poses two important challenges. First, naively labeling all of the data can be prohibitively expensive. Second, a significant fraction of the annotations can be incorrect due to carelessness or limited domain expertise of crowd sourced workers. Active learning provides a natural formulation to address the former issue by affordably selecting an appropriate subset of instances to label. Unfortunately, most active learning strategies are myopic and sensitive to label noise, which leads to poorly trained classifiers. We propose an active learning method that is specifically designed to be robust to such noise. We present an application of our technique in the domain of activity recognition for eldercare and validate the proposed approach using both simulated and real-world experiments using Amazon Mechanical Turk. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> What's AL and Why AL? <s> Obtaining labels can be expensive or time-consuming, but unlabeled data is often abundant and easier to obtain. Most learning tasks can be made more efficient, in terms of labeling cost, by intelligently choosing specific unlabeled instances to be labeled by an oracle. The general problem of optimally choosing these instances is known as active learning. As it is usually set in the context of supervised learning, active learning relies on a single oracle playing the role of a teacher. 
We focus on the multiple annotator scenario where an oracle, who knows the ground truth, no longer exists; instead, multiple labelers, with varying expertise, are available for querying. This paradigm posits new challenges to the active learning scenario. We can now ask which data sample should be labeled next and which annotator should be queried to benefit our learning model the most. In this paper, we employ a probabilistic model for learning from multiple annotators that can also learn the annotator expertise even when their expertise may not be consistently accurate across the task domain. We then focus on providing a criterion and formulation that allows us to select both a sample and the annotator/s to query the labels from. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> What's AL and Why AL? <s> The key idea behind active learning is that a machine learning algorithm can perform better with less training if it is allowed to choose the data from which it learns. An active learner may pose "queries," usually in the form of unlabeled data instances to be labeled by an "oracle" (e.g., a human annotator) that already understands the nature of the problem. This sort of approach is well-motivated in many modern machine learning and data mining applications, where unlabeled data may be abundant or easy to come by, but training labels are difficult, time-consuming, or expensive to obtain. This book is a general introduction to active learning. It outlines several scenarios in which queries might be formulated, and details many query selection algorithms which have been organized into four broad categories, or "query selection frameworks." We also touch on some of the theoretical foundations of active learning, and conclude with an overview of the strengths and weaknesses of these approaches in practice, including a summary of ongoing work to address these open challenges and opportunities. <s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> What's AL and Why AL? <s> Recent advances in microscopy imaging and genomics have created an explosion of patient data in the pathology domain. Whole-slide images (WSIs) of tissues can now capture disease processes as they unfold in high resolution, recording the visual cues that have been the basis of pathologic diagnosis for over a century. Each WSI contains billions of pixels and up to a million or more microanatomic objects whose appearances hold important prognostic information. Computational image analysis enables the mining of massive WSI datasets to extract quantitative morphologic features describing the visual qualities of patient tissues. When combined with genomic and clinical variables, this quantitative information provides scientists and clinicians with insights into disease biology and patient outcomes. To facilitate interaction with this rich resource, we have developed a web-based machine-learning framework that enables users to rapidly build classifiers using an intuitive active learning process that minimizes data labeling effort. In this paper we describe the architecture and design of this system, and demonstrate its effectiveness through quantification of glioma brain tumors. <s> BIB008
Can machines learn with fewer labeled training instances than those needed in supervised learning (a full explanation of which is provided in Appendix A.2.1) if they are allowed to ask questions? The answer is "yes", and many encouraging results have been demonstrated for a variety of problem settings and domains. AL BIB007 BIB001 is a sub-field of semi-supervised learning (for details, see Appendix A.2.3) that implements this question-asking idea as an iterative process. AL differs from traditional "passive" learning systems that purely "learn from examples". AL systems aim to make ML more economical and more accurate, because the learning algorithms can participate in the acquisition of their own training data and can avoid unrepresentative or poorly annotated data by applying query strategies (Section 3.1.5). AL is well motivated in many ML-based applications, where unlabeled data are massive but labels are difficult, time-consuming, or expensive to obtain. The key idea behind AL is that an ML model can achieve high accuracy with a minimum of manual labeling effort if the (machine) learner is allowed to ask for more informative labeled examples through selective queries. A query is often in the form of an unlabeled instance (e.g., an image or a piece of text), picked by the machine learner according to a specific query strategy (Section 3.1.5), to be labeled by an annotator who understands the nature of the domain problem BIB007 . Informative examples are those instances that can improve the machine learner's performance; informativeness is measured by different query strategies (Section 3.1.5). AL has been successfully applied to a number of natural language processing tasks BIB002 , such as information extraction, named entity recognition, text categorization, part-of-speech tagging, parsing, and word sense disambiguation. Tuia et al. BIB004 surveyed AL algorithms for RS image classification. Nalisink et al. employed AL to reduce the labeling effort for image classification BIB008 . A good example of using AL to overcome label quality problems by combining experts and crowd-sourced annotators can be found in BIB005 ; another good example of using AL from crowds can be found in BIB006 , where a multi-annotator (see Section 3.1.6) AL algorithm is provided. Most AL-based methods address binary classification tasks (see Appendix A.4.1); see BIB003 for an example of multi-class (see Appendix A.4.2) AL for image classification. While there has been increasing attention to AL, with applications in many domains, a systematic and comprehensive comparison of different AL strategies is missing from the literature. We return to this in Section 3.1.7.
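To ground the question-asking loop just described, the sketch below shows one concrete form of it: pool-based AL with a least-confidence query strategy (one of the strategies discussed in Section 3.1.5). It is a minimal illustration rather than any surveyed system; the scikit-learn logistic-regression learner, the `oracle` callback standing in for the human annotator, and the fixed labeling budget are all assumptions made for the example.

```python
# Minimal sketch of the pool-based AL loop (assumed setup: scikit-learn
# classifier, numpy arrays, and an `oracle` callback standing in for the
# human annotator; all names here are illustrative, not from a surveyed system).
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_loop(X_labeled, y_labeled, X_unlabeled, oracle, budget=50):
    model = LogisticRegression(max_iter=1000)
    for _ in range(budget):                      # one query per iteration
        model.fit(X_labeled, y_labeled)
        # Least-confidence query strategy: the instance whose top class
        # probability is lowest is treated as the most informative.
        proba = model.predict_proba(X_unlabeled)
        idx = int(np.argmin(proba.max(axis=1)))
        x_query = X_unlabeled[idx]
        y_query = oracle(x_query)                # ask the annotator for a label
        # Move the newly labeled instance from the pool into the training set.
        X_labeled = np.vstack([X_labeled, x_query])
        y_labeled = np.append(y_labeled, y_query)
        X_unlabeled = np.delete(X_unlabeled, idx, axis=0)
    return model.fit(X_labeled, y_labeled)
```

In practice, the stopping criterion would typically be a target accuracy or an exhausted annotation budget rather than a fixed iteration count.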
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Problem Scenarios <s> Active learning is a machine learning technique that selects the most informative samples for labeling and uses them as training data. It has been widely explored in multimedia research community for its capability of reducing human annotation effort. In this article, we provide a survey on the efforts of leveraging active learning in multimedia annotation and retrieval. We mainly focus on two application domains: image/video annotation and content-based image retrieval. We first briefly introduce the principle of active learning and then we analyze the sample selection criteria. We categorize the existing sample selection strategies used in multimedia annotation and retrieval into five criteria: risk reduction, uncertainty, diversity, density and relevance. We then introduce several classification models used in active learning-based multimedia annotation and retrieval, including semi-supervised learning, multilabel learning and multiple instance learning. We also provide a discussion on several future trends in this research direction. In particular, we discuss cost analysis of human annotation and large-scale interactive multimedia annotation. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Problem Scenarios <s> The key idea behind active learning is that a machine learning algorithm can perform better with less training if it is allowed to choose the data from which it learns. An active learner may pose "queries," usually in the form of unlabeled data instances to be labeled by an "oracle" (e.g., a human annotator) that already understands the nature of the problem. This sort of approach is well-motivated in many modern machine learning and data mining applications, where unlabeled data may be abundant or easy to come by, but training labels are difficult, time-consuming, or expensive to obtain. This book is a general introduction to active learning. It outlines several scenarios in which queries might be formulated, and details many query selection algorithms which have been organized into four broad categories, or "query selection frameworks." We also touch on some of the theoretical foundations of active learning, and conclude with an overview of the strengths and weaknesses of these approaches in practice, including a summary of ongoing work to address these open challenges and opportunities. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Problem Scenarios <s> Many engineering problems require identifying feasible domains under implicit constraints. One example is finding acceptable car body styling designs based on constraints like aesthetics and functionality. Current active-learning based methods learn feasible domains for bounded input spaces. However, we usually lack prior knowledge about how to set those input variable bounds. Bounds that are too small will fail to cover all feasible domains; while bounds that are too large will waste query budget. To avoid this problem, we introduce Active Expansion Sampling (AES), a method that identifies (possibly disconnected) feasible domains over an unbounded input space. AES progressively expands our knowledge of the input space, and uses successive exploitation and exploration stages to switch between learning the decision boundary and searching for new feasible domains. 
We show that AES has a misclassification loss guarantee within the explored region, independent of the number of iterations or labeled samples. Thus it can be used for real-time prediction of samples' feasibility within the explored region. We evaluate AES on three test examples and compare AES with two adaptive sampling methods -- the Neighborhood-Voronoi algorithm and the straddle heuristic -- that operate over fixed input variable bounds. <s> BIB003
The AL literature BIB002 BIB001 showcases several different problem scenarios in which the active machine learner may solicit input. The three most common scenarios considered in the literature are: membership query synthesis, stream-based selective sampling, and pool-based sampling. All three scenarios assume that machine learners query unlabeled instances to be labeled by annotators (humans or machines). Figure 4 illustrates the differences among these three AL scenarios. The dashed lines connecting the instance space (the set of possible observations, also called the input space BIB002 BIB003 ) in Figure 4 indicate that the machine learner does not know the definition of the instance space (i.e., the features of the space and their ranges are unknown BIB002 ).
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> We consider the problem of using queries to learn an unknown concept. Several types of queries are described and studied: membership, equivalence, subset, superset, disjointness, and exhaustiveness queries. Examples are given of efficient learning methods using various subsets of these queries for formal domains, including the regular languages, restricted classes of context-free languages, the pattern languages, and restricted types of prepositional formulas. Some general lower bound techniques are given. Equivalence queries are compared with Valiant's criterion of probably approximately correct identification under random sampling. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> "Selective sampling" is a form of directed search that can greatly increase the ability of a connectionist network to generalize accurately. Based on information from previous batches of samples, a network may be trained on data selectively sampled from regions in the domain that are unknown. This is realizable in cases when the distribution is known, or when the cost of drawing points from the target distribution is negligible compared to the cost of labeling them with the proper classification. The approach is justified by its applicability to the problem of training a network for power system security analysis. The benefits of selective sampling are studied analytically, and the results are confirmed experimentally. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> Active learning differs from “learning from examples” in that the learning algorithm assumes at least some control over what part of the input domain it receives information about. In some situations, active learning is provably more powerful than learning from examples alone, giving better generalization for a fixed number of training examples.In this article, we consider the problem of learning a binary concept in the absence of noise. We describe a formalism for active concept learning called selective sampling and show how it may be approximately implemented by a neural network. In selective sampling, a learner receives distribution information from the environment and queries an oracle on parts of the domain it considers “useful.” We test our implementation, called an SG-network, on three domains and observe significant improvement in generalization. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> The ability to cheaply train text classifiers is critical to their use in information retrieval, content analysis, natural language processing, and other tasks involving data which is partly or fully textual. An algorithm for sequential sampling during machine learning of statistical classifiers was developed and tested on a newswire text categorization task. This method, which we call uncertainty sampling, reduced by as much as 500-fold the amount of training data that would have to be manually classified to achieve a given level of effectiveness. 
<s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> Abstract In many real-world learning tasks, it is expensive to acquire a sufficient number of labeled examples for training. This paper proposes a general method for efficiently training probabilistic classifiers, by selecting for training only the more informative examples in a stream of unlabeled examples. The method, committee-based sampling , evaluates the informativeness of an example by measuring the degree of disagreement between several model variants. These variants (the committee) are drawn randomly from a probability distribution conditioned by the training set selected so far (Monte-Carlo sampling). The method is particularly attractive because it evaluates the expected information gain from a training example implicitly, making the model both easy to implement and generally applicable. We further show how to apply committee-based sampling for training Hidden Markov Model classifiers, which are commonly used for complex classification tasks. The method was implemented and tested for the task of tagging words in natural language sentences with parts-of-speech. Experimental evaluation of committee-based sampling versus standard sequential training showed a substantial improvement in training efficiency. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> This paper proposes an efficient example sampling method for example-based word sense disambiguation systems. To construct a database of practical size, a considerable overhead for manual sense disambiguation (overhead for supervision) is required. In addition, the time complexity of searching a large-sized database poses a considerable problem (overhead for search). To counter these problems, our method selectively samples a smaller-sized effective subset from a given example set for use in word sense disambiguation. Our method is characterized by the reliance on the notion of training utility: the degree to which each example is informative for future example sampling when used for the training of the system. The system progressively collects examples by selecting those with greatest utility. The paper reports the effectiveness of our method through experiments on about one thousand sentences. Compared to experiments with other example sampling methods, our method reduced both the overhead for supervision and the overhead for search, without the degeneration of the performance of the system. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> We propose a general active learning framework for content-based information retrieval. We use this framework to guide hidden annotations in order to improve the retrieval performance. For each object in the database, we maintain a list of probabilities, each indicating the probability of this object having one of the attributes. During training, the learning algorithm samples objects in the database and presents them to the annotator to assign attributes. For each sampled object, each probability is set to be one or zero depending on whether or not the corresponding attribute is assigned by the annotator. For objects that have not been annotated, the learning algorithm estimates their probabilities with biased kernel regression. 
Knowledge gain is then defined to determine, among the objects that have not been annotated, which one the system is the most uncertain. The system then presents it as the next sample to the annotator to which it is assigned attributes. During retrieval, the list of probabilities works as a feature vector for us to calculate the semantic distance between two objects, or between the user query and an object in the database. The overall distance between two objects is determined by a weighted sum of the semantic distance and the low-level feature distance. The algorithm is tested on both synthetic databases and real databases of 3D models. In both cases, the retrieval performance of the system improves rapidly with the number of annotated samples. Furthermore, we show that active learning outperforms learning based on random sampling. <s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> The question of whether it is possible to automate the scientific process is of both great theoretical interest1,2 and increasing practical importance because, in many scientific areas, data are being generated much faster than they can be effectively analysed. We describe a physically implemented robotic system that applies techniques from artificial intelligence3,4,5,6,7,8 to carry out cycles of scientific experimentation. The system automatically originates hypotheses to explain observations, devises experiments to test these hypotheses, physically runs the experiments using a laboratory robot, interprets the results to falsify hypotheses inconsistent with the data, and then repeats the cycle. Here we apply the system to the determination of gene function using deletion mutants of yeast (Saccharomyces cerevisiae) and auxotrophic growth experiments9. We built and tested a detailed logical model (involving genes, proteins and metabolites) of the aromatic amino acid synthesis pathway. In biological experiments that automatically reconstruct parts of this model, we show that an intelligent experiment selection strategy is competitive with human performance and significantly outperforms, with a cost decrease of 3-fold and 100-fold (respectively), both cheapest and random-experiment selection. <s> BIB008 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> There is growing interest in the application of machine learning techniques in bioinformatics. The supervised machine learning approach has been widely applied to bioinformatics and gained a lot of success in this research area. With this learning approach researchers first develop a large training set, which is a timeconsuming and costly process. Moreover, the proportion of the positive examples and negative examples in the training set may not represent the real-world data distribution, which causes concept drift. Active learning avoids these problems. Unlike most conventional learning methods where the training set used to derive the model remains static, the classifier can actively choose the training data and the size of training set increases. We introduced an algorithm for performing active learning with support vector machine and applied the algorithm to gene expression profiles of colon cancer, lung cancer, and prostate cancer samples. We compared the classification performance of active learning with that of passive learning. 
The results showed that employing the active learning method can achieve high accuracy and significantly reduce the need for labeled training instances. For lung cancer classification, to achieve 96% of the total positives, only 31 labeled examples were needed in active learning whereas in passive learning 174 labeled examples were required. That meant over 82% reduction was realized by active learning. In active learning the areas under the receiver operating characteristic (ROC) curves were over 0.81, while in passive learning the areas under the ROC curves were below 0.50 <s> BIB009 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> Learning ranking (or preference) functions has been a major issue in the machine learning community and has produced many applications in information retrieval. SVMs (Support Vector Machines) - a classification and regression methodology - have also shown excellent performance in learning ranking functions. They effectively learn ranking functions of high generalization based on the "large-margin" principle and also systematically support nonlinear ranking by the "kernel trick". In this paper, we propose an SVM selective sampling technique for learning ranking functions. SVM selective sampling (or active learning with SVM) has been studied in the context of classification. Such techniques reduce the labeling effort in learning classification functions by selecting only the most informative samples to be labeled. However, they are not extendable to learning ranking functions, as the labeled data in ranking is relative ordering, or partial orders of data. Our proposed sampling technique effectively learns an accurate SVM ranking function with fewer partial orders. We apply our sampling technique to the data retrieval application, which enables fuzzy search on relational databases by interacting with users for learning their preferences. Experimental results show a significant reduction of the labeling effort in inducing accurate ranking functions. <s> BIB010 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> Active learning is well-suited to many problems in natural language processing, where unlabeled data may be abundant but annotation is slow and expensive. This paper aims to shed light on the best active learning approaches for sequence labeling tasks such as information extraction and document segmentation. We survey previously used query selection strategies for sequence models, and propose several novel algorithms to address their shortcomings. We also conduct a large-scale empirical comparison using multiple corpora, which demonstrates that our proposed methods advance the state of the art. <s> BIB011 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> Active learning is a machine learning technique that selects the most informative samples for labeling and uses them as training data. It has been widely explored in multimedia research community for its capability of reducing human annotation effort. In this article, we provide a survey on the efforts of leveraging active learning in multimedia annotation and retrieval. We mainly focus on two application domains: image/video annotation and content-based image retrieval. We first briefly introduce the principle of active learning and then we analyze the sample selection criteria. 
We categorize the existing sample selection strategies used in multimedia annotation and retrieval into five criteria: risk reduction, uncertainty, diversity, density and relevance. We then introduce several classification models used in active learning-based multimedia annotation and retrieval, including semi-supervised learning, multilabel learning and multiple instance learning. We also provide a discussion on several future trends in this research direction. In particular, we discuss cost analysis of human annotation and large-scale interactive multimedia annotation. <s> BIB012 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> Most active learning approaches select either informative or representative unlabeled instances to query their labels. Although several active learning algorithms have been proposed to combine the two criteria for query selection, they are usually ad hoc in finding unlabeled instances that are both informative and representative. We address this challenge by a principled approach, termed QUIRE, based on the min-max view of active learning. The proposed approach provides a systematic way for measuring and combining the informativeness and representativeness of an instance. Extensive experimental results show that the proposed QUIRE approach outperforms several state-of -the-art active learning approaches. <s> BIB013 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> Active learning has received great interests from researchers due to its ability to reduce the amount of supervision required for effective learning. As the core component of active learning algorithms, query synthesis and pool-based sampling are two main scenarios of querying considered in the literature. Query synthesis features low querying time, but only has limited applications as the synthesized query might be unrecognizable to human oracle. As a result, most efforts have focused on pool-based sampling in recent years, although it is much more time-consuming. In this paper, we propose new strategies for a novel querying framework that combines query synthesis and pool-based sampling. It overcomes the limitation of query synthesis, and has the advantage of fast querying. The basic idea is to synthesize an instance close to the decision boundary using labelled data, and then select the real instance closest to the synthesized one as a query. For this purpose, we propose a synthesis strategy, which can synthesize instances close to the decision boundary and spreading along the decision boundary. Since the synthesis only depends on the relatively small labelled set, instead of evaluating the entire unlabelled set as many other active learning algorithms do, our method has the advantage of efficiency. In order to handle more complicated data and make our framework compatible with powerful kernel-based learners, we also extend our method to kernel version. Experiments on several real-world data sets show that our method has significant advantage on time complexity and similar performance compared to pool-based uncertainty sampling methods. 
<s> BIB014 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> In this paper, we consider the problem of actively learning a linear classifier through query synthesis where the learner can construct artificial queries in order to estimate the true decision boundaries. This problem has recently gained a lot of interest in automated science and adversarial reverse engineering for which only heuristic algorithms are known. In such applications, queries can be constructed de novo to elicit information (e.g., automated science) or to evade detection with minimal cost (e.g., adversarial reverse engineering). We develop a general framework, called dimension coupling (DC), that 1) reduces a d-dimensional learning problem to d-1 low dimensional sub-problems, 2) solves each sub-problem efficiently, 3) appropriately aggregates the results and outputs a linear classifier, and 4) provides a theoretical guarantee for all possible schemes of aggregation. The proposed method is proved resilient to noise. We show that the DC framework avoids the curse of dimensionality: its computational complexity scales linearly with the dimension. Moreover, we show that the query complexity of DC is near optimal (within a constant factor of the optimum algorithm). To further support our theoretical analysis, we compare the performance of DC with the existing work. We observe that DC consistently outperforms the prior arts in terms of query complexity while often running orders of magnitude faster. <s> BIB015 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> How can we find a general way to choose the most suitable samples for training a classifier? Even with very limited prior information? Active learning, which can be regarded as an iterative optimization procedure, plays a key role to construct a refined training set to improve the classification performance in a variety of applications, such as text analysis, image recognition, social network modeling, etc. Although combining representativeness and informativeness of samples has been proven promising for active sampling, state-of-the-art methods perform well under certain data structures. Then can we find a way to fuse the two active sampling criteria without any assumption on data? This paper proposes a general active learning framework that effectively fuses the two criteria. Inspired by a two-sample discrepancy problem, triple measures are elaborately designed to guarantee that the query samples not only possess the representativeness of the unlabeled data but also reveal the diversity of the labeled data. Any appropriate similarity measure can be employed to construct the triple measures. Meanwhile, an uncertain measure is leveraged to generate the informativeness criterion, which can be carried out in different ways. Rooted in this framework, a practical active learning algorithm is proposed, which exploits a radial basis function together with the estimated probabilities to construct the triple measures and a modified best-versus-second-best strategy to construct the uncertain measure, respectively. Experimental results on benchmark datasets demonstrate that our algorithm consistently achieves superior performance over the state-of-the-art active learning algorithms. 
<s> BIB016 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Sample an instance <s> Abstract Classification of evolving data streams is a challenging task, which is suitably tackled with online learning approaches. Data is processed instantly requiring the learning machinery to (self-)adapt by adjusting its model. However for high velocity streams, it is usually difficult to obtain labeled samples to train the classification model. Hence, we propose a novel o nline b atch-based a ctive l earning algorithm (OBAL) to perform the labeling. OBAL is developed for crisis management applications where data streams are generated by the social media community. OBAL is applied to discriminate relevant from irrelevant social media items. An emergency management user will be interactively queried to label chosen items. OBAL exploits the boundary items for which it is highly uncertain about their class and makes use of two classifiers: k-Nearest Neighbors (kNN) and Support Vector Machine (SVM). OBAL is equipped with a labeling budget and a set of uncertainty strategies to identify the items for labeling. An extensive analysis is carried out to show OBAL’s performance, the sensitivity of its parameters, and the contribution of the individual uncertainty strategies. Two types of datasets are used: synthetic and social media datasets related to crises. The empirical results illustrate that OBAL has a very good discrimination power. <s> BIB017
Each scenario in Figure 4 starts from the set of all possible observations (the instance space, at left) and applies a query strategy (light blue box) for selecting which instance to ask the human or machine annotator to label (dark blue box). Membership query synthesis was proposed in BIB001 , and further developed and extended in BIB008 BIB014 BIB015 . In this scenario, the machine learner knows the definition of the instance space (e.g., feature dimensions and ranges are known). The learner can generate (i.e., synthesize) a new instance (e.g., an image or a piece of text) from scratch (one that meets the parameters of the instance space, but may or may not actually exist ), and then enlist an annotator for labeling BIB014 BIB015 . Query synthesis can create a new artificial (membership) query from scratch using a small amount of labelled data; it is therefore very efficient BIB014 , and is often tractable and efficient for finite problem domains . Thus, query synthesis has recently gained interest in domains in which labels do not come from human annotators but from experiments, and where only heuristics are known. In such domains, artificial queries can be synthesized to elicit information (e.g., automated science BIB008 ) or to detect and extract knowledge and design information with minimal cost (e.g., adversarial reverse engineering) BIB015 . Query synthesis is reasonable for some domain problems, but one major problem is that the synthesized (membership) queries are often not meaningful, so annotators, particularly human ones, can find it hard to assign labels . By contrast, the stream-based and pool-based scenarios introduced below address this limitation, because their queries always correspond to real examples; the labels can therefore be more readily provided by annotators . In stream-based selective sampling (also called stream-based or sequential AL), each unlabeled instance is drawn one at a time from the data source, and the machine learner must decide whether to query its label or to discard it BIB003 BIB002 BIB005 . In this scenario, learners can decide whether to query in two ways: (1) by using a query strategy (Section 3.1.5), or (2) by computing a region of uncertainty and picking instances falling in that region. The stream-based scenario has been studied in several real-world tasks (e.g., learning ranking functions for information retrieval BIB010 , social media text classification BIB017 , and word sense disambiguation BIB006 , where a word such as "bank" in "river bank" can be distinguished from the word "bank" in "financial bank"). One advantage of stream-based selective sampling is that it suits mobile and embedded devices, where memory and power are often limited, because each unlabeled instance is drawn one at a time from the data source; a minimal sketch of this decision rule follows this paragraph. In pool-based sampling AL BIB004 BIB011 , samples are selected from an existing pool for labeling, using criteria designed to assess the informativeness of an instance. Informativeness has been defined as the ability of an instance to reduce the generalization error of an ML model BIB013 BIB016 ; query strategies designed to achieve informativeness of samples are discussed in Section 3.1.5.
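Before turning to the relative prevalence of these scenarios, the stream-based decision rule described above can be made concrete. In the sketch below, the learner sees each instance exactly once and must query or discard it on the spot; the uncertainty threshold, the retrain-on-every-query policy, and the pre-trained probabilistic model are illustrative assumptions, not prescriptions from the cited works.

```python
# Minimal sketch of stream-based selective sampling (assumptions: `model`
# is already trained on an initial labeled set and exposes predict_proba/fit;
# the 0.4 threshold and retraining after every query are illustrative choices).
import numpy as np

def stream_selective_sampling(model, stream, oracle, X_lab, y_lab, threshold=0.4):
    for x in stream:                                  # instances arrive one at a time
        confidence = model.predict_proba(x.reshape(1, -1)).max()
        if 1.0 - confidence > threshold:              # inside the region of uncertainty
            X_lab = np.vstack([X_lab, x])
            y_lab = np.append(y_lab, oracle(x))       # query the label
            model.fit(X_lab, y_lab)                   # update the model immediately
        # Otherwise the instance is discarded and cannot be revisited,
        # which is what keeps memory use low on embedded devices.
    return model, X_lab, y_lab
```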
A substantial proportion of AL methods in the literature are pool-based BIB017 , with examples in domains that include text classification (see examples later in this paper for text and image classification), image classification and retrieval BIB012 , information retrieval BIB007 , video classification and retrieval BIB012 , speech recognition , and cancer diagnosis BIB009 . Only a few AL methods employ stream-based selective sampling BIB017 . For many real-world learning problems, large collections of unlabeled data can be gathered at once. This motivates pool-based sampling, which evaluates and ranks the entire collection before selecting the best query , and thus helps build a classifier with better performance from fewer labeled examples. As outlined above, the three sampling scenarios have different primary applications. Membership query synthesis is most applicable to limited applications such as automated scientific discovery and adversarial reverse engineering BIB015 , because instances produced by synthesized queries might not be recognizable to human annotators . Stream-based methods are typically used for streaming data (as the name implies), because they scan the data sequentially and make individual decisions for each instance. Because they do not consider the data as a whole, stream-based selective sampling methods are typically less effective than pool-based methods in any situation in which data can be assembled ahead of time. Given the limited focus of membership query synthesis and stream-based selective sampling, and the broad focus of pool-based sampling, a substantial proportion of AL methods discussed in the literature are pool-based BIB017 . Not surprisingly, this is also true for applications of AL to (geo) text and image classification. Given this overall emphasis in the literature, and within the subset directed to geospatial applications, the focus in the remainder of the paper is on pool-based sampling, with the alternatives mentioned only to highlight particular recent innovations.
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Batch-Mode AL <s> The goal of active learning is to select the most informative examples for manual labeling. Most of the previous studies in active learning have focused on selecting a single unlabeled example in each iteration. This could be inefficient since the classification model has to be retrained for every labeled example. In this paper, we present a framework for "batch mode active learning" that applies the Fisher information matrix to select a number of informative examples simultaneously. The key computational challenge is how to efficiently identify the subset of unlabeled examples that can result in the largest reduction in the Fisher information. To resolve this challenge, we propose an efficient greedy algorithm that is based on the property of submodular functions. Our empirical studies with five UCI datasets and one real-world medical image classification show that the proposed batch mode active learning algorithm is more effective than the state-of-the-art algorithms for active learning. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Batch-Mode AL <s> Support vector machine (SVM) active learning is one popular and successful technique for relevance feedback in content-based image retrieval (CBIR). Despite the success, conventional SVM active learning has two main drawbacks. First, the performance of SVM is usually limited by the number of labeled examples. It often suffers a poor performance for the small-sized labeled examples, which is the case in relevance feedback. Second, conventional approaches do not take into account the redundancy among examples, and could select multiple examples that are similar (or even identical). In this work, we propose a novel scheme for explicitly addressing the drawbacks. It first learns a kernel function from a mixture of labeled and unlabeled data, and therefore alleviates the problem of small-sized training data. The kernel will then be used for a batch mode active learning method to identify the most informative and diverse examples via a min-max framework. Two novel algorithms are proposed to solve the related combinatorial optimization: the first approach approximates the problem into a quadratic program, and the second solves the combinatorial optimization approximately by a greedy algorithm that exploits the merits of submodular functions. Extensive experiments with image retrieval using both natural photo images and medical images show that the proposed algorithms are significantly more effective than the state-of-the-art approaches. A demo is available at http://msm.cais.ntu.edu.sg/LSCBIR/. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Batch-Mode AL <s> Abstract Classification of evolving data streams is a challenging task, which is suitably tackled with online learning approaches. Data is processed instantly requiring the learning machinery to (self-)adapt by adjusting its model. However for high velocity streams, it is usually difficult to obtain labeled samples to train the classification model. Hence, we propose a novel o nline b atch-based a ctive l earning algorithm (OBAL) to perform the labeling. OBAL is developed for crisis management applications where data streams are generated by the social media community. OBAL is applied to discriminate relevant from irrelevant social media items. 
An emergency management user will be interactively queried to label chosen items. OBAL exploits the boundary items for which it is highly uncertain about their class and makes use of two classifiers: k-Nearest Neighbors (kNN) and Support Vector Machine (SVM). OBAL is equipped with a labeling budget and a set of uncertainty strategies to identify the items for labeling. An extensive analysis is carried out to show OBAL’s performance, the sensitivity of its parameters, and the contribution of the individual uncertainty strategies. Two types of datasets are used: synthetic and social media datasets related to crises. The empirical results illustrate that OBAL has a very good discrimination power. <s> BIB003
In most AL research, queries are selected serially (i.e., labeling one instance at a time). This is not practical when training a model is slow or expensive. By contrast, batch-mode (also written "batch mode") AL allows the machine learner to query a batch (i.e., a group) of unlabeled instances to be labeled simultaneously, which is better suited to parallel labeling environments or to models with slow training procedures, and thereby accelerates learning. In batch-mode AL, the number of instances in each query group is called the batch size. For recent overviews of batch-mode AL, see BIB003 BIB001 BIB002 .
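As a rough illustration of batch-mode selection, the sketch below picks a batch that balances informativeness (uncertainty) with diversity, echoing, in a simplified greedy form, the combined criteria studied in BIB001 BIB002 ; the uncertainty-times-distance score and the Euclidean metric are assumptions for the example, not the algorithms of those papers.

```python
# Minimal sketch of batch-mode query selection: greedily assemble a batch of
# `batch_size` instances that are both uncertain and far from one another.
# The scoring rule and distance metric are illustrative assumptions.
import numpy as np

def select_batch(model, X_unlabeled, batch_size=10):
    uncertainty = 1.0 - model.predict_proba(X_unlabeled).max(axis=1)
    selected = [int(np.argmax(uncertainty))]          # seed with the most uncertain
    while len(selected) < batch_size:
        # Distance from every candidate to its nearest already-selected instance.
        nearest = np.min(
            [np.linalg.norm(X_unlabeled - X_unlabeled[i], axis=1) for i in selected],
            axis=0)
        score = uncertainty * nearest                 # uncertain AND diverse
        score[selected] = -np.inf                     # never re-pick a batch member
        selected.append(int(np.argmax(score)))
    return selected                                   # indices to label in parallel
```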
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> We propose an algorithm called query by commitee , in which a committee of students is trained on the same data set. The next query is chosen according to the principle of maximal disagreement . The algorithm is studied for two toy models: the high-low game and perceptron learning of another perceptron. As the number of queries goes to infinity, the committee algorithm yields asymptotically finite information gain. This leads to generalization error that decreases exponentially with the number of examples. This in marked contrast to learning from randomly chosen inputs, for which the information gain approaches zero and the generalization error decreases with a relatively slow inverse power law. We suggest that asymptotically finite information gain may be an important characteristic of good query algorithms. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> We describe and evaluate experimentally a method for clustering words according to their distribution in particular syntactic contexts. Words are represented by the relative frequency distributions of contexts in which they appear, and relative entropy between those distributions is used as the similarity measure for clustering. Clusters are represented by average context distributions derived from the given words according to their probabilities of cluster membership. In many cases, the clusters can be thought of as encoding coarse sense distinctions. Deterministic annealing is used to find lowest distortion sets of clusters: as the annealing parameter increases, existing clusters become unstable and subdivide, yielding a hierarchical "soft" clustering of the data. Clusters are used as the basis for class models of word coocurrence, and the models evaluated with respect to held-out test data. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> The ability to cheaply train text classifiers is critical to their use in information retrieval, content analysis, natural language processing, and other tasks involving data which is partly or fully textual. An algorithm for sequential sampling during machine learning of statistical classifiers was developed and tested on a newswire text categorization task. This method, which we call uncertainty sampling, reduced by as much as 500-fold the amount of training data that would have to be manually classified to achieve a given level of effectiveness. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Abstract In many real-world learning tasks, it is expensive to acquire a sufficient number of labeled examples for training. This paper proposes a general method for efficiently training probabilistic classifiers, by selecting for training only the more informative examples in a stream of unlabeled examples. The method, committee-based sampling , evaluates the informativeness of an example by measuring the degree of disagreement between several model variants. These variants (the committee) are drawn randomly from a probability distribution conditioned by the training set selected so far (Monte-Carlo sampling). 
The method is particularly attractive because it evaluates the expected information gain from a training example implicitly, making the model both easy to implement and generally applicable. We further show how to apply committee-based sampling for training Hidden Markov Model classifiers, which are commonly used for complex classification tasks. The method was implemented and tested for the task of tagging words in natural language sentences with parts-of-speech. Experimental evaluation of committee-based sampling versus standard sequential training showed a substantial improvement in training efficiency. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Information extraction from HTML documents requires a classifier capable of assigning semantic labels to the words or word sequences to be extracted. If completely labeled documents are available for training, well-known Markov model techniques can be used to learn such classifiers. In this paper, we consider the more challenging task of learning hidden Markov models (HMMs) when only partially (sparsely) labeled documents are available for training. We first give detailed account of the task and its appropriate loss function, and show how it can be minimized given an HMM. We describe an EM style algorithm for learning HMMs from partially labeled data. We then present an active learning algorithm that selects "difficult" unlabeled tokens and asks the user to label them. We study empirically by how much active learning reduces the required data labeling effort, or increases the quality of the learned model achievable with a given amount of user effort. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> In many real world applications, active selection of training examples can significantly reduce the number of labelled training examples to learn a classification function. Different strategies in the field of support vector machines have been proposed that iteratively select a single new example from a set of unlabelled examples, query the corresponding class label and then perform retraining of the current classifier. However, to reduce computational time for training, it might be necessary to select batches of new training examples instead of single examples. Strategies for single examples can be extended straightforwardly to select batches by choosing the h > 1 examples that get the highest values for the individual selection criterion. We present a new approach that is especially designed to construct batches and incorporates a diversity measure. It has low computational requirements making it feasible for large scale problems with several thousands of examples. Experimental results indicate that this approach provides a faster method to attain a level of generalization accuracy in terms of the number of labelled examples. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> We analyze the “query by committee” algorithm, a method for filtering informative queries from a random stream of inputs. We show that if the two-member committee algorithm achieves information gain with positive lower bound, then the prediction error decreases exponentially with the number of queries. We show that, in particular, this exponential decrease holds for query learning of perceptrons. 
<s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> The paper is concerned with two-class active learning. While the common approach for collecting data in active learning is to select samples close to the classification boundary, better performance can be achieved by taking into account the prior data distribution. The main contribution of the paper is a formal framework that incorporates clustering into active learning. The algorithm first constructs a classifier on the set of the cluster representatives, and then propagates the classification decision to the other samples via a local noise model. The proposed model allows to select the most representative samples as well as to avoid repeatedly labeling samples in the same cluster. During the active learning process, the clustering is adjusted using the coarse-to-fine strategy in order to balance between the advantage of large clusters and the accuracy of the data representation. The results of experiments in image databases show a better performance of our algorithm compared to the current methods. <s> BIB008 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Interactively learning from a small sample of unlabeled examples is an enormously challenging task. Relevance feedback and more recently active learning are two standard techniques that have received much attention towards solving this interactive learning problem. How to best utilize the user's effort for labeling, however, remains unanswered. It has been shown in the past that labeling a diverse set of points is helpful, however, the notion of diversity has either been dependent on the learner used, or computationally expensive. In this paper, we intend to address these issues by proposing a fundamentally motivated, information-theoretic view of diversity and its use in a fast, non-degenerate active learning-based relevance feedback setting. Comparative testing and results are reported and thoughts for future work are presented. <s> BIB009 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> With the advent and proliferation of digital cameras and computers, the number of digital photos created and stored by consumers has grown extremely large. This created increasing demand for image retrieval systems to ease interaction between consumers and personal media content. Active learning is a widely used user interaction model for retrieval systems, which learns the query concept by asking users to label a number of images at each iteration. In this paper, we study sampling strategies for active learning in personal photo retrieval. In order to reduce human annotation efforts in a content-based image retrieval setting, we propose using multiple sampling criteria for active learning: informativeness, diversity and representativeness. Our experimental results show that by combining multiple sampling criteria in active learning, the performance of personal photo retrieval system can be significantly improved. <s> BIB010 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Supervised and semi-supervised learning are frequently applied methods to annotate videos by mapping low-level features into high-level semantic concepts.
Though they work well for certain concepts, the performance is still far from reality due to the large gap between the features and the semantics. The main constraint of these methods is that the information contained in a limited number of labeled training samples can hardly represent the distributions of the semantic concepts. In this paper, we propose a novel semi-automatic video annotation framework, active learning with clustering tuning, to tackle the disadvantages of current video annotation solutions. In this framework, firstly an initial training set is constructed based on clustering the entire video dataset. And then an SVM-based active learning scheme is proposed, which aims at maximizing the margin of the SVM classifier by manually selectively labeling a small set of samples. Moreover, in each round of active learning, we tune/refine the clustering results based on the prediction results of the current stage, which is beneficial for selecting the most informative samples in the active learning process, as well as helps further improve the final annotation accuracy in the post-processing step. Experimental results show that the proposed scheme is superior to typical active learning algorithms in terms of both annotation accuracy and stability. <s> BIB011 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Relevance feedback, which uses the terms in relevant documents to enrich the user's initial query, is an effective method for improving retrieval performance. An associated key research problem is the following: Which documents to present to the user so that the user's feedback on the documents can significantly impact relevance feedback performance. This paper views this as an active learning problem and proposes a new algorithm which can efficiently maximize the learning benefits of relevance feedback. This algorithm chooses a set of feedback documents based on relevancy, document diversity and document density. Experimental results show a statistically significant and appreciable improvement in the performance of our new approach over the existing active feedback methods. <s> BIB012 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Active learning has been demonstrated to be an effective approach to reducing human labeling effort in multimedia annotation tasks. However, most of the existing active learning methods for video annotation are studied in a relatively simple context where concepts are sequentially annotated with fixed effort and only a single modality is applied. However, we usually have to deal with multiple modalities, and sequentially annotating concepts without preference cannot suitably assign annotation effort. To address these two issues, in this paper we propose a multi-concept multi-modality active learning method for video annotation in which multiple concepts and multiple modalities can be simultaneously taken into consideration. In each round of active learning, this method selects the concept that is expected to get the highest performance gain and a batch of suitable samples to be annotated for this concept. Then, a graph-based semi-supervised learning is conducted on each modality for the selected concept. The proposed method is able to sufficiently explore the human effort by considering both the learnabilities of different concepts and the potentials of different modalities.
Experimental results on the TRECVID 2005 benchmark have demonstrated its effectiveness and efficiency. <s> BIB013 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Active learning is well-suited to many problems in natural language processing, where unlabeled data may be abundant but annotation is slow and expensive. This paper aims to shed light on the best active learning approaches for sequence labeling tasks such as information extraction and document segmentation. We survey previously used query selection strategies for sequence models, and propose several novel algorithms to address their shortcomings. We also conduct a large-scale empirical comparison using multiple corpora, which demonstrates that our proposed methods advance the state of the art. <s> BIB014 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Active learning is a supervised machine learning technique in which the learner is in control of the data used for learning. That control is utilized by the learner to ask an oracle, typically a human with extensive knowledge of the domain at hand, about the classes of the instances for which the model learned so far makes unreliable predictions. The active learning process takes as input a set of labeled examples, as well as a larger set of unlabeled examples, and produces a classifier and a relatively small set of newly labeled data. The overall goal is to create as good a classifier as possible, without having to mark-up and supply the learner with more data than necessary. The learning process aims at keeping the human annotation effort to a minimum, only asking for advice where the training utility of the result of such a query is high. Active learning has been successfully applied to a number of natural language processing tasks, such as information extraction, named entity recognition, text categorization, part-of-speech tagging, parsing, and word sense disambiguation. This report is a literature survey of active learning from the perspective of natural language processing. <s> BIB015 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> One of the principal bottlenecks in applying learning techniques to classification problems is the large amount of labeled training data required. Especially for images and video, providing training data is very expensive in terms of human time and effort. In this paper we propose an active learning approach to tackle the problem. Instead of passively accepting random training examples, the active learning algorithm iteratively selects unlabeled examples for the user to label, so that human effort is focused on labeling the most "useful" examples. Our method relies on the idea of uncertainty sampling, in which the algorithm selects unlabeled examples that it finds hardest to classify. Specifically, we propose an uncertainty measure that generalizes margin-based uncertainty to the multi-class case and is easy to compute, so that active learning can handle a large number of classes and large data sizes efficiently. We demonstrate results for letter and digit recognition on datasets from the UCI repository, object recognition results on the Caltech-101 dataset, and scene categorization results on a dataset of 13 natural scene categories.
The proposed method gives large reductions in the number of training examples required over random selection to achieve similar classification accuracy, with little computational overhead. <s> BIB016 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Active learning has been proven a reliable strategy to reduce manual efforts in training data labeling. Such strategies incorporate the user as oracle: the classifier selects the most appropriate example and the user provides the label. While this approach is tailored towards the classifier, more intelligent input from the user may be beneficial. For instance, given only one example at a time users are hardly able to determine whether this example is an outlier or not. In this paper we propose user-based visually-supported active learning strategies that allow the user to do both: selecting and labeling examples given a trained classifier. While labeling is straightforward, selection takes place using an interactive visualization of the classifier's a-posteriori output probabilities. By simulating different user selection strategies we show that user-based active learning outperforms uncertainty-based sampling methods and yields a more robust approach on different data sets. The obtained results point towards the potential of combining active learning strategies with results from the field of information visualization. <s> BIB017 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership and then the user is asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee, large margin, and posterior probability-based. For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a good architecture are provided for new and/or inexperienced users. <s> BIB018 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Active learning is a machine learning technique that selects the most informative samples for labeling and uses them as training data. It has been widely explored in the multimedia research community for its capability of reducing human annotation effort. In this article, we provide a survey on the efforts of leveraging active learning in multimedia annotation and retrieval. We mainly focus on two application domains: image/video annotation and content-based image retrieval. We first briefly introduce the principle of active learning and then we analyze the sample selection criteria.
We categorize the existing sample selection strategies used in multimedia annotation and retrieval into five criteria: risk reduction, uncertainty, diversity, density and relevance. We then introduce several classification models used in active learning-based multimedia annotation and retrieval, including semi-supervised learning, multilabel learning and multiple instance learning. We also provide a discussion on several future trends in this research direction. In particular, we discuss cost analysis of human annotation and large-scale interactive multimedia annotation. <s> BIB019 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> In this letter, we present a novel batch-mode active learning technique for solving multiclass classification problems by using the support vector machine classifier with the one-against-all architecture. The uncertainty of each unlabeled sample is measured by defining a criterion which not only considers the smallest distance to the decision hyperplanes but also takes into account the distances to other hyperplanes if the sample is within the margin of their decision boundaries. To select a batch of the most uncertain samples from all over the decision region, the uncertain regions of the classifiers are partitioned into multiple parts depending on the number of geometrical margins of binary classifiers passing on them. Then, a balanced number of most uncertain samples are selected from each part. To minimize the redundancy and keep the diversity among these samples, the kernel k-means clustering algorithm is applied to the set of uncertain samples, and the representative sample (medoid) from each cluster is selected for labeling. The effectiveness of the proposed method is evaluated by comparing it with other batch-mode active learning techniques existing in the literature. Experimental results on two different remote sensing data sets confirmed the effectiveness of the proposed technique. <s> BIB020 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Most active learning approaches select either informative or representative unlabeled instances to query their labels. Although several active learning algorithms have been proposed to combine the two criteria for query selection, they are usually ad hoc in finding unlabeled instances that are both informative and representative. We address this challenge by a principled approach, termed QUIRE, based on the min-max view of active learning. The proposed approach provides a systematic way for measuring and combining the informativeness and representativeness of an instance. Extensive experimental results show that the proposed QUIRE approach outperforms several state-of-the-art active learning approaches. <s> BIB021 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> Active learning methods select informative instances to effectively learn a suitable classifier. Uncertainty sampling, a frequently utilized active learning strategy, selects instances about which the model is uncertain but it does not consider the reasons for why the model is uncertain. In this article, we present an evidence-based framework that can uncover the reasons for why a model is uncertain on a given instance.
Using the evidence-based framework, we discuss two reasons for uncertainty of a model: a model can be uncertain about an instance because it has strong, but conflicting evidence for both classes or it can be uncertain because it does not have enough evidence for either class. Our empirical evaluations on several real-world datasets show that distinguishing between these two types of uncertainties has a drastic impact on the learning efficiency. We further provide empirical and analytical justifications as to why distinguishing between the two uncertainties matters. <s> BIB022 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL Query Strategies <s> How can we find a general way to choose the most suitable samples for training a classifier? Even with very limited prior information? Active learning, which can be regarded as an iterative optimization procedure, plays a key role to construct a refined training set to improve the classification performance in a variety of applications, such as text analysis, image recognition, social network modeling, etc. Although combining representativeness and informativeness of samples has been proven promising for active sampling, state-of-the-art methods perform well under certain data structures. Then can we find a way to fuse the two active sampling criteria without any assumption on data? This paper proposes a general active learning framework that effectively fuses the two criteria. Inspired by a two-sample discrepancy problem, triple measures are elaborately designed to guarantee that the query samples not only possess the representativeness of the unlabeled data but also reveal the diversity of the labeled data. Any appropriate similarity measure can be employed to construct the triple measures. Meanwhile, an uncertain measure is leveraged to generate the informativeness criterion, which can be carried out in different ways. Rooted in this framework, a practical active learning algorithm is proposed, which exploits a radial basis function together with the estimated probabilities to construct the triple measures and a modified best-versus-second-best strategy to construct the uncertain measure, respectively. Experimental results on benchmark datasets demonstrate that our algorithm consistently achieves superior performance over the state-of-the-art active learning algorithms. <s> BIB023
Query strategies are central in AL methods; they are used to identify those training examples that can contribute most to the learning performance of ML models. Various AL query strategies have been proposed, defined, and discussed in several surveys to improve over random sample selection BIB018 BIB015 BIB019 . Here we highlight the most commonly used query strategies in AL: (1) uncertainty sampling, (2) diversity, (3) density, and (4) relevance. Uncertainty sampling BIB003 picks the instances that the (machine) learner model is most uncertain about. Due to its simplicity, intuitiveness, and empirical success in many domains, uncertainty sampling is the most commonly used strategy. Though uncertainty sampling has many limitations, such as sensitivity to noise and outliers, it still works surprisingly well BIB022 . The heuristic of selecting the most uncertain instances stems from the fact that in many learning algorithms the essential classification boundary can be preserved based solely on the nearby samples, and the samples that are far from the boundary can be viewed as redundant. For binary classification, the samples closest to the classification boundary are selected. When multiple learners exist, a widely applied strategy is selecting the samples that have the maximum disagreement among the learners BIB007 BIB001 . The disagreement of multiple learners can also be viewed as an uncertainty measure. This query strategy is called query-by-committee (QBC) BIB001 . A committee of ML models is trained on the same data set. Each committee member then votes on the labelings of query candidates. The most informative query is the instance on which they most disagree. Two main disagreement measures have been proposed in the literature: (1) vote entropy BIB004 and (2) average Kullback-Leibler (KL) divergence . Vote entropy compares only the committee members' top-ranked class , whereas the KL divergence metric measures the difference between two probability distributions. KL divergence to the mean BIB002 is an average of the KL divergence between each distribution and the mean of all the distributions. Thus, this disagreement measure picks the instance with the largest average difference between the label distributions of any committee member and the consensus as the most informative query . Other commonly used uncertainty sampling variants include least confident, margin sampling, and entropy. Least confident is an uncertainty sampling variant for multi-class classification (Appendix A.4.2), where the machine learner queries the instance whose prediction is the least confident (as the name implies). The least confident strategy only considers information about the most probable label, and thus, it "throws away" information about the remaining label distribution. Margin sampling BIB005 can overcome this drawback of the least confident strategy by considering the posterior of the second most likely label . Entropy is an uncertainty sampling variant that uses entropy as an uncertainty measure. Entropy-based uncertainty sampling has achieved strong empirical performance across many tasks . A detailed discussion about when each variant of uncertainty sampling should be used is provided in .
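To make these measures concrete, the following minimal Python sketch (our own illustration; none of this code is taken from the cited works) computes the three uncertainty sampling variants and the two QBC disagreement measures discussed above, given predicted class probabilities for the unlabeled pool:

```python
import numpy as np

def least_confident(probs):
    # probs: (n_samples, n_classes) predicted class probabilities
    return 1.0 - probs.max(axis=1)

def margin(probs):
    # Negative margin between the two most probable labels;
    # a larger score means a smaller margin, i.e., more uncertainty.
    part = np.sort(probs, axis=1)
    return -(part[:, -1] - part[:, -2])

def entropy(probs):
    # Shannon entropy of the predicted label distribution
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def vote_entropy(votes, n_classes):
    # QBC disagreement from hard votes; votes: (n_members, n_samples)
    n_members = votes.shape[0]
    counts = np.stack([(votes == c).sum(axis=0) for c in range(n_classes)], axis=1)
    v = counts / n_members
    return -(v * np.log(v + 1e-12)).sum(axis=1)

def kl_to_mean(member_probs):
    # QBC disagreement from soft votes: average KL divergence between each
    # member's label distribution and the committee mean (consensus).
    # member_probs: (n_members, n_samples, n_classes)
    consensus = member_probs.mean(axis=0, keepdims=True)
    kl = (member_probs * np.log((member_probs + 1e-12) / (consensus + 1e-12))).sum(axis=2)
    return kl.mean(axis=0)

# The query is the unlabeled instance maximizing the chosen score, e.g.:
# query_idx = np.argmax(entropy(model.predict_proba(X_unlabeled)))
```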
The second query strategy, based on a diversity criterion BIB006 , was first investigated in batch-mode AL (Section 3.1.4), where Brinker BIB006 used diversity in AL with SVMs. Diversity concerns the capability of the learning model to avoid selecting query candidates that rank well according to the heuristic (i.e., query strategy), but are redundant with each other. More specifically, a diversity-based query strategy is used to select those unlabeled samples that are far from the selected set and thus can reduce redundancy within the selected samples. Diversity has been studied extensively for margin-based heuristics, where the base margin sampling heuristic is constrained using a measure of diversity between the candidates. An algorithm for a general diversity-based heuristic can be found in BIB018 . In many applications, we need to select a batch of samples instead of just one in an AL iteration. For example, updating (i.e., retraining) a model may need extensive computation, and thus labeling just one sample each time will make the AL process quite slow. Joshi et al. BIB016 proposed that the selected samples in a batch should be diverse. Dagli et al. BIB009 and Wu et al. BIB010 emphasized that the diversity criterion should not only be investigated in batch-mode but also be considered on all labeled samples, to avoid having the selected samples constrained to an (increasingly) restricted area. The third strategy is to select samples using a density criterion BIB014 , which favors samples within regions of high density. The main argument for a density-based criterion is that informative instances should not only be those that are uncertain, but also those that are "representative" of the underlying distribution (i.e., inhabit dense regions of the instance space). In density-based selection, the query candidates are selected from dense areas of the feature space because those instances are considered the most representative BIB021 BIB023 BIB010 . The representativeness of an instance can be evaluated by how many instances among the unlabeled data are similar to it. Density-based selection of candidates can be used to initialize an AL model when no labels are available at all. Wu et al. BIB010 proposed a representativeness measure for each sample according to the distance to its nearby samples. Another strategy uses clustering-based methods BIB008 BIB011 , which first group the samples and then select samples at and around the cluster centers. Qi et al. BIB011 combine AL with clustering, and their method can refine the clusters with merging and splitting operations after each iteration, which is beneficial for selecting the most informative samples in the AL process, and also helps further improve the final annotation accuracy in the post-processing step. The fourth strategy, relevance criterion, is usually applied in multi-label classification tasks (Appendix A.4.3). Based on a relevance criterion, those samples that have the highest probability of being relevant to a certain class are selected BIB019 . This strategy fosters the identification of positive examples for a class. Ayache and Quénot have conducted an empirical study on different sample selection strategies for AL for indexing concepts in videos. Their experimental results clearly show that the relevance criterion can achieve better performance than an uncertainty criterion for some concepts. It is difficult to directly compare these criteria. Seifert and Granitzer's experiments BIB017 showed that the benefits of these strategies depend on specific tasks, data sets, and classifiers (Appendix A.3).
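As one concrete way to operationalize the density criterion, the sketch below (again our own illustration; the function name and the beta weighting parameter are ours) weights an entropy-based uncertainty score by each candidate's average similarity to the rest of the unlabeled pool, so that uncertain samples lying in dense, representative regions are preferred:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def information_density(probs, X_unlabeled, beta=1.0):
    # Entropy-based uncertainty weighted by average similarity to the
    # unlabeled pool; beta controls the relative weight of density.
    uncertainty = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    density = cosine_similarity(X_unlabeled).mean(axis=1)
    return uncertainty * density ** beta

# query_idx = np.argmax(information_density(model.predict_proba(X_u), X_u))
```

With beta = 0 this reduces to plain entropy-based uncertainty sampling; the density term alone (uncertainty ignored) can also serve to pick representative seeds when no labels are available at all, as noted above.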
Wang et al. BIB019 provided several general suggestions: (1) for binary classification problems, applying a relevance criterion may achieve the best results for some extremely unbalanced cases where positive samples are much less frequent than negative ones, (2) in batch-mode AL (Section 3.1.4), integrating a diversity criterion will be helpful for computational efficiency, (3) in many cases, these criteria are combined explicitly or implicitly, and (4) the diversity and density criteria are normally not used individually (because they are not directly associated with classification results) and most commonly they are used to enhance the uncertainty criterion. The uncertainty criterion relates to the confidence of an ML algorithm in correctly classifying the considered sample, while the diversity criterion aims at selecting a set of unlabeled samples that are as diverse (distant from one another) as possible, thus reducing the redundancy among the selected samples. The combination of the two criteria results in the selection of the potentially most informative set (Section 3.1.1) of samples at each iteration of the AL process. Patra et al. BIB020 combined the uncertainty and diversity criteria, proposing a batch-mode AL (Section 3.1.4) method for multi-class classification (Appendix A.4.2) with SVM classifiers. In the uncertainty step, m samples are selected from all over the uncertain regions of the classifiers. In the diversity step, a batch of h (m > h > 1) samples that are diverse from each other is chosen among the m samples that are selected in the uncertainty step. Xu et al. BIB012 also employed SVM-based batch-mode AL, and their method additionally incorporated diversity and density measures. To improve classifier performance for interactive video annotation, Wang et al. BIB013 have combined uncertainty, diversity, density and relevance for sample selection in AL and named the comprehensive strategy effectiveness.
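The two-step logic of Patra et al. BIB020 can be sketched as follows; note that, for brevity, we substitute a greedy farthest-first pass for their kernel k-means clustering step, so this is an illustrative approximation of the uncertainty-then-diversity pattern rather than their exact algorithm:

```python
import numpy as np
from sklearn.metrics.pairwise import euclidean_distances

def batch_uncertainty_diversity(probs, X_unlabeled, m=50, h=10):
    # Step 1 (uncertainty): shortlist the m samples with the smallest
    # margin between the two most probable classes.
    part = np.sort(probs, axis=1)
    shortlist = np.argsort(part[:, -1] - part[:, -2])[:m]
    # Step 2 (diversity): greedily pick h shortlist members that are far
    # from each other (farthest-first traversal).
    dist = euclidean_distances(X_unlabeled[shortlist])
    chosen = [0]  # seed with the single most uncertain sample
    while len(chosen) < h:
        d = dist[:, chosen].min(axis=1)
        d[chosen] = -np.inf  # never re-pick an already selected sample
        chosen.append(int(np.argmax(d)))
    return shortlist[chosen]  # indices into the unlabeled pool
```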
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Recent and Novel AL Methods <s> Obtaining labels can be expensive or time-consuming, but unlabeled data is often abundant and easier to obtain. Most learning tasks can be made more efficient, in terms of labeling cost, by intelligently choosing specific unlabeled instances to be labeled by an oracle. The general problem of optimally choosing these instances is known as active learning. As it is usually set in the context of supervised learning, active learning relies on a single oracle playing the role of a teacher. We focus on the multiple annotator scenario where an oracle, who knows the ground truth, no longer exists; instead, multiple labelers, with varying expertise, are available for querying. This paradigm posits new challenges to the active learning scenario. We can now ask which data sample should be labeled next and which annotator should be queried to benefit our learning model the most. In this paper, we employ a probabilistic model for learning from multiple annotators that can also learn the annotator expertise even when their expertise may not be consistently accurate across the task domain. We then focus on providing a criterion and formulation that allows us to select both a sample and the annotator/s to query the labels from. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Recent and Novel AL Methods <s> We present a simple and yet effective approach that can incorporate rationales elicited from annotators into the training of any off-the-shelf classifier. We show that our simple approach is effective for multinomial naïve Bayes, logistic regression, and support vector machines. We additionally present an active learning method tailored specifically for the learning with rationales framework. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Recent and Novel AL Methods <s> Active learning methods select informative instances to effectively learn a suitable classifier. Uncertainty sampling, a frequently utilized active learning strategy, selects instances about which the model is uncertain but it does not consider the reasons for why the model is uncertain. In this article, we present an evidence-based framework that can uncover the reasons for why a model is uncertain on a given instance. Using the evidence-based framework, we discuss two reasons for uncertainty of a model: a model can be uncertain about an instance because it has strong, but conflicting evidence for both classes or it can be uncertain because it does not have enough evidence for either class. Our empirical evaluations on several real-world datasets show that distinguishing between these two types of uncertainties has a drastic impact on the learning efficiency. We further provide empirical and analytical justifications as to why distinguishing between the two uncertainties matters. <s> BIB003
Yan et al. BIB001 , Sharma et al. BIB002 , and Sharma and Bilgic BIB003 introduced some very recent and novel AL-based methods. Typical AL algorithms rely on a single annotator (i.e., oracle) who serves in the role of a "teacher". By contrast, the following multiple annotator AL scenario poses new challenges: an oracle, who knows the ground truth, does not exist, and multiple annotators, with varying expertise, are available for querying. Such scenarios are not uncommon in the real world, for example, decision making for emergency management. To bridge the gap, Yan et al. BIB001 focused on an AL scenario with multiple crowdsourcing annotators. The machine learner asks which data sample should be labeled next and which annotator should be queried to improve the performance of the classifier the most. Specifically, Yan et al. employed a probabilistic model to learn from multiple annotators; the model can also learn the annotator's expertise even when their expertise may not be consistently accurate across the task domain. The authors provided an optimization formulation that allows the machine learner to select the most uncertain sample and the most appropriate annotator to query the labels. Their experiments on multiple annotator text data and on three UCI benchmark data sets showed that their AL approach combined with information from multiple annotators improves the learning performance. One of the bottlenecks in eliciting domain knowledge from annotators is that the traditional supervised learning approaches (Appendix A.2.1) cannot handle the elicited rich feedback from domain experts. To address the gap, many methods have been developed, but they are often classifier-specific BIB002 ; these methods do not transfer directly from one domain to another. To further address this problem, Sharma et al. BIB002 proposed an AL approach that can incorporate rationales elicited from annotators into the training of any existing classifier for text classification (Appendix A.5). Their experimental results using four text categorization datasets showed that their approach is effective for incorporating rationales into the learning of multinomial naïve Bayes, logistic regression, and SVM classifiers. Traditional uncertainty sampling does not consider the reasons why a (machine) learner is uncertain about the selected instances. Sharma and Bilgic BIB003 addressed this gap by using an evidence-based framework to do so. Specifically, the authors focused on two types of uncertainty: conflicting-evidence uncertainty and insufficient-evidence uncertainty. In the former type of uncertainty, the model is uncertain due to the presence of strong but conflicting evidence for each class; in the latter type, the model is uncertain due to insufficient evidence for either class. Their empirical evaluations on several real-world datasets using naïve Bayes for binary classification tasks showed that distinguishing between these two types of uncertainties has a drastic impact on the learning efficiency: conflicting-evidence uncertainty provides the most benefit for learning, substantially outperforming both traditional uncertainty sampling and insufficient-evidence uncertainty sampling.
The authors, in their explanation of these results, showed that the instances that are uncertain due to conflicting evidence have lower density in the labeled set, compared to instances that are uncertain due to insufficient evidence; that is, there is less support in the training data for the perceived conflict than for the insufficiency of the evidence.
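The distinction can be made concrete for a binary naïve Bayes (or other linear) model, where each feature of an instance contributes a signed log-odds term to the prediction. The sketch below is our own simplified reading of the evidence-based idea in BIB003, not the authors' code; in particular, the strength threshold tau is an illustrative parameter we introduce here:

```python
import numpy as np

def uncertainty_type(log_odds_contribs, tau=1.0):
    # log_odds_contribs: per-feature signed log-odds terms for one instance
    # (positive = evidence for the positive class, negative = for the other).
    c = np.asarray(log_odds_contribs)
    e_pos = c[c > 0].sum()    # total evidence for the positive class
    e_neg = -c[c < 0].sum()   # total evidence for the negative class
    # Among instances where the two sides are nearly balanced (uncertain),
    # strong evidence on both sides = conflicting; weak on both = insufficient.
    return "conflicting" if min(e_pos, e_neg) > tau else "insufficient"
```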
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> This is the book form of the Research and Development Agenda for Visual Analytics to be published by IEEE in 2005. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Researchers have made significant progress in disciplines such as scientific and information visualization, statistically based exploratory and confirmatory analysis, data and knowledge representations, and perceptual and cognitive sciences. Although some research is being done in this area, the pace at which new technologies and technical talents are becoming available is far too slow to meet the urgent need. National Visualization and Analytics Center's goal is to advance the state of the science to enable analysts to detect the expected and discover the unexpected from massive and dynamic information streams and databases consisting of data of multiple types and from multiple sources, even though the data are often conflicting and incomplete. Visual analytics is a multidisciplinary field that includes the following focus areas: (i) analytical reasoning techniques, (ii) visual representations and interaction techniques, (iii) data representations and transformations, (iv) techniques to support production, presentation, and dissemination of analytical results. The R&D agenda for visual analytics addresses technical needs for each of these focus areas, as well as recommendations for speeding the movement of promising technologies into practice. This article provides only the concise summary of the R&D agenda. We encourage reading, discussion, and debate as well as active innovation toward the agenda for visual analysis. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> We are living in a world which faces a rapidly increasing amount of data to be dealt with on a daily basis. In the last decade, the steady improvement of data storage devices and means to create and collect data along the way influenced our way of dealing with information: Most of the time, data is stored without filtering and refinement for later use. Virtually every branch of industry or business, and any political or personal activity nowadays generate vast amounts of data. Making matters worse, the possibilities to collect and store data increase at a faster rate than our ability to use it for making decisions. However, in most applications, raw data has no value in itself; instead we want to extract the information contained in it. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Visual analytics (VA) system development started in academic research institutions where novel visualization techniques and open source toolkits were developed. Simultaneously, small software companies, sometimes spin-offs from academic research institutions, built solutions for specific application domains. In recent years we observed the following trend: some small VA companies grew exponentially; at the same time some big software vendors such as IBM and SAP started to acquire successful VA companies and integrated the acquired VA components into their existing frameworks. 
Generally, the application domains of VA systems have broadened substantially. This phenomenon is driven by the generation of more and more data of high volume and complexity, which leads to an increasing demand for VA solutions from many application domains. In this paper we survey a selection of state-of-the-art commercial VA frameworks, complementary to an existing survey on open source VA tools. From the survey results we identify several improvement opportunities as future research directions. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Visual analytics is the science of marrying interactive visualizations and analytic algorithms to support exploratory knowledge discovery in large datasets. We argue for a shift from a 'human in the loop' philosophy for visual analytics to a 'human is the loop' viewpoint, where the focus is on recognizing analysts' work processes, and seamlessly fitting analytics into that existing interactive process. We survey a range of projects that provide visual analytic support contextually in the sensemaking loop, and outline a research agenda along with future challenges. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Machine learning is one of the most important and successful techniques in contemporary computer science. It involves the statistical inference of models (such as classifiers) from data. It is often conceived in a very impersonal way, with algorithms working autonomously on passively collected data. However, this viewpoint hides considerable human work of tuning the algorithms, gathering the data, and even deciding what should be modeled in the first place. Examining machine learning from a human-centered perspective includes explicitly recognising this human work, as well as reframing machine learning workflows based on situated human working practices, and exploring the co-adaptation of humans and systems. A human-centered understanding of machine learning in human context can lead not only to more usable machine learning tools, but to new ways of framing learning computationally. This workshop will bring together researchers to discuss these issues and suggest future research questions aimed at creating a human-centered approach to machine learning. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Predictive analytics embraces an extensive range of techniques including statistical modeling, machine learning, and data mining and is applied in business intelligence, public health, disaster management and response, and many other fields. To date, visualization has been broadly used to support tasks in the predictive analytics pipeline. Primary uses have been in data cleaning, exploratory analysis, and diagnostics. For example, scatterplots and bar charts are used to illustrate class distributions and responses. More recently, extensive visual analytics systems for feature selection, incremental learning, and various prediction tasks have been proposed to support the growing use of complex models, agent-specific optimization, and comprehensive model comparison and result exploration.
Such work is being driven by advances in interactive machine learning and the desire of end-users to understand and engage with the modeling process. In this state-of-the-art report, we catalogue recent advances in the visualization community for supporting predictive analytics. First, we define the scope of predictive analytics discussed in this article and describe how visual analytics can support predictive analytics tasks in a predictive visual analytics (PVA) pipeline. We then survey the literature and categorize the research with respect to the proposed PVA pipeline. Systems and techniques are evaluated in terms of their supported interactions, and interactions specific to predictive analytics are discussed. We end this report with a discussion of challenges and opportunities for future research in predictive visual analytics. <s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Classification can be highly challenging when the dataset is extremely large, or when the training data in the underlying domain are difficult to obtain. One feasible solution to this challenge is transfer learning, which extracts the knowledge from source tasks and applies the knowledge to target tasks. Extant transfer learning schemes typically assume that the source task and the target task are similar to some degree. This assumption does not hold in certain actual applications; analysts unfamiliar with the learning strategy can be frustrated by the complicated transfer relations and the non-intuitive transfer process. This paper presents a suite of visual communication and interaction techniques to support the transfer learning process. Furthermore, a pioneering visual-assisted transfer learning methodology is proposed in the context of classification. Our solution includes a visual communication interface that allows for comprehensive exploration of the entire knowledge transfer process and the relevance among tasks. With these techniques and the methodology, the analysts can intuitively choose relevant tasks and data, as well as iteratively incorporate their experience and expertise into the analysis process. We demonstrate the validity and efficiency of our visual design and the analysis approach with examples of text classification. <s> BIB008 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Dimensionality Reduction (DR) is a core building block in visualizing multidimensional data. For DR techniques to be useful in exploratory data analysis, they need to be adapted to human needs and domain-specific problems, ideally, interactively, and on-the-fly. Many visual analytics systems have already demonstrated the benefits of tightly integrating DR with interactive visualizations. Nevertheless, a general, structured understanding of this integration is missing. To address this, we systematically studied the visual analytics and visualization literature to investigate how analysts interact with automatic DR techniques. The results reveal seven common interaction scenarios that are amenable to interactive control such as specifying algorithmic constraints, selecting relevant features, or choosing among several DR algorithms. We investigate specific implementations of visual analysis systems integrating DR, and analyze ways that other machine learning methods have been combined with DR.
Summarizing the results in a "human in the loop" process model provides a general lens for the evaluation of visual interactive DR systems. We apply the proposed model to study and classify several systems previously described in the literature, and to derive future research opportunities. <s> BIB009 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Interactive model analysis, the process of understanding, diagnosing, and refining a machine learning model with the help of interactive visualization, is very important for users to efficiently solve real-world artificial intelligence and data mining problems. Dramatic advances in big data analytics have led to a wide variety of interactive model analysis tasks. In this paper, we present a comprehensive analysis and interpretation of this rapidly developing area. Specifically, we classify the relevant work into three categories: understanding, diagnosis, and refinement. Each category is exemplified by recent influential work. Possible future research opportunities are also explored and discussed. <s> BIB010 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> To realize the full potential of machine learning in diverse real-world domains, it is necessary for model predictions to be readily interpretable and actionable for the human in the loop. Analysts, who are the users but not the developers of machine learning models, often do not trust a model because of the lack of transparency in associating predictions with the underlying data space. To address this problem, we propose Rivelo, a visual analytics interface that enables analysts to understand the causes behind predictions of binary classifiers by interactively exploring a set of instance-level explanations. These explanations are model-agnostic, treating a model as a black box, and they help analysts in interactively probing the high-dimensional binary data space for detecting features relevant to predictions. We demonstrate the utility of the interface with a case study analyzing a random forest model on the sentiment of Yelp reviews about doctors. <s> BIB011 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Visual analytics (VA) systems help data analysts solve complex problems interactively, by integrating automated data analysis and mining, such as machine learning (ML) based methods, with interactive visualizations. We propose a conceptual framework that models human interactions with ML components in the VA process, and that puts the central relationship between automated algorithms and interactive visualizations into sharp focus. The framework is illustrated with several examples and we further elaborate on the interactive ML process by identifying key scenarios where ML methods are combined with human feedback through interactive visualization. We derive five open research challenges at the intersection of ML and visualization research, whose solution should lead to more effective data analysis. <s> BIB012 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification.
With a growing number of classes, the accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation with CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it furthermore dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the latter layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvement in accuracy by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data. <s> BIB013 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> Abstract Measured and simulated data sources from the built environment are increasing rapidly. It is becoming normal to analyze data from hundreds, or even thousands of buildings at once. Mechanistic, manual analysis of such data sets is time-consuming and not realistic using conventional techniques. Thus, a significant body of literature has been generated using unsupervised statistical learning techniques designed to uncover structure and information quickly with fewer input parameters or metadata about the buildings collected. Further, visual analytics techniques are developed as aids in this process for a human analyst to utilize and interpret the results. This paper reviews publications that include the use of unsupervised machine learning techniques as applied to non-residential building performance control and analysis. The categories of techniques covered include clustering, novelty detection, motif and discord detection, rule extraction, and visual analytics. The publications apply these technologies in the domains of smart meters, portfolio analysis, operations and controls optimization, and anomaly detection. A discussion is included of key challenges resulting from this review, such as the need for better collaboration between several, disparate research communities and the lack of open, benchmarking data sets. Opportunities for improvement are presented including methods of reproducible research and suggestions for cross-disciplinary cooperation. <s> BIB014 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Visual Analytics (VA) and Human-in-the-Loop <s> We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. 
To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models. <s> BIB015
VA focuses on the integration of computational methods (e.g., analytical reasoning algorithms) and interactive visual interfaces to extend the perceptual and cognitive abilities of humans BIB001 , and thus to support human reasoning (via exploratory knowledge discovery) about complex phenomena with big and often heterogeneous data. VA emphasizes the key role of visual representations as the most effective means to convey information to the human and prompt human cognition and reasoning. VA can support at least three of the core challenges in the context of M&DL: (1) building labeled data efficiently, thus in ways that minimize the time of human annotators, (2) tuning the methods to produce the most accurate classification results with the least amount of training data and processing time, and (3) helping end users understand both the process through which classifiers are constructed and applied and the end result of their applications (thus supporting "explainable" M&DL). There is now more than a decade of research in VA, an annual conference (one of the three making up IEEE Vis), and increasing research on basic and applied VA across many domains. Thus, a comprehensive review of even the subset of VA focused on classification tasks is beyond the scope of this paper; for some recent overview papers see BIB007 BIB008 BIB014 BIB009 BIB004 BIB003 . A VA agenda is provided in BIB002 , and then for geovisual analytics and related topics in . Here, we focus specifically on the role of VA interfaces in helping analysts understand M&DL, and then in Section 3.3 we review the recent efforts that are specifically focused on the integration of VA with AL methods. After surveying a range of projects that provide VA support contextually in the sensemaking loop, Endert et al. BIB005 argued for a shift from a 'human-in-the-loop' philosophy to a 'human is the loop' viewpoint. A similar argument about the central role of analysts can be found in BIB006 , where the authors emphasized that a human-centered understanding of ML can lead not only to more usable ML tools, but to new ways of framing learning computationally. Biewald explained why human-in-the-loop computing is the future of ML, and the related need for explainable M&DL is discussed in . In related research, Liu et al. BIB010 provided a comprehensive review of using VA via interactive visualization to understand, diagnose, and refine ML models. Additional calls for a VA-enabled human-in-the-loop approach to improve the accuracy of black-box M&DL models are discussed in BIB011 BIB012 . Beyond the arguments for the potential of VA to support ML, a few recent studies have demonstrated empirically that VA-based interactive interfaces can help users understand DL architectures and thus improve the models' classification accuracy. Wongsuphasawat et al. BIB015 (the best paper of VAST 2017; IEEE VAST is the leading international conference dedicated to advances in VA) demonstrated a successful example of employing VA to visualize dataflow graphs of DL models in TensorFlow (a widely used M&DL library, released as open source by Google in 2015). The approach used TensorBoard (a VA component for TensorFlow) to help TensorFlow developers understand the underlying behavior of DL models implemented in the system. In research not so closely tied to one particular toolkit, Alsallakh et al. BIB013 presented VA methods to help inspect CNNs and improve the design and accuracy for image classification.
Their VA interface can reveal and analyze the hierarchy of similar classes in terms of internal features in CNNs. The authors found that this hierarchy not only influences the confusion patterns between the classes but also shapes the learning behavior of CNNs. Specifically, the early layers in CNNs detect features that can separate high-level groups of classes, even after a few training epochs (in M&DL, an epoch is a complete pass through all the training examples; in other words, the classifier sees all the training examples once by the end of an epoch). By contrast, the later layers require substantially more epochs to detect specialized features that can separate individual classes. Their methods can also identify various quality issues (e.g., overlapping class semantics, labeling issues, and imbalanced distributions) in the training data. In complementary work, Ming et al. developed a VA interface, RNNVis, for understanding and diagnosing RNNs for NLP tasks. Specifically, they designed and implemented an interactive co-clustering visualization of hidden state unit memories and word clouds, which allows domain users to explore, understand, and compare the internal behavior of different RNN models (i.e., regular RNN, LSTM, and GRU). In particular, the main VA interface of RNNVis contains glyph-based sentence visualization, memory chips visualization for hidden state clusters, and word clouds visualization for word clusters, as well as a detail view, which shows the model's responses to selected words such as "when" and "where" and interpretations of selected hidden units. Their evaluation, which comprised two case studies (focused on language modeling and sentiment analysis) and expert interviews, demonstrated the effectiveness of using their system to understand and compare different RNN models.
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL With VA <s> Active learning has been proven a reliable strategy to reduce manual efforts in training data labeling. Such strategies incorporate the user as oracle: the classifier selects the most appropriate example and the user provides the label. While this approach is tailored towards the classifier, more intelligent input from the user may be beneficial. For instance, given only one example at a time users are hardly able to determine whether this example is an outlier or not. In this paper we propose user-based visually-supported active learning strategies that allow the user to do both: selecting and labeling examples given a trained classifier. While labeling is straightforward, selection takes place using an interactive visualization of the classifier's a-posteriori output probabilities. By simulating different user selection strategies we show that user-based active learning outperforms uncertainty-based sampling methods and yields a more robust approach on different data sets. The obtained results point towards the potential of combining active learning strategies with results from the field of information visualization. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL With VA <s> This paper describes DUALIST, an active learning annotation paradigm which solicits and learns from labels on both features (e.g., words) and instances (e.g., documents). We present a novel semi-supervised training algorithm developed for this setting, which is (1) fast enough to support real-time interactive speeds, and (2) at least as accurate as preexisting methods for learning with mixed feature and instance labels. Human annotators in user studies were able to produce near-state-of-the-art classifiers, on several corpora in a variety of application domains, with only a few minutes of effort. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL With VA <s> Performing exhaustive searches over a large number of text documents can be tedious, since it is very hard to formulate search queries or define filter criteria that capture an analyst's information need adequately. Classification through machine learning has the potential to improve search and filter tasks encompassing either complex or very specific information needs, individually. Unfortunately, analysts who are knowledgeable in their field are typically not machine learning specialists. Most classification methods, however, require a certain expertise regarding their parametrization to achieve good results. Supervised machine learning algorithms, in contrast, rely on labeled data, which can be provided by analysts. However, the effort for labeling can be very high, which shifts the problem from composing complex queries or defining accurate filters to another laborious task, in addition to the need for judging the trained classifier's quality. We therefore compare three approaches for interactive classifier training in a user study. All of the approaches are potential candidates for the integration into a larger retrieval system. They incorporate active learning to various degrees in order to reduce the labeling effort as well as to increase effectiveness.
Two of them encompass interactive visualization for letting users explore the status of the classifier in the context of the labeled documents, as well as for judging the quality of the classifier in iterative feedback loops. We see our work as a step towards introducing user-controlled classification methods in addition to text search and filtering for increasing recall in analytics scenarios involving large corpora. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL With VA <s> Learning of classifiers to be used as filters within the analytical reasoning process leads to new and aggravates existing challenges. Such classifiers are typically trained ad-hoc, with tight time constraints that affect the amount and the quality of annotation data and, thus, also the users' trust in the classifier trained. We approach the challenges of ad-hoc training by inter-active learning, which extends active learning by integrating human experts' background knowledge to a greater extent. In contrast to active learning, not only does inter-active learning include the users' expertise by posing queries of data instances for labeling, but it also supports the users in comprehending the classifier model by visualization. Besides the annotation of manually or automatically selected data instances, users are empowered to directly adjust complex classifier models. Therefore, our model visualization facilitates the detection and correction of inconsistencies between the classifier model trained by examples and the user's mental model of the class definition. Visual feedback of the training process helps the users assess the performance of the classifier and, thus, build up trust in the filter created. We demonstrate the capabilities of inter-active learning in the domain of video visual analytics and compare its performance with the results of random sampling and uncertainty sampling of training sets. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL With VA <s> Intelligent systems that learn interactively from their end-users are quickly becoming widespread. Until recently, this progress has been fueled mostly by advances in machine learning; however, more and more researchers are realizing the importance of studying users of these systems. In this article we promote this approach and demonstrate how it can result in better user experiences and more effective learning systems. We present a number of case studies that characterize the impact of interactivity, demonstrate ways in which some existing systems fail to account for the user, and explore new ways for learning systems to interact with their users. We argue that the design process for interactive machine learning systems should involve users at all stages: explorations that reveal human interaction patterns and inspire novel interaction methods, as well as refinement stages to tune details of the interface and choose among alternatives. After giving a glimpse of the progress that has been made so far, we discuss the challenges that we face in moving the field forward. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL With VA <s> Assigning labels to data instances is a prerequisite for many machine learning tasks. Similarly, labeling is applied in visual-interactive analysis approaches. However, the strategies for creating labels often differ in the two fields.
In this paper, we study the process of labeling data instances with the user in the loop, from both the machine learning and visual-interactive perspectives. Based on a review of differences and commonalities, we propose the 'Visual-Interactive Labeling' (VIAL) process, conflating the strengths of both. We describe the six major steps of the process and highlight their related challenges. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL With VA <s> Labeled datasets are always limited, and oftentimes the quantity of labeled data is a bottleneck for data analytics. This especially affects supervised machine learning methods, which require labels for models to learn from the labeled data. Active learning algorithms have been proposed to help achieve good analytic models with limited labeling efforts, by determining which additional instance labels will be most beneficial for learning for a given model. Active learning is consistent with interactive analytics as it proceeds in a cycle in which the unlabeled data is automatically explored. However, in active learning users have no control over the instances to be labeled, and for text data, the annotation interface is usually document only. Both of these constraints seem to affect the performance of an active learning model. We hypothesize that visualization techniques, particularly interactive ones, will help to address these constraints. In this paper, we implement a pilot study of visualization in active learning for text classification, with an interactive labeling interface. We compare the results of three experiments. Early results indicate that visualization improves high-performance machine learning model building with an active learning algorithm. <s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL With VA <s> The automatic detection and classification of stance (e.g., certainty or agreement) in text data using natural language processing and machine-learning methods creates an opportunity to gain insight into the speakers' attitudes toward their own and other people's utterances. However, identifying stance in text presents many challenges related to training data collection and classifier training. To facilitate the entire process of training a stance classifier, we propose a visual analytics approach, called ALVA, for text data annotation and visualization. ALVA's interplay with the stance classifier follows an active learning strategy to select suitable candidate utterances for manual annotation. Our approach supports annotation process management and provides the annotators with a clean user interface for labeling utterances with multiple stance categories. ALVA also contains a visualization method to help analysts of the annotation and training process gain a better understanding of the categories used by the annotators. The visualization uses a novel visual representation, called CatCombos, which groups individual annotation items by the combination of stance categories. Additionally, our system makes a visualization of a vector space model available that is itself based on utterances. ALVA is already being used by our domain experts in linguistics and computational linguistics to improve the understanding of stance phenomena and to build a stance classifier for applications such as social media monitoring.
<s> BIB008 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> AL With VA <s> Labeling data instances is an important task in machine learning and visual analytics. Both fields provide a broad set of labeling strategies, whereby machine learning (and in particular active learning) follows a rather model-centered approach and visual analytics employs rather user-centered approaches (visual-interactive labeling). Both approaches have individual strengths and weaknesses. In this work, we conduct an experiment with three parts to assess and compare the performance of these different labeling strategies. In our study, we (1) identify different visual labeling strategies for user-centered labeling, (2) investigate strengths and weaknesses of labeling strategies for different labeling tasks and task complexities, and (3) shed light on the effect of using different visual encodings to guide the visual-interactive labeling process. We further compare labeling of single versus multiple instances at a time, and quantify the impact on efficiency. We systematically compare the performance of visual interactive labeling with that of active learning. Our main findings are that visual-interactive labeling can outperform active learning, given the condition that dimension reduction separates the class distributions well. Moreover, using dimension reduction in combination with additional visual encodings that expose the internal state of the learning model turns out to improve the performance of visual-interactive labeling. <s> BIB009
AL alone has already been applied successfully to many applications (Section 3.1) where labeled data are limited. Here we review some work in AL empowered by VA. In the literature, the integration of interactive VA interfaces and AL methods is also known as interactive ML BIB005. All of the reviewed work below strongly indicates that VA can play a powerful role in AL. A number of case studies were investigated by Amershi et al. BIB005 to demonstrate how interactivity results in a tight coupling between learning systems and users. The authors report three key results: (1) although AL results in faster convergence, users often get frustrated by having to answer the machine learner's long stream of questions and by not having control over the interaction; (2) users naturally want to do more than just label data; and (3) the transparency of ML models can help people provide more effective labels to build a better classifier. Several additional strong arguments about the power of combining VA with AL to leverage the relative advantages of (experienced) human expertise and computational power can be found in the literature BIB009 BIB006. In one of the more detailed accounts, Holzinger emphasized that in the health (informatics) domain, small data sets and rare events are not uncommon, and so ML-based approaches suffer from insufficient training samples. He also presented an argument for a human-in-the-loop approach with domain experts by integrating AL with VA, proposing that this integration can be beneficial in solving computationally hard health data problems (e.g., subspace clustering and protein folding), where human expertise can help to reduce an exponential search space through heuristic selection of samples. The ultimate goal of a human-in-the-loop methodology is to design and develop M&DL algorithms that can automatically learn from data and thus can improve with experience over time, eventually without any human-in-the-loop (other than to understand and act upon the results). Most existing AL research is focused on mechanisms and benefits of selecting meaningful instances for labeling from the machine learner's perspective. A drawback of this typical AL query strategy is that users cannot control which instances are selected to be labeled BIB009 BIB007; this may affect the performance of an AL model BIB007. Seifert and Granitzer BIB001 proposed user-based visually-supported AL strategies that allow the user to select and label examples posed by a machine learner. Their experiments showed that restricting human input to labeling only instances that the system picks is suboptimal. Giving users a more active role, in terms of visual selection of examples and adapting their labeling strategies on top of tailored visualization techniques, can increase labeling efficiency. In their experiments, the basis for the user's decision is a visualization of the a-posteriori probabilities of the unlabeled samples. Bernard et al. BIB006 investigated the process of labeling data instances with users in the loop, from both ML (in particular, AL) and VA perspectives. Based on reviewing similarities and differences between AL and VA, they proposed a unified process called visual-interactive labeling (VIL), through which they aim to combine the strengths of VA and AL (first initiatives for the integration of AL and VIL can be found in BIB001 BIB003 BIB004 BIB002). In follow-on research, Bernard et al.
BIB009 performed an experimental study to compare VIL and AL labeling strategies (used independently). In that project, they developed an evaluation toolkit that integrates 16 different established AL strategies, five classifiers, and four visualization techniques. Using their toolkit, Bernard et al. conducted an empirical study with 16 expert participants. Their investigation shows that VIL achieves similar performance to AL. One suggestion based on the experimental findings of Bernard et al. BIB009 was to incorporate (visual) analytical guidance into the labeling process in AL. Their investigation represents an important step towards a unified labeling process that combines the individual strengths of VA and AL strategies. We share the same vision as Bernard et al. BIB009 BIB006: while they call it VIL, we think that VA-enabled AL is a more intuitive term for the integration of the power of AL and VA, because VIL "hides" the essential role of AL. Recent developments in ML and VA signal that the two fields are getting closer BIB009. For example, Sacha et al. proposed a conceptual framework that models human interactions with ML components in the VA process, and makes the interplay between automated algorithms and interactive visualizations more concrete. At the core of Sacha et al.'s conceptual framework lies the idea that the underlying ML models and hyper-parameters, which cannot be optimized automatically, can be steered via iterative and accessible user interactions. Interactive visualizations serve as an aid or "lens" that not only facilitates the process of interpretation and validation, but also makes the interactions with ML models accessible to domain users. AL and VA alone are not new, but interactive annotation tools empowered by M&DL classifiers for (geo) text and image data are not well developed, and the role of visualization in active learning for text- and image-related tasks remains underexplored. Höferlin et al. BIB004 extended AL by integrating human experts' domain knowledge via an interactive VA interface for ad-hoc classifiers applied to video classification. Their classifier visualization facilitates the detection and correction of inconsistencies between the classifier trained by examples and the user's mental model of the class definition. Visual feedback of the training process helps the users evaluate the performance of the classifier and, thus, build up trust in the trained classifier. The main contributions of their approach are the quality assessment and model understanding by explorative visualization and the integration of experts' background knowledge by data annotation and model manipulation (modifying a model based on users' expertise can boost the learner, especially in early training epochs, by including fresh domain knowledge). They demonstrated the power of AL combined with VA in the domain of video VA by comparing its performance with the results of random sampling and uncertainty sampling of the training sets. Huang and colleagues' BIB007 experiments and early results showed that AL with VA, using an interactive and iterative labeling interface, improves learning model performance for text classification compared to AL alone; their AL-with-visualization method is for a binary (i.e., positive and negative) classification problem (Appendix A.4.1). Heimerl et al. BIB003 incorporated AL to various degrees with VA for text document retrieval to reduce the labeling effort and to increase effectiveness.
Specifically, their VA interface for visual classifier training has a main view (showing the classifier's state with projected documents), a cluster view (showing the documents with the most uncertain classification), a content view (showing the selected documents), a manual view used during evaluation, a classifier history for undo/redo navigation, a labeled document view listing labeled documents, and, most importantly, the labeling controls with a preview of the estimated impact of the newly labeled documents on the classifier. In more recent work, Kucher et al. BIB008 combined AL with VA in their ALVA system to support the annotation of stance in social media text and the training of a stance classifier, with visualizations that help analysts understand the annotation and training process.
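To make the coupling between a machine learner and a visual labeling interface more concrete, the following sketch prepares what a minimal VA-enabled AL front end might render: a-posteriori class probabilities (the basis for user selection in Seifert and Granitzer BIB001 ) attached to a 2D projection of the unlabeled pool, from which a user would visually select instances to label. This is a hypothetical illustration built on scikit-learn and synthetic data, not code from any of the reviewed systems.

```python
# Minimal sketch of visually-supported labeling: expose the classifier's
# a-posteriori probabilities plus a 2D projection so a human (not the
# machine) decides which unlabeled samples to label next. All names and
# parameters here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
labeled = rng.choice(len(X), size=20, replace=False)        # small seed set
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)

clf = LogisticRegression().fit(X[labeled], y[labeled])
proba = clf.predict_proba(X[unlabeled])                     # a-posteriori probabilities
uncertainty = 1.0 - proba.max(axis=1)                       # least-confidence score

# A VA front end would plot this 2D embedding, sized/colored by uncertainty,
# and let the analyst click samples to label; here we only prepare the data.
embedding = PCA(n_components=2).fit_transform(X[unlabeled])
view = np.column_stack([embedding, uncertainty])
print(view[:5])  # (x, y, uncertainty) triples a scatterplot could render
```

In such a setup the user, rather than the query strategy alone, decides what gets labeled; the uncertainty scores and the projection only guide attention, which is the division of labor the studies above argue for.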
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> We introduce a challenging set of 256 object categories containing a total of 30607 images. The original Caltech-101 [1] was collected by choosing a set of object categories, downloading examples from Google Images and then manually screening out all images that did not fit the category. Caltech-256 is collected in a similar manner with several improvements: a) the number of categories is more than doubled, b) the minimum number of images in any category is increased from 31 to 80, c) artifacts due to image rotation are avoided and d) a new and larger clutter category is introduced for testing background rejection. We suggest several testing paradigms to measure classification performance, then benchmark the dataset using two simple metrics as well as a state-of-the-art spatial pyramid matching [2] algorithm. Finally, we use the clutter category to train an interest detector which rejects uninformative background regions. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> In the natural language processing community, sentiment classification based on insufficient labeled data is a well-known challenging problem. In this paper, a novel semi-supervised learning algorithm called active deep network (ADN) is proposed to address this problem. First, we propose the semi-supervised learning framework of ADN. ADN is constructed by restricted Boltzmann machines (RBM) with unsupervised learning based on labeled reviews and an abundance of unlabeled reviews. Then the constructed structure is fine-tuned by gradient-descent-based supervised learning with an exponential loss function. Second, in the semi-supervised learning framework, we apply active learning to identify reviews that should be labeled as training data, then use the selected labeled reviews and all unlabeled reviews to train the ADN architecture. Moreover, we combine the information density with ADN, and propose the information ADN (IADN) method, which can apply the information density of all unlabeled reviews in choosing the manually labeled reviews. Experiments on five sentiment classification datasets show that ADN and IADN outperform classical semi-supervised learning algorithms, and deep learning techniques applied for sentiment classification. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> Neural machine translation, a recently proposed approach to machine translation based purely on neural networks, has shown promising results compared to the existing approaches such as phrase-based statistical machine translation. Despite its recent success, neural machine translation has its limitation in handling a larger vocabulary, as training complexity as well as decoding complexity increase proportionally to the number of target words. In this paper, we propose a method based on importance sampling that allows us to use a very large target vocabulary without increasing training complexity. We show that decoding can be efficiently done even with the model having a very large target vocabulary by selecting only a small subset of the whole target vocabulary. The models trained by the proposed approach are empirically found to outperform the baseline models with a small vocabulary as well as the LSTM-based neural machine translation models.
Furthermore, when we use the ensemble of a few models with very large target vocabularies, we achieve the state-of-the-art translation performance (measured by BLEU) on the English→German translation and almost as high performance as the state-of-the-art English→French translation system. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> Deep learning has been shown to achieve outstanding performance in a number of challenging real-world applications. However, most of the existing works assume a fixed set of labeled data, which is not necessarily true in real-world applications. Getting labeled data is usually expensive and time consuming. Active labelling in deep learning aims at achieving the best learning result with a limited labeled data set, i.e., choosing the most appropriate unlabeled data to get labeled. This paper presents a new active labeling method, AL-DL, for cost-effective selection of data to be labeled. AL-DL uses one of three metrics for data selection: least confidence, margin sampling, and entropy. The method is applied to deep learning networks based on stacked restricted Boltzmann machines, as well as stacked autoencoders. In experiments on the MNIST benchmark dataset, the method outperforms random labeling consistently by a significant margin. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> Recently, promising results have been shown in face recognition research. However, face recognition and retrieval across age is still challenging. Unlike prior methods using complex models with strong parametric assumptions to model the aging process, we use a data-driven method to address this problem. We propose a novel coding framework called Cross-Age Reference Coding (CARC). By leveraging a large-scale image dataset freely available on the Internet as a reference set, CARC is able to encode the low-level feature of a face image with an age-invariant reference space. In the testing phase, the proposed method only requires a linear projection to encode the feature and therefore it is highly scalable. To thoroughly evaluate our work, we introduce a new large-scale dataset for face recognition and retrieval across age called Cross-Age Celebrity Dataset (CACD). The dataset contains more than 160,000 images of 2,000 celebrities with age ranging from 16 to 62. To the best of our knowledge, it is by far the largest publicly available cross-age face dataset. Experimental results show that the proposed method can achieve state-of-the-art performance on both our dataset and the other widely used dataset for face recognition across age, the MORPH dataset. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere.
In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> Neural machine translation (NMT) systems have recently achieved results comparable to the state of the art on a few translation tasks, including English→French and English→German. The main purpose of the Montreal Institute for Learning Algorithms (MILA) submission to WMT’15 is to evaluate this new approach on a greater variety of language pairs. Furthermore, the human evaluation campaign may help us and the research community to better understand the behaviour of our systems. We use the RNNsearch architecture, which adds an attention mechanism to the encoder-decoder. We also leverage some of the recent developments in NMT, including the use of large vocabularies, unknown word replacement and, to a limited degree, the inclusion of monolingual language models. <s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3%. We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes. <s> BIB008 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> Improvements in hardware, the availability of massive amounts of data, and algorithmic upgrades are among the factors supporting better machine translation. <s> BIB009 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> Even though active learning forms an important pillar of machine learning, deep learning tools are not prevalent within it. Deep learning poses several difficulties when used in an active learning setting. First, active learning (AL) methods generally rely on being able to learn and update models from small amounts of data. Recent advances in deep learning, on the other hand, are notorious for their dependence on large amounts of data. Second, many AL acquisition functions rely on model uncertainty, yet deep learning methods rarely represent such model uncertainty. In this paper we combine recent advances in Bayesian deep learning into the active learning framework in a practical way.
We develop an active learning framework for high dimensional data, a task which has been extremely challenging so far, with very sparse existing literature. Taking advantage of specialised models such as Bayesian convolutional neural networks, we demonstrate our active learning techniques with image data, obtaining a significant improvement on existing active learning approaches. We demonstrate this on both the MNIST dataset, as well as for skin cancer diagnosis from lesion images (ISIC2016 task). <s> BIB010 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> Recent successes in learning-based image classification, however, heavily rely on the large number of annotated training samples, which may require considerable human effort. In this paper, we propose a novel active learning (AL) framework, which is capable of building a competitive classifier with optimal feature representation via a limited amount of labeled training instances in an incremental learning manner. Our approach advances the existing AL methods in two aspects. First, we incorporate deep convolutional neural networks into AL. Through the properly designed framework, the feature representation and the classifier can be simultaneously updated with progressively annotated informative samples. Second, we present a cost-effective sample selection strategy to improve the classification performance with less manual annotations. Unlike traditional methods focusing on only the uncertain samples of low prediction confidence, we especially discover the large amount of high-confidence samples from the unlabeled set for feature learning. Specifically, these high-confidence samples are automatically selected and iteratively assigned pseudolabels. We thus call our framework cost-effective AL (CEAL) standing for the two advantages. Extensive experiments demonstrate that the proposed CEAL framework can achieve promising results on two challenging image classification data sets, i.e., face recognition on the Cross-Age Celebrity Dataset (CACD) and object categorization on Caltech-256. <s> BIB011 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> This paper is on active learning where the goal is to reduce the data annotation burden by interacting with a (human) oracle during training. Standard active learning methods ask the oracle to annotate data samples. Instead, we take a profoundly different approach: we ask for annotations of the decision boundary. We achieve this using a deep generative model to create novel instances along a 1d line. A point on the decision boundary is revealed where the instances change class. Experimentally we show on three data sets that our method can be plugged into other active learning schemes, that human oracles can effectively annotate points on the decision boundary, that our method is robust to annotation noise, and that decision boundary annotations improve over annotating data samples. <s> BIB012 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Active Deep Learning (ADL) <s> We propose a new active learning (AL) method for text classification with convolutional neural networks (CNNs). In AL, one selects the instances to be manually labeled with the aim of maximizing model performance with minimal effort.
Neural models capitalize on word embeddings as representations (features), tuning these to the task at hand. We argue that AL strategies for multi-layered neural models should focus on selecting instances that most affect the embedding space (i.e., induce discriminative word representations). This is in contrast to traditional AL approaches (e.g., entropy-based uncertainty sampling), which specify higher level objectives. We propose a simple approach for sentence classification that selects instances containing words whose embeddings are likely to be updated with the greatest magnitude, thereby rapidly learning discriminative, task-specific embeddings. We extend this approach to document classification by jointly considering: (1) the expected changes to the constituent word representations; and (2) the model's current overall uncertainty regarding the instance. The relative emphasis placed on these criteria is governed by a stochastic process that favors selecting instances likely to improve representations at the outset of learning, and then shifts toward general uncertainty sampling as AL progresses. Empirical results show that our method outperforms baseline AL approaches on both sentence and document classification tasks. We also show that, as expected, the method quickly learns discriminative word embeddings. To the best of our knowledge, this is the first work on AL addressing neural models for text classification. <s> BIB013
As discussed further in Appendix A.1, DL can discover intricate patterns hidden in big data. Advances in DL have been dramatic and rapid, and the landscape of M&DL is changing quickly as a result. For example, in 2015 Jean and colleagues BIB003 BIB007 demonstrated for the first time that DL could beat Google's existing phrase-based statistical process for language translation; by November 2016, after Google switched to that approach, evidence showed that their new system was already on par with human translation BIB009. We have seen above many successful use cases for AL (Section 3.1) and AL integrated with VA (Section 3.3). Now we review some recent work in AL combined with DL, known as active deep learning (ADL). It is also called deep active learning (e.g., see ), but active deep learning is a much more commonly used term in the literature. The main process of ADL is very similar to AL. The main difference is that the machine learner in regular AL is a traditional ML algorithm (e.g., SVM), whereas in ADL, the learner is a DL one, such as a CNN. As emphasized in Appendix A.1, DL has better scalability for Big Data problems than traditional ML. This motivates ADL, which combines the power of DL and AL: better scalability than traditional ML and less labeled data than regular DL for training a good machine learner. AL has been investigated with some DL architectures for image classification and text classification (including sentiment analysis). Wang and Shang BIB004 applied AL methods in DL networks for image classification. The (DL) classifiers they used are stacked restricted Boltzmann machines (stacked RBMs) and stacked auto-encoders, with three commonly used uncertainty-sampling-based query strategies (i.e., least confidence, margin sampling, and entropy; see Section 3.1.5). Their experiments were run on the well-known MNIST benchmark data set (one of the classic data sets for benchmarking ML algorithms). The authors conclude that their ADL method outperforms random sampling consistently by a significant margin. Gal et al. BIB010 also developed an AL framework that integrates DL for image classification, where the classifiers they used are Bayesian CNNs. Their results showed a significant improvement over existing AL approaches. Another successful integration example of deep CNNs and AL for image classification can be found in BIB011, where the authors proposed an ADL framework called Cost-Effective Active Learning (CEAL), in which the classifier can be simultaneously updated with progressively annotated informative samples. Unlike most traditional AL methods focusing on uncertain samples of low prediction confidence, their strategy selects two complementary kinds of samples to incrementally improve the classifier training and feature learning: (1) the minority informative kind contributes to training more powerful classifiers, and (2) the majority high-confidence kind contributes to learning more discriminative feature representations. Although the number of samples that belong to the first type is small (e.g., an image with a soccer ball and a dog is much rarer than images that contain only a soccer ball), the most uncertain unlabeled samples usually have great potential impact on the classifiers. Selecting and annotating them as part of the training set can contribute to a better decision boundary of the classifiers.
Their framework progressively selects the minority samples from among the most informative samples, and automatically pseudo-labels (i.e., picks the class with the maximum predicted probability and uses it as if it were the true label) the majority high-confidence samples from the unlabeled set for feature learning and model updating. The labeled minority samples benefit the decision boundary of the classifier and the majority pseudo-labeled samples provide sufficient training data for robust feature learning. Their experimental results on two challenging public benchmark data sets (face recognition on the CACD database BIB005 and object categorization on Caltech-256 BIB001 ) demonstrated the effectiveness of their CEAL framework. Most AL methods in the literature (Section 3.1) ask annotators to annotate data samples. By contrast, Huijser and van Gemert BIB012 provide a recent example of combining AL with DL in which they took a completely different approach: it asks annotators to annotate the decision boundary. At this point, their method focuses on binary classification (Appendix A.4.1) and a linear classification model (i.e., an SVM). Additionally, the method used a deep generative model to synthesize samples according to a small number of labeled samples, which will not work for text-related tasks (because deep generative models are designed for continuous data like images BIB006 BIB008 , rather than the discrete data of words and phrases that must be dealt with in NLP problems). After reviewing some ADL methods for image classification, we now introduce recent ADL work for text classification problems. Zhou et al. BIB002 integrated AL with DL for semi-supervised sentiment classification using RBMs. Their experiments on five sentiment classification data sets showed that their ADL methods outperform classic semi-supervised learning algorithms and DL architectures applied for sentiment classification. Zhang and Wallace BIB013 proposed an ADL method for text classification, where the classifier is a CNN. In contrast to traditional AL approaches (e.g., uncertainty sampling), the most novel contribution is that their method is designed to quickly induce discriminative word embeddings (Appendix A.6), and thus improve text classification. Taking sentiment classification as an example, selecting examples in this way quickly pushes the embeddings of "bad" and "good" apart. Their empirical results (with three sentiment data sets, two categorized as positive/negative and one as subjective/objective) show that the method outperforms baseline AL approaches. However, their method is for binary classification (Appendix A.4.1); other types of classification tasks (Appendixes A.4.2-A.4.4) are not touched upon. Research on combining AL with RNNs for short-text classification is rare. To address the gap, Zhou demonstrated using AL with RNNs as classifiers for (Chinese) short-text classification. The proposed ADL algorithm dramatically decreases the number of labeled samples without significantly influencing the classification accuracy of the original RNN classifier, which was trained on the whole data set. In some cases, the proposed ADL algorithm even achieves better classification accuracy with less training data than the original RNN classifier.
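To make the query strategies discussed in this section concrete, the following sketch implements the three uncertainty measures used by Wang and Shang BIB004 (least confidence, margin sampling, and entropy) together with a simplified CEAL-style split BIB011 into oracle-labeled uncertain samples and pseudo-labeled high-confidence samples. This is a minimal illustration, not code from the cited papers; the function names, the confidence threshold, and the dummy probability matrix are our own assumptions.

```python
# Hedged sketch of uncertainty-based querying plus CEAL-style
# pseudo-labeling. `proba` stands for a classifier's predicted class
# probabilities over the unlabeled pool; thresholds are illustrative.
import numpy as np

def least_confidence(proba):
    return 1.0 - proba.max(axis=1)            # low top-class probability

def margin(proba):
    part = np.sort(proba, axis=1)
    return part[:, -1] - part[:, -2]          # small margin = more uncertain

def entropy(proba):
    return -(proba * np.log(proba + 1e-12)).sum(axis=1)

def ceal_split(proba, n_query=10, conf_threshold=0.95):
    """Return (uncertain indices for the oracle, confident indices,
    and the pseudo-labels assigned to the confident indices)."""
    uncertain = np.argsort(entropy(proba))[::-1][:n_query]   # most informative
    confident = np.where(proba.max(axis=1) >= conf_threshold)[0]
    confident = np.setdiff1d(confident, uncertain)
    pseudo_labels = proba[confident].argmax(axis=1)          # predicted class as label
    return uncertain, confident, pseudo_labels

# Example with a dummy probability matrix (5 samples, 3 classes):
proba = np.array([[0.40, 0.35, 0.25],
                  [0.98, 0.01, 0.01],
                  [0.50, 0.30, 0.20],
                  [0.96, 0.02, 0.02],
                  [0.34, 0.33, 0.33]])
print(ceal_split(proba, n_query=2))
```

Rows 1 and 4 (near-uniform probabilities) would go to the human oracle, while rows 2 and 3 (confident predictions) would be pseudo-labeled with their argmax class, mirroring the minority/majority division described above.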
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> GIScience Applications Using AL/AL with VA <s> The increasing availability and use of positioning devices has resulted in large volumes of trajectory data. However, semantic annotations for such data are typically added by domain experts, which is a time-consuming task. Machine-learning algorithms can help infer semantic annotations from trajectory data by learning from sets of labeled data. Specifically, active learning approaches can minimize the set of trajectories to be annotated while preserving good performance measures. The ANALYTiC web-based interactive tool visually guides users through this annotation process. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> GIScience Applications Using AL/AL with VA <s> Classification of evolving data streams is a challenging task, which is suitably tackled with online learning approaches. Data is processed instantly requiring the learning machinery to (self-)adapt by adjusting its model. However, for high-velocity streams, it is usually difficult to obtain labeled samples to train the classification model. Hence, we propose a novel online batch-based active learning algorithm (OBAL) to perform the labeling. OBAL is developed for crisis management applications where data streams are generated by the social media community. OBAL is applied to discriminate relevant from irrelevant social media items. An emergency management user will be interactively queried to label chosen items. OBAL exploits the boundary items for which it is highly uncertain about their class and makes use of two classifiers: k-Nearest Neighbors (kNN) and Support Vector Machine (SVM). OBAL is equipped with a labeling budget and a set of uncertainty strategies to identify the items for labeling. An extensive analysis is carried out to show OBAL's performance, the sensitivity of its parameters, and the contribution of the individual uncertainty strategies. Two types of datasets are used: synthetic and social media datasets related to crises. The empirical results illustrate that OBAL has a very good discrimination power. <s> BIB002
The very recent (2017) work of Júnior et al. BIB001 on GPS trajectory classification provides solid evidence that AL can be used together with VA to help domain experts perform semantic labeling of movement data. In this work, they pose three research questions: (1) Is there an ML method that supports building a good classifier for automatic trajectory classification but with a reduced number of required human-labeled trajectories? (2) Is the AL method effective for trajectory data? and (3) How can we help the user in labeling trajectories? To answer these research questions, Júnior et al. developed a web-based interactive tool named ANALYTiC that visually assists domain experts in GPS trajectory classification using AL and a simple VA interface, where users can pick one of six (traditional ML) classifiers (AdaBoost, decision tree, Gaussian naive Bayes, k-nearest neighbors (KNN), logistic regression, and random forest) and one of three query strategies (uncertainty sampling, QBC, and random sampling) to start trajectory labeling. Their interactive tool supports only binary classification (Appendix A.4.1). Júnior et al. also conducted a series of empirical evaluation experiments with three trajectory data sets (animals, fishing vessels, and GeoLife). Their results showed how the AL strategies choose the best subset to annotate and performed significantly better than random sampling (the baseline strategy). Their examples also demonstrated how the ANALYTiC web-based visual interface can support the domain expert in the AL process, and specifically in trajectory annotation, using a set of visual solutions that ease the labeling inference task. They concluded that ML algorithms can infer semantic annotations defined by domain users (e.g., fishing, non-fishing) from trajectories, by learning from sets of manually labeled data. Specifically, AL approaches can reduce the set of trajectories to be labeled while preserving good performance measures. Their ANALYTiC web-based interactive tool visually guides domain experts through this annotation process. Another very recent AL study closely related to GIScience problems can be found in BIB002 , where Pohl et al. applied AL methods to social media data (i.e., tweets) for crisis management. Two ML classifiers (i.e., kNN and SVM) are used in their AL application with several uncertainty strategies for binary classification (Appendix A.4.1) to distinguish between relevant and irrelevant information contained in a data stream. The authors used stream-based (Section 3.1.2) batch-mode AL (Section 3.1.4). Two types of data sets are used in their experiments: synthetic and social media data sets related to crises. Their empirical results illustrate that batch-mode AL is able to distinguish, with good performance, between relevant and irrelevant information in tweets for crisis management. Overall, the application of AL with ML (or DL) to non-RS GIScience problems is just beginning. Given the rapid advances in M&DL and AL, we anticipate this situation to change quickly, with additional applications to mobility data, geospatial text analysis, and a range of location-based service applications. An objective of this review, of course, is to enable such development.
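To illustrate the kind of pool-based AL loop that a tool such as ANALYTiC BIB001 wraps in a visual interface, the following sketch pairs one of the classifiers listed above (random forest) with uncertainty sampling for a binary labeling task. Synthetic features stand in for real trajectory descriptors (e.g., speed or turning angle), and direct access to y simulates the human oracle; this is an assumption-laden sketch, not ANALYTiC's implementation.

```python
# Illustrative pool-based AL loop for binary trajectory classification
# (e.g., fishing vs. non-fishing). The oracle lookup y[query] stands in
# for the domain expert; all sizes and parameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(1)
X, y = make_classification(n_samples=400, n_features=8, random_state=1)
labeled = list(rng.choice(len(X), size=10, replace=False))   # small seed set
pool = [i for i in range(len(X)) if i not in labeled]

clf = RandomForestClassifier(n_estimators=50, random_state=1)
for _ in range(20):                                          # 20 AL iterations
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    query = pool[int(np.argmax(1.0 - proba.max(axis=1)))]    # uncertainty sampling
    labeled.append(query)                                    # "oracle" provides y[query]
    pool.remove(query)

print("accuracy on remaining pool:", clf.score(X[pool], y[pool]))
```

Swapping the query line for a random draw from the pool gives the random-sampling baseline that the ANALYTiC experiments compare against; QBC would instead measure disagreement across an ensemble of classifiers.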
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> The problem of scarcity of labeled pixels, required for segmentation of remotely sensed satellite images in a supervised pixel classification framework, is addressed in this article. A support vector machine (SVM) is considered for classifying the pixels into different landcover types. It is initially designed using a small set of labeled points, and subsequently refined by actively querying for the labels of pixels from a pool of unlabeled data. The label of the most interesting/ambiguous unlabeled point is queried at each step. Here, active learning is exploited to minimize the number of labeled data points used by the SVM classifier by several orders of magnitude. These features are demonstrated on an IRS-1A four band multi-spectral image. Comparison with related methods is made in terms of the number of data points used, computational time and a cluster quality measure. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> As the resolution of remote-sensing imagery increases, the full complexity of the scenes becomes increasingly difficult to approach. User-defined classes in large image databases are often composed of several groups of images and span very different scales in the space of low-level visual descriptors. The interactive retrieval of such image classes is then very difficult. To address this challenge, we evaluate here, in the context of satellite image retrieval, two general improvements for relevance feedback using support vector machines (SVMs). First, to optimize the transfer of information between the user and the system, we focus on the criterion employed by the system for selecting the images presented to the user at every feedback round. We put forward an active-learning selection criterion that minimizes redundancy between the candidate images shown to the user. Second, for image classes spanning very different scales in the low-level description space, we find that a high sensitivity of the SVM to the scale of the data brings about a low retrieval performance.
We argue that the insensitivity to scale is desirable in this context, and we show how to obtain it by the use of specific kernel functions. Experimental evaluation of both ranking and classification performance on a ground-truth database of satellite images confirms the effectiveness of our approach. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> Obtaining training data for land cover classification using remotely sensed data is time consuming and expensive, especially for relatively inaccessible locations. Therefore, designing classifiers that use as few labeled data points as possible is highly desirable. Existing approaches typically make use of small-sample techniques and semisupervision to deal with the lack of labeled data. In this paper, we propose an active learning technique that efficiently updates existing classifiers by using fewer labeled data points than semisupervised methods. Further, unlike semisupervised methods, our proposed technique is well suited for learning or adapting classifiers when there is substantial change in the spectral signatures between labeled and unlabeled data. Thus, our active learning approach is also useful for classifying a series of spatially/temporally related images, wherein the spectral signatures vary across the images. Our interleaved semisupervised active learning method was tested on both single and spatially/temporally related hyperspectral data sets. We present empirical results that establish the superior performance of our proposed approach versus other active learning and semisupervised methods. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> In this paper, we propose two active learning algorithms for semiautomatic definition of training samples in remote sensing image classification. Based on predefined heuristics, the classifier ranks the unlabeled pixels and automatically chooses those that are considered the most valuable for its improvement. Once the pixels have been selected, the analyst labels them manually and the process is iterated. Starting with a small and nonoptimal training set, the model itself builds the optimal set of samples which minimizes the classification error. We have applied the proposed algorithms to a variety of remote sensing data, including very high resolution and hyperspectral images, using support vector machines. Experimental results confirm the consistency of the methods. The required number of training samples can be reduced to 10% using the methods proposed, reaching the same level of accuracy as larger data sets. A comparison with a state-of-the-art active learning method, margin sampling, is provided, highlighting advantages of the methods proposed. The effect of spatial resolution and separability of the classes on the quality of the selection of pixels is also discussed. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> This paper investigates different batch-mode active-learning (AL) techniques for the classification of remote sensing (RS) images with support vector machines. This is done by generalizing to the multiclass problem techniques defined for binary classifiers.
The uncertainty criterion is associated with the confidence of the supervised algorithm in correctly classifying the considered sample, while the diversity criterion aims at selecting a set of unlabeled samples that are as diverse (distant from one another) as possible, thus reducing the redundancy among the selected samples. The combination of the two criteria results in the selection of the potentially most informative set of samples at each iteration of the AL process. Moreover, we propose a novel query function that is based on a kernel-clustering technique for assessing the diversity of samples and a new strategy for selecting the most informative representative sample from each cluster. The investigated and proposed techniques are theoretically and experimentally compared with state-of-the-art methods adopted for RS applications. This is accomplished by considering very high resolution multispectral and hyperspectral images. By this comparison, we observed that the proposed method resulted in better accuracy with respect to other investigated and state-of-the-art methods on both the considered data sets. Furthermore, we derived some guidelines on the design of AL systems for the classification of different types of RS images. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> In this paper, we propose a simple, fast, and reliable active-learning technique for solving remote sensing image classification problems with support vector machine (SVM) classifiers. The main property of the proposed technique consists in its robustness to biased (poor) initial training sets. The presented method considers the 1-D output space of the classifier to identify the most uncertain samples whose labeling and inclusion in the training set involve a high probability of improving the classification results. A simple histogram-thresholding algorithm is used to find out the low-density (i.e., under the cluster assumption, the most uncertain) region in the 1-D SVM output space. To assess the effectiveness of the proposed method, we compared it with other active-learning techniques proposed in the remote sensing literature using multispectral and hyperspectral data. Experimental results confirmed that the proposed technique provided the best tradeoff among robustness to biased (poor) initial training samples, computational complexity, classification accuracy, and the number of new labeled samples necessary to reach convergence. <s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership and then the user is asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee, large margin, and posterior probability-based.
For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a good architecture are provided for new and/or inexperienced users. <s> BIB008 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> In this letter, we present a novel batch-mode active learning technique for solving multiclass classification problems by using the support vector machine classifier with the one-against-all architecture. The uncertainty of each unlabeled sample is measured by defining a criterion which not only considers the smallest distance to the decision hyperplanes but also takes into account the distances to other hyperplanes if the sample is within the margin of their decision boundaries. To select a batch of the most uncertain samples from all over the decision region, the uncertain regions of the classifiers are partitioned into multiple parts depending on the number of geometrical margins of binary classifiers passing on them. Then, a balanced number of most uncertain samples are selected from each part. To minimize the redundancy and keep the diversity among these samples, the kernel k-means clustering algorithm is applied to the set of uncertain samples, and the representative sample (medoid) from each cluster is selected for labeling. The effectiveness of the proposed method is evaluated by comparing it with other batch-mode active learning techniques existing in the literature. Experimental results on two different remote sensing data sets confirmed the effectiveness of the proposed technique. <s> BIB009 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. <s> BIB010 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> Active learning (AL) algorithms have been proven useful in reducing the number of required training samples for remote sensing applications; however, most methods query samples pointwise without considering spatial constraints on their distribution. This may often lead to a spatially dispersed distribution of training points unfavorable for visual image interpretation or field surveys.
The aim of this study is to develop region-based AL heuristics to guide user attention toward a limited number of compact spatial batches rather than distributed points. The proposed query functions are based on a tree ensemble classifier and combine criteria of sample uncertainty and diversity to select regions of interest. Class imbalance, which is inherent to many remote sensing applications, is addressed through stratified bootstrap sampling. Empirical tests of the proposed methods are performed with multitemporal and multisensor satellite images capturing, in particular, sites recently affected by large-scale landslide events. The assessment includes an experimental evaluation of the labeling time required by the user and the computational runtime, and a sensitivity analysis of the main algorithm parameters. Region-based heuristics that consider sample uncertainty and diversity are found to outperform pointwise sampling and region-based methods that consider only uncertainty. Reference landslide inventories from five different experts enable a detailed assessment of the spatial distribution of remaining errors and the uncertainty of the reference data. <s> BIB011 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> With the popular use of high-resolution satellite images, more and more research efforts have been placed on remote sensing scene classification/recognition. In scene classification, effective feature selection can significantly boost the final performance. In this letter, a novel deep-learning-based feature-selection method is proposed, which formulates the feature-selection problem as a feature reconstruction problem. Note that the popular deep-learning technique, i.e., the deep belief network (DBN), achieves feature abstraction by minimizing the reconstruction error over the whole feature set, and features with smaller reconstruction errors would hold more feature intrinsics for image representation. Therefore, the proposed method selects features that are more reconstructible as the discriminative features. Specifically, an iterative algorithm is developed to adapt the DBN to produce the inquired reconstruction weights. In the experiments, 2800 remote sensing scene images of seven categories are collected for performance evaluation. Experimental results demonstrate the effectiveness of the proposed method. <s> BIB012 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven as an extremely powerful tool in many fields. Shall we embrace deep learning as the key to all? Or, should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate remote sensing scientists to bring their expertise into deep learning, and use it as an implicit general model to tackle unprecedented large-scale influential challenges, such as climate change and urbanization. 
<s> BIB013 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> RS Applications Using AL/ADL <s> Active deep learning classification of hyperspectral images is considered in this paper. Deep learning has achieved success in many applications, but good-quality labeled samples are needed to construct a deep learning network. It is expensive to get good labeled samples in hyperspectral images for remote sensing applications. An active learning algorithm based on a weighted incremental dictionary learning is proposed for such applications. The proposed algorithm selects training samples that maximize two selection criteria, namely representativeness and uncertainty. This algorithm trains a deep network efficiently by actively selecting training samples at each iteration. The proposed algorithm is applied for the classification of hyperspectral images, and compared with other classification algorithms employing active learning. It is shown that the proposed algorithm is efficient and effective in classifying hyperspectral images. <s> BIB014
DL has achieved success in many applications; however, a large set of good-quality labeled samples is needed to train a good DL classifier, as emphasized in Appendix A.1. Zhu et al. BIB013 provided a very recent survey of DL in RS, where they reviewed the recent advances and analyzed the challenges of using DL with RS data analysis. More importantly, they advocate that RS scientists should adapt DL to tackle large-scale RS challenges, such as the application of RS and DL to study climate change and urbanization. However, methods based on AL (Section 3.1) and ADL (Section 3.4) are not touched on in their review. In their conclusions, the authors did emphasize that limited training samples in RS represent a challenging bottleneck to progress. Our review provides a promising solution to the challenges they pointed out. To help RS researchers get started with DL, a technical tutorial on DL for RS data is provided in . AL has a relatively long history and has been widely studied for RS applications (compared with attention given to AL in other components of GIScience). Many successful AL examples in RS in the literature (reviewed below in this section) have demonstrated that AL can aid RS image classification tasks, whereas ADL (Section 3.4) has only been recently applied to RS for image classification. Below, we first introduce some AL methods used for RS image classification, and then more recent ADL methods applied to RS image classification problems. Some pioneering work using AL for RS image classification can be found in BIB001 BIB004 BIB005 BIB006 BIB007 . Tuia et al. BIB008 surveyed and tested several main AL methods used in RS communities for (multispectral and hyperspectral) RS image classification. As introduced in Section 3.1, an AL process requires the interaction between the annotator (e.g., domain experts) and the model (e.g., a classifier)-the former provides labels, which integrates domain knowledge while labeling, and the latter provides the most informative pixels to enlist annotators for labels. This is crucial for the success of an AL algorithm-the machine learner needs a query strategy (Section 3.1.5) to rank the pixels in the RS image pool. Tuia et al. BIB008 used AL query strategies (Section 3.1.5), also called heuristics in the RS community BIB008 , to group the AL algorithms they reviewed into three main categories BIB005 : committee, large margin, and posterior probability-based. Tuia et al. also analyzed and discussed advantages and drawbacks of the methods they reviewed, and provided some advice for how to choose a good AL architecture. One of the directions they pointed out is the inclusion of contextual information in heuristics (i.e., AL query strategies)-they emphasized that the heuristics proposed in the literature mainly used spectral criteria, whereas few heuristics directly considered positional information and/or textures. To address the gap of lacking heuristics that consider spatial constraints, Stumpf et al. BIB011 developed region-based AL heuristics for RS image classification. Empirical tests with multitemporal and multisensor satellite images demonstrated that their region-based heuristics, which considered both uncertainty and diversity criteria, outperformed pointwise sampling and region-based methods that considered only uncertainty. An early example of applying AL methods in RS can be found in BIB001 , in which Mitra et al.
employed an AL technique that selects the n most uncertain samples for segmentation of multispectral RS images, using SVMs for binary classification (Appendix A.4.1). Their AL query strategy is to select the sample closest to the current separating hyperplane of each binary SVM. Ferecatu and Boujemaa BIB003 also employed an SVM classifier in their AL method for remote-sensing image retrieval. Their experimental evaluation of classification performance confirmed the effectiveness of their AL approach for RS image retrieval. Their AL selection criterion focused on minimizing redundancy between the candidate images shown to the user. Obtaining training data for land cover classification using remotely sensed imagery is time-consuming and expensive, especially for relatively inaccessible locations. In an early step toward the goal of designing classifiers that use as few labeled data points as possible, Rajan et al. BIB004 proposed an AL technique that efficiently updates existing classifiers by using minimal labeled data points. Specifically, Rajan et al. BIB004 used an AL technique that selects the unlabeled sample that maximizes the information gain between the posterior probability distribution estimated from the current training set and the (new) training set obtained by including that sample into it. The information gain is measured by the Kullback-Leibler divergence (Section 3.1.5). One main contribution they made was that their AL method can adapt classifiers when there is substantial change in the spectral signatures between labeled and unlabeled data. Their AL approach is also useful for classifying a series of spatially/temporally related images, wherein the spectral signatures vary across the images. Their empirical results, tested on both single and spatially/temporally related hyperspectral data sets, showed good performance. As introduced in Section 3.1.4, batch-mode AL is better suited to parallel labeling environments or models with slow training procedures to accelerate the learning speed. Tuia et al. BIB005 proposed two batch-mode AL algorithms for multi-class (Appendix A.4.2) RS image classification. The first algorithm extended the SVM margin sampling (Section 3.1.5) by incorporating diversity (Section 3.1.5) in kernel space, while the second is an entropy-based (Section 3.1.5) version of the query-by-bagging algorithm. The AL algorithms in pseudocode were provided in their appendix. Demir et al. BIB006 also investigated several multi-class (Appendix A.4.2) SVM-based batch-mode AL techniques for interactive classification of RS images; one outcome of the research was a proposed cluster-based diversity criterion for informative query selection. Patra and Bruzzone BIB007 also proposed a fast cluster-assumption-based AL technique, but they only considered the uncertainty criterion. In a follow-up study, Patra and Bruzzone BIB009 proposed a batch-mode AL (Section 3.1.4) technique that considered both uncertainty and diversity criteria for solving multi-class classification (Appendix A.4.2) problems using an SVM classifier with the OAA architecture. Their experimental results running on two different RS data sets (i.e., hyperspectral and multispectral) confirmed the effectiveness of the proposed technique. Above, we have seen some successful AL methods to tackle RS problems. Now, we will introduce recent ADL (Section 3.4) work for RS image classification. An RS scene can be classified into a specific scene theme (e.g., a part of a forest, a parking lot, and a lake).
In this type of classification task, supervised learning techniques are usually employed. Zou et al. BIB012 used AL for RS scene classification to remove less informative deep belief network (DBN) features BIB002 , before a t-test was applied on the remaining features for discriminative feature selection. Specifically, they used iterative execution of AL, with 200 iterations, to collect an informative feature set from the DBN features, and then performed a t-test for feature selection. It is expensive to get good labeled samples in hyperspectral images for RS applications. To address this challenge, Liu et al. BIB014 proposed an ADL method for RS hyperspectral image classification, where their algorithm selects training samples that maximize two selection criteria (i.e., representativeness and uncertainty). The performance of their algorithm was compared with several other AL (but not integrated with DL) classification algorithms that used different query strategies (i.e., random sampling, maximum uncertainty sampling , and QBC BIB008 ; see Section 3.1.5). Their results demonstrated that the proposed algorithm achieved higher accuracy with fewer training samples by actively selecting training samples. DL has been widely studied to recognize ground objects from satellite imagery, whereas Chen and Zipf also emphasized that finding ground truth, especially for developing and rural areas, is not easy, and manually labeling a large set of training data is very expensive. To tackle this challenge, Chen and Zipf proposed an ongoing research project named DeepVGI, with the goal of employing ADL (Section 3.4) to classify satellite imagery with Volunteered Geographic Information (VGI) data. In their DeepVGI method, Chen and Zipf tested two classic CNNs (LeNet and AlexNet BIB010 ) and a multilayer perceptron (MLP) (a class of feed-forward neural network) [143] . The overall testing performance of their initial DeepVGI results, compared with DeepOSM and MapSwipe, demonstrated that DeepVGI's performance (in particular, F1 score and accuracy) is significantly better than DeepOSM, but not as good as that of the MapSwipe volunteers (each image is voted on by three volunteers). Training neural networks with OpenStreetMap (OSM) data, DeepOSM can make predictions of mis-registered roads in OSM data by classifying roads and features from satellite imagery . The DL architecture DeepOSM used is a simple one-layer CNN. MapSwipe is a crowd-sourcing mobile application that allows volunteers to label images with buildings or roads. Almost all reported methods applying DL in RS shared the motivation that getting labeled data for RS imagery is challenging. Thus, AL/ADL will help clear some hurdles in the process of empowering RS research with DL.
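To make the AL loop that recurs throughout this section concrete, the following minimal sketch shows one iteration of batch-mode active learning that combines an uncertainty criterion (margin sampling with a binary SVM) with a diversity criterion (k-means clustering of the uncertain candidates), in the spirit of the batch-mode heuristics reviewed above. It is an illustration under our own assumptions (scikit-learn, illustrative parameter values, binary labels), not a reproduction of any cited method; in an ADL setting, the SVM would be replaced by a DL classifier whose softmax outputs supply the uncertainty scores, fine-tuned after each batch of new labels.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans

def select_batch(X_labeled, y_labeled, X_pool, batch_size=5, n_uncertain=50):
    """Select a diverse batch of uncertain pool samples (indices into X_pool)."""
    # Train the current classifier on the labeled set (binary labels assumed).
    clf = SVC(kernel="rbf", gamma="scale").fit(X_labeled, y_labeled)
    # Uncertainty (margin sampling): samples closest to the separating
    # hyperplane have the smallest absolute decision values.
    margin = np.abs(clf.decision_function(X_pool))
    uncertain_idx = np.argsort(margin)[:n_uncertain]
    # Diversity: cluster the uncertain candidates; from each cluster keep the
    # member closest to the centroid (a medoid-like representative).
    km = KMeans(n_clusters=batch_size, n_init=10).fit(X_pool[uncertain_idx])
    batch = []
    for c in range(batch_size):
        members = uncertain_idx[km.labels_ == c]
        dists = np.linalg.norm(X_pool[members] - km.cluster_centers_[c], axis=1)
        batch.append(members[np.argmin(dists)])
    # The returned samples are shown to the annotator; their new labels are
    # appended to (X_labeled, y_labeled) and the loop repeats.
    return np.array(batch)
```

After the annotator labels the returned batch, the newly labeled samples are added to the training set and the select-label-retrain loop repeats until the labeling budget is exhausted or accuracy converges.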
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Technical Challenges and Opportunities <s> In this paper we present methods of enhancing existing discriminative classifiers for multi-labeled predictions. Discriminative methods like support vector machines perform very well for uni-labeled text classification tasks. Multi-labeled classification is a harder task that has received relatively less attention. In the multi-labeled setting, classes are often related to each other or part of an is-a hierarchy. We present a new technique for combining text features and features indicating relationships between classes, which can be used with any discriminative algorithm. We also present two enhancements to the margin of SVMs for building better models in the presence of overlapping classes. We present results of experiments on real-world text benchmark datasets. Our new methods beat the accuracy of existing methods with statistically significant improvements. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Technical Challenges and Opportunities <s> We explore the task of automatic classification of texts by the emotions expressed. Our novel method arranges neutrality, polarity and emotions hierarchically. We test the method on two datasets and show that it outperforms the corresponding "flat" approach, which does not take into account the hierarchical information. The highly imbalanced structure of most of the datasets in this area, particularly the two datasets with which we worked, has a dramatic effect on the performance of classification. The hierarchical approach helps alleviate the effect. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Technical Challenges and Opportunities <s> Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership, and then the user is asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee, large margin, and posterior probability-based. For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a good architecture are provided for new and/or inexperienced users. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Technical Challenges and Opportunities <s> In this letter, we present a novel batch-mode active learning technique for solving multiclass classification problems by using the support vector machine classifier with the one-against-all architecture.
The uncertainty of each unlabeled sample is measured by defining a criterion which not only considers the smallest distance to the decision hyperplanes but also takes into account the distances to other hyperplanes if the sample is within the margin of their decision boundaries. To select a batch of the most uncertain samples from all over the decision region, the uncertain regions of the classifiers are partitioned into multiple parts depending on the number of geometrical margins of binary classifiers passing on them. Then, a balanced number of the most uncertain samples are selected from each part. To minimize the redundancy and keep the diversity among these samples, the kernel k-means clustering algorithm is applied to the set of uncertain samples, and the representative sample (medoid) from each cluster is selected for labeling. The effectiveness of the proposed method is evaluated by comparing it with other batch-mode active learning techniques existing in the literature. Experimental results on two different remote sensing data sets confirmed the effectiveness of the proposed technique. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Technical Challenges and Opportunities <s> In hierarchical classification, the prediction paths may be required to always end at leaf nodes. This is called mandatory leaf node prediction (MLNP) and is particularly useful when the leaf nodes have much stronger semantic meaning than the internal nodes. However, while there have been a lot of MLNP methods in hierarchical multiclass classification, performing MLNP in hierarchical multilabel classification is much more difficult. In this paper, we propose a novel MLNP algorithm that (i) considers the global hierarchy structure; and (ii) can be used on hierarchies of both trees and DAGs. We show that one can efficiently maximize the joint posterior probability of all the node labels by a simple greedy algorithm. Moreover, this can be further extended to the minimization of the expected symmetric loss. Experiments are performed on a number of real-world data sets with tree- and DAG-structured label hierarchies. The proposed method consistently outperforms other hierarchical and flat multilabel classification methods. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Technical Challenges and Opportunities <s> Active learning methods select informative instances to effectively learn a suitable classifier. Uncertainty sampling, a frequently utilized active learning strategy, selects instances about which the model is uncertain but it does not consider the reasons for why the model is uncertain. In this article, we present an evidence-based framework that can uncover the reasons for why a model is uncertain on a given instance. Using the evidence-based framework, we discuss two reasons for uncertainty of a model: a model can be uncertain about an instance because it has strong, but conflicting evidence for both classes or it can be uncertain because it does not have enough evidence for either class. Our empirical evaluations on several real-world datasets show that distinguishing between these two types of uncertainties has a drastic impact on the learning efficiency. We further provide empirical and analytical justifications as to why distinguishing between the two uncertainties matters.
<s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Technical Challenges and Opportunities <s> Most of the empirical evaluations of active learning approaches in the literature have focused on a single classifier and a single performance measure. We present an extensive empirical evaluation of common active learning baselines using two probabilistic classifiers and several performance measures on a number of large datasets. In addition to providing important practical advice, our findings highlight the importance of overlooked choices in active learning experiments in the literature. For example, one of our findings shows that model selection is as important as devising an active learning approach, and choosing one classifier and one performance measure can often lead to unexpected and unwarranted conclusions. Active learning should generally improve the model's capability to distinguish between instances of different classes, but our findings show that the improvements provided by active learning for one performance measure often came at the expense of another measure. We present several such results, raise questions, guide users and researchers to better alternatives, caution against unforeseen side effects of active learning, and suggest future research directions. <s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Technical Challenges and Opportunities <s> While deep convolutional neural networks (CNNs) have shown great success in single-label image classification, it is important to note that real-world images generally contain multiple labels, which could correspond to different objects, scenes, actions and attributes in an image. Traditional approaches to multi-label image classification learn independent classifiers for each category and employ ranking or thresholding on the classification results. These techniques, although working well, fail to explicitly exploit the label dependencies in an image. In this paper, we utilize recurrent neural networks (RNNs) to address this problem. Combined with CNNs, the proposed CNN-RNN framework learns a joint image-label embedding to characterize the semantic label dependency as well as the image-label relevance, and it can be trained end-to-end from scratch to integrate both types of information in a unified framework. Experimental results on public benchmark datasets demonstrate that the proposed architecture achieves better performance than the state-of-the-art multi-label classification models <s> BIB008 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Technical Challenges and Opportunities <s> In this paper, we propose the joint learning attention and recurrent neural network (RNN) models for multi-label classification. While approaches based on the use of either model exist (e.g., for the task of image captioning), training such existing network architectures typically require pre-defined label sequences. For multi-label classification, it would be desirable to have a robust inference process, so that the prediction error would not propagate and thus affect the performance. Our proposed model uniquely integrates attention and Long Short Term Memory (LSTM) models, which not only addresses the above problem but also allows one to identify visual objects of interest with varying sizes without the prior knowledge of particular label ordering.
More importantly, label co-occurrence information can be jointly exploited by our LSTM model. Finally, by advancing the technique of beam search, prediction of multiple labels can be efficiently achieved by our proposed network model. <s> BIB009 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Technical Challenges and Opportunities <s> Classification of evolving data streams is a challenging task, which is suitably tackled with online learning approaches. Data is processed instantly, requiring the learning machinery to (self-)adapt by adjusting its model. However, for high-velocity streams, it is usually difficult to obtain labeled samples to train the classification model. Hence, we propose a novel online batch-based active learning algorithm (OBAL) to perform the labeling. OBAL is developed for crisis management applications where data streams are generated by the social media community. OBAL is applied to discriminate relevant from irrelevant social media items. An emergency management user will be interactively queried to label chosen items. OBAL exploits the boundary items for which it is highly uncertain about their class and makes use of two classifiers: k-Nearest Neighbors (kNN) and Support Vector Machine (SVM). OBAL is equipped with a labeling budget and a set of uncertainty strategies to identify the items for labeling. An extensive analysis is carried out to show OBAL's performance, the sensitivity of its parameters, and the contribution of the individual uncertainty strategies. Two types of datasets are used: synthetic and social media datasets related to crises. The empirical results illustrate that OBAL has a very good discrimination power. <s> BIB010
Below we list some main technical challenges and opportunities, ranging from those related to classifiers and AL problem scenarios to VA and AL/ADL integration. • Multi-label classification: Most existing multi-label classification research has been based on simple ML models (such as logistic regression BIB006 BIB007 , naive Bayes BIB006 BIB007 , and SVM BIB003 BIB006 BIB004 BIB001 ), but very few on DL architectures, such as CNNs and RNNs. We need to extend the traditional ML models to DL ones for Big Data problems, because, as we emphasized in Appendix A.1, DL algorithms have better scalability than traditional ML algorithms . Wang et al. BIB008 and Chen et al. BIB009 have developed a CNN-RNN framework and an order-free RNN for multi-label classification for image data sets, respectively, whereas few DL-based multi-label classification methods for text data have been proposed. • Hierarchical classification: As Silla et al. pointed out in their survey about hierarchical classification (Appendix A.4.4) across different application domains, flat classification (Appendix A.4.4) has received much more attention in areas such as data mining and ML. However, many important real-world classification problems are naturally cast as hierarchical classification problems, where the classes to be predicted are organized into a class hierarchy (e.g., for geospatial problems, feature type classification provides a good example)-typically a tree or a directed acyclic graph (DAG). Hierarchical classification algorithms, which utilize the hierarchical relationships between labels in making predictions, can often achieve better prediction performance than flat approaches BIB002 BIB005 . Thus, there is a clear research challenge to develop new approaches that are flexible enough to handle hierarchical classification tasks, in particular, the integration of hierarchical classification with single-label classification and with multi-label classification (i.e., HSC and HMC), respectively. • Stream-based selective sampling AL: As introduced in Section 3.1.2 and discussed in BIB010 , most AL methods in the literature use a pool-based sampling scenario; only a few methods have been developed for data streams. The stream-based approach is more appropriate for some real-world scenarios, for example, when memory or processing power is limited (mobile and embedded devices) , crisis management during disasters leveraging social media data streams, or monitoring distributed sensor networks to identify categories of events that pose risks to people or the environment. To cope with the rapidly increasing availability of geospatial streaming data, a key challenge is to develop more effective AL methods and applications using a stream-based AL scenario (a minimal sketch of such a stream-based loop follows this list).
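As a concrete illustration of the stream-based scenario in the last bullet, the sketch below processes instances one at a time and requests a label only when the current model's prediction margin falls below a threshold, subject to a labeling budget. This is a minimal sketch under our own assumptions (scikit-learn's SGDClassifier as an incrementally trainable learner; the threshold and budget values are illustrative), not an implementation of OBAL or any other cited method.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

class StreamActiveLearner:
    """Stream-based selective sampling: query the annotator only for
    instances the current model is uncertain about."""
    def __init__(self, classes, margin_threshold=0.2, budget=100):
        self.clf = SGDClassifier(loss="log_loss")  # "log" in older scikit-learn
        self.classes = np.array(classes)
        self.margin_threshold = margin_threshold
        self.budget = budget        # maximum number of labels we may request
        self.n_labeled = 0

    def process(self, x, oracle):
        """Handle one streaming instance; oracle(x) returns a human label."""
        x = np.asarray(x).reshape(1, -1)
        if self.n_labeled < 2:
            uncertain = True  # model not yet informative: always query
        else:
            proba = np.sort(self.clf.predict_proba(x)[0])
            # Margin between the two most probable classes; small margin
            # means the model cannot confidently separate them.
            uncertain = (proba[-1] - proba[-2]) < self.margin_threshold
        if uncertain and self.budget > 0:
            y = oracle(x)  # ask the annotator for this instance only
            self.clf.partial_fit(x, [y], classes=self.classes)
            self.budget -= 1
            self.n_labeled += 1
            return y
        return self.clf.predict(x)[0]  # confident (or out of budget): predict
```

The design point worth noting is that, unlike pool-based sampling, no pass over a stored pool is ever made: each instance is inspected once, labeled or predicted, and discarded, which is what makes the scenario suitable for memory-limited devices and high-velocity social media streams.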
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> • <s> Voluminous geographic data have been, and continue to be, collected with modern data acquisition techniques such as global positioning systems (GPS), high-resolution remote sensing, location-aware services and surveys, and internet-based volunteered geographic information. There is an urgent need for effective and efficient methods to extract unknown and unexpected information from spatial data sets of unprecedentedly large size, high dimensionality, and complexity. To address these challenges, spatial data mining and geographic knowledge discovery has emerged as an active research field, focusing on the development of theory, methodology, and practice for the extraction of useful information and knowledge from massive and complex spatial databases. This paper highlights recent theoretical and applied research in spatial data mining and knowledge discovery. We first briefly review the literature on several common spatial data-mining tasks, including spatial classification and prediction; spatial association rule mining; spatial cluster analysis; and geovisualization. The articles included in this special issue contribute to spatial data mining research by developing new techniques for point pattern analysis, prediction in space–time data, and analysis of moving object data, as well as by demonstrating applications of genetic algorithms for optimization in the context of image classification and spatial interpolation. The paper concludes with some thoughts on the contribution of spatial data mining and geographic knowledge discovery to geographic information sciences. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> • <s> How to build a compact and informative training data set autonomously is crucial for many real-world learning tasks, especially those with a large amount of unlabeled data and high cost of labeling. Active learning aims to address this problem by asking queries in a smart way. Two main scenarios of querying considered in the literature are query synthesis and pool-based sampling. Since in many cases synthesized queries are meaningless or difficult for humans to label, more efforts have been devoted to pool-based sampling in recent years. However, in pool-based active learning, querying requires evaluating every unlabeled data point in the pool, which is usually very time-consuming. By contrast, query synthesis has a clear advantage in querying time, which is independent of the pool size. In this paper, we propose a novel framework combining query synthesis and pool-based sampling to accelerate the learning process and overcome the current limitation of query synthesis. The basic idea is to select the data point nearest to the synthesized query as the query point. We also provide two simple strategies for synthesizing informative queries. Moreover, to further speed up querying, we employ clustering techniques on the whole data set to construct a representative unlabeled data pool based on cluster centers. Experiments on several real-world data sets show that our methods have distinct advantages in time complexity and similar performance compared to pool-based uncertainty sampling methods.
<s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> • <s> Active learning has received great interest from researchers due to its ability to reduce the amount of supervision required for effective learning. As the core component of active learning algorithms, query synthesis and pool-based sampling are two main scenarios of querying considered in the literature. Query synthesis features low querying time, but only has limited applications as the synthesized query might be unrecognizable to a human oracle. As a result, most efforts have focused on pool-based sampling in recent years, although it is much more time-consuming. In this paper, we propose new strategies for a novel querying framework that combines query synthesis and pool-based sampling. It overcomes the limitation of query synthesis, and has the advantage of fast querying. The basic idea is to synthesize an instance close to the decision boundary using labelled data, and then select the real instance closest to the synthesized one as a query. For this purpose, we propose a synthesis strategy, which can synthesize instances close to the decision boundary and spreading along the decision boundary. Since the synthesis only depends on the relatively small labelled set, instead of evaluating the entire unlabelled set as many other active learning algorithms do, our method has the advantage of efficiency. In order to handle more complicated data and make our framework compatible with powerful kernel-based learners, we also extend our method to a kernel version. Experiments on several real-world data sets show that our method has a significant advantage in time complexity and similar performance compared to pool-based uncertainty sampling methods.
<s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> • <s> The increasing availability and use of positioning devices have resulted in large volumes of trajectory data. However, semantic annotations for such data are typically added by domain experts, which is a time-consuming task. Machine-learning algorithms can help infer semantic annotations from trajectory data by learning from sets of labeled data. Specifically, active learning approaches can minimize the set of trajectories to be annotated while preserving good performance measures. The ANALYTiC web-based interactive tool visually guides users through this annotation process. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> • <s> Interactive model analysis, the process of understanding, diagnosing, and refining a machine learning model with the help of interactive visualization, is very important for users to efficiently solve real-world artificial intelligence and data mining problems. Dramatic advances in big data analytics have led to a wide variety of interactive model analysis tasks. In this paper, we present a comprehensive analysis and interpretation of this rapidly developing area. Specifically, we classify the relevant work into three categories: understanding, diagnosis, and refinement. Each category is exemplified by recent influential work. Possible future research opportunities are also explored and discussed. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> • <s> Labeling data instances is an important task in machine learning and visual analytics. Both fields provide a broad set of labeling strategies, whereby machine learning (and in particular active learning) follows a rather model-centered approach and visual analytics employs rather user-centered approaches (visual-interactive labeling). Both approaches have individual strengths and weaknesses. In this work, we conduct an experiment with three parts to assess and compare the performance of these different labeling strategies. In our study, we (1) identify different visual labeling strategies for user-centered labeling, (2) investigate strengths and weaknesses of labeling strategies for different labeling tasks and task complexities, and (3) shed light on the effect of using different visual encodings to guide the visual-interactive labeling process. We further compare labeling of single versus multiple instances at a time, and quantify the impact on efficiency. We systematically compare the performance of visual interactive labeling with that of active learning. Our main findings are that visual-interactive labeling can outperform active learning, given the condition that dimension reduction separates well the class distributions. Moreover, using dimension reduction in combination with additional visual encodings that expose the internal state of the learning model turns out to improve the performance of visual-interactive labeling. <s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> • <s> Classification of evolving data streams is a challenging task, which is suitably tackled with online learning approaches. Data is processed instantly, requiring the learning machinery to (self-)adapt by adjusting its model. However, for high-velocity streams, it is usually difficult to obtain labeled samples to train the classification model.
Hence, we propose a novel online batch-based active learning algorithm (OBAL) to perform the labeling. OBAL is developed for crisis management applications where data streams are generated by the social media community. OBAL is applied to discriminate relevant from irrelevant social media items. An emergency management user will be interactively queried to label chosen items. OBAL exploits the boundary items for which it is highly uncertain about their class and makes use of two classifiers: k-Nearest Neighbors (kNN) and Support Vector Machine (SVM). OBAL is equipped with a labeling budget and a set of uncertainty strategies to identify the items for labeling. An extensive analysis is carried out to show OBAL's performance, the sensitivity of its parameters, and the contribution of the individual uncertainty strategies. Two types of datasets are used: synthetic and social media datasets related to crises. The empirical results illustrate that OBAL has a very good discrimination power. <s> BIB008
Integration of different AL problem scenarios: As introduced in Section 3.1.2, among the three main AL problem scenarios, pool-based sampling has received substantial development. But there is potential to combine scenarios to take advantage of their respective strengths (e.g., use of real instances that humans are able to annotate for the pool-based sampling and efficiency of membership query synthesis). In early work in this direction, Hu et al. BIB002 and Wang et al. BIB003 combined membership query synthesis and pool-based sampling scenarios. The conclusion, based on their experiments on several real-world data sets, showed the strength of the combination against pool-based uncertainty sampling methods in terms of time complexity. More query strategies (Section 3.1.5) and M&DL architectures need to be tested to demonstrate the robustness of the improvement of the combination (a minimal sketch of the combination idea is given at the end of this list). Integration of VA with AL/ADL: As Biewald explained in , human-in-the-loop computing is the future of ML. Biewald emphasized that it is often very easy to get an ML algorithm to 80% accuracy, whereas it is almost impossible to get an algorithm to 99%; the best ML models let humans handle that 20%, because 80% accuracy is not good enough for most real-world applications. To integrate human-in-the-loop methodology into ML architectures, AL is the most successful "bridge" BIB007 BIB004 BIB008 BIB005 , and VA can further enhance and ease the human's role in the human-machine computing loop BIB006 BIB007 BIB001 . Integrating the strengths of AL (especially ADL) and VA will raise the effectiveness and efficiency to new levels (Sections 3.1-3.4). Bernard et al. BIB007 provided solid evidence to support this thread of research (Section 3.3).
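The combination described in the first item above can be sketched as follows: synthesize a query near the current decision boundary from labeled data only, then hand the annotator the nearest real pool instance, which (unlike the synthesized point) is guaranteed to be human-recognizable. This is a minimal sketch under our own assumptions (the midpoint between an opposite-class pair of labeled samples as the synthesized query, and scikit-learn's NearestNeighbors for the matching step); the cited works use more elaborate synthesis strategies.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def synthesize_then_match(X_labeled, y_labeled, X_pool, rng=None):
    """Return the index of the real pool instance nearest a synthetic query."""
    if rng is None:
        rng = np.random.default_rng()
    # Synthesis step: the midpoint of an opposite-class pair of labeled
    # samples lies close to the decision boundary that separates them.
    pos = X_labeled[y_labeled == 1]
    neg = X_labeled[y_labeled == 0]
    query = (pos[rng.integers(len(pos))] + neg[rng.integers(len(neg))]) / 2.0
    # Matching step: retrieve the nearest *real* unlabeled instance, so the
    # annotator never sees a possibly meaningless synthesized point. Only
    # the labeled set is used for synthesis, avoiding a classifier pass
    # over the whole pool at every iteration.
    nn = NearestNeighbors(n_neighbors=1).fit(X_pool)
    _, idx = nn.kneighbors(query.reshape(1, -1))
    return int(idx[0, 0])
```

The appeal of this hybrid is exactly the trade-off discussed above: synthesis keeps querying time nearly independent of pool size, while the matching step preserves the pool-based guarantee that every query shown to the human is a real, labelable instance.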
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> As the resolution of remote-sensing imagery increases, the full complexity of the scenes becomes increasingly difficult to approach. User-defined classes in large image databases are often composed of several groups of images and span very different scales in the space of low-level visual descriptors. The interactive retrieval of such image classes is then very difficult. To address this challenge, we evaluate here, in the context of satellite image retrieval, two general improvements for relevance feedback using support vector machines (SVMs). First, to optimize the transfer of information between the user and the system, we focus on the criterion employed by the system for selecting the images presented to the user at every feedback round. We put forward an active-learning selection criterion that minimizes redundancy between the candidate images shown to the user. Second, for image classes spanning very different scales in the low-level description space, we find that a high sensitivity of the SVM to the scale of the data brings about a low retrieval performance. We argue that the insensitivity to scale is desirable in this context, and we show how to obtain it by the use of specific kernel functions. Experimental evaluation of both ranking and classification performance on a ground-truth database of satellite images confirms the effectiveness of our approach <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Similarity measures have a long tradition in fields such as information retrieval, artificial intelligence, and cognitive science. 
Within the last years, these measures have been extended and reused to measure semantic similarity; i.e., for comparing meanings rather than syntactic differences. Various measures for spatial applications have been developed, but a solid foundation for answering what they measure; how they are best applied in information retrieval; which role contextual information plays; and how similarity values or rankings should be interpreted is still missing. It is therefore difficult to decide which measure should be used for a particular application or to compare results from different similarity theories. Based on a review of existing similarity measures, we introduce a framework to specify the semantics of similarity. We discuss similarity-based information retrieval paradigms as well as their implementation in web-based user interfaces for geographic information retrieval to demonstrate the applicability of the framework. Finally, we formulate open challenges for similarity research. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> The aim of this article is to provide a basis in evidence for or against the much-quoted assertion that 80% of all information is geospatially referenced. For this purpose, two approaches are presented that are intended to capture the portion of geospatially referenced information in user-generated content: a network approach and a cognitive approach. In the network approach, the German Wikipedia is used as a research corpus. It is considered a network with the articles being nodes and the links being edges. The Network Degree of Geospatial Reference (NDGR) is introduced as an indicator to measure the network approach. We define NDGR as the shortest path between any Wikipedia article and the closest article within the network that is labeled with coordinates in its headline. An analysis of the German Wikipedia employing this approach shows that 78% of all articles have a coordinate themselves or are directly linked to at least one article that has geospatial coordinates. The cognitive approach is manifested by the categories of geospatial reference (CGR): direct, indirect, and non-geospatial reference.
These are categories that may be distinguished and applied by humans. An empirical study including 380 participants was conducted. The results of both approaches are synthesized with the aim to (1) examine correlations between NDGR and the human conceptualization of geospatial reference and (2) separate geospatial from non-geospatial information. From the results of this synthesis, it can be concluded that 56–59% of the articles within Wikipedia can be considered to be directly or indirectly geospatially referenced. The article thus describes a method to check the validity of the ‘80%-assertion' for information corpora that can be modeled using graphs (e.g., the World Wide Web, the Semantic Web, and Wikipedia). For the corpus investigated here (Wikipedia), the ‘80%-assertion' cannot be confirmed, but would need to be reformulated as a ‘60%-assertion'. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> The recent availability of large amounts of geotagged imagery has inspired a number of data-driven solutions to the image geolocalization problem. Existing approaches predict the location of a query image by matching it to a database of georeferenced photographs. While there are many geotagged images available on photo sharing and street view sites, most are clustered around landmarks and urban areas. The vast majority of the Earth's land area has no ground level reference photos available, which limits the applicability of all existing image geolocalization methods. On the other hand, there is no shortage of visual and geographic data that densely covers the Earth - we examine overhead imagery and land cover survey data - but the relationship between this data and ground level query photographs is complex. In this paper, we introduce a cross-view feature translation approach to greatly extend the reach of image geolocalization methods. We can often localize a query even if it has no corresponding ground level images in the database. A key idea is to learn the relationship between ground level appearance and overhead appearance and land cover attributes from sparsely available geotagged ground-level images. We perform experiments over a 1600 km2 region containing a variety of scenes and land cover types. For each query, our algorithm produces a probability density over the region of interest.
This work offers new insights on enriching future gazetteers with the use of Hadoop clusters, and makes contributions in connecting GIS to the cloud computing environment for the next frontier of Big Geo-Data analytics. <s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision. <s> BIB008 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Scene recognition is one of the hallmark tasks of computer vision, allowing definition of a context for object recognition. Whereas the tremendous recent progress in object recognition tasks is due to the availability of large datasets like ImageNet and the rise of Convolutional Neural Networks (CNNs) for learning high-level features, performance at scene recognition has not attained the same level of success. This may be because current deep features trained from ImageNet are not competitive enough for such tasks. Here, we introduce a new scene-centric database called Places with over 7 million labeled pictures of scenes. We propose new methods to compare the density and diversity of image datasets and show that Places is as dense as other scene datasets and has more diversity. Using CNN, we learn deep features for scene recognition tasks, and establish new state-of-the-art results on several scene-centric datasets. A visualization of the CNN layers' responses allows us to show differences in the internal representations of object-centric and scene-centric networks. <s> BIB009 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Learning effective feature representations and similarity measures are crucial to the retrieval performance of a content-based image retrieval (CBIR) system. Despite extensive research efforts for decades, it remains one of the most challenging open problems that considerably hinders the successes of real-world CBIR systems. The key challenge has been attributed to the well-known "semantic gap" issue that exists between low-level image pixels captured by machines and high-level semantic concepts perceived by humans. Among various techniques, machine learning has been actively investigated as a possible direction to bridge the semantic gap in the long term.
Inspired by recent successes of deep learning techniques for computer vision and other applications, in this paper, we attempt to address an open problem: whether deep learning is a hope for bridging the semantic gap in CBIR and how much improvement in CBIR tasks can be achieved by exploring the state-of-the-art deep learning techniques for learning feature representations and similarity measures. Specifically, we investigate a framework of deep learning with application to CBIR tasks with an extensive set of empirical studies by examining a state-of-the-art deep learning method (Convolutional Neural Networks) for CBIR tasks under varied settings. From our empirical studies, we find some encouraging results and summarize some important insights for future research. <s> BIB010 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors. <s> BIB011 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Deep Learning waves have lapped at the shores of computational linguistics for several years now, but 2015 seems like the year when the full force of the tsunami hit the major Natural Language Processing (NLP) conferences. However, some pundits are predicting that the final damage will be even worse. Accompanying ICML 2015 in Lille, France, there was another, almost as big, event: the 2015 Deep Learning Workshop. The workshop ended with a panel discussion, and at it, Neil Lawrence said, "NLP is kind of like a rabbit in the headlights of the Deep Learning machine, waiting to be flattened." Now that is a remark that the computational linguistics community has to take seriously!
Is it the end of the road for us? Where are these predictions of steamrollering coming from? At the June 2015 opening of the Facebook AI Research Lab in Paris, its director Yann LeCun said: “The next big step for Deep Learning is natural language understanding, which aims to give machines the power to understand not just individual words but entire sentences and paragraphs.”1 In a November 2014 Reddit AMA (Ask Me Anything), Geoff Hinton said, “I think that the most exciting areas over the next five years will be really understanding text and videos. I will be disappointed if in five years’ time we do not have something that can watch a YouTube video and tell a story about what happened. In a few years time we will put [Deep Learning] on a chip that fits into someone’s ear and have an English-decoding chip that’s just like a real Babel fish.”2 And Yoshua Bengio, the third giant of modern Deep Learning, has also increasingly oriented his group’s research toward language, including recent exciting new developments in neural machine translation systems. It’s not just Deep Learning researchers. When leading machine learning researcher Michael Jordan was asked at a September 2014 AMA, “If you got a billion dollars to spend on a huge research project that you get to lead, what would you like to do?”, he answered: “I’d use the billion dollars to build a NASA-size program focusing on natural language processing, in all of its glory (semantics, pragmatics, etc.).” He went on: “Intellectually I think that NLP is fascinating, allowing us to focus on highly structured inference problems, on issues that go to the core of ‘what is thought’ but remain eminently practical, and on a technology <s> BIB012 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection. <s> BIB013 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> We explore the use of convolutional neural networks for the semantic classification of remote sensing scenes. Two recently proposed architectures, CaffeNet and GoogLeNet, are adopted, with three different learning modalities. Besides conventional training from scratch, we resort to pre-trained networks that are only fine-tuned on the target data, so as to avoid overfitting problems and reduce design time. 
Experiments on two remote sensing datasets, with markedly different characteristics, testify to the effectiveness and wide applicability of the proposed solution, which guarantees a significant performance improvement over all state-of-the-art references. <s> BIB014 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Urban areas of interest (AOI) refer to the regions within an urban environment that attract people's attention. Such areas often have high exposure to the general public, and receive a large number of visits. As a result, urban AOI can reveal useful information for city planners, transportation analysts, and location-based service providers to plan new business, extend existing infrastructure, and so forth. Urban AOI exist in people's perception and are defined by behaviors. However, such perception was rarely captured until the Social Web information technology revolution. Social media data record the interactions between users and their surrounding environment, and thus have the potential to uncover interesting urban areas and their underlying spatiotemporal dynamics. This paper presents a coherent framework for extracting and understanding urban AOI based on geotagged photos. Six different cities from six different countries have been selected for this study, and Flickr photo data covering these cities in the past ten years (2004–2014) have been retrieved. We identify AOI using the DBSCAN clustering algorithm, understand AOI by extracting distinctive textual tags and preferable photos, and discuss the spatiotemporal dynamics as well as some insights derived from the AOI. An interactive prototype has also been implemented as a proof-of-concept. While Flickr data have been used in this study, the presented framework can also be applied to other geotagged photos. <s> BIB015 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> The recent availability of geo-tagged images and rich geospatial data has inspired a number of algorithms for image based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned crossview image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations.
<s> BIB016 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> This paper describes our deep learning system for sentiment analysis of tweets. The main contribution of this work is a new model for initializing the parameter weights of the convolutional neural network, which is crucial to train an accurate model while avoiding the need to inject any additional features. Briefly, we use an unsupervised neural language model to train initial word embeddings that are further tuned by our deep learning model on a distant supervised corpus. At a final stage, the pre-trained parameters of the network are used to initialize the model. We train the latter on the supervised training data recently made available by the official system evaluation campaign on Twitter Sentiment Analysis organized by Semeval-2015. A comparison between the results of our approach and the systems participating in the challenge on the official test sets suggests that our model could be ranked in the first two positions in both the phrase-level subtask A (among 11 teams) and in the message-level subtask B (among 40 teams). This is important evidence of the practical value of our solution. <s> BIB017 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes is comprised of a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high quality pixel-level annotations, 20 000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark. <s> BIB018 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> The role of social media, in particular microblogging platforms such as Twitter, as a conduit for actionable and tactical information during disasters is increasingly acknowledged. However, time-critical analysis of big crisis data on social media streams brings challenges to machine learning techniques, especially the ones that use supervised learning. The scarcity of labeled data, particularly in the early hours of a crisis, delays the machine learning process. The current state-of-the-art classification methods require a significant amount of labeled data specific to a particular event for training plus a lot of feature engineering to achieve best results.
In this work, we introduce neural network based classification methods for binary and multi-class tweet classification tasks. We show that neural network based models do not require any feature engineering and perform better than state-of-the-art methods. In the early hours of a disaster when no labeled data is available, our proposed method makes the best use of the out-of-event data and achieves good results. <s> BIB019 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> A new methodology is introduced that leverages data harvested from social media for tasking the collection of remote-sensing imagery during disasters or emergencies. The images are then fused with multiple sources of contributed data for the damage assessment of transportation infrastructure. The capability is valuable in situations where environmental hazards such as hurricanes or severe weather affect very large areas. During these types of disasters it is paramount to ‘cue’ the collection of remote-sensing images to assess the impact of fast-moving and potentially life-threatening events. The methodology consists of two steps. First, real-time data from Twitter are monitored to prioritize the collection of remote-sensing images for evolving disasters. Commercial satellites are then tasked to collect high-resolution images of these areas. Second, a damage assessment of transportation infrastructure is carried out by fusing the tasked images with contributed data harvested from social media such as Flickr and Twitter, and any additional available data. To demonstrate its feasibility, the proposed methodology is applied and tested on the 2013 Colorado floods with a special emphasis on Boulder County and the cities of Boulder and Longmont. <s> BIB020 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Following an avalanche, one of the factors that affect victims' chance of survival is the speed with which they are located and dug out. Rescue teams use techniques like trained rescue dogs and electronic transceivers to locate victims. However, the resources and time required to deploy rescue teams are major bottlenecks that decrease a victim's chance of survival. Advances in the field of Unmanned Aerial Vehicles (UAVs) have enabled the use of flying robots equipped with sensors like optical cameras to assess the damage caused by natural or manmade disasters and locate victims in the debris. In this paper, we propose assisting avalanche search and rescue (SAR) operations with UAVs fitted with vision cameras. The sequence of images of the avalanche debris captured by the UAV is processed with a pre-trained Convolutional Neural Network (CNN) to extract discriminative features. A trained linear Support Vector Machine (SVM) is integrated at the top of the CNN to detect objects of interest. Moreover, we introduce a pre-processing method to increase the detection rate and a post-processing method based on a Hidden Markov Model to improve the prediction performance of the classifier. Experimental results conducted on two different datasets at different levels of resolution show that the detection performance increases with an increase in resolution, while the computation time increases.
They also suggest that a significant decrease in processing time can be achieved thanks to the pre-processing step. <s> BIB021 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Hate speech detection on Twitter is critical for applications like controversial event extraction, building AI chatterbots, content recommendation, and sentiment analysis. We define this task as being able to classify a tweet as racist, sexist or neither. The complexity of the natural language constructs makes this task very challenging. We perform extensive experiments with multiple deep learning architectures to learn semantic word embeddings to handle this complexity. Our experiments on a benchmark dataset of 16K annotated tweets show that such deep learning methods outperform state-of-the-art char/word n-gram methods by ~18 F1 points. <s> BIB022 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> This paper extends recent research into the usefulness of volunteered photos for land cover extraction, and investigates whether this usefulness can be automatically assessed by an easily accessible, off-the-shelf neural network pre-trained on a variety of scene characteristics. Geo-tagged photographs are sometimes presented to volunteers as part of a game which requires them to extract relevant facts about land use. The challenge is to select the most relevant photographs in order to most efficiently extract the useful information while maintaining the engagement and interests of volunteers. By repurposing an existing network which had been trained on an extensive library of potentially relevant features, we can quickly carry out initial assessments of the general value of this approach, pick out especially salient features, and identify focus areas for future neural network training and development. We compare two approaches to extract land cover information from the network: a simple post hoc weighting approach accessible to non-technical audiences and a more complex decision tree approach that involves training on domain-specific features of interest. Both approaches had reasonable success in characterizing human influence within a scene when identifying the land use types (as classified by Urban Atlas) present within a buffer around the photograph's location. This work identifies important limitations and opportunities for using volunteered photographs as follows: (1) the false precision of a photograph's location is less useful for identifying on-the-spot land cover than the information it can give on neighbouring combinations of land cover; (2) ground-acquired photographs, interpreted by a neural network, can supplement plan view imagery by identifying features which will never be discernible from above; (3) when dealing with contexts where there are very few exemplars of particular classes, an independent a posteriori weighting of existing scene attributes and categories can buffer against over-specificity. <s> BIB023 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Challenges and Opportunities From Application Perspective (for GIScience and RS Audience) <s> Urban planning applications (energy audits, investment, etc.)
require an understanding of built infrastructure and its environment, i.e., both low-level, physical features (amount of vegetation, building area and geometry etc.), as well as higher-level concepts such as land use classes (which encode expert understanding of socio-economic end uses). This kind of data is expensive and labor-intensive to obtain, which limits its availability (particularly in developing countries). We analyze patterns in land use in urban neighborhoods using large-scale satellite imagery data (which is available worldwide from third-party providers) and state-of-the-art computer vision techniques based on deep convolutional neural networks. For supervision, given the limited availability of standard benchmarks for remote-sensing data, we obtain ground truth land use class labels carefully sampled from open-source surveys, in particular the Urban Atlas land classification dataset of 20 land use classes across ~300 European cities. We use this data to train and compare deep architectures which have recently shown good performance on standard computer vision tasks (image classification and segmentation), including on geospatial data. Furthermore, we show that the deep representations extracted from satellite imagery of urban environments can be used to compare neighborhoods across several cities. We make our dataset available for other machine learning researchers to use for remote-sensing applications. <s> BIB024
As Raad emphasized in , "When data volume swells beyond a human's ability to discern the patterns in it ... GIS, infused with artificial intelligence, can help executives make better decisions", we share the same vision that GIScience researchers need to bring M&DL into our community, and start to build GeoAI. Early achievements in M&DL have thus far been greater for image data than for text BIB011 BIB012 (the main reasons are discussed in ). A major reason is the availability of big image repositories, such as ImageNet BIB002 , that support such work for benchmarking. For example, well-known pre-trained CNN models (i.e., ConvNets), such as AlexNet BIB004 , VGG ConvNets BIB008 , and GoogLeNet BIB013 , are trained on ImageNet BIB002 . Although substantial progress has been made in applying M&DL to image-based tasks, a range of challenges remain in RS and other geospatial image domains. One key challenge is related to leveraging image data collected by the increasing variety of drone-mounted sensors. Drones can easily collect large sets of image data, for example, in disaster management applications. In this context, DL has already been applied to building extraction in disaster situations , as well as avalanche support focused on finding victims BIB021 . Moving beyond "traditional" uses of supervised DL for image classification, one challenge is to develop interactive web apps that combine AL/ADL and VA to ask volunteers and domain experts to label a small set of data and then build a good classifier, which can help to quickly classify the images and then plot them on a map. Doing so can help decision makers to get the big picture and generate insights in a quick and accurate manner. Such a system, of course, will require substantial testing to be usable in domains where life and property are at risk, but it is that risk that should drive research toward this objective. While M&DL for image classification has a longer history BIB011 BIB012 , success in handling NLP tasks, such as language modeling and sentiment analysis BIB022 , is catching up. As Knight emphasizes, it is hard to envision how we will collaborate with AI machines without machines understanding our language, since language is the most powerful way we make sense of the world and interact with it. These advances in text processing are particularly important since massive amounts of unstructured text are generated each day; based on industry estimates, as much as 80% of the data generated may be unstructured . Estimates suggest that at least 60% of that unstructured text contains geospatial references BIB005 . These unstructured data signify and give meaning to geospatial information through natural language. However, GIScience has paid limited attention to unstructured data sources. An important step in moving from unstructured text to meaningful information is to classify the text into categories relevant to target tasks (i.e., text classification, Appendix A.5). In Section 4, we have seen some successful applications using AL and ADL in the GIScience and RS fields. Even though most of these are based on RS imagery, with some on GPS trajectories, and only a few focus on geospatial text data, as outlined in the review above, advances in M&DL are rapidly being extended into a wide array of other domains, including NLP and other text-related challenges.
Related to these image and NLP processing advances in M&DL, there are multiple GIScience and RS problems, such as geographic information retrieval (GIR), geospatial semantics, and geolocalization, to which VA, AL, and ADL based strategies can be applied productively. We highlight just a few of these below. • Geospatial image based applications: Based on the advances achieved in M&DL, many promising geospatial applications using big geospatial image data sets are becoming possible. Diverse GIScience and RS problems can benefit from the methods we reviewed in this paper; potential applications include land use and land cover classification BIB014 BIB023 , identification and understanding of patterns and interests in urban environments BIB015 BIB024 , geospatial scene understanding BIB009 BIB018 , and content-based image retrieval BIB001 BIB010 . Another important research direction is image geolocalization (prediction of the geolocation of a query image BIB006 ); see BIB016 for an example of DL based geolocalization using geo-tagged images, which did not touch on AL or VA. • Geospatial text based applications: GIR and spatial language processing have potential application to social media mining in domains such as emergency management. There have already been some successful examples of DL classification algorithms being applied to tackling GIScience problems relating to crisis management, sentiment analysis, sarcasm detection, and hate speech detection in tweets; see: BIB022 BIB019 BIB017 . A review of the existing geospatial semantic research can be found in , but neither DL, AL, nor VA is touched upon in that review. Thus, the research topics and challenges discussed there can find potential solutions using the methods we have investigated in this paper. For example, the methods we investigated here will be useful for semantic similarity and word-sense disambiguation, which are important components of GIR BIB003 . Through integrating GIR with VA, AL and/or ADL, domain experts can play an important role in the DL-empowered computational loop for steering the improvement of the machine learner's performance. Recently, Adams and McKenzie used a character-level CNN to classify multilingual text, and their method can be improved using the "tool sets" we investigated in this paper. Some specific application problems for which we believe that VA-enabled ADL has the potential to make a dramatic impact are: identification of documents (from tweets, through news stories, to blogs) that are "about" places; classification of geographic statements by scale; and retrieval of geographic statements about movement or events. • Geospatial text and image based applications: Beyond efforts to apply AL and related methods to text alone, text-oriented applications can be expanded with the fusion of text and geospatial images (e.g., RS imagery). See Cervone et al. BIB020 for an example in which RS and social media data (specifically, tweets and Flickr images) are fused for damage assessment during floods. The integration of VA and AL/ADL should also be explored as a mechanism to generate actionable insights from heterogeneous data sources in a quick manner. Deep learning shines where big labeled data is available. Thus, existing digital gazetteer research that used big data analytics (see BIB007 for an example, where neither DL, AL, nor VA was used) can also be advanced by the methods reviewed in this paper.
More specifically, for example, the method used in BIB007 (extracting place types from Flickr photo tags) can be extended and enriched by the image classification and recognition methods from the geospatial image based applications mentioned above. Overall, based on the review above, we contend that GeoAI, as implemented via M&DL methods empowered with VA, AL, and ADL, will have a wide array of geospatial applications and thus has considerable potential to address major scientific and societal challenges.
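To make the active learning loop that underlies the VA-enabled ADL systems discussed above concrete, the following minimal Python sketch illustrates pool-based active learning with least-confident uncertainty sampling for a text classification task. It is only a sketch under stated assumptions: the toy crisis-tweet corpus, the seed set, the batch size, and the simulated oracle are all hypothetical placeholders for the expert-labeling step that a visual interface would support.

```python
# A minimal sketch of pool-based active learning with uncertainty sampling.
# The toy corpus, labels, seed set, and batch size are hypothetical.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def least_confident(model, X_pool, batch_size):
    """Return indices into X_pool whose top-class probability is lowest."""
    probs = model.predict_proba(X_pool)
    uncertainty = 1.0 - probs.max(axis=1)
    return np.argsort(uncertainty)[-batch_size:]

docs = ["flooding reported downtown", "sunny day at the park",
        "bridge closed after the storm", "new cafe opened on main street"] * 25
labels = np.array([1, 0, 1, 0] * 25)   # 1 = crisis-related, 0 = not

X = TfidfVectorizer().fit_transform(docs)
labeled = [0, 1, 2, 3]                 # tiny seed of expert-labeled items
pool = [i for i in range(len(docs)) if i not in labeled]

for _ in range(3):                     # a few active learning rounds
    model = LogisticRegression().fit(X[labeled], labels[labeled])
    picked = [pool[i] for i in least_confident(model, X[pool], batch_size=4)]
    # A VA interface would ask a human annotator to label `picked` here;
    # we simulate that oracle by looking up the known labels.
    labeled += picked
    pool = [i for i in pool if i not in picked]
```

In a deployed system, the simulated oracle would be replaced by volunteers or domain experts labeling the selected items through the visual interface, and the classical classifier could be swapped for a deep model to obtain ADL.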
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.1. Machine learning and Deep Learning <s> The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.1. Machine learning and Deep Learning <s> A new generation of data processing systems, including web search, Google's Knowledge Graph, IBM's Watson, and several different recommendation systems, combine rich databases with software driven by machine learning. The spectacular successes of these trained systems have been among the most notable in all of computing and have generated excitement in health care, finance, energy, and general business. But building them can be challenging, even for computer scientists with PhD-level training. If these systems are to have a truly broad impact, building them must become easier. We explore one crucial pain point in the construction of trained systems: feature engineering. Given the sheer size of modern datasets, feature developers must (1) write code with few effective clues about how their code will interact with the data and (2) repeatedly endure long system waits even though their code typically changes little from run to run. We propose brainwash, a vision for a feature engineering data system that could dramatically ease the Explore-Extract-Evaluate interaction loop that characterizes many trained system projects. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.1. Machine learning and Deep Learning <s> Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning.
It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors. <s> BIB003
Machine learning (ML) [143] is a sub-field of computer science, in particular, artificial intelligence (AI), that focuses on algorithms for learning from data. Traditional ML relies on feature engineering, the process of using domain-specific prior knowledge to manually extract features from data BIB001 . The features are then used to generate an ML model, which can make predictions for new unseen data. In both ML and pattern recognition, a feature (sometimes also called signal) [143] is an individual measurable attribute/property or characteristic of a phenomenon being observed. Features encode information from raw data that allows ML algorithms to predict the category of an unknown object (e.g., a piece of text or an image) or a value (e.g., stock price) BIB002 . Thus, any attribute that improves the ML algorithm's performance can serve as a feature. Deep learning (DL, i.e., deep neural nets) is a subset of ML, where ML is a subset of AI (see for a detailed introduction to the relations among the three domains of research and practice). DL can discover intricate hidden patterns from big data without feature engineering BIB003 . Feature engineering is a core, labor-intensive technique for traditional ML BIB001 BIB002 , and the potential to skip this often expensive step is one motivation for recent attention to DL. Furthermore, DL algorithm performance improves dramatically when data volume increases; thus, DL algorithms have better scalability than traditional ML algorithms for Big Data problems. The expensive process of feature engineering is skipped for DL, because DL can automatically learn features from data, but it must be replaced by much larger labeled data sets that can be as time-consuming to create as the process of feature engineering. While labeling a data set is easier than discovering the underlying features that generalize the categories it contains, the volume of labeled data needed is the bottleneck for DL. This is why we need active deep learning (Section 3), to reduce the amount of data that must be labeled.
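To illustrate the contrast just described, the short sketch below trains a classical classifier on crude hand-engineered features and a small neural network on raw pixels; the digits data set, the specific feature choices, and the network size are illustrative assumptions rather than a recommended design.

```python
# A minimal sketch contrasting feature engineering with feature learning.
# Feature choices and network size are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)          # 8x8 grayscale digit images
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Traditional ML: hand-engineered features (crude row/column means standing
# in for expert-designed descriptors) feed a simple classifier.
def hand_features(flat_imgs):
    imgs = flat_imgs.reshape(-1, 8, 8)
    return np.hstack([imgs.mean(axis=1), imgs.mean(axis=2)])

clf = LogisticRegression(max_iter=1000).fit(hand_features(X_tr), y_tr)
print("hand-crafted features:", clf.score(hand_features(X_te), y_te))

# Neural network: hidden layers learn features from raw pixels, skipping
# manual feature engineering at the cost of needing more labeled data.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
print("learned features:", net.score(X_te, y_te))
```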
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.2. Types of Learning Methods <s> This is an interdisciplinary book on neural networks, statistics and fuzzy systems. A unique feature is the establishment of a general framework for adaptive data modeling within which various methods from statistics, neural networks and fuzzy logic are presented. Chapter summaries, examples and case studies are also included, along with a companion Web site offering software for use with the book. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.2. Types of Learning Methods <s> Semi-supervised learning is a learning paradigm concerned with the study of how computers and natural systems such as humans learn in the presence of both labeled and unlabeled data. Traditionally, learning has been studied either in the unsupervised paradigm (e.g., clustering, outlier detection) where all the data is unlabeled, or in the supervised paradigm (e.g., classification, regression) where all the data is labeled. The goal of semi-supervised learning is to understand how combining labeled and unlabeled data may change the learning behavior, and design algorithms that take advantage of such a combination. Semi-supervised learning is of great interest in machine learning and data mining because it can use readily available unlabeled data to improve supervised learning tasks when the labeled data is scarce or expensive. Semi-supervised learning also shows potential as a quantitative tool to understand human category learning, where most of the input is self-evidently unlabeled. In this introductory book, we present some popular semi-supervised learning models, including self-training, mixture models, co-training and multiview learning, graph-based methods, and semi-supervised support vector machines. For each model, we discuss its basic mathematical formulation. The success of semi-supervised learning depends critically on some underlying assumptions. We emphasize the assumptions made by each model and give counterexamples when appropriate to demonstrate the limitations of the different models. In addition, we discuss semi-supervised learning for cognitive psychology. Finally, we give a computational learning theoretic perspective on semi-supervised learning, and we conclude the book with a brief discussion of open questions in the field. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.2. Types of Learning Methods <s> We present a robust and real-time monocular six degree of freedom relocalization system. Our system trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking 5ms per frame to compute. It obtains approximately 2m and 3 degrees accuracy for large scale outdoor scenes and 0.5m and 5 degrees accuracy indoors. This is achieved using an efficient 23 layer deep convnet, demonstrating that convnets can be used to solve complicated out of image plane regression problems. This was made possible by leveraging transfer learning from large scale classification data.
We show that the PoseNet localizes from high level features and is robust to difficult lighting, motion blur and different camera intrinsics where point based SIFT registration fails. Furthermore we show how the pose feature that is produced generalizes to other scenes allowing us to regress pose with only a few dozen training examples. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.2. Types of Learning Methods <s> This book offers a comprehensive review of multilabel techniques widely used to classify and label texts, pictures, videos and music in the Internet. A deep review of the specialized literature on the field includes the available software needed to work with this kind of data. It provides the user with the software tools needed to deal with multilabel data, as well as step by step instruction on how to use them. The main topics covered are: The special characteristics of multi-labeled data and the metrics available to measure them. The importance of taking advantage of label correlations to improve the results. The different approaches followed to face multi-label classification. The preprocessing techniques applicable to multi-label datasets. The available software tools to work with multi-label data. This book is beneficial for professionals and researchers in a variety of fieldsbecause of the wide range of potential applications for multilabel classification. Besides its multiple applications to classify different types of online information, it is also useful in many other areas, such as genomics and biology. No previous knowledge about the subject is required. The book introduces all the needed concepts to understand multilabel data characterization, treatment and evaluation. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.2. Types of Learning Methods <s> The United States spends more than $1B each year on initiatives such as the American Community Survey (ACS), a labor-intensive door-to-door study that measures statistics relating to race, gender, education, occupation, unemployment, and other demographic factors. Although a comprehensive source of data, the lag between demographic changes and their appearance in the ACS can exceed half a decade. As digital imagery becomes ubiquitous and machine vision techniques improve, automated data analysis may provide a cheaper and faster alternative. Here, we present a method that determines socioeconomic trends from 50 million images of street scenes, gathered in 200 American cities by Google Street View cars. Using deep learning-based computer vision techniques, we determined the make, model, and year of all motor vehicles encountered in particular neighborhoods. Data from this census of motor vehicles, which enumerated 22M automobiles in total (8% of all automobiles in the US), was used to accurately estimate income, race, education, and voting patterns, with single-precinct resolution. (The average US precinct contains approximately 1000 people.) The resulting associations are surprisingly simple and powerful. For instance, if the number of sedans encountered during a 15-minute drive through a city is higher than the number of pickup trucks, the city is likely to vote for a Democrat during the next Presidential election (88% chance); otherwise, it is likely to vote Republican (82%). 
Our results suggest that automated systems for monitoring demographic trends may effectively complement labor-intensive approaches, with the potential to detect trends with fine spatial resolution, in close to real time. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.2. Types of Learning Methods <s> Deep learning has shown to be effective for robust and real-time monocular image relocalisation. In particular, PoseNet [22] is a deep convolutional neural network which learns to regress the 6-DOF camera pose from a single image. It learns to localize using high level features and is robust to difficult lighting, motion blur and unknown camera intrinsics, where point based SIFT registration fails. However, it was trained using a naive loss function, with hyper-parameters which require expensive tuning. In this paper, we give the problem a more fundamental theoretical treatment. We explore a number of novel loss functions for learning camera pose which are based on geometry and scene reprojection error. Additionally we show how to automatically learn an optimal weighting to simultaneously regress position and orientation. By leveraging geometry, we demonstrate that our technique significantly improves PoseNets performance across datasets ranging from indoor rooms to a small city. <s> BIB006
There are three major types of learning methods in ML (and DL, since DL is a branch of ML) BIB004 : supervised learning, unsupervised learning, and semi-supervised learning. Appendix A.2.1. Supervised Learning Supervised learning is the ML task of inferring a function from labeled training data. In supervised learning, the data instances are labeled by human annotators or experts in a problem domain BIB001 . Labeling refers to the process of annotating each piece of text or image with one of a pre-defined set of class names. ML methods can use this information to learn a model that can infer the knowledge needed to automatically label new (i.e., never seen before) data instances. Supervised ML methods usually divide the data set into two (i.e., training and test) or three (i.e., training, validation, and test) disjoint subsets. The labels of instances in the test set will not be given to the ML algorithm, but will only be used to evaluate its performance. The main idea of supervised learning is to build an ML model (e.g., a classifier for classification tasks, or a regression model for regression tasks) using the training data set and then use the testing data set to validate the model's performance. With supervised learning there are several metrics to measure success. These metrics can be used to judge the adequacy of a method in particular situations and to compare the effectiveness of different methods over various situations . Appendix A.2.2. Unsupervised Learning Unsupervised learning is the ML task of inferring a function to describe hidden structure from "unlabeled" data (i.e., without human annotation). Since the examples given to the learner are unlabeled, expert knowledge is not a foundation of the learning and there is no evaluation of the accuracy of the structure learned by the relevant algorithm. The k-means clustering algorithm and principal component analysis (PCA) are popular unsupervised ML algorithms, among others. Appendix A.2.3. Semi-Supervised Learning Semi-supervised learning BIB002 is a learning paradigm concerned with the study of how computers and humans learn using both labeled and unlabeled data. One goal of research in semi-supervised learning is to understand how combining labeled and unlabeled data may change the learning behavior, and design algorithms that take advantage of such a combination. A survey focusing on semi-supervised learning for classification can be found in . In the survey, Zhu emphasized that there are some similarities between ML and human learning. Understanding human cognitive model(s) can lead to novel ML approaches . Do humans learn in a semi-supervised manner? The answer is "yes". Humans accumulate "unlabeled" input data, which (often unconsciously) are used to help build the connection between "labels" and input once labeled data is provided . As emphasized in Section 1, labeled data sets are often difficult, expensive, and/or time-consuming to obtain, as they require the efforts of experienced human annotators or domain experts. Semi-supervised learning addresses this problem by using a large amount of unlabeled data, together with a relatively small amount of labeled data, to build good classifiers (Appendix A.3). Semi-supervised learning has received considerable attention both in theory and in practice in ML and data mining because it requires less human effort and gives higher accuracy than supervised methods . Appendix A.2.4.
Brief Discussion of Learning Types When a data set contains both labeled and unlabeled samples, ML methods can combine techniques from the two previous categories (i.e., supervised and unsupervised) to accomplish semi-supervised learning tasks BIB002 . Labeled data instances can be used to induce a model, as in supervised learning, then the model can be refined with the information from unlabeled samples. Analogously, unsupervised tasks can be improved by introducing the clues given by the labeled instances. Active learning (Section 3.1) is semi-supervised learning, and most DL algorithms (e.g., CNN, RNN, and LSTM) belong to supervised learning. In this paper, we focus on M&DL for classification (where the output of the process is categorical/discrete). Supervised/semi-supervised ML is also used for regression tasks (where the output of the process is continuous). The application of regression is beyond the scope of this paper; interested readers can find recent overviews in BIB003 BIB005 BIB006 . Appendix A.3. Classifier An ML algorithm that implements a type of classification task (Appendix A.4) is known as a classifier. The most popular ML algorithms for classification problems are logistic regression, naive Bayes, and support vector machine (SVM). The convolutional neural network (CNN), recurrent neural network (RNN), and two variants of RNN, long short-term memory (LSTM) and gated recurrent unit (GRU), are among the most commonly used DL algorithms (also called architectures) for classification problems.
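As a compact, hedged illustration of the learning types from Appendix A.2 and one of the classifiers named above (an SVM), the sketch below runs supervised, unsupervised, and semi-supervised learning on the same data; the synthetic data set and the 90% label-masking rate are illustrative assumptions.

```python
# A minimal sketch of the three learning paradigms using scikit-learn.
# The synthetic data and the 90% label-masking rate are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Supervised: learn from fully labeled training data, evaluate on test data.
svm = SVC(probability=True, random_state=0).fit(X_tr, y_tr)
print("supervised accuracy:", svm.score(X_te, y_te))

# Unsupervised: no labels at all; k-means only describes hidden structure.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_tr)
print("cluster sizes:", np.bincount(clusters))

# Semi-supervised: mask 90% of the labels (-1 marks "unlabeled") and let
# self-training exploit the unlabeled portion alongside the labeled seed.
rng = np.random.RandomState(0)
y_partial = y_tr.copy()
y_partial[rng.rand(len(y_tr)) < 0.9] = -1
semi = SelfTrainingClassifier(SVC(probability=True, random_state=0))
semi.fit(X_tr, y_partial)
print("semi-supervised accuracy:", semi.score(X_te, y_te))
```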
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> Grouping images into (semantically) meaningful categories using low level visual features is a challenging and important problem in content based image retrieval. Using binary Bayesian classifiers, we attempt to capture high level concepts from low level image features under the constraint that the test image does belong to one of the classes of interest. Specifically, we consider the hierarchical classification of vacation images; at the highest level, images are classified into indoor/outdoor classes, outdoor images are further classified into city/landscape classes, and finally, a subset of landscape images is classified into sunset, forest, and mountain classes. We demonstrate that a small codebook (the optimal size of codebook is selected using a modified MDL criterion) extracted from a vector quantizer can be used to estimate the class-conditional densities of the observed features needed for the Bayesian methodology. On a database of 6931 vacation photographs, our system achieved an accuracy of 90.5% for indoor vs. outdoor classification, 95.3% for city vs. landscape classification, 96.6% for sunset vs. forest and mountain classification, and 95.5% for forest vs. mountain classification. We further develop a learning paradigm to incrementally train the classifiers as additional training samples become available and also show preliminary results for feature size reduction using clustering techniques. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> For multi-class classification with Support Vector Machines (SVMs) a binary decision tree architecture is proposed for computational efficiency. The proposed SVM-based binary tree takes advantage of both the efficient computation of the tree architecture and the high classification accuracy of SVMs. A modified Self-Organizing Map (SOM), K-SOM (Kernel-based SOM), is introduced to convert the multi-class problems into binary trees, in which the binary decisions are made by SVMs. For consistency between the SOM and SVM the K-SOM utilizes distance measures at the kernel space, not at the input space. Also, by allowing overlaps in the binary decision tree, it overcomes the performance degradation of the tree structure, and shows classification accuracy comparable to those of the popular multi-class SVM approaches with "one-to-one" and "one-to-the others". <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> The “one against one” and the “one against all” are the two most popular strategies for multi-class SVM; however, according to the literature review, it seems impossible to conclude which one is better for handwriting recognition. Thus, we compared these two classical strategies on two different handwritten character recognition problems. Several post-processing methods for estimating posterior probability were also evaluated and the results were compared with the ones obtained using MLP. Finally, the “one against all” strategy appears significantly more accurate for digit recognition, while the difference between the two strategies is much less obvious with upper-case letters.
Besides, the “one against one” strategy is substantially faster to train and seems preferable for problems with a very large number of classes. To conclude, SVMs allow significantly better estimation of probabilities than MLP, which is promising from the point of view of their incorporation into handwriting recognition systems. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> This paper deals with categorization tasks where categories are partially ordered to form a hierarchy. First, it introduces the notion of consistent classification which takes into account the semantics of a class hierarchy. Then, it presents a novel global hierarchical approach that produces consistent classification. This algorithm with AdaBoost as the underlying learning procedure significantly outperforms the corresponding “flat” approach, i.e. the approach that does not take into account the hierarchical information. In addition, the proposed algorithm surpasses the hierarchical local top-down approach on many synthetic and real tasks. For evaluation purposes, we use a novel hierarchical evaluation measure that has some attractive properties: it is simple, requires no parameter tuning, gives credit to partially correct classification and discriminates errors by both distance and depth in a class hierarchy. <s> BIB004 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> Hierarchical multi-label classification (HMC) is a variant of classification where instances may belong to multiple classes at the same time and these classes are organized in a hierarchy. This article presents several approaches to the induction of decision trees for HMC, as well as an empirical study of their use in functional genomics. We compare learning a single HMC tree (which makes predictions for all classes together) to two approaches that learn a set of regular classification trees (one for each class). The first approach defines an independent single-label classification task for each class (SC). Obviously, the hierarchy introduces dependencies between the classes. While they are ignored by the first approach, they are exploited by the second approach, named hierarchical single-label classification (HSC). Depending on the application at hand, the hierarchy of classes can be such that each class has at most one parent (tree structure) or such that classes may have multiple parents (DAG structure). The latter case has not been considered before and we show how the HMC and HSC approaches can be modified to support this setting. We compare the three approaches on 24 yeast data sets using as classification schemes MIPS's FunCat (tree structure) and the Gene Ontology (DAG structure). We show that HMC trees outperform HSC and SC trees along three dimensions: predictive accuracy, model size, and induction time. We conclude that HMC trees should definitely be considered in HMC tasks where interpretable models are desired. <s> BIB005 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> This paper presents a systematic analysis of twenty four performance measures used in the complete spectrum of Machine Learning classification tasks, i.e., binary, multi-class, multi-labelled, and hierarchical. 
For each classification task, the study relates a set of changes in a confusion matrix to specific characteristics of data. Then the analysis concentrates on the type of changes to a confusion matrix that do not change a measure, therefore, preserve a classifier's evaluation (measure invariance). The result is the measure invariance taxonomy with respect to all relevant label distribution changes in a classification problem. This formal analysis is supported by examples of applications where invariance properties of measures lead to a more reliable evaluation of classifiers. Text classification supplements the discussion with several case studies. <s> BIB006 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> The widely known binary relevance method for multi-label classification, which considers each label as an independent binary problem, has been sidelined in the literature due to the perceived inadequacy of its label-independence assumption. Instead, most current methods invest considerable complexity to model interdependencies between labels. This paper shows that binary relevance-based methods have much to offer, especially in terms of scalability to large datasets. We exemplify this with a novel chaining method that can model label correlations while maintaining acceptable computational complexity. Empirical evaluation over a broad range of multi-label datasets with a variety of evaluation metrics demonstrates the competitiveness of our chaining method against related and state-of-the-art methods, both in terms of predictive performance and time complexity. <s> BIB007 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> We explore the task of automatic classification of texts by the emotions expressed. Our novel method arranges neutrality, polarity and emotions hierarchically. We test the method on two datasets and show that it outperforms the corresponding "flat" approach, which does not take into account the hierarchical information. The highly imbalanced structure of most of the datasets in this area, particularly the two datasets with which we worked, has a dramatic effect on the performance of classification. The hierarchical approach helps alleviate the effect. <s> BIB008 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> Many real-world applications involve multi-label classification, in which the labels are organized in the form of a tree or directed acyclic graph (DAG). However, current research efforts typically ignore the label dependencies or can only exploit the dependencies in tree-structured hierarchies. In this paper, we present a novel hierarchical multi-label classification algorithm which can be used on both tree- and DAG-structured hierarchies. The key idea is to formulate the search for the optimal consistent multi-label as the finding of the best subgraph in a tree/DAG. Using a simple greedy strategy, the proposed algorithm is computationally efficient, easy to implement, does not suffer from the problem of insufficient/skewed training data in classifier training, and can be readily used on large hierarchies. Theoretical results guarantee the optimality of the obtained solution. Experiments are performed on a large number of functional genomics data sets. 
The proposed method consistently outperforms the state-of-the-art method on both tree- and DAG-structured hierarchies. <s> BIB009 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> In this paper we present the PHOCS-2 algorithm, which extracts a “Predicted Hierarchy Of ClassifierS”. The extracted hierarchy helps us to enhance performance of flat classification. Nodes in the hierarchy contain classifiers. Each intermediate node corresponds to a set of classes and each leaf node corresponds to a single class. In the PHOCS-2 we make an estimation for each node and achieve more precise computation of false positives, true positives and false negatives. Stopping criteria are based on the results of the flat classification. The proposed algorithm is validated against nine datasets. <s> BIB010 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> Comprehensive Coverage of the Entire Area of Classification. Research on the problem of classification tends to be fragmented across such areas as pattern recognition, database, data mining, and machine learning. Addressing the work of these different communities in a unified way, Data Classification: Algorithms and Applications explores the underlying algorithms of classification as well as applications of classification in a variety of problem domains, including text, multimedia, social network, and biological data. This comprehensive book focuses on three primary aspects of data classification: Methods-The book first describes common techniques used for classification, including probabilistic methods, decision trees, rule-based methods, instance-based methods, support vector machine methods, and neural networks. Domains-The book then examines specific methods used for data domains such as multimedia, text, time-series, network, discrete sequence, and uncertain data. It also covers large data sets and data streams due to the recent importance of the big data paradigm. Variations-The book concludes with insight on variations of the classification process. It discusses ensembles, rare-class learning, distance function learning, active learning, visual learning, transfer learning, and semi-supervised learning as well as evaluation aspects of classifiers. <s> BIB011 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> We address the task of hierarchical multi-label classification (HMC). HMC is a task of structured output prediction where the classes are organized into a hierarchy and an instance may belong to multiple classes. In many problems, such as gene function prediction or prediction of ecological community structure, classes inherently follow these constraints. The potential for application of HMC was recognized by many researchers and several such methods were proposed and demonstrated to achieve good predictive performances in the past. However, there is no clear understanding of when it is favorable to consider such relationships (hierarchical and multi-label) among classes, and when this presents an unnecessary burden for classification methods. To this end, we perform a detailed comparative study over 8 datasets that have HMC properties. We investigate two important influences in HMC: the multiple labels per example and the information about the hierarchy.
More specifically, we consider four machine learning tasks: multi-label classification, hierarchical multi-label classification, single-label classification and hierarchical single-label classification. To construct the predictive models, we use predictive clustering trees (a generalized form of decision trees), which are able to tackle each of the modelling tasks listed. Moreover, we investigate whether the influence of the hierarchy and the multiple labels carries over for ensemble models. For each of the tasks, we construct a single tree and two ensembles (random forest and bagging). The results reveal that the hierarchy and the multiple labels do help to obtain a better single tree model, while this is not preserved for the ensemble models. <s> BIB012 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> Multi-label learning studies the problem where each example is represented by a single instance while associated with a set of labels simultaneously. During the past decade, significant amount of progresses have been made toward this emerging machine learning paradigm. This paper aims to provide a timely review on this area with emphasis on state-of-the-art multi-label learning algorithms. Firstly, fundamentals on multi-label learning including formal definition and evaluation metrics are given. Secondly and primarily, eight representative multi-label learning algorithms are scrutinized under common notations with relevant analyses and discussions. Thirdly, several related learning settings are briefly summarized. As a conclusion, online resources and open research problems on multi-label learning are outlined for reference purposes. <s> BIB013 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> In hierarchical classification, the prediction paths may be required to always end at leaf nodes. This is called mandatory leaf node prediction (MLNP) and is particularly useful when the leaf nodes have much stronger semantic meaning than the internal nodes. However, while there have been a lot of MLNP methods in hierarchical multiclass classification, performing MLNP in hierarchical multilabel classification is much more difficult. In this paper, we propose a novel MLNP algorithm that (i) considers the global hierarchy structure; and (ii) can be used on hierarchies of both trees and DAGs. We show that one can efficiently maximize the joint posterior probability of all the node labels by a simple greedy algorithm. Moreover, this can be further extended to the minimization of the expected symmetric loss. Experiments are performed on a number of real-world data sets with tree- and DAG-structured label hierarchies. The proposed method consistently outperforms other hierarchical and flat multilabel classification methods. <s> BIB014 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> Hierarchical multi-label classification assigns a document to multiple hierarchical classes. In this paper we focus on hierarchical multi-label classification of social text streams. Concept drift, complicated relations among classes, and the limited length of documents in social text streams make this a challenging problem. 
Our approach includes three core ingredients: short document expansion, time-aware topic tracking, and chunk-based structural learning. We extend each short document in social text streams to a more comprehensive representation via state-of-the-art entity linking and sentence ranking strategies. From documents extended in this manner, we infer dynamic probabilistic distributions over topics by dividing topics into dynamic "global" topics and "local" topics. For the third and final phase we propose a chunk-based structural optimization strategy to classify each document into multiple classes. Extensive experiments conducted on a large real-world dataset show the effectiveness of our proposed method for hierarchical multi-label classification of social text streams. <s> BIB015 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> Multilabel learning has become a relevant learning paradigm in the past years due to the increasing number of fields where it can be applied and also to the emerging number of techniques that are being developed. This article presents an up-to-date tutorial about multilabel learning that introduces the paradigm and describes the main contributions developed. Evaluation measures, fields of application, trending topics, and resources are also presented. <s> BIB016 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.4. Types of Classification Tasks <s> This book offers a comprehensive review of multilabel techniques widely used to classify and label texts, pictures, videos and music in the Internet. A deep review of the specialized literature on the field includes the available software needed to work with this kind of data. It provides the user with the software tools needed to deal with multilabel data, as well as step by step instruction on how to use them. The main topics covered are: The special characteristics of multi-labeled data and the metrics available to measure them. The importance of taking advantage of label correlations to improve the results. The different approaches followed to face multi-label classification. The preprocessing techniques applicable to multi-label datasets. The available software tools to work with multi-label data. This book is beneficial for professionals and researchers in a variety of fieldsbecause of the wide range of potential applications for multilabel classification. Besides its multiple applications to classify different types of online information, it is also useful in many other areas, such as genomics and biology. No previous knowledge about the subject is required. The book introduces all the needed concepts to understand multilabel data characterization, treatment and evaluation. <s> BIB017
Classification in M&DL is a predictive task, which aims to learn from existing labeled data and predict the label for new data BIB011 . The labels representing classes or categories are finite and discrete (otherwise the task would be regression instead of classification) . In supervised/semi-supervised ML (Appendixes A.2.1 and A.2.3), classification tasks include the following types BIB017 BIB006 : binary, multi-class, multi-label, and hierarchical classifications. See Figure A1. Appendix A.4.2. Multi-Class Classification Multi-class classification (also called multiclass classification or multinomial classification) refers to the task of classifying instances into one and only one of a set of (more than two) pre-defined and mutually exclusive classes BIB017 (e.g., adding a "neutral" class to the "positive" and "negative" in sentiment analysis). Multi-class classification can be seen as a generalization of binary classification (Appendix A.4.1). Many multi-class classification algorithms rely on binarization , a method that iteratively trains a binary classifier for each class against the others, following a one-vs-all (OVA) approach (also called one-against-all (OAA) or one-vs-rest (OVR)), or for each pair of classes, using a one-vs-one (OVO) technique (also called one-against-one (OAO)) [143] . A comparison between OAO and OAA for handwriting recognition with SVMs can be found in BIB003 . Appendix A.4.3. Multi-Label Classification Both binary and multi-class classification are "single-label" methods (thus, binary/multi-class classification is also called single-label classification in the literature BIB012 ), where each instance is associated with only a single class label (see Figure A1a,b for an illustration). By contrast, multi-label classification (also multilabel classification) produces a labeled data set where each instance is associated with a vector of output values BIB017 BIB007 BIB013 BIB016 , instead of only one value. The length of this vector is fixed according to the number of different, pre-defined, and not mutually exclusive labels in the data set. Each element of the vector is a binary value, indicating whether the corresponding label is true for the sample or not. Several labels can be active simultaneously. Each distinct combination of labels is known as a labelset BIB017 . Figure A1c shows one of the most common multi-label applications, image labeling. The data set has four labels in total, and each image can be assigned any of them, or even all of them at once if all four concepts corresponding to the labels appear in the image. Multi-label classification has its roots as a solution for tagging documents with several but not mutually exclusive categories (e.g., a piece of text might be about any of religion, politics, finance, and education at the same time, or none of these). Multi-label classification is currently applied in many fields, most of them related to automatic labeling of social media resources such as images, music, video, news, and blog posts BIB017 . Appendix A.4.4. Hierarchical Classification Hierarchical classification, as the name implies, differs from the three types discussed above (Appendixes A.4.1-A.4.3), which all consider each class to be at the same level, called flat classification (flat here means non-hierarchical BIB008 ). For hierarchical classification, classes are defined at multiple levels and are organized in hierarchies BIB004 , as illustrated in Figure A1d. The hierarchy is predefined and cannot be changed during classification.
The categories are partially ordered, usually from more generic to more specific BIB008 . In hierarchical classification, the output labels reside on a tree or directed acyclic graph (DAG) structured hierarchy BIB014 BIB005 BIB009 . Silla and Freitas provide a survey of hierarchical classification across different application domains. Many ML classification algorithms are flat: they simply ignore the label structure and treat the labels as a loose set. By contrast, hierarchical classification algorithms utilize the hierarchical relationships between labels in making predictions; they can often predict better than flat approaches BIB008 BIB014 . Ghazi et al. BIB008 explored text classification based on emotions expressed in the text. Their method organized neutrality, polarity, and emotions hierarchically. The authors tested their method on two datasets and showed that it outperforms the corresponding "flat" approach. However, Sapozhnikov and Ulanov BIB010 pointed out that in some cases classification performance cannot be enhanced using a hierarchy of labels. Some authors showed that flat classification outperforms a hierarchical one in the presence of a large number of labels (see later in this section for a further discussion of a systematic comparison between hierarchical and flat classifications). Hierarchical classification combined with single-label classification (Appendix A.4.3) is called hierarchical single-label classification (HSC) in the literature BIB012 . Vailaya et al. BIB001 provided an early example of hierarchical classification combined with binary classification (Appendix A.4.1). The authors employed binary Bayesian classifiers to perform hierarchical classification of vacation images. The results of their experiments showed that high-level concepts can be detected from images if each image can be correctly classified into pre-defined categories. Hierarchical classification has also been integrated with multi-class classification (Appendix A.4.2); see BIB002 for examples. Kowsari et al. presented a new approach to hierarchical multi-class text classification, in which the authors employed stacks of DL architectures to provide specialized understanding at each level of the text (document) hierarchy. Their experiment ran on a data set of documents from the Web of Science, and the authors employed a hierarchy of two levels: level-1 (which they also called the parent level) contains classes such as "Computer Science" and "Medical Sciences", and at level-2 (which they also called the child level) the parent class "Computer Science" has sub-classes such as "Computer Graphics" and "Machine Learning". Their results showed that combinations of RNNs at the higher level (i.e., level-1 or the parent level in their experiment) and CNNs at the lower level (i.e., level-2 or the child level) achieve much better and more consistent performance than conventional approaches using naive Bayes or SVM. Their results also showed that DL methods can improve document classification performance, and that they can extend methods that only considered the multi-class problem and thus classify documents within a hierarchy with better performance. Hierarchical classification has also been integrated with multi-label classification (Appendix A.4.3), called hierarchical multi-label classification (HMC) in the literature BIB012 BIB015 .
HMC is a variant of classification where the pre-defined classes are organized in a hierarchy and each instance may belong to multiple classes simultaneously BIB012 BIB005 . Ren et al. BIB015 conducted extensive experiments on a large real-world data set, and their results showed the effectiveness of their method for HMC of social text streams. HMC has received attention because many real-world classification scenarios are multi-label and the labels are normally hierarchical in nature. But research has not yet established when it is proper to consider such relationships (hierarchical and multi-label) among classes, and when this presents an unnecessary burden for classification methods. To address this problem, Levatic et al. BIB012 conducted a comparative study over 8 data sets that have HMC properties. The authors investigated two important influences in HMC: multiple labels per example and information about the hierarchy. Specifically, Levatic et al. considered four ML classification tasks: multi-label classification (Appendix A.4.3), HMC, single-label classification (Appendix A.4.3), and HSC. The authors concluded that the inclusion of hierarchical information in the model construction phase for single trees improves the predictive performance, whether using HMC trees or the HSC tree architecture. HMC trees should be used on domains with a well-populated class hierarchy (L > 2), while the HSC tree architecture performs better if the number of labels per example is closer to one. Appendix A.4.5. Evaluation Metrics for Classification Tasks Different types of classification tasks need different evaluation metrics. Sokolova and Lapalme BIB006 systematically analyzed and summarized twenty-four performance measures used in ML classification tasks (i.e., binary, multi-class, multi-label, and hierarchical) in tables (with formulas and concise descriptions of evaluation focus). Their formal analysis was supported by examples of applications where invariance properties of measures lead to a more reliable evaluation of classifiers (Appendix A.3).
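To make the flat multi-label setting and the label-based measures discussed above concrete, the sketch below trains a binary relevance model (one independent one-vs-rest binary classifier per label, the baseline that classifier chains BIB007 improve upon) and evaluates it. The choice of scikit-learn, the synthetic data, and all parameter values are our own illustrative assumptions, not prescriptions from the surveyed works.

```python
# Minimal sketch: flat multi-label classification via binary relevance,
# evaluated with label-based metrics (assumes scikit-learn is installed).
from sklearn.datasets import make_multilabel_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, hamming_loss

# Each row of Y is a binary label vector (a "labelset"): several of the
# four non-mutually-exclusive labels may be active at once.
X, Y = make_multilabel_classification(n_samples=500, n_features=20,
                                      n_classes=4, n_labels=2,
                                      random_state=42)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3,
                                          random_state=42)

# Binary relevance: one one-vs-rest classifier per label; labels are
# treated independently (label correlations and hierarchy are ignored).
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X_tr, Y_tr)
Y_pred = clf.predict(X_te)

print("Hamming loss:", hamming_loss(Y_te, Y_pred))  # fraction of wrong labels
print("Micro-F1:", f1_score(Y_te, Y_pred, average="micro"))
print("Macro-F1:", f1_score(Y_te, Y_pred, average="macro"))
```

Because binary relevance ignores label correlations and any label hierarchy, it is exactly the kind of flat baseline that classifier chains BIB007 and the hierarchy-aware HMC methods discussed above aim to improve upon.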
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.5. Text and Image Classifications <s> The automated categorization (or classification) of texts into predefined categories has witnessed a booming interest in the last 10 years, due to the increased availability of documents in digital form and the ensuing need to organize them. In the research community the dominant approach to this problem is based on machine learning techniques: a general inductive process automatically builds a classifier by learning, from a set of preclassified documents, the characteristics of the categories. The advantages of this approach over the knowledge engineering approach (consisting in the manual definition of a classifier by domain experts) are a very good effectiveness, considerable savings in terms of expert labor power, and straightforward portability to different domains. This survey discusses the main approaches to text categorization that fall within the machine learning paradigm. We will discuss in detail issues pertaining to three different problems, namely, document representation, classifier construction, and classifier evaluation. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.5. Text and Image Classifications <s> Active learning is a machine learning technique that selects the most informative samples for labeling and uses them as training data. It has been widely explored in multimedia research community for its capability of reducing human annotation effort. In this article, we provide a survey on the efforts of leveraging active learning in multimedia annotation and retrieval. We mainly focus on two application domains: image/video annotation and content-based image retrieval. We first briefly introduce the principle of active learning and then we analyze the sample selection criteria. We categorize the existing sample selection strategies used in multimedia annotation and retrieval into five criteria: risk reduction, uncertainty, diversity, density and relevance. We then introduce several classification models used in active learning-based multimedia annotation and retrieval, including semi-supervised learning, multilabel learning and multiple instance learning. We also provide a discussion on several future trends in this research direction. In particular, we discuss cost analysis of human annotation and large-scale interactive multimedia annotation. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.5. Text and Image Classifications <s> Performing exhaustive searches over a large number of text documents can be tedious, since it is very hard to formulate search queries or define filter criteria that capture an analyst's information need adequately. Classification through machine learning has the potential to improve search and filter tasks encompassing either complex or very specific information needs, individually. Unfortunately, analysts who are knowledgeable in their field are typically not machine learning specialists. Most classification methods, however, require a certain expertise regarding their parametrization to achieve good results. Supervised machine learning algorithms, in contrast, rely on labeled data, which can be provided by analysts. 
However, the effort for labeling can be very high, which shifts the problem from composing complex queries or defining accurate filters to another laborious task, in addition to the need for judging the trained classifier's quality. We therefore compare three approaches for interactive classifier training in a user study. All of the approaches are potential candidates for the integration into a larger retrieval system. They incorporate active learning to various degrees in order to reduce the labeling effort as well as to increase effectiveness. Two of them encompass interactive visualization for letting users explore the status of the classifier in context of the labeled documents, as well as for judging the quality of the classifier in iterative feedback loops. We see our work as a step towards introducing user controlled classification methods in addition to text search and filtering for increasing recall in analytics scenarios involving large corpora. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.5. Text and Image Classifications <s> Rapid crisis response requires real-time analysis of messages. After a disaster happens, volunteers attempt to classify tweets to determine needs, e.g., supplies, infrastructure damage, etc. Given labeled data, supervised machine learning can help classify these messages. Scarcity of labeled data causes poor performance in machine training. Can we reuse old tweets to train classifiers? How can we choose labeled tweets for training? Specifically, we study the usefulness of labeled data of past events. Do labeled tweets in different language help? We observe the performance of our classifiers trained using different combinations of training sets obtained from past disasters. We perform extensive experimentation on real crisis datasets and show that the past labels are useful when both source and target events are of the same type (e.g. both earthquakes). For similar languages (e.g., Italian and Spanish), cross-language domain adaptation was useful, however, when for different languages (e.g., Italian and English), the performance decreased. <s> BIB004
Text classification and image classification are two important applications of classification tasks in ML (Appendix A.4). Image classification is the task of classifying images into pre-defined class names (i.e., labels). Image classification can be applied to many real-world problems, for example, retrieval of all images that contain (damaged) roads. A survey of multimedia (i.e., images and videos) annotation and retrieval using active learning (Section 3.1) can be found in BIB002 . A review of deep learning algorithms in computer vision for tasks such as image classification and image retrieval can be found in . Text classification (also called text categorization), analogous to image classification, is the task of classifying text into pre-defined categories. Text classification in ML is a fundamental step in making large repositories of unstructured text searchable and has important applications in the real world BIB003 . For example, automatically tagging social media messages during natural disasters by topic can facilitate information retrieval for crisis management BIB004 . Text classification is also closely related to standard natural language processing (NLP) problems such as named entity recognition (NER), in which words are classified into categories such as person, location, and organization. Some of the best methods to accomplish this task are ML based (e.g., Stanford NER [221, 222] ). A comprehensive review of text classification methods and results can be found in BIB001 , including evaluation of text classifiers, particularly measures of text categorization effectiveness. Significance tests in the evaluation of text classification methods can be found in .
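To ground the discussion, here is a minimal supervised text classification sketch in the spirit described above: documents are turned into TF-IDF bag-of-words vectors and a linear classifier is trained on them. The library (scikit-learn), the toy corpus, and the topic labels are our own illustrative assumptions; real applications such as tagging disaster-related tweets would use thousands of labeled documents.

```python
# Minimal sketch: text classification with TF-IDF features and a linear SVM.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Hypothetical labeled corpus (e.g., crisis-related messages by topic).
train_texts = ["bridge collapsed on the main road",
               "road blocked by flood water",
               "need food and drinking water",
               "shelter supplies running low"]
train_labels = ["infrastructure", "infrastructure",
                "supplies", "supplies"]

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(train_texts, train_labels)

# The unseen message shares vocabulary ("road", "bridge") with the
# infrastructure examples, so it should receive that label.
print(model.predict(["road damaged near the bridge"]))
```

The same pipeline shape applies to image classification, with the TF-IDF step replaced by pixel- or CNN-based feature extraction.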
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.6. Word Embedding <s> The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. ::: ::: An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible. <s> BIB001 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.6. Word Embedding <s> We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities. <s> BIB002 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.6. Word Embedding <s> Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition. <s> BIB003 </s> Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review <s> Appendix A.6. Word Embedding <s> Context-predicting models (more commonly known as embeddings or neural language models) are the new kids on the distributional semantics block. Despite the buzz surrounding these models, the literature is still lacking a systematic comparison of the predictive models with classic, count-vector-based distributional semantic approaches. 
In this paper, we perform such an extensive evaluation, on a wide range of lexical semantics tasks and across many parameter settings. The results, to our own surprise, show that the buzz is fully justified, as the context-predicting models obtain a thorough and resounding victory against their count-based counterparts. <s> BIB004
We have introduced text and image classification above (Appendix A.5). When using DL algorithms for text classification and image classification, one of the big technical differences is that images have matrix representations and thus can be directly fed into deep neural nets. For text data, by contrast, a translation into word embeddings is needed. In NLP and DL, a word embedding maps words or phrases from the vocabulary to vectors of real numbers that represent the semantic/syntactic information of words in a way that computers can understand. Once word embeddings have been trained, we can use them to obtain relations such as similarities between words. Word2Vec BIB001 BIB002 and GloVe (Global Vectors for word representation) BIB003 are two popular word embedding algorithms used to construct vector representations for words. Word2Vec "vectorizes" words: it is a two-layer neural network that processes text. Its input is a text corpus and its output is a vocabulary in which each item has a vector attached to it, which can be fed into a deep neural net or simply queried to detect relationships between words. While Word2Vec is not a deep neural network, it turns text into a numerical form that deep nets can understand, so we can apply powerful mathematical operations to words to detect semantic similarities between them. Similar to Word2Vec, GloVe is an unsupervised learning algorithm (Appendix A.2.2) used to compute vector representations for words. Training is performed on aggregated global word-word co-occurrence statistics from an input corpus, and the resulting representations showcase linear substructures of the word vector space. The main difference between Word2Vec and GloVe is that the former is a "predictive" model, whereas the latter is a "count-based" model BIB004 . If the hyper-parameters of Word2Vec and GloVe are well controlled, the embeddings generated by the two methods perform very similarly in NLP tasks. One advantage of GloVe over Word2Vec is that its implementation is easier to parallelize BIB003 , which makes it easier to train over large volumes of data on GPUs.
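As a concrete illustration of the Word2Vec workflow just described, the sketch below trains skip-gram embeddings with the gensim library; gensim is our own choice here (the cited papers do not prescribe it), and the snippet assumes gensim 4.x, where the dimensionality parameter is named vector_size. With such a tiny toy corpus the learned similarities are only illustrative; meaningful embeddings require corpora with millions of tokens.

```python
# Minimal sketch: training skip-gram word embeddings with gensim.
from gensim.models import Word2Vec

# Toy tokenized corpus; real training data would be far larger.
sentences = [["flood", "damaged", "the", "road"],
             ["earthquake", "damaged", "the", "bridge"],
             ["volunteers", "delivered", "food", "and", "water"],
             ["volunteers", "delivered", "medical", "supplies"]]

model = Word2Vec(sentences, vector_size=50, window=3,
                 min_count=1, sg=1, seed=1)  # sg=1 selects skip-gram

vec = model.wv["flood"]  # each word is now a dense vector of shape (50,)
print(model.wv.similarity("flood", "earthquake"))
print(model.wv.most_similar("volunteers", topn=2))
```

Once trained (or loaded from pre-trained Word2Vec/GloVe files), such an embedding matrix is typically used to initialize the input layer of the deep neural network that performs the actual text classification.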
Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> INTRODUCTION <s> This paper describes the effects of using a word processor on the creative writing of a small group of children with learning disabilities. Each week the children wrote one word-processed and one handwritten story. The effects of using a word processor seemed to be influenced by the particular problems the children were experiencing with written work. For the children with severe spelling problems, using a word processor seemed to result in fewer spelling errors, while for the children who were still predominantly concerned with the mechanics of the writing task, using a word processor seemed to result in longer stories. <s> BIB001 </s> Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> INTRODUCTION <s> Abstract The ability to read is one of the main skills of a human being. However, some of us have reading difficulties, regardless of social status, level of intelligence or education. This disorder is the main characteristic of dyslexia and is maintained throughout life, requiring early and specialized intervention. Dyslexia is defined as a learning disturbance in the area of reading, writing and spelling. Although the numbers of prevalence rely heavily on the type of investigation conducted, several studies indicate that up to 17% of the world population is dyslexic, and that men have greater prevalence. In this work we will address the use of assistive mobile applications for dyslexia by analyzing possible solutions and proposing a prototype of a mobile application that can be used by dyslexic and whilst giving feedback both to the dyslexic him/herself and to the assisting technician or teacher. The implemented prototype focuses the Portuguese language and was tested with Portuguese students with ages between 10 and 12 years old. Preliminary results show that the proposed gamified set of activities, allow dyslexics to improve multisensory perception, constituting an added value facilitator of adaptiveness and learning. <s> BIB002
Dyslexia is a hidden learning disorder affecting reading, spelling and written language, and sometimes number work. It is a learning disability that cannot be completely treated and has negative consequences for dyslectics' lives, making them more complicated . Learning difficulties caused by dyslexia often have a negative impact on the way dyslectics think, behave and live. Statistics have shown that approximately 70-80% of people with reading problems are probably dyslectic, and one out of five students has a language-based learning disability . Research has shown that dyslexia is a cognitive disorder that deeply affects dyslectics' daily routine, often isolating them from the community. It is very common for a dyslectic person to complain that (s)he is not able to stay focused on a specific task, or to recall tasks, orders, messages, routes or even their daily schedule . Furthermore, it is important to point out that research supports a relation between dyslexia and the type of language. A language can be either opaque (e.g. the English, Danish and French languages) or transparent (e.g. the Greek, Italian and Spanish languages); this difference affects the level of a language's complexity and has an impact on dyslectics' reading and writing performance BIB002 . Studies have also shown that assistive technology contributes significantly to the improvement of dyslectics' cognitive skills BIB002 , BIB001 , . Technology is an alternative and modern way of helping people with dyslexia improve their skills in the reading, writing, memory, organization or numeracy conceptual areas. Technology may not be able to treat dyslexia yet, but it is able to facilitate dyslectics by enhancing their motivation for improvement [8] , . In particular, the Human-Computer Interaction (HCI) field can support this effort by designing systems that build a dyslexia-friendly environment. After a systematic literature review on the interaction design of systems and existing software applications supporting dyslectic users, we realized that studies related to the field of dyslexia are very limited, even though dyslexia is a cognitive disorder with strong impacts on dyslectics' lives. With this study, our goal is to contribute to future research on developing designs for software applications addressing dyslectic users.
Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> Related Work on IxD Guidelines and Parameters <s> In this paper, we present an exploratory study of the web navigation experiences of dyslexic users. Findings indicate that dyslexics exhibit distinctive web navigation behaviour and preferences. We believe that the outcomes of this study add to our understanding of the particular needs of this web user population and have implications for the design of effective navigation structures. <s> BIB001 </s> Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> Related Work on IxD Guidelines and Parameters <s> Compared to the online interaction behavior of other users, little is known about the difficulties dyslexic Web users encounter online. This paper reviews existing literature at the intersection of dyslexia and accessibility research to determine what useful knowledge exists regarding this important and relatively large group of users. This review uncovers that, although there are few published usability tests with dyslexic users, there is a considerable body of knowledge on dyslexia as well as many design guidelines for authoring dyslexic-accessible interfaces. Through a comparison of existing accessibility guidelines for dyslexic and non-dyslexic users and discussion of the plain language movement, it is argued that dyslexic-accessible practices may redress difficulties encountered by all Internet users. This conclusion suggests that usability testing yielding a clearer profile of the dyslexic user would further inform the practice of universal design, but also that enough knowledge is already available to allow doing more to increase accessibility for dyslexic Internet users. <s> BIB002 </s> Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> Related Work on IxD Guidelines and Parameters <s> This paper presents an empirical study on problems encountered by users with dyslexia when using websites. The study was performed by a user evaluation of 16 websites by a panel of 13 participants with dyslexia, each participant evaluating 10 websites. The results presented in the paper are based on 693 instances of accessibility and usability problems. Most frequent problems were related to navigation issues, problems with presentation and organisation of information, lack or misfunctioning of specific funtionality in websites, and issues with language. <s> BIB003 </s> Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> Related Work on IxD Guidelines and Parameters <s> In this paper, we offer set of guidelines and a web service that presents Web texts in a more more accessible way to people with dyslexia. The layout guidelines for developing this service are based on a user study with a group of twenty two dyslexic users. The data collected from our study combines qualitative data from interviews and questionnaires and quantitative data from tests carried out using eye tracking. We analyze and compare both kinds of data and present a set of layout guidelines for making the text Web more readable for dyslexic users. To the best of our knowledge, our methodology for defining dyslexic-friendly guidelines and our web service are novel. 
<s> BIB004 </s> Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> Related Work on IxD Guidelines and Parameters <s> In 2012, Wikipedia was the sixth-most visited website on the Internet. Being one of the main repositories of knowledge, students from all over the world consult it. But, around 10% of these students have dyslexia, which impairs their access to text-based websites. How could Wikipedia be presented to be more readable for this target group? In an experiment with 28 participants with dyslexia, we compare reading speed, comprehension, and subjective readability for the font sizes 10, 12, 14, 18, 22, and 26 points, and line spacings 0.8, 1.0, 1.4, and 1.8. The results show that font size has a significant effect on the readability and the understandability of the text, while line spacing does not. On the basis of our results, we recommend using 18-point font size when designing web text for readers with dyslexia. Our results significantly differ from previous recommendations, presumably, because this is the first work to cover a wide range of values and to study them in the context of an actual website. <s> BIB005 </s> Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> Related Work on IxD Guidelines and Parameters <s> People with dyslexia often face difficulties on consuming written content at the Web. This occurs mainly because websites' designs do not consider the barriers faced by them, since dyslexia is not taken into account as often as other functional limitations. Guidelines for designing accessible Web pages are being consolidated and studied. Meanwhile, people with dyslexia face barriers and develop workarounds to overcome these difficulties. This work presents a customization toolbar called Firefixia, especially designed to support people with dyslexia to adapt the presentation of Web content according to their preferences. Firefixia was tested by 4 participants with diagnosed dyslexia. The participants evaluated and provided us feedback regarding the toolbar most/least useful features. From the presented results, one expects to highlight the need for end-user customization features that are easy to access, easy to use, and easy to explore. Participants reported that the most useful customization features are the text size, the text alignment, and the link color. Finally, this work indicates promising directions for end-user customization tools such as Firefixia. <s> BIB006 </s> Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> Related Work on IxD Guidelines and Parameters <s> We present a user study for two different automatic strategies that simplify text content for people with dyslexia. The strategies considered are the standard one (replacing a complex word with the most simpler synonym) and a new one that presents several synonyms for a complex word if the user requests them. We compare texts transformed by both strategies with the original text and to a gold standard manually built. The study was undertook by 96 participants, 47 with dyslexia plus a control group of 49 people without dyslexia. To show device independence, for the new strategy we used three different reading devices. Overall, participants with dyslexia found texts presented with the new strategy significantly more readable and comprehensible. To the best of our knowledge, this is the largest user study of its kind. 
<s> BIB007 </s> Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> Related Work on IxD Guidelines and Parameters <s> Around 10% of the people have dyslexia, a neurological disability that impairs a person's ability to read and write. There is evidence that the presentation of the text has a significant effect on a text's accessibility for people with dyslexia. However, to the best of our knowledge, there are no experiments that objectively measure the impact of the font type on reading performance. In this paper, we present the first experiment that uses eye-tracking to measure the effect of font type on reading speed. Using a within-subject design, 48 subjects with dyslexia read 12 texts with 12 different fonts. Sans serif, monospaced and roman font styles significantly improved the reading performance over serif, proportional and italic fonts. On the basis of our results, we present a set of more accessible fonts for people with dyslexia. <s> BIB008
Otávio et al. investigated Web accessibility issues for users with dyslexia, drawing on related literature about interaction design parameters. A number of related works on interaction design for dyslexia are mentioned in their research; some focus on functionality and others on the user interface. On the one hand, the studies of Freire et al. BIB003 and Al-Wabil et al. BIB001 focused on functionalities that could help dyslectic users improve their performance. Their studies report 693 accessibility and usability problems, related to difficulties in navigation, information architecture, the form of texts, the organization of content, the language, and the amount of information, all of which make it harder for dyslectics to scan a text. Because such difficulties can be distracting for dyslectics, the interaction design of systems for dyslexia has to focus on fulfilling these functionalities. On the other hand, the studies of Rello et al. BIB004 , BIB005 , , Santana et al. BIB006 , Rello & Barbosa , and Rello & Baeza-Yates BIB007 , BIB008 focused on user interface design parameters. The recommended design parameters allow users to highlight the content of texts and adjust the size and type of fonts, the alignment of a text, the spacing of characters, the fore- and background colours, the length of texts, and their borders. Additionally, there are suggestions that could improve dyslectics' reading skills: Rello & Baeza-Yates recommend the Helvetica, Courier, Arial, Verdana and Computer Modern Unicode font types as the best font types for dyslectic users BIB007 , BIB008 . Jacob McCarthy et al. [21] , BIB002 included in their study a literature survey on interaction design for dyslectic users, which resulted in a number of parameters focused on the user interface as well. Their study mentions features that allow dyslectic users to adjust the size of a text, and design parameters that call for short sentences, use of pictures, dark backgrounds, and Sans Serif fonts of 12pt or larger. These recommendations are an overview of other researchers' studies , , which McCarthy compiled.
Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> Interaction Design guidelines and parameters <s> This paper outlines and explains the guideline needed to design an effective interaction design (IxD) for dyslexic children’s reading application. The guideline is developed based on theories that underly dyslexia and its effects towards reading, with emphasis given to the visual related theories and phonological deficit theory and core-affect theory. The needs of a dyslexic child to read properly and correctly with understanding of the related theories inspires the development of this guideline as it is aimed to aid the process of learning to read by facilitating them with useful design. Tested on a number of dyslexic children, the design seems to reduce their memory load for this particular task and thus reduce their difficulties in reading. Hence the role of an interaction designer is needed to answer the whats and hows and to design an interactive product (in this case – reading applications) to help dyslexic children to read. <s> BIB001 </s> Literature Survey on Interaction Design and Existing Software Applications for Dyslectic Users <s> Interaction Design guidelines and parameters <s> Part I: Essentials of designing interactive systems 1. Designing interactive systems: A fusion of skills 2. PACT: A framework for designing interactive systems 3. The process of human-centred interactive systems design 4. Usability 5. Experience design 6. The Home Information Centre (HIC): A case study in designing interactive systems Part II: Techniques for designing interactive systems 7. Understanding 8. Envisionment 9. Design 10. Evaluation 11. Task analysis 12. Visual user interface design 13. Multimodal user interface design Part III: Contexts for designing interactive systems 14. Designing websites 15. Social media 16. Collaborative environments 17. Agents and avatars 18. Ubiquitous computing 19. Mobile computing 20. Wearable computing Part IV: Foundations of designing interactive systems 21. Memory and attention 22. Affect 23. Cognition and action 24. Social interaction 25. 
Perception and navigation <s> BIB002
Research on interaction design guidelines resulted in one design guideline with an emphasis on three design dimensions (Form, Content and Behavior) regarded as highly important for the design of software applications addressing dyslectic users. To be more precise, the interaction design guideline holds that these dimensions and their elements facilitate users who face visual deficits (the Form dimension) or phonological deficits (the Content and Behavior dimensions) due to dyslexia , [28] , BIB002 , BIB001 . Simple and clear layouts with font sizes from 12 to 14 and Sans Serif fonts, as well as features that allow dyslectic users to adjust the font size, style, and colors, or to choose specific combinations of colors and contrasts while avoiding bright colors, have been recommended as supportive to dyslectic users and able to improve their reading performance. Additionally, features that provide explanations and enrich texts with pictures and audio elements make reading tasks more accessible for users with dyslexia. Moving forward, our literature analysis led us to Rello and Barbosa's study on the IxD parameters of software applications for dyslectic users. These interaction design parameters focus on the Form dimension, as visual deficits deeply affect dyslectic users' reading performance. Their study recommends a number of layout-design parameters as appropriate to help dyslectic users improve their reading performance . Specifically:
- Font types/sizes: Arial, Comic Sans, Verdana, Century Gothic, Trebuchet, Sassoon Primary, Times New Roman, Courier, and Dyslexie; sizes of 12 or 14; extra-large letter spacing.
- Brightness/colors: low brightness and low color differences between text and background; light grey as the font color.
- Space/lines/columns: lines of 60-70 characters; clear spacing between letter combinations; line spacing of 1.3, 1.4, 1.5, or 1.5-2; narrow columns should be avoided.
Explaining the Rello and Barbosa text layout parameters: Sans Serif fonts of a size between 12 and 14, low brightness, and light contrasts between the background and font colors have been recommended by their study. Furthermore, lines of 60 to 70 characters maximum, clear spacing between letter combinations, line spacing from 1.3 to 2, and avoidance of narrow columns have been recommended as supportive to dyslectic users and able to improve their reading performance. Based on comparisons among the IxD guidelines/parameters, there are many similarities in (i) the font type and size, (ii) the recommendations about avoiding bright colors and narrow columns, and (iii) the suggested number of characters and line spacing (see Table 2).
Table 3. Suggested IxD generated by Comparisons
Regarding the interaction design guidelines and parameters shown in this study and their relation to the design parameters of the related works, there is clear agreement among them. Both the IxD guidelines/parameters of the literature research and of the related work focus on the user interface and on functionalities that help dyslectic users improve their reading performance. In both the related works and the literature research's IxD guidelines, design parameters have been proposed for developing designs for software applications addressing dyslectic users. Their aim? To facilitate and help dyslectic users improve their reading skills and performance.
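Purely as an illustration, the layout parameters surveyed in this section can be collected into a machine-readable style configuration for a web-based reading interface. The sketch below is our own construction: the parameter values are taken from the guidelines summarized above, while the dictionary, helper function, and CSS class name are hypothetical.

```python
# Minimal sketch: encoding the surveyed dyslexia-friendly layout
# parameters as a style configuration and rendering it as CSS.
DYSLEXIA_FRIENDLY_STYLE = {
    "font-family": "Verdana, Arial, Helvetica, sans-serif",  # Sans Serif fonts
    "font-size": "14pt",           # surveyed range: 12-14pt (18pt in BIB005)
    "line-height": "1.5",          # surveyed range: 1.3-2
    "letter-spacing": "0.05em",    # extra-large letter spacing
    "max-width": "66ch",           # lines of 60-70 characters
    "color": "#555555",            # light grey font color
    "background-color": "#FAF5EB", # low brightness, avoids bright colors
}

def to_css(selector: str, style: dict) -> str:
    """Render a property dictionary as a single CSS rule."""
    body = "\n".join(f"  {prop}: {value};" for prop, value in style.items())
    return f"{selector} {{\n{body}\n}}"

print(to_css(".dyslexia-friendly-text", DYSLEXIA_FRIENDLY_STYLE))
```

Customization toolbars such as Firefixia BIB006 expose exactly this kind of parameter (text size, alignment, colors) so that dyslectic end users can adjust the presentation themselves.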
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Introduction. <s> This paper describes a new technique for implementing educational programming languages using tangible interface technology. It emphasizes the use of inexpensive and durable parts with no embedded electronics or power supplies. Students create programs in offline settings---on their desks or on the floor---and use a portable scanning station to compile their code. We argue that languages created with this approach offer an appealing and practical alternative to text-based and visual languages for classroom use. In this paper we discuss the motivations for our project and describe the design and implementation of two tangible programming languages. We also describe an initial case study with children and outline future research goals. <s> BIB001 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Introduction. <s> This paper argues that the "kindergarten approach to learning" -- characterized by a spiraling cycle of Imagine, Create, Play, Share, Reflect, and back to Imagine -- is ideally suited to the needs of the 21st century, helping learners develop the creative-thinking skills that are critical to success and satisfaction in today's society. The paper discusses strategies for designing new technologies that encourage and support kindergarten-style learning, building on the success of traditional kindergarten materials and activities, but extending to learners of all ages, helping them continue to develop as creative thinkers. <s> BIB002 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Introduction. <s> "Digital fluency" should mean designing, creating, and remixing, not just browsing, chatting, and interacting. <s> BIB003 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Introduction. <s> Learning to program is hard, also because it requires students to deal with abstraction. A program is an abstract construct: most pieces of a program are not concrete, literal values, but they are abstract symbols standing for things or actions in the system they model and control. Thus, when learning to program, novices not only need to learn about the system, but they also need to learn about the programming language. They need to think about the concrete effects in the system their abstract program constructs will cause once the program will execute. This thinking on two levels of abstraction (program and system), and in two dimensions of time (coding and execution) can cause a significant burden. In this short paper we propose to collapse those two levels. We wonder whether it would be possible to devise a programming environment where the program is the system. To do this we need languages that are the system, instead of languages that are about the system. We propose to use tangible languages as a way towards this idea. We briefly present three such languages which we used in the context of an informal learning setting and report our initial lessons learned. <s> BIB004
While programming is often seen as a key element of constructionist 1 approaches (starting from LOGO (Feuerzeig et al., 1970) , a programming language designed to enable learning abstract concepts of disciplines like math, geometry, physics, and potentially all others, by manipulating computational objects ), the research on learning to program through a constructionist strategy is somewhat limited, mostly focusing on how to bring the abstract and formal nature of programming languages into "concrete" or even tangible objects, accessible also to children with limited abstraction power BIB003 BIB001 BIB004 . Notwithstanding this, programming is in some sense intrinsically constructionist, as it always involves the production of an artifact that can be shown and shared. Of course, this does not mean that programming automatically leads to constructivist/constructionist pedagogies: in fact, we see very different approaches, from open project-based learning to much more traditional education through lectures and closed exercises. Specific languages and environments play an important role too: for example, visual programming languages make it easier (by removing the need to deal with unnatural textual syntactic rules) to realize small but meaningful projects, keeping students motivated, and they support a constructionist approach where students are encouraged to develop and share their projects: video games, animated stories, or simulations of simple real-world phenomena. Constructionist ideas are also floating around mainstream programming practice and are even codified in some software engineering approaches: agile methods like eXtreme Programming , for example, suggest several techniques that can easily be connected to the constructionist word of advice about discussing, sharing, and productively collaborating to successfully build knowledge together ; moreover, the incremental and iterative process of creative thinking and learning BIB002 fits well with the agile preference for "responding to change over following a plan" . This process actually originated from observing how the traditional kindergarten approach to learning is ideally suited to learning to think creatively, and it is now called the "creative learning spiral" (Fig. 1) . According to this model, when one learns by creating something (e.g., a computer program) she imagines what she wants to do, creates a project based on this idea, plays with her creation, shares her idea and her creation with others, reflects on the experience and feedback received from others, and all this leads her to imagine new ideas, new functionalities, new improvements for her project, or new projects. The process is iterated many times. This spiral describes an iterative process, highly overlapping with the iterative software development cycle.
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> No technological hurdles: <s> The importance of computer science education in secondary, and even primary school, has been pointed out by many authors. But too often pupils only experience ICT, both at home and at school, and confuse it with computer science. We organized a game-contest, the KangourouofInformatics, with the aim to attract all pupils (not only the talented ones), expose them to the scientific aspects of informatics in a fun way, and convey a correct conception of the discipline. Peculiarities of the game are its focus on team work and on engaging pupils in discovering what lays behind what they experience every day. <s> BIB001 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> No technological hurdles: <s> We describe a teaching activity about word-processors we proposed to a group of 25 pupils in 9th/10th grades of an Italian secondary school. While the pupils had some familiarity with word-processor operations, they had had no formal instruction about the automatic elaboration of formatted texts. The proposed kinesthetic/tactile activities turned out to be a good way for conveying non-trivial abstract computing concepts. <s> BIB002 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> No technological hurdles: <s> Mathematics popularization is an important, creative kind of research, entangled with many other research programs of basic interest -- Mike Fellows ::: ::: This chapter is a history of the Computer Science Unplugged project, and related work on math and computer science popularization that Mike Fellows has been a driving force behind, including MEGA-Mathematics and games design. Mike's mission has been to open up the knowns and unknowns of mathematical science to the public. We explore the genesis of MEGA-Math and "Unplugged" in the early 1990s, and then the sudden growth of interest in Unplugged after the year 2003, including the contributions from many different cultures and its deployment in a large variety of contexts. Woven through this history is the importance of story: that presenting math and computing topics through story-telling and drama can captivate children and adults alike, and provides a whole new level of engagement with what can be perceived as a dry topic. It is also about not paying attention to boundaries -- whether teaching advanced computer science concepts to elementary school children or running a mathematics event in a park. <s> BIB003 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> No technological hurdles: <s> Many students hold incorrect ideas and negative attitudes about computer science (CS). In order to address these difficulties, a series of learning activities called Computer Science Unplugged was developed by Tim Bell and his colleagues. These activities expose young people to central concepts in CS in an entertaining way without requiring a computer. The CS Unplugged activities have become more and more popular among CS educators and several activities are recommended in the ACM K-12 curriculum for elementary schools. CS Unplugged is used worldwide and has been translated into many languages. We examined the effect of the CS Unplugged activities on middle-school students’ ideas about CS and their desire to consider and study it in high school. 
The results indicate that following the activities the ideas of the students on what CS is about were partially improved, but their desire to study CS lessened. In order to provide possible explanations to these results, we analyzed the CS Unplugged activities to determine to what extent the objectives of CS Unplugged were addressed in the activities. In addition, we checked whether the activities were designed according to constructivist principles and whether they were explicitly linked to central concepts in CS. We found that only some of the objectives were addressed in the activities, that the activities do not engage with the students’ prior knowledge and that most of the activities are not explicitly linked to central concepts in CS. We offer suggestions for modifying the CS Unplugged activities so that they will be more likely to achieve their objectives. <s> BIB004 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> No technological hurdles: <s> In this paper we report on our experiments in teaching computer science concepts with a mix of tangible and abstract object manipulations. The goal we set ourselves was to let pupils discover the challenges one has to meet to automatically manipulate formatted text. We worked with a group of 25 secondary-school pupils (9-10th grade), and they were actually able to “invent” the concept of mark-up language. From this experiment we distilled a set of activities which will be replicated in other classes (6th grade) under the guidance of math teachers. <s> BIB005 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> No technological hurdles: <s> In order to introduce informatic concepts to students of Italian secondary schools, we devised a number of interactive workshops conceived for pupils aged 10–17. Each workshop is intended to give pupils the opportunity to explore a computer science topic: investigate it firsthand, make hypotheses that can then be tested in a guided context during the activity, and construct viable mental models. This paper reports about how we designed and conducted these workshops. <s> BIB006
• they allow students (and teachers) to have meaningful experiences related to important CS concepts (like algorithms) without having to wait until they get some technology and programming fluency (Bell and Lodi, to appear). It is important to note that evidence shows unplugged activities should not replace programming activities, but can be helpful to make them more effective. The following two examples, taken from CS Unplugged 3 and ALaDDIn 4 , illustrate typical unplugged approaches to introduce children to programming. In CS Unplugged "Rescue Mission", pupils are given by the teacher a very simple language with only three commands: 1 step forward, 90 degrees left, 90 degrees right. The task is to compose a sequence of instructions to move a robot from one given cell on a grid to another given cell (a minimal interpreter for this language is sketched below). Pupils are divided into groups of three where each one has a role: either programmer, bot, or tester. This division of roles is done to emphasize the fact that programs cannot be adjusted on the fly; they must first be planned, then implemented, then tested and debugged until they work correctly. ALaDDIn "Algomotricity and Mazes" is an activity designed according to a strategy called algomotricity BIB001 BIB002 BIB005 BIB006 , where pupils are exposed to an informatic concept/process through playful activities which involve a mix of tangible and abstract object manipulations; they can investigate it firsthand, make hypotheses that can then be tested in a guided context during the activity, and eventually construct viable mental models. Algomotricity starts "unplugged" BIB003 but ends with a computer-based phase to close the loop with pupils' previous acquaintance with applications BIB004 . "Algomotricity and Mazes" focuses on primitives and control structures. The task is that of verbally guiding a "robot" (a blindfolded person) through a simple path. Working in groups, pupils are requested to propose a very limited set of primitives, each to be written on a sticky note, and to compose them into a program to be executed by the "robot". Also, they have the possibility of exploiting basic control structures (if, repeat-until, repeat-n-times). The conductor may decide to swap some programs and "robots", in order to emphasize the ambiguity of some instructions or the dependency of programs on special features of the "robot" (e.g., step/foot size). In the last phase, students are given computers and a slightly modified version of Scratch. They are requested to write programs that guide a sprite through mazes of increasing complexity, where shape patterns foster the use of loops.
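To make the "Rescue Mission" setting concrete, here is a minimal Python sketch (our own illustration, not part of the original activity material) of an interpreter for the three-command language; the grid coordinates, command names, and starting pose are hypothetical choices made for the example:

HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W as (dx, dy)

def run(program, x=0, y=0, heading=0):
    """Execute a list of commands and return the robot's final pose."""
    for cmd in program:
        if cmd == "step":            # 1 step forward
            dx, dy = HEADINGS[heading]
            x, y = x + dx, y + dy
        elif cmd == "left":          # 90 degrees left
            heading = (heading - 1) % 4
        elif cmd == "right":         # 90 degrees right
            heading = (heading + 1) % 4
        else:
            raise ValueError("unknown command: " + cmd)
    return x, y, heading

# The "programmer" writes the plan, the "bot" executes it mechanically,
# and the "tester" compares the final cell with the goal cell.
print(run(["step", "right", "step", "step", "left", "step"]))  # (2, 2, 0)

Executing such a program only after it has been fully written mirrors the role separation of the activity: plans cannot be adjusted on the fly.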
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Notional Machines <s> This article brings together, summarizes, and comments on several threads of research that have contributed to our understanding of the challenges that novice programmers face when learning about the runtime dynamics of programs and the role of the computer in program execution. More specifically, the review covers the literature on programming misconceptions, the cognitive theory of mental models, constructivist theory of knowledge and learning, phenomenographic research on experiencing programming, and the theory of threshold concepts. These bodies of work are examined in relation to the concept of a “notional machine”—an abstract computer for executing programs of a particular kind. As a whole, the literature points to notional machines as a major challenge in introductory programming education. It is argued that instructors should acknowledge the notional machine as an explicit learning objective and address it in teaching. Teaching within some programming paradigms, such as object-oriented programming, may benefit from using multiple notional machines at different levels of abstraction. Pointers to some promising pedagogical techniques are provided. <s> BIB001 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Notional Machines <s> Comprehension of programming and programs is known to be a difficult task for many beginning students, with many computing courses showing significant drop out and failure rates. In this paper, we present a new notional machine design and implementation to help with understanding of programming and its dynamics for beginning learners. The notional machine offers an abstraction of the physical machine designed for comprehension and learning purposes. We introduce the notional machine and a graphical notation for its representation. We also present Novis, an implementation of a dynamic real-time visualiser of this notional machine, integrated into BlueJ. <s> BIB002
An important intuition for approaching programming from a constructionist perspective is that programs are a join point between our mind and the computer, the interpreter of the formal description of what we have in mind. Thus, programs appeal to our curiosity and ingenuity and are wonderful artifacts to share and discuss with other active minds. Such sharing, however, assumes that the interpreter is knowledge shared among peers. When a group of people programs the same 'machine', a shared semantics is in fact given, but unfortunately people, especially novices, do not necessarily write their programs for the formal interpreter they use, but rather for the notional machine BIB001 BIB002 they actually have in their minds. A notional machine is an abstract computer responsible for executing programs of a particular kind BIB001 , and grasping it means grasping all the general properties of the machine that one is learning to control. The purpose of a notional machine is to explain, to give intuitive meaning to the code a programmer writes. It normally encompasses an idealized version of the interpreter and other aspects of the development and run-time environment; moreover, it should also bring a complementary intuition of what the notional machine cannot do, at least without specific directions from the programmer. Introducing a notional machine to the students is often the initial role of the instructors. Ideally this should be somewhat incremental in complexity, but not all programming languages are suitable for incremental models: in fact, much of the success of visual languages or Lisp dialects in introductory courses is due to the fact that they allow shallow presentations of syntax, thus letting the learners focus on the more relevant parts of their notional machines. An explicit reference to the notional machine can foster meta-cognition and, during teamwork, it can help in identifying misconceptions. But how can the notional machine be made explicit? Tracing of the computational process and visualization of the execution are effective candidate tools (a minimal tracer is sketched below). They allow instructors to make as clear as possible: (i) what novice programmers should expect the notional machine will do and (ii) what it actually does.
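As a concrete illustration of tracing, the following minimal Python sketch (assuming Python as the teaching language; the traced function is a hypothetical classroom example) prints each executed line together with the local variables, offering a crude but explicit window onto the notional machine:

import sys

def trace_locals(frame, event, arg):
    # Report every executed line of the traced code with its local state.
    if event == "line":
        print("line", frame.f_lineno, "locals:", frame.f_locals)
    return trace_locals   # keep tracing inside the called frame

def summation(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

sys.settrace(trace_locals)
summation(3)
sys.settrace(None)

Watching the values of total and i evolve step by step lets learners compare what they expected the notional machine to do with what it actually does.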
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Abstract Programming Patterns <s> We look at the essential thinking skills students need to learn in the introductory computer science course based on object-oriented programming. We create a framework for such a course based on the elementary programming and design patterns. Some of these patterns are known in the pattern community, others enrich the collection. Our goal is to help students focus on mastering reasoning and design skills before the language idiosynchracies muddy the water. <s> BIB001 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Abstract Programming Patterns <s> The use of all variables in 109 novice-level, but expert written, procedural programs were analyzed in order to find a small but still comprehensive set of generic roles that describe the nature of the successive values a variable obtains. This paper gives the results of the analysis: a list of only nine roles that cover 99% of variables, frequencies of the roles, and discovered role changes. <s> BIB002 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Abstract Programming Patterns <s> Roles can be assigned to occurrences of variables in programs according to a small number of patterns of use that are both language- and algorithm-independent. Preliminary studies on explicitly teaching roles of variables to novice students have shown that roles are an excellent pedagogical tool for clarifying the structure and meaning of programs. This paper describes the results of an investigation designed to test the understandability and acceptability of the role concept and of the individual roles as seen by computer science educators. The investigation consisted of a short tutorial on roles, a brief training session on assigning roles to variables, a test evaluating the subjects' ability to assign roles, and a set of open questions concerning their opinions of roles. Roles were identified with 85 accuracy, and in typical uses of variables with 93 accuracy. <s> BIB003
A small number of abstract programming patterns can be applied to a potentially infinite spectrum of specific conditions. This is often a challenge for novices, given that most of the time the discipline is taught by (i) introducing one or more primitive tools (e.g., variables), and (ii) showing some examples highlighting how these tools can be used to solve specific problems. This might give rise to pupils' misconceptions about the above-mentioned tools. The concept of role of variables BIB002 BIB001 has been proposed in order to guide novice programmers from the operational knowledge of a variable as the holder of a mutable value to the ability to identify abstract use cases following a small number of roles (such as those in Fig. 3). Such ability is of great help when tackling the solution of a specific problem, for instance, that of computing the maximal value within a sequence. Indeed, this is a great opportunity for letting pupils realize that this problem is a special case of the more general quest for an optimal value. The latter can be found using a most-wanted holder to be compared with each element of the sequence, containing the highest value seen so far. This method easily fits the search for the maximal as well as the minimal value, and it also efficiently handles less obvious cases such as that of finding the distinct vowels occurring in a sentence. These roles can also be gradually introduced following the hierarchy of Fig. 3, starting from the concept of literal (e.g., an integer value or a string) and building knowledge about one role on top of already understood roles. For selection and iteration as well, there are several standard use patterns that occur over and over again. Selection patterns (Bergin, 1999) and loop patterns have been introduced with the same goal. For instance, to illustrate the idea, the loop and a half pattern is an efficient processing strategy for a sequence of elements whose end can be detected only after at least one element has been read. It uses an infinite loop whose body accesses the next sequence element. If there are no more elements, the loop is escaped through a controlled jump; otherwise some special actions are possibly executed before continuing the iteration. The code snippet shown in Fig. 4 (reconstructed below) shows one of the canonical incarnations of this pattern: the possibly repeated check of a value given as input, detecting and ignoring invalid entries. Selection and loop patterns fit well within a constructionist-based learning path: they might be naturally discovered when critically analyzing software implementations. For instance, the previous loop could be the end point of a reasoning scheme started from the detection of a duplicated line of code in a quick-and-dirty initial implementation.
Fig. 3. Roles of variables, organized in a constructionist-like hierarchy where the predecessor of an arrow is a prerequisite for learning the corresponding successor (source: BIB002 ).
In general, abstract programming patterns are provided in a small number, in order to cover them within a standard introductory computer programming course; moreover, the related concepts are easily grasped by experienced computer science teachers (Ben-Ari and Sajaniemi BIB003 ), thus they can be embedded in already existing curricula with low effort.
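The following Python sketch (our reconstruction of these patterns, not the literal snippet of Fig. 4) shows both the loop and a half pattern, used here for the repeated check of an input value, and the most-wanted holder role in the search for a maximal value; the prompt text and variable names are illustrative choices:

# Loop and a half: the escape point sits in the middle of the loop body.
while True:
    value = int(input("Enter a percentage (0-100): "))
    if 0 <= value <= 100:
        break                           # controlled jump out of the loop
    print("Invalid entry, try again.")  # special action before re-iterating

# Most-wanted holder: 'best' contains the highest value seen so far.
def maximum(seq):
    best = seq[0]                       # role: most-wanted holder
    for item in seq[1:]:                # role: most-recent holder
        if item > best:
            best = item
    return best

Replacing the comparison operator turns the same scheme into a search for the minimal value, making the generality of the pattern tangible.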
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> sometimes a novice programmer "doesn't get" a concept or "gets it wrong" in a way that is not a harmless (or desirable) alternative interpretation. Incorrect and incomplete understandings of programming concepts result in unproductive programming behavior and dysfunctional programs <s> This article brings together, summarizes, and comments on several threads of research that have contributed to our understanding of the challenges that novice programmers face when learning about the runtime dynamics of programs and the role of the computer in program execution. More specifically, the review covers the literature on programming misconceptions, the cognitive theory of mental models, constructivist theory of knowledge and learning, phenomenographic research on experiencing programming, and the theory of threshold concepts. These bodies of work are examined in relation to the concept of a “notional machine”—an abstract computer for executing programs of a particular kind. As a whole, the literature points to notional machines as a major challenge in introductory programming education. It is argued that instructors should acknowledge the notional machine as an explicit learning objective and address it in teaching. Teaching within some programming paradigms, such as object-oriented programming, may benefit from using multiple notional machines at different levels of abstraction. Pointers to some promising pedagogical techniques are provided. <s> BIB001 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> sometimes a novice programmer "doesn't get" a concept or "gets it wrong" in a way that is not a harmless (or desirable) alternative interpretation. Incorrect and incomplete understandings of programming concepts result in unproductive programming behavior and dysfunctional programs <s> Efforts to improve computer science education are underway, and teachers of computer science are challenged in introductory programming courses to help learners develop their understanding of programming and computer science. Identifying and addressing students’ misconceptions is a key part of a computer science teacher's competence. However, relevant research on this topic is not as fully developed in the computer science education field as it is in mathematics and science education. In this article, we first review relevant literature on general definitions of misconceptions and studies about students’ misconceptions and other difficulties in introductory programming. Next, we investigate the factors that contribute to the difficulties. Finally, strategies and tools to address difficulties including misconceptions are discussed. Based on the review of literature, we found that students exhibit various misconceptions and other difficulties in syntactic knowledge, conceptual knowledge, and strategic knowledge. These difficulties experienced by students are related to many factors including unfamiliarity of syntax, natural language, math knowledge, inaccurate mental models, lack of strategies, programming environments, and teachers’ knowledge and instruction. However, many sources of students’ difficulties have connections with students’ prior knowledge. To better understand and address students’ misconceptions and other difficulties, various instructional approaches and tools have been developed. Nevertheless, the dissemination of these approaches and tools has been limited. 
Thus, first, we suggest enhancing the dissemination of existing tools and approaches and investigating their long-term effects. Second, we recommend that computing education research move beyond documenting misconceptions to address the development of students’ (mis)conceptions by integrating conceptual change theories. Third, we believe that developing and enhancing instructors’ pedagogical content knowledge (PCK), including their knowledge of students’ misconceptions and ability to apply effective instructional approaches and tools to address students’ difficulties, is vital to the success of teaching introductory programming. <s> BIB002
According to Clancy, there are two macro-causes of misconceptions: over- or under-generalizing, and a confused computational model. High-level languages provide an abstraction over control and data, making programming simpler and more powerful but, by contrast, hiding details of the executor from the user, who can consequently find some constructs and behaviors mysterious (Clancy, 2004). There is much literature about misconceptions in CS education: below we list some of the most important causes of misconceptions, experienced especially by novices, divided into different areas, drawn mainly from (Clancy, 2004; BIB001 ) and the works they reference. For a complete review see for example BIB002 . A concrete example of over-generalization is sketched below.
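A classic instance of over-generalization (our illustration, in Python) is reading the assignment symbol as a persistent mathematical equation between two variables:

a = 5
b = a        # b receives the current value of a, not a link to a
a = 7
print(b)     # prints 5, while novices who over-generalize '=' often expect 7

Misconceptions of this kind are invisible in the program text and only surface when the learner's prediction is confronted with the actual behavior of the executor.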
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Programming Languages for Learning to Program <s> Most ideas come from previous ideas. The sixties, particularly in the ARPA community, gave rise to a host of notions about “human-computer symbiosis” through interactive time-shared computers, graphics screens and pointing devices. Advanced computer languages were invented to simulate complex systems such as oil refineries and semi-intelligent behavior. The soon to follow paradigm shift of modern personal computing, overlapping window interfaces, and object-oriented design came from seeing the work of the sixties as something more than a “better old thing”. That is, more than a better way: to do mainframe computing; for end-users to invoke functionality; to make data structures more abstract. Instead the promise of exponential growth in computing/$/volume demanded that the sixties be regarded as “ almost a new thing” and to find out what the actual “new things” might be. For example, one would compute with a handheld “Dynabook” in a way that would not be possible on a shared mainframe; millions of potential users meant that the user interface would have to become a learning environment along the lines of Montessori and Bruner; and needs for large scope, reduction in complexity, and end-user literacy would require that data and control structures be done away with in favor of a more biological scheme of protected universal cells interacting only through messages that could mimic any desired behavior. Early Smalltalk was the first complete realization of these new points of view as parented by its many predecessors in hardware, language and user interface design. It became the exemplar of the new computing, in part, because we were actually trying for a qualitative shift in belief structures—a new Kuhnian paradigm in the same spirit as the invention of the printing press—and thus took highly extreme positions which almost forced these new styles to be invented. <s> BIB001 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Programming Languages for Learning to Program <s> ➧1 In the past few decades, computer science has driven innovation across a variety of academic fields and become a robust part of democratic participation and the labor economy. Today’s youth are surrounded with applications of these new technologies that affect how they access and produce information and communicate with friends, family, and educators. Yet, though students often gain skills as “users” of these technologies in schools, too many have been denied opportunities to study computer science and produce new knowledge required to become “creators” of computing innovations. The students who do study computer science courses often represent only a narrow band of students that excludes significant numbers of girls and students of color. Further, for a field that depends on creativity, a homogenous workforce fails to take advantage of those with diverse experiences and world viewpoints that likely foster divergent and fresh thinking. This article will provide an overview of Exploring Computer Science (ECS), a curriculum and program developed to broaden participation in computing for high school students in the Los Angeles Unified School District. This program is framed around a three-pronged approach to reform: curricular development, teacher professional development, and policy work across a variety of educational institutions. 
The focus is to provide the necessary structures and support to schools and teachers that leads to high quality teaching and learning in computer science classrooms. In ECS classrooms, high quality teaching and learning is viewed within the frame of inquiry-based teaching strategies that lead to deep student content learning and engagement. The incorporation of equity-based teaching practices is an essential part of setting up the classroom culture that facilitates inquiry-based learning. As the second largest and one of the most diverse districts in the United States, the Los Angeles Unified School District provides an important context to understand opportunities and obstacles encountered while engaging in institutional K-12 computer science education reform. This article will begin with an account of the educational research that provided key information about the obstacles students encounter in computer science classrooms. Next, we will describe the key elements of the ECS program. Finally, we will highlight several lessons that we have learned that inform the CS 10K campaign (see Jan Cuny’s Critical Perspective “Transforming High School Computing: A Call to Action”, this issue). <s> BIB002 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Programming Languages for Learning to Program <s> COPPER (CustOmizable Puzzle Programming EnviRonment) is a meta-configurable tool for creating coding puzzles on a grid using a blocks-based programming language, similar to puzzles in Code.org's Hour of Code. COPPER has the potential to increase student interest and engagement by allowing a teacher to customize levels for individual classes or students. Teachers can create characters for specialized puzzles by uploading pictures to customize their appearance and using the block-language to design the character's behavior. They can then place these characters onto a grid, and arrange them into a puzzle for their students to solve. A teacher can specify the goal of each coding puzzle, as well as restrict which blocks a student may use, allowing a teacher to gradually introduce programming concepts. For example, an elementary school teacher could highlight concepts from a history lesson by building a customized grid where characters from a historical context navigate around objects relevant to the topic being studied. COPPER uses Google's Blockly framework to eliminate the mental overhead of memorizing textual syntax, allowing students to focus on building computational thinking skills. Block-based languages have been shown to be more effective than text-based languages when teaching programming to first-learners. Combined with customization, COPPER has the potential to lead to higher student interest and comprehension of programming concepts in a customized context. This poster will also summarize results obtained through initial experimentation through collaboration with K-8 teachers and their students. <s> BIB003
From a constructionist viewpoint of learning, programming languages have a major role: they are a key means for sharing artifacts and expressing one's theories of the world. The crucial part is that artifacts can be executed independently of their creator: someone's (coded) mental process can become part of the experience of others, and thus be criticized, improved, or adapted to a new project. In fact, the very notion of constructionism goes back to Papert's experiments with a programming environment (LOGO) designed exactly to let pupils tinker with math and geometry. Does this strategy work even when the learning objective is the programming activity itself? Can a generic programming language be used to give a concrete reification of the computational thinking of a novice programmer? Or do we need something specifically designed for this activity? Alan Kay says that programming languages can be categorized in two classes: "agglutination of features" or "crystallization of style" BIB001 . What is more important for learning effectively in a constructivist way? Features or style? In the last decade, a number of block-based programming tools have been introduced to help students have an easier time when first practicing programming. These tools, often based on web technologies and favored by the increasing availability of smartphones and tablets, opened up new ways of approaching coding. In general, they focus on younger learners, support novices in their first programming steps, can be used in informal learning situations, and provide a visual language which allows students to recognize blocks instead of recalling syntax BIB003 . Many popular efforts for spreading computer science in schools, like BIB002 or the teaching material from Code.org, 5 rely on the use of such environments. In addition, such tools have been adopted in many computing classes all over the world (Meerbaum-Salant, Armoni, and Ben-Ari, 2010).
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Scheme, Racket <s> DrScheme is a programming environment for Scheme. It fully integrates a graphics-enriched editor, a parser for multiple variants of Scheme, a functional read-eval-print loop, and an algebraic printer. The environment is especially useful for students, because it has a tower of syntactically restricted variants of Scheme that are designed to catch typical student mistakes and explain them in terms the students understand. The environment is also useful for professional programmers, due to its sophisticated programming tools, such as the static debugger, and its advanced language features, such as units and mixins. Beyond the ordinary programming environment tools, DrScheme provides an algebraic stepper, a context-sensitive syntax checker, and a static debugger. The stepper reduces Scheme programs to values, according to the reduction semantics of Scheme. It is useful for explaining the semantics of linguistic facilities and for studying the behavior of small programs. The syntax checker annotates programs with font and color changes based on the syntactic structure of the program. On demand, it draws arrows that point from bound to binding occurrences of identifiers. It also supports α-renaming. Finally, the static debugger provides a type inference system that explains specific inferences in terms of a value-flow graph, selectively overlaid on the program text. <s> BIB001 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Scheme, Racket <s> DrJava is a pedagogic programming environment for Java that enables students to focus on designing programs, rather than learning how to use the environment. The environment provides a simple interface based on a "read-eval-print loop" that enables a programmer to develop, test, and debug Java programs in an interactive, incremental fashion. This paper gives an overview of DrJava including its pedagogic rationale, functionality, and implementation. <s> BIB002
Scheme is a language originally aimed at bringing structured programming to the lands of Lisp (mainly by adding lexical scoping). The language nowadays has a wide and energetic community of users. Its importance in education, however, is chiefly related to a book, "Structure and Interpretation of Computer Programs" (SICP), which had a tremendous impact on the practice of programming education. The book derived from a semester course taught at MIT. It has the peculiarity of presenting programming as a way of organizing thinking and problem solving. Every detail of the Scheme notional machine is worked out in the book: at the end, the reader should be able to understand the mechanics of a Scheme interpreter and to program one by herself (in Scheme). The book, which enjoyed widespread adoption, was originally directed at MIT undergraduates, and it is certainly not suitable either for children or even for adults without a scientific background: examples are often taken from college-level mathematics and physics. A spin-off of SICP explicitly directed at learning is Racket. Born as 'PLT Scheme', one of its strengths is the programming environment DrScheme BIB001 (now DrRacket): it supports educational scaffolding, it suggests proper documentation, and it can use different flavours of the language, from a very basic one (Beginning Student Language, which includes only notation for function definitions, function applications, and conditional expressions) to multi-paradigm dialects. The DrRacket approach is supported by an online book, "How to Design Programs" (HTDP) 6 , and it has been adapted to other mainstream languages, like Java BIB002 and Python. The availability of different languages directed to the progression of learning should help in overcoming what the DrRacket proponents identify as "the crucial problem" in the interaction between the learner and the programming environment: beginners make mistakes before they know much of the language, but development tools diagnose these errors as if the programmer already knew the whole notional machine. Moreover, DrRacket has a minimal interface aimed at not confusing novices, with just two simple interactive panes: a definitions area, and an interactions area which allows a programmer to ask for the evaluation of expressions that may refer to the definitions. Similarly to what happens in visual languages, Racket allows for direct manipulation of sprites; see an example in Fig. 6. The authors of HTDP claim that "program design -but not programming -deserves the same role in a liberal arts education as mathematics and language skills." They aim at systematically designed programs obtained through systematic thought, planning, and understanding from the very beginning, at every stage, and for every step. To this end, the HTDP approach is to present "design recipes", supported by predefined scaffolding that should be iteratively refined to match the problem at hand. This is indeed very close to the idea of micropatterns discussed above. 6 Current version: http://www.htdp.org/2018-01-06/Book/index.html
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Scratch, Snap!, Alice, and others <s> "Digital fluency" should mean designing, creating, and remixing, not just browsing, chatting, and interacting. <s> BIB001 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Scratch, Snap!, Alice, and others <s> Scratch is a visual programming environment that allows users (primarily ages 8 to 16) to learn computer programming while working on personally meaningful projects such as animated stories and games. A key design goal of Scratch is to support self-directed learning through tinkering and collaboration with peers. This article explores how the Scratch programming language and environment support this goal. <s> BIB002
EToys worlds with pre-defined (although programmable) objects evolved into a generic environment in which everything can be defined in terms of 'statement' blocks. Scratch BIB001 , originally written in Smalltalk, is the most popular and successful visual block-based programming environment. Launched in 2007 by the MIT Media Lab, the Scratch site has grown to more than 25 million registered members and over 29 million shared Scratch projects. Unlike traditional programming languages, it uses graphical programming blocks that automatically snap together like Lego bricks when they make syntactic sense. In visual programming languages, a block represents a command or action, and blocks are arranged in scripts: the composition of individual scripts amounts to the construction of an algorithm. The building blocks offer the possibility, e.g., of animating different objects on a stage, thus defining their behavior. The Scratch environment has some distinctive characteristics, according to its authors BIB002 . Among the ones the authors highlight, some are particularly relevant to the constructionist approach:
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Learning to Program in Teams <s> From the Book: ::: “Clean code that works” is Ron Jeffries’ pithy phrase. The goal is clean code that works, and for a whole bunch of reasons: ::: Clean code that works is a predictable way to develop. You know when you are finished, without having to worry about a long bug trail.Clean code that works gives you a chance to learn all the lessons that the code has to teach you. If you only ever slap together the first thing you think of, you never have time to think of a second, better, thing. Clean code that works improves the lives of users of our software.Clean code that works lets your teammates count on you, and you on them.Writing clean code that works feels good.But how do you get to clean code that works? Many forces drive you away from clean code, and even code that works. Without taking too much counsel of our fears, here’s what we do—drive development with automated tests, a style of development called “Test-Driven Development” (TDD for short). ::: In Test-Driven Development, you: ::: Write new code only if you first have a failing automated test.Eliminate duplication. ::: ::: Two simple rules, but they generate complex individual and group behavior. Some of the technical implications are:You must design organically, with running code providing feedback between decisionsYou must write your own tests, since you can’t wait twenty times a day for someone else to write a testYour development environment must provide rapid response to small changesYour designs must consist of many highly cohesive, loosely coupled components, just to make testing easy ::: ::: The two rules imply an order to the tasks ofprogramming: ::: 1. Red—write a little test that doesn’t work, perhaps doesn’t even compile at first ::: 2. Green—make the test work quickly, committing whatever sins necessary in the process ::: 3. Refactor—eliminate all the duplication created in just getting the test to work ::: ::: ::: Red/green/refactor. The TDD’s mantra. ::: ::: Assuming for the moment that such a style is possible, it might be possible to dramatically reduce the defect density of code and make the subject of work crystal clear to all involved. If so, writing only code demanded by failing tests also has social implications: ::: If the defect density can be reduced enough, QA can shift from reactive to pro-active workIf the number of nasty surprises can be reduced enough, project managers can estimate accurately enough to involve real customers in daily developmentIf the topics of technical conversations can be made clear enough, programmers can work in minute-by-minute collaboration instead of daily or weekly collaborationAgain, if the defect density can be reduced enough, we can have shippable software with new functionality every day, leading to new business relationships with customers ::: ::: ::: So, the concept is simple, but what’s my motivation? Why would a programmer take on the additional work of writing automated tests? Why would a programmer work in tiny little steps when their mind is capable of great soaring swoops of design? Courage. ::: Courage ::: Test-driven development is a way of managing fear during programming. I don’t mean fear in a bad way, pow widdle prwogwammew needs a pacifiew, but fear in the legitimate, this-is-a-hard-problem-and-I-can’t-see-the-end-from-the-beginning sense. 
If pain is nature’s way of saying “Stop!”, fear is nature’s way of saying “Be careful.” Being careful is good, but fear has a host of other effects: ::: Makes you tentativeMakes you want to communicate lessMakes you shy from feedbackMakes you grumpy ::: ::: None of these effects are helpful when programming, especially when programming something hard. So, how can you face a difficult situation and: ::: Instead of being tentative, begin learning concretely as quickly as possible.Instead of clamming up, communicate more clearly.Instead of avoiding feedback, search out helpful, concrete feedback.(You’ll have to work on grumpiness on your own.) ::: ::: Imagine programming as turning a crank to pull a bucket of water from a well. When the bucket is small, a free-spinning crank is fine. When the bucket is big and full of water, you’re going to get tired before the bucket is all the way up. You need a ratchet mechanism to enable you to rest between bouts of cranking. The heavier the bucket, the closer the teeth need to be on the ratchet. ::: ::: The tests in test-driven development are the teeth of the ratchet. Once you get one test working, you know it is working, now and forever. You are one step closer to having everything working than you were when the test was broken. Now get the next one working, and the next, and the next. By analogy, the tougher the programming problem, the less ground should be covered by each test. ::: ::: Readers of Extreme Programming Explained will notice a difference in tone between XP and TDD. TDD isn’t an absolute like Extreme Programming. XP says, “Here are things you must be able to do to be prepared to evolve further.” TDD is a little fuzzier. TDD is an awareness of the gap between decision and feedback during programming, and techniques to control that gap. “What if I do a paper design for a week, then test-drive the code? Is that TDD?” Sure, it’s TDD. You were aware of the gap between decision and feedback and you controlled the gap deliberately. ::: ::: That said, most people who learn TDD find their programming practice changed for good. “Test Infected” is the phrase Erich Gamma coined to describe this shift. You might find yourself writing more tests earlier, and working in smaller steps than you ever dreamed would be sensible. On the other hand, some programmers learn TDD and go back to their earlier practices, reserving TDD for special occasions when ordinary programming isn’t making progress. ::: ::: There are certainly programming tasks that can’t be driven solely by tests (or at least, not yet). Security software and concurrency, for example, are two topics where TDD is not sufficient to mechanically demonstrate that the goals of the software have been met. Security relies on essentially defect-free code, true, but also on human judgement about the methods used to secure the software. Subtle concurrency problems can’t be reliably duplicated by running the code. ::: ::: Once you are finished reading this book, you should be ready to: ::: Start simplyWrite automated testsRefactor to add design decisions one at a time ::: ::: This book is organized into three sections. ::: An example of writing typical model code using TDD. The example is one I got from Ward Cunningham years ago, and have used many times since, multi-currency arithmetic. In it you will learn to write tests before code and grow a design organically.An example of testing more complicated logic, including reflection and exceptions, by developing a framework for automated testing. 
This example also serves to introduce you to the xUnit architecture that is at the heart of many programmer-oriented testing tools. In the second example you will learn to work in even smaller steps than in the first example, including the kind of self-referential hooha beloved of computer scientists.Patterns for TDD. Included are patterns for the deciding what tests to write, how to write tests using xUnit, and a greatest hits selection of the design patterns and refactorings used in the examples. ::: ::: I wrote the examples imagining a pair programming session. If you like looking at the map before wandering around, you may want to go straight to the patterns in Section 3 and use the examples as illustrations. If you prefer just wandering around and then looking at the map to see where you’ve been, try reading the examples through and refering to the patterns when you want more detail about a technique, then using the patterns as a reference. ::: ::: Several reviewers have commented they got the most out of the examples when they started up a programming environment and entered the code and ran the tests as they read. ::: ::: A note about the examples. Both examples, multi-currency calculation and a testing framework, appear simple. There are (and I have seen) complicated, ugly, messy ways of solving the same problems. I could have chosen one of those complicated, ugly, messy solutions to give the book an air of “reality.” However, my goal, and I hope your goal, is to write clean code that works. Before teeing off on the examples as being too simple, spend 15 seconds imagining a programming world in which all code was this clear and direct, where there were no complicated solutions, only apparently complicated problems begging for careful thought. TDD is a practice that can help you lead yourself to exactly that careful thought. <s> BIB001 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Learning to Program in Teams <s> School software projects, as they are common e.g. in German CS classes, traditionally apply inflexible process models, mostly an adapted waterfall model. Typically, such projects are conducted at the end of a school year. In this paper we pursue the question, if and how changing process model and time may help bringing the advantages of project based learning into play. We describe and compare practical experiences of a study with 140 students, considering four different contexts. By applying agile methods, flexibility was gained. The evaluation of the different implementations results in a more holistic and comprehensive view of projects in CSE. <s> BIB002 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Learning to Program in Teams <s> Context: Empirical investigations regarding using Agile programming methodologies in high schools are scarce in the literature. Objective: This paper evaluates (i) the performance, (ii) the code quality, and (iii) the satisfaction of both students and teachers in using Agile practices in education. Method: this study includes an experiment, administered in a laboratory controlled setting to measure students' performances and a case study to value the code quality improvements. Questionnaires were also used to evaluate qualitative aspects of Agile practices. Results: groups of students with mixed skills performed significantly better than groups with the same skill level. 
Moreover, there was also a general increase in code quality along with satisfaction. Conclusions: Agile methodologies are useful in the High School education of young software developers. <s> BIB003
Constructivist approaches often emphasize the importance of the social context in which learning happens (see e.g. ). Working in developer teams requires new skills, especially because software products (even the ones within reach of novices) are often tangled with many dependencies and division of labour is hard: it inevitably requires appropriate communication and coordination. Therefore, it is important that novice programmers learn to program in an "organized" way, discovering that as a group they are able to solve more challenging and open-ended problems, maybe with interdisciplinary contributions. To this end, agile methodologies fit well with constructivist pedagogies involving learning in teams, and they are increasingly exploited in educational settings (see for example BIB002 BIB003 ):
• Agile teams are typically small groups of 4-8 co-workers.
• Agile values (individuals and interactions over processes and tools; customer collaboration over contract negotiation; responding to change over following a plan; working software over comprehensive documentation) relate well with constructivist philosophies.
• Agile teams are self-organizing, emphasize the need for reflecting regularly on how to become more effective, and tune and adjust their behavior accordingly.
• Agile techniques like pair programming, test-driven development, iterative software development, and continuous integration are very attractive for a learning context.
The iterative nature of agile methods is well exemplified by test-driven development, or TDD BIB001 . This technique reverses the order between code implementation and correctness testing: the specification of the programming task at hand is actually provided by a test that defines correct behavior. The development cycle is then based on iterating the following procedure (a minimal example in this style follows below):
i. write a test known to fail according to the current stage of the implementation;
ii. perform the smallest code update which satisfies all tests, including the one introduced in the previous point;
iii. optionally refactor the produced code.
TDD makes testing the engine driving the overall development process: one of the hardest-to-find contributions for facilitators in an active programming learning context is suggesting a good next test. This has the role of making pupils aware that their belief at a broad level ("the program works") is false, and thus an analogous belief at a smaller scale (for instance, "this function always returns the correct result") should be false, too. This amounts to the destruction of knowledge necessary to build new knowledge (aka a working program) in a constructivist setting. Moreover, refactoring corresponds to the constructivist re-organization of knowledge following the discovery of more viable solutions: most developing activities consist in realizing that a system which was thought to work correctly is actually not able to cope with a new test case. This applies of course also to the simplest tasks faced by students engaged in learning the basics of computer programming. Once pupils are convinced that their implementation is flawed, the localization of the code lines to be reconsidered is the other pillar of an active learning setting. Again, a paramount contribution to a successful learning process can be provided by a facilitator suggesting suitable debugging techniques (e.g., proposing critical input values, suggesting points in the execution flow to be verified, or giving advice about variables to be tracked during the next run).
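The following minimal sketch (a hypothetical classroom exercise, written in Python with the standard unittest module) shows what one iteration of the cycle can look like; each test was first written while failing (step i), and the function is the smallest implementation satisfying all of them so far (step ii):

import unittest

def is_leap(year):
    # Step ii: smallest update satisfying all tests written so far;
    # step iii would refactor with the tests acting as a safety net.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestLeapYears(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(is_leap(1996))

    def test_century_not_leap(self):
        self.assertFalse(is_leap(1900))

    def test_four_centuries_leap(self):
        self.assertTrue(is_leap(2000))

if __name__ == "__main__":
    unittest.main()

A facilitator suggesting the next failing test (say, an input outside the expected range) keeps the destruction/reconstruction cycle of knowledge going.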
Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Conclusions <s> This paper describes a new technique for implementing educational programming languages using tangible interface technology. It emphasizes the use of inexpensive and durable parts with no embedded electronics or power supplies. Students create programs in offline settings---on their desks or on the floor---and use a portable scanning station to compile their code. We argue that languages created with this approach offer an appealing and practical alternative to text-based and visual languages for classroom use. In this paper we discuss the motivations for our project and describe the design and implementation of two tangible programming languages. We also describe an initial case study with children and outline future research goals. <s> BIB001 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Conclusions <s> "Digital fluency" should mean designing, creating, and remixing, not just browsing, chatting, and interacting. <s> BIB002 </s> Constructionist Attempts at Supporting the Learning of Computer Programming: A Survey <s> Conclusions <s> Learning to program is hard, also because it requires students to deal with abstraction. A program is an abstract construct: most pieces of a program are not concrete, literal values, but they are abstract symbols standing for things or actions in the system they model and control. Thus, when learning to program, novices not only need to learn about the system, but they also need to learn about the programming language. They need to think about the concrete effects in the system their abstract program constructs will cause once the program will execute. This thinking on two levels of abstraction (program and system), and in two dimensions of time (coding and execution) can cause a significant burden. In this short paper we propose to collapse those two levels. We wonder whether it would be possible to devise a programming environment where the program is the system. To do this we need languages that are the system, instead of languages that are about the system. We propose to use tangible languages as a way towards this idea. We briefly present three such languages which we used in the context of an informal learning setting and report our initial lessons learned. <s> BIB003
The literature on learning to program through a constructionist strategy has often focused on how to bring the abstract and formal nature of programming languages into the manipulation of more concrete (or even tangible) "objects" BIB001 BIB002 BIB003 . Many proposals aim at overcoming the (initial) hurdles which textual rules of syntax may pose to children. Also, several environments have been designed in order to increase the appeal of programming by connecting this activity to real-world devices or by providing fancy libraries. Instead, more work is probably needed to make educators and learners more aware of the so-called notional machine behind the programming language. Programming environments could be more explicit about the complex relationship between the code one writes and the actions that take place when the program is executed. Moreover, micro-patterns should be exploited in order to enhance the problem solving skills of novice programmers, so that they become able to think about the solution of problems in the typical way that makes it suitable for automatic elaboration. Agile methodologies, now also common in professional settings, seem to fit well with constructionist learning. Besides the stress on teamwork, particularly useful seem the agile emphasis on having running artifacts throughout the development cycle and the common practice of driving development with explicit, or even executable, "definitions of done".
CHARM
CHARM BIB004 BIB001 BIB003 BIB002 , which stands for Closed Association Rule Mining, is an algorithm for mining closed frequent patterns. It explores the patternset and didset (document idset) spaces simultaneously, which allows it to skip many levels of the search tree and identify the closed frequent patterns quickly. It uses two pruning strategies: candidates are pruned not only on subset infrequency, but whole branches are also pruned based on the non-closure property. Its fundamental operation is the union of two patternsets combined with the intersection of their document-id sets. The key features of the CHARM algorithm are: it explores both itemsets and didsets for quick mining of closed frequent patterns, and it uses a pure bottom-up approach.
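To make the patternset–didset interplay concrete, the following sketch (an illustrative simplification, not the full CHARM with its diffsets and hash-based closure checks) extends patterns by unioning itemsets while intersecting their didsets, prunes on subset infrequency, and filters non-closed sets at the end:

```python
from collections import defaultdict

def mine_closed(transactions, min_support):
    """Simplified vertical-format closed-pattern miner in the spirit of CHARM."""
    # Vertical representation: item -> didset (set of document ids).
    vertical = defaultdict(set)
    for did, items in enumerate(transactions):
        for item in items:
            vertical[item].add(did)
    singles = sorted(
        ((item, dids) for item, dids in vertical.items()
         if len(dids) >= min_support),
        key=lambda kv: len(kv[1]))               # support-ascending order
    found = {}

    def extend(prefix, prefix_dids, candidates):
        for idx, (item, dids) in enumerate(candidates):
            new_dids = prefix_dids & dids        # intersect didsets
            if len(new_dids) < min_support:      # subset-infrequency pruning
                continue
            new_prefix = prefix | {item}         # union of patternsets
            found[frozenset(new_prefix)] = new_dids
            extend(new_prefix, new_dids, candidates[idx + 1:])

    extend(frozenset(), set(range(len(transactions))), singles)
    # Closedness filter: drop any set with a superset of equal support.
    return {p: d for p, d in found.items()
            if not any(p < q and len(d) == len(e) for q, e in found.items())}

# Example: mine_closed([['a','b'], ['a','b'], ['a']], 2)
# yields {'a','b'} with support 2 and {'a'} with support 3.
```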
CLOSET+
The CLOSET+ BIB002 BIB001 algorithm mines closed frequent patterns as follows. It first scans the database once to find the globally frequent patterns and sorts them in support-descending order to form the frequent-pattern list; it then scans the documents again and builds the FP-tree according to this list. Using a divide-and-conquer strategy and a depth-first search paradigm, it extracts the closed frequent patterns, stopping once all the patterns in the global header table have been mined. The frequent closed patterns are obtained either from the result tree or from the output file. The key features of the CLOSET+ algorithm are: it uses a hybrid tree-projection method for the conditional projected databases, and it works on the horizontal data format.
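The sketch below illustrates only the preprocessing phase just described: one scan to count items, a support-descending frequent-pattern list, and the projection of each document into the order in which it would be inserted into the FP-tree. The tree construction and mining phases are omitted here:

```python
from collections import Counter

def closet_plus_preprocess(documents, min_support):
    """First phase of a CLOSET+-style miner (a simplified sketch):
    builds the f_list and reorders documents for FP-tree insertion."""
    counts = Counter(item for doc in documents for item in doc)
    # Frequent-pattern list in support-descending order.
    f_list = [item for item, c in counts.most_common() if c >= min_support]
    rank = {item: r for r, item in enumerate(f_list)}
    # Project each document onto the frequent items, in f_list order.
    projected = [sorted((i for i in set(doc) if i in rank), key=rank.get)
                 for doc in documents]
    return f_list, projected
```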
CARPENTER
CARPENTER BIB001 BIB002 BIB003 targets datasets with a large number of attributes and relatively few rows, such as gene expression data. In the first step, it builds the transposed table of the dataset. In the second step, according to the transposed table, it constructs the row enumeration tree, which enumerates row ids in a predefined order; a naive version would search this tree depth first without any pruning. CARPENTER adds three pruning strategies: pruning 1 removes branches that cannot reach enough depth (the depth of a node corresponds to its support, i.e. the number of enumerated rows); pruning 2 prunes the branch of rj whenever rj has 100% support in the projected table of ri; and pruning 3 prunes the branch rooted at a node whenever the corresponding itemset of that node has already been found. The key features of the CARPENTER algorithm are: it uses row enumeration to search the (much smaller) row space, and it follows a depth-first approach.
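Row enumeration can be illustrated compactly as follows. This sketch omits all three pruning strategies, so it is exponential in the number of rows and only viable for the small row counts CARPENTER targets; each enumerated row set is mapped to the intersection of its rows, and such intersections are exactly the closed patterns:

```python
def carpenter_sketch(rows, min_support):
    """Unpruned row-enumeration miner: returns closed patterns with their
    supporting row ids. 'rows' is a list of item collections, one per row."""
    rows = [set(r) for r in rows]
    n = len(rows)
    closed = {}

    def recurse(row_ids, items, start):
        if len(row_ids) >= min_support:
            key = frozenset(items)           # intersection of the chosen rows
            if key not in closed or len(closed[key]) < len(row_ids):
                closed[key] = list(row_ids)  # keep the maximal row set
        for r in range(start, n):
            shared = items & rows[r]
            if shared:                       # extend only while the
                recurse(row_ids + [r], shared, r + 1)  # intersection is nonempty

    for r in range(n):
        if rows[r]:
            recurse([r], rows[r], r + 1)
    return closed
```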
CFIM-P
CFIM-P BIB001 BIB002 , the Closed Frequent Itemset Mining and Pruning algorithm, mines closed frequent patterns in three phases. In the first phase, it traces the null documents and filters them out before mining begins. In the second phase, it mines the closed frequent patterns based on the minimum support count: if an already mined superset exists for a subset of a frequent pattern, the subset is eliminated in a top-down manner, and each closed frequent itemset obtained is added to the list of frequent itemsets. In the third phase, the mined closed frequent itemsets are assembled into the final patterns. The key features of the CFIM-P algorithm are: it uses a top-down strategy, and it eliminates the null transactions before the mining process starts.
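A sketch of the first phase, under our (hedged) reading that a "null" transaction is one contributing no frequent item: such transactions are dropped, and the survivors are converted to the vertical data format the algorithm mines on:

```python
from collections import defaultdict

def cfimp_phase1(transactions, min_support):
    """Null-transaction elimination plus vertical-format conversion
    (an interpretation of CFIM-P's preprocessing, not the exact code)."""
    counts = defaultdict(int)
    for t in transactions:
        for item in set(t):
            counts[item] += 1
    frequent = {i for i, c in counts.items() if c >= min_support}
    # A transaction with no frequent item cannot contribute to any
    # closed frequent pattern, so it is filtered out.
    kept = [t for t in transactions if frequent & set(t)]
    vertical = defaultdict(set)          # item -> transaction-id set
    for tid, t in enumerate(kept):
        for item in set(t) & frequent:
            vertical[item].add(tid)
    return vertical
```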
Demand node model
The concept of demand nodes, first introduced in earlier work, has since been used in several studies (e.g. BIB001 BIB002 BIB003 BIB004 ). The basic idea is that a demand node represents the centre of an area where traffic is generated by the users. The main advantage of this model is that, by combining the traffic of a small region into a single point, the computational requirements are drastically reduced; the drawback is that the realism of the problem is also reduced. Each demand node comprises a number of test points, hence the need for fewer nodes; however, merging test points into a single demand node has the same effect as applying a lossy compression mechanism: the resolution is lowered. Most of the research using this model also allows total freedom in the positioning of candidate sites. This permits a uniform distribution of the sites over the whole area to be covered, which is usually not possible in practice, as a site cannot simply be placed anywhere, e.g. in the middle of a motorway.
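The aggregation step can be illustrated as follows; the grid-based grouping and the traffic-weighted centroid are illustrative choices, not a prescription from the cited works:

```python
def build_demand_nodes(test_points, cell_size):
    """Toy illustration of the demand-node idea: traffic from many test
    points (x, y, erlangs) is merged into one node per grid cell, trading
    spatial resolution for a much smaller problem instance."""
    buckets = {}
    for x, y, traffic in test_points:
        key = (int(x // cell_size), int(y // cell_size))
        cx, cy, t = buckets.get(key, (0.0, 0.0, 0.0))
        buckets[key] = (cx + x * traffic, cy + y * traffic, t + traffic)
    # Each demand node sits at the traffic-weighted centroid of its region.
    return [(cx / t, cy / t, t) for cx, cy, t in buckets.values() if t > 0]
```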
Disc model
The first use of disc (circle) graphs in the design of cellular networks was in BIB001 , where they were applied to solve the frequency assignment problem. Later extensions to this model consider intersections among discs and non-uniform traffic distributions (Huang et al. 2000a,b,c) . The main advantage of the approach presented in BIB002 is that different goals related to the design of the network can be taken into account, so the cell planning and frequency assignment problems can be addressed simultaneously; furthermore, the computational costs are not high. The main inconvenience of the disc model is that it assumes an ideal propagation model, so all the cells have the same shape. Even though the size of the cells can vary with a non-uniform traffic distribution BIB003 , the shape is always a circle. Another issue is that sites may be located anywhere, so the same problems as in the demand node model arise.
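Under the ideal propagation assumption, coverage evaluation reduces to point-in-circle tests, as in this minimal sketch (node and site formats follow the demand-node example above):

```python
import math

def disc_coverage(nodes, sites, radius):
    """Disc-model coverage: every cell is an ideal circle of the same radius
    around its site; returns the fraction of demand nodes inside some disc."""
    hit = sum(any(math.hypot(x - sx, y - sy) <= radius for sx, sy in sites)
              for x, y, _ in nodes)
    return hit / len(nodes) if nodes else 0.0
```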
Cell and test point model
Although this model is best known through the works of BIB001 and Caminada (1998a,b, 2001), it had appeared earlier in the literature. In it, the working area is discretized into a set of test points spread over the whole area. These test points are used to measure the signal strength in the region where the network operator intends to serve the traffic demand of a set of customers. Three subsets of test points are defined: reception test points (RTPs), where the signal quality is tested; service test points (STPs), where the signal quality must exceed a minimum threshold to be usable by customers; and traffic test points (TTPs), where a certain amount of traffic, measured in Erlangs, is associated with each customer. In this model, the set of candidate site locations does not have to be uniformly distributed over the terrain, so it better represents the scenarios presented by the operators. Its main advantage is that it allows all the network objectives (such as coverage and capacity) to be measured. There is, however, a clear inconvenience: the computational cost increases because a large number of points is usually used (e.g. test points every 200 metres) in order to increase the realism. This realism is the main reason why this model is widely adopted in the literature (e.g. BIB002 BIB003 BIB004 BIB005 BIB006 BIB007 ).
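A minimal sketch of the test-point bookkeeping follows; the signal map and the threshold are placeholders that any propagation model could supply, and the RTP/STP/TTP split is reduced to its essentials:

```python
def served_test_points(signal, service_threshold):
    """'signal' maps a test point id to its best received field strength
    (e.g. in dBm, from any propagation model). Test points at or above
    the service threshold count as served STPs."""
    return {tp for tp, strength in signal.items()
            if strength >= service_threshold}

def served_traffic(served, traffic):
    """TTPs carry an Erlang demand; sum the demand of the served ones."""
    return sum(erl for tp, erl in traffic.items() if tp in served)
```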
ACP problems addressed by EAs
• Cell: this column indicates how the cell or service area of the BTSs is computed.
• Pw, Ti and Az: these three columns show, respectively, whether the power, tilt and azimuth of the BTSs are optimized. These are the most common settings adjusted when BTS dimensioning is addressed.
• Objectives: the different aspects of the cellular network that are optimized.
• Constraints: the aspects of the cellular network that are treated as constraints during the optimization process.

From the algorithmic point of view, classic GAs, both generational (genGA) and steady-state (ssGA), have been used in the literature for solving the ACP problem; indeed, they are applied in almost 50% of the works reviewed. Rather specific evolutionary techniques such as CHC BIB001 , differential evolution (DE, Storn and Price 1995), PBIL BIB002 , and artificial immune systems (AIS, de Melo Carvalho Filho and de Alencar 2008) are also found. Not only sequential approaches exist, but also parallel models deployed on standard parallel platforms such as clusters of computers (dGAs, Calégari et al. 2001, Alba and BIB006 ) and even grid computing systems BIB009 . Among multiobjective approaches, NSGA-II and SPEA2, the two best-known algorithms in the evolutionary multiobjective research community, have been applied in eight of the analysed works. Other specific multiobjective algorithms used are SEAMO BIB007 and MOCHC BIB010 .

From the point of view of the formulation, the first proposals adopted a single-objective approach in which the different network aspects to be optimized are weighted into a single (aggregative) function BIB003 BIB004 , Reininger et al. 1999. However, recent advances in multiobjective EAs have meant that the number of works using the multiobjective formulation has grown in later years BIB008 BIB010 BIB009 BIB011 .

Figure 2 summarizes the number of reviewed contributions that fall into the different categories: mono/multiobjective formulation, ACP model, site selection, cell shape computation, and BTS parameter optimization. Each group of columns of the figure is now analysed. In the first group, monoobjective formulations have been the more widely used, in spite of the fact that the ACP problem is naturally amenable to multiobjective ones: the additional complexity added by the Pareto optimality mechanisms makes ACP researchers reluctant to adopt this kind of technique. However, the multiobjective approach may be the most appropriate, because it can provide the decision maker (the network designer) with a set of different configurations for the BTSs, none of which is better than the others (non-dominated). These configurations could be used in particular scenarios that may appear during the operational lifetime of the network.

The second group of columns shows the ACP models used in the analysed contributions. It is clear that the demand node and test point models are the most widely adopted: simplicity and low computational requirements in the former case, and realism in the latter, explain this. The disc model has more to do with theoretical studies; indeed, cellular networks composed exclusively of omnidirectional antennae are hardly found in the real world (sectorization allows the network capacity to be greatly increased).
Looking at the third group of columns in Figure 2, it can be observed that using a candidate site list (CSL), instead of freely placing the BTSs at any location of the network, is the most common option. This is because network operators are rarely granted such freedom (e.g. no BTS can be placed near a school or in the middle of a lake). The fourth group of columns reflects the preferred choices for computing the cells (serving areas) of the BTSs: propagation models such as the free-space model, the Okumura-Hata model or the Walfisch-Ikegami model (COST231 1991); choosing one or another depends mainly on the computational effort required (ITU 1997). Omnidirectional and square cells also appear in several contributions (eight and six works, respectively). Tables 1 and 2 include alternative methods for computing the cell associated with a BTS, such as modern ray-tracing techniques BIB005 .

Finally, the last group of columns summarizes the number of articles in which the power, tilt and azimuth are involved in the optimization process, that is, in which they are decision variables of the search space. Even though the differences here are smaller, it can be seen that the power parameter is optimized more often than the other two: it applies to any kind of BTS (omnidirectional, directive, square, etc.) as the main setting for managing the cell size. The tilt and azimuth angles usually appear in very accurate ACP models, which normally lead to computationally expensive tasks; this explains their lower incidence in the literature.

To conclude this discussion of the analysed works, the objective functions and the constraints used in the different approaches are now examined. On the objectives side, a clear trend exists towards considering the network cost, measured in terms of the number of installed sites, and the quality of service (QoS) provided by these sites; these two objectives are clearly contradictory. The main difference between many contributions lies in the notion of QoS. Maximizing the network coverage is the most widely used option, appearing in 78% of the reviewed contributions. A more realistic alternative is to use such an objective as a constraint (e.g. at least 90% of the network must be covered), so as to discard useless configurations: it makes no sense to deploy an expensive, fully operational network infrastructure that covers only a small percentage of the target area. Other ways of measuring the network QoS in the literature take into consideration the interference caused by cell overlapping or the traffic capacity of the network. As to the constraints, handover, i.e. the capability of the network to guarantee continuous communication while the mobile user moves from one cell to another, is the one that appears most often.
Details on EAs for the ACP problem
This section reviews the main features of the EAs found in the literature for solving the ACP problem. The potential advantages and drawbacks of each algorithm are analysed in the light of their corresponding encoding schemes, genetic operators, local search and parallelization.

Binary encoding. The first usage of this encoding scheme appears when the optimization task is simply to position the BTSs of the network by selecting a subset of sites from a candidate site list (CSL). EAs then work on bit strings of length N, where N is the total number of candidate sites. Each position of the bit string corresponds to a site, i.e. the ith position represents the ith site; its value is 1 if the ith site is selected, and 0 otherwise. This approach is especially used when solving ACP problems that follow the demand node model (see Section 2.2.1), e.g. BIB001 , Chamaret and Condevaux-Lanloy (1998), BIB002 . Binary encoding has also been used when the BTSs can be freely placed anywhere in the geographical area of the network (no CSL exists). In this case, the bit string encodes the binary representation of a list of real numbers that represent the (x, y) coordinates of the sites. In all the material analysed, however, the tentative solutions also include one or more values for dimensioning the BTSs (i.e. for configuring their service areas). Indeed, in BIB004 , among others, the binary string also encodes the power level of emission. In the works of BIB005 and BIB006 , the authors include not only the emission power, but also the tilt of the antennae; thus, 24 bits are used for each BTS: 9 + 9 bits for the coordinates, 3 bits for the radiated power, and 3 bits for the tilt. BIB003 additionally encode the height of the BTSs. The main advantage of this binary encoding is that the evolutionary search can be performed by means of classical EA operators, which were originally developed to manipulate binary genotypes, as will be further analysed in Section 3.2.2.
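For the CSL variant, the genotype and the classical operators can be sketched as follows; this is a generic illustration, not the exact operator set of any cited work:

```python
import random

def random_individual(n_sites):
    """Bit-string genotype for site selection: bit i == 1 means candidate
    site i is commissioned. Classical binary operators apply unchanged."""
    return [random.randint(0, 1) for _ in range(n_sites)]

def one_point_crossover(a, b):
    """Classical one-point crossover over two parents of equal length."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def bit_flip_mutation(individual, p_flip):
    """Independently flip each bit with probability p_flip."""
    return [1 - g if random.random() < p_flip else g for g in individual]
```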
Integer encoding.
Integer encoding has been used by Larry Raisanen, Roger Whitaker and Steve Hurley at Cardiff University in several works: BIB001, Whitaker et al. (2004a,b), BIB002, BIB003. Their approach is based on considering that each BTS is identified by an integer. Then, given n candidate BTSs, a permutation π of size n represents a solution to the ACP problem. That is, EAs manipulate integer permutations, so special care has to be taken with the genetic operators used. These BTS permutations are then translated into a cell plan by using a decoder, which works by iteratively packing cells as densely as possible, subject to certain constraints not being violated. This cell plan is then used to compute the fitness function.
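The permutation-plus-decoder idea can be sketched as follows. This is a minimal sketch: the per-site coverage sets, the demand set and the hard BTS budget are hypothetical stand-ins for the constraints handled by the authors' actual decoder.

```python
import random

# Candidate BTSs are identified by the integers 0..n-1; a genotype is a
# permutation giving the order in which the decoder considers them.
n = 8
permutation = random.sample(range(n), n)

# Hypothetical per-site data: the service test points each site covers.
covered_by = {i: set(random.sample(range(30), random.randint(3, 10)))
              for i in range(n)}

def decode(perm, demand=frozenset(range(30)), max_btss=5):
    """Greedy decoder: walk the permutation and commission a site only
    if it serves still-uncovered test points and the budget allows it."""
    plan, served = [], set()
    for site in perm:
        gain = (covered_by[site] & demand) - served
        if gain and len(plan) < max_btss:
            plan.append(site)
            served |= gain
    return plan, served

plan, served = decode(permutation)
print("cell plan:", plan, "covering", len(served), "of 30 test points")
```

Note that the fitness of a permutation is only defined through the decoded cell plan, and that permutation genotypes call for order-preserving operators (e.g. swap mutation or order crossover) rather than classical bit-string ones.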
Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Real encoding. <s> In this paper, we find the best base station placement using a genetic approach. A new representation describing base station placement with real numbers is proposed, and new genetic operators are introduced. This new representation can describe not only the locations of the base stations but also their number. Considering both coverage and economic efficiency, we also suggest a weighted objective function. Our algorithm is applied to an obvious optimization problem and then is verified. Moreover, our approach is tried in an inhomogeneous traffic density environment. The simulation result proves that the algorithm enables one to find near-optimal base station placement and the efficient number of base stations. <s> BIB001 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> Real encoding. <s> In this paper, the base station placement is automatically determined using a genetic approach, and the transmit power is estimated considering the interference situation in the case of interference-dominant systems. For applying a genetic algorithm to the base station placement problem, a new representation scheme with real numbers is proposed, and corresponding operators such as crossover and mutation are introduced. A weighted objective function is designed for performing the cell planning coverage cost-effectively. To verify the proposed algorithm, the situation where the optimum positions and number of base stations are obvious is considered. The proposed algorithm is applied to an inhomogeneous traffic density environment, where a base station's coverage may be limited by offered traffic loads. The simulation result proves that the algorithm enables us to find near-optimal base station placement and the efficient number of base stations. <s> BIB002
The real encoding is mainly used for solving ACP problems based on freely positioning the BTSs in the working area of the cellular network. The tentative solutions are therefore made up of real numbers that represent the BTS coordinates. This scheme is mainly used in works dealing with the disc model (see Section 2.2.2); indeed, this is the approach used in BIB001 and BIB002. If K is the maximum number of BTSs to be placed, solutions are encoded as arrays (c_1, ..., c_K), where c_i = (x_i, y_i) are the coordinates of the ith BTS. When a BTS is not supposed to be deployed, a special 'NULL' value is used instead. This is the mechanism adopted in these works to avoid using a variable-length representation, and therefore special genetic operators have been developed.
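A minimal sketch of this fixed-length real encoding follows, with Python's None playing the role of the 'NULL' value. The area size, activation probability and Gaussian mutation step are illustrative assumptions rather than settings from the cited works.

```python
import random

K = 6                    # maximum number of BTSs that may be deployed
AREA = (1000.0, 1000.0)  # working area of the network (illustrative)

def random_solution(p_active=0.7):
    """Fixed-length array (c_1, ..., c_K); None marks an undeployed BTS."""
    return [(random.uniform(0, AREA[0]), random.uniform(0, AREA[1]))
            if random.random() < p_active else None
            for _ in range(K)]

def mutate(sol, sigma=25.0):
    """Example of a specific operator: jitter one deployed BTS, or
    toggle a slot between deployed and NULL."""
    sol = list(sol)
    i = random.randrange(K)
    if sol[i] is None:                      # deploy a new BTS here
        sol[i] = (random.uniform(0, AREA[0]), random.uniform(0, AREA[1]))
    elif random.random() < 0.2:             # remove this BTS
        sol[i] = None
    else:                                   # perturb its coordinates
        x, y = sol[i]
        sol[i] = (min(max(x + random.gauss(0, sigma), 0.0), AREA[0]),
                  min(max(y + random.gauss(0, sigma), 0.0), AREA[1]))
    return sol

s = random_solution()
print(sum(c is not None for c in s), "BTSs deployed")
print(mutate(s))
```

The fixed-length array with a NULL sentinel is what lets the number of deployed BTSs vary without resorting to a variable-length genotype, at the price of operators that must treat the NULL slots specially.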
Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> An important class of computational problems is grouping problems, where the aim is to group together members of a set (i.e., find a good partition of the set). We show why both the standard and the ordering GAs fare poorly in this domain by pointing out their inherent difficulty in capturing the regularities of the functional landscape of the grouping problems. We then propose a new encoding scheme and genetic operators adapted to these problems, yielding the Grouping Genetic Algorithm (GGA). We give an experimental comparison of the GGA with the other GAs applied to grouping problems, and we illustrate the approach with two more examples of important grouping problems successfully treated with the GGA: the problems of Bin Packing and Economies of Scale. <s> BIB001 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> In this paper, the evolution of mobile radio networks is presented. First of all, the network life cycle is considered. A mathematical modeling of these life periods is developed inside an optimization problem: optimal location of base stations. It is a combinatorial optimization problem. A multi-period model is built on a concentrator link approach. Finally, three different multi-period techniques are identified; they are based on using the genetic algorithm (GA) to tackle this problem of the design of microcellular networks. <s> BIB002 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> Engineering of mobile telecommunication networks endures two major problems: the design of the network and the frequency assignment. We address the first problem in this paper, which has been formulated as a multiobjective constrained combinatorial optimisation problem. We propose a genetic algorithm (GA) that aims to approximate the Pareto frontier of the problem. Advanced techniques have been used, such as Pareto ranking, sharing and elitism. The GA has been implemented in parallel on a network of workstations to speed up the search. To evaluate the performance of the GA, we have introduced two new quantitative indicators: the entropy and the contribution. Encouraging results are obtained on real-life problems. <s> BIB003 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> The cell planning problem with capacity expansion is examined in wireless communications. The problem decides the location and capacity of each new base station to cover expanded and increased traffic demand. The objective is to minimize the cost of new base stations. The coverage by the new and existing base stations is constrained to satisfy a proper portion of traffic demands. The received signal power at the base station also has to meet the receiver sensitivity. The cell planning is formulated as an integer linear programming problem and solved by a tabu search algorithm. In the tabu search, intensification by add and drop moves is implemented by short-term memory embodied by two tabu lists. Diversification is designed to investigate proper capacities of new base stations and to restart the tabu search from new base station locations. Computational results show that the proposed tabu search is highly effective. A 10% cost reduction is obtained by the diversification strategies.
The gap from the optimal solutions is approximately 1-5% in problems that can be handled in appropriate time limits. The proposed tabu search also outperforms the parallel genetic algorithm. The cost reduction by the tabu search approaches 10-20% in problems with 2500 traffic demand areas (TDAs) in code division multiple access (CDMA). <s> BIB004 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> We give a short introduction to the results of our theoretical analysis of evolutionary algorithms. These results are used to design an algorithm for a large real-world problem: the placement of antennas for mobile radio networks. Our model for the antenna placement problem (APP) addresses cover, traffic demand, interference, different parameterized antenna types, and the geometrical structure of cells. The resulting optimization problem is constrained and multi-objective. The evolutionary algorithm derived from our theoretical analysis is capable of dealing with more than 700 candidate sites in the working area. The results show that the APP is tractable. The automatically generated designs enable experts to focus their efforts on the difficult parts of a network design problem. <s> BIB005 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> Cellular network design is a very large and complex combinatorial optimization problem. It consists of antenna location and parameter settings. Until now, the design has been done using radio quality criteria. Radio coverage, traffic capacity and field overlap are the main factors considered within the optimization process to make decisions about network solutions. Nevertheless, such objectives do not lead to an efficient organization of network cells, whereas this is a major assessment for radio expert planners. The absence of a clear geometrical structure of network cells prevents experts from using many theoretical concepts on network design. This paper proposes an original model to evaluate the cell shape and a bi-criteria approach using an Evolutionary Algorithm to handle cell overlap and cell geometry as criteria for real-life network optimization. <s> BIB006 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> As Third Generation (3G) mobile networks start to be implemented, there is a need for effective network planning. However, deciding upon the optimum placement for the base stations of the networks is a complex task requiring vast computational resources. This paper discusses the conflicting objectives of base station planning and characterises a multi-objective optimisation problem. We present a genetic encoding of the third generation mobile network planning problem and parallel genetic algorithms to solve it. <s> BIB007 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> Automatic cell planning aims at optimising the performance of UMTS networks in terms of capacity, coverage and quality of service by automatically adjusting antenna parameters and common channel powers. This paper presents an overview on optimisation strategies that correspond to different scenarios depending on the operational context.
Starting from capacity optimisation, we show how an Automatic Cell Planner (ACP) can be enhanced with specific functionalities such as joint coverage/capacity optimisation, automatic site selection or steered optimisation. Finally, we show how the improvement in quality of service brought about by an ACP can be accurately assessed with dynamic simulations using adequate key performance indicators (KPI). <s> BIB008 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> We focus on the dimensioning process of cellular networks that addresses the evaluation of global equipment costs to cover a city. To deal with frequency assignment, which constitutes the most critical resource in mobile systems, the network is usually modeled as a pattern of regular hexagonal cells. Each cell represents the area covered by the signal of a transmitter or base station (BS). Our work emphasizes the design of irregular hexagonal cells in an adaptive way. Hexagons transform themselves and adapt their shapes according to a traffic density map and to geometrical constraints. This process, called adaptive meshing (AM), may be seen as a solution to minimize the required number of BS to cover a region and to propose a basis for transmitter positioning. The solution we present to the mesh generation problem for mobile network dimensioning is based on the use of an evolutionary algorithm. This algorithm, called hybrid island evolutionary strategy (HIES), performs distributed computation. It allows the user to tackle problem instances with a large traffic density map requiring several hundreds of cells. HIES combines fast local-search computation on individuals, incorporated into a global island-like strategy. Experiments are done on one real case representing the mobile traffic load of the second French city of Lyon and on several other traffic maps from fictive urban data sets. <s> BIB009 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> We propose a new solution to the problem of positioning base station transmitters of a mobile phone network and assigning frequencies to the transmitters, both in an optimal way. Since an exact solution cannot be expected to run in polynomial time for all interesting versions of this problem (they are all NP-hard), our algorithm follows a heuristic approach based on the evolutionary paradigm. For this evolution to be efficient, i.e., goal-oriented and sufficiently random at the same time, problem-specific knowledge is embedded in the operators. The problem requires both the minimization of the cost and of the channel interference. We examine and compare two standard multiobjective techniques and a new algorithm - the steady-state evolutionary algorithm with Pareto tournaments. One major finding of the empirical investigation is a strong influence of the choice of the multiobjective selection method on the utility of the problem-specific recombination, leading to a significant difference in the solution quality. <s> BIB010 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> Cellular network design is a major issue in mobile telecommunication systems. In this paper, a model of the problem in its full practical complexity, based on multiobjective constrained combinatorial optimization, has been investigated.
We adopted the Pareto approach in order to compute a set of diversified non-dominated networks, thus removing the need for the designer to rank or weight objectives. We design an asynchronous steady-state evolutionary algorithm for its resolution. A specific coding scheme and genetic and neighborhood operators have been designed for the tackled problem. On the other hand, we make use of many generic features related to advanced intensification and diversification search techniques, hybridization of metaheuristics and grid computing for the distribution of the computations. They aim at improving the quality and robustness of networks and at speeding up the search, hence efficiently solving large instances of the problem. Using realistic benchmarks, the computed networks and speed-ups on parallel/distributed architectures show the efficiency and the scalability of hierarchical models of hybridization and parallelization used in conjunction. <s> BIB011 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> Cellular network design is a major issue in second generation GSM mobile telecommunication systems. In this paper, a new model of the problem in its full practical complexity, based on multiobjective constrained combinatorial optimization, has been used. We propose an evolutionary algorithm that aims at approximating the Pareto frontier of the problem, which removes the need for a cellular network designer to rank or weight objectives a priori. A specific coding scheme and genetic operators have been designed. Advanced intensification and diversification search techniques, such as elitism and adaptive sharing, have been used. Three complementary hierarchical parallel models have been designed to improve the solution quality and robustness, to speed up the search and to solve large instances of the problem. The obtained Pareto fronts and speed-ups on different parallel architectures show the efficiency and the scalability of the parallel model. Performance evaluation of the algorithm has been carried out on different realistic benchmarks. The obtained results show the impact of the proposed parallel models and the introduced search mechanisms. <s> BIB012 </s> Evolutionary algorithms for solving the automatic cell planning problem : a survey <s> ACP-targeted encoding. <s> Cellular network design is a major issue in mobile telecommunication systems. In this paper, a model of the problem in its full practical complexity, based on multiobjective constrained combinatorial optimization, has been investigated. We adopted the Pareto approach in order to compute a set of diversified non-dominated networks, thus removing the need for the designer to rank or weight objectives a priori. We designed and implemented a "ready-to-use" platform for radio network optimization that is flexible regarding both the modeling of the problem (adding, removing, updating new antagonist objectives and constraints) and the solution methods. It extends the "white-box" ParadisEO framework for metaheuristics applied to the resolution of mono/multi-objective Combinatorial Optimization Problems requiring both the use of advanced optimization methods and the exploitation of large-scale parallel and distributed environments. A specific coding scheme and genetic and neighborhood operators have been designed and embedded.
On the other hand, we make use of many generic features related to advanced intensification and diversification search techniques, hybridization of metaheuristics and grid computing for the distribution of the computations. They aim at improving the quality of networks and their robustness. They also allow the search to be sped up and results to be obtained in a tractable time, thus efficiently solving large instances of the problem. Using three realistic benchmarks, the computed networks and speed-ups on different parallel and/or distributed architectures show the efficiency and the scalability of hierarchical parallel hybrid models. <s> BIB013
The encoding schemes shown in this section have been designed especially to deal with ACP problems, so they do not properly fall into any of the previously defined categories.

The most widely used non-classical scheme in the EA literature encodes all the optimizable parameter settings of each BTS in the tentative solution; let us call it network encoding. This encoding is usually aimed not only at positioning the BTSs but also at dimensioning them. Figure 3 displays an example in which the BTS type, the emission power, and the tilt and azimuth angles are to be optimized. Although the emission power, tilt and azimuth are actually real-valued parameters, they are usually discretized into a rather small set of values in order to reduce the complexity of the optimization problem. This is the approach used in BIB002, BIB003, Altman et al. (2002a,b), BIB005, Jamaa et al. (2004a,b), BIB006, BIB008, BIB011, BIB012 and BIB013. The main advantage of this encoding scheme is that EAs are put to work on real solutions, and therefore problem-domain specific knowledge can be easily included in the search. The drawback is that no classical well-known operators can be used, so new specific ones have to be developed.

Other specific encodings are analysed next. With the goal of minimizing the number of BTSs required to cover a given area, BIB009 have adaptively transformed the hexagonal cell shapes typically used in cellular networks. This adaptive meshing is performed according to a traffic density map and to geometrical constraints. Then, for each cell of the network, the encoding scheme includes six vertices (two real values each) plus an attribute that indicates whether the cell is visible or not; this latter attribute is the particularity of this approach.

BIB004 have used the group encoding of BIB001 to maximize the coverage of traffic demand areas (TDAs) using as few BTSs as possible. In this group encoding, each tentative solution has two parts: the TDA part and the BTS part. In the TDA part a BTS is assigned to each TDA; the BTSs used in the TDA part are then represented in the BTS part. Specific group-oriented operators have been applied.

BIB007 have proposed a matrix encoding of size 3 × N, where N is the maximum number of BTSs. All the BTSs are labelled so that the ith column corresponds to the ith BTS. In this encoding, the three values of the ith BTS indicate whether the BTS is present or not in the network (BTS selection), the BTS height, and the BTS emission power. This encoding has many drawbacks, but no further discussion is given since the authors only present their proposal in the article, with no experimentation at all. Consequently, this article will not be considered further in this survey.

Finally, the work of BIB010 presents an encoding that mixes real and integer values, as well as a set of frequencies. This specialized encoding is required because it addresses both the BTS positioning and the frequency assignment simultaneously. A candidate solution includes, for each BTS, two real values representing its coordinates, two integer values encoding the transmitting power and the number of available channels in the BTS, and the set of channels assigned to the BTS.
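Returning to the network encoding, the sketch below encodes, for each BTS, the four settings of the Figure 3 example over small discretized domains. The domain values and type names are illustrative assumptions only, not taken from the cited works.

```python
import random
from dataclasses import dataclass

# Discretized parameter domains (illustrative values only).
BTS_TYPES = ["omni", "small_directive", "large_directive"]
POWER_DBM = [26, 30, 33, 36, 39, 43]        # emission power levels
TILTS_DEG = [0, 2, 4, 6, 8, 10]             # antenna tilt angles
AZIMUTHS_DEG = [0, 60, 120, 180, 240, 300]  # main-lobe azimuths

@dataclass
class BTSGene:
    """All optimizable settings of one BTS, as in the Figure 3 example."""
    bts_type: str
    power_dbm: int
    tilt_deg: int
    azimuth_deg: int

def random_network(n_bts):
    """A tentative solution is simply the list of per-BTS genes."""
    return [BTSGene(random.choice(BTS_TYPES),
                    random.choice(POWER_DBM),
                    random.choice(TILTS_DEG),
                    random.choice(AZIMUTHS_DEG))
            for _ in range(n_bts)]

for i, gene in enumerate(random_network(3)):
    print(f"BTS {i}: {gene}")
```

Since the genotype is a list of structured genes rather than a bit string, domain-aware operators are needed (for example, resampling one field of one gene from its discrete domain), which is precisely where problem-specific knowledge enters the search.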