3D Scanning: A Comprehensive Survey <s> C. Forest Analysis <s> AbstractMethods characterizing fine-scale fuels and plant diversity can advance understanding of plant-fire interactions across scales and help in efforts to monitor important ecosystems such as longleaf pine (Pinus palustris Mill.) forests of the southeastern United States. Here, we evaluate the utility of close-range photogrammetry for measuring fuels and plant diversity at fine scales (submeter) in a longleaf pine forest. We gathered point-intercept data of understory plants and fuels on nine 3-m2 plots at a 10-cm resolution. For these same plots, we used close-range photogrammetry to derive 3-dimensional (3D) point clouds representing understory plant height and color. Point clouds were summarized into distributional height and density metrics. We grouped 100 cm2 cells into fuel types, using cluster analysis. Comparison of photogrammetry heights with point-intercept measurements showed that photogrammetry points were weakly to moderately correlated to plant and fuel heights (r = 0.19–0.53). Mann–Whitn... <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> C. Forest Analysis <s> Abstract. In the Alps as well as in other mountain regions steep grassland is frequently affected by shallow erosion. Often small landslides or snow movements displace the vegetation together with soil and/or unconsolidated material. This results in bare earth surface patches within the grass covered slope. Close-range and remote sensing techniques are promising for both mapping and monitoring these eroded areas. This is essential for a better geomorphological process understanding, to assess past and recent developments, and to plan mitigation measures. Recent developments in image matching techniques make it feasible to produce high resolution orthophotos and digital elevation models from terrestrial oblique images. In this paper we propose to delineate the boundary of eroded areas for selected scenes of a study area, using close-range photogrammetric data. Striving for an efficient, objective and reproducible workflow for this task, we developed an approach for automated classification of the scenes into the classes grass and eroded. We propose an object-based image analysis (OBIA) workflow which consists of image segmentation and automated threshold selection for classification using the Excess Green Vegetation Index (ExG). The automated workflow is tested with ten different scenes. Compared to a manual classification, grass and eroded areas are classified with an overall accuracy between 90.7% and 95.5%, depending on the scene. The methods proved to be insensitive to differences in illumination of the scenes and greenness of the grass. The proposed workflow reduces user interaction and is transferable to other study areas. We conclude that close-range photogrammetry is a valuable low-cost tool for mapping this type of eroded areas in the field with a high level of detail and quality. In future, the output will be used as ground truth for an area-wide mapping of eroded areas in coarser resolution aerial orthophotos acquired at the same time. <s> BIB002
CRP has been widely used for analyzing and monitoring different characteristics of forests, which will be elaborated in this section. 1) Plant Diversity and Surface Fuel Structure: In BIB001 , a CRP approach was introduced and discussed for measuring the 3D structure of understory vegetation and woody debris, and for testing the utility of photogrammetric points for distinguishing and predicting understory fuels and plant diversity. The authors compared height data derived from CRP with field measurements of fuelbed depth, and evaluated the utility of photogrammetry-derived data for separating and classifying 10-cm scale fuel types, plant species, and plant types, by studying an area located in an Air Force Base (AFB) in northwestern Florida. A total of 57 different species were observed, with an average of 19 species per plot and a standard deviation of 6 species across plots. Digital imagery was processed with the photogrammetry software PhotoModeler Scanner . Using the Smart Match feature-based method, stereo pairs were created by automatically detecting and matching pixels of similar texture and color between overlapping paired images. In the statistical and classification analyses, the overall classification accuracy was computed as 100 minus the out-of-bag estimate of the error rate. Overall quantity and allocation differences were calculated using the diffeR package in R. Field-measured heights were often considerably greater than photogrammetric heights. Overall classification accuracies of the fuel type, species, and plant type classifications were 44, 39 and 44 percent, respectively. Patterns and distributions of point-intercept measurements of fuelbed depth and photogrammetric maximum height were similar, but there were discrepancies. The results indicated that CRP has the potential for yielding fine-scale measurements of understory fuels and plants. Advantages of CRP in the foregoing context include its ability to create a permanent record of understory vegetation and fuels, which would support retrospective analyses and the calibration of human interpreters. 2) Mapping Eroded Areas on Mountain Grassland: In BIB002 , the problem of erosion in mountain areas was investigated, and a method for mapping and monitoring eroded areas was proposed for selected scenes, using CRP data. In order to automatically extract the eroded and the grass-covered, i.e. non-eroded, parts of selected 3D point clouds of the scene, the authors presented an approach based on object-based image classification with automatic threshold optimization, using the Excess Green Vegetation Index (ExG). For classification, the raster-based feature ExG was used, which characterizes the segments in terms of spectral properties. The proposed method resulted in overall accuracies between 90.7% and 95.5% when comparing the automatically extracted eroded areas with manual detection.
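To make the ExG-based separation of grass from eroded soil in BIB002 concrete, a minimal sketch follows. It assumes a per-pixel threshold on chromaticity-normalized bands, whereas the original workflow operates on image segments (OBIA) with automated, per-scene threshold selection, so the function names, the array layout and the threshold value are illustrative only.

```python
import numpy as np

def excess_green(rgb):
    """Compute the Excess Green index ExG = 2g - r - b on chromaticity-normalized bands."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=-1) + 1e-9              # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2.0 * g - r - b

def classify_grass_vs_eroded(rgb, threshold=0.05):
    """Label pixels as grass (True) where ExG exceeds a threshold; eroded soil otherwise.

    The fixed threshold is a placeholder; BIB002 selects it automatically per scene.
    """
    return excess_green(rgb) > threshold

# Example on a synthetic 2x2 RGB patch: green pixels vs. brownish (bare-soil-like) pixels.
patch = np.array([[[40, 120, 35], [110, 90, 70]],
                  [[35, 130, 30], [120, 95, 80]]], dtype=np.uint8)
print(classify_grass_vs_eroded(patch))
```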
3D Scanning: A Comprehensive Survey <s> F. City Modeling <s> Abstract. 3D city model is a digital representation of the Earth's surface and it’s related objects such as building, tree, vegetation, and some manmade feature belonging to urban area. The demand of 3D city modeling is increasing day to day for various engineering and non-engineering applications. Generally three main image based approaches are using for virtual 3D city models generation. In first approach, researchers used Sketch based modeling, second method is Procedural grammar based modeling and third approach is Close range photogrammetry based modeling. Literature study shows that till date, there is no complete solution available to create complete 3D city model by using images. These image based methods also have limitations This paper gives a new approach towards image based virtual 3D city modeling by using close range photogrammetry. This approach is divided into three sections. First, data acquisition process, second is 3D data processing, and third is data combination process. In data acquisition process, a multi-camera setup developed and used for video recording of an area. Image frames created from video data. Minimum required and suitable video image frame selected for 3D processing. In second section, based on close range photogrammetric principles and computer vision techniques, 3D model of area created. In third section, this 3D model exported to adding and merging of other pieces of large area. Scaling and alignment of 3D model was done. After applying the texturing and rendering on this model, a final photo-realistic textured 3D model created. This 3D model transferred into walk-through model or in movie form. Most of the processing steps are automatic. So this method is cost effective and less laborious. Accuracy of this model is good. For this research work, study area is the campus of department of civil engineering, Indian Institute of Technology, Roorkee. This campus acts as a prototype for city. Aerial photography is restricted in many country and high resolution satellite images are costly. In this study, proposed method is based on only simple video recording of area. Thus this proposed method is suitable for 3D city modeling. Photo-realistic, scalable, geo-referenced virtual 3D city model is useful for various kinds of applications such as for planning in navigation, tourism, disasters management, transportations, municipality, urban and environmental managements, real-estate industry. Thus this study will provide a good roadmap for geomatics community to create photo-realistic virtual 3D city model by using close range photogrammetry. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> F. City Modeling <s> As the affordability, reliability and ease-of-use of Unmanned Aerial Vehicles (UAV) advances, the use of aerial surveying for cultural heritage purposes becomes a popular choice, yielding an unprecedented volume of high-resolution, geo-tagged image-sets of historical sites from above. As well, recent developments in photogrammetry technology provide a simple and cost-effective method of generating relatively accurate 3D models from 2D images. These techniques provide a set of new tools for archaeologists and cultural heritage experts to capture, store, process, share, visualise and annotate 3D models in the field. 
This paper focuses on the methodology used to document the cultural heritage site of Asinou Church in Cyprus using various state of the art techniques, such as UAV, photogrammetry and 3D printing. Hundreds of images of the Asinou Church were taken by a UAV with an attached high resolution, low cost camera. These photographic images were then used to create a digital 3D model and a 3D printer was used to create a physical model of the church. Such a methodology provides archaeologists and cultural heritage experts a simple and cost-effective method of generating relatively accurate 3D models from 2D images of cultural heritage sites. <s> BIB002
In BIB001 , a procedure was described for making large-scale 3D city models using video image sequences. Initially, the buildings were filmed in order to obtain a 360° coverage. Afterward, the video sequences were split into frames, and the camera positions for a selected number of frames were obtained by finding the fundamental matrix. The point clouds of the buildings were created with the Agisoft Photoscan Pro software BIB002 , which utilizes the SFM algorithm. The 3D model was created by generating a wireframe and adding shading and textures to create a photorealistic representation of the scene. The composition of the 3D models was carried out in the Autodesk® 3ds Max® software . The results of the foregoing study showed that 3D model creation from video sequences is simple and cost-effective.
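The frame-splitting and fundamental-matrix step of the above pipeline can be sketched with OpenCV as below; this is a hedged illustration under assumed sampling rates, matching parameters and file names, not the Photoscan-based workflow used in BIB001 .

```python
import cv2
import numpy as np

def sample_frames(video_path, step=30):
    """Split a building video into grayscale frames, keeping every `step`-th frame."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        idx += 1
    cap.release()
    return frames

def fundamental_matrix(img1, img2):
    """Estimate the fundamental matrix between two frames from matched SIFT features."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]   # Lowe's ratio test
    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    return F

# frames = sample_frames("building_360.mp4")      # hypothetical video file
# F = fundamental_matrix(frames[0], frames[1])
```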
3D Scanning: A Comprehensive Survey <s> IV. AERIAL PHOTOGRAMMETRY <s> Abstract We discuss the evolution and state-of-the-art of the use of Unmanned Aerial Systems (UAS) in the field of Photogrammetry and Remote Sensing (PaRS). UAS, Remotely-Piloted Aerial Systems, Unmanned Aerial Vehicles or simply, drones are a hot topic comprising a diverse array of aspects including technology, privacy rights, safety and regulations, and even war and peace. Modern photogrammetry and remote sensing identified the potential of UAS-sourced imagery more than thirty years ago. In the last five years, these two sister disciplines have developed technology and methods that challenge the current aeronautical regulatory framework and their own traditional acquisition and processing methods. Navety and ingenuity have combined off-the-shelf, low-cost equipment with sophisticated computer vision, robotics and geomatic engineering. The results are cm-level resolution and accuracy products that can be generated even with cameras costing a few-hundred euros. In this review article, following a brief historic background and regulatory status analysis, we review the recent unmanned aircraft, sensing, navigation, orientation and general data processing developments for UAS photogrammetry and remote sensing with emphasis on the nano-micro-mini UAS segment. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> IV. AERIAL PHOTOGRAMMETRY <s> Remote-sensing applications using the remotely piloted aerial system RPAS are becoming more frequent. RPAS is used in different contexts and in several areas, such as environmental studies, cultural heritage, civil engineering, forestry, and cartography. To process the images resulting from the RPAS, different types of image-based 3D modelling software proprietary or open source are used. MicMac is an open-source software which allows generating georeferenced information which can be manipulated or visualized under a geographical information system GIS environment. So, the integration between the MicMac procedures within a GIS software could be very useful. The main objective of this work was to create an open-source GIS application based on MicMac photogrammetric tools to obtain the orthophotographs, point clouds, and digital surface models. To test the application developed, two distinct areas were considered: one in a more natural environment Aguda beach near Porto city, Portugal and another in an urban environment in the city of Coimbra, Portugal. High-resolution data sets were obtained with a ground sampling distance GSD of approximately 4.5 cm. Shaded relief image and dense point cloud were generated. This open-source application can be automated and can create all the files required to run the functionalities from MicMac to obtain the georeferenced information, within a GIS software, bringing photogrammetric data generation to a wider user community. Moreover, integrating this application with the GIS software has several advantages like generating more georeferenced information, such as vegetation indices, or even creating the land use land cover map. Creation of shapefiles with the projection centre of the camera, the area covered by each photograph, and taking account of the number of images that appear in each location are also useful in performing certain tasks. <s> BIB002
The potential of UAVs for imagery tasks in the context of PaRS has attracted attention from numerous research communities for decades. According to BIB001 , their competitiveness lies in the fact that they combine low-cost capturing devices with advanced computer-vision and robotic systems, which leads to cm-level accuracies. RPAS-based remote sensing is used in different contexts such as environmental studies, cultural heritage, civil engineering, forestry and cartography. MicMac is a popular open-source software package which allows processing the images resulting from RPAS for generating georeferenced information, which can be manipulated or visualized under a GIS environment. In BIB002 , an open-source GIS application was proposed based on MicMac, and tested on both natural and urban environments, obtaining HR datasets. Moreover, UAV-based photogrammetry can be utilized in contexts such as landslide cuts, whose results are suitable for repair and management. The rest of the subjects related to aerial photogrammetry will be discussed from different perspectives in what follows.
3D Scanning: A Comprehensive Survey <s> A. The ISPRS Benchmark <s> For more than two decades, many efforts have been made to develop methods for extracting urban objects from data acquired by airborne sensors. In order to make the results of such algorithms more comparable, benchmarking data sets are of paramount importance. Such a data set, consisting of airborne image and laserscanner data, has been made available to the scientific community by ISPRS WGIII/4. Researchers were encouraged to submit their results of urban object detection and 3D building reconstruction, which were evaluated based on reference data. This paper presents the outcomes of the evaluation for building detection, tree detection, and 3D building reconstruction. The results achieved by different methods are compared and analysed to identify promising strategies for automatic urban object extraction from current airborne sensor data, but also common problems of state-of-the-art methods. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> A. The ISPRS Benchmark <s> Airborne high resolution oblique imagery systems and RPAS/UAVs are very promising technologies that will keep on influencing the development of geomatics in the future years closing the gap between terrestrial and classical aerial acquisitions. These two platforms are also a promising solution for National Mapping and Cartographic Agencies (NMCA) as they allow deriving complementary mapping information. Although the interest for the registration and integration of aerial and terrestrial data is constantly increasing, only limited work has been truly performed on this topic. Several investigations still need to be undertaken concerning algorithms ability for automatic co-registration, accurate point cloud generation and feature extraction from multiplatform image data. One of the biggest obstacles is the non-availability of reliable and free datasets to test and compare new algorithms and procedures. The Scientific Initiative “ISPRS benchmark for multi-platform photogrammetry”, run in collaboration with EuroSDR, aims at collecting and sharing state-of-the-art multi-sensor data (oblique airborne, UAV-based and terrestrial images) over an urban area. These datasets are used to assess different algorithms and methodologies for image orientation and dense matching. As ground truth, Terrestrial Laser Scanning (TLS), Aerial Laser Scanning (ALS) as well as topographic networks and GNSS points were acquired to compare 3D coordinates on check points (CPs) and evaluate cross sections and residuals on generated point cloud surfaces. In this paper, the acquired data, the pre-processing steps, the evaluation procedures as well as some preliminary results achieved with commercial software will be presented. <s> BIB002 </s> 3D Scanning: A Comprehensive Survey <s> A. The ISPRS Benchmark <s> Abstract. In this paper, we present a novel framework for the semantic labeling of airborne laser scanning data on a per-point basis. Our framework uses collections of spherical and cylindrical neighborhoods for deriving a multi-scale representation for each point of the point cloud. Additionally, spatial bins are used to approximate the topography of the considered scene and thus obtain normalized heights. As the derived features are related with different units and a different range of values, they are first normalized and then provided as input to a standard Random Forest classifier. 
To demonstrate the performance of our framework, we present the results achieved on two commonly used benchmark datasets, namely the Vaihingen Dataset and the GML Dataset A, and we compare the results to the ones presented in related investigations. The derived results clearly reveal that our framework excells in classifying the different classes in terms of pointwise classification and thus also represents a significant achievement for a subsequent spatial regularization. <s> BIB003
The ISPRS Benchmark on 3D Semantic Labeling BIB001 was gathered specifically for fostering research on automated analysis. However, only a few approaches have been evaluated on it, since several classes have quite similar geometric behavior, which makes classification on this database rather challenging. In BIB002 , airborne oblique images were obtained in the four cardinal and nadir directions, along with UAV images in both nadir and oblique directions, as well as terrestrial images, consisting of both convergent and redundant ones, of two test areas. The authors prepared datasets for two tasks, namely, dense image matching evaluation and image orientation evaluation. In BIB003 , a framework was presented for semantic labeling on a per-point basis, using collections of spherical and cylindrical neighborhoods, as well as spatial bins, as the basis for a multi-scale geometric representation of the surrounding of each point in the point cloud. In addition, baseline results were provided on the ISPRS database.
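A rough sketch of per-point classification from multi-scale spherical neighborhoods, in the spirit of BIB003 , is given below. The radii, the small feature set (neighbor count, height range, height above the cloud minimum) and the variable names are simplifications and assumptions, not the paper's full descriptor set or normalization scheme.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier

def multiscale_features(points, radii=(0.25, 0.5, 1.0, 2.0)):
    """Per-point features from spherical neighborhoods at several radii:
    neighbor count and height range per sphere, plus height above the
    lowest point of the cloud as a crude normalized height."""
    tree = cKDTree(points)
    columns = []
    for r in radii:
        neighborhoods = tree.query_ball_point(points, r)
        count = np.array([len(idx) for idx in neighborhoods], dtype=float)
        zrange = np.array([np.ptp(points[idx, 2]) if len(idx) > 1 else 0.0
                           for idx in neighborhoods])
        columns.extend([count, zrange])
    columns.append(points[:, 2] - points[:, 2].min())
    return np.column_stack(columns)

# Hypothetical usage with a labeled benchmark split (e.g. the Vaihingen dataset):
# clf = RandomForestClassifier(n_estimators=100)
# clf.fit(multiscale_features(train_xyz), train_labels)
# predictions = clf.predict(multiscale_features(test_xyz))
```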
3D Scanning: A Comprehensive Survey <s> B. Feature Extraction and Matching <s> We propose a new form of software transactional memory (STM) designed to support dynamic-sized data structures, and we describe a novel non-blocking implementation. The non-blocking property we consider is obstruction-freedom. Obstruction-freedom is weaker than lock-freedom; as a result, it admits substantially simpler and more efficient implementations. A novel feature of our obstruction-free STM implementation is its use of modular contention managers to ensure progress in practice. We illustrate the utility of our dynamic STM with a straightforward implementation of an obstruction-free red-black tree, thereby demonstrating a sophisticated non-blocking dynamic data structure that would be difficult to implement by other means. We also present the results of simple preliminary performance experiments that demonstrate that an "early release" feature of our STM is useful for reducing contention, and that our STM lends itself to the effective use of modular contention managers. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> B. Feature Extraction and Matching <s> Abstract The primary contribution of this paper is an efficient feature extraction and matching implementation for large images in large-scale aerial photogrammetry experiments. First, a Block-SIFT method is designed to overcome the memory limitation of SIFT for extracting and matching features from large photogrammetric images. For each pair of images, the original large image is split into blocks and the possible corresponding blocks in the other image are determined by pre-estimating the relative transformation between the two images. Because of the reduced memory requirement, features can be extracted and matched from the original images without down-sampling. Next, a red-black tree data structure is applied to create a feature relationship to reduce the search complexity when matching tie points. Meanwhile, tree key exchange and segment matching methods are proposed to match the tie points along-track and across-track. Finally, to evaluate the accuracy of the features extracted and matched from the proposed L 2 -SIFT algorithm, a bundle adjustment with parallax angle feature parametrization (ParallaxBA 1 ) is applied to obtain the Mean Square Error (MSE) of the feature reprojections, where the feature extraction and matching result is the only information used in the nonlinear optimisation system. Seven different experimental aerial photogrammetric datasets are used to demonstrate the efficiency and validity of the proposed algorithm. It is demonstrated that more than 33 million features can be extracted and matched from the Taian dataset with 737 images within 21 h using the L 2 -SIFT algorithm. In addition, the ParallaxBA involving more than 2.7 million features and 6 million image points can easily converge to an MSE of 0.03874. The C/C++ source code for the proposed algorithm is available at http://services.eng.uts.edu.au/sdhuang/research.htm . <s> BIB002
In BIB002 , an efficient feature extraction and matching implementation was presented for large images in large-scale aerial photogrammetry experiments, which was intended to overcome the memory limitations of SIFT. The red-black tree data structure BIB001 , a tree key exchange method and segment matching were used to improve the efficiency, bearing in mind that the underlying assumption of near-zero pitch/roll angles might limit the applicability in practice.
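The tiling idea behind Block-SIFT, namely extracting features block by block so that very large aerial images never have to be processed or down-sampled as a whole, can be sketched as follows. This is an assumed simplification that omits the pre-estimated relative transformation between image pairs, the red-black-tree tie-point bookkeeping and the along-/across-track matching of the original L2-SIFT algorithm.

```python
import cv2
import numpy as np

def block_sift(image, block=2000, overlap=200):
    """Extract SIFT keypoints/descriptors tile by tile to bound peak memory,
    shifting keypoint coordinates back into the full-image frame."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = [], []
    h, w = image.shape[:2]
    for y0 in range(0, h, block - overlap):
        for x0 in range(0, w, block - overlap):
            tile = image[y0:y0 + block, x0:x0 + block]
            kps, desc = sift.detectAndCompute(tile, None)
            if desc is None:
                continue
            for kp in kps:
                # Rebuild the keypoint with coordinates in the full-image frame.
                keypoints.append(cv2.KeyPoint(kp.pt[0] + x0, kp.pt[1] + y0, kp.size,
                                              kp.angle, kp.response, kp.octave, kp.class_id))
            descriptors.append(desc)
    return keypoints, (np.vstack(descriptors) if descriptors else None)
```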
3D Scanning: A Comprehensive Survey <s> D. Survey of Historical Heritage <s> Abstract. The survey and representation of pre-hispanic wall painting use to be done with traditional photography, we describe the difficulties and limitations found in this approach and we show another solution to improve the quality of this documentation. It relies on the use of photogrammetry and MicMac program. The calculated orthophotos have an accurate geometry, they are high resolution and the three-dimensional models present a high level of detail. We obtain a complete representation which satisfy the requirements of art historians and conservators to study the meanings of the paintings and their conservation state. Furthermore, as this improvement is achieved by following a particular strategy for the photo sessions and the mathematical processing on the images, it doesn't need the acquisition of additional equipment. We explain how we applied the method in the registration of a structure covered with pictorial representations that was discovered in the archaeological site of Las Higueras, Mexico. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> D. Survey of Historical Heritage <s> Abstract. Photogrammetry has seen an increase in the use of UAVs (Unmanned Aerial Vehicles) for both large and smaller scale cartography. The use of UAVs is also advantageous because it may be used for tasks requiring quick response, including in the case of the inspection and monitoring of buildings. The objective of the project is to study the acquisition and processing protocols which exist in the literature and to adapt them for UAV projects. This implies a study on the calibration of the sensors, flight planning, comparison of software solutions, data management, and analysis on the different products of a UAV project. Two historical buildings of the city of Strasbourg were used as case studies: a part of the Rohan Palace facade and the St-Pierre-le-Jeune Catholic church. In addition, a preliminary test was performed on the Josephine Pavilion. Two UAVs were used in this research; namely the Sensefly Albris and the DJI Phantom 3 Professional. The experiments have shown that the calibration parameters tend to be unstable for small sensors. Furthermore, the dense matching of images remains a particular problem to address in a close range photogrammetry project, more so in the presence of noise on the images. Data management in cases where the number of images is high is also very important. The UAV is nevertheless a suitable solution for the surveying and recording of historical buildings because it is able to take images from points of view which are normally inaccessible to classical terrestrial techniques. <s> BIB002
Investigation and observation of historical heritage sites can broadly benefit from aerial photogrammetry, which will be discussed in this section. 1) Pre-Hispanic Wall Painting: In BIB001 , the use of photogrammetry for creating a 3D reconstruction of the Las Higueras architectural structure was described. The photographic survey was made with a Canon EOS 5D Mark II camera with a resolution of 21.1 megapixels. Altogether, 964 pictures were taken, of which two thirds were used. The image processing was done using the MicMac software . The generated 3D model produced acceptable results regarding the correct survey of paintings and conservation treatment, with a resolution of 4 pixels per mm. 2) Historical Buildings of the City of Strasbourg: Both small- and large-scale cartography can be achieved through UAV photogrammetry, which is helpful for tasks demanding short response times, e.g. observing or monitoring buildings which may be inaccessible to conventional terrestrial devices BIB002 . In the latter study, based on the Sensefly Albris and the DJI Phantom 3 Professional UAVs, the related acquisition and processing techniques were studied as well, which involve calibration, flight planning and data management. The case-studies considered included two historical buildings of the city of Strasbourg, namely, a part of the Rohan Palace façade and the St-Pierre-le-Jeune Catholic church, as well as the Josephine Pavilion. In BIB002 , it was concluded that small sensors are associated with less stable calibration parameters, and that dense matching is more challenging, especially in the presence of noise on the images.
3D Scanning: A Comprehensive Survey <s> E. Street-level Modeling <s> In archaeological Cultural Heritage study 3D modelling has become a very useful process to obtain indispensable data for documentation and visualization. Nowadays the continuous request to achieve photorealistic 3D models has led to testing different techniques and methodologies to speed up both data acquisition and the data processing phase. There are many examples of surveys conducted with the use of range-based and image-based techniques, but, in the last few years, the scientific research has been increasingly moving towards automatic procedures using Computer Vision approach to reduce time during data processing. Computer Vision approach offers a great opportunity for archaeological survey since it can be very easily used by existing Computer Vision interfaces such as 3D web services and open source or low cost software. The aim of this work is to evaluate the performance offered by Computer Vision interfaces for 3D survey of archaeological ruins using some 3D web-service tools and a low cost softwar... <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> E. Street-level Modeling <s> Abstract. Among the parts of the territory requiring periodical and careful monitoring, many have steep surfaces: quarries, river basins, land-slides, dangerous mountainsides. Aerial photogrammetry based on lightweight unmanned aircraft systems (UAS) is rapidly becoming the tool of election to survey limited areas of land with a high level of detail. Aerial photogrammetry is traditionally based on vertical images and only recently the use of significantly inclined imagery has been considered. Oblique photogrammetry presents peculiar aspects and offers improved capabilities for steep surface reconstruction. Full comprehension of oblique photogrammetry still requires research efforts and the evaluation of diverse case studies. In the present paper, the focus is on the photogrammetric UAS-based survey of a part of a large sandpit. Various flight configurations are considered: ordinary linear strips, radial strips (as the scarp considered has a semi-circular shape) and curved ones; moreover, nadir looking and oblique image blocks were acquired. Around 300 control points were measured with a topographic total station. The various datasets considered are evaluated in terms of density of the extracted point cloud and in terms of the distance between the reconstructed surface and a number of check points. <s> BIB002 </s> 3D Scanning: A Comprehensive Survey <s> E. Street-level Modeling <s> The morphology of road cut slopes, such as length and high slopes, is one of the most prevalent causes of landslides and terrain stability troubles. Digital elevation models (DEMs) and orthoimages are used for land management purposes. Two flights with different orientations with respect to the target surface were planned, and four photogrammetric projects were carried out during these flights to study the image orientation effects. Orthogonal images oriented to the cut slope with only sidelaps were compared to the classical vertical orientation, with sidelapping, endlapping, and both types of overlapping simultaneously. DEM and orthoimages obtained from the orthogonal project showed smaller errors than those obtained from the other three photogrammetric projects, with the first one being much easier to manage. 
One additional flight and six photogrammetric projects were used to establish an objective criterion to locate the three ground control points for georeferencing and rectification DEMs and orthoimages. All possible sources of errors were evaluated in the DEMs and orthoimages. <s> BIB003 </s> 3D Scanning: A Comprehensive Survey <s> E. Street-level Modeling <s> Abstract. Multi-view stereo has been shown to be a viable tool for the creation of realistic 3D city models. Nevertheless, it still states significant challenges since it results in dense, but noisy and incomplete point clouds when applied to aerial images. 3D city modelling usually requires a different representation of the 3D scene than these point clouds. This paper applies a fully-automatic pipeline to generate a simplified mesh from a given dense point cloud. The mesh provides a certain level of abstraction as it only consists of relatively large planar and textured surfaces. Thus, it is possible to remove noise, outlier, as well as clutter, while maintaining a high level of accuracy. <s> BIB004
This section discusses the problem of street-level modeling on the basis of aerial photogrammetry, as one of its substantial applications. 1) Steep Surfaces: Regularly monitoring the parts of a territory that include steep surfaces, e.g. mountainsides, quarries, landslides or river basins, is amongst the important applications of aerial photogrammetry, which can be achieved with a decent level of detail using lightweight UASs; recently, higher-quality results have been obtained by taking advantage of inclined images under oblique photogrammetry frameworks, rather than relying on strictly vertical images BIB002 . In the latter study, a large sandpit was considered as a case-study, with linear, radial and curved strips as different flight configurations, where around 300 control points were measured using a topographic total station (TS). The evaluation of the acquired point clouds was performed based on their densities, as well as their distances from given checkpoints. 2) Building Abstraction: City model reconstruction is one of the applications of aerial photogrammetry and image-based 3D reconstruction. Multi-view stereo has been shown to be a viable tool for the foregoing task. However, it results in dense, but noisy and incomplete, point clouds when applied to aerial images. In BIB004 , a fully-automatic pipeline was applied in order to generate a simplified mesh from a given dense point cloud; see the sketch after this paragraph. Specifically, the authors employed the results of prior works on interpreting the estimation of a dense depth map as a labeling problem. The depth map, which is turned into a height map, serves as an intermediate step for the construction of a simplified mesh which is as close as possible to the data. Major geometric structures were maintained, while clutter and noise were removed. 3) Effects of Image Orientation and Ground Control Point Distributions: In BIB003 , the procedure of generating a 3D model of a landslide using the UAV platform Microdrones md4-20041 was described. The vehicle took photos while flying through predefined flight paths. All the photogrammetric operations were carried out with the Photomodeler Scanner V2012 BIB001 software. The DEMs and the adjusted mosaic of images were used as input data to the orthorectification process, which consisted of reprojecting the mosaic based on the morphology of the DEM, and then performing interpolation in order to fill the holes in the DOMs.
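For the building-abstraction step, the conversion of a dense point cloud into a height map, the intermediate product from which BIB004 derives the simplified mesh, can be sketched as below; the cell size, the max-height rule and the function name are assumptions for illustration, not the paper's labeling-based depth estimation.

```python
import numpy as np

def height_map(points, cell=0.5):
    """Rasterize an (N, 3) point cloud into a max-height grid (a simple height map)."""
    xy_min = points[:, :2].min(axis=0)
    cols, rows = np.ceil((points[:, :2].max(axis=0) - xy_min) / cell).astype(int) + 1
    grid = np.full((rows, cols), np.nan)
    ij = ((points[:, :2] - xy_min) / cell).astype(int)
    for (i, j), z in zip(ij, points[:, 2]):
        if np.isnan(grid[j, i]) or z > grid[j, i]:
            grid[j, i] = z                 # keep the highest point per cell
    return grid
```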
3D Scanning: A Comprehensive Survey <s> G. Determining Fault Planes <s> Chinese early aerial photo digital photographic method is fit to active fault researcher. The photographic result can meet the need of morphotectonics quantitative measurement and large scale mapping. The abundant product by using photographic technique can help the researcher to analyze fault and morphotectonics. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> G. Determining Fault Planes <s> This study aims to see the effect of non-metric oblique and vertical camera combination along with the configuration of the ground control points to improve the precision and accuracy in UAV-Photogrammetry project. The field observation method is used for data acquisition with aerial photographs and ground control points. All data are processed by digital photogrammetric process with some scenarios in camera combination and ground control point configuration. The model indicates that the value of precision and accuracy increases with the combination of oblique and vertical camera at all control point configuration. The best products of the UAV-Photogrammetry model are produced in the form of Digital Elevation Model (DEM) compared to the LiDAR DEM. Furthermore, DEM from UAV-Photogrammetry and LiDAR are used to define the fault plane by using cross-section on the model and interpretation to determine the point at the extreme height of terrain changes. The result of the defined fault planes indicate that two models do not show any significant difference. <s> BIB002
According to BIB002 , the accuracy and precision of UAV-based digital photogrammetry may be increased by combining non-metric oblique and vertical cameras, as well as by properly configuring the ground control points. In the foregoing study, the DEM produced by UAV photogrammetry was compared with a LiDAR DEM, and the fault plane was defined by extracting cross-sections from the models and interpreting the points of extreme change in terrain height; the two models did not show any significant difference. In BIB001 , a practical framework was established for early aerial photogrammetric processing in the Tanlu fault zone across East China, whose activity has led to several strong earthquakes. Digital photos were obtained by high-precision special scanners, and the elements of relative orientation of the image pairs were obtained through theoretical analysis. The digital photos were then processed using the Inpho photogrammetry software . The horizontal and vertical deformation values were the main quantitative parameters in the morphotectonics research. By processing the above aerial photos, DEMs, DOMs and large-scale topographic maps covering the areas of interest were acquired, from which special maps such as profile maps, 3D landscape maps and gradient maps were derived for the sake of analyzing faults and morphotectonics.
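The fault-plane interpretation in BIB002 rests on locating extreme height changes along DEM cross-sections; a minimal sketch of that step is shown below, with a synthetic profile and a fixed sampling spacing as assumptions.

```python
import numpy as np

def fault_candidate_along_profile(heights, spacing=1.0):
    """Return the index, position and slope of the steepest height change along
    a DEM cross-section sampled at a fixed spacing (in metres)."""
    slope = np.gradient(heights, spacing)        # first derivative: terrain slope
    k = int(np.argmax(np.abs(slope)))            # extreme height change -> candidate scarp
    return k, k * spacing, slope[k]

# Example: a synthetic profile with a sharp 2 m step at the 50 m mark.
profile = np.concatenate([np.full(50, 100.0), np.full(50, 102.0)])
print(fault_candidate_along_profile(profile))
```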
3D Scanning: A Comprehensive Survey <s> V. STRUCTURE-FROM-MOTION (SFM) PHOTOGRAMMETRY <s> This paper is a survey of the theory and methods of photogrammetric bundle adjustment, aimed at potential implementors in the computer vision community. Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal structure and viewing parameter estimates. Topics covered include: the choice of cost function and robustness; numerical optimization including sparse Newton methods, linearly convergent approximations, updating and recursive methods; gauge (datum) invariance; and quality control. The theory is developed for general robust cost functions rather than restricting attention to traditional nonlinear least squares. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> V. STRUCTURE-FROM-MOTION (SFM) PHOTOGRAMMETRY <s> Abstract High-resolution topographic surveying is traditionally associated with high capital and logistical costs, so that data acquisition is often passed on to specialist third party organisations. The high costs of data collection are, for many applications in the earth sciences, exacerbated by the remoteness and inaccessibility of many field sites, rendering cheaper, more portable surveying platforms (i.e. terrestrial laser scanning or GPS) impractical. This paper outlines a revolutionary, low-cost, user-friendly photogrammetric technique for obtaining high-resolution datasets at a range of scales, termed ‘Structure-from-Motion’ (SfM). Traditional softcopy photogrammetric methods require the 3-D location and pose of the camera(s), or the 3-D location of ground control points to be known to facilitate scene triangulation and reconstruction. In contrast, the SfM method solves the camera pose and scene geometry simultaneously and automatically, using a highly redundant bundle adjustment based on matching features in multiple overlapping, offset images. A comprehensive introduction to the technique is presented, followed by an outline of the methods used to create high-resolution digital elevation models (DEMs) from extensive photosets obtained using a consumer-grade digital camera. As an initial appraisal of the technique, an SfM-derived DEM is compared directly with a similar model obtained using terrestrial laser scanning. This intercomparison reveals that decimetre-scale vertical accuracy can be achieved using SfM even for sites with complex topography and a range of land-covers. Example applications of SfM are presented for three contrasting landforms across a range of scales including; an exposed rocky coastal cliff; a breached moraine-dam complex; and a glacially-sculpted bedrock ridge. The SfM technique represents a major advancement in the field of photogrammetry for geoscience applications. Our results and experiences indicate SfM is an inexpensive, effective, and flexible approach to capturing complex topography. <s> BIB002 </s> 3D Scanning: A Comprehensive Survey <s> V. STRUCTURE-FROM-MOTION (SFM) PHOTOGRAMMETRY <s> Abstract Covering 40% of the terrestrial surface, dryland ecosystems characteristically have distinct vegetation structures that are strongly linked to their function. Existing survey approaches cannot provide sufficiently fine-resolution data at landscape-level extents to quantify this structure appropriately. 
Using a small, unpiloted aerial system (UAS) to acquire aerial photographs and processing theses using structure-from-motion (SfM) photogrammetry, three-dimensional models were produced describing the vegetation structure of semi-arid ecosystems at seven sites across a grass–to shrub transition zone. This approach yielded ultra-fine ( 2 ) spatial resolution canopy height models over landscape-levels (10 ha), which resolved individual grass tussocks just a few cm 3 in volume. Canopy height cumulative distributions for each site illustrated ecologically-significant differences in ecosystem structure. Strong coefficients of determination (r 2 from 0.64 to 0.95) supported prediction of above-ground biomass from canopy volume. Canopy volumes, above-ground biomass and carbon stocks were shown to be sensitive to spatial changes in the structure of vegetation communities. The grain of data produced and sensitivity of this approach is invaluable to capture even subtle differences in the structure (and therefore function) of these heterogeneous ecosystems subject to rapid environmental change. The results demonstrate how products from inexpensive UAS coupled with SfM photogrammetry can produce ultra-fine grain biophysical data products, which have the potential to revolutionise scientific understanding of ecology in ecosystems with either spatially or temporally discontinuous canopy cover. <s> BIB003
HR topographic surveying based on triangulation of data acquired at known camera poses or ground control points is not only considerably costly, but also inconvenient, due to the inaccessibility of some fields of interest; these are the fundamental drawbacks of TLS, and consequently, of GPS-based surveying BIB002 . Therefore, the latter seminal work proposes the SFM algorithm for alleviating the underlying shortcomings of classical photogrammetric systems. SFM automates the aforementioned procedures by applying a highly redundant BA process BIB001 based on matching features observed in overlapping sets of images acquired with a consumer-grade digital camera, thereby achieving decimeter-level vertical accuracies on land areas with complex topographies and at various scales. The proposed method was successfully tested on an exposed rocky coastal cliff, a breached moraine-dam complex and a glacially-sculpted bedrock ridge in the original study. From an ecological perspective, one of the applications of SFM photogrammetry is in surveying the vegetation structure of drylands, and accordingly, their manner of functioning, which was performed using a small UAS over a semi-arid ecosystem in BIB003 . Ultra-fine 3D canopy height models with cm-level spatial resolutions were obtained at landscape levels. Above-ground biomass was predicted from canopy volume with rather high coefficients of determination, i.e. r² ranging from 0.64 to 0.95, which proved to be sensitive to vegetation structure, being useful for capturing changes caused by environmental conditions in spatially or temporally discontinuous canopy covers. It is worth noting that drylands constitute around 40% of the terrestrial surface.
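The biomass prediction reported in BIB003 amounts to regressing above-ground biomass on SfM-derived canopy volume and reporting the coefficient of determination r²; the sketch below reproduces only that statistical step, on synthetic numbers that are assumptions rather than data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical per-plot canopy volumes (m^3) and above-ground biomass (kg);
# the real study derived canopy volumes from SfM canopy height models.
canopy_volume = np.array([0.8, 1.5, 2.3, 3.1, 4.0, 5.2])
biomass       = np.array([0.6, 1.2, 1.9, 2.4, 3.3, 4.1])

fit = stats.linregress(canopy_volume, biomass)
r_squared = fit.rvalue ** 2                      # coefficient of determination
predicted = fit.intercept + fit.slope * canopy_volume
print(f"biomass ~ {fit.slope:.2f} * volume + {fit.intercept:.2f},  r^2 = {r_squared:.2f}")
```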
3D Scanning: A Comprehensive Survey <s> A. Heritage Monitoring <s> We describe a hardware and software system for digitizing the shape and color of large fragile objects under non-laboratory conditions. Our system employs laser triangulation rangefinders, laser time-of-flight rangefinders, digital still cameras, and a suite of software for acquiring, aligning, merging, and viewing scanned data. As a demonstration of this system, we digitized 10 statues by Michelangelo, including the well-known figure of David, two building interiors, and all 1,163 extant fragments of the Forma Urbis Romae, a giant marble map of ancient Rome. Our largest single dataset is of the David - 2 billion polygons and 7,000 color images. In this paper, we discuss the challenges we faced in building this system, the solutions we employed, and the lessons we learned. We focus in particular on the unusual design of our laser triangulation scanner and on the algorithms and software we developed for handling very large scanned models. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> A. Heritage Monitoring <s> It is believed that folding-arch is the transitional form from beam to curved arch. Guyue Bridge, built in JiaDing 6year (A.D 1213) of Southern Song Dynasty, located in Yiwu City, Zhejiang Province in China, is one of typical objective examples for this transition. It possesses high historical, scientific, artistic, cultural and social values. Facing severe environmental problems and deteriorated heritage situation, our conservation team selected 3D laser scanning as basic recording method, then acquired the precise threedimensional model. Measured the fundamental dimension and components’ sizes, we analysed its stable state. Moreover, combined with historic documents, we reasonably speculated and calculated the original sizes and important scales at the building time. These findings have significant research values as well as evidential meanings for future conservation. <s> BIB002
Heritage monitoring is a fundamental use-case of LS. The associated examples will be presented in what follows. 1) Statues of Michelangelo: In BIB001 , a system was introduced for digitizing the shape and color of large statues, with the statues of Michelangelo as case-studies. The authors proposed new methods for representing, aligning, merging, and viewing large 3D models acquired with a laser triangulation scanner. The costs of equipment shipping and the protection considerations were reported as the main underlying difficulties. 2) The Guyue Bridge: Under severe environmental conditions, in combination with historic evidence, precise 3D models obtained through LS can be used to observe, and possibly speculate about previous stages of, heritage sites. This was performed in BIB002 in order to approximate the original dimensions and scales of the Guyue Bridge, which was constructed in A.D. 1213 under the Southern Song Dynasty in Yiwu, China, as well as to capture its folding arch, i.e. the transitional form from beam to curved arch.
3D Scanning: A Comprehensive Survey <s> B. Mining <s> D laser scanning technology to a complete and high-precision reconstruction scan physical and quick access to the raw survey data, the most important feature is high precision, fast speed, approaching the prototype, has been widely used in various fields of mapping and other. In this paper, based on the three-dimensional laser scanner application elaborated on the basis of the domestic situation, and focuses on the application in the mining field, including: 3D reconstruction and measurement of open pit, headframe deformation monitoring, mining subsidence monitoring, construction land reclamation regulatory, difficult arrival region survey of coal gangue dump, landslide monitoring, deformation monitoring of underground mined area. Finally, the five aspects of the future development trend were forecasted and analyzed. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> B. Mining <s> For the mine roadway surface reconstruction based on laser scanning point cloud, the cylinder projection method and Poisson reconstruction method were adopted to build the roadway model in this paper. The two methods were introduced in detail including the three modeling parts of the cylinder projection method and the five parts of the Poisson equation reconstruction method. On this basis, two methods were applied to construct approximately real 3D roadway model by processing laser point cloud data and modeling the model of the roadway based on laser scanning point cloud automatically. Finally, the projection method was compared with Poisson reconstruction method in modeling time and modeling effect based on the actual roadway laser scanning data, in which the two methods showed their respective advantages in efficiency and precision. The methods proposed in this paper could also provide reference for the adjacent applications, such as underground tunnel modeling, etc. <s> BIB002
Different 3D LS technologies utilized for mining were discussed in BIB001 , based on scanner applications in the domestic (Chinese) context, consisting of 3D reconstruction and measurement of open pits, headframe deformation monitoring, mining subsidence monitoring, regulation of construction land reclamation, surveys of coal gangue dumps in difficult-to-reach regions, landslide monitoring and deformation monitoring of underground mined areas. The authors also discussed five aspects of possible future developments, which should be able to provide some measure of protection for the construction of digital mines. In BIB002 , two solutions were described for roadway modeling, namely the cylinder projection method and the Poisson reconstruction method, and experiments on roadway modeling based on both were carried out. The compared results of the experiments showed that the roadway model established by the Poisson reconstruction method was smoother than that of the cylinder projection method, and that the modeling time of the Poisson method was much longer than that of the cylinder projection method. Both methods could also be applied to similar tunnel engineering problems, such as subway tunnels.
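As an illustration of the Poisson route to roadway surface modeling, the sketch below uses the Open3D library's Poisson surface reconstruction; the use of Open3D, the file name and the parameter values are assumptions, since BIB002 implemented its own pipeline.

```python
import open3d as o3d

def poisson_mesh_from_scan(path, depth=9):
    """Reconstruct a surface from a scanned point cloud via Poisson reconstruction;
    `depth` controls the octree resolution (higher = finer, slower)."""
    pcd = o3d.io.read_point_cloud(path)                      # e.g. a roadway scan in .ply
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth)
    return mesh

# mesh = poisson_mesh_from_scan("roadway_scan.ply")          # hypothetical input file
# o3d.io.write_triangle_mesh("roadway_mesh.ply", mesh)
```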
3D Scanning: A Comprehensive Survey <s> C. Configuration Modeling for a Constrained Elastica Cable <s> The principle of the elastic equivalence of statically equivalent systems of load, or Saint-Venant's Principle, is given a precise mathematical formulation and proof. Counterexamples to traditional verbal statements of the principle are given, and the results are compared with previous mathematical work on the Saint-Venant principle. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> C. Configuration Modeling for a Constrained Elastica Cable <s> Abstract The paper is concerned with error bounds for iterative methods for the numerical approximation of the zeros xν of Legendre polynomials, i.e., the nodes of the Gauss–Legendre quadrature formula QnG. From these bounds, new stopping criteria are derived. It is furthermore shown, how the calculation of the weights of QnG may depend on the precision of the approximation of xν. <s> BIB002 </s> 3D Scanning: A Comprehensive Survey <s> C. Configuration Modeling for a Constrained Elastica Cable <s> For large complex electromechanical products, cable digital design has a problem of low accuracy because of its large deformable characteristics. To solve the problem, an efficacious method is proposed to do cable geometry physical modeling, simulation and test for two-end constrained spatial flexible cables. Euler angles are used as generalized parametric variables to describe cable configuration on the base of Kirchhoff elastic rod theory. Via establishment of a cylindrical coordinate system according to Saint-Venant principle using end constraint, Euler angles’ Analytical solutions of the Kirchhoff equations are gained. Based on an open source geometry kernel system and using numerical integration ideas, a cable configuration simulation platform is developed in C++ environments. Then, a cable geometry measurement bench is founded with the use of 3D laser scan technology. Finally, a cable configuration simulation and experimental example is given to verify the accuracy of the proposed model and relative algorithm, through 3D positional deviation analysis. <s> BIB003
In BIB003 , analytical solutions in the form of elliptic functions were derived to describe the spatial cable configuration in a cylindrical coordinate system. The coordinate system was established according to the Saint-Venant principle BIB001 , using the end constraints. Based on the OCC solid modeling kernel and utilizing the Gauss-Legendre quadrature formula BIB002 , a cable configuration simulation platform was developed which could provide a digital 3D cable model. Under the same given constraints as in the simulation, a real cable configuration was measured through LS.
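The numerical-integration building block mentioned above, the Gauss-Legendre quadrature formula BIB002 , can be sketched as follows using NumPy's standard nodes and weights; the integrand (a simple arc-length example) is a stand-in, not the Kirchhoff-rod integrand of BIB003 .

```python
import numpy as np

def gauss_legendre_integral(f, a, b, n=16):
    """Integrate f over [a, b] with an n-point Gauss-Legendre rule by mapping
    the standard nodes/weights from [-1, 1] onto [a, b]."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(weights * f(x))

# Example: arc length of the planar curve y = sin(x) over one span.
arc_length = gauss_legendre_integral(lambda x: np.sqrt(1 + np.cos(x) ** 2), 0.0, np.pi)
print(arc_length)      # approximately 3.8202
```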
3D Scanning: A Comprehensive Survey <s> A. Extracting Road Information <s> Abstract Accurate 3D road information is important for applications such as road maintenance and virtual 3D modeling. Mobile laser scanning (MLS) is an efficient technique for capturing dense point clouds that can be used to construct detailed road models for large areas. This paper presents a method for extracting and delineating roads from large-scale MLS point clouds. The proposed method partitions MLS point clouds into a set of consecutive “scanning lines”, which each consists of a road cross section. A moving window operator is used to filter out non-ground points line by line, and curb points are detected based on curb patterns. The detected curb points are tracked and refined so that they are both globally consistent and locally similar. To evaluate the validity of the proposed method, experiments were conducted using two types of street-scene point clouds captured by Optech’s Lynx Mobile Mapper System. The completeness, correctness, and quality of the extracted roads are over 94.42%, 91.13%, and 91.3%, respectively, which proves the proposed method is a promising solution for extracting 3D roads from MLS point clouds. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> A. Extracting Road Information <s> This paper presents a new algorithm to directly extract 3D road boundaries from mobile laser scanning (MLS) point clouds. The algorithm includes two stages: 1) non-ground point removal by a voxel-based elevation filter, and 2) 3D road surface extraction by curb-line detection based on energy minimization and graph cuts. The proposed algorithm was tested on a dataset acquired by a RIEGL VMX-450 MLS system. The results fully demonstrate the effectiveness and superiority of the proposed algorithm. <s> BIB002 </s> 3D Scanning: A Comprehensive Survey <s> A. Extracting Road Information <s> Abstract Extracting road information from point clouds obtained through mobile laser scanning (MLS) is essential for autonomous vehicle navigation, and has hence garnered a growing amount of research interest in recent years. However, the performance of such systems is seriously affected due to varying point density and noise. This paper proposes a novel three-dimensional (3D) local feature called the binary kernel descriptor (BKD) to extract road information from MLS point clouds. The BKD consists of Gaussian kernel density estimation and binarization components to encode the shape and intensity information of the 3D point clouds that are fed to a random forest classifier to extract curbs and markings on the road. These are then used to derive road information, such as the number of lanes, the lane width, and intersections. In experiments, the precision and recall of the proposed feature for the detection of curbs and road markings on an urban dataset and a highway dataset were as high as 90%, thus showing that the BKD is accurate and robust against varying point density and noise. <s> BIB003
MLS point clouds can be utilized to extract road information, examples of which will be provided in what follows. 1) 3D Local Feature BKD: Autonomous vehicle navigation, which requires obtaining road information from point clouds, is one of the applications of MLS; this can be achieved through BKDs, which consist of Gaussian kernel density estimation and binarization components BIB003 . More clearly, the curbs and markings on the road are detected by RF classifiers from the shape and intensity data present in the point clouds, and are then processed in order to extract the number of lanes, as well as their widths and intersections. However, varying point density and noise are among the issues that might affect the performance of such a system, which were tackled with a precision and recall of around 90% in the aforementioned study. 2) Semi-automated Delineation: In BIB001 , a method was proposed for the extraction and delineation of 3D roads which utilizes the GPS time in order to partition the point clouds of the MLS system into a set of sequential road cross-sections, and then filters out the non-ground points by applying a window-based filtering operator. Three types of curbs, namely, jump, point density, and slope change, were modeled and integrated into the proposed method, which detects the curb points from the filtered point clouds. Visual inspection and quantitative evaluation showed that the proposed method is effective at extracting 3D roads from MLS point clouds, even in complex urban street-scenes. The disadvantage of the foregoing method is that it has difficulties dealing with curbs whose boundaries are characterized as asphalt/soil, asphalt/vegetation, or asphalt/grassy bank. 3) Detecting Road Boundaries: In BIB002 , an algorithm was proposed for extracting road boundaries from MLS point clouds. The algorithm was tested on a point cloud dataset acquired by a RIEGL VMX-450 system [101] . In general, the algorithm performs non-ground point removal by creating a set of grids based on separating the space into voxels in the vertical direction, after which the lowest points with the highest density are selected; see the sketch after this paragraph. In the next step, the road curbs are detected using energy minimization and graph cuts, which may fail on roads with waved surfaces or roads without curbs.
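The voxel-based elevation filtering used in BIB002 to remove non-ground points can be approximated by keeping, in each vertical column, only the points close to the column's lowest elevation; the sketch below does exactly that, with the cell size and tolerance as assumed values rather than the paper's settings.

```python
import numpy as np

def ground_points_by_column(points, cell=0.5, tol=0.15):
    """Keep points within `tol` metres of the lowest point of their vertical
    column; a crude stand-in for the voxel-based elevation filter."""
    ij = np.floor((points[:, :2] - points[:, :2].min(axis=0)) / cell).astype(int)
    keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]       # flatten the 2D cell index
    ground_mask = np.zeros(len(points), dtype=bool)
    for key in np.unique(keys):
        sel = np.flatnonzero(keys == key)
        zmin = points[sel, 2].min()
        ground_mask[sel[points[sel, 2] <= zmin + tol]] = True
    return points[ground_mask]
```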
3D Scanning: A Comprehensive Survey <s> B. Street Object Recognition <s> A geometric modeling technique called Octree Encoding is presented. Arbitrary 3-D objects can be represented to any specified resolution in a hierarchical 8-ary tree structure or “octree” Objects may be concave or convex, have holes (including interior holes), consist of disjoint parts, and possess sculptured (i.e., “free-form”) surfaces. The memory required for representation and manipulation is on the order of the surface area of the object. A complexity metric is proposed based on the number of nodes in an object's tree representation. Efficient (linear time) algorithms have been developed for the Boolean operations (union, intersection and difference), geometric operations (translation, scaling and rotation), N-dimensional interference detection, and display from any point in space with hidden surfaces removed. The algorithms require neither floating-point operations, integer multiplications, nor integer divisions. In addition, many independent sets of very simple calculations are typically generated, allowing implementation over many inexpensive high-bandwidth processors operating in parallel. Real time analysis and manipulation of highly complex situations thus becomes possible. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> B. Street Object Recognition <s> We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging. <s> BIB002 </s> 3D Scanning: A Comprehensive Survey <s> B. Street Object Recognition <s> Abstract A mobile laser scanning (MLS) system allows direct collection of accurate 3D point information in unprecedented detail at highway speeds and at less than traditional survey costs, which serves the fast growing demands of transportation-related road surveying including road surface geometry and road environment. As one type of road feature in traffic management systems, road markings on paved roadways have important functions in providing guidance and information to drivers and pedestrians. This paper presents a stepwise procedure to recognize road markings from MLS point clouds. To improve computational efficiency, we first propose a curb-based method for road surface extraction. This method first partitions the raw MLS data into a set of profiles according to vehicle trajectory data, and then extracts small height jumps caused by curbs in the profiles via slope and elevation-difference thresholds. Next, points belonging to the extracted road surface are interpolated into a geo-referenced intensity image using an extended inverse-distance-weighted (IDW) approach. Finally, we dynamically segment the geo-referenced intensity image into road-marking candidates with multiple thresholds that correspond to different ranges determined by point-density appropriate normality. 
A morphological closing operation with a linear structuring element is finally used to refine the road-marking candidates by removing noise and improving completeness. This road-marking extraction algorithm is comprehensively discussed in the analysis of parameter sensitivity and overall performance. An experimental study performed on a set of road markings with ground-truth shows that the proposed algorithm provides a promising solution to the road-marking extraction from MLS data. <s> BIB003 </s> 3D Scanning: A Comprehensive Survey <s> B. Street Object Recognition <s> This paper presents a novel approach for extracting street lighting poles directly from MLS point clouds. The approach includes four stages: 1) elevation filtering to remove ground points, 2) Euclidean distance clustering to cluster points, 3) voxel-based normalized cut (Ncut) segmentation to separate overlapping objects, and 4) statistical analysis of geometric properties to extract 3D street lighting poles. A Dataset acquired by a RIEGL VMX-450 MLS system are tested with the proposed approach. The results demonstrate the efficiency and reliability of the proposed approach to extract 3D street lighting poles. <s> BIB004 </s> 3D Scanning: A Comprehensive Survey <s> B. Street Object Recognition <s> Abstract Urban road environments contain a variety of objects including different types of lamp poles and traffic signs. Its monitoring is traditionally conducted by visual inspection, which is time consuming and expensive. Mobile laser scanning (MLS) systems sample the road environment efficiently by acquiring large and accurate point clouds. This work proposes a methodology for urban road object recognition from MLS point clouds. The proposed method uses, for the first time, shape descriptors of complete objects to match repetitive objects in large point clouds. To do so, a novel 3D multi-scale shape descriptor is introduced, that is embedded in a workflow that efficiently and automatically identifies different types of lamp poles and traffic signs. The workflow starts by tiling the raw point clouds along the scanning trajectory and by identifying non-ground points. After voxelization of the non-ground points, connected voxels are clustered to form candidate objects. For automatic recognition of lamp poles and street signs, a 3D significant eigenvector based shape descriptor using voxels (SigVox) is introduced. The 3D SigVox descriptor is constructed by first subdividing the points with an octree into several levels. Next, significant eigenvectors of the points in each voxel are determined by principal component analysis (PCA) and mapped onto the appropriate triangle of a sphere approximating icosahedron. This step is repeated for different scales. By determining the similarity of 3D SigVox descriptors between candidate point clusters and training objects, street furniture is automatically identified. The feasibility and quality of the proposed method is verified on two point clouds obtained in opposite direction of a stretch of road of 4 km. 6 types of lamp pole and 4 types of road sign were selected as objects of interest. Ground truth validation showed that the overall accuracy of the ∼170 automatically recognized objects is approximately 95%. The results demonstrate that the proposed method is able to recognize street furniture in a practical scenario. Remaining difficult cases are touching objects, like a lamp pole close to a tree. <s> BIB005
On top of road information, data about other objects present in the scene can be extracted using MLS, as discussed in what follows. 1) Feature Matching: In BIB005 , an approach was proposed for urban road object recognition from MLS point clouds, based on a robust 3D multi-scale shape descriptor referred to as SigVox, which relies on eigenvector properties and recursive subdivision of each candidate point cluster using the octree algorithm BIB001 . Significant eigenvectors of the points in each voxel are determined by PCA and mapped onto the appropriate triangle of a sphere-approximating icosahedron, and this step is repeated for different scales (a simplified single-scale sketch is provided at the end of this section). The number of required levels of the SigVox descriptor depends on the complexity of the geometric shape of the selected objects of interest. The proposed approach was tested on 4 km of road, and achieved an accuracy of over 94% with respect to the ground-truth data. It provides a shape descriptor for complete objects, which allows repetitive objects to be extracted efficiently from large scene point clouds. 2) Detecting Street Lighting Poles: MLS point clouds have been utilized for detecting street lighting poles through a pipeline which consists of removing ground points from the frames returned by the RIEGL VMX-450 MLS system BIB003 by means of elevation-based filtering, clustering the remaining points according to the Euclidean distance, separating overlapping objects via voxel-based Ncut segmentation BIB002 , and performing statistical analysis of the geometric properties, so as to extract 3D representations of the lighting poles BIB004 .
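The following Python sketch illustrates, under simplifying assumptions, the core single-scale step behind a SigVox-like descriptor: per-voxel dominant eigenvectors obtained via PCA are binned over a fixed set of reference directions. The voxel size and the reference directions are placeholders; the actual descriptor maps the eigenvectors onto the triangles of a sphere-approximating icosahedron and repeats the procedure over several octree levels.

```python
import numpy as np

def dominant_directions(points, voxel_size=0.2):
    """Per-voxel dominant eigenvectors (simplified single-scale SigVox step).

    points: (N, 3) array of a candidate object (e.g. a clustered pole).
    Returns one unit vector per occupied voxel, obtained by PCA.
    """
    pts = np.asarray(points, float)
    keys = np.floor(pts / voxel_size).astype(int)
    dirs = []
    for key in np.unique(keys, axis=0):
        voxel_pts = pts[np.all(keys == key, axis=1)]
        if len(voxel_pts) < 3:
            continue
        cov = np.cov((voxel_pts - voxel_pts.mean(axis=0)).T)
        w, v = np.linalg.eigh(cov)
        dirs.append(v[:, np.argmax(w)])               # eigenvector of the largest eigenvalue
    return np.asarray(dirs)

def direction_histogram(dirs, bin_dirs):
    """Histogram of dominant directions over a fixed set of reference directions
    (the actual descriptor uses the faces of an icosahedron-based sphere)."""
    sims = np.abs(dirs @ bin_dirs.T)                  # |cos| -> sign-invariant matching
    return np.bincount(sims.argmax(axis=1), minlength=len(bin_dirs))

# Illustrative reference directions (placeholders for the icosahedron faces)
axes = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                 [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1.0]])
bin_dirs = axes / np.linalg.norm(axes, axis=1, keepdims=True)

pole = np.random.normal(0.0, 0.05, (500, 3))
pole[:, 2] = np.random.uniform(0.0, 6.0, 500)         # elongated, pole-like cluster
print(direction_histogram(dominant_directions(pole), bin_dirs))
```

Matching candidate objects against training objects then reduces to comparing such histograms (one per level), e.g. with a cosine or chi-square distance.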
3D Scanning: A Comprehensive Survey <s> 1) Byzantine Land Walls of Istanbul: <s> Advances in computer graphics hardware and algorithms, visualization, and interactive techniques for analysis offer the components for a highly integrated, efficient real-time 3D Geographic Information System. We have developed "Virtual GIS", a system with truly immersive capability for navigating and understanding complex and dynamic terrain-based databases. The system provides the means for visualizing terrain models consisting of elevation and imagery data, along with GIS raster layers, protruding features, buildings, vehicles, and other objects. We have implemented window-based and virtual reality versions and in both cases provide a direct manipulation, visual interface for accessing the GIS data. Unique terrain data structures and algorithms allow rendering of large, high resolution datasets at interactive rates. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> 1) Byzantine Land Walls of Istanbul: <s> Abstract Accelerometers have been used for field measurements of wind-induced responses of buildings. However, wind-induced response consists of a static component, i.e. a mean value, and a dynamic fluctuating component. The static component is difficult to measure by accelerometers. An RTK-GPS (Leica MC1000) has a nominal accuracy of ±1 cm +1 ppm for horizontal displacements and ±2cm +2ppm for vertical displacements with a sampling rate of 10 Hz. This study aims to demonstrate the feasibility of RTK-GPS for wind-induced response measurements and its efficiency in measuring the displacement of a full-scale tower. As the first experiment, the accuracy of Real-Time Kinematic-Global Positioning System (RTK-GPS) in measuring sinusoidal displacements was examined, using an electronic exciter. When the vibration frequency was lower than 2 Hz and the vibration amplitude was larger than 2 cm, RTK-GPS results seemed to closely follow the actual displacement. The efficiency of RTK-GPS was then demonstrated in the full-scale measurement of an actual steel tower. Based on the feasibility study of RTK-GPS for measuring wind-induced responses of buildings, the responses with amplitudes larger than 2 cm and natural frequencies lower than 2 Hz can be detected by RTK-GPS. <s> BIB002 </s> 3D Scanning: A Comprehensive Survey <s> 1) Byzantine Land Walls of Istanbul: <s> The efficient management of diseases, pests, or endangered species is an important global issue faced by agencies constrained by limited resources. The management challenge is even greater when organisms are difficult to detect. We show how to prioritize management and survey effort across time and space for networks of susceptible-infected-susceptible subpopulations. We present simple and robust rules of thumb for protecting desirable, or eradicating undesirable, subpopulations connected in typical network patterns (motifs). We further demonstrate that these rules can be generalized to larger networks when motifs are combined in more complex formations. Results show that the best location to manage or survey a pest or a disease on a network is also the best location to protect or survey an endangered species. The optimal starting point in a network is the fastest motif to manage, where line, star, island, and cluster motifs range from fast to slow. Managing the most connected node at the right time and maintaining the same management direction provide advantages over previously recommended outside-in strategies. 
When a species or disease is not detected and our belief in persistence decreases, our results recommend shifting resources toward management or surveillance of the most connected nodes. Our analytic approximation provides guidance on how long we should manage or survey networks for hard-to-detect organisms. Our rules take into account management success, dispersal, economic cost, and imperfect detection and offer managers a practical basis for managing networks relevant to many significant environmental, biosecurity, and human health issues. <s> BIB003 </s> 3D Scanning: A Comprehensive Survey <s> 1) Byzantine Land Walls of Istanbul: <s> We present GamiCAD, a gamified in-product, interactive tutorial system for first time AutoCAD users. We introduce a software event driven finite state machine to model a user's progress through a tutorial, which allows the system to provide real-time feedback and recognize success and failures. GamiCAD provides extensive real-time visual and audio feedback that has not been explored before in the context of software tutorials. We perform an empirical evaluation of GamiCAD, comparing it to an equivalent in-product tutorial system without the gamified components. In an evaluation, users using the gamified system reported higher subjective engagement levels and performed a set of testing tasks faster with a higher completion ratio. <s> BIB004 </s> 3D Scanning: A Comprehensive Survey <s> 1) Byzantine Land Walls of Istanbul: <s> Sensor miniaturisation, improved battery technology and the availability of low-cost yet advanced Unmanned Aerial Vehicles (UAV) have provided new opportunities for environmental remote sensing. The UAV provides a platform for close-range aerial photography. Detailed imagery captured from micro-UAV can produce dense point clouds using multi-view stereopsis (MVS) techniques combining photogrammetry and computer vision. This study applies MVS techniques to imagery acquired from a multi-rotor micro-UAV of a natural coastal site in southeastern Tasmania, Australia. A very dense point cloud ( < 1–3 cm point spacing) is produced in an arbitrary coordinate system using full resolution imagery, whereas other studies usually downsample the original imagery. The point cloud is sparse in areas of complex vegetation and where surfaces have a homogeneous texture. Ground control points collected with Differential Global Positioning System (DGPS) are identified and used for georeferencing via a Helmert transformation. This study compared georeferenced point clouds to a Total Station survey in order to assess and quantify their geometric accuracy. The results indicate that a georeferenced point cloud accurate to 25–40 mm can be obtained from imagery acquired from ~50 m. UAV-based image capture provides the spatial and temporal resolution required to map and monitor natural landscapes. This paper assesses the accuracy of the generated point clouds based on field survey points. Based on our key findings we conclude that sub-decimetre terrain change (in this case coastal erosion) can be monitored. <s> BIB005 </s> 3D Scanning: A Comprehensive Survey <s> 1) Byzantine Land Walls of Istanbul: <s> Abstract. Terrestrial laser scanning was conducted to document and analyse sites of geoarchaeological interest in Jordan, Egypt and Spain. In those cases, the terrestrial laser scanner LMS-Z420i from Riegl was used in combination with an accurate RTK-GPS for georeferencing of the point clouds. 
Additionally, local surveying networks were integrated by established transformations and used for indirect registration purposes. All data were integrated in a workflow that involves different software and according results. The derived data were used for the documentation of the sites by accurate plans and cross-sections. Furthermore, the 3D data were analysed for geoarchaeological research problems, such as volumetric determinations, the ceiling thickness of a cave and lighting simulations based on path tracing. The method was reliable in harsh environmental conditions, but the weight of the instrument, the measuring time and the minimum measurement distance were a drawback. However, generally an accurate documentation of the sites was possible. Overall, the integration in a 3D GIS is easily possible by the accurate georeference of the derived data. In addition, local survey results are also implemented by the established transformations. Enhanced analyses based on the derived 3D data shows promising results. <s> BIB006 </s> 3D Scanning: A Comprehensive Survey <s> 1) Byzantine Land Walls of Istanbul: <s> Abstract. Sri Dalada Maligawa – the Temple of the Sacred Tooth Relic – is one of the most important pilgrim sites in Buddhist culture. It is the main part of the UNESCO World Heritage Site Sacred City of Kandy. Since the end of the 17th century the temple has been keeping the sacred tooth of the Buddha. Until now an accurate documentation of the temple with all its rich decorations is missing. The Temple is built in an area vulnerable to environmental factors like earthquakes or monsoon rains and was the target of terrorist attacks. To help preserving this important cultural heritage a research project was carried out. Main part of the project was a 3D-documentation of the entire temple by using Terrestrial-Laser-Scanning (TLS) and the creating of CAD-Plans. In addition to the documentation of the architecture several details were taken in high resolution by Structured-Light-Scanning (SLS). All data will be part of the digital archive of the temple and were used as a base for a general site monitoring, especially to observe cracks. Next to the mere documentation a transfer of knowledge was another aim of the project. In future most of the analysis of the scan data can be done by local specialists. <s> BIB007
The performance of DSLR cameras has been compared with that of classical photogrammetric dense matching of stereo images, i.e. point clouds, in the PIXEL-PHOTO software, considering a scene consisting of the Byzantine Land Walls of Istanbul as a case-study, in . 2) The Temple of the Sacred Tooth Relic at Kandy, Sri Lanka: In BIB007 , the process of capturing the Temple of the Sacred Tooth using TLS has been presented, by means of the RIEGL VZ-400i . To aid the scanning process, the scanner was used in combination with a laptop and the software RiSCAN PRO for controlling the device on site. The color information was captured using a Nikon D700 with a 14 mm lens mounted on top of the scanner. The documentation of high-level details was performed with a hand-held structured-light scanner, namely, the Artec MHT . The gathered scans were registered and triangulated with the scanning software Artec Studio . Afterward, the surface model was denoised using the Geomagic Studio software BIB006 . The processed 3D models were used to generate drawings of buildings with AutoCAD BIB004 , and processed with the add-on PointSense Heritage from Faro/Kubit . 3) Geoarchaeological Sites in Jordan, Egypt and Spain: Geoarchaeologically interesting sites can be documented using TLS, which has been done on samples in Jordan, Egypt and Spain in BIB006 , using the LMS-Z420i from Riegl and an RTK-GPS BIB002 for georeferencing the point clouds BIB005 . Local surveying networks BIB003 were also incorporated through established transformations for the sake of registration. The resulting data were then utilized for volumetric determination, approximating the ceiling thickness of a cave and simulating lighting by means of path tracing. The method proved reliable under harsh environmental conditions, with the weight of the device, the measuring time and the minimum measurement distance as the underlying limitations. The results can be imported into a 3D GIS BIB001 . The post-processing stages from the foregoing study are schematically illustrated in Fig. 2 , which has been adapted from BIB006 .
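As an illustration of the georeferencing step mentioned above, the following sketch estimates a least-squares rigid transform from corresponding control points (e.g. scanner targets also measured with RTK-GPS). This is a simplified stand-in: the cited workflows rely on established (Helmert-type) transformations, which may additionally include a scale factor.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src control points onto dst.

    src: (N, 3) scanner-frame coordinates of targets/control points.
    dst: (N, 3) corresponding global (e.g. RTK-GPS) coordinates.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)         # cross-covariance SVD (Kabsch)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                          # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Example with three hypothetical targets measured in both frames
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 2, 0.5]])
true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
dst = src @ true_R.T + np.array([500.0, 200.0, 30.0])
R, t = rigid_transform(src, dst)
# Apply to the whole point cloud: georeferenced = cloud @ R.T + t
```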
3D Scanning: A Comprehensive Survey <s> B. Data Acquisition for Indoor Assets <s> The focus of this proposed research involves the utilization of computer-generated visualizations of proposed wind farms in an effort to provide reliable information to citizens most affected by their presence. Employing the capabilities of Google Earth and Google Sketchup, project planners and developers may offer a resource that individual citizens can manipulate from their own prospective and point of view to determine the scope and effect of projects from where they live, work, and play. Citizen participation in the planning process becomes enhanced through the deployment of computer technology made available on the Internet. Although wind farms remain the primary focus of this paper, the technology may be used in all planning processes where citizen support is required and potential opposition is mitigated by providing accurate and reliable information. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> B. Data Acquisition for Indoor Assets <s> The newly development of technology clearly shows an improvement of three-dimension (3D) data acquisition techniques. The requirements of 3D information and features have been obviously increased during past few years in many related fields. Generally, 3D visualization can provide more understanding and better analysis for making decision. The need of 3D GIS also pushed by the highly demand of 3D in geospatial related applications as well as the existing fast and accurate 3D data collection techniques. This paper focuses on the 3D data acquisition by using terrestrial laser scanning. In this study, Leica C10 terrestrial laser scanner was used to collect 3D data of the assets inside a computer laboratory. The laser scanner device is able to capture 3D point cloud data with high speed and high accuracy. A series of point clouds was produced from the laser scanner. However, more attention must be paid during the point clouds data processing, 3D modelling, and analysis of the laser scanned data. Hence, this paper will discuss about the data processing from 3D point clouds to 3D models. The processing of point cloud data divided into pre-processing (data registration and noise filter) and post-processing (3D modelling). During the process, Leica Cyclone 7.3 was used to process the point clouds and SketchUp was used to construct the 3D asset models. Afterward, the 3D asset models were exported to multipatch geometry format, which is a 3D GIS-ready format for displaying and storing 3D model in GIS environment. The final result of this study is a set of 3D asset models display in GIS-ready format since GIS can provides the best visual interpretation, planning and decision making process. This paper shows the 3D GIS data could be produced by laser scanning technology after further processing of point cloud data. <s> BIB002 </s> 3D Scanning: A Comprehensive Survey <s> B. Data Acquisition for Indoor Assets <s> Requirement of high accuracy data in surveying applications has made calibration procedure a standard routine for all surveying instruments. This is due to the assumption that all observed data are impaired with errors. Thus, this routine is also applicable to terrestrial laser scanner (TLS) to make it available for surveying purposes. There are two calibration approaches: (1) component, and (2) system calibration. 
With the intention to specifically identify the errors and accuracy of the Leica ScanStation C10 scanner, this study investigates component calibration. Three components of calibration were performed to identify the constant, scale error, accuracy of angular measurement and the effect of angular resolution for distance measurement. The first calibration has been processed using closed least square solutions and has yielded the values of constant (1.2 mm) and scale error (1.000008879). Using variance ratio test (F-Test), angles observation (horizontal and vertical) for Leica C10 scanner and Leica TM5100A theodolite have shown significance difference. This is because the accuracy of both sensors are not similar and these differences are 0.01 and 0.0075o for horizontal and vertical measurements, respectively. Investigation on the resolution setting for Leica C10 scanner has highlighted the drawback of the tilt-and-turn target. Using the highest resolution, Leica Cyclone software only able to recognize the tilt-and-turn target up to 10 m distance compare to 200 m for the black and white target. <s> BIB003
In BIB002 , the accuracy of TLS was evaluated by scanning a room, modeling it and comparing the measurement differences, which resulted in an RMSE of 2 cm. The authors used the Leica ScanStation C10 BIB003 , which is a high-accuracy, medium-range, i.e. 0.1∼300 m, laser scanner, the Leica Cyclone 7.3 point cloud processing software [123] , which is capable of aligning multiple scans and removing noise, and the SketchUp 3D modelling software BIB001 . The final models were built from the point clouds manually.
3D Scanning: A Comprehensive Survey <s> C. Urban Environment Modeling <s> In this paper we present a complete point-based pipeline for the capture, display and illustration of very large scans of archeological artifacts. This approach was developed as part of a project involving archeologists and computer graphics researchers, working on the Delphi 'Dancers Column'. We first determined the archeologists' requirements for interactive viewing and documentary illustration. To satisfy these needs we use a compact pointbased structure of the very large data, permitting interactive viewing in 3D. This helps the archeologists to examine and position the fragments. We introduce efficient construction algorithms for this structure, allowing it to be built on limited-memory platforms, such as those available on the field. We also propose a new stylized rendering approach based on an inverse cylindrical projection and 2D skydome rendering. This illustrative style has been used as a planning tool for fragment docking and as a substitute for traditional illustration in an archeological publication. Other uses of these tools are currently under way in the context of this project. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> C. Urban Environment Modeling <s> We address the problem of classifying 3D point clouds: given 3D urban street scenes gathered by a lidar sensor, we wish to assign a class label to every point. This work is a key step toward realizing applications in robots and cars, for example. In this paper, we present a novel approach to the classification of 3D urban scenes based on super-segments, which are generated from point clouds by two stages of segmentation: a clustering stage and a grouping stage. Then, six effective normal and dimension features that vary with object class are extracted at the super-segment level for training some general classifiers. We evaluate our method both quantitatively and qualitatively using the challenging Velodyne lidar data set. The results show that by only using normal and dimension features we can achieve better recognition than can be achieved with high- dimensional shape descriptors. We also evaluate the adopting of the MRF framework in our approach, but the experimental results indicate that thisbarely improved the accuracy of the classified results due to the sparse property of the super-segments. <s> BIB002 </s> 3D Scanning: A Comprehensive Survey <s> C. Urban Environment Modeling <s> In this paper we present a novel street scene modelling framework, which takes advantage of 3D point cloud captured by a high definition LiDAR laser scanner. We propose an automatic and robust approach to detect, segment and classify urban objects from point clouds hence reconstructing a comprehensive 3D urban environment model. Our system first automatically segments grounds point cloud. Then building facades will be detected by using binary range image processing. Remained point cloud will be grouped into voxels and subsequently transformed into super voxels. Local 3D features are extracted from super voxels and classified by trained boosted decision trees with semantic classes e.g. tree, pedestrian, and car. Given labeled point cloud the proposed algorithm reconstructs the realistic model in two phases. Firstly building facades will be rendered by ShadVis algorithm. In the second step we apply a novel and fast method for fitting the solid predefined template mesh models to non-building labeled point cloud. 
The proposed method is evaluated both quantitatively and qualitatively on a challenging TLS NAVTEQ True databases. <s> BIB003 </s> 3D Scanning: A Comprehensive Survey <s> C. Urban Environment Modeling <s> Terrestrial 3D laser scanning technology has provided a new method for obtaining the spatial information of urban buildings, because of its high accuracy and contactless model, can obtain large amount of 3D coordinates point cloud data of the target. This paper describes in detail the office and field work process and key technologies of the spatial information acquisition of urban buildings using the multi terrestrial 3D laser scanning technology. And the experiment results show that the point precision of the spatial information is reliable, and the 3D reconstruction model is higher accurate, which can meet the requirements of the building facade extraction and indoor area measurement. <s> BIB004
Scene modeling is a major application of LS. Street, i.e. urban, modeling by means of an HD LiDAR laser scanner was proposed in BIB003 , which enables detecting, segmenting and classifying the objects present in the resulting point cloud. Ground points are first segmented, and binary range image processing is employed for detecting building facades, whereas the remainder of the point cloud is grouped into voxels, and subsequently, super-voxels. Local 3D features extracted from the latter are classified by trained, boosted decision trees into semantic classes such as tree, pedestrian and car (a toy sketch of such a feature-based classification is given at the end of this section). Finally, the scene is reconstructed by rendering the facades using the ShadVis algorithm BIB001 , followed by fitting prescribed solid template mesh models to the classified non-building objects. The proposed method was qualitatively and quantitatively assessed on the TLS NAVTEQ True database BIB002 . Moreover, in BIB004 , a practical strategy was proposed for obtaining high-accuracy spatial data of urban buildings using terrestrial 3D LS, with a detailed description of the office and field workflows involved. The experimental results demonstrated that the point precision of the acquired spatial information is reliable, and that the reconstructed 3D models are accurate enough for building facade extraction and indoor area measurement. Using this technology can reduce the field workload, avoid climbing labor and contact measurement of each building, and thereby improve efficiency.
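As a toy illustration of the feature-based classification stage, the following sketch computes simple eigenvalue-based features per (super)voxel and trains a boosted decision tree classifier. The features, synthetic clusters and labels are illustrative assumptions rather than the cited system's actual configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def eigen_features(points):
    """Toy per-(super)voxel features: linearity, planarity, sphericity and
    height range, derived from the eigenvalues of the point covariance."""
    pts = np.asarray(points, float)
    w = np.sort(np.linalg.eigvalsh(np.cov((pts - pts.mean(0)).T)))[::-1] + 1e-12
    l1, l2, l3 = w
    return [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1, np.ptp(pts[:, 2])]

# Synthetic stand-ins for three classes (planar, scattered, elongated clusters)
rng = np.random.default_rng(0)
clusters = [rng.random((60, 3)) * np.array(scale)
            for scale in ([2, 2, 0.05], [1, 1, 1], [0.2, 0.2, 4.0])
            for _ in range(20)]
X = np.array([eigen_features(c) for c in clusters])
y = np.repeat([0, 1, 2], 20)                          # placeholder semantic labels
clf = GradientBoostingClassifier(n_estimators=100).fit(X, y)
print(clf.score(X, y))
```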
3D Scanning: A Comprehensive Survey <s> 2) Estimating the LAI: <s> Thermal lens spectroscopy has been experimentally compared to the conventional Beer's law method in situations where the sample under study possesses substantial light-scattering properties. Large errors in the measured absorbance were found with the conventional transmission-based approach. However, the thermal lens method was found to provide significant experimental immunity from the effect of light scattering on the measurement process. Both the absorbance and light-scattering properties of the samples tested were varied over a range in excess of an order of magnitude. The thermal lens method provided absorbance values for all samples tested which were as good as those obtained for samples that did not possess scattering characteristics. The results demonstrate that the immunity from light-scattering perturbations is substantial, thereby eliminating the need to match the scattering properties of standards and unknowns for accurate quantitation. The experimental results are compared to theoretical predictions and found to be in good agreement. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> 2) Estimating the LAI: <s> Abstract We propose 3D triangulations of airborne Laser Scanning (ALS) point clouds as a new approach to derive 3D canopy structures and to estimate forest canopy effective LAI (LAI e ). Computational geometry and topological connectivity were employed to filter the triangulations to yield a quasi-optimal relationship with the field measured LAI e . The optimal filtering parameters were predicted based on ALS height metrics, emulating the production of maps of LAI e and canopy volume for large areas. The LAI e from triangulations was validated with field measured LAI e and compared with a reference LAI e calculated from ALS data using logarithmic model based on Beer’s law. Canopy transmittance was estimated using All Echo Cover Index (ACI), and the mean projection of unit foliage area (β) was obtained using no-intercept regression with field measured LAI e . We investigated the influence species and season on the triangulated LAI e and demonstrated the relationship between triangulated LAI e and canopy volume. Our data is from 115 forest plots located at the southern boreal forest area in Finland and for each plot three different ALS datasets were available to apply the triangulations. The triangulation approach was found applicable for both leaf-on and leaf-off datasets after initial calibration. Results showed the Root Mean Square Errors (RMSEs) between LAI e from triangulations and field measured values agreed the most using the highest pulse density data (RMSE = 0.63, the coefficient of determination (R 2 ) = 0.53). Yet, the LAI e calculated using ACI-index agreed better with the field measured LAI e (RMSE = 0.53 and R 2 = 0.70). The best models to predict the optimal alpha value contained the ACI-index, which indicates that within-crown transmittance is accounted by the triangulation approach. The cover indices may be recommended for retrieving LAI e only, but for applications which require more sophisticated information on canopy shape and volume, such as radiative transfer models, the triangulation approach may be preferred. <s> BIB002
The LAI is one of the main metrics used for quantifying the energy exchange between land surfaces and the atmosphere. In BIB002 , an approach was proposed to estimate the forest canopy effective LAI by 3D triangulation of ALS point clouds. The benefit of this approach over the more commonly used Beer's law-based approach BIB001 is that the triangulation also produces estimates of canopy shape and volume.
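For reference, the Beer's law-based baseline amounts to computing the effective LAI from an ALS-derived canopy transmittance estimate. The sketch below assumes an echo-count gap fraction and a placeholder extinction coefficient; in practice the coefficient is calibrated against field-measured LAI values.

```python
import numpy as np

def effective_lai_beer(n_ground_echoes, n_total_echoes, k=0.5):
    """Effective LAI from ALS echoes via a Beer's-law (logarithmic) model.

    Canopy transmittance is approximated by the gap fraction
    P = n_ground / n_total (an echo-based cover index), and
    LAI_e = -ln(P) / k, where k is the extinction coefficient
    (a placeholder value here; normally calibrated, e.g. via the mean
    projection of unit foliage area against field-measured LAI_e).
    """
    p = np.clip(n_ground_echoes / n_total_echoes, 1e-6, 1.0)
    return -np.log(p) / k

print(effective_lai_beer(n_ground_echoes=180, n_total_echoes=1000))   # ~3.4 with k=0.5
```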
3D Scanning: A Comprehensive Survey <s> E. Low-cost Hand-held 3D Scanning of Architectural Elements <s> The paper presents the results of the self-calibration of three commercial terrestrial laser scanners (TLS) - Callidus 1.1, Leica HDS 3000 and Leica HDS 2500 - performed at the specially establishe ... <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> E. Low-cost Hand-held 3D Scanning of Architectural Elements <s> 3D scanning has gone a long way since its first appearance in cultural heritage digitization and modeling. In the recent years some new low cost, fast, accurate emerging technologies are flooding the market. Envisioning the massive use of these cheap and easy to use devices in the next years, it is crucial to explore the possible fields of application and to test their effectiveness in terms of easiness of 3D data collection, processing, mesh resolution and metric accuracy against the size and features of the objects. In this study we focus the attention on one emerging technology, the Structure Sensor device, in order to verify a 3D pipeline acquisition on an architectural element and its details. The methodological approach is thought to define a pipeline of 3D acquisition exploiting low cost and open source technologies and foresees the assessment of this procedure in comparison with data obtained by a Time of Flight device. <s> BIB002
In BIB002 , the feasibility of using hand-held structure sensor scanners for creating 3D meshes of architectural objects was investigated. A structure sensor is a near-IR structured-light 3D device that works well for scanning objects with volumes of less than 1 m³. To test the scanner's ability to handle larger objects, an 18th-century doorway in the Benedettini monumental complex in Catania (Sicily) was scanned with a structure sensor and a TOF laser scanner, namely, the HDS 3000 by Leica Geosystems BIB001 , the latter being used as the reference model. Due to the limited acquisition volume of the structure sensor, the doorway was scanned in 23 segments with 30% overlap, which were afterward registered together. The results showed that the mean distance from the TOF model was less than 5 mm for the details and 9.6 mm for the whole model, thereby demonstrating that hand-held structure sensors are affordable and reasonably precise solutions for digitizing cultural heritage.
3D Scanning: A Comprehensive Survey <s> F. The Kinect Sensor <s> In this paper, we describe a novel approach to create 3D miniatures of persons using a Kinect sensor and a 3D color printer. To achieve this, we acquire color and depth images while the person is rotating on a swivel chair. We represent the model with a signed distance function which is updated and visualized as the images are captured for immediate feedback. Our approach automatically fills small holes that stem from self-occlusions. To optimize the model for 3D printing, we extract a watertight but hollow shell to minimize the production costs. In extensive experiments, we evaluate the quality of the obtained models as a function of the rotation speed, the non-rigid deformations of a person during recording, the camera pose, and the resulting self-occlusions. Finally, we present a large number of reconstructions and fabricated figures to demonstrate the validity of our approach. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> F. The Kinect Sensor <s> The ability to quickly acquire 3D models is an essential capability needed in many disciplines including robotics, computer vision, geodesy, and architecture. In this paper we present a novel method for real-time camera tracking and 3D ::: reconstruction of static indoor environments using an RGB-D sensor. We show that by representing the geometry with a signed distance function (SDF), the camera pose can be efficiently estimated by directly minimizing the error of the depth images on the SDF. As the SDF contains the distances to the surface for ::: each voxel, the pose optimization can be carried out extremely fast. By iteratively estimating the camera poses and integrating the RGB-D data in the voxel grid, a detailed reconstruction of an indoor environment can be achieved. We present reconstructions of several rooms using a hand-held sensor and from onboard an autonomous quadrocopter. Our extensive evaluation on publicly ::: available benchmark data shows that our approach is more accurate and robust than the iterated closest point algorithm (ICP) used by KinectFusion, and yields often a comparable accuracy at much higher speed to feature-based bundle adjustment methods such as RGB-D SLAM for up to medium-sized scenes. <s> BIB002 </s> 3D Scanning: A Comprehensive Survey <s> F. The Kinect Sensor <s> We present a 3D scanning system for deformable objects that uses only a single Kinect sensor. Our work allows considerable amount of nonrigid deformations during scanning, and achieves high quality results without heavily constraining user or camera motion. We do not rely on any prior shape knowledge, enabling general object scanning with freeform deformations. To deal with the drift problem when nonrigidly aligning the input sequence, we automatically detect loop closures, distribute the alignment error over the loop, and finally use a bundle adjustment algorithm to optimize for the latent 3D shape and nonrigid deformation parameters simultaneously. We demonstrate high quality scanning results in some challenging sequences, comparing with state of art nonrigid techniques, as well as ground truth data. <s> BIB003
The Kinect sensor has greatly reduced the computational and financial costs of making 3D models of objects and persons, as discussed in what follows. 1) Objects: In BIB003 , an algorithm was developed for making 3D scans of deformable objects and people with a single Kinect sensor , without requiring prior shape knowledge or having the subject stay still. Loop closures are detected in order to handle severe drift, the alignment error is distributed over the loop, and a dense nonrigid BA is performed to simultaneously optimize the latent 3D shape and the nonrigid deformation parameters. The final result is a unified 3D model that does not require an initial template. The method is susceptible to complex changing topologies in the scene, and the computational cost is relatively high: with an 8-core 3.0 GHz Intel Xeon CPU and 64 GB of memory, a sequence with 400 frames takes 9∼10 hours. 2) Persons: 3D scanning and printing procedures can be utilized for creating miniatures of persons, which has been handled by means of a Kinect sensor and a color 3D printer in BIB001 , where color and depth frames are acquired while the person is rotating on a swivel chair. The model is created, updated and visualized in real-time by adopting an SDF BIB002 , and possible holes caused by self-occlusions are automatically filled. In order to minimize the 3D printing costs, a watertight but hollow shell is then extracted, where the quality of the resulting miniature depends on numerous factors, including the camera pose and the consequent self-occlusions, the non-rigid deformations of the subject's body and the rotation speed.
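The following sketch outlines, under simplifying assumptions, one KinectFusion-style SDF integration step of a single depth frame into a voxel grid. The camera intrinsics, pose and truncation distance are placeholders, and the full systems additionally estimate the camera pose per frame and, in the deformable case, handle non-rigid motion.

```python
import numpy as np

def integrate_depth(sdf, weights, voxel_centers, depth, K, T_cam, trunc=0.03):
    """One truncated-SDF update from a single depth frame.

    sdf, weights: flat arrays, one value per voxel; voxel_centers: (N, 3) world coords.
    K: 3x3 camera intrinsics, T_cam: 4x4 world-to-camera pose, depth: HxW metres.
    Each voxel keeps a weighted running average of the truncated signed distance.
    """
    h, w = depth.shape
    pts_cam = (T_cam[:3, :3] @ voxel_centers.T + T_cam[:3, 3:4]).T    # world -> camera
    z = pts_cam[:, 2]
    uv = (K @ pts_cam.T).T[:, :2] / np.maximum(z, 1e-6)[:, None]      # perspective projection
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d_obs = depth[np.clip(v, 0, h - 1), np.clip(u, 0, w - 1)]
    raw = d_obs - z                                                   # signed distance along the ray
    sd = np.clip(raw, -trunc, trunc)
    upd = valid & (d_obs > 0) & (raw > -trunc)                        # skip heavily occluded voxels
    weights[upd] += 1.0
    sdf[upd] += (sd[upd] - sdf[upd]) / weights[upd]                   # running average
    return sdf, weights

# Looping this over all frames (with estimated poses) yields the fused model;
# a mesh can then be extracted, e.g. via marching cubes.
```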
3D Scanning: A Comprehensive Survey <s> G. Polarization and Phase shifting <s> Translucent objects pose a difficult problem for traditional structured light 3D scanning techniques. Subsurface scattering corrupts the range estimation in two ways: by drastically reducing the signal-to-noise ratio and by shifting the intensity peak beneath the surface to a point which does not coincide with the point of incidence. In this paper we analyze and compare two descattering methods in order to obtain reliable 3D coordinates for translucent objects. By using polarization-difference imaging, subsurface scattering can be filtered out because multiple scattering randomizes the polarization direction of light while the surface reflectance partially keeps the polarization direction of the illumination. The descattered reflectance can be used for reliable 3D reconstruction using traditional optical 3D scanning techniques, such as structured light. Phase-shifting is another effective descattering technique if the frequency of the projected pattern is sufficiently high. We demonstrate the performance of these two techniques and the combination of them on scanning real-world translucent objects. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> G. Polarization and Phase shifting <s> 3D printing is a new technology in antenna manufacturing. In this paper, a wideband 45 degree polarization antenna array is designed and manufactured by metallic 3d printing technology. With the employment of a bulkhead between two ridges, the antenna dimension in the scanning plane is reduced, thus a ±30°scanning capability is achieved. The operating bandwidth of the antenna covers X and Ku bands with 2 octave bandwidth. <s> BIB002
Polarization and phase-shifting approaches have been utilized to a considerable extent for developing 3D scanners, examples of which are provided in this section. 1) Translucent Objects: Performing 3D scanning on translucent objects is considered a challenging and error-prone task, as subsurface scattering both reduces the signal-to-noise ratio and shifts the intensity peak beneath the surface of the object. In BIB001 , phase shifting and polarization filtering were tested as descattering techniques, and then combined (both ideas are sketched at the end of this section). It should be noted, however, that the algorithm was only tested on homogeneous, i.e. single-color, single-material, objects. 2) A Wideband Antenna: In BIB002 , the design of a wideband 45° polarized electrical scanning antenna array manufactured by metallic 3D printing was proposed. Ridged horns were used as the wideband array elements. To avoid increasing the element spacing in the scanning plane, an 'S'-shaped ridged horn with a bulkhead between the two ridges was designed, which makes the array suitable for ±30° scanning. The test of the proposed antenna array resulted in a measured loss of less than 0.3 dB.
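The two descattering ingredients discussed above can be summarized by the following sketch: a standard four-step phase-shifting phase recovery and a polarization-difference image. The cited study combines them and uses sufficiently high-frequency patterns, so this is only an illustrative outline.

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Wrapped phase from four sinusoidal patterns shifted by 90 degrees each
    (standard four-step phase shifting): phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(i4 - i2, i1 - i3)

def polarization_difference(i_parallel, i_perpendicular):
    """Descattered intensity via polarization-difference imaging: surface
    reflection partially keeps the illumination polarization, while multiple
    subsurface scattering randomizes it, so the difference suppresses the latter."""
    return np.clip(i_parallel - i_perpendicular, 0.0, None)

# The descattered images can then be fed to the usual structured-light
# triangulation to obtain 3D coordinates of translucent surfaces.
```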
3D Scanning: A Comprehensive Survey <s> D. Building and Urban-area Modeling <s> The importance of landscape and heritage recording and documentation with optical remote sensing sensors is well recognized at international level. The continuous development of new sensors, data capture methodologies and multi-resolution 3D representations, contributes significantly to the digital 3D documentation, mapping, conservation and representation of landscapes and heritages and to the growth of research in this field. This article reviews the actual optical 3D measurement sensors and 3D modeling techniques, with their limitations and potentialities, requirements and specifications. Examples of 3D surveying and modeling of heritage sites and objects are also shown throughout the paper. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> D. Building and Urban-area Modeling <s> Structure from motion (SFM) algorithms are known for their ability to reconstruct a sparse point cloud of scenes that were imaged by a series of overlapping photographs. When complemented by stereo-matching algorithms, detailed three-dimensional models can be built from such photograph collections in a fully automated way. Since 2010, a computer vision software package called PhotoScan has been available from the Russian manufacturer AgiSoft LLC. Even though the programme has a straightforward and simple interface, state-of-the-art SFM and other computer vision algorithms are implemented that enable PhotoScan to generate very dense and accurate three-dimensional meshes of various (archaeological) settings captured in two-dimensional imagery. Using a mixture of archaeological case studies, it will be shown that this low-cost application produces excellent results from aerial imagery in a minimum of time with almost no effort needed. Because of its solid and uncomplicated operation, it looks like this application could become an essential part of the aerial archaeologist's toolkit. Copyright © 2011 John Wiley & Sons, Ltd. <s> BIB002 </s> 3D Scanning: A Comprehensive Survey <s> D. Building and Urban-area Modeling <s> A digital surface model DSM extracted from stereoscopic aerial images, acquired in March 2000, is compared with a DSM derived from airborne light detection and ranging lidar data collected in July 2009. Three densely built-up study areas in the city centre of Ghent, Belgium, are selected, each covering approximately 0.4 km2. The surface models, generated from the two different 3D acquisition methods, are compared qualitatively and quantitatively as to what extent they are suitable in modelling an urban environment, in particular for the 3D reconstruction of buildings. Then the data sets, which are acquired at two different epochs t 1 and t 2, are investigated as to what extent 3D building changes can be detected and modelled over the time interval. A difference model, generated by pixel-wise subtracting of both DSMs, indicates changes in elevation. Filters are proposed to differentiate ‘real’ building changes from false alarms provoked by model noise, outliers, vegetation, etc. A final 3D building change model maps all destructed and newly constructed buildings within the time interval t 2 – t 1. Based on the change model, the surface and volume of the building changes can be quantified. <s> BIB003 </s> 3D Scanning: A Comprehensive Survey <s> D. Building and Urban-area Modeling <s> Abstract. 
Terrestrial photogrammetry is an accessible method of 3D digital modelling, and can be done with low-cost consumer grade equipment. Globally there are many undocumented buildings, particularly in the developing world, that could benefit from 3D modelling for documentation, redesign or restoration. Areas with buildings at risk of destruction by natural disaster or war could especially benefit. This study considers a range of variables that affect the quality of photogrammetric results. Different point clouds of the same building are produced with different variables, and they are systematically tested to see how the output was affected. This is done by geometrically comparing them to a laser scanned point cloud of the same building. It finally considers how best results can be achieved for different applications, how to mitigate negative effects, and the limits of this technique. <s> BIB004
As demonstrated in the previous sections, buildings and urban areas can be scanned in 3D using a variety of methodologies. Some of the studies comparing the related approaches in terms of their performance are discussed in this section. 1) Heritage Sites: In BIB001 , a review of a diverse range of optical 3D measurement and modeling devices, techniques and algorithms available at the time was provided, which can serve as a reference for assessing the progress made during the last few years, by comparing the state-of-the-art at the time the article was written with the current one in terms of the underlying properties, requirements, capabilities and limitations. 2) A Comparison of Aerial Photogrammetry and LiDAR: DSMs derived from stereoscopic aerial images constitute a means of change detection or 3D reconstruction of urban areas. In BIB003 , they have been qualitatively and quantitatively compared with airborne LiDAR ones, where three densely built-up areas in the city center of Ghent, Belgium, each covering around 0.4 km², were considered as case-studies. The analysis relied on pixel-wise comparison of the models, where real building changes were distinguished from false alarms caused by noise, vegetation or outliers. The newly constructed or demolished buildings were detected based on the difference models, and their surfaces and volumes were quantified as well. 3) Terrestrial Photogrammetry for Complex Buildings: In BIB004 , different software packages for photogrammetric 3D modeling, namely, Agisoft PhotoScan BIB002 , MicMac , Bentley ContextCapture [152] and VisualSFM , were compared in terms of modeling buildings, with the resulting models being evaluated against laser scanning using the C2C distance as the criterion. From the selected buildings, 150 photos were taken. PhotoScan and ContextCapture returned results of accuracy comparable with that of TLS, with PhotoScan overall leading to a better representation of the geometry. MicMac demonstrated an accurate representation of the geometry as well, but appeared noisy, with some gaps. VisualSFM produced the noisiest results, with a weak geometric performance.
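The C2C criterion used in the latter comparison can be computed, in its simplest nearest-neighbour form, as sketched below; the synthetic clouds are placeholders, and more refined variants fit a local surface model to the reference before measuring the distance.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(eval_cloud, ref_cloud):
    """Nearest-neighbour cloud-to-cloud (C2C) distances from each evaluated point
    (e.g. a photogrammetric model) to a reference cloud (e.g. TLS). Mean, RMS and
    quantiles of the returned distances summarize the agreement."""
    d, _ = cKDTree(ref_cloud).query(eval_cloud, k=1)
    return d

ref = np.random.rand(10000, 3)                                # synthetic reference cloud
ev = ref[:5000] + np.random.normal(0.0, 0.005, (5000, 3))     # noisy evaluated subset
d = cloud_to_cloud(ev, ref)
print(f"mean={d.mean():.4f} m, RMS={np.sqrt((d ** 2).mean()):.4f} m")
```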
3D Scanning: A Comprehensive Survey <s> F. Forest or Vegetated-area Analysis <s> Abstract This article juxtaposes results from an unmanned aerial vehicle (UAV) and a terrestrial laser scanning (TLS) survey conducted to determine land relief. The determination of terrain relief is a task that requires precision in order to, for example, map natural and anthropogenic uplifts and subsidences of the land surface. One of the problems encountered when using either method to determine relief is the impact of any vegetation covering the given site on the determination of the height of the site's surface. In the discussed case, the site was covered mostly in low vegetation (grass). In one part, it had been mowed, whereas in the other it was 30–40 cm high. An attempt was made to filter point clouds in such a way as to leave only those points that represented the land surface and to eliminate those whose height was substantially affected by the surrounding vegetation. The reference land surface was determined from dense measurements obtained by means of a tacheometer and a rod-mounted reflector. This method ensures that the impact of vegetation is minimized. A comparison of the obtained accuracy levels, costs and effort related to each method leads to the conclusion that it is more efficient to use UAV than to use TLS for dense land relief modeling. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> F. Forest or Vegetated-area Analysis <s> Abstract Site productivity is essential information for sustainable forest management and site index (SI) is the most common quantitative measure of it. The SI is usually determined for individual tree species based on tree height and the age of the 100 largest trees per hectare according to stem diameter. The present study aimed to demonstrate and validate a methodology for the determination of SI using remotely sensed data, in particular fused airborne laser scanning (ALS) and airborne hyperspectral data in a forest site in Norway. The applied approach was based on individual tree crown (ITC) delineation: tree species, tree height, diameter at breast height (DBH), and age were modelled and predicted at ITC level using 10-fold cross validation. Four dominant ITCs per 400 m 2 plot were selected as input to predict SI at plot level for Norway spruce ( Picea abies (L.) Karst.) and Scots pine ( Pinus sylvestris L.). We applied an experimental setup with different subsets of dominant ITCs with different combinations of attributes (predicted or field-derived) for SI predictions. The results revealed that the selection of the dominant ITCs based on the largest DBH independent of tree species, predicted the SI with similar accuracy as ITCs matched with field-derived dominant trees (RMSE: 27.6% vs 23.3%). The SI accuracies were at the same level when dominant species were determined from the remotely sensed or field data (RMSE: 27.6% vs 27.8%). However, when the predicted tree age was used the SI accuracy decreased compared to field-derived age (RMSE: 27.6% vs 7.6%). In general, SI was overpredicted for both tree species in the mature forest, while there was an underprediction in the young forest. In conclusion, the proposed approach for SI determination based on ITC delineation and a combination of ALS and hyperspectral data is an efficient and stable procedure, which has the potential to predict SI in forest areas at various spatial scales and additionally to improve existing SI maps in Norway. <s> BIB002
Forests and vegetated areas can be 3D-scanned using various approaches, which have been compared in the studies reviewed in what follows. 1) A Comparison of Low-altitude UAV Photogrammetry with TLS for Terrain Covered in Low Vegetation: The results of a UAV photogrammetric survey and a TLS survey conducted to determine land relief have been juxtaposed in BIB001 ; this task requires rather high precision, e.g. for mapping natural and anthropogenic uplifts and subsidences of the land surface. The effect of the vegetation covering the site on the determination of the surface height has been reported as one of the underlying challenges, tackling which demands filtering the point clouds so as to exclude the points whose heights are severely affected by the surrounding vegetation (a minimal filtering sketch is provided at the end of this section). In BIB001 , the reference land surface was obtained from dense measurements captured by a tacheometer and a rod-mounted reflector, which minimizes the impact of vegetation. Comparing the obtained accuracy levels, costs and effort led to the conclusion that UAV photogrammetry is more efficient than TLS for dense land relief modeling. 2) Predicting the SI in Boreal Forests Using ALS and Hyper-spectral Data: In sustainable forest management, the SI is the most common quantitative measure of site productivity, which is usually determined for individual tree species based on tree height and the age of the largest trees. In BIB002 , a method has been proposed to determine the SI using RS data, in particular, by fusing ALS and airborne hyper-spectral data based on ITC delineation, i.e. the tree species, tree height, DBH and age were modeled and predicted at the ITC level. Four dominant ITCs per 400 m² plot were then selected as inputs in order to predict the SI at the plot level.
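As a minimal illustration of the vegetation-filtering idea referred to in the first study, the sketch below keeps only the lowest point per horizontal grid cell. The cell size is a placeholder, and operational ground filters (e.g. progressive densification approaches) are considerably more robust than this crude rule.

```python
import numpy as np

def lowest_point_filter(points, cell=0.5):
    """Keep the lowest point per grid cell as a crude ground filter, so that
    grass-affected points are discarded before building a relief model.

    points: (N, 3) array; cell: horizontal grid size in metres (tunable placeholder).
    """
    pts = np.asarray(points, float)
    keys = np.floor(pts[:, :2] / cell).astype(int)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    ground_idx = []
    for c in range(inverse.max() + 1):
        idx = np.where(inverse == c)[0]
        ground_idx.append(idx[np.argmin(pts[idx, 2])])    # lowest point in this cell
    return pts[np.array(ground_idx)]
```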
3D Scanning: A Comprehensive Survey <s> G. Digital Elevation Model (DEM) based Comparison <s> Precise ground coordinates in the feature extraction from raster data is a key point for their integration in geographic information systems. A method and the results of the extraction of planimetric and altimetric features from digital stereo SPOT-PLA data in stereoscopic mode are presented. The method uses a photogrammetric approach. The stereo restitution was done with a digital video plotter using low-cost hardware (PC), and comparisons of the results with digital topographic features (precision of three metres in planimetry and five metres in altimetry) were done in the ARC/INFO environment. Results from a SPOT stereopair (B/H = 0.74) in the Rocky Mountains (Canada) showed a planimetric accuracy of 12 metres with 90 percent confidence for well identifiable features and an altimetric accuracy for a DEM of 30 metres with 90 percent confidence. Recent improvements in the system should increase the accuracy to 10 metres and 15 to 20 metres with 90 percent confidence for planimetry and altimetry, respectively <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> G. Digital Elevation Model (DEM) based Comparison <s> This paper presents a method to detect and construct a 3D geometric model of an urban area with complex buildings using aerial LIDAR (Light Detection and Ranging) data. The LIDAR data collected from a nadir direction is a point cloud containing surface samples of not only the building roofs and terrain but also undesirable clutter from trees, cars, etc. The main contribution of this work is the automatic recognition and estimation of simple parametric shapes that can be combined to model very complex buildings from aerial LIDAR data. The main components of the detection and modeling algorithms are (i) Segmentation of roof and terrain points. (ii) Roof topology Inference. We introduce the concept of a roof-topology graph to represent the relationships between the various planar patches of a complex roof structure. (iii) Parametric roof composition. Simple parametric roof shapes that can be combined to create a complex roof structure of a building are recognized by searching for sub-graphs in its roof-topology graph. (iv) Terrain Modeling. The terrain is identified and modeled as a triangulated mesh. Finally, we provide experimental results that demonstrate the validity of our approach for rapid and automatic building detection and geometric modeling with real LIDAR data. We are able to model cities and other urban areas at the rate of about 10 minutes per sq. mile on a low-end PC. <s> BIB002 </s> 3D Scanning: A Comprehensive Survey <s> G. Digital Elevation Model (DEM) based Comparison <s> Abstract. TanDEM-X mission has been acquiring InSAR data to produce high resolution global DEM with greater vertical accuracy since 2010. In this study, TanDEM-X CoSSC data were processed to produce DEMs at 6 m spatial resolution for two test areas of India. The generated DEMs were compared with DEMs available from airborne LiDAR, photogrammetry, SRTM and ICESat elevation point data. The first test site is in Bihar state of India with almost flat terrain and sparse vegetation cover and the second test site is around Godavari river in Andhra Pradesh (A.P.) state of India with flat to moderate hilly terrain. The quality of the DEMs in these two test sites has been specified in terms of most widely used accuracy measures viz. mean, standard deviation, skew and RMSE. 
The TanDEM-X DEM over Bihar test area gives 5.0 m RMSE by taking airborne LiDAR data as reference. With ICESat elevation data available at 9000 point locations, RMSE of 5.9 m is obtained. Similarly, TanDEM-X DEM for Godavari area was compared with high resolution aerial photogrammetric DEM and SRTM DEM and found RMSE of 5.3 m and 7.5 m respectively. When compared with ICESat elevation data at several point location and also the same point locations of photogrammetric DEM and SRTM, the RMS errors are 4.1 m, 3.5 m and 4.3 m respectively. DEMs were also compared for open-pit coal mining area where elevation changes from -147 m to 189 m. X- and Y-profiles of all DEMs were also compared to see their trend and differences. <s> BIB003 </s> 3D Scanning: A Comprehensive Survey <s> G. Digital Elevation Model (DEM) based Comparison <s> This paper compares two types of digital terrain models (DTMs) with ground elevation measures collected through field work in a dense forest area on the island of Tenerife (Canary Islands, Spain). The first was an existing DTM derived from altimetric features obtained by manual photogrammetric restitution. The second DTM was computed from aerial LiDAR data with a nadir density of 0.8 points·m−2. Both DTMs have a pixel size of 5 m. The field work consisted of measuring three elevation profiles by land surveying techniques using a total station survey and taking into account different vegetation covers. The analysis of the profiles by means of nonparametric techniques showed an accuracy at the 95th percentile between 0.54 m and 24.26 m for the photogrammetry-derived DTM and between 0.22 m and 3.20 m for the LiDAR-derived DTM. Plotting the elevation profiles allowed for the visual detection of locations where the models failed. The LiDAR data were able to reflect more accurately the true ground surface in ar... <s> BIB004 </s> 3D Scanning: A Comprehensive Survey <s> G. Digital Elevation Model (DEM) based Comparison <s> Many digital elevation models (DEMs) now exist, enabling quantitative assessments of topographic change through DEM subtraction. However, many of these DEMs are inherently different – their source data, preprocessing, and interpolation procedures introduce varying amounts of error, which are partially correlated with the local topographic situation (e.g. slope, aspect, and elevation of any given location). Understanding of these differences and potential errors is necessary prior to change detection analysis in order to incorporate the inherent uncertainties into the analysis of topographic change. This study evaluated available DEMs in the Mud 7.5′ United States Geological Survey (USGS) topographic quadrangle in West Virginia, USA. We compare DEMs derived from the USGS digital line graphs (DLGs), the Shuttle Radar Topography Mission (SRTM), a statewide photogrammetric DEM, and the Advanced Spaceborne Thermal Emission and Reflection (ASTER) Global DEM v2, to a LiDAR DEM. Using the LiDAR data as a referenc... <s> BIB005
DEMs can be obtained utilizing a variety of approaches. Over the years, many such models have been proposed, which differ greatly in terms of collection procedure, resolution, and accuracy. A few sample studies that have compared such DEMs are discussed in this section. 1) Comparison of Regional- and Global-scale DEMs with a LiDAR DEM: The study reported in BIB005 compares several regional- and global-scale DEMs with a high-accuracy LiDAR DEM, in order to quantitatively and qualitatively assess their differences in the rugged topography of the coalfields of southern West Virginia, USA. Using the LiDAR DEM as the reference, they concluded that the SRTM DEMs and GDEMs describe a topographic surface that differs substantially from the one modeled by the LiDAR DEM. 2) Quality Assessment of TanDEM-X Using Airborne LiDAR and Photogrammetry: In BIB003 , a survey was provided of DEMs generated using TanDEM-X, the twin of TSX, a German Earth observation satellite using SAR technology, where interferometric data are acquired in bistatic mode. Specifically, TanDEM-X was evaluated in comparison with other mainstream DEMs such as LiDAR. The study concluded that TanDEM-X raw DEMs are comparable to LiDAR ones, with RMSEs of around 5 m. 3) A Comparison of LiDAR and Photogrammetry in Tenerife Island: DTMs representing ground elevations can be obtained through manual photogrammetric restitution of altimetric features BIB001 or aerial LiDAR BIB002 , which have been compared on the basis of various vegetation covers on the island of Tenerife, Canary Islands, Spain, in BIB004 , both with pixel sizes of 5 m, the latter possessing a nadir density of 0.8 points·m⁻². The foregoing study analyzed three elevation profiles, measured with a TS, through nonparametric methods, which led to accuracies at the 95th percentile of between 0.54 m and 24.26 m for the photogrammetry-based approach and between 0.22 m and 3.20 m for the LiDAR-based one, i.e. the latter resulted in more precise representations of the actual surface, especially in the parts invisible to photogrammetry, e.g. Canarian pine forests.
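The accuracy measures quoted above (mean, standard deviation, skew and RMSE of the elevation differences) are straightforward to reproduce once the DEMs are co-registered on a common grid. The following Python/NumPy sketch, using synthetic arrays in place of real rasters, shows how such per-pixel error statistics against a LiDAR reference might be computed; it illustrates the metrics only, not the processing chains of the studies above.

```python
import numpy as np
from scipy.stats import skew

def dem_error_stats(dem_test: np.ndarray, dem_ref: np.ndarray) -> dict:
    """Per-pixel vertical error statistics of a test DEM against a co-registered
    reference DEM (e.g., from LiDAR); NaNs mark voids and are ignored."""
    diff = (dem_test - dem_ref).ravel()
    diff = diff[~np.isnan(diff)]
    return {
        "mean": float(np.mean(diff)),
        "std": float(np.std(diff)),
        "skew": float(skew(diff)),
        "rmse": float(np.sqrt(np.mean(diff ** 2))),
    }

# Toy usage with synthetic 100 x 100 surfaces (elevations in metres).
rng = np.random.default_rng(0)
reference = rng.uniform(50, 150, size=(100, 100))          # stands in for a LiDAR DEM
test = reference + rng.normal(1.0, 5.0, size=(100, 100))   # a biased, noisier DEM
print(dem_error_stats(test, reference))
```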
3D Scanning: A Comprehensive Survey <s> 1) Metric Documentation of Cham Towers in Vietnam Using LS and PSP: 3D reconstructions of Cham towers in <s> Topology is one of the mechanisms to describe relationships between spatial objects. Thus, it is the basis for many spatial operations. Models utilizing the topological properties of spatial objects are usually called topological models, and are considered by many researchers as the best suited for complex spatial analysis (i.e., the shortest path search). A number of topological models for two-dimensional and 2.5D spatial objects have been implemented (or are under consideration) by GIS and DBMS vendors. However, when we move to one more dimension (i.e., three-dimensions), the complexity of the relationships increases, and this requires new approaches, rules and representations. This paper aims to give an overview of the 3D topological models presented in the literature, and to discuss generic issues related to 3D modeling. The paper also considers models in object-oriented (OO) environments. Finally, future trends for research and development in this area are highlighted. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> 1) Metric Documentation of Cham Towers in Vietnam Using LS and PSP: 3D reconstructions of Cham towers in <s> Currently the second, or for some manufacturers even the third, generation of terrestrial laser scanning systems is available on the market. Although the new generation of terrestrial 3D laser scanning offers several new (geodetic) features and better performance, it is still essential to test the accuracy behaviour of the new systems for optimised use in each application. As a continuation of previous published investigations the Department Geomatics of the HafenCity University Hamburg (HCU Hamburg) carried out comparative investigations into the accuracy behaviour of the new generation of terrestrial laser scanning systems (Trimble GX and Leica ScanStation using time-of-flight method, Z+F IMAGER 5006 and Faro LS880 HE using phase difference method). The results of the following tests are presented and discussed in this paper: derived distances from point clouds of a 3D test field for accuracy evaluation of 3D laser scanning systems, accuracy tests of distance measurements in comparison to reference, accuracy tests of inclination compensation, influence of the laser beams angle of incidence on 3D accuracy, investigations into scanning noise and investigations into the influence of object colour on distance measurements. <s> BIB002 </s> 3D Scanning: A Comprehensive Survey <s> 1) Metric Documentation of Cham Towers in Vietnam Using LS and PSP: 3D reconstructions of Cham towers in <s> Abstract Precise documentation of cultural heritage status is essential for its protection and scientific studies carried out during the restoration and renovation process. The close range photogrammetry has been used successfully for documentation of cultural heritage. With recent developments in computer and information technologies, this well-known traditional method has been replaced with digital close-range photogrammetry. This new method offers us new opportunities such as automatic orientation and measurement procedures, generation of 3D vector data, digital ortho-image and digital surface model. Terrestrial laser scanning is another technology that in recent years has become increasingly popular for documentation which provides very dense 3D points on an object surface with high accuracy. 
In addition, the 3D model and digital ortho-image can be easily generated using generated 3D point cloud and recorded digital images. <s> BIB003 </s> 3D Scanning: A Comprehensive Survey <s> 1) Metric Documentation of Cham Towers in Vietnam Using LS and PSP: 3D reconstructions of Cham towers in <s> This paper introduces and documents a novel image database specifically built for the purpose of development and bench-marking of camera-based digital forensic techniques. More than 14,000 images of various indoor and outdoor scenes have been acquired under controlled and thus widely comparable conditions from altogether 73 digital cameras. The cameras were drawn from only 25 different models to ensure that device-specific and model-specific characteristics can be disentangled and studied separately, as validated with results in this paper. In addition, auxiliary images for the estimation of device-specific sensor noise pattern were collected for each camera. Another subset of images to study model-specific JPEG compression algorithms has been compiled for each model. The 'Dresden Image Database' will be made freely available for scientific purposes when this accompanying paper is presented. The database is intended to become a useful resource for researchers and forensic investigators. Using a standard database as a benchmark not only makes results more comparable and reproducible, but it is also more economical and avoids potential copyright and privacy issues that go along with self-sampled benchmark sets from public photo communities on the Internet. <s> BIB004 </s> 3D Scanning: A Comprehensive Survey <s> 1) Metric Documentation of Cham Towers in Vietnam Using LS and PSP: 3D reconstructions of Cham towers in <s> Structure from motion (SFM) algorithms are known for their ability to reconstruct a sparse point cloud of scenes that were imaged by a series of overlapping photographs. When complemented by stereo-matching algorithms, detailed three-dimensional models can be built from such photograph collections in a fully automated way. Since 2010, a computer vision software package called PhotoScan has been available from the Russian manufacturer AgiSoft LLC. Even though the programme has a straightforward and simple interface, state-of-the-art SFM and other computer vision algorithms are implemented that enable PhotoScan to generate very dense and accurate three-dimensional meshes of various (archaeological) settings captured in two-dimensional imagery. Using a mixture of archaeological case studies, it will be shown that this low-cost application produces excellent results from aerial imagery in a minimum of time with almost no effort needed. Because of its solid and uncomplicated operation, it looks like this application could become an essential part of the aerial archaeologist's toolkit. Copyright © 2011 John Wiley & Sons, Ltd. <s> BIB005 </s> 3D Scanning: A Comprehensive Survey <s> 1) Metric Documentation of Cham Towers in Vietnam Using LS and PSP: 3D reconstructions of Cham towers in <s> The Universita Politecnica delle Marche is charged of the study, survey and documentation of the Chan towers in Vietnam. The towers are spread near the coastline from latitude 16° 30’, to latitude of 10° 56’. The Champa civilization flourished from 8 th century till 17 th century produced the Cham towers, places of religious worship. The ancient Cham people created a developed culture expressed in the architecture in the towers, which were Buddhist temples. 
Roughly there are six different architectonic styles and two different layouts in plan. One lay-out is that one main tower is surrounded by smaller service buildings, like walls, doors, gates; the other layout is a set of three aligned towers. We carried out the documentation with three different techniques, laser scanning, dense point clouds generation with SfM (Structure from Motion) algorithm, and mainly by Panoramic Spherical Photogrammetry (PSP). Laser scanning is indeed the best instrument but sometimes its availability is very limited for different practical and logistic reasons. The point cloud generation produced good results, but it has some problems and limitations of management too. On the other hand PSP is very quick technique in the taking phase, but the restitution is rather slow. PSP is suitable for a good metric documentation, making the Cham tower project the main project of documentation executed with spherical photogrammetry. <s> BIB006 </s> 3D Scanning: A Comprehensive Survey <s> 1) Metric Documentation of Cham Towers in Vietnam Using LS and PSP: 3D reconstructions of Cham towers in <s> The paper reports the results of an integrated Unmanned Aerial Vehicle (UAV) and terrestrial photogrammetric survey realized in the archaeological site of the Roman theatre in Ventimiglia, Italy. The main deliverables were 2D drawings at scale 1:20, which required a Ground Sample Distance (GSD) less than 4 mm and, consequently, accuracy better than 4 mm. The UAV was employed to acquire both vertical and oblique images, while the terrestrial imaging acquisition was realized with the aim of generating separate dense point clouds of some vertical structures, corresponding to the sections required. The variability of results with automatic photogrammetric procedures against different image network configurations, with and without ground control, are analysed and presented. <s> BIB007
Vietnam have been presented in BIB006 . The models were made using LS, SFM and PSP. LS was performed using the Z+F IMAGER 5006h BIB002 , and processed by means of the Cyclone software . The SFM algorithm was applied using the Agisoft Photoscan software BIB005 . Finally, the PSP models were created using two Canon cameras with resolutions of 12 and 14 megapixels. The obtained images were stitched together with the PTGui 9 [165] software, and the models were formed using the Sphere package and bundle block adjustment . The authors preferred the PSP procedure because of its ease of use, although the LS system resulted in more precise model reconstructions. 2) A Comparison of Digital Photogrammetry and LS: In BIB003 , technologies related to digital photogrammetry and TLS were discussed, along with monoscopic multi-image evaluation, stereo digital photogrammetry and TOF TLS. Monoscopic multi-image evaluation methods were used to document historical building facades. The task of image acquisition was performed using a Nikon D100 DSLR camera . The creation of 3D photogrammetric line drawings was carried out using efficient CAD functionalities provided by the MicroStation software BIB001 . Stereo digital photogrammetry was used for documenting the Fatih Mosque facades, and model generation was achieved with the Z/I Phodis ST30 software. Finally, a TLS-based survey of the Muayede (Ceremonial Hall) of the Dolmabahce Palace was performed with a TOF scanner, namely, the LMS-Z420i from RIEGL , with a mounted Nikon D70s calibrated digital camera BIB004 . The DOMs were automatically created by the RiSCAN PRO processing software . 3) Accuracy and Block Deformation Analysis in Automatic UAV and TLS: The conclusions drawn from analyzing the performance of an integrated UAV and terrestrial photogrammetric survey have been reported in BIB007 . The archaeological site of the Roman theater in Ventimiglia, Italy, has been considered as a case study, in order to validate the proposed evaluation framework. The drawings were created at a scale of 1:20. A GSD of less than 4 mm was required in order to abide by the latter requirement, which means that the accuracy had to be better than 4 mm. Both vertical and oblique images were obtained by the UAV, while terrestrial images were acquired with the aim of capturing individual dense point clouds of vertical structures. The robustness of the photogrammetric system, i.e. the repeatability of the experiments, was examined against various image network configurations, with and without ground control.
3D Scanning: A Comprehensive Survey <s> C. 3D Change Detection at Street Levels Using MLS Point Clouds and Terrestrial Images <s> In this paper, we present Lazy Snapping, an interactive image cutout tool. Lazy Snapping separates coarse and fine scale processing, making object specification and detailed adjustment easy. Moreover, Lazy Snapping provides instant visual feedback, snapping the cutout contour to the true object boundary efficiently despite the presence of ambiguous or low contrast edges. Instant feedback is made possible by a novel image segmentation algorithm which combines graph cut with pre-computed over-segmentation. A set of intuitive user interface (UI) tools is designed and implemented to provide flexible control and editing for the users. Usability studies indicate that Lazy Snapping provides a better user experience and produces better segmentation results than the state-of-the-art interactive image cutout tool, Magnetic Lasso in Adobe Photoshop. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> C. 3D Change Detection at Street Levels Using MLS Point Clouds and Terrestrial Images <s> Abstract Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at the street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provides rich texture information for change detection, but the change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the lack of performance of automatic image matchers, while mobile laser scanning (MLS) data acquired from different epochs provides accurate 3D geometry for change detection, but is very expensive for periodical acquisition. This paper proposes a new method for change detection at street level by using combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired from an early epoch serves as the reference, and terrestrial images or photogrammetric images captured from an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between different epochs. The method will automatically mark the possible changes in each view, which provides a cost-efficient method for frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with data cleaned and classified by semi-automatic means. In the second step, terrestrial images or mobile mapping images at a later epoch are taken and registered to the point cloud, and then point clouds are projected on each image by a weighted window based z-buffering method for view dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical consistency between point clouds and stereo images. Finally, an over-segmentation based graph cut optimization is carried out, taking into account the color, depth and class information to compute the changed area in the image space. The proposed method is invariant to light changes, robust to small co-registration errors between images and point clouds, and can be applied straightforwardly to 3D polyhedral models. 
This method can be used for 3D street data updating, city infrastructure management and damage monitoring in complex urban scenes. <s> BIB002
In BIB002 , a method for change detection at street level was proposed. In order to achieve the aforementioned goal, they took the following steps:
• The point clouds were recorded by an MLS system and processed, with the data cleaned and classified by semi-automatic means;
• At a later epoch, terrestrial or MMS images were taken and registered onto the point clouds, which were then projected on each image by a weighted window-based z-buffering method for view-dependent 2D triangulation;
• Stereo pairs of the terrestrial images were rectified and re-projected between each other, in order to check the geometrical consistency between the point clouds and stereo images;
• An over-segmentation-based graph cut optimization procedure BIB001 was carried out, taking into account the color, depth and class information, so as to compute the changed area in the image space.
The method proved invariant to light changes and robust against small co-registration errors between images and point clouds.
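As a rough illustration of the projection and consistency-check steps listed above, the Python sketch below projects a point cloud (assumed to be already transformed into the camera frame) into a per-pixel depth map with a plain z-buffer, and flags pixels whose stereo-derived depth disagrees with it. It omits the weighted window, the view-dependent triangulation and the graph cut stage of BIB002 ; the function names, the pinhole model and the tolerance are illustrative assumptions.

```python
import numpy as np

def zbuffer_project(points_cam: np.ndarray, K: np.ndarray, width: int, height: int) -> np.ndarray:
    """Project 3D points (camera frame, shape Nx3) into a depth map, keeping the
    nearest point per pixel (a plain z-buffer, not the weighted window of BIB002)."""
    depth = np.full((height, width), np.inf)
    z = points_cam[:, 2]
    valid = z > 0
    uvw = (K @ points_cam[valid].T).T                     # pinhole projection
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[inside], v[inside], z[valid][inside]):
        if zi < depth[vi, ui]:
            depth[vi, ui] = zi
    return depth

def change_mask(depth_cloud: np.ndarray, depth_stereo: np.ndarray, tol: float = 0.2) -> np.ndarray:
    """Flag pixels whose stereo depth disagrees with the older point-cloud depth by more than tol metres."""
    both = np.isfinite(depth_cloud) & np.isfinite(depth_stereo)
    return both & (np.abs(depth_cloud - depth_stereo) > tol)
```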
3D Scanning: A Comprehensive Survey <s> E. Single-shot 3D Scanning via Stereoscopic Fringe Analysis <s> A new phase unwrapping algorithm is described that uses local phase information to mask out those parts of the field that cause inconsistencies in the unwrapping. Unlike earlier techniques, which produce only a consistent unwrapping of the phase in the presence of discontinuities, this technique can produce an approximately correct unwrapping. The technique is tolerant of discontinuities and noise in the phase and is fast, efficient, and simple to implement. In the absence of discontinuities an rms signal-to-noise ratio in the wrapped phase of <2:1 can be tolerated. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> E. Single-shot 3D Scanning via Stereoscopic Fringe Analysis <s> In this paper, we present a novel single-shot method for dense and accurate 3D scanning. Our method takes advantage of two conventional techniques, i.e., stereo and Fourier fringe analysis (FFA). While FFA is competent for high-density and high-precision phase measurement, stereo solves the phase ambiguity caused by the periodicity of the fringe. By jointly using the intensity images and unwrapped phase maps from stereo, the pixel-wise absolute depth can be obtained through a sparse matching process efficiently and reliably. Due to its single-shot property and low complexity, the proposed method facilitates dense and accurate 3D scanning in time-critical applications. <s> BIB002
In BIB002 , a single-shot method was proposed that improves both the density and the accuracy of 3D scanning by combining two conventional techniques, namely, stereo matching and Fourier fringe analysis (FFA) BIB001 . In addition, due to its low complexity, the proposed method performs well even in time-critical applications.
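To give a flavour of the FFA component, the NumPy sketch below recovers a wrapped phase map from a single synthetic fringe row using the classical Fourier-transform method: isolate the carrier lobe in the spectrum, demodulate it to baseband, and take the angle of the inverse transform. It covers only the phase-measurement half of BIB002 ; the stereo matching that resolves the 2π ambiguity is not shown, and the carrier frequency and filter width are illustrative assumptions.

```python
import numpy as np

def wrapped_phase_fft(fringe_row: np.ndarray, carrier_bin: int, half_width: int) -> np.ndarray:
    """Fourier fringe analysis on one image row: keep the +f0 lobe of the spectrum,
    shift it to DC, and take the angle of the inverse FFT (the phase modulo 2*pi)."""
    spectrum = np.fft.fft(fringe_row)
    band = np.zeros_like(spectrum)
    lo, hi = carrier_bin - half_width, carrier_bin + half_width + 1
    band[lo:hi] = spectrum[lo:hi]                          # isolate the carrier lobe
    analytic = np.fft.ifft(np.roll(band, -carrier_bin))    # demodulate the carrier
    return np.angle(analytic)

# Toy usage: a synthetic fringe with 16 carrier periods and a smooth phase bump.
n = 512
x = np.arange(n)
true_phase = 2.0 * np.exp(-((x - 256) / 80.0) ** 2)
fringe = 0.5 + 0.5 * np.cos(2 * np.pi * 16 * x / n + true_phase)
phi_wrapped = wrapped_phase_fft(fringe, carrier_bin=16, half_width=8)
```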
3D Scanning: A Comprehensive Survey <s> A. Outlier Detection and Normal-curvature Estimation <s> Abstract This paper proposes robust methods for local planar surface fitting in 3D laser scanning data. Searching through the literature revealed that many authors frequently used Least Squares (LS) and Principal Component Analysis (PCA) for point cloud processing without any treatment of outliers. It is known that LS and PCA are sensitive to outliers and can give inconsistent and misleading estimates. RANdom SAmple Consensus (RANSAC) is one of the most well-known robust methods used for model fitting when noise and/or outliers are present. We concentrate on the recently introduced Deterministic Minimum Covariance Determinant estimator and robust PCA, and propose two variants of statistically robust algorithms for fitting planar surfaces to 3D laser scanning point cloud data. The performance of the proposed robust methods is demonstrated by qualitative and quantitative analysis through several synthetic and mobile laser scanning 3D data sets for different applications. Using simulated data, and comparisons with LS, PCA, RANSAC, variants of RANSAC and other robust statistical methods, we demonstrate that the new algorithms are significantly more efficient, faster, and produce more accurate fits and robust local statistics (e.g. surface normals), necessary for many point cloud processing tasks. Consider one example data set used consisting of 100 points with 20% outliers representing a plane. The proposed methods called DetRD-PCA and DetRPCA, produce bias angles (angle between the fitted planes with and without outliers) of 0.20° and 0.24° respectively, whereas LS, PCA and RANSAC produce worse bias angles of 52.49°, 39.55° and 0.79° respectively. In terms of speed, DetRD-PCA takes 0.033 s on average for fitting a plane, which is approximately 6.5, 25.4 and 25.8 times faster than RANSAC, and two other robust statistical methods, respectively. The estimated robust surface normals and curvatures from the new methods have been used for plane fitting, sharp feature preservation and segmentation in 3D point clouds obtained from laser scanners. The results are significantly better and more efficiently computed than those obtained by existing methods. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> A. Outlier Detection and Normal-curvature Estimation <s> This paper proposes two robust statistical techniques for outlier detection and robust saliency features, such as surface normal and curvature, estimation in laser scanning 3D point cloud data. One is based on a robust z-score and the other uses a Mahalanobis type robust distance. The methods couple the ideas of point to plane orthogonal distance and local surface point consistency to get Maximum Consistency with Minimum Distance (MCMD). The methods estimate the best-fit-plane based on most probable outlier free, and most consistent, points set in a local neighbourhood. Then the normal and curvature from the best-fit-plane will be highly robust to noise and outliers. Experiments are performed to show the performance of the algorithms compared to several existing well-known methods (from computer vision, data mining, machine learning and statistics) using synthetic and real laser scanning datasets of complex (planar and non-planar) objects. Results for plane fitting, denoising, sharp feature preserving and segmentation are significantly improved. The algorithms are demonstrated to be significantly faster, more accurate and robust. 
Quantitatively, for a sample size of 50 with 20% outliers the proposed MCMD_Z is approximately 5, 15 and 98 times faster than the existing methods: uLSIF, RANSAC and RPCA, respectively. The proposed MCMD_MD method can tolerate 75% clustered outliers, whereas, RPCA and RANSAC can only tolerate 47% and 64% outliers, respectively. In terms of outlier detection, for the same dataset, MCMD_Z has an accuracy of 99.72%, 0.4% false positive rate and 0% false negative rate; for RPCA, RANSAC and uLSIF, the accuracies are 97.05%, 47.06% and 94.54%, respectively, and they have misclassification rates higher than the proposed methods. The new methods have potential for local surface reconstruction, fitting, and other point cloud processing tasks. HighlightsTwo statistical techniques are proposed for outlier detection in point cloud data.The proposed methods can fit robust plane in laser scanning data.The proposed methods produce robust normal and curvature in point cloud processing.They are faster and robust than RANSAC, robust PCA and other existing efficient methods.They have potential for point cloud denoising, segmentation, and reconstruction. <s> BIB002
In BIB002 , two techniques were proposed for outlier detection in LS 3D point cloud data. The first algorithm is based on the robust z-score (Rz-score), a classical distance-based measure computed as follows:

Rz_i = |z_i − median(z)| / MAD,

where MAD stands for the median absolute deviation, MAD = median_j |z_j − median(z)|. If the calculated Rz-score of observation z_i is greater than or equal to 2.5, the observation is deemed an outlier. The second algorithm uses the MD for outlier detection, which is defined as follows:

MD(p_i) = sqrt( (p_i − p̄)^T Σ^{−1} (p_i − p̄) ),

where p̄ and Σ are the sample mean and covariance matrix, respectively. An observation whose MD score exceeds 3.075 is flagged as an outlier. Both algorithms first fit the best plane based on the majority of consistent data, i.e. inliers, within the local neighborhood of each point of interest. Then, based on the majority of the acceptable points, the outliers are defined locally for each neighborhood.

Fig. 3. Comparison of (a) least squares and (b) total least squares, for fitting planes and estimating normals. The figure has been taken from BIB001 .
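A minimal Python sketch of the two detectors is given below, applied to a single local neighborhood of points; the threshold values follow the text, while the optional MAD consistency factor and the toy data are assumptions made here for illustration, not details taken from BIB002 .

```python
import numpy as np

def rz_score_outliers(z: np.ndarray, threshold: float = 2.5) -> np.ndarray:
    """Robust z-score on a 1D sample (e.g., orthogonal distances to a local plane)."""
    med = np.median(z)
    mad = np.median(np.abs(z - med)) + 1e-12   # a 1.4826 consistency factor is often folded into MAD
    rz = np.abs(z - med) / mad
    return rz >= threshold

def mahalanobis_outliers(points: np.ndarray, threshold: float = 3.075) -> np.ndarray:
    """Mahalanobis distance of each 3D point to the sample mean and covariance."""
    mean = points.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(points, rowvar=False))
    diff = points - mean
    md = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))
    return md > threshold

# Toy usage: a noisy planar patch with a few gross outliers off the plane.
rng = np.random.default_rng(1)
patch = np.c_[rng.uniform(0, 1, 200), rng.uniform(0, 1, 200), rng.normal(0, 0.01, 200)]
patch[:5, 2] += 0.5
print(rz_score_outliers(patch[:, 2]).sum(), "Rz-score outliers")
print(mahalanobis_outliers(patch).sum(), "MD outliers")
```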
3D Scanning: A Comprehensive Survey <s> B. Robust Statistical Approaches <s> A reference and support system for a flat CRT tension shadow mask is disclosed which includes a support frame which is frit sealed or shrink fitted onto a peripheral surface surrounding a target area of a flat panel. An upper edge of the support frame is finished so as to provide the desired Q-spacing between the target area and the top of the support frame. Alignment holes are drilled in the sidewalls of the support frame at a given distance from the upper edge to form a reference plane relative thereto. A temporary fixture has a tensioned mask initially secured thereto and is provided with a plurality of pins extending through sidewalls thereof at a given distance from an upper reference surface such that the pins engage the holes in the support frame and precisely position the shadow mask in the plane of the top edge of the support surface. Such fixture may be utilized for not only applying the various phosphors to the target area, but also for finally positioning the tension shadow mask on the support frame so that it may be welded thereto and the temporary frame removed. <s> BIB001 </s> 3D Scanning: A Comprehensive Survey <s> B. Robust Statistical Approaches <s> The minimum covariance determinant (MCD) estimator is a highly robust estimator of multivariate location and scatter. It can be computed efficiently with the FAST-MCD algorithm of Rousseeuw and Van Driessen. Since estimating the covariance matrix is the cornerstone of many multivariate statistical methods, the MCD has also been used to develop robust and computationally efficient multivariate techniques. ::: ::: In this paper, we review the MCD estimator, along with its main properties such as affine equivariance, breakdown value, and influence function. We discuss its computation, and list applications and extensions of the MCD in theoretical and applied multivariate statistics. Copyright © 2009 John Wiley & Sons, Inc. ::: ::: For further resources related to this article, please visit the WIREs website. <s> BIB002 </s> 3D Scanning: A Comprehensive Survey <s> B. Robust Statistical Approaches <s> Abstract This paper proposes robust methods for local planar surface fitting in 3D laser scanning data. Searching through the literature revealed that many authors frequently used Least Squares (LS) and Principal Component Analysis (PCA) for point cloud processing without any treatment of outliers. It is known that LS and PCA are sensitive to outliers and can give inconsistent and misleading estimates. RANdom SAmple Consensus (RANSAC) is one of the most well-known robust methods used for model fitting when noise and/or outliers are present. We concentrate on the recently introduced Deterministic Minimum Covariance Determinant estimator and robust PCA, and propose two variants of statistically robust algorithms for fitting planar surfaces to 3D laser scanning point cloud data. The performance of the proposed robust methods is demonstrated by qualitative and quantitative analysis through several synthetic and mobile laser scanning 3D data sets for different applications. Using simulated data, and comparisons with LS, PCA, RANSAC, variants of RANSAC and other robust statistical methods, we demonstrate that the new algorithms are significantly more efficient, faster, and produce more accurate fits and robust local statistics (e.g. surface normals), necessary for many point cloud processing tasks. 
Consider one example data set used consisting of 100 points with 20% outliers representing a plane. The proposed methods called DetRD-PCA and DetRPCA, produce bias angles (angle between the fitted planes with and without outliers) of 0.20° and 0.24° respectively, whereas LS, PCA and RANSAC produce worse bias angles of 52.49°, 39.55° and 0.79° respectively. In terms of speed, DetRD-PCA takes 0.033 s on average for fitting a plane, which is approximately 6.5, 25.4 and 25.8 times faster than RANSAC, and two other robust statistical methods, respectively. The estimated robust surface normals and curvatures from the new methods have been used for plane fitting, sharp feature preservation and segmentation in 3D point clouds obtained from laser scanners. The results are significantly better and more efficiently computed than those obtained by existing methods. <s> BIB003
Another related study focuses on planar surface fitting and local normal estimation from the fitted plane. The proposed Diagnostic-Robust Principal Component Analysis (DRPCA) algorithm is a combination of diagnostics and robust statistical techniques. Initially, candidate outliers are found using RD, which reduces some of the outlier effects and makes the data more homogeneous. Afterward, Robust PCA (RPCA) is used to find further candidate outliers BIB001 , and to fit the plane. As far as plane fitting for point clouds is concerned, the proposed method outperforms least squares, PCA, Demixed-PCA, MSAC and RANSAC on simulated and real datasets. In BIB003 , robust methods were proposed for local planar surface fitting of 3D LS data, focusing on the Deterministic Minimum Covariance Determinant estimator BIB002 and RPCA, and using variants of statistically robust algorithms. Fig. 3 illustratively compares least squares and total least squares for fitting planes and estimating normals.
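For orientation, the sketch below shows plain PCA (total least squares) plane fitting with normal and curvature estimation, plus a crude trim-and-refit loop. It is only meant to convey the flavour of combining diagnostics with a robust refit; it is far simpler than DRPCA or the MCD-based estimators of BIB003 , and the trimming fraction and toy data are illustrative assumptions.

```python
import numpy as np

def pca_plane(points: np.ndarray):
    """Fit a plane to an Nx3 neighborhood by PCA (total least squares): the normal is
    the eigenvector of the covariance matrix with the smallest eigenvalue."""
    centroid = points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(points - centroid, rowvar=False))
    normal = eigvecs[:, 0]                      # eigenvalues are sorted ascending
    curvature = eigvals[0] / eigvals.sum()      # surface-variation proxy
    return centroid, normal, curvature

def trimmed_pca_plane(points: np.ndarray, keep: float = 0.8):
    """Crude diagnose-then-refit loop: fit, drop the points with the largest orthogonal
    distances, and refit on the retained subset."""
    centroid, normal, _ = pca_plane(points)
    dist = np.abs((points - centroid) @ normal)
    keep_idx = np.argsort(dist)[: int(keep * len(points))]
    return pca_plane(points[keep_idx])

# Toy usage: a planar patch contaminated with 20% gross outliers.
rng = np.random.default_rng(2)
inliers = np.c_[rng.uniform(0, 1, 80), rng.uniform(0, 1, 80), rng.normal(0, 0.005, 80)]
outliers = rng.uniform(0, 1, (20, 3))
_, n_est, _ = trimmed_pca_plane(np.vstack([inliers, outliers]))
print("estimated normal:", np.round(n_est, 3))  # should be close to (0, 0, +/-1)
```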
Online Charging Scheduling Algorithms of Electric Vehicles in Smart Grid: An Overview <s> stochAstIc control technIques of eV chArgIng <s> Motivated by the power-grid-side challenges in the integration of electric vehicles, we propose a decentralized protocol for negotiating day-ahead charging schedules for electric vehicles. The overall goal is to shift the load due to electric vehicles to fill the overnight electricity demand valley. In each iteration of the proposed protocol, electric vehicles choose their own charging profiles for the following day according to the price profile broadcast by the utility, and the utility updates the price profile to guide their behavior. This protocol is guaranteed to converge, irrespective of the specifications (e.g., maximum charging rate and deadline) of electric vehicles. At convergence, the l 2 norm of the aggregated demand is minimized, and the aggregated demand profile is as “flat” as it can possibly be. The proposed protocol needs no coordination among the electric vehicles, hence requires low communication and computation capability. Simulation results demonstrate convergence to optimal collections of charging profiles within few iterations. <s> BIB001 </s> Online Charging Scheduling Algorithms of Electric Vehicles in Smart Grid: An Overview <s> stochAstIc control technIques of eV chArgIng <s> The problem of large scale charging of electric vehicles (EVs) is considered. An architecture for the energy management system (EMS) is proposed based on the concept of network switched charging where chargers are controlled by a scheduler that optimizes the overall operating profit of the service provider. It is assumed that the EMS has access to collocated renewable sources (e.g. solar power) and can supplement the renewable with purchased electricity from the grid. The renewable source may vary arbitrarily, and requests of all EVs accepted for service must be completed by their respective deadlines. Under a deterministic model for arbitrary arrivals, charging requests, and service deadlines, online scheduling of EV charging is formulated as a multi-processor deadline scheduling problem for which the optimal scheduler maximizes the competitive ratio against the best offline scheduler. An online scheduling algorithm, referred to as TAGS, is proposed based on the principle of threshold admission and greedy scheduling. TAGS has the complexity of O(n log n) where n is the number of EVs in the facility. It is shown that, when the price offered to the EV customers is higher than the purchasing price of electricity from the grid, TAGS achieves the competitive ratio of 1. Otherwise, TAGS achieves the maximum competitive ratio given by the inverse of a real root of a certain polynomial. Simulations are used to evaluate the performance of TAGS against standard benchmarks and for the setting of optimal charging price. <s> BIB002 </s> Online Charging Scheduling Algorithms of Electric Vehicles in Smart Grid: An Overview <s> stochAstIc control technIques of eV chArgIng <s> Generation of wind is a very complicated process and influenced by large numbers of unknown factors. A probabilistic fuzzy system based prediction model is designed for the short-term wind speed prediction. By introducing the third probability dimension, the proposed prediction model can capture both stochastic and the deterministic uncertainties, and guarantee a better prediction in complex stochastic environment. 
The effectiveness of this intelligent wind speed prediction model is demonstrated by the simulations on a group of wind speed data. The robust modeling performance further discloses its potential in the practical prediction of wind speed under complex circumstance. <s> BIB003 </s> Online Charging Scheduling Algorithms of Electric Vehicles in Smart Grid: An Overview <s> stochAstIc control technIques of eV chArgIng <s> This paper proposes a new cellphone application algorithm which has been implemented for the prediction of energy consumption at electric vehicle (EV) charging stations at the University of California, Los Angeles (UCLA). For this interactive user application, the total time for accessing the database, processing the data, and making the prediction needs to be within a few seconds. We first analyze three relatively fast machine learning-based time series prediction algorithms and find that the nearest neighbor (NN) algorithm (k NN with k = 1) shows better accuracy. Considering the sparseness of the time series of the charging records, we then discuss the new algorithm based on the new proposed time-weighted dot product (TWDP) dissimilarity measure to improve the accuracy and processing time. Two applications have been designed on top of the proposed prediction algorithm: one predicts the expected available energy at the outlet and the other one predicts the expected charging finishing time. The total time, including accessing the database, data processing, and prediction is approximately 1 s for both applications. The granularity of the prediction is 1 h and the horizon is 24 h; data have been collected from 20 EV charging outlets. <s> BIB004 </s> Online Charging Scheduling Algorithms of Electric Vehicles in Smart Grid: An Overview <s> stochAstIc control technIques of eV chArgIng <s> With the increasing adoption of plug-in electric vehicles (PEVs), it is critical to develop efficient charging coordination mechanisms that minimize the cost and impact of PEV integration to the power grid. In this paper, we consider the optimal PEV charging scheduling, where the non-causal information about future PEV arrivals is not known in advance, but its statistical information can be estimated. This leads to an "online" charging scheduling problem that is naturally formulated as a finite-horizon dynamic programming with continuous state space and action space. To avoid the prohibitively high complexity of solving such a dynamic programming problem, we provide a Model Predictive Control (MPC) based algorithm with computational complexity $O(T^3)$, where $T$ is the total number of time stages. We rigorously analyze the performance gap between the near-optimal solution of the MPC-based approach and the optimal solution for any distributions of exogenous random variables. Furthermore, our rigorous analysis shows that when the random process describing the arrival of charging demands is first-order periodic, the complexity of proposed algorithm can be reduced to $O(1)$, which is independent of $T$. Extensive simulations show that the proposed online algorithm performs very closely to the optimal online algorithm. The performance gap is smaller than $0.4\%$ in most cases. <s> BIB005
The knowledge of future random data differs considerably across applications. Figure 3 illustrates the spectrum of future knowledge. As shown in Fig. 3, the most ideal case is when complete knowledge of the future data is available, that is, the charging scheduler knows all the realizations of future data before the beginning of the system time. Then the stochastic scheduling problem for EV charging becomes a deterministic problem, which is much easier to tackle with deterministic algorithms. The other extreme case is when absolutely no information about future data is known by the online charging scheduler, so that the scheduler makes decisions based only on the data that has already been revealed. In between, the more general cases are that the scheduler has knowledge of some statistical information or short-term predictions of future data. For instance, the statistical information of the EV traffic patterns can often be acquired from historic data, while the near-future data of renewable energy generation (e.g., solar and wind power) can be predicted with high precision.

Methodologies with Complete Knowledge of Future Data

We first consider the case where complete knowledge of the data is known beforehand. In this case, the random data at all decision times become deterministic, and the stochastic EV charging problem is reduced to a deterministic problem, which is often referred to as the offline problem. The optimal solution to the offline problem is called the optimal offline solution, and the algorithm adopted to solve the offline problem is called the offline algorithm. Specifically, the optimal solution, denoted by x*_{1:T}, is calculated by

x*_{1:T} = arg min_{x_{1:T} in X} C(x_{1:T}), (1)

where X denotes the set of feasible charging decisions and C(.) the total cost evaluated on the realized data. Note that the offline problem is deterministic and in general easier to handle than the online problem. The optimal offline solution is not achievable in practice due to the unrealistic assumption of complete future information. Instead, it is often used as a benchmark to evaluate other online charging scheduling methods.

Methodologies with No Knowledge of Future Data

When no information about the future data is known, the charging scheduling algorithm makes decisions based only on the causal information available to the scheduler. A key feature of an online algorithm is that its performance is generally evaluated in the worst-case scenario, as no statistics of the data can be leveraged to evaluate the average cost. A standard metric to evaluate the worst-case performance of an online algorithm is the competitive ratio, defined as the maximum ratio between the cost achieved by the online algorithm and that achieved by the optimal offline algorithm over all possible input sequences (e.g., the EV arrival patterns, charging demands, and base load variations). Let F be an online algorithm or policy, P be the set of all feasible policies, and x^F_{1:T} be the decisions made at times t_1, ..., t_T under algorithm/policy F. Then the optimal competitive ratio, achieved by the best policy in P, is calculated by

min_{F in P} max over all input sequences of C(x^F_{1:T}) / C(x*_{1:T}). (2)

To minimize the competitive ratio, there are three main ideas in designing competitive online algorithms for the EV charging problem. Classic Online Scheduling Algorithms: Many classic online scheduling algorithms have been proposed to solve problems other than EV scheduling, such as computing job scheduling and industrial process optimization. Some well-known methods include the earliest deadline first (EDF) algorithm, the least laxity first (LLF) algorithm, and the optimal available (OA) algorithm [3] .
When applied to EV charging, the EDF always charges the EV with earliest departure time first, the LLF schedules the EV with least laxity (i.e., the parking time length minus the shortest time length of fulfilling charging), and the OA solves the problem by assuming that no random data (or EVs, base load, etc.) will be released in the future. In practice, however, the direct extension of these algorithms to EV charging may yield poor performance due to the special features of the EV charging problem, for example, the bursty and time-varying nature of EV arrivals. These classic algorithms often need modifications to fit in the structure of EV charging problems. Sometimes, the algorithms are combined with pricing and other control schemes (e.g., admission control BIB002 ). Solution-Structure-Based Algorithms: These algorithms are designed by exploring the structures of the optimal offline solution, given that it is easy to obtain. Indeed, exploring the offline solution structure is often used as the first step of online algorithm design. By observing the optimal offline solution, we try to fathom its solution structure. For example, when the objective function in the offline problem is an increasing convex function of the total load from EV charging and other elastic load, an optimal solution to the offline problem always tends to flatten the total load profile over time as much as possible [2, BIB001 BIB005 . This leads to the design of online algorithms that charge the EVs neither too quickly nor too slowly to reduce the fluctuation of the total load. Data-Mining-Based/Data-Driven Algorithms: Data-mining-based/data-driven algorithms are designed by mining the revealed data and analyzing the statistics. The statistics of the available data include cross-correlation, auto-correlation, partial auto-correlation, and so on. Typical datamining-based/data-driven algorithms include genetic algorithms, neural networks, and fuzzy rule-based systems. In general, the data-mining/ data-driven algorithms are more suitable for the case where the structure of a system model cannot be determined easily using empirical or analytical approaches . The efficient design of online EV charging scheduling is often a combination of the above methods. For instance, assuming that the cost function is quadratic with the load, we get the insight that the optimal offline solution should exhibit a load-flattening structure. Meanwhile, we notice that the classic online algorithm OA only flattens the load demand revealed at the current time but underestimates the load demand revealed in the future. In practice, the pattern of random EV arrivals often has some peaks. By taking into account the possible peak arrivals of EVs in the future, an online algorithm named ORCHARD, which speeds up the charging rate of OA by a proper factor, is proposed in [2], and effectively reduces the possible peak load in the future. As a result, the competitive ratio of the online ORCHARD algorithm is shown to be 2.39, which is significantly better than that achieved by the original OA algorithm (i.e., 4). Notice that most existing online algorithms for the EV charging scheduling problem are deterministic, that is, fixed decision output as a function of causal information input. A promising method to improve the worst case performance of existing deterministic online algorithms is to apply a randomized online algorithm. 
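Before turning to randomized strategies, the Python sketch below illustrates, on a toy single-charger instance, how the EDF and LLF rules described above pick which EV to serve in each time slot. The data structures, the unit charging rate per slot and the toy numbers are illustrative assumptions, not the formulations used in the cited works.

```python
from dataclasses import dataclass

@dataclass
class EV:
    name: str
    arrival: int      # first slot the EV is available
    deadline: int     # last slot before departure
    demand: float     # remaining demand, in slot-units of charging

def pick_edf(evs, t):
    """Earliest deadline first among EVs present at slot t with remaining demand."""
    cand = [e for e in evs if e.arrival <= t <= e.deadline and e.demand > 0]
    return min(cand, key=lambda e: e.deadline, default=None)

def pick_llf(evs, t):
    """Least laxity first: laxity = slots left before departure minus slots still needed."""
    cand = [e for e in evs if e.arrival <= t <= e.deadline and e.demand > 0]
    return min(cand, key=lambda e: (e.deadline - t + 1) - e.demand, default=None)

def simulate(evs, picker, horizon):
    """Charge one unit of demand per slot for the selected EV (single charger)."""
    schedule = []
    for t in range(horizon):
        ev = picker(evs, t)
        if ev is not None:
            ev.demand -= 1
        schedule.append(ev.name if ev else "-")
    return schedule

# Toy usage: three charging requests over an 8-slot horizon.
requests = [EV("A", 0, 5, 3), EV("B", 1, 3, 2), EV("C", 2, 7, 4)]
print(simulate(requests, pick_llf, 8))
```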
A randomized online algorithm is a random strategy over a set of deterministic online algorithms based on a probability distribution. For instance, the key idea of the algorithm designed in [2] is to speed up the processing rate (charging rate) of OA by a factor, where the factor is a fixed constant. A possible randomized online algorithm is to set the factor as a random variable that follows a certain probability distribution. In general, randomized online algorithms can achieve better worst-case performance than their deterministic counterparts.

In practice, some partial knowledge of future data (e.g., from the prediction of future data) is available in the design of online algorithms. For instance, power generation and load prediction algorithms are now important components of most modern smart grids. Indeed, wind speed can be well predicted by combining probability and fuzzy system concepts BIB003 . For the EV charging problem, EV charging profiles can be predicted based on past data collected and on reservations made by EV users in advance. In general, statistical-modeling-based algorithms are often applied for data prediction, for example, artificial neural networks (ANNs), EV user classification, and other machine learning (ML)-based methods BIB004 . By incorporating near-future estimates, online algorithms can be designed to neglect some unrealistic worst cases and improve performance based on the partially known future.
Online Charging Scheduling Algorithms of Electric Vehicles in Smart Grid: An Overview <s> methodologIes wIth Knowledge of stAtIstIcAl InformAtIon <s> A stochastic modeling and simulation technique for analyzing impacts of electric vehicles charging demands on distribution network is proposed in this paper. Different from the previous deterministic approaches, the feeder daily load models, electric vehicle start charging time, and battery state of charge used in the impact study are derived from actual measurements and survey data. Distribution operation security risk information, such as over-current and under-voltage, is obtained from three-phase distribution load flow studies that use stochastic parameters drawn from Roulette wheel selection. Voltage and congestion impact indicators are defined and a comparison of the deterministic and stochastic analytical approaches in providing information required in distribution network reinforcement planning is presented. Numerical results illustrate the capability of the proposed stochastic models in reflecting system losses and security impacts due to electric vehicle integrations. The effectiveness of a controlled charging algorithm aimed at relieving the system operation problem is also presented. <s> BIB001 </s> Online Charging Scheduling Algorithms of Electric Vehicles in Smart Grid: An Overview <s> methodologIes wIth Knowledge of stAtIstIcAl InformAtIon <s> This paper investigates the application of stochastic dynamic programming to the optimization of charging and frequency regulation capacity bids of an electric vehicle (EV) in a smart electric grid environment. We formulate a Markov decision problem to minimize an EV's expected cost over a fixed charging horizon. We account for both Markov random prices and a Markov random regulation signal. We also propose an enhancement to the classical discrete stochastic dynamic programming method. This enhancement allows optimization over a continuous space of decision variables via linear programming at each state. Simple stochastic process models are built from real data and used to simulate the implementation of the proposed method. The proposed method is shown to outperform deterministic model predictive control in terms of average EV charging cost. <s> BIB002 </s> Online Charging Scheduling Algorithms of Electric Vehicles in Smart Grid: An Overview <s> methodologIes wIth Knowledge of stAtIstIcAl InformAtIon <s> In this paper we propose a stochastic model, based on queueing theory, for electric vehicle (EV) and plug-in hybrid electric vehicle (PHEV) charging demand. Compared to previous studies, our model can provide 1) more accurate forecasts of the load using real-time sub-metering data, along with the level of uncertainty that accompanies these forecasts; 2) a mathematical description of load, along with the level of demand flexibility that accompanies this load, at the wholesale level. This can be useful when designing demand response and dynamic pricing schemes. Our numerical experiments tune the proposed statistics on real PHEV charging data and demonstrate that the forecasting method we propose is more accurate than standard load prediction techniques. 
<s> BIB003 </s> Online Charging Scheduling Algorithms of Electric Vehicles in Smart Grid: An Overview <s> methodologIes wIth Knowledge of stAtIstIcAl InformAtIon <s> As the electric vehicle (EV) is becoming a significant component of the loads, an accurate and valid model for the EV charging demand is the key to enable accurate load forecasting, demand respond, system planning, and several other important applications. We propose a data driven queuing model for residential EV charging demand by performing big data analytics on smart meter measurements. The data driven model captures the non-homogeneity and periodicity of the residential EV charging behavior through a self-service queue with a periodic and non-homogeneous Poisson arrival rate, an empirical distribution for charging duration and a finite calling population. Upon parameter estimation, we further validate the model by comparing the simulated data series with real measurements. The hypothesis test shows the proposed model accurately captures the charging behavior. We further acquire the long-run average steady state probabilities and simultaneous rate of the EV charging demand through simulation output analysis. <s> BIB004 </s> Online Charging Scheduling Algorithms of Electric Vehicles in Smart Grid: An Overview <s> methodologIes wIth Knowledge of stAtIstIcAl InformAtIon <s> With the increasing adoption of plug-in electric vehicles (PEVs), it is critical to develop efficient charging coordination mechanisms that minimize the cost and impact of PEV integration to the power grid. In this paper, we consider the optimal PEV charging scheduling, where the non-causal information about future PEV arrivals is not known in advance, but its statistical information can be estimated. This leads to an "online" charging scheduling problem that is naturally formulated as a finite-horizon dynamic programming with continuous state space and action space. To avoid the prohibitively high complexity of solving such a dynamic programming problem, we provide a Model Predictive Control (MPC) based algorithm with computational complexity $O(T^3)$, where $T$ is the total number of time stages. We rigorously analyze the performance gap between the near-optimal solution of the MPC-based approach and the optimal solution for any distributions of exogenous random variables. Furthermore, our rigorous analysis shows that when the random process describing the arrival of charging demands is first-order periodic, the complexity of proposed algorithm can be reduced to $O(1)$, which is independent of $T$. Extensive simulations show that the proposed online algorithm performs very closely to the optimal online algorithm. The performance gap is smaller than $0.4\%$ in most cases. <s> BIB005 </s> Online Charging Scheduling Algorithms of Electric Vehicles in Smart Grid: An Overview <s> methodologIes wIth Knowledge of stAtIstIcAl InformAtIon <s> We consider a two-level profit-maximizing strategy, including planning and control, for battery energy storage system (BESS) owners that participate in the primary frequency control (PFC) market. Specifically, the optimal BESS control minimizes the operating cost by keeping the state of charge (SoC) in an optimal range. Through rigorous analysis, we prove that the optimal BESS control is a "state-invariant" strategy in the sense that the optimal SoC range does not vary with the state of the system. As such, the optimal control strategy can be computed offline once and for all with very low complexity. 
Regarding the BESS planning, we prove that the the minimum operating cost is a decreasing convex function of the BESS energy capacity. This leads to the optimal BESS sizing that strikes a balance between the capital investment and operating cost. Our work here provides a useful theoretical framework for understanding the planning and control strategies that maximize the economic benefits of BESSs in ancillary service markets. <s> BIB006
In this section, we discuss the case where the future data is not known, but its statistical information can be estimated from historic data. The estimation of the future random processes mainly includes the estimation of the moments (e.g., the mean as the first-order moment and the variance as the second-order moment) and the estimation of probability distributions (i.e., moments of all orders). When the scheduler has knowledge of the probability distributions of the random data, that is, the probability density functions (PDFs), algorithms based on dynamic programming can be applied. When the number of decision-making times is finite, the problem can be solved by a backward induction method or Monte Carlo sampling techniques BIB001 . When the number of decision-making times goes to infinity, the problem can be formulated as infinite-time horizon dynamic programming or a Markov decision process (MDP). Specifically, we denote by s_k the system state at time t_k, for example, the current charging demands of individual EVs, the base load, and the electricity price. The action is the charging decision at time t_k (i.e., x_k). Then the online EV charging problem is that, at time t_k, the decision maker chooses an action x_k that is available in the current state s_k. The process responds at the next time step by randomly moving into a new state s_{k+1} following a known distribution, and then incurs a corresponding cost; the cost-to-go from state s_k is denoted by J_k(s_k). Specifically, the optimal cost-to-go, denoted by J*_k(s_k) at time t_k, satisfies the following Bellman's equation BIB006 :

J*_k(s_k) = min_{x_k} [ c_k(s_k, x_k) + α sum_{s_{k+1}} P(s_k, s_{k+1}) J*_{k+1}(s_{k+1}) ], (3)

where c_k(s_k, x_k) is the immediate cost of taking action x_k in state s_k, α is a discount factor, and P(s_k, s_{k+1}) is the transition probability from s_k to s_{k+1}. Note that the EV charging process features memory through the battery. When formulating the EV charging problem as an MDP, the system state can be defined as the energy level of the battery stored in the EV or the renewable power supplied in the system. The transition probabilities can be estimated from the historic data of the renewable power and EV charging demands. There are several standard algorithms to solve the MDP problem, for example, value iteration, policy iteration, modified policy iteration, and prioritized sweeping. When the statistical information of the random data is not available, a Q-learning algorithm can be adopted to solve the MDP problem. Note that the EV charging problem often involves a continuous space of system states (e.g., the energy level of the battery and the electricity price) and a continuous space of actions (i.e., the charging rate). The existing research often uses the discrete Bellman's equation to model the EV charging problem BIB006 BIB002 , which can lead to prohibitive computational complexity. On the other hand, with the fast integration of EVs into the power grid, the large number of EVs can also give rise to the curse of dimensionality. To reduce the computational complexity, approximate (stochastic) dynamic programming (ADP) methods can be adopted BIB002 . In most cases, it is hard to accurately estimate the complete PDF of the random data based on historic data. A more practical prediction of the data statistics is the low-order moments (e.g., the mean and the variance), as they require far fewer data samples than accurately characterizing the full probability distribution. Then, advanced techniques from robust optimization can be adopted to tackle the online problems with partial statistical information.
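As a concrete illustration of how such a discrete MDP can be solved, the sketch below runs plain value iteration on a tiny artificial charging MDP whose state is a discretized battery level. The dynamics, cost terms and discount factor are illustrative assumptions; real formulations such as the one in BIB006 are considerably richer.

```python
import numpy as np

def value_iteration(P, cost, alpha=0.95, iters=500):
    """Value iteration for a finite MDP: P[a] is an SxS transition matrix under
    action a, and cost[s, a] is the immediate cost of action a in state s."""
    S, A = cost.shape
    J = np.zeros(S)
    for _ in range(iters):
        # Bellman backup: Q[s, a] = immediate cost + discounted expected future cost
        Q = cost + alpha * np.stack([P[a] @ J for a in range(A)], axis=1)
        J = Q.min(axis=1)
    return J, Q.argmin(axis=1)

# Toy instance: state = battery level in {0,...,4}, action = charge 0 or 1 unit per slot.
S, A = 5, 2
P = np.zeros((A, S, S))
for s in range(S):
    P[0, s, s] = 1.0                      # idle: battery level stays the same
    P[1, s, min(s + 1, S - 1)] = 1.0      # charge one unit (deterministic toy dynamics)
cost = np.zeros((S, A))
cost[:, 1] = 1.0                                     # price of charging one unit
cost += (S - 1 - np.arange(S))[:, None] * 0.5        # penalty for a low battery
J_opt, policy = value_iteration(P, cost)
print("optimal action per battery level:", policy)
```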
Since the first-order moment is the simplest statistic to estimate, many works make use of the mean instead of higher-order information. Specifically, the model predictive control (MPC) method is one common approach to handle online problems with knowledge of the expected values of random data. To address a wide range of uncertainties and variability, an MPC-based charging scheduling algorithm replaces all future data, for example, renewable energy, base load, and the arrival rate and charging demand of EVs, by their expected values, and thus reduces a stochastic problem to a deterministic one. A well-accepted metric to evaluate an MPC-based charging scheduling algorithm is the value of the stochastic solution (VSS), which evaluates the optimality gap between the optimal solution to Eq. 3, which requires the distributions of x, and the solution of the MPC-based algorithm obtained by replacing x with its mean BIB005 . In practice, the statistics of the EV arrival process often exhibit periodicity. For example, the arrival rate of residential EV charging demand could have a periodicity where the period is one day BIB004 . The daily travel patterns are also likely to exhibit periodicity based on the National Household Travel Survey (NHTS) 2009. Accordingly, the periodicity of the EV random arrival process can facilitate the prediction of EVs' arrivals to improve performance. For instance, BIB005 shows that the MPC-based algorithm could be made more scalable if the random process describing the arrival of charging demands is first-order periodic. Besides, another scenario is to assume that the random data comes from a population that follows a known probability distribution whose typical parameters (mean, variance, etc.) are unknown. These parameters can be estimated by elementary statistical methods and made more accurate by sensitivity analysis. For instance, recent studies on real-world data verify the hypothesis that the aggregate arrival rates of EVs follow a Poisson distribution BIB003 . For ease of reference, we summarize the methodologies to design online EV charging scheduling algorithms (classic online scheduling algorithms, solution-structure algorithms, and data-mining-based/data-driven algorithms) in Table 1 . For the case with complete knowledge of distributions, the algorithms are likely to induce high computational complexity. In this case, exploiting a special solution structure may lead to greatly reduced computational cost. For example, a threshold-based charging algorithm is developed in BIB006 . For the case with partial knowledge of statistics, it is of high interest to improve the performance of a sub-optimal scheduling solution. One possible solution is to combine online/stochastic learning techniques and robust optimization to improve the performance of the algorithm.
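To make the receding-horizon idea concrete, the sketch below implements a minimal MPC loop in Python: at every slot it replaces the unknown future base load by its forecast mean, flattens the expected total load over the remaining horizon, applies only the first decision, and re-plans at the next slot. The horizon length, demand, rate limit, and forecast values are invented for illustration, and SciPy's SLSQP solver is used merely as a convenient stand-in for whatever deterministic solver an implementation might prefer.

```python
import numpy as np
from scipy.optimize import minimize

def mpc_schedule(remaining_demand, exp_base_load, x_max):
    """Deterministic surrogate problem: spread the remaining EV demand over the
    remaining slots so that the expected total load profile is as flat as possible."""
    H = len(exp_base_load)
    x0 = np.full(H, remaining_demand / H)
    cons = [{"type": "eq", "fun": lambda x: np.sum(x) - remaining_demand}]
    bounds = [(0.0, x_max)] * H
    res = minimize(lambda x: np.sum((exp_base_load + x) ** 2), x0,
                   bounds=bounds, constraints=cons, method="SLSQP")
    return res.x

# Receding-horizon loop with illustrative numbers (horizon, demand, rate limit, forecasts).
T, x_max, demand = 6, 3.0, 8.0
exp_base = np.array([3.0, 3.5, 5.0, 6.0, 4.0, 3.0])   # forecast (mean) base load per slot
applied = []
for t in range(T):
    plan = mpc_schedule(demand, exp_base[t:], x_max)    # plan over the remaining horizon
    x_now = float(np.clip(plan[0], 0.0, demand))        # apply only the first decision
    applied.append(x_now)
    demand -= x_now
    # in a real system, the realised base load and newly arrived EVs would be observed here
print("applied charging schedule:", np.round(applied, 2))
```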
Online Charging Scheduling Algorithms of Electric Vehicles in Smart Grid: An Overview <s> performAnce eVAluAtIon <s> In this paper we propose a stochastic model, based on queueing theory, for electric vehicle (EV) and plug-in hybrid electric vehicle (PHEV) charging demand. Compared to previous studies, our model can provide 1) more accurate forecasts of the load using real-time sub-metering data, along with the level of uncertainty that accompanies these forecasts; 2) a mathematical description of load, along with the level of demand flexibility that accompanies this load, at the wholesale level. This can be useful when designing demand response and dynamic pricing schemes. Our numerical experiments tune the proposed statistics on real PHEV charging data and demonstrate that the forecasting method we propose is more accurate than standard load prediction techniques. <s> BIB001 </s> Online Charging Scheduling Algorithms of Electric Vehicles in Smart Grid: An Overview <s> performAnce eVAluAtIon <s> With the increasing adoption of plug-in electric vehicles (PEVs), it is critical to develop efficient charging coordination mechanisms that minimize the cost and impact of PEV integration to the power grid. In this paper, we consider the optimal PEV charging scheduling, where the non-causal information about future PEV arrivals is not known in advance, but its statistical information can be estimated. This leads to an "online" charging scheduling problem that is naturally formulated as a finite-horizon dynamic programming with continuous state space and action space. To avoid the prohibitively high complexity of solving such a dynamic programming problem, we provide a Model Predictive Control (MPC) based algorithm with computational complexity $O(T^3)$, where $T$ is the total number of time stages. We rigorously analyze the performance gap between the near-optimal solution of the MPC-based approach and the optimal solution for any distributions of exogenous random variables. Furthermore, our rigorous analysis shows that when the random process describing the arrival of charging demands is first-order periodic, the complexity of proposed algorithm can be reduced to $O(1)$, which is independent of $T$. Extensive simulations show that the proposed online algorithm performs very closely to the optimal online algorithm. The performance gap is smaller than $0.4\%$ in most cases. <s> BIB002
In this section, we evaluate the performance of the methodologies discussed above. The system time is set to 24 hours, and the length between two adjacent decision making times is set to 10 minutes. Suppose that the EV arrivals follow a Poisson distribution and the parking time of each EV follows an exponential distribution BIB001 . The charging demand of each EV follows a uniform distribution. For the traffic patterns, we set two peak periods, 12:00 to 14:00 and 18:00 to 20:00, which match the realistic vehicle trips in NHTS 2009. We investigate two scenarios where the EVs serve different purposes. In scenario 1, EVs act only as consumers whose charging demand must be satisfied. In scenario 2, EVs act not only as consumers but also as power suppliers, where EVs can be charged from or discharged to the grid. For both scenarios, the objective is to minimize the variance of the total load, which consists of the load from EV charging and the inelastic base load. The minimization of load variance in effect reduces system power losses and improves voltage regulation . Specifically, we choose the following algorithms, listed in decreasing order of the amount of future data knowledge they assume. Optimal Offline Algorithm: Complete knowledge of the random data is assumed; specifically, we adopt the interior point method in CVX to compute the optimal offline solution. Online Algorithm with PDF: Complete knowledge of the distributions of the random data is assumed; specifically, we adopt the sample average approximation (SAA) method as the online algorithm with PDF. Online Algorithm MPC BIB002 : The expected values of the random data are assumed to be known. Online Algorithms with No Knowledge of Future Information: ORCHARD [2] and OA [3] : No future information is assumed to be known. For both scenarios, we plot the load variance of the five algorithms while increasing the arrival rates during peak hours, as shown in Figs. 4 and 5. Both figures show that the optimal offline algorithm always produces the lowest load variance among the five algorithms. Meanwhile, the online algorithm with PDF achieves lower cost than the MPC algorithm with prediction of means, and both algorithms follow the optimal offline algorithm closely. We also notice that the online algorithms ORCHARD and OA produce higher load variance than the other three algorithms, since they assume no predictions or non-causal information of the random data. Between them, ORCHARD significantly outperforms OA, and the OA algorithm performs poorly especially under high peak arrival rates. For all five algorithms, it can easily be observed that the load variance of scenario 2 depicted in Fig. 4 is much smaller than that of scenario 1 depicted in Fig. 5 , which demonstrates the effectiveness of using EVs as mobile energy storage to flatten the system load profile.
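The following Python sketch shows how an evaluation setup of this kind can be generated: it draws one day of synthetic EV sessions (Poisson arrivals with assumed peak and off-peak rates, exponential parking times, uniform charging demand) and evaluates the load-variance objective for naive uncontrolled charging as a baseline. All numerical values (arrival rates, demand range, base-load curve, charger power) are illustrative assumptions and not the parameters of the cited studies.

```python
import numpy as np

rng = np.random.default_rng(42)
SLOT_MIN = 10
T = 24 * 60 // SLOT_MIN                      # 144 ten-minute slots in one day

def arrival_rate(slot):
    """EVs per slot: higher during the 12:00-14:00 and 18:00-20:00 peak periods."""
    hour = slot * SLOT_MIN / 60
    return 4.0 if (12 <= hour < 14) or (18 <= hour < 20) else 1.0

def sample_evs():
    """One day of EV sessions: (arrival slot, departure slot, charging demand in kWh)."""
    evs = []
    for t in range(T):
        for _ in range(rng.poisson(arrival_rate(t))):
            park_slots = max(1, int(rng.exponential(scale=12)))   # roughly 2 h mean parking
            demand = rng.uniform(5.0, 15.0)                       # uniform charging demand
            evs.append((t, min(t + park_slots, T), demand))
    return evs

def load_variance(ev_profile_kw, base_load_kw):
    return float(np.var(base_load_kw + ev_profile_kw))

# Baseline: uncontrolled charging at 6 kW from arrival until the demand is met.
base_load = 200 + 80 * np.sin(np.linspace(0, 2 * np.pi, T))       # toy base-load curve (kW)
profile = np.zeros(T)
for t_arr, t_dep, demand in sample_evs():
    energy_per_slot = 6.0 * SLOT_MIN / 60                          # kWh delivered per slot
    t = t_arr
    while demand > 0 and t < t_dep:
        e = min(energy_per_slot, demand)
        profile[t] += e * 60 / SLOT_MIN                            # convert back to kW
        demand -= e
        t += 1
print("load variance under uncontrolled charging:", round(load_variance(profile, base_load), 1))
```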
Online Charging Scheduling Algorithms of Electric Vehicles in Smart Grid: An Overview <s> economIc IncentIVe desIgn <s> Generation of wind is a very complicated process and influenced by large numbers of unknown factors. A probabilistic fuzzy system based prediction model is designed for the short-term wind speed prediction. By introducing the third probability dimension, the proposed prediction model can capture both stochastic and the deterministic uncertainties, and guarantee a better prediction in complex stochastic environment. The effectiveness of this intelligent wind speed prediction model is demonstrated by the simulations on a group of wind speed data. The robust modeling performance further discloses its potential in the practical prediction of wind speed under complex circumstance. <s> BIB001 </s> Online Charging Scheduling Algorithms of Electric Vehicles in Smart Grid: An Overview <s> economIc IncentIVe desIgn <s> AC optimal power flow (ACOPF) is typically used for operational dispatch decisions. However, DC optimal power flow (DCOPF) based on linearization of the ACOPF is widely used for locational marginal price (LMP) calculation in market operation and settlements due to its efficiency and reasonable accuracy. The DCOPF is considered to be sufficient in many cases. Especially for calculating LMPs of distribution systems, the DCOPF is a good option considering the large number of nodes in distribution systems. In industry, DCOPF has been employed by several software tools for chronological LMP simulation and forecasting. <s> BIB002 </s> Online Charging Scheduling Algorithms of Electric Vehicles in Smart Grid: An Overview <s> economIc IncentIVe desIgn <s> This paper proposes a new cellphone application algorithm which has been implemented for the prediction of energy consumption at electric vehicle (EV) charging stations at the University of California, Los Angeles (UCLA). For this interactive user application, the total time for accessing the database, processing the data, and making the prediction needs to be within a few seconds. We first analyze three relatively fast machine learning-based time series prediction algorithms and find that the nearest neighbor (NN) algorithm (k NN with k = 1) shows better accuracy. Considering the sparseness of the time series of the charging records, we then discuss the new algorithm based on the new proposed time-weighted dot product (TWDP) dissimilarity measure to improve the accuracy and processing time. Two applications have been designed on top of the proposed prediction algorithm: one predicts the expected available energy at the outlet and the other one predicts the expected charging finishing time. The total time, including accessing the database, data processing, and prediction is approximately 1 s for both applications. The granularity of the prediction is 1 h and the horizon is 24 h; data have been collected from 20 EV charging outlets. <s> BIB003
A major challenge of online charging algorithm design is the uncertainty in the behavior of EV users. A promising solution is to introduce economic incentive schemes to encourage more users to arrive at the charging station during off-peak hours of base load consumption and fewer during peak hours, so that the total load demand is flattened over time. Equivalently, a pricing method can be used to adjust the EVs' charging demand over time. For instance, a distribution locational marginal pricing method could be adopted to alleviate congestion induced by EV loads BIB002 . Besides, the scheduler can also offer financial compensation to those users who are willing to make reservations a day ahead, park the EV for a longer time, or tolerate charging delay after the specified parking time. Through optimizing the pricing schemes, the scheduler maximizes its overall utility, for example, its profit defined as the revenue minus the operating cost and the cost of offering the incentives. The joint design of a pricing scheme and online EV scheduling is also a promising but challenging topic to investigate, considering the complex correlations between the pricing and the EV user profiles, including arrival rates, parking time, and charging demand. Online/Stochastic Learning of Random Data: As shown in Figs. 4 and 5, accurate knowledge of future data can lead to significant performance improvement of online algorithms. Currently, most studies on online scheduling design assume perfect knowledge of (partial) future data or statistical information. In practice, however, the actual knowledge could be inaccurate, and the data collected could be noisy, incomplete, or outdated. It is therefore important to incorporate the acquisition of data knowledge in the design of an online scheduling algorithm. A promising solution is to use online/stochastic learning methods to exploit the random data to assist in the decisions of EV scheduling in an iterative manner BIB001 BIB003 . In this case, however, the learning algorithm efficiency is of paramount importance, as the EV data size could be enormous, and the charging scheduling is a delay-sensitive application.
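As a minimal illustration of such online learning of data statistics, the sketch below maintains a per-slot estimate of the daily EV arrival rate and refreshes it with each new day of observations. The smoothing scheme, slot count, and synthetic observations are assumptions chosen for the example; a real deployment would feed the resulting estimate (e.g., as the expected arrival rate) into the MPC- or DP-based schedulers discussed earlier.

```python
import numpy as np

class ArrivalRateLearner:
    """Online estimate of the per-slot EV arrival rate over a daily cycle, using
    exponentially weighted averaging as a simple stand-in for the online/stochastic
    learning schemes discussed above."""
    def __init__(self, slots_per_day, alpha=0.1):
        self.alpha = alpha
        self.rate = np.ones(slots_per_day)        # prior guess: one arrival per slot

    def update(self, observed_counts):
        """Blend the latest day's observed arrival counts into the running estimate."""
        self.rate = (1 - self.alpha) * self.rate \
                    + self.alpha * np.asarray(observed_counts, dtype=float)
        return self.rate

# Usage sketch: after each day, feed the observed arrivals and pass the estimate
# to the scheduler as its prediction of the arrival process.
learner = ArrivalRateLearner(slots_per_day=144)
yesterday = np.random.default_rng(1).poisson(2.0, size=144)   # synthetic observations
lam_hat = learner.update(yesterday)
print("estimated peak arrival rate:", round(float(lam_hat.max()), 2))
```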
THE SNARE LANGUAGE OVERVIEW <s> INTRODUCTION <s> The central argument of network research is that actors are embedded in networks of interconnected social relationships that offer opportunities for and constraints on behavior. We review research on the antecedents and consequences of networks at the interpersonal, interunit, and interorganizational levels of analysis, evaluate recent theoretical and empirical trends, and give directions for future research, highlighting the importance of investigating cross-level network phenomena. <s> BIB001 </s> THE SNARE LANGUAGE OVERVIEW <s> INTRODUCTION <s> Understand the need for analyses of large, complex, information-rich data sets. Identify the goals and primary tasks of the data-mining process. Describe the roots of data-mining technology. Recognize the iterative character of a data-mining process and specify its basic steps. Explain the influence of data quality on a data-mining process. Establish the relation between data warehousing and data mining. Data mining is an iterative process within which progress is defined by discovery, through either automatic or manual methods. Data mining is most useful in an exploratory analysis scenario in which there are no predetermined notions about what will constitute an "interesting" outcome. Data mining is the search for new, valuable, and nontrivial information in large volumes of data. It is a cooperative effort of humans and computers. Best results are achieved by balancing the knowledge of human experts in describing problems and goals with the search capabilities of computers. In practice, the two primary goals of data mining tend to be prediction and description. Prediction involves using some variables or fields in the data set to predict unknown or future values of other variables of interest. Description, on the other hand, focuses on finding patterns describing the data that can be interpreted by humans. Therefore, it is possible to put data-mining activities into one of two categories: Predictive data mining, which produces the model of the system described by the given data set, or Descriptive data mining, which produces new, nontrivial information based on the available data set. <s> BIB002
A social network consists of a finite set of actors and the relations defined among them (Wasserman and Faust, 1994) . Actors are discrete individuals, corporate or collective social units, and are linked to one another by social ties (Wasserman and Faust, 1994) . A dyad is a linkage or relation between two actors. Triads are triples of actors and associated ties. To a large extent, the power of network analysis lies in the ability to model the relations among systems of actors. A subgroup of actors is any subset of actors and all ties among them. A group is the collection of all actors on which ties are to be measured. The collection of ties of a specific kind among members of a group is called a relation (Wasserman and Faust, 1994) . Actors may be referred to as social entities. An entity is social if it involves a network of relations with other social entities (Masolo et al., 2004) . A social entity may play several roles in the same network. A role is a combination of particular sets of behavioral, meaningful, and structural attributes (Welser et al., 2007) . The nature of roles and the way of representing them have been discussed in different fields, e.g. knowledge representation, knowledge engineering, object-oriented and conceptual modeling, multi-agent systems, linguistics, and cognitive semantics (Masolo et al., 2004) . Four common features about social roles can be found: (1) roles are properties, e.g. different entities can play the same role; (2) roles are antirigid and they have dynamic properties, e.g. an entity can play different roles simultaneously, an entity can change role, an entity can play the same role several times simultaneously, a role can be played by different entities simultaneously or at different times, and the sequence in which roles may be acquired and relinquished can be subject to restrictions; (3) roles have a relational nature, i.e. roles imply patterns of relations; and (4) roles are linked to contexts, i.e. a contextual approach refers to a variety of factors, including relations, events, organizations and behaviors. The term "context" can have different interpretations, e.g. metaphysical context, cognitive context, and linguistic context. See (Masolo et al., 2004) for a further review. There are different types of social networks. One-mode networks involve just a single set of social entities. Two-mode networks involve two sets of actors, or one set of actors and one set of events (Wasserman and Faust, 1994) . Events have a time associated with them and it is possible for relations, positions and roles to change over time. Moreover, events can occur at different times, the organizers of events change over time, and a different set of actors might participate in each event (Licamele et al., 2005) . Dyadic networks and affiliation networks are particular cases of two-mode networks. Another kind of network is the ego-centered network, where a focal actor (termed "ego") has a set of alters who have ties to ego, and measurements on the ties among these alters. It is possible to consider three or more mode networks, but rarely have social network methods been designed for such data structures (Wasserman and Faust, 1994) .
Within the scope of Social Network Analysis (SNA), the dynamics of groups are studied to identify relations and interactions among their members. Starting from these interactions, it is possible to identify social patterns and to detect or propose social or organizational changes that reveal how networks grow or should change. It is also possible to find potential causes and consequences of a network change, predicting and controlling network evolution . These features depend on metrics that allow the identification of group properties or characterize individual influence within a specific group. Typical scenarios are strategic alliances and collaborations, flows of information (communication), affect (friendship), goods and services (workflow), and influence (advice) BIB001 . Network research represents a different paradigm of research which requires new concepts and methods BIB001 . Traditional SNA studies use much information residing in archives that were not created expressly for social research. Sometimes, such data provide measures of social ties and trace relations of social entities who are reluctant to be interviewed. Archival data are often inexpensive, especially when in electronic form. The validity of archival data rests on the correspondence between measured connections and the conceptual ties of research interest . The data comprising social networks tend to be heterogeneous, multirelational, and semi-structured. Link mining is a relevant example showing a confluence of research in social networks, link analysis, hypertext and Web mining, graph mining, relational learning, and inductive logic programming BIB002 . New visual models are needed to infer and represent patterns of relations, and this paper proposes the SNARE language as the conceptual framework for the SNARE system. The purpose of the SNARE system is to promote social network capabilities in information systems not originally designed for that effect. In Section 1, we introduce social network concepts. Section 2 overviews social network modeling techniques and the motivation for a social network language. Section 3 proposes the SNARE language. Finally, Section 4 presents preliminary conclusions of the investigation.
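As a concrete illustration of the concepts above (one-mode and two-mode networks, ego-centered networks, and simple SNA metrics), the following Python sketch uses the networkx library; the actor and event names are invented for the example and are unrelated to SNARE.

```python
import networkx as nx

# One-mode (actor-actor) network: a single set of actors linked by ties of one relation.
g = nx.Graph()
g.add_edges_from([("ana", "bruno"), ("ana", "carla"), ("bruno", "carla"),
                  ("carla", "dino"), ("dino", "eva")])

# A dyad is a pair of actors and the tie between them; a triad is a triple with its ties.
print("tie between ana and bruno:", g.has_edge("ana", "bruno"))

# Ego-centred network: a focal actor ("ego"), its alters, and the ties among them.
ego = nx.ego_graph(g, "carla")
print("carla's ego network:", sorted(ego.nodes()), sorted(ego.edges()))

# Simple SNA metrics used to characterise individual influence within the group.
print("degree centrality:", nx.degree_centrality(g))
print("betweenness centrality:", nx.betweenness_centrality(g))

# Two-mode (affiliation) network: one set of actors and one set of events.
b = nx.Graph()
b.add_nodes_from(["ana", "bruno", "carla"], bipartite=0)   # actors
b.add_nodes_from(["workshop", "conference"], bipartite=1)  # events
b.add_edges_from([("ana", "workshop"), ("bruno", "workshop"),
                  ("bruno", "conference"), ("carla", "conference")])
print("events attended by bruno:", sorted(b.neighbors("bruno")))
```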
Investigation of Tools, Techniques and Languages for Model Driven Software Product Lines (SPL): A Systematic Literature Review <s> Introduction <s> In the context of Model-Based Testing (MBT) of product lines, effort required to develop models can be significantly reduced by applying systematic product line modeling and configuration methodologies. In our previous work, we developed such a methodology to capture variability in configurable UML state machines and aspect state machines. For each product, these state machines are to be configured for generating executable test cases. In this paper, we extended this methodology using Feature Model for Testing (FM_T) and Component Family Model for Behaviors (CFM_B). FM_T captures variable testing functionalities of a product line, whereas CFM_B provides an abstraction layer on top of the configurable state machines. With our current methodology, a test engineer doesn't need to acquire expertise on behavioral modeling and can simply configure models for a product by selecting features in FM_T and configuring provided attributes in CFM_B. The configured models are then given input to our model-based testing tool, TRansformation-based tool for Uml-baSed Testing (TRUST) for executable test case generation. We applied our extended methodology to a product line of video conferencing system developed by Cisco Systems, Norway. Results show that the methodology significantly reduces the complexity of configuration, thereby significantly reducing required effort and cost (e.g., in terms of training). In addition, it does not require test engineers to have expertise in UML modeling, aspect-oriented modeling, and OCL specification and therefore eases the adoption of MBT in industry. <s> BIB001 </s> Investigation of Tools, Techniques and Languages for Model Driven Software Product Lines (SPL): A Systematic Literature Review <s> Introduction <s> In current software systems with highly volatile requirements, traceability plays a key role to maintain the consistency between requirements and code. Traceability between artifacts involved in the development of software product line (SPL) is still more critical because it is necessary to guarantee that the selection of variants that realize the different SPL products meet the requirements. Current SPL traceability mechanisms trace from variability in features to variations in the configuration of product-line architecture (PLA) in terms of adding and removing components. However, it is not always possible to materialize the variable features of a SPL through adding or removing components, since sometimes they are materialized inside components, i.e., in part of their functionality: a class, a service, and/or an interface. Additionally, variations that happen inside components may crosscut several components of architecture. These kinds of variations are still challenging and their traceability is not currently well supported. Therefore, it is not possible to guarantee that those SPL products with these kinds of variations meet the requirements. This paper presents a solution for tracing variability from features to PLA by taking these kinds of variations into account. This solution is based on models and traceability between models in order to automate SPL configuration by selecting the variants and realizing the product application. The FPLA modeling framework supports this solution which has been deployed in a software factory. 
Validation has consisted in putting the solution into practice to develop a product line of power metering management applications for smart grids. <s> BIB002 </s> Investigation of Tools, Techniques and Languages for Model Driven Software Product Lines (SPL): A Systematic Literature Review <s> Introduction <s> Version control (VC) and Software Product Line Engineering (SPLE) are two software engineering disciplines to manage variability in time and variability in space. In this paper, a thorough comparison of VC and SPLE is provided, showing that both disciplines imply a number of desirable properties. As a proof of concept for the combination of VC and SPLE, we present SuperMod, a tool realizes an existing conceptual framework that transfers the iterative VC editing model to SPLE. The tool allows to develop a software product line in a single-version workspace step by step, while variability management is completely automated. It offers familiar version control metaphors such as check-out and commit, and in addition uses the SPLE concepts of feature models and feature configuration the definition of logical variability and to define the logical scope of a change. SuperMod has been implemented in a model-driven way and primarily targets EMF models as software artifacts. We successfully apply the tool to a standard SPLE example. <s> BIB003 </s> Investigation of Tools, Techniques and Languages for Model Driven Software Product Lines (SPL): A Systematic Literature Review <s> Introduction <s> Software Product Lines (SPLs) are families of related software systems developed for specific market segments or domains. Commonly, SPLs emerge from sets of existing variants when their individual maintenance becomes infeasible. However, current approaches for SPL migration do not support design models, are partially automated, or do not reflect constraints from SPL domains. To tackle these limitations, the goal of this doctoral research plan is to propose an automated approach to the SPL migration process at the design level. This approach consists of three phases: detection, analysis and transformation. It uses as input the class diagrams and lists of features for each system variant, and relies on search-based algorithms to create a product line architecture that best captures the variability present in the variants. Our expected contribution is to support the adoption of SPL practices in companies that face the scenario of migrating variants to SPLs. <s> BIB004 </s> Investigation of Tools, Techniques and Languages for Model Driven Software Product Lines (SPL): A Systematic Literature Review <s> Introduction <s> Model-based testing relies on a model of the system under test. FineFit is a framework for model-based testing of Java programs. In the FineFit approach, the model is expressed by a set of tables based on Parnas tables. A software product line is a family of programs (the products) with well-defined commonalities and variabilities that are developed by (re)using common artifacts. In this paper, we address the issue of using the FineFit approach to support the development of correct software product lines. We specify a software product line as a specification product line where each product is a FineFit specification of the corresponding software product. The main challenge is to concisely specify the software product line while retaining the readability of the specification of a single system. 
To address this, we used delta-oriented programming, a recently proposed flexible approach for implementing software product lines, and developed: (1) delta tables as a means to apply the delta-oriented programming idea to the specification of software product lines; and (2) DeltaFineFit as a novel model-based testing approach for software product lines. <s> BIB005 </s> Investigation of Tools, Techniques and Languages for Model Driven Software Product Lines (SPL): A Systematic Literature Review <s> Introduction <s> Software product line (SPL) is extensively used for reusability of resources in family of products. Feature modeling is an important technique used to manage common and variable features of SPL in applications, such as Internet of Things (IoT). In order to adopt SPL for application development, organizations require information, such as cost, scope, complexity, number of features, total number of products, and combination of features for each product to start the application development. Application development of IoT is varied in different contexts, such as heat sensor indoor and outdoor environment. Variability management of IoT applications enables to find the cost, scope, and complexity. All possible combinations of features make it easy to find the cost of individual application. However, exact number of all possible products and features combination for each product is more valuable information for an organization to adopt product line. In this paper, we have proposed binary pattern for nested cardinality constraints (BPNCC), which is simple and effective approach to calculate the exact number of products with complex relationships between application’s feature models. Furthermore, BPNCC approach identifies the feasible features combinations of each IoT application by tracing the constraint relationship from top-to-bottom. BPNCC is an open source and tool-independent approach that does not hide the internal information of selected and non-selected IoT features. The proposed method is validated by implementing it on small and large IoT application feature models with “n” number of constraints, and it is found that the total number of products and all features combinations in each product without any constraint violation. <s> BIB006 </s> Investigation of Tools, Techniques and Languages for Model Driven Software Product Lines (SPL): A Systematic Literature Review <s> Introduction <s> Quality is the most important factor for software development as it mainly defines customer satisfaction that is directly related to the success of a software project. The software process model is used to ensure software quality, represent a variety of task settings, manage project duration, improve the process and range to execute the process understanding, and to appropriate implicit conjecture for all task settings. Several software processes models exist in software albeit with limited scope. Given this viewpoint, this paper presents a new software development life cycle model, “AZ-Model,” for software development by introducing new activities during software development life cycle. It overcomes the limitations of traditional models and significantly impacts the production of a quality product in a time-box. This paper also presents a comprehensive comparative study and statistical analyses to examine the significance of AZ–Model for software development. <s> BIB007
When the design of a software product line changes, developers are faced with uncertainty about deciding among multiple possible SPL designs. As the requirements of the system change according to the environment, the SPL grows day by day. Feature models are among the tools and techniques most commonly used in the literature for creating a collection of similar software systems from a shared set of software assets BIB001 BIB005 . Each product of an SPL differs from the others in variable features that provide functionalities according to end-user requirements and specifications BIB006 . Developers use SPL to increase the reusability of features, which reduces development cost and time to market and results in better product development. SPL is an approach used to develop a range of software products with a high degree of similarity. In this approach, a feature model is usually used to keep track of similarities and differences BIB007 . The latest generation of SPL methods and tools demonstrates even greater improvements and extends more benefits. In current software, traceability, productivity, and variability play a key role in maintaining consistency between requirements and code generation. Current SPL traceability mechanisms trace from variability in features to variations (feature model, variability model) in the configuration of the Product Line Architecture (PLA), in terms of adding and removing components BIB003 BIB002 . Software Product Line Engineering (SPLE) and Model Driven Engineering (MDE) are two main aspects of software reuse (two software development paradigms) BIB004 . SPL is a useful and very successful approach to reuse in many domains. The two frameworks of SPL are domain engineering and application engineering. In the production of a new software system, domain engineering (also called product line engineering) is used to reuse domain knowledge; it is used to improve and enhance the quality of the developed software through reuse of the software domain BIB002 . Application engineering is concerned with the design, management, and execution of the system. SPLE is concerned with producing similar software products, whereas MDE focuses on creating and modeling the software products. This paper presents an overview of several SPL dimensions (i.e., SPLE and MDE) in order to collect the appropriate number of studies. Model-driven SPL is a combination of the SPLE and MDE methodologies. In model-driven SPL, instead of creating and modeling similar software product models individually, the product models are derived from the product line model. Researchers have proposed "Model Driven" approaches to improve the state of the art and skills in software engineering. The purpose of this paper is to answer the following research questions in order to summarize MDSPL languages, tools, and techniques.
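To give a flavour of what a feature model expresses, the following Python sketch enumerates the valid products of a tiny, invented feature model (one XOR group and two cross-tree constraints). The feature names and constraints are illustrative assumptions only; brute-force enumeration is used here for clarity, whereas approaches such as BPNCC BIB006 aim to compute the number of products and the feasible feature combinations without exhaustive enumeration.

```python
from itertools import combinations

# A toy feature model, invented purely for illustration (not taken from the surveyed papers):
# a root with one mandatory feature, an optional feature, an XOR group {AC, DC},
# and two cross-tree constraints expressed in is_valid() below.
FEATURES = ["Display", "AC", "DC", "FastCharge"]
MANDATORY = {"Connector"}                        # always part of every product

def is_valid(selection):
    sel = set(selection) | MANDATORY
    if ("AC" in sel) == ("DC" in sel):           # XOR group: exactly one of AC / DC
        return False
    if "FastCharge" in sel and "DC" not in sel:  # cross-tree: FastCharge requires DC
        return False
    if "DC" in sel and "Display" not in sel:     # cross-tree: DC requires Display
        return False
    return True

def all_products():
    """Enumerate every valid feature combination, i.e. every product of the line."""
    products = []
    for r in range(len(FEATURES) + 1):
        for combo in combinations(FEATURES, r):
            if is_valid(combo):
                products.append(sorted(set(combo) | MANDATORY))
    return products

products = all_products()
for p in products:
    print(p)
print("number of products:", len(products))
```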
Investigation of Tools, Techniques and Languages for Model Driven Software Product Lines (SPL): A Systematic Literature Review <s> Discussion and Limitations <s> Software product lines (SPL) provide support for productivity gains through systematic reuse. Among the various quality attributes supporting these goals, modularity, stability and expressiveness of feature specifications, their composition and configuration knowledge emerge as strategic values in modern software development paradigms. This paper presents a metric-based evaluation aiming at assessing how well the chosen qualities are supported by scenario-based SPL requirements approaches. The selected approaches for this study span from type of notation (textual or graphical based), style to support variability (annotation or composition based), and specification expressiveness. They are compared using the metrics developed in a set of releases from an exemplar case study. Our major findings indicate that composition-based approaches have greater potential to support modularity and stability, and that quantification mechanisms simplify and increase expressiveness of configuration knowledge and composition specifications. <s> BIB001 </s> Investigation of Tools, Techniques and Languages for Model Driven Software Product Lines (SPL): A Systematic Literature Review <s> Discussion and Limitations <s> The application of search-based software engineering techniques to new problems is increasing. Feature location is one of the most important and common activities performed by developers during software maintenance and evolution. Features must be located across families of products and the software artifacts that realize each feature must be identified. However, when dealing with industrial software artifacts, the search space can be huge. We propose and compare five search algorithms to locate features over families of product models guided by latent semantic analysis (LSA), a technique that measures similarities between textual queries. The algorithms are applied to two case studies from our industrial partners (leading manufacturers of home appliances and rolling stock) and are compared in terms of precision and recall. Statistical analysis of the results is performed to provide evidence of the significance of the results. The combination of an evolutionary algorithm with LSA can be used to locate features in families of models from industrial scenarios such as the ones from our industrial partners. <s> BIB002
Some important languages are identified in most of the studies, as shown in Table 2 . Unified Modeling Language (UML) models can be annotated with performance properties, and a comprehensive classification of behavioral variability in behavioral models, including UML, has been defined. UML plays an important role in modeling, classifying, visualizing, and constructing documents. It is a general-purpose modeling language in the field of software engineering that provides a standard way to visualize the design of a system and also helps in modeling the system. An Architecture Description Language (ADL) is formally used to alleviate software development complexity and to make the development process less error-prone. It provides formal notations for describing software architectures in terms of coarse-grained components and connectors. Other identified approaches include traditional software metric methods BIB001 and Latent Semantic Analysis (LSA) BIB002 . Koala is also an ADL, as it supports the description of the structure of a configuration in terms of its components. The most commonly used technique we found is feature modeling. It is a widely used notation as well as an important technique that describes the set of products in an SPL. A feature model (FM) is used for domain requirement variability modeling, which describes groups of requirements aimed at goals such as increased productivity, handling defects, and improving quality, and it is widely used throughout the product line development process. Feature modeling is an important technique used to manage the common and variable features of a Software Product Line (SPL) in applications such as the Internet of Things (IoT), and it is usually used to keep track of similarities and differences. However, there are also some complexity issues that developers face, and it is a very time-consuming task. In recent research, there are scalability issues in SPL products due to the hardness of, and constraint violations between, relationships among features in final product development. Likewise, variability modeling (VM) aims at creating, evolving, and configuring variability models, which describe the common and variable characteristics. We briefly describe the techniques along with their features in Table 5 , which labels the techniques that we identified from the selected studies. We attempted to observe SPL guidelines; however, there are still some limitations in our work. We selected a restricted set of keywords, so there is a chance that we might have left out some important keywords. We used a 2014-19 filter for the selection of research articles, so we might have missed some important papers published before 2014. In the same way, we only chose five main databases for our research article selection, so we might have missed some significant studies from other scientific database engines.
A Survey on Disk Oriented Querying and Reasoning on the Semantic Web <s> Description Logic <s> RACER implements a TBox and ABox reasoner for the logic SHIQ. RACER was the first full-fledged ABox description logic system for a very expressive logic and is based on optimized sound and complete algorithms. RACER also implements a decision procedure for modal logic satisfiability problems (possibly with global axioms). <s> BIB001 </s> A Survey on Disk Oriented Querying and Reasoning on the Semantic Web <s> Description Logic <s> Description logics are embodied in several knowledge-based systems and are used to develop various real-life applications. Now in paperback, The Description Logic Handbook provides a thorough account of the subject, covering all aspects of research in this field, namely: theory, implementation, and applications. Its appeal will be broad, ranging from more theoretically oriented readers, to those with more practically oriented interests who need a sound and modern understanding of knowledge representation systems based on description logics. As well as general revision throughout the book, this new edition presents a new chapter on ontology languages for the semantic web, an area of great importance for the future development of the web. In sum, the book will serve as a unique resource for the subject, and can also be used for self-study or as a reference for knowledge representation and artificial intelligence courses. <s> BIB002
Description Logics (DL) are a logical formalism, related to semantic networks and frame systems, for representing and reasoning about complex classes of individuals (called concepts) and their relationships (expressed by binary relations called roles) BIB002 . Typically, we distinguish between atomic (or primitive) concepts (and roles), and complex concepts defined by using DL constructors. Different DL languages vary in the set of constructors provided. A DL knowledge base has two components: (1) a terminological part (the Tbox) contains a set of concept descriptions and represents the general schema modeling the domain of interest; (2) an assertional part (the Abox) that is a partial instantiation of this schema consisting of a set of assertions either relating individuals to classes, or individuals to each other. Many of the applications only require reasoning in the Tbox, but in an environment like the Semantic Web, we also need Abox reasoning. Reasoning tasks in a Tbox are BIB002 : consistency (satisfiability), that checks if knowledge is meaningful; subsumption, that checks whether all the individuals belonging to a concept (the subsumee) also belong to another concept (the subsumer); and equivalence, that checks whether two classes denote the same set of instances. All of these reasoning mechanisms are reducible to satisfiability, as long as a concept language closed under negation is used. Typically, the basic reasoning tasks in an Abox are BIB002 instance checking, that verifies whether a given individual is an instance of (belongs to) a specified concept; knowledge base consistency, that amounts to verifying whether every concept in the knowledge base admits at least one individual; and realization, which finds the most specific concept an individual object is an instance of. Recent years have seen significant advances in the design of sound and complete algorithms for DLs. Moreover, systems using these algorithms have also been developed , BIB001 . Most of these works deal only with Tbox reasoning, but in an environment like the Semantic Web, we also need Abox reasoning. Although some DL systems provide sound and complete Abox reasoning (but unfortunately not scalable) BIB001 , they provide a very weak Abox query language. Although the RACER system BIB001 provides a more expressive Abox query language, it is not persistent, that is, the Abox is stored in the main memory and not on disk.
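As a small worked example of these notions (with invented concept, role, and individual names), consider the following knowledge base written in standard DL notation:

```latex
% Invented concept, role and individual names, for illustration only.
\begin{align*}
\text{TBox:}\quad & \mathit{Parent} \equiv \mathit{Person} \sqcap \exists\,\mathit{hasChild}.\mathit{Person} \\
\text{ABox:}\quad & \mathit{Person}(\mathrm{ALICE}), \quad \mathit{Person}(\mathrm{BOB}), \quad \mathit{hasChild}(\mathrm{ALICE}, \mathrm{BOB})
\end{align*}
```

Tbox reasoning derives the subsumption Parent ⊑ Person directly from the definition; Abox reasoning (instance checking) infers Parent(ALICE), since ALICE is a Person with a hasChild-successor that is a Person, and realization would return Parent as the most specific concept ALICE belongs to. As noted above, it is this potentially large assertional part that a disk-oriented system would need to store and query persistently.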
DNA Sequencing Sensors: An Overview <s> Introduction <s> A DNA sequence for the genome of bacteriophage phi X174 of approximately 5,375 nucleotides has been determined using the rapid and simple 'plus and minus' method. The sequence identifies many of the features responsible for the production of the proteins of the nine known genes of the organism, including initiation and termination sites for the proteins and RNAs. Two pairs of genes are coded by the same region of DNA using different reading frames. <s> BIB001 </s> DNA Sequencing Sensors: An Overview <s> Introduction <s> 1. From Type III pneumococci a biologically active fraction has been isolated in highly purified form which in exceedingly minute amounts is capable under appropriate cultural conditions of inducing the transformation of unencapsulated R variants of Pneumococcus Type II into fully encapsulated cells of the same specific type as that of the heat-killed microorganisms from which the inducing material was recovered. 2. Methods for the isolation and purification of the active transforming material are described. 3. The data obtained by chemical, enzymatic, and serological analyses together with the results of preliminary studies by electrophoresis, ultracentrifugation, and ultraviolet spectroscopy indicate that, within the limits of the methods, the active fraction contains no demonstrable protein, unbound lipid, or serologically reactive polysaccharide and consists principally, if not solely, of a highly polymerized, viscous form of desoxyribonucleic acid. 4. Evidence is presented that the chemically induced alterations in cellular structure and function are predictable, type-specific, and transmissible in series. The various hypotheses that have been advanced concerning the nature of these changes are reviewed. <s> BIB002 </s> DNA Sequencing Sensors: An Overview <s> Introduction <s> Abstract ::: Multiwavelength detection of laser induced fluorescence for dideoxynucleotide DNA sequencing with four different fluorophores and separation by capillary gel electrophoresis is described. A cryogenically cooled, low readout noise, 2-dimensional charge-coupled device is used as a detector for the on-line, on-column recording of emission spectra. The detection system has no moving parts and provides wavelength selectivity on a single detector device. The detection limit of fluorescently labeled oligonucleotides meets the high sensitivity requirements for capillary DNA sequencing largely due to the efficient operation of the CCD detector with a 94% duty cycle. Using the condition number as a selectivity criterion, multiwavelength detection provides better analytical selectivity than detection with four bandpass filters. Monte Carlo studies and analytical estimates show that base assignment errors are reduced with peak identification based on entire emission spectra. High-speed separation of sequencing samples and the treatment of the 2-dimensional electropherogram data is presented. Comparing the DNA sequence of a sample separated by slab gel electrophoresis with sequence from capillary gel electrophoresis and multiwavelength detection we find no significant difference in the amount of error attributable to the instrumentation. <s> BIB003 </s> DNA Sequencing Sensors: An Overview <s> Introduction <s> A method is described for the detection of DNA hybrids formed on a solid support, based upon the pairing of oligonucleotide chemistry and the technologies of electronic microdevice design. 
Surface matrices have been created in which oligonucleotide probes are covalently linked to a thin SiO2 film. 32P labeled target nucleic acid is then hybridized to this probe matrix under conditions of high stringency. The salient feature of the method is that to achieve the highest possible collection efficiency, the hybridization matrix is placed directly on the surface of a charge coupled device (CCD), which is used to detect 32P decay from hybridized target molecules (1, Eggers, M.D., Hogan, M.E., Reich, R.K., Lamture, J.B., Beattie, K.L., Hollis, M.A., Ehrilich, D.J., Kosicki, B.B., Shumaker, J.M., Varma, R.S., Burke, B.E., Murphy, A., and Rathman, D.D., (1993), Advances in DNA Sequencing Technology, Proc. SPIE, 1891, 13-26). Two implementations of the technology have been employed. The first involves direct attachment of the matrix to the surface of a CCD. The second involves attachment of the matrix to a disposible SiO2 coated chip, which is then placed face to face upon the CCD surface. As can be predicted from this favorable collection geometry and the known characteristics of a CCD, it is found that as measured by the time required to obtain equivalent signal to noise ratios, 32P detection speed by the direct CCD approach is at least 10 fold greater than can be obtained with a commercial gas phase array detector, and at least 100 fold greater than when X-ray film is used for 32P detection. Thus, it is shown that excellent quality hybridization signals can be obtained from a standard hybridization reaction, after only 1 second of CCD data acquisition. <s> BIB004 </s> DNA Sequencing Sensors: An Overview <s> Introduction <s> The biotechnology company Celera has generated controversy by reporting that it will sequence the human genome in 3 years with the aid of 230 new high-throughput DNA sequencing machines, well before the efforts of the Human Genome Project are slated to be competed. Now J. C. Mullikin and A. A. McMurray report their evaluation of this sequencer, the ABI 3700 from Perkin-Elmer. <s> BIB005 </s> DNA Sequencing Sensors: An Overview <s> Introduction <s> The human genome holds an extraordinary trove of information about human development, physiology, medicine and evolution. Here we report the results of an international collaboration to produce and make freely available a draft sequence of the human genome. We also present an initial analysis of the data, describing some of the insights that can be gleaned from the sequence. <s> BIB006 </s> DNA Sequencing Sensors: An Overview <s> Introduction <s> SUMMARY In the past 10 years, microbiology has undergone a revolution that has been ::: driven by access to cheap high-throughput DNA sequencing. It was not long ago ::: that the cloning and sequencing of a target gene could take months or years, ::: whereas now this entire process has been replaced by a 10 min Internet search ::: of a public genome database. There has been no single innovation that has ::: initiated this rapid technological change; in fact, the core chemistry of DNA ::: sequencing is the same as it was 30 years ago. Instead, progress has been ::: driven by large sequencing centers that have incrementally industrialized the ::: Sanger sequencing method. A side effect of this industrialization is that ::: large-scale sequencing has moved out of small research labs, and the vast ::: majority of sequence data is now generated by large genome centers. 
Recently, ::: there have been advances in technology that will enable high-throughput genome ::: sequencing to be established in research labs using bench-top instrumentation. ::: These new technologies are already being used to explore the vast microbial ::: diversity in the natural environment and the untapped genetic variation that ::: can occur in bacterial species. It is expected that these powerful new methods ::: will open up new questions to genomic investigation and will also allow ::: high-throughput sequencing to be more than just a discovery exercise but also ::: a routine assay for hypothesis testing. While this review will concentrate on ::: microorganisms, many of the important arguments about the need to measure and ::: understand variation at the species, population and ecosystem level will hold ::: true for many other biological systems. <s> BIB007 </s> DNA Sequencing Sensors: An Overview <s> Introduction <s> Recent advances in DNA sequencing have revolutionized the field of genomics, making it possible for even single research groups to generate large amounts of sequence data very rapidly and at a substantially lower cost. These high-throughput sequencing technologies make deep transcriptome sequencing and transcript quantification, whole genome sequencing and resequencing available to many more researchers and projects. However, while the cost and time have been greatly reduced, the error profiles and limitations of the new platforms differ significantly from those of previous sequencing technologies. The selection of an appropriate sequencing platform for particular types of experiments is an important consideration, and requires a detailed understanding of the technologies available; including sources of error, error rate, as well as the speed and cost of sequencing. We review the relevant concepts and compare the issues raised by the current high-throughput DNA sequencing technologies. We analyze how future developments may overcome these limitations and what challenges remain. <s> BIB008 </s> DNA Sequencing Sensors: An Overview <s> Introduction <s> Demand has never been greater for revolutionary technologies that deliver fast, inexpensive and accurate genome information. This challenge has catalysed the development of next-generation sequencing (NGS) technologies. The inexpensive production of large volumes of sequence data is the primary advantage over conventional methods. Here, I present a technical review of template preparation, sequencing and imaging, genome alignment and assembly approaches, and recent advances in current and near-term commercially available NGS instruments. I also outline the broad range of applications for NGS technologies, in addition to providing guidelines for platform selection to address biological questions of interest. <s> BIB009 </s> DNA Sequencing Sensors: An Overview <s> Introduction <s> Automated DNA sequencing instruments embody an elegant interplay among chemistry, engineering, software, and molecular biology and have built upon Sanger's founding discovery of dideoxynucleotide sequencing to perform once-unfathomable tasks. Combined with innovative physical mapping approaches that helped to establish long-range relationships between cloned stretches of genomic DNA, fluorescent DNA sequencers produced reference genome sequences for model organisms and for the reference human genome. New types of sequencing instruments that permit amazing acceleration of data-collection rates for DNA sequencing have been developed. 
The ability to generate genome-scale data sets is now transforming the nature of biological inquiry. Here, I provide an historical perspective of the field, focusing on the fundamental developments that predated the advent of next-generation sequencing instruments and providing information about how these instruments work, their application to biological research, and the newes... <s> BIB010 </s> DNA Sequencing Sensors: An Overview <s> Introduction <s> Ten years ago next-generation sequencing (NGS) technologies appeared on the market. During the past decade, tremendous progress has been made in terms of speed, read length, and throughput, along with a sharp reduction in per-base cost. Together, these advances democratized NGS and paved the way for the development of a large number of novel NGS applications in basic science as well as in translational research areas such as clinical diagnostics, agrigenomics, and forensic science. Here we provide an overview of the evolution of NGS and discuss the most significant improvements in sequencing technologies and library preparation protocols. We also explore the current landscape of NGS applications and provide a perspective for future developments. <s> BIB011 </s> DNA Sequencing Sensors: An Overview <s> Introduction <s> The human genome sequence has profoundly altered our understanding of biology, human diversity, and disease. The path from the first draft sequence to our nascent era of personal genomes and genomic medicine has been made possible only because of the extraordinary advancements in DNA sequencing technologies over the past 10 years. Here, we discuss commonly used high-throughput sequencing platforms, the growing array of sequencing assays developed around them, as well as the challenges facing current sequencing platforms and their clinical application. <s> BIB012 </s> DNA Sequencing Sensors: An Overview <s> Introduction <s> Advances in DNA sequencing technologies have led to vast increases in the diversity of sequencing-based applications and in the amount of data generated. This Review discusses the current state-of-the-art technologies in both short-read and long-read DNA sequencing, their underlying mechanisms, relative strengths and limitations, and emerging applications. <s> BIB013
Since Avery, MacLeod, and McCarty's famous experiment in 1944, in which it was shown that DNA was the transforming principle BIB002 and, therefore, the material from which genes were composed, the knowledge regarding this molecule has not stopped growing. The next major discovery in the field of molecular genetics was made by Watson and Crick, who proposed a double helix as the structural model for DNA . This led them to establish the central dogma of molecular biology. Since then, the interest in determining the primary structure of DNA has been growing, so it can be said that sequencing technology was born as the set of techniques that leads to knowledge about the order in which the four nucleotides-Adenine, Cytosine, Guanine, and Thymine-are present in the DNA. The first organism whose complete genome was sequenced in 1977 was the bacteriophage Phi-X174 BIB001 . This genome only had 5386 nucleotides distributed in 11 genes, but its sequencing was a great milestone. Since then, the genomes of a large number of species have been sequenced until, finally, the first draft of the human genome was presented in 2001 BIB006 . Figure 1 shows a timeline of the more important events in the history of DNA sequencing and Figure 2 shows another timeline of the evolution of each platform in the number of bases read per run. Since 1977, the number of articles on DNA sequencing has continued to grow, reaching more than 11,000 publications in 2014 and 2015 ( Figure 3 ). In parallel, the cost of sequencing a complete human genome has continued to decline, according to data computed by the National Human Genome Research Institute (https://www.genome.gov/sequencingcostsdata/) ( Figure 4 ). Figure 4 also shows the theoretical reduction imposed by Moore's Law, which states that the capacity of the hardware used for sequencing doubles every two years. However, what is observed is that the cost of sequencing a human genome does not decrease proportionally, as might be expected, but much more abruptly, particularly since 2008. This is due, above all, to the introduction of new generation sequencing (NGS) .
Sequencing of the bacteriophage Phi-X174 and the subsequent DNA sequencing analyses, up to the completion of the sequencing of the human genome, were performed using what is known as the Sanger dideoxy method or enzymatic chain-termination method BIB008 . This method is still widely used today and is based on the use of dideoxynucleotides (ddNTPs) that block DNA polymerization. The dideoxynucleotides are identical to the deoxynucleotides (dNTPs) that the DNA polymerase enzyme uses to generate a DNA strand from another template strand, with the difference that the ddNTPs lack a hydroxyl group on the third carbon of the ribose; this causes the enzyme to stop the polymerization of the molecule, since it is not able to find the chemical group to which to anchor the next nucleotide. In practice, what is done is to feed the reaction catalyzed by the DNA polymerase with a mixture of dNTPs and ddNTPs, so that at each addition of a nucleotide the enzyme can incorporate either of them randomly ( Figure 5 ). Additionally, ddNTPs are labeled with a fluorophore molecule, so that each time one of them is incorporated, the reaction will stop and the resulting molecule will emit a signal that reports on the last incorporated nucleotide (Adenine, Thymine, Cytosine, or Guanine), since each of them is marked with a different fluorophore.
Thus, after an adequate number of amplification cycles, we will find a number of molecules equal to the number of nucleotides contained in the DNA fragment to be sequenced, these molecules differing from each other by a single nucleotide BIB005 . Next, a capillary electrophoresis is performed with these molecules BIB003 , so that they are arranged in increasing order of molecular mass, and each of them can be identified by the fluorophore attached to the corresponding ddNTP terminator of the reaction . The detection is performed by a Charge-Coupled Device (CCD) spectral detector BIB004 . However, with the approach of ambitious massive sequencing projects, such as the human genome project, it became necessary to develop a new technology that would reduce the costs and the time required to obtain the sequences BIB009 BIB010 . In this way, next-generation sequencing (NGS) was born, a high-performance technology based on the parallelization of the sequencing process, resulting in the reading of thousands or even millions of sequences simultaneously BIB007 . At present, there are eight large massive sequencing platforms (Table 1) , which differ from each other in terms of the method of preparing the templates for sequencing, the sequencing reaction itself, and the detection systems used BIB012 . In addition, each of these platforms can have different equipment, different levels of performance, different numbers of readings and, therefore, a different cost for each sequencing reaction BIB013 .
In any case, there is no doubt that the development of all this technology has necessarily gone hand in hand with new signal detection systems and sensors whose sensitivity has continued to grow BIB011 . The increase in research on NGS, as reflected in Figure 1 , can only be understood in light of the fact that these new detection systems have evolved by leaps and bounds. Had this not been the case, the development of DNA reading systems would not have been possible.
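To make the read-out logic of the dideoxy method more concrete, the following minimal Python sketch (purely illustrative, not part of the original protocol descriptions) simulates how the set of terminated fragments, sorted by length as in capillary electrophoresis, yields the sequence from their terminal fluorophores; all function names and the example template are hypothetical.

```python
# Toy model of Sanger chain-termination read-out (illustrative only).
# Each terminated fragment is a prefix of the synthesized strand whose
# last base carries the fluorophore of the ddNTP that stopped synthesis.

def complement(base: str) -> str:
    return {"A": "T", "T": "A", "C": "G", "G": "C"}[base]

def terminated_fragments(template: str):
    """Return the synthesized fragments, one per position, each ending in a ddNTP."""
    synthesized = "".join(complement(b) for b in template)
    return [synthesized[: i + 1] for i in range(len(synthesized))]

def read_by_electrophoresis(fragments):
    """Sort fragments by length (as capillary electrophoresis does) and
    report the fluorophore (last base) of each one."""
    return "".join(f[-1] for f in sorted(fragments, key=len))

if __name__ == "__main__":
    template = "ACGTTGCA"
    frags = terminated_fragments(template)
    print(read_by_electrophoresis(frags))  # the strand complementary to the template
```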
DNA Sequencing Sensors: An Overview <s> 454 Roche Platform <s> Pyrosequencing, a nonelectrophoretic DNA sequencing method that uses a luciferase-based enzymatic system to monitor DNA synthesis in real time, has so far been limited to sequencing of short stretches of DNA. To increase the signal-to-noise ratio in pyrosequencing the natural dATP was replaced by dATPalphaS (M. Ronaghi et al., 1996, Anal. Biochem. 242, 84-89). The applied dATPalphaS was a mixture of two isomers (Sp and Rp). We show here that by the introduction of pure 2'-deoxyadenosine-5'-O'-(1-thiotriphosphate) Sp-isomer in pyrosequencing substantial longer reads could be obtained. The pure Sp-isomer allowed lower nucleotide concentration to be used and improved the possibility to read through poly(T) regions. In general, a doubling of the read length could be obtained by the use of pure Sp-isomer. Pyrosequencing data for 50 to 100 bases could be generated on different types of template. The longer read will enable numerous new applications, such as identification and typing of medically important microorganisms as well as resequencing of DNA fragments for mutation screening and clone checking. <s> BIB001 </s> DNA Sequencing Sensors: An Overview <s> 454 Roche Platform <s> Pyrosequencing is the first alternative to the conventional Sanger method for de novo DNA sequencing. Pyrosequencing is a DNA sequencing technology based on the sequencingbysynthesis principle. It employs a series of four enzymes to accurately detect nucleic acid sequences during the synthesis. Pyrosequencing has the potential advantages of accuracy, flexibility, parallel processing, and can be easily automated. Furthermore, the technique dispenses with the need for labeled primers, labeled nucleotides, and gelelectrophoresis. Pyrosequencing has opened up npossibilities for performing sequencebased DNA analysis. The method has been proven highly suitable for single nucleotide polymorphism analysis and sequencing of short stretches of DNA. Pyrosequencing has been successful for both confirmatory sequencing and de novo sequencing . By increasing the read length to higher scores and by shortening the sequence reaction time per base calling, pyrosequencing may take over many broad areas of DNA sequencing applications as the trend is directed to analysis of fewer amounts of specimens and larges cale settings, with higher throughput and lower cost. This article considers key features regarding different aspects of pyrosequencing technology, including the general principles, enzyme properties, sequencing modes, instrumentation, limitations, potential and future applications. <s> BIB002 </s> DNA Sequencing Sensors: An Overview <s> 454 Roche Platform <s> Ten years ago next-generation sequencing (NGS) technologies appeared on the market. During the past decade, tremendous progress has been made in terms of speed, read length, and throughput, along with a sharp reduction in per-base cost. Together, these advances democratized NGS and paved the way for the development of a large number of novel NGS applications in basic science as well as in translational research areas such as clinical diagnostics, agrigenomics, and forensic science. Here we provide an overview of the evolution of NGS and discuss the most significant improvements in sequencing technologies and library preparation protocols. We also explore the current landscape of NGS applications and provide a perspective for future developments. <s> BIB003
The Roche 454 was the first NGS equipment to be marketed, and its technology is based on pyrosequencing . In this case, the sequencing is carried out by a synthesis process, so that the reading is performed as nucleotides are incorporated during replication of the template strand. Therefore, the methodology is based on the iterative incorporation of each of the four nucleotides. In this platform, the pyrophosphate molecule that is released during the incorporation of the nucleotides in the replication of the template DNA is detected, taking advantage of the fact that the liberated pyrophosphate is proportional to the incorporated nucleotides. The measurement of the pyrophosphate is carried out by the detection of emitted light, which is a byproduct of the transformation of luciferin into oxyluciferin-the reaction is performed by an enzyme called luciferase-and this reaction requires, as a cofactor, the ATP generated by the ATP sulfurylase from adenosine 5′-phosphosulfate (APS) in the presence of pyrophosphate BIB002 (Figure 6 ). In addition, the bases that are not incorporated in each cycle are eliminated by the action of an apyrase enzyme, to prevent these residues from interfering in later cycles . The reactions are conducted in each of the one million wells that make up the PicoTiterPlate™ (Branford, CT, USA) plates, and the light generated in the reaction-with a maximum wavelength of 560 nanometers-is detected by a CCD camera BIB001 .
That is, the enzymatic cascade that takes place after the incorporation of a dNTP is ultimately responsible for the produced bioluminescent signal.
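Since the light emitted in each nucleotide flow is proportional to the number of bases incorporated, a 454-style read can in principle be reconstructed from the series of flow intensities. The following Python sketch is a simplified, hypothetical illustration of that idea; real base callers additionally correct for noise, cross-talk, and signal decay, and the flow order and intensity values used here are assumptions.

```python
# Decode a simplified pyrosequencing flowgram (illustrative sketch).
# flow_order: the fixed order in which nucleotides are dispensed.
# flowgram:   one light intensity per flow, proportional to the number
#             of identical bases incorporated in that flow.

def decode_flowgram(flowgram, flow_order="TACG"):
    sequence = []
    for i, signal in enumerate(flowgram):
        base = flow_order[i % len(flow_order)]
        incorporated = int(round(signal))  # homopolymer length for this flow
        sequence.append(base * incorporated)
    return "".join(sequence)

if __name__ == "__main__":
    # Signals near 0 mean no incorporation; ~1 one base; ~2 a two-base homopolymer.
    flows = [0.1, 1.0, 0.0, 2.1, 1.0, 0.0, 0.9, 0.1]
    print(decode_flowgram(flows))  # -> "AGGTC"
```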
DNA Sequencing Sensors: An Overview <s> AB SOLiD Platform <s> Using the massively parallel technique of sequencing by oligonucleotide ligation and detection (SOLiD; Applied Biosystems), we have assessed the in vivo positions of more than 44 million putative nucleosome cores in the multicellular genetic model organism Caenorhabditis elegans. These analyses provide a global view of the chromatin architecture of a multicellular animal at extremely high density and resolution. While we observe some degree of reproducible positioning throughout the genome in our mixed stage population of animals, we note that the major chromatin feature in the worm is a diversity of allowed nucleosome positions at the vast majority of individual loci. While absolute positioning of nucleosomes can vary substantially, relative positioning of nucleosomes (in a repeated array structure likely to be maintained at least in part by steric constraints) appears to be a significant property of chromatin structure. The high density of nucleosomal reads enabled a substantial extension of previous analysis describing the usage of individual oligonucleotide sequences along the span of the nucleosome core and linker. We release this data set, via the UCSC Genome Browser, as a resource for the high-resolution analysis of chromatin conformation and DNA accessibility at individual loci within the C. elegans genome. <s> BIB001 </s> DNA Sequencing Sensors: An Overview <s> AB SOLiD Platform <s> The development of Next Generation Sequencing technologies, capable of sequencing hundreds of millions of short reads (25–70 bp each) in a single run, is opening the door to population genomic studies of non-model species. In this paper we present SHRiMP - the SHort Read Mapping Package: a set of algorithms and methods to map short reads to a genome, even in the presence of a large amount of polymorphism. Our method is based upon a fast read mapping technique, separate thorough alignment methods for regular letter-space as well as AB SOLiD (color-space) reads, and a statistical model for false positive hits. We use SHRiMP to map reads from a newly sequenced Ciona savignyi individual to the reference genome. We demonstrate that SHRiMP can accurately map reads to this highly polymorphic genome, while confirming high heterozygosity of C. savignyi in this second individual. SHRiMP is freely available at http://compbio.cs.toronto.edu/shrimp. <s> BIB002 </s> DNA Sequencing Sensors: An Overview <s> AB SOLiD Platform <s> Motivation: High-throughput sequencing technologies produce large sets of short reads that may contain errors. These sequencing errors make de novo assembly challenging. Error correction aims to reduce the error rate prior assembly. Many de novo sequencing projects use reads from several sequencing technologies to get the benefits of all used technologies and to alleviate their shortcomings. However, combining such a mixed set of reads is problematic as many tools are specific to one sequencing platform. The SOLiD sequencing platform is especially problematic in this regard because of the two base color coding of the reads. Therefore, new tools for working with mixed read sets are needed. ::: ::: Results: We present an error correction tool for correcting substitutions, insertions and deletions in a mixed set of reads produced by various sequencing platforms. 
We first develop a method for correcting reads from any sequencing technology producing base space reads such as the SOLEXA/Illumina and Roche/454 Life Sciences sequencing platforms. We then further refine the algorithm to correct the color space reads from the Applied Biosystems SOLiD sequencing platform together with normal base space reads. Our new tool is based on the SHREC program that is aimed at correcting SOLEXA/Illumina reads. Our experiments show that we can detect errors with 99% sensitivity and >98% specificity if the combined sequencing coverage of the sets is at least 12. We also show that the error rate of the reads is greatly reduced. ::: ::: Availability: The JAVA source code is freely available at http://www.cs.helsinki.fi/u/lmsalmel/hybrid-shrec/ ::: ::: Contact: leena.salmela@cs.helsinki.fi <s> BIB003 </s> DNA Sequencing Sensors: An Overview <s> AB SOLiD Platform <s> Today, resequencing of a human genome can be performed in approximately a week using a single instrument. Thanks to a steady logarithmic rate of increase in performance for DNA sequencing platforms over the past seven years, DNA sequencing is one of the fastest developing technology fields. As the process becomes faster, it opens up possibilities within health care, diagnostics, and entirely new fields of research. Immediate genetic characterization of contagious outbreaks has been exemplified, and with such applications for the direct benefit of human health, expectations of future sensitive, rapid, high-throughput, and cost-effective technologies are steadily growing. Simultaneously, some of the limitations of a rapidly growing field have become apparent, and questions regarding the quality of some of the data deposited into databases have been raised. A human genome sequenced in only an hour is likely to become a reality in the future, but its definition may not be as certain. <s> BIB004
The SOLiD (sequencing by oligonucleotide ligation and detection) platform, like Sanger sequencing, is based on the detection of fluorescence signals, with the difference that while in Sanger sequencing a fluorophore is used for each nucleotide, in SOLiD sequencing a fluorophore is used for a given combination of two nucleotides. That is, each fluorescence signal represents the binding of two nucleotides. Thus, the raw data obtained cannot be translated directly into a known nucleotide sequence, because each of the four signals refers to a subset of four nucleotide combinations. This methodology is based on the sequential ligation of fluorescent probes BIB001 , so that although only four fluorophores are used for the 16 possible two-nucleotide combinations, it is possible to determine which nucleotide occupies each position thanks to the color-space technique (Figure 7a ) . In the SOLiD platform, the ligation and detection of the oligonucleotides is carried out in four steps. In the first step, each fragment to be sequenced hybridizes to one of the 16 labeled probes that have two bases of a known sequence at positions n and n + 1, followed by a sequence of degenerate bases. In a second step, the probe is cleaved, releasing the end to which the fluorophore is bound and leaving a 5′-phosphate group together with five nucleotides, two of which are of a known sequence. Next, an extension process is carried out, with 10 rounds of hybridization, ligation, and cleavage. Finally, the run is completed and the cycle starts again, but this time at the n + 2 position. The fluorescence signal obtained in each measurement is therefore not associated with a single nucleotide; it only narrows the possibilities down to four, so successive ligation cycles are required to resolve the ambiguities BIB003 . Thus, whereas in Sanger sequencing the reading of each position was associated with one fluorescence signal, so that it could be translated as soon as it was produced, in SOLiD sequencing a reading can only be understood from a set of signals BIB002 . The color-space technique was a novelty introduced by the SOLiD platform, and it is only used by it. In this technique, in contrast to the base-space technique of Sanger sequencing, each signal does not represent one base but two bases in a row. Each nucleotide pair receives a certain color but, as can be seen in Figure 7b , the color assigned to each nucleotide pair is not random: reverse (e.g., AG and GA), complementary (e.g., AG and TC), and reverse-complementary (e.g., AG and CT) pairs share the same fluorophore. For an ACGAA sequence (Figure 7b ), the first probe will have AC in its first two positions, the second probe will have CG, the third probe will have GA, and the fourth probe will have AA. However, for each signal there are up to four possible combinations, so a second reading is needed; taking this into account, the only possible sequence is the one that yields an ACGAA reading. The ligation, detection, and cleavage reactions are performed as many times as there are nucleotides in the sequence to be determined. A variation of this method is used by the Complete Genomics (CG) platform, created in 2006 and acquired in 2013 by the Chinese company BGI-Shenzhen. This is presented as an ideal platform for the detection of variants in large-scale genetic studies, such as projects related to the human genome, given its high precision and low cost [26] .
The two main novelties presented with respect to the other sequencing platforms are the use of DNA nanoballs and the combinatorial probe-anchor ligation (cPAL) technique. DNA nanoballs (DNBs) are generated from circularized fragments of the DNA template, obtained after fragmentation with restriction enzymes and the use of directional adapters, so that each of them has different density, size, and even affinity properties. Each DNB contains many copies of the original DNA template. cPAL consists of the use of nonamer probes, containing degenerate and fluorophore-tagged DNA fragments, which attach to the standard anchor sites so that the bases adjacent to the degenerate positions can be read by ligation of these probes (Figure 8 ). In this way, reads of 6-7 base pairs are extended to reads of 11-12 base pairs. The CG platform is thus the only one in which the reading of the DNA sequences is carried out in solution. The ligation performed in this methodology is known as unchained, since after the detection of each probe the system starts a new cycle from zero, minimizing the background BIB004 because the unligated probes are washed away. In this platform, the flow cell is imaged by simultaneous high-speed detection of the four colors. The sequencing by ligation used by the SOLiD and CG platforms presents the enormous advantage of offering very high precision in the reading of the sequences (Table 2 ), because each position is read several times and with different probes. In contrast, these platforms do present some important drawbacks, such as the short read length obtained and the long time necessary to obtain the results.
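The two-base encoding can be made concrete with a small decoding sketch. The Python sketch below assumes the commonly published SOLiD colour/dinucleotide grouping (a pair, its reverse, its complement, and its reverse complement share one colour) and shows how a colour-space read plus the known first base resolves to a unique base sequence, reproducing the ACGAA example above; the numeric colour labels are illustrative rather than the instrument's actual dye assignment.

```python
# Decode a SOLiD colour-space read (illustrative sketch of the colour-space idea).

COLOR = {
    0: {"AA", "CC", "GG", "TT"},
    1: {"AC", "CA", "GT", "TG"},
    2: {"AG", "GA", "CT", "TC"},
    3: {"AT", "TA", "CG", "GC"},
}
# Map (previous base, colour) -> next base
NEXT = {(pair[0], c): pair[1] for c, pairs in COLOR.items() for pair in pairs}

def decode_color_space(first_base: str, colors):
    """Translate a colour-space read into bases, given the known first base."""
    seq = [first_base]
    for c in colors:
        seq.append(NEXT[(seq[-1], c)])
    return "".join(seq)

if __name__ == "__main__":
    # 'ACGAA' encodes as colours 1 (AC), 3 (CG), 2 (GA), 0 (AA)
    print(decode_color_space("A", [1, 3, 2, 0]))  # -> "ACGAA"
```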
DNA Sequencing Sensors: An Overview <s> Illumina Platform <s> A DNA sequence for the genome of bacteriophage phi X174 of approximately 5,375 nucleotides has been determined using the rapid and simple 'plus and minus' method. The sequence identifies many of the features responsible for the production of the proteins of the nine known genes of the organism, including initiation and termination sites for the proteins and RNAs. Two pairs of genes are coded by the same region of DNA using different reading frames. <s> BIB001 </s> DNA Sequencing Sensors: An Overview <s> Illumina Platform <s> Key events in cellular trafficking occur at the cell surface, and it is desirable to visualize these events without interference from other regions deeper within. This review describes a microscopy technique based on total internal reflection fluorescence which is well suited for optical sectioning at cell-substrate regions with an unusually thin region of fluorescence excitation. The technique has many other applications as well, most notably for studying biochemical kinetics and single biomolecule dynamics at surfaces. A brief summary of these applications is provided, followed by presentations of the physical basis for the technique and the various ways to implement total internal reflection fluorescence in a standard fluorescence microscope. <s> BIB002 </s> DNA Sequencing Sensors: An Overview <s> Illumina Platform <s> DNA sequencing can be used to gain important information on genes, genetic variation and gene function for biological and medical studies. The growing collection of publicly available reference genome sequences will underpin a new era of whole genome re-sequencing, but sequencing costs need to fall and throughput needs to rise by several orders of magnitude. Novel technologies are being developed to meet this need by generating massive amounts of sequence that can be aligned to the reference sequence. The challenge is to maintain the high standards of accuracy and completeness that are hallmarks of the previous genome projects. One or more new sequencing technologies are expected to become the mainstay of future research, and to make DNA sequencing centre stage as a routine tool in genetic research in the coming years. <s> BIB003 </s> DNA Sequencing Sensors: An Overview <s> Illumina Platform <s> DNA sequence information underpins genetic research, enabling discoveries of important biological or medical benefit. Sequencing projects have traditionally used long (400800 base pair) reads, but the existence of reference sequences for the human and many other genomes makes it possible to develop new, fast approaches to re-sequencing, whereby shorter reads are compared to a reference to identify intraspecies genetic variation. Here we report an approach that generates several billion bases of accurate nucleotide sequence per experiment at low cost. Single molecules of DNA are attached to a flat surface, amplified in situ and used as templates for synthetic sequencing with fluorescent reversible terminator deoxyribonucleotides. Images of the surface are analysed to generate high-quality sequence.We demonstrate application of this approach to human genome sequencing on flow-sorted X chromosomes and then scale the approach to determine the genome sequence of a male Yoruba from Ibadan, Nigeria. We build an accurate consensus sequence from.303 average depth of paired 35-base reads. 
We characterize four million single-nucleotide polymorphisms and four hundred thousand structural variants, many of which were previously unknown. Our approach is effective for accurate, rapid and economical whole-genome re-sequencing and many other biomedical applications. <s> BIB004 </s> DNA Sequencing Sensors: An Overview <s> Illumina Platform <s> Transcriptome analysis has been a key area of biological inquiry for decades. Over the years, research in the field has progressed from candidate gene-based detection of RNAs using Northern blotting to high-throughput expression profiling driven by the advent of microarrays. Next-generation sequencing technologies have revolutionized transcriptomics by providing opportunities for multidimensional examinations of cellular transcriptomes in which high-throughput expression data are obtained at a single-base resolution. <s> BIB005 </s> DNA Sequencing Sensors: An Overview <s> Illumina Platform <s> Stimulated emission depletion (STED) microscopy achieves diffraction-unlimited resolution in far-field fluorescence microscopy well below 100 nm. As common for (single-lens) far-field microscopy techniques, the lateral resolution is better than the axial sectioning capabilities. Here we present the first implementation of total internal reflection (TIR) illumination into STED microscopy which limits fluorophore excitation to ~70 nm in the vicinity of the cover slip while simultaneously providing ~50 nm lateral resolution. We demonstrate the performance of this new microscope technique with fluorescent bead test samples as well as immuno-stained microtubules. Total internal reflection STED microscopy provides superior axial sectioning capabilities with the potential to reduce photo-bleaching and photo-damage in live cell imaging. <s> BIB006
In this platform, DNA sequencing is carried out with fluorescence-labeled nucleotide analogs acting as reversible terminators of the amplification reaction BIB005 . The idea is similar to that developed for Sanger sequencing, with the difference that in the Illumina platform the blockade of DNA polymerization is reversible, whereas in Sanger sequencing it is irreversible. Another distinctive feature of this technology is that the in vitro clonal amplification used to multiply the number of molecules to be sequenced is conducted by means of bridge PCR. In this platform, the fragments are joined to primers immobilized on a solid surface and amplified in situ, generating clusters of identical DNA molecules BIB003 (Figure 9 ). In each cycle, the four reversible-terminator nucleotides are added simultaneously and the polymerase incorporates the one that complements the template. These nucleotides are chemically blocked-by substituting the 3′-OH group with a 3′-O-azidomethyl group-to prevent the polymerase from incorporating more than one nucleotide in each cycle. Upon incorporation of a nucleotide, a fluorescence signal is emitted, which is measured by total internal reflection fluorescence (TIRF) using various laser channels. Before the next cycle, the nucleotides that have not been incorporated are washed away and the chemical blockade of the 3′ end is removed through the use of tris-(2-carboxyethyl)-phosphine, to continue the synthesis of the chain BIB004 . For this reason, they are called cyclic reversible termination nucleotides. Once the fluorescence signal is collected, a new cycle begins, and this dynamic is repeated until the sequencing of each fragment is finished. In summary, the sequencing reaction is carried out in three steps: addition of nucleotides, imaging, and regeneration of the 3′-OH by fluorophore cleavage.
As indicated above, the detection system used in the Illumina platform is total internal reflection fluorescence (TIRF), also known as evanescent-wave microscopy. Its main advantage over other systems is that it is able to detect the fluorescence of molecules that are very close to a solid surface (glass or plastic), and it is highly selective BIB002 . By means of TIRF it is possible to illuminate a very thin layer, less than 100 nm in depth, avoiding the excitation of other fluorophores that may be nearby but whose emission is not required for the measurement BIB006 . This same detection system is used by the Qiagen GeneReader platform, which was launched on the market in 2015, after Qiagen acquired the Intelligent BioSystems CRT platform. The great contribution of this new platform, and its great advantage, is that it is presented as an all-in-one platform , from the preparation of the samples to the analysis of the results . Another difference, methodological in nature, is that the group blocking the 3′-OH of the nucleotides is not an O-azidomethyl group, as was the case in the Illumina platform, but an O-allyl group, and that in the GeneReader platform the regeneration of the 3′-OH is performed with a mixture of palladium and P(PhSO3Na)3 BIB001 (TPPTS), while in the Illumina platform the regeneration is carried out with the reducing agent tris(2-carboxyethyl)phosphine (TCEP). Apart from this, both the sequencing methodology by cyclic reversible termination and the detection of the signal by total internal reflection fluorescence (TIRF) are identical to those of the Illumina platform. At present, Illumina's sequencers are the most widely used in the development of massive sequencing projects. This is not only due to their high precision in sequencing and the low cost per Gb (gigabase) obtained (Table 3) , but also because there is a great variety of equipment on the market that can adapt to the needs of each project, ranging from small medium-throughput benchtop units, such as the MiniSeq, to mega equipment used for whole-genome sequencing projects in populations, such as the HiSeqX.
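As a rough illustration of cyclic reversible termination read-out (not Illumina's actual base-calling pipeline, which also corrects for channel cross-talk and phasing), the Python sketch below calls one base per cycle from four hypothetical channel intensities and derives a naive Phred-style quality from signal purity; the channel layout and numbers are assumptions.

```python
# Minimal sketch of per-cycle base calling for cyclic reversible termination
# sequencing: each cycle yields one intensity per channel and the call is
# simply the brightest channel, with a crude purity-based quality score.

import math

CHANNELS = ("A", "C", "G", "T")  # assumed one-channel-per-base layout

def call_cycle(intensities):
    """Return (base, naive quality) from the four channel intensities of one cycle."""
    best = max(range(4), key=lambda i: intensities[i])
    total = sum(intensities) or 1.0
    p_error = 1.0 - intensities[best] / total          # crude error estimate
    phred = int(-10 * math.log10(max(p_error, 1e-4)))  # cap the score
    return CHANNELS[best], phred

def call_read(cycles):
    calls = [call_cycle(c) for c in cycles]
    read = "".join(b for b, _ in calls)
    quals = [q for _, q in calls]
    return read, quals

if __name__ == "__main__":
    cycles = [(900, 30, 20, 10), (15, 40, 870, 25), (10, 880, 30, 12)]
    print(call_read(cycles))  # -> ('AGC', [12, 10, 12])
```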
DNA Sequencing Sensors: An Overview <s> Ion Torrent Platform <s> Polymerase chain reaction (PCR) using a single molecule of DNA is very useful for analysis, detection and cloning of the desired DNA fragment. We developed a simple PCR method utilizing a water-in-oil (W/O) emulsion that included numerous droplets of reaction mixture in bulk oil phase. These droplets, which were stable even at high temperatures, functioned as micro-reactors. This allows the effective concentration of template DNA to be increased, even for low concentrations of template DNA. The present method consists of a two-step thermal cycle. The first step was carried out using the W/O emulsion. During this step, the template DNA was amplified in the limited volume of the droplets in the W/O emulsion. The W/O emulsion was broken and the second PCR step was carried out. This method can be easily applied to amplify a single DNA molecule. <s> BIB001 </s> DNA Sequencing Sensors: An Overview <s> Ion Torrent Platform <s> The seminal importance of DNA sequencing to the life sciences, biotechnology and medicine has driven the search for more scalable and lower-cost solutions. Here we describe a DNA sequencing technology in which scalable, low-cost semiconductor manufacturing techniques are used to make an integrated circuit able to directly perform non-optical DNA sequencing of genomes. Sequence data are obtained by directly sensing the ions produced by template-directed DNA polymerase synthesis using all-natural nucleotides on this massively parallel semiconductor-sensing device or ion chip. The ion chip contains ion-sensitive, field-effect transistor-based sensors in perfect register with 1.2 million wells, which provide confinement and allow parallel, simultaneous detection of independent sequencing reactions. Use of the most widely used technology for constructing integrated circuits, the complementary metal-oxide semiconductor (CMOS) process, allows for low-cost, large-scale production and scaling of the device to higher densities and larger array sizes. We show the performance of the system by sequencing three bacterial genomes, its robustness and scalability by producing ion chips with up to 10 times as many sensors and sequencing a human genome. <s> BIB002 </s> DNA Sequencing Sensors: An Overview <s> Ion Torrent Platform <s> This article presents a personal account of the life and scientifi c journey of Professor Piet Bergveld, the inventor and founding father of the Ion-Sensitive Field Effect Transistor (ISFET). The interview gives a unique overview of how ISFET technology has evolved over the years, and the challenges faced during the development from its initial use in neuronal sensing to the technology we see today, which has huge potential in the current era of genetic technology. <s> BIB003 </s> DNA Sequencing Sensors: An Overview <s> Ion Torrent Platform <s> In order for next-generation sequencing to become widely used as a diagnostic in the healthcare industry, sequencing instrumentation will need to be mass produced with a high degree of quality and economy. One way to achieve this is to recast DNA sequencing in a format that fully leverages the manufacturing base created for computer chips, complementary metal-oxide semiconductor chip fabrication, which is the current pinnacle of large scale, high quality, low-cost manufacturing of high technology. 
To achieve this, ideally the entire sensory apparatus of the sequencer would be embodied in a standard semiconductor chip, manufactured in the same fab facilities used for logic and memory chips. Recently, such a sequencing chip, and the associated sequencing platform, has been developed and commercialized by Ion Torrent, a division of Life Technologies, Inc. Here we provide an overview of this semiconductor chip based sequencing technology, and summarize the progress made since its commercial introduction. We described in detail the progress in chip scaling, sequencing throughput, read length, and accuracy. We also summarize the enhancements in the associated platform, including sample preparation, data processing, and engagement of the broader development community through open source and crowdsourcing initiatives. <s> BIB004 </s> DNA Sequencing Sensors: An Overview <s> Ion Torrent Platform <s> Impressive progress has been made in the field of Next Generation Sequencing (NGS). Through advancements in the fields of molecular biology and technical engineering, parallelization of the sequencing reaction has profoundly increased the total number of produced sequence reads per run. Current sequencing platforms allow for a previously unprecedented view into complex mixtures of RNA and DNA samples. NGS is currently evolving into a molecular microscope finding its way into virtually every fields of biomedical research. In this chapter we review the technical background of the different commercially available NGS platforms with respect to template generation and the sequencing reaction and take a small step towards what the upcoming NGS technologies will bring. We close with an overview of different implementations of NGS into biomedical research. This article is part of a Special Issue entitled: From Genome to Function. <s> BIB005 </s> DNA Sequencing Sensors: An Overview <s> Ion Torrent Platform <s> Novel electronic detection techniques are being increasingly sought as components of highly scalable technologies for high-throughput biosensing applications. Among the techniques being considered, electrochemical detection offers an attractive alternative. Advancement in nanoscale electrochemistry makes this an opportune moment to consider the prospects of its integration with CMOS processes. This paper focuses on the new properties and challenges that emerge from the downscaling of electrode dimensions, focusing, in particular, on redox-cycling-based approaches to nanoscale electrochemical devices. We explore the possibilities of interfacing arrays of such devices with CMOS process technology to create highly parallelized integrated platforms. We cite selective examples to provide a qualitative overview of the general design constraints that attend any system-level integration process. We also discuss several challenges that limit the scalability of such platforms and that need to be overcome to create reliable and robust CMOS-integrated electrochemical biosensing platforms. <s> BIB006
The Ion Torrent platform is based on semiconductor technology and was the first to use non-optical sensors BIB004 ; for the first time, the technology used for DNA sequencing eliminates both optical scanning and dNTPs attached to fluorophores BIB006 . The process that is performed is the same as the one used in the construction of integrated circuits in computer chips. It is based on the complementary metal-oxide-semiconductor (CMOS) process [37] , and monitors the detection of protons (H + ) released during DNA synthesis when the incorporated dNTP is complementary to the nucleotide of the template chain being copied BIB002 . The great success of this platform lies in the integration of a chip that has millions of CMOS sensors in its matrix, so that the compilation of all the data can be performed in an inexpensive and simple way BIB005 . The second major innovation of this platform was the introduction of an electrochemical ISFET (ion-sensitive field-effect transistor) sensor at the bottom of each well BIB003 , which acts as a pH meter that is sensitive to changes in H + concentration (Figure 10 ). To perform sequencing on the Ion Torrent platform, the DNA template is presented on the surface of a sphere (or bead) obtained by emulsion PCR BIB001 . Subsequently, a single nucleotide is added at a time, so it is not necessary to block the dNTPs, as in the case of cyclic reversible termination (CRT) sequencing. In other words, unlike other platforms, in this case the nucleotides that are used are not chemically modified. Thus, when the added nucleotide is incorporated by the polymerase into the DNA strand being synthesized, a proton is released, and this is detected by the CMOS-ISFET sensor, generating a signal that is sent to a computer for processing. Unlike the Illumina platform, the Ion Torrent platform does not have a large number of devices on the market. Even so, its sequencers present some versatility, since there are several types of chips (the support in which the sequencing reaction is carried out) that adapt to the dimensions of each project. Thus, chips with yields ranging from 50 Mb to 15 Gb can be found. For chips with the highest throughput, the run time is no more than 7 h, and for lower throughput, the run time is about 2 h. This short time is therefore one of the main advantages of this platform (Table 3 ). Thanks to these characteristics, the Ion Torrent platform is finding its niche market in the analysis of groups of genes (diagnosis of polygenic diseases, metagenomics, etc.), i.e., in clinical sequencing.
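The relation between incorporated bases and the ISFET signal can be sketched with a toy model. The sensitivity and pH-shift values below are purely assumed for illustration (the only grounded element is that an ideal ISFET responds close to the Nernstian limit of roughly 59 mV per pH unit at room temperature); this is not the platform's actual signal-processing chain.

```python
# Illustrative model of an ISFET well read-out (hypothetical numbers).
# The sensor is assumed to respond roughly linearly to pH, and the pH change
# per incorporated nucleotide is assumed to be known from a calibration flow.

SENSITIVITY_MV_PER_PH = 59.0   # assumed near-Nernstian sensor response
DELTA_PH_PER_BASE = 0.02       # assumed pH shift per incorporated base (calibration)

def bases_incorporated(delta_mv: float) -> int:
    """Estimate how many bases were incorporated in one flow from the voltage shift."""
    delta_ph = delta_mv / SENSITIVITY_MV_PER_PH
    return int(round(delta_ph / DELTA_PH_PER_BASE))

def decode_flows(voltage_shifts, flow_order="TACG"):
    seq = []
    for i, dv in enumerate(voltage_shifts):
        n = bases_incorporated(dv)
        seq.append(flow_order[i % len(flow_order)] * n)
    return "".join(seq)

if __name__ == "__main__":
    # One flow per nucleotide in the fixed order T, A, C, G, T, ...
    shifts_mv = [0.0, 1.2, 0.0, 2.4, 1.1]
    print(decode_flows(shifts_mv))  # -> "AGGT" with the assumed constants
```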
DNA Sequencing Sensors: An Overview <s> Pacific Bioscence Platform <s> Polymerase chain reaction (PCR) using a single molecule of DNA is very useful for analysis, detection and cloning of the desired DNA fragment. We developed a simple PCR method utilizing a water-in-oil (W/O) emulsion that included numerous droplets of reaction mixture in bulk oil phase. These droplets, which were stable even at high temperatures, functioned as micro-reactors. This allows the effective concentration of template DNA to be increased, even for low concentrations of template DNA. The present method consists of a two-step thermal cycle. The first step was carried out using the W/O emulsion. During this step, the template DNA was amplified in the limited volume of the droplets in the W/O emulsion. The W/O emulsion was broken and the second PCR step was carried out. This method can be easily applied to amplify a single DNA molecule. <s> BIB001 </s> DNA Sequencing Sensors: An Overview <s> Pacific Bioscence Platform <s> Optical approaches for observing the dynamics of single molecules have required pico- to nanomolar concentrations of fluorophore in order to isolate individual molecules. However, many biologically relevant processes occur at micromolar ligand concentrations, necessitating a reduction in the conventional observation volume by three orders of magnitude. We show that arrays of zero-mode waveguides consisting of subwavelength holes in a metal film provide a simple and highly parallel means for studying single-molecule dynamics at micromolar concentrations with microsecond temporal resolution. We present observations of DNA polymerase activity as an example of the effectiveness of zero-mode waveguides for performing single-molecule experiments at high concentrations. <s> BIB002 </s> DNA Sequencing Sensors: An Overview <s> Pacific Bioscence Platform <s> The confocal detection principle is extended to a highly parallel optical system that continuously analyzes thousands of concurrent sample locations. This is achieved through the use of a holographic laser illumination multiplexer combined with a confocal pinhole array before a prism dispersive element used to provide spectroscopic information from each confocal volume. The system is demonstrated to detect and identify single fluorescent molecules from each of several thousand independent confocal volumes in real time. <s> BIB003 </s> DNA Sequencing Sensors: An Overview <s> Pacific Bioscence Platform <s> We present single-molecule, real-time sequencing data obtained from a DNA polymerase performing uninterrupted template-directed synthesis using four distinguishable fluorescently labeled deoxyribonucleoside triphosphates (dNTPs). We detected the temporal order of their enzymatic incorporation into a growing DNA strand with zero-mode waveguide nanostructure arrays, which provide optical observation volume confinement and enable parallel, simultaneous detection of thousands of single-molecule sequencing reactions. Conjugation of fluorophores to the terminal phosphate moiety of the dNTPs allows continuous observation of DNA synthesis over thousands of bases without steric hindrance. The data report directly on polymerase dynamics, revealing distinct polymerization states and pause sites corresponding to DNA secondary structure. Sequence data were aligned with the known reference sequence to assay biophysical parameters of polymerization for each template position. 
Consensus sequences were generated from the single-molecule reads at 15-fold coverage, showing a median accuracy of 99.3%, with no systematic error beyond fluorophore-dependent error rates. <s> BIB004 </s> DNA Sequencing Sensors: An Overview <s> Pacific Bioscence Platform <s> BackgroundNext generation sequencing (NGS) technology has revolutionized genomic and genetic research. The pace of change in this area is rapid with three major new sequencing platforms having been released in 2011: Ion Torrent’s PGM, Pacific Biosciences’ RS and the Illumina MiSeq. Here we compare the results obtained with those platforms to the performance of the Illumina HiSeq, the current market leader. In order to compare these platforms, and get sufficient coverage depth to allow meaningful analysis, we have sequenced a set of 4 microbial genomes with mean GC content ranging from 19.3 to 67.7%. Together, these represent a comprehensive range of genome content. Here we report our analysis of that sequence data in terms of coverage distribution, bias, GC distribution, variant detection and accuracy.ResultsSequence generated by Ion Torrent, MiSeq and Pacific Biosciences technologies displays near perfect coverage behaviour on GC-rich, neutral and moderately AT-rich genomes, but a profound bias was observed upon sequencing the extremely AT-rich genome of Plasmodium falciparum on the PGM, resulting in no coverage for approximately 30% of the genome. We analysed the ability to call variants from each platform and found that we could call slightly more variants from Ion Torrent data compared to MiSeq data, but at the expense of a higher false positive rate. Variant calling from Pacific Biosciences data was possible but higher coverage depth was required. Context specific errors were observed in both PGM and MiSeq data, but not in that from the Pacific Biosciences platform.ConclusionsAll three fast turnaround sequencers evaluated here were able to generate usable sequence. However there are key differences between the quality of that data and the applications it will support. <s> BIB005 </s> DNA Sequencing Sensors: An Overview <s> Pacific Bioscence Platform <s> We have developed a sequencing method on the Pacific Biosciences RS sequencer (the PacBio) for small DNA molecules that avoids the need for a standard library preparation. To date this approach has been applied toward sequencing single-stranded and double-stranded viral genomes, bacterial plasmids, plasmid vector models for DNA-modification analysis, and linear DNA fragments covering an entire bacterial genome. Using direct sequencing it is possible to generate sequence data from as little as 1 ng of DNA, offering a significant advantage over current protocols which typically require 400-500 ng of sheared DNA for the library preparation. <s> BIB006 </s> DNA Sequencing Sensors: An Overview <s> Pacific Bioscence Platform <s> Third-generation long-range DNA sequencing and mapping technologies are creating a renaissance in high-quality genome sequencing. Unlike second-generation sequencing, which produces short reads a few hundred base-pairs long, third-generation single-molecule technologies generate over 10,000 bp reads or map over 100,000 bp molecules. We analyze how increased read lengths can be used to address longstanding problems in de novo genome assembly, structural variation analysis and haplotype phasing. <s> BIB007
Previous platforms performed sequencing from small DNA fragments-up to 1000 bp on the 454 platform-that were processed and modified according to the reading method, whether a ligation reaction or a synthesis reaction. However, there are new platforms, known as third-generation platforms BIB007 , whose objective is the sequencing of single molecules in real time. These platforms take advantage of advances in the field of nanotechnology, although the way in which they perform the sequencing reaction differs. Their main advantage is that library preparation is not necessary and sequencing reagents are not needed BIB005 BIB006 . The Pacific Biosciences platform was the first to carry out this third-generation sequencing, in which DNA is sequenced by single-molecule real-time (SMRT) synthesis BIB004 ; it is, to this day, the platform most often used to carry out this type of sequencing. Whereas in the sequencing-by-synthesis (SBS) systems seen so far for small fragments the DNA was fixed so that the polymerase could move along it to synthesize the new chain, in the case of the Pacific Biosciences platform it is the polymerase that is fixed to the bottom of an individual picolitre well with a transparent bottom, so that the DNA has mobility (Figure 11 ).
In this platform, the DNA sequencing is carried out thanks to a nanosensor technology called zero-mode waveguide (ZMW) BIB002 , which detects the signal generated by the incorporation of phosphate-labeled nucleotides in the well where a single DNA polymerase replicates the DNA. The sequencing takes place in the ZMW SMRT cell. ZMWs are devices whose size prevents the propagation of light. As a result, visible laser light does not pass through the ZMW sensors, so that labeled nucleotides that have not been incorporated by the polymerase and are located above these sensors do not contribute to the measured signals. These only fluoresce when they are incorporated by the enzyme and diffuse through the sensor's ZMW. The incorporation of each dNTP is continuously monitored with a laser and a camera system that records the signal emitted during the incorporation at the lower part of the ZMW. Imaging is possible thanks to a powerful optical system that illuminates individual ZMWs with red and green laser beams from the bottom of the SMRT cell and a parallel confocal system that detects the fluorescence signal from the incorporated nucleotides BIB003 .
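A toy event caller illustrates the kind of processing such a single-molecule trace implies; it is not Pacific Biosciences' algorithm. In the Python sketch below, each detected pulse is reduced to a (time, duration, channel) triple, and the channel-to-base mapping and minimum pulse width are hypothetical parameters chosen only for the example.

```python
# Toy event caller for a single-molecule real-time trace (illustrative only).
# Each detected pulse is (start_time_s, duration_s, channel); the channel-to-base
# mapping and the minimum pulse width are hypothetical parameters.

CHANNEL_TO_BASE = {0: "A", 1: "C", 2: "G", 3: "T"}  # assumed dye/channel layout
MIN_PULSE_S = 0.02                                  # assumed width threshold

def call_pulses(pulses):
    """Order pulses in time and translate sufficiently long ones into bases."""
    seq = []
    for start, duration, channel in sorted(pulses):
        if duration >= MIN_PULSE_S:
            seq.append(CHANNEL_TO_BASE[channel])
    return "".join(seq)

if __name__ == "__main__":
    trace = [(0.10, 0.05, 2), (0.31, 0.01, 1), (0.40, 0.06, 0), (0.75, 0.04, 3)]
    print(call_pulses(trace))  # -> "GAT" (the 0.01 s pulse is rejected as noise)
```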
DNA Sequencing Sensors: An Overview <s> Oxford Nanopore Platform <s> A nanopore-based device provides single-molecule detection and analytical capabilities that are achieved by electrophoretically driving molecules in solution through a nano-scale pore. The nanopore provides a highly confined space within which single nucleic acid polymers can be analyzed at high throughput by one of a variety of means, and the perfect processivity that can be enforced in a narrow pore ensures that the native order of the nucleobases in a polynucleotide is reflected in the sequence of signals that is detected. Kilobase length polymers (single-stranded genomic DNA or RNA) or small molecules (e.g., nucleosides) can be identified and characterized without amplification or labeling, a unique analytical capability that makes inexpensive, rapid DNA sequencing a possibility. Further research and development to overcome current challenges to nanopore identification of each successive nucleotide in a DNA strand offers the prospect of 'third generation' instruments that will sequence a diploid mammalian genome for ∼$1,000 in ∼24 h. <s> BIB001 </s> DNA Sequencing Sensors: An Overview <s> Oxford Nanopore Platform <s> A single-molecule method for sequencing DNA that does not require fluorescent labelling could reduce costs and increase sequencing speeds. An exonuclease enzyme might be used to cleave individual nucleotide molecules from the DNA, and when coupled to an appropriate detection system, these nucleotides could be identified in the correct order. Here, we show that a protein nanopore with a covalently attached adapter molecule can continuously identify unlabelled nucleoside 5'-monophosphate molecules with accuracies averaging 99.8%. Methylated cytosine can also be distinguished from the four standard DNA bases: guanine, adenine, thymine and cytosine. The operating conditions are compatible with the exonuclease, and the kinetic data show that the nucleotides have a high probability of translocation through the nanopore and, therefore, of not being registered twice. This highly accurate tool is suitable for integration into a system for sequencing nucleic acids and for analysing epigenetic modifications. <s> BIB002 </s> DNA Sequencing Sensors: An Overview <s> Oxford Nanopore Platform <s> The prospect of nanopores as a next-generation sequencing (NGS) platform has been a topic of growing interest and considerable government-sponsored research for more than a decade. Oxford Nanopore Technologies recently announced the first commercial nanopore sequencing devices, to be made available by the end of 2012, while other companies (Life, Roche, IBM) are also pursuing nanopore sequencing approaches. In this paper, the state of the art in nanopore sequencing is reviewed, focusing on the most recent contributions that have or promise to have NGS commercial potential. We consider also the scalability of the circuitry to support multichannel arrays of nanopores in future sequencing devices, which is critical to commercial viability. <s> BIB003 </s> DNA Sequencing Sensors: An Overview <s> Oxford Nanopore Platform <s> Much tremendous break through have been obtained in recent years for nanopore sequencing to achieve the goal of $1000 genome. As a method of single molecule sequencing, nanopore sequencing can discriminate the individual molecules of the target DNA strand rapidly due to the current blockages by translocating the nucleotides through a nano-scale pore. 
Both the protein-pores and solid-state nanopore channels which called single nanopore sequencing have been studied widely for the application of nanopore sequencing technology. This review will give a detail representation to protein nanopore and solid-state nanopore sequencing. For protein nanopore sequencing technology, we will introduce different nanopore types, device assembly and some challenges still exist at present. We will focus on more research fields for solid-state nanopore sequencing in terms of materials, device assembly, fabricated methods, translocation process and some specific challenges. The review also covers some of the technical advances in the union nanopore sequencing, which include nanopore sequencing combine with exonuclease, hybridization, synthesis and design polymer. <s> BIB004 </s> DNA Sequencing Sensors: An Overview <s> Oxford Nanopore Platform <s> A DNA biosensor for detection of specific oligonucleotides sequences of Plum Pox Virus (PPV) in plant extracts and buffer is proposed. The working principles of a genosensor are based on the ion-channel mechanism. The NH2-ssDNA probe was deposited onto a glassy carbon electrode surface to form an amide bond between the carboxyl group of oxidized electrode surface and amino group from ssDNA probe. The analytical signals generated as a result of hybridization were registered in Osteryoung square wave voltammetry in the presence of [Fe(CN)6]3−/4− as a redox marker. The 22-mer and 42-mer complementary ssDNA sequences derived from PPV and DNA samples from plants infected with PPV were used as targets. Similar detection limits of 2.4 pM (31.0 pg/mL) and 2.3 pM (29.5 pg/mL) in the concentration range 1–8 pM were observed in the presence of the 22-mer ssDNA and 42-mer complementary ssDNA sequences of PPV, respectively. The genosensor was capable of discriminating between samples consisting of extracts from healthy plants and leaf extracts from infected plants in the concentration range 10–50 pg/mL. The detection limit was 12.8 pg/mL. The genosensor displayed good selectivity and sensitivity. The 20-mer partially complementary DNA sequences with four complementary bases and DNA samples from healthy plants used as negative controls generated low signal. <s> BIB005 </s> DNA Sequencing Sensors: An Overview <s> Oxford Nanopore Platform <s> Oxford Nanopore's third-generation single-molecule sequencing platform promises to decrease costs for reagents and instrumentation. After a 2-year hiatus following the initial announcement, the first devices have been released as part of an early access program. We explore the performance of this platform by resequencing the lambda phage genome, and amplicons from a snake venom gland transcriptome. Although the handheld MinION sequencer can generate more than 150 megabases of raw data in one run, at most a quarter of the resulting reads map to the reference, with less than average 10% identity. Much of the sequence consists of insertion/deletion errors, or is seemingly without similarity to the template. Using the lambda phage data as an example, although the reads are long, averaging 5 kb, at best 890 ± 1932 bases per mapped read could be matched to the reference without soft clipping. In the course of a 36 h run on the MinION, it was possible to resequence the 48 kb lambda phage reference at 16× coverage. Currently, substantially larger projects would not be feasible using the MinION. 
Without increases in accuracy, which would be required for applications such as genome scaffolding and phasing, the current utility of the MinION appears limited. Library preparation requires access to a molecular laboratory, and is of similar complexity and cost to that of other next-generation sequencing platforms. The MinION is an exciting step in a new direction for single-molecule sequencing, though it will require dramatic decreases in error rates before it lives up to its promise. <s> BIB006 </s> DNA Sequencing Sensors: An Overview <s> Oxford Nanopore Platform <s> The “$1,000 Genome” project has been drawing increasing attention since its launch a decade ago. Nanopore sequencing, the third-generation, is believed to be one of the most promising sequencing technologies to reach four gold standards set for the “$1,000 Genome” while the second-generation sequencing technologies are bringing about a revolution in life sciences, particularly in genome sequencing-based personalized medicine. Both of protein and solid-state nanopores have been extensively investigated for a series of issues, from detection of ionic current blockage to field-effect-transistor (FET) sensors. A newly released protein nanopore sequencer has shown encouraging potential that nanopore sequencing will ultimately fulfill the gold standards. In this review, we address advances, challenges, and possible solutions of nanopore sequencing according to these standards. <s> BIB007 </s> DNA Sequencing Sensors: An Overview <s> Oxford Nanopore Platform <s> A long-held goal in sequencing has been to use a voltage-biased nanoscale pore in a membrane to measure the passage of a linear, single-stranded (ss) DNA or RNA molecule through that pore. With the development of enzyme-based methods that ratchet polynucleotides through the nanopore, nucleobase-by-nucleobase, measurements of changes in the current through the pore can now be decoded into a DNA sequence using an algorithm. In this Historical Perspective, we describe the key steps in nanopore strand-sequencing, from its earliest conceptualization more than 25 years ago to its recent commercialization and application. <s> BIB008
Another platform being used for real-time DNA sequencing from a single molecule is the Oxford Nanopore platform, which uses nanosensors that form channel structures and carry the sample to a sensor that allows the detection of each nucleotide residue present in the DNA strand BIB001 BIB007 . This technology is based on creating pores that separate two compartments BIB003 . Similar structures are being used for the detection of specific DNA sequences BIB005 . In this case, it is the molecule traversing the pore that causes a temporary change in the potential between the two compartments, and this change allows its identification (Figure 12 ). Thus, instead of using a secondary signal such as light, color, or pH to detect the DNA being read, the nanopore platform directly detects the composition of a DNA template BIB006 . The DNA molecule crosses the pore thanks to the action of a secondary motor protein, producing an alteration in the potential between both sides of the pore. These shifts in voltage are characteristic of each DNA sequence BIB002 BIB008 . The variation observed in the measured voltage is a consequence not only of the potential change produced by the passage of a DNA fragment, but also of its duration, so that this measure can be interpreted as a particular k-mer sequence. A flow cell is composed of an application-specific integrated circuit (ASIC) chip, and each one has 512 individual channels that are capable of sequencing more than 60 bp per second. Since this technology uses unmodified DNA, it has the advantage of yielding results very quickly from minimal starting quantities. At present, this platform works with both nanopores obtained from genetically engineered proteins and fully synthetic nanopores BIB004 .
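In practice, the mapping from raw pore current to sequence described above is performed by sophisticated basecallers (HMM- or neural-network-based). The following Python sketch is only a toy illustration of the underlying idea: segment the current trace into events and assign each event to the closest entry in a k-mer level table. The k-mer table, the current values, and all thresholds are invented for illustration and do not correspond to any real pore model.

```python
import numpy as np

# Hypothetical table of mean current levels (pA) for a few 3-mers.
# Real pore models contain all 4^k k-mers; these values are invented.
KMER_LEVELS = {"AAA": 52.0, "AAC": 61.5, "ACT": 70.2, "CTG": 48.3, "TGA": 66.9}

def segment_events(trace, min_len=5, jump=6.0):
    """Split a raw current trace into events at abrupt level changes."""
    events, start = [], 0
    for i in range(1, len(trace)):
        if abs(trace[i] - trace[i - 1]) > jump and i - start >= min_len:
            events.append(trace[start:i])
            start = i
    events.append(trace[start:])
    return events

def call_kmers(trace):
    """Assign each event to the k-mer whose model level is closest."""
    calls = []
    for ev in segment_events(trace):
        level = float(np.median(ev))
        kmer = min(KMER_LEVELS, key=lambda k: abs(KMER_LEVELS[k] - level))
        calls.append((kmer, level))
    return calls

if __name__ == "__main__":
    # Synthetic trace: three noisy plateaus around known model levels.
    rng = np.random.default_rng(0)
    trace = np.concatenate([rng.normal(mu, 1.0, 40) for mu in (52.0, 70.2, 48.3)])
    for kmer, level in call_kmers(trace):
        print(f"event level {level:5.1f} pA -> {kmer}")
```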
Figure 12 . Channels used by the Oxford Nanopore platform to sequence DNA. The passage of DNA through the nanopore produces alterations that are measured thanks to the detected voltage changes.

The first prototype of this platform was the MinION, launched on the market in 2014. It attracted a lot of attention because of its small size and simplicity. Another advantage is that it does not determine the DNA sequence through secondary elements such as light or H+ concentration, but does so directly and in real time. However, despite the promising potential brought by the launch of real-time single-molecule sequencing platforms, these still have the great disadvantage of low accuracy. In the future it will be necessary to combine the best aspects of each platform to obtain a sequencer that offers complete genome sequences quickly, cheaply, and simply. At that point, terms like personalized diagnosis, genomic medicine, or completely individualized medical treatment will become a reality.
DNA Sequencing Sensors: An Overview <s> Conclusions <s> Over the past two decades, molecularly imprinted polymers (MIPs) have attracted broad interest from scientists engaged in sensor development. This attention can be explained by the serious potentia ... <s> BIB001 </s> DNA Sequencing Sensors: An Overview <s> Conclusions <s> Electrochemistry-based sensors offer sensitivity, selectivity and low cost for the detection of selected DNA sequences or mutated genes associated with human disease. DNA-based electrochemical sensors exploit a range of different chemistries, but all take advantage of nanoscale interactions between the target in solution, the recognition layer and a solid electrode surface. Numerous approaches to electrochemical detection have been developed, including direct electrochemistry of DNA, electrochemistry at polymer-modified electrodes, electrochemistry of DNA-specific redox reporters, electrochemical amplifications with nanoparticles, and electrochemical devices based on DNA-mediated charge transport chemistry. <s> BIB002 </s> DNA Sequencing Sensors: An Overview <s> Conclusions <s> Graphene, emerging as a true 2-dimensional material, has received increasing attention due to its unique physicochemical properties (high surface area, excellent conductivity, high mechanical strength, and ease of functionalization and mass production). This article selectively reviews recent advances in graphene-based electrochemical sensors and biosensors. In particular, graphene for direct electrochemistry of enzyme, its electrocatalytic activity toward small biomolecules (hydrogen peroxide, NADH, dopamine, etc.), and graphenebased enzyme biosensors have been summarized in more detail; Graphene-based DNA sensing and environmental analysis have been discussed. Future perspectives in this rapidly developing field are also discussed. <s> BIB003 </s> DNA Sequencing Sensors: An Overview <s> Conclusions <s> Ensuring food safety is nowadays a top priority of authorities and professional players in the food supply chain. One of the key challenges to determine the safety of food and guarantee a high level of consumer protection is the availability of fast, sensitive and reliable analytical methods to identify specific hazards associated to food before they become a health problem. The limitations of existing methods have encouraged the development of new technologies, among them biosensors. Success in biosensor design depends largely on the development of novel receptors with enhanced affinity to the target, while being stable and economical. Aptamers fulfill these characteristics, and thus have surfaced as promising alternatives to natural receptors. This Review describes analytical strategies developed so far using aptamers for the control of pathogens, allergens, adulterants, toxins and other forbidden contaminants to ensure food safety. The main progresses to date are presented, highlighting potential prospects for the future. <s> BIB004
This manuscript aims to highlight the importance of DNA sequencing sensors, describing the state of the art of the topic and presenting the available methods and platforms. Multiple short-read sequencing platforms were analyzed: 454 Roche, AB SOLiD, Complete Genomics, Illumina, GeneReader, and Ion Torrent. Additionally, two single-molecule real-time long-read sequencing platforms were analyzed: Pacific Biosciences and Oxford Nanopore. One can envisage that incoming technologies, already known as fourth-generation sequencing, will continue to lower the cost of trials while increasing DNA read lengths in each run. All of this would be impossible without sensors and detection systems that are becoming smaller and more precise. The evolution of DNA sequencing sensors over the last 40 years reveals some very impressive results and opens up new perspectives for science. Knowing the sequence of a DNA fragment has multiple uses, such as performing phylogenetic studies, diagnosing diseases BIB002 , or controlling pathogens. The tools with which all these applications can be developed are biosensors [58] or DNA sensors. Thanks to these, a particular sequence of genetic material or an enzymatic activity can be detected in a complex sample. To date, a multitude of biosensors have been successfully used in applications such as food safety, for the control of pathogens, allergens, or toxins BIB004 . However, the world of biosensors still faces many challenges. The first is that the test sample usually contains a very low DNA concentration, so prior amplification by PCR or another strategy is necessary. Other problems are nonspecific amplifications, which give rise to false positives, or the technical difficulties inherent in the use of nanomaterials.
In this sense, the use of graphene is being presented as a good solution BIB003 due to its physicochemical properties, such as its excellent conductivity and high mechanical resistance. Another difficulty in the use of biosensors lies in molecular recognition by natural receptors. In this case, the use of Molecularly Imprinted Polymers (MIPs) helps to reduce this problem, providing convenient solutions BIB001 . In any case, the use of biosensors is spreading and is expected to maintain its exponential growth towards a world with ubiquitous sensors. Therefore, the combination of improvements in both the molecular field and the study of new materials seems fundamental.
Background subtraction methods in video streams: A review <s> I. INTRODUCTION <s> Background subtraction is a widely used approach for detecting moving objects from static cameras. Many different methods have been proposed over the recent years and both the novice and the expert can be confused about their benefits and limitations. In order to overcome this problem, this paper provides a review of the main methods and an original categorisation based on speed, memory requirements and accuracy. Such a review can effectively guide the designer to select the most suitable method for a given application in a principled way. Methods reviewed include parametric and non-parametric background density estimates and spatial correlation approaches. <s> BIB001 </s> Background subtraction methods in video streams: A review <s> I. INTRODUCTION <s> AbstractVarious tracking methods have been developed to track objects with different degrees or levels of tracking ability. The ability or performance of each tracking method is dependent on the feature or data that is being used for tracking purpose. The ability of a tracking method can be measured by utilizing tracking metrics to give an indication of the tracking ability of an algorithm. This paper offers some insights into the issues and similarities of performance measurement reporting of video tracking algorithms and proposes a method in assessing the robustness of a video tracking algorithm. The proposed metric introduces another measure to measure the consistency of a tracking algorithm. The work presented in this paper shows that using only one metric to measure the tracking performance is inadequate. The proposed metric presented in this paper shows that the utilization of multiple metrics such as tracking success rate and tracking consistency or robustness would give a better indication of the ... <s> BIB002
Background subtraction is a common technique in image processing and computer vision. It means that the foreground of the image is extracted for further processing. Generally, a region of interest (ROI) in the foreground can contain several objects such as humans, cars, or text. After the image pre-processing step, which may include image de-noising or filtering, object localization is required, and it may make use of this technique. Background subtraction is a widely used method for detecting moving objects in videos captured by static cameras. Moving objects are detected from the difference between the current frame and a reference frame, often called the "background image" or "background model" BIB001 . Background subtraction is mostly performed when the image is part of a video stream. In summary, the main goal of the background subtraction process is: given a frame sequence from a camera, detect all the foreground objects; that is, detect the foreground objects as the difference between the current frame and an image of the static background. Background subtraction is used in many emerging video applications, such as video surveillance (one of today's hottest applications) [51] , traffic monitoring, and gesture recognition for human-machine interfaces, to name a few. In other words, the applications of background subtraction can be divided into four categories: video surveillance, optical motion capture, human-computer interaction, and content-based video coding. There are many methods for background subtraction BIB002 , with different strengths and weaknesses in terms of performance and computational requirements. A robust background subtraction algorithm should be able to handle lighting changes, repetitive motions from clutter, and long-term scene changes.
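As a minimal illustration of the basic idea (current frame compared against a background image, followed by thresholding), the following Python/OpenCV sketch can be used. The file names and the threshold value of 30 are placeholders chosen only for illustration.

```python
import cv2
import numpy as np

# Placeholder file names; substitute frames from any static-camera sequence.
background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# Foreground = pixels whose absolute difference from the background
# model exceeds a threshold (here 30 grey levels, chosen arbitrarily).
diff = cv2.absdiff(frame, background)
_, foreground_mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

# Optional clean-up of the mask with a morphological opening.
kernel = np.ones((3, 3), np.uint8)
foreground_mask = cv2.morphologyEx(foreground_mask, cv2.MORPH_OPEN, kernel)

cv2.imwrite("foreground_mask.png", foreground_mask)
```

In a real video pipeline the background image itself must be estimated and updated over time, which is exactly what the methods reviewed in the following subsections address.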
Background subtraction methods in video streams: A review <s> 1) Median Filtering: <s> We present a new approach to the tracking of very non rigid patterns of motion, such as water flowing down a stream. The algorithm is based on a "disturbance map", which is obtained by linearly subtracting the temporal average of the previous frames from the new frame. Every local motion creates a disturbance having the form of a wave, with a "head" at the present position of the motion and a historical "tail" that indicates the previous locations of that motion. These disturbances serve as loci of attraction for "tracking particles" that are scattered throughout the image. The algorithm is very fast and can be performed in real time. We provide excellent tracking results on various complex sequences, using both stabilized and moving cameras, showing: a busy ant column, waterfalls, rapids and, flowing streams, shoppers in a mall, and cars in a traffic intersection. <s> BIB001 </s> Background subtraction methods in video streams: A review <s> 1) Median Filtering: <s> Background subtraction methods are widely exploited for moving object detection in videos in many applications, such as traffic monitoring, human motion capture, and video surveillance. How to correctly and efficiently model and update the background model and how to deal with shadows are two of the most distinguishing and challenging aspects of such approaches. The article proposes a general-purpose method that combines statistical assumptions with the object-level knowledge of moving objects, apparent objects (ghosts), and shadows acquired in the processing of the previous frames. Pixels belonging to moving objects, ghosts, and shadows are processed differently in order to supply an object-based selective update. The proposed approach exploits color information for both background subtraction and shadow detection to improve object segmentation and background update. The approach proves fast, flexible, and precise in terms of both pixel accuracy and reactivity to background changes. <s> BIB002 </s> Background subtraction methods in video streams: A review <s> 1) Median Filtering: <s> Background subtraction is a widely used approach for detecting moving objects from static cameras. Many different methods have been proposed over the recent years and both the novice and the expert can be confused about their benefits and limitations. In order to overcome this problem, this paper provides a review of the main methods and an original categorisation based on speed, memory requirements and accuracy. Such a review can effectively guide the designer to select the most suitable method for a given application in a principled way. Methods reviewed include parametric and non-parametric background density estimates and spatial correlation approaches. <s> BIB003
Median filtering is one of the most commonly used background subtraction algorithms. The background model is estimated by computing, for each pixel, the median of its values over a buffer of previous frames, so that an object is only absorbed into the background after it has remained static for more than half of the stored frames. The benefits of this method are its simple construction, very fast processing, and ease of use. The model and background are not fixed; they change over time. The drawbacks of this approach come from two important factors: it fails to track targets in front of animated backgrounds, and its accuracy depends on the speed of the target and on the frame rate BIB002 .

2) Frame Difference: One of the simplest background subtraction methods is frame difference. This method considers the previous frame as the background, so the target is determined by subtracting the current frame from this background model BIB001 . The (absolute) frame difference at time t + 1 is computed with the background assumed to be the frame at time t. This difference image only shows some intensity at the pixel locations which have changed between the two frames, so the background has seemingly been removed. This approach only works for cases where all foreground pixels are moving and all background pixels are static BIB003 .
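The two simple schemes just described can be summarized in a few lines of Python/NumPy. Here `frames` is assumed to be a list of grayscale frames from a static camera, and the threshold of 25 grey levels is an arbitrary illustrative value.

```python
import numpy as np

def median_background(frames):
    """Background model: per-pixel median over a buffer of frames."""
    stack = np.stack(frames).astype(np.float32)
    return np.median(stack, axis=0)

def subtract(frame, background, thresh=25):
    """Binary foreground mask from the absolute difference."""
    diff = np.abs(frame.astype(np.float32) - background)
    return (diff > thresh).astype(np.uint8) * 255

def frame_difference(prev_frame, curr_frame, thresh=25):
    """Simplest scheme: the previous frame acts as the background model."""
    return subtract(curr_frame, prev_frame.astype(np.float32), thresh)
```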
Background subtraction methods in video streams: A review <s> 7) <s> We present a new approach to the tracking of very non rigid patterns of motion, such as water flowing down a stream. The algorithm is based on a "disturbance map", which is obtained by linearly subtracting the temporal average of the previous frames from the new frame. Every local motion creates a disturbance having the form of a wave, with a "head" at the present position of the motion and a historical "tail" that indicates the previous locations of that motion. These disturbances serve as loci of attraction for "tracking particles" that are scattered throughout the image. The algorithm is very fast and can be performed in real time. We provide excellent tracking results on various complex sequences, using both stabilized and moving cameras, showing: a busy ant column, waterfalls, rapids and, flowing streams, shoppers in a mall, and cars in a traffic intersection. <s> BIB001 </s> Background subtraction methods in video streams: A review <s> 7) <s> Identifying moving objects from a video sequence is a fundamental and ::: critical task in many computer-vision applications. A common approach ::: is to perform background subtraction, which identifies moving objects ::: from the portion of a video frame that differs significantly from a ::: background model. There are many challenges in developing a good ::: background subtraction algorithm. First, it must be robust against ::: changes in illumination. Second, it should avoid detecting ::: non-stationary background objects such as swinging leaves, rain, snow, ::: and shadow cast by moving objects. Finally, its internal background ::: model should react quickly to changes in background such as starting ::: and stopping of vehicles. In this paper, we compare various background subtraction algorithms for detecting moving vehicles and pedestrians in urban traffic video sequences. We consider approaches varying from simple techniques such as frame differencing and adaptive median filtering, to more sophisticated probabilistic modeling techniques. While complicated techniques often produce superior performance, our experiments show that simple techniques such as adaptive median filtering can produce good results with much lower computational complexity. <s> BIB002 </s> Background subtraction methods in video streams: A review <s> 7) <s> AbstractVarious tracking methods have been developed to track objects with different degrees or levels of tracking ability. The ability or performance of each tracking method is dependent on the feature or data that is being used for tracking purpose. The ability of a tracking method can be measured by utilizing tracking metrics to give an indication of the tracking ability of an algorithm. This paper offers some insights into the issues and similarities of performance measurement reporting of video tracking algorithms and proposes a method in assessing the robustness of a video tracking algorithm. The proposed metric introduces another measure to measure the consistency of a tracking algorithm. The work presented in this paper shows that using only one metric to measure the tracking performance is inadequate. The proposed metric presented in this paper shows that the utilization of multiple metrics such as tracking success rate and tracking consistency or robustness would give a better indication of the ... 
<s> BIB003 </s> Background subtraction methods in video streams: A review <s> 7) <s> Background subtraction is one of the key techniques for automatic video analysis, especially in the domain of video surveillance. Although its importance, evaluations of recent background subtraction methods with respect to the challenges of video surveillance suffer from various shortcomings. To address this issue, we first identify the main challenges of background subtraction in the field of video surveillance. We then compare the performance of nine background subtraction methods with post-processing according to their ability to meet those challenges. Therefore, we introduce a new evaluation data set with accurate ground truth annotations and shadow masks. This enables us to provide precise in-depth evaluation of the strengths and drawbacks of background subtraction methods. <s> BIB004
Kalman Filtering: This technique is one of the most well-known recursive methods. If we assume that the intensity values of the pixels in the image follow a normal distribution N(μ, σ²), simple adaptive filters are responsible for updating the mean and variance of the background model to compensate for illumination changes and to include objects with long stops in the background model. Background estimation using Kalman filtering has been explained in . The main difference between the proposed variants is the state space used for the tracking process; the simplest ones are those based only on the luminance BIB001 , BIB003 .

8) Hidden Markov Models: All of the above models are able to adapt to gradual changes in lighting. However, if a remarkable amount of intensity change occurs, they all encounter serious problems. Another method, which is able to model the variations in pixel intensity, is the Hidden Markov Model. It models these variations as discrete states based on modes of the environment, for instance cloudy/sunny skies or lights on/off. A three-state HMM has been used for modelling the intensity of a pixel in traffic-monitoring applications BIB002 , BIB004 .
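A minimal per-pixel sketch of this recursive idea is given below in Python/NumPy. It is a simplified, scalar Kalman-style (running Gaussian) update of the background mean and variance, not the full formulation of any particular paper, and the gain, initial variance, and threshold values are illustrative.

```python
import numpy as np

class RecursiveGaussianBackground:
    """Per-pixel background mean/variance updated with a constant gain,
    a simplified Kalman-style (running Gaussian) scheme."""

    def __init__(self, first_frame, gain=0.05, k=2.5):
        self.mean = first_frame.astype(np.float32)
        self.var = np.full_like(self.mean, 15.0 ** 2)  # initial variance guess
        self.gain = gain   # adaptation rate (illustrative value)
        self.k = k         # foreground threshold in standard deviations

    def apply(self, frame):
        frame = frame.astype(np.float32)
        diff = frame - self.mean
        foreground = diff ** 2 > (self.k ** 2) * self.var
        # Update the model (here every pixel is updated; selective updates
        # that skip foreground pixels are a common variant).
        self.mean += self.gain * diff
        self.var = (1 - self.gain) * self.var + self.gain * diff ** 2
        return foreground.astype(np.uint8) * 255
```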
Background subtraction methods in video streams: A review <s> B. Statistical Methods <s> A common method for real-time segmentation of moving regions in image sequences involves "background subtraction", or thresholding the error between an estimate of the image without moving objects and the current image. The numerous approaches to this problem differ in the type of background model used and the procedure used to update the model. This paper discusses modeling each pixel as a mixture of Gaussians and using an on-line approximation to update the model. The Gaussian, distributions of the adaptive mixture model are then evaluated to determine which are most likely to result from a background process. Each pixel is classified based on whether the Gaussian distribution which represents it most effectively is considered part of the background model. This results in a stable, real-time outdoor tracker which reliably deals with lighting changes, repetitive motions from clutter, and long-term scene changes. This system has been run almost continuously for 16 months, 24 hours a day, through rain and snow. <s> BIB001 </s> Background subtraction methods in video streams: A review <s> B. Statistical Methods <s> Real-time segmentation of moving regions in image sequences is a fundamental step in many vision systems including automated visual surveillance, human-machine interface, and very low-bandwidth telecommunications. A typical method is background subtraction. Many background models have been introduced to deal with different problems. One of the successful solutions to these problems is to use a multi-colour background model per pixel proposed by Grimson et al [1, 2,3]. However, the method suffers from slow learning at the beginning, especially in busy environments. In addition, it can not distinguish between moving shadows and moving objects. This paper presents a method which improves this adaptive background mixture model. By reinvestigating the update equations, we utilise different equations at different phases. This allows our system learn faster and more accurately as well as adapts effectively to changing environment. A shadow detection scheme is also introduced in this paper. It is based on a computational colour space that makes use of our background model. A comparison has been made between the two algorithms. The results show the speed of learning and the accuracy of the model using our update algorithm over the Grimson et al’s tracker. When incorporate with the shadow detection, our method results in far better segmentation than The Thirteenth Conference on Uncertainty in Artificial Intelligence that of Grimson et al. <s> BIB002 </s> Background subtraction methods in video streams: A review <s> B. Statistical Methods <s> Mixture of Gaussians is a widely used approach for background modeling to detect moving objects from static cameras. Numerous improvements of the original method developed by Stauffer and Grimson [1] have been proposed over the recent years and the purpose of this paper is to provide a survey and an original classification of these improvements. We also discuss relevant issues to reduce the computation time. Firstly, the original MOG are reminded and discussed following the challenges met in video sequences. Then, we categorize the different improvements found in the literature. We have classified them in term of strategies used to improve the original MOG and we have discussed them in term of the critical situations they claim to handle. 
After analyzing the strategies and identifying their limitations, we conclude with several promising directions for future research. <s> BIB003 </s> Background subtraction methods in video streams: A review <s> B. Statistical Methods <s> Locating moving objects in a video sequence is the first step of many computer vision applications. Among the various motion-detection techniques, background subtraction methods are commonly implemented, especially for applications relying on a fixed camera. Since the basic inter-frame difference with global threshold is often a too simplistic method, more elaborate (and often probabilistic) methods have been proposed. These methods often aim at making the detection process more robust to noise, background motion and camera jitter. In this paper, we present commonly-implemented background subtraction algorithms and we evaluate them quantitatively. In order to gauge performances of each method, tests are performed on a wide range of real, synthetic and semi-synthetic video sequences representing different challenges. <s> BIB004 </s> Background subtraction methods in video streams: A review <s> B. Statistical Methods <s> In this paper we present a novel method for foreground segmentation. Our proposed approach follows a non-parametric background modeling paradigm, thus the background is modeled by a history of recently observed pixel values. The foreground decision depends on a decision threshold. The background update is based on a learning parameter. We extend both of these parameters to dynamic per-pixel state variables and introduce dynamic controllers for each of them. Furthermore, both controllers are steered by an estimate of the background dynamics. In our experiments, the proposed Pixel-Based Adaptive Segmenter (PBAS) outperforms most state-of-the-art methods. <s> BIB005
Modelling the background with a single image, as in the basic methods, requires a rigorously fixed background void of noise and artifacts. Since this requirement cannot be satisfied in every real-life scenario, many models describe each background pixel with a probability density function (PDF) learned over a series of training frames. The statistical methods using one Gaussian have two variants: the Gaussian Average proposed by Wren BIB002 , and the Simple Gaussian of Benezeth and his colleagues; a single Gaussian does not cope with multimodal backgrounds BIB004 . Many researchers have worked on statistical methods using multiple Gaussians, known as the Gaussian Mixture Model (GMM). Some of this research was done by Stauffer and Grimson BIB001 , KaewTraKulPong and Bowden BIB003 , Zivkovic , and Baf et al. BIB005 . To account for backgrounds made of animated textures (such as waves on the water or trees shaken by the wind), some authors proposed the use of multimodal PDFs, such as Stauffer and Grimson's method BIB001 .
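For the GMM family, OpenCV ships a ready-made implementation of Zivkovic's variant (MOG2), so a usable baseline needs only a few lines. The video file name and the parameter values below are placeholders.

```python
import cv2

# Zivkovic's adaptive GMM background subtractor as implemented in OpenCV.
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=16, detectShadows=True)

cap = cv2.VideoCapture("traffic.avi")  # placeholder video path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Pixels classified as foreground are 255, shadows 127, background 0.
    mask = subtractor.apply(frame)
    cv2.imshow("foreground", mask)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```

Setting detectShadows=True marks shadow pixels with the value 127, so they can be removed from the final mask with a simple threshold if desired.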
Background subtraction methods in video streams: A review <s> C. Fuzzy Based Methods <s> AbstractIn this paper, reported algorithms for the removal of fog are reviewed. Fog reduces the visibility of scene and thus performance of various computer vision algorithms which use feature information. Formation of fog is the function of the depth. Estimation of depth information is under constraint problem if single image is available. Hence, removal of fog requires assumptions or prior information. Fog removal algorithms estimate the depth information with various assumptions, which are discussed in detail here. Fog removal algorithm has a wide application in tracking and navigation, consumer electronics, and entertainment industries. <s> BIB001 </s> Background subtraction methods in video streams: A review <s> C. Fuzzy Based Methods <s> In this paper we present a novel method for foreground segmentation. Our proposed approach follows a non-parametric background modeling paradigm, thus the background is modeled by a history of recently observed pixel values. The foreground decision depends on a decision threshold. The background update is based on a learning parameter. We extend both of these parameters to dynamic per-pixel state variables and introduce dynamic controllers for each of them. Furthermore, both controllers are steered by an estimate of the background dynamics. In our experiments, the proposed Pixel-Based Adaptive Segmenter (PBAS) outperforms most state-of-the-art methods. <s> BIB002 </s> Background subtraction methods in video streams: A review <s> C. Fuzzy Based Methods <s> Based on Type-2 Fuzzy Gaussian Mixture Model (T2-FGMM) and Markov Random Field (MRF), we propose a novel background modeling method for motion detection in dynamic scenes. The key idea of the proposed approach is the successful introduction of the spatial-temporal constraints into the T2-FGMM by a Bayesian framework. The evaluation results in pixel level demonstrate that the proposed method performs better than the sound Gaussian Mixture Model (GMM) and T2-FGMM in such typical dynamic backgrounds as waving trees and water rippling. <s> BIB003
Fuzzy logic depends on fuzzy set theory, which is a development of the classical set theory studied by Georg Cantor. In contrast to classical theory, fuzzy logic can deal with words of natural human language, such as "small", "large", or "almost equal to" BIB001 . Fuzzy-based techniques include three categories. First, Zhang and Xu worked on the Fuzzy Sugeno Integral with Adaptive-Selective Update . Next, Baf et al. BIB002 proposed a Fuzzy Choquet Integral with Adaptive-Selective Update. Finally, the Fuzzy Gaussian of Sigari et al. was proposed. In the same year, Baf et al. proposed both the Type-2 Fuzzy GMM-UM and GMM-UV methods. Zhao and his colleagues suggested the Type-2 Fuzzy GMM-UM and GMM-UV with MRF BIB003 .
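To illustrate the general flavor of these fuzzy-integral approaches, the Python sketch below fuses two per-pixel similarity measures (color and texture) with a Sugeno integral and thresholds the result. It is a generic toy illustration, not the actual formulation of Zhang and Xu or Baf et al.; the fuzzy measure values, the feature names, and the threshold are invented.

```python
import numpy as np

# Fuzzy measure over subsets of the criteria {color, texture}; the values
# are invented for illustration and chosen to satisfy monotonicity.
G = {frozenset(): 0.0,
     frozenset({"color"}): 0.6,
     frozenset({"texture"}): 0.5,
     frozenset({"color", "texture"}): 1.0}

def sugeno_integral(scores):
    """Sugeno integral of per-criterion similarity scores in [0, 1]."""
    items = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best, subset = 0.0, frozenset()
    for name, value in items:
        subset = subset | {name}
        best = max(best, min(value, G[subset]))
    return best

def is_background(color_sim, texture_sim, thresh=0.5):
    """Pixel kept as background if the fused similarity is high enough."""
    fused = sugeno_integral({"color": color_sim, "texture": texture_sim})
    return fused >= thresh

if __name__ == "__main__":
    print(is_background(0.9, 0.8))   # similar to the model -> background (True)
    print(is_background(0.2, 0.3))   # dissimilar -> foreground (False)
```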
Background subtraction methods in video streams: A review <s> D. Non-Parametric Methods <s> Background modeling is an important component of many vision systems. Existing work in the area has mostly addressed scenes that consist of static or quasi-static structures. When the scene exhibits a persistent dynamic behavior in time, such an assumption is violated and detection performance deteriorates. In this paper, we propose a new method for the modeling and subtraction of such scenes. Towards the modeling of the dynamic characteristics, optical flow is computed and utilized as a feature in a higher dimensional space. Inherent ambiguities in the computation of features are addressed by using a data-dependent bandwidth for density estimation using kernels. Extensive experiments demonstrate the utility and performance of the proposed approach. <s> BIB001 </s> Background subtraction methods in video streams: A review <s> D. Non-Parametric Methods <s> Metrology of vehicle trajectories has several applications in the field of road safety, particularly in dangerous curves. Actually, it is of great interest to observe trajectories of vehicles with the aim of designing a real time driver warning device in dangerous areas. This paper addresses the first step of a work with a video system placed along the road with the objective of vehicle's position and speed estimation. This system has been totally developed for this project and can record simultaneously three cameras with 640 times 480 pixels up to 30 frames per second (fps) and rangefinder informations. The best contribution of this paper is an original probabilistic background subtraction algorithm, first step of a global method (calibration, tracking, ...) implemented to be able to measure vehicle trajectories. Kinematic GPS (in post-processing) has been extensively used to get ground truth <s> BIB002 </s> Background subtraction methods in video streams: A review <s> D. Non-Parametric Methods <s> For a responsive audio art installation in a skylit atrium, we introduce a single-camera statistical segmentation and tracking algorithm. The algorithm combines statistical background image estimation, per-pixel Bayesian segmentation, and an approximate solution to the multi-target tracking problem using a bank of Kalman filters and Gale-Shapley matching. A heuristic confidence model enables selective filtering of tracks based on dynamic data. We demonstrate that our algorithm has improved recall and F 2 -score over existing methods in OpenCV 2.1 in a variety of situations. We further demonstrate that feedback between the tracking and the segmentation systems improves recall and F 2 -score. The system described operated effectively for 5–8 hours per day for 4 months; algorithms are evaluated on video from the camera installed in the atrium. Source code and sample data is open source and available in OpenCV. <s> BIB003
Elgammal and his co-workers proposed the Kernel Density Estimation (KDE) algorithm. An unstructured approach can also be used to model a multimodal PDF: in this perspective, Elgammal et al. BIB001 proposed a Parzen-window estimate at each background pixel. The drawbacks of this method are its memory requirement (n * size(frame)) and the time needed to compute the kernel values (mitigated by a look-up-table approach). More sophisticated methods can also be envisaged, such as that of Mittal and Paragios BIB002 , which is based on variable bandwidth kernels. Goyat et al. worked on VuMeter , Hofmann BIB003 proposed the Pixel-Based Adaptive Segmenter (PBAS), and Godbehere et al. studied GMG.
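A minimal per-pixel KDE test over a buffer of recent samples, with a Gaussian kernel, can be sketched in Python/NumPy as follows. This is only an illustration of the Parzen-window idea, not the exact formulation of Elgammal et al.; the bandwidth, threshold, and synthetic data are invented.

```python
import numpy as np

def kde_foreground(frame, samples, bandwidth=10.0, thresh=1e-3):
    """Per-pixel KDE background test.

    frame   : (H, W) grayscale frame
    samples : (N, H, W) buffer of recent background samples per pixel
    Returns a boolean mask (True = foreground).
    """
    frame = frame.astype(np.float32)
    samples = samples.astype(np.float32)
    # Gaussian kernel evaluated against every stored sample of each pixel.
    diff = frame[None, :, :] - samples
    kernel = np.exp(-0.5 * (diff / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    density = kernel.mean(axis=0)   # Parzen-window estimate per pixel
    return density < thresh         # low density -> unlikely to be background

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    history = rng.normal(120, 5, size=(30, 4, 4))   # synthetic background buffer
    test = history[0].copy()
    test[0, 0] = 200                                # one clearly changed pixel
    print(kde_foreground(test, history))
```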
Background subtraction methods in video streams: A review <s> G. Methods Based on Eigen Features <s> A fast robust eigen-background update algorithm is proposed for foreground object detection. The update procedure involves no eigen decomposition, thus faster than former eigen-background based algorithms. Meanwhile, the algorithm can robustly maintain the desired background model, resistant to outlying objects. <s> BIB001 </s> Background subtraction methods in video streams: A review <s> G. Methods Based on Eigen Features <s> The paper presents a neural network based segmentation method which can extract moving objects in video. This proposed neural network architecture is multilayer so as to match the complexity of the frames in a video stream and deal with the problems of segmentation. The neural network combines inputs that exploit spatio-temporal correlation among pixels. Each of these unit themselves produce imperfect results, but the neural network learns to combine their results for better overall segmentation, even though it is trained with noisy results from a simpler method. The proposed algorithm converges from an initial stage where all the pixels are considered to be part of the background to a stage where only the appropriate pixels are classified as background. Results are shown to demonstrate the efficacy of the method compared to a more memory intensive MoG method. <s> BIB002
Eigen background / SL-PCA was proposed by Oliver et al. BIB001 . The key element of this method lies in its ability to learn the background model from unconstrained video sequences, even when they contain moving foreground objects. Furthermore, PCA can be applied to a sequence of n frames to compute the Eigen backgrounds, and it is faster than a Mixture of Gaussians approach.

III. CHALLENGES OF BACKGROUND SUBTRACTION FOR VIDEO SURVEILLANCE

Background subtraction methods have to deal with various challenges due to the nature of video surveillance. Many of the standard background subtraction challenges have been studied in the literature before [51] ; we refer to the work of Bouwmans et al. for a comprehensive study. For instance, we bring up the following challenges:
• Gradual or sudden illumination changes: It is necessary to adapt BS methods to gradual changes of the environment.
• Dynamic background: Some parts of the video may contain moving objects that should nevertheless be regarded as background. Such movement can be irregular or periodical, like waving trees.
• Bootstrapping: If initialization data free from foreground objects is not available, the background model has to be initialized using a bootstrapping strategy .
• Video noise: The video signal is generally superimposed by noise. BS approaches for video surveillance have to cope with degraded signals affected by different types of noise, such as sensor noise or compression artifacts .
• Camouflage: Deliberately or not, some objects in a video may differ only slightly from the appearance of the background, which leads to incorrect classification. This is an especially important case in surveillance applications.

IV. DISCUSSION

Recently, Tian et al. BIB002 proposed a selective Eigen background modelling and subtraction method that remains robust in crowded scenes. Three "selectivity" mechanisms are integrated into their method: selective training, selective model initialization, and pixel-level selective reconstruction. They used three Eigen background algorithms (C-EigenBg, BS-EigenBg, PS-EigenBgNVF) and compared the results with non-Eigen background algorithms such as GMM, Bayes, Codebook, PBAS, and ViBe. As can be seen in the video, the Luque method fails to segment the foreground objects effectively; MoG provides better results than the Luque method, but the proposed method gives the best overall results, as Fig. 5 illustrates. Y. Benezeth and his co-workers tested the BS algorithms on groups of videos illustrating different scenarios and thus different challenges. As can be seen from their Precision/Recall curves, the MinMax method is slightly less effective than the others, mostly because it works exclusively on grayscale data, thus ignoring colour.
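To make the eigen-background idea discussed above concrete, the following Python/NumPy sketch learns a small eigenspace from training frames, reconstructs each new frame from it, and labels poorly reconstructed pixels as foreground. It is a generic SL-PCA illustration under stated assumptions, not the exact formulation of Oliver et al. or Tian et al.; the number of components and the residual threshold are illustrative.

```python
import numpy as np

def train_eigenbackground(frames, n_components=5):
    """Learn the mean image and top eigenvectors from (N, H, W) training frames."""
    n, h, w = frames.shape
    data = frames.reshape(n, h * w).astype(np.float32)
    mean = data.mean(axis=0)
    centered = data - mean
    # SVD of the centered data gives the principal components (rows of vt).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components], (h, w)

def eigen_foreground(frame, mean, components, shape, thresh=30.0):
    """Reconstruct the frame from the eigenspace; large residual = foreground."""
    x = frame.reshape(-1).astype(np.float32) - mean
    coeffs = components @ x
    reconstruction = components.T @ coeffs + mean
    residual = np.abs(frame.reshape(-1).astype(np.float32) - reconstruction)
    return (residual.reshape(shape) > thresh).astype(np.uint8) * 255
```

Because moving objects are poorly represented by the learned eigenspace, they produce large reconstruction residuals and are thus segmented as foreground.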
Recent advances in features extraction and description algorithms: A comprehensive survey <s> I. INTRODUCTION <s> Recent technology and market trends have demanded the significant need for feasible solutions to video/camera systems and analytics. This paper provides a comprehensive account on theory and application of intelligent video systems and analytics. It highlights the video system architectures, tasks, and related analytic methods. It clearly demonstrates that the importance of the role that intelligent video systems and analytics play can be found in a variety of domains such as transportation and surveillance. Research directions are outlined with a focus on what is essential to achieve the goals of intelligent video systems and analytics. <s> BIB001 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> I. INTRODUCTION <s> In this work, a hardware-software co-design is proposed to effectively utilize FPGA resources for a prototype of an automated video surveillance system on a programmable platform. Time-critical steps of a foreground object detection algorithm are designed and implemented in the FPGA's logic elements to maximize parallel processing. Other non time-critical tasks are achieved by executing a high level language program on an embedded Nios-II processor. Custom and parallel processing modules are integrated into the video processing chain by a streaming protocol that aggressively utilizes on-chip memory to increase the throughput of the system. A data forwarding technique is incorporated with an on-chip buffering scheme to reduce computations and resources in the window-based operations. Other data control interfaces are achieved by software drivers that communicate with hardware controllers using Altera's Memory-Mapped protocol. The proposed prototype has demonstrated real-time processing capability that outperforms other implementations. <s> BIB002
Feature detection and description from static and dynamic scenes is an active area of research and one of the most studied topics in the computer vision literature. The concept of feature detection and description refers to the process of identifying points in an image (interest points) that can be used to describe the image's contents, such as edges, corners, ridges, and blobs. It is primarily aimed at object detection, analysis, and tracking from a video stream to describe the semantics of its actions and behavior . It also has a long list of potential applications, which include, but are not limited to, access control to sensitive buildings, crowd and population statistical analysis, human detection and tracking, detection of suspicious actions, traffic analysis, vehicular tracking, and detection of military targets. In the last few years, we have witnessed a remarkable increase in the amount of homogeneous and inhomogeneous visual inputs (mainly due to the availability of cheap capturing devices such as the built-in cameras in smart phones, in addition to the availability of free image hosting applications, websites, and servers such as Instagram and Facebook). This drives the research communities to propose a number of novel, robust, and automated feature detection and description algorithms that can adapt to the needs of an application in terms of accuracy and performance. Most of the proposed algorithms require intensive computations (especially when used with high-definition video streams or with high-resolution satellite imagery applications). Hardware accelerators with massive processing capabilities are required to accelerate these algorithms' computations for real-time applications. Digital Signal Processor (DSP), Field Programmable Gate Array (FPGA), System-on-Chip (SoC), Application-Specific Integrated Circuit (ASIC), and Graphics Processing Unit (GPU) platforms with smarter, parallelizable, and pipelinable hardware processing designs could be targeted to alleviate this issue. Porting feature detection and description algorithms onto hardware platforms speeds up their computation by orders of magnitude. However, hardware constraints such as memory, power, scalability, and format interfacing constitute a major bottleneck for scaling them to high resolutions. The typical solution for these hardware-related issues is to scale down the resolution or to sacrifice the accuracy of the detected features. The state-of-the-art in machine and robotic vision, on the other hand, has lately concluded that it is the processing algorithms that will make a substantial contribution to resolving these issues BIB001 [3]. That is, computer vision algorithms might be targeted to resolve most of the problems associated with memory- and power-demanding hardware requirements, and might yield a big revolution for such systems BIB002 . This challenge invites researchers to invent, implement, and test these new algorithms, which mainly fall in the feature detection and description category and which are the fundamental tools of many visual computing applications. To ensure the robustness of vision algorithms, an essential prerequisite is that they are designed to cover a wide range of possible scenarios with a high level of repeatability and affine invariance. Ultimately, studying all of these scenarios and parameters is virtually impossible; however, a clear understanding of all these variables is critical for a successful design.
Key factors influencing real-time performance include the processing platform (and its associated constraints on memory, power and frequency in FPGAs, SoCs, GPUs, etc., which can force algorithmic modifications that may impact the desired performance), the monitored environment (e.g. illumination, reflections, shadows, view orientation, angle, etc.), and the application of interest (e.g. targets of interest, tolerable miss detection/false alarm rates and the desired trade-offs, and allowed latency). As such, a careful study of computer vision algorithms is essential. This paper is dedicated to providing a comprehensive overview of the state-of-the-art and recent advances in feature detection and description algorithms. Specifically, the paper starts by overviewing fundamental concepts that constitute the core of feature detection and description algorithms. It then compares, reports and discusses their performance and capabilities. The Maximally Stable Extremal Regions (MSER) algorithm and the Scale Invariant Feature Transform (SIFT) algorithm, being two of the best of their type, are selected for reporting their recent algorithmic derivatives. The rest of the paper is organized as follows. Section II provides an overview of the recent state-of-the-art feature detection and description algorithms proposed in the literature. It also summarizes and compares their performance and accuracy under various transformations. In Section III, the MSER and SIFT algorithms are studied in detail in terms of their recent derivatives. Finally, Section IV concludes the paper with an outlook on future work.
Recent advances in features extraction and description algorithms: A comprehensive survey <s> A. Local features <s> In this survey, we give an overview of invariant interest point detectors, how they evolvd over time, how they work, and what their respective strengths and weaknesses are. We begin with defining the properties of the ideal local feature detector. This is followed by an overview of the literature over the past four decades organized in different categories of feature extraction methods. We then provide a more detailed analysis of a selection of methods which had a particularly significant impact on the research field. We conclude with a summary and promising future research directions. <s> BIB001 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> A. Local features <s> Feature detection is a fundamental and important problem in computer vision and image processing. It is a low-level processing step which serves as the essential part for computer vision based applications. The goal of this paper is to present a survey of recent progress and advances in visual feature detection. Firstly we describe the relations among edges, corners and blobs from the psychological view. Secondly we classify the algorithms in detecting edges, corners and blobs into different categories and provide detailed descriptions for representative recent algorithms in each category. Considering that machine learning becomes more involved in visual feature detection, we put more emphasis on machine learning based feature detection methods. Thirdly, evaluation standards and databases are also introduced. Through this survey we would like to present the recent progress in visual feature detection and identify future trends as well as challenges. We survey the recent progress and advances in visual feature detection.The relations among different kinds of features are covered.Representative feature detection algorithms are described.We categorize and discuss the pros/cons for different kinds of visual features.We put some emphasis on future challenges in feature design through this survey. <s> BIB002
Local image features (also known as interest points, key points, and salient features) can be defined as a specific pattern that is unique with respect to its immediately neighboring pixels and is generally associated with one or more image properties BIB001 BIB002 . Such properties include edges, corners, regions, etc. Figure 1 (a) below represents a summary of such local features. Indeed, these local features represent essential anchor points that can summarize the content of the frame (with the aid of feature descriptors) while searching an image (or a video). These local features are then converted into numerical descriptors, representing unique and compact summarizations of these local features. Local (descriptive and invariant) features provide a powerful tool that can be used in a wide range of computer vision and robotics applications, such as real-time visual surveillance, image retrieval, video mining, object tracking, mosaicking, target detection, and wide baseline matching, to name a few . To illustrate the usefulness of such local features, consider the following example. Given an aerial image, a detected edge can represent a street, corners may be street junctions, and homogeneous regions can represent cars, roundabouts or buildings (of course, this is resolution dependent). The term detector (a.k.a. extractor) traditionally refers to the algorithm or technique that detects (or extracts) these local features and prepares them to be passed to another processing stage that describes their contents, i.e. a feature descriptor algorithm. That is, feature extraction plays the role of an intermediate image processing stage between different computer vision algorithms. In this work, the terms detector and extractor are used interchangeably.
Recent advances in features extraction and description algorithms: A comprehensive survey <s> B. Ideal Local Features <s> In this survey, we give an overview of invariant interest point detectors, how they evolvd over time, how they work, and what their respective strengths and weaknesses are. We begin with defining the properties of the ideal local feature detector. This is followed by an overview of the literature over the past four decades organized in different categories of feature extraction methods. We then provide a more detailed analysis of a selection of methods which had a particularly significant impact on the research field. We conclude with a summary and promising future research directions. <s> BIB001 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> B. Ideal Local Features <s> Feature detection is a fundamental and important problem in computer vision and image processing. It is a low-level processing step which serves as the essential part for computer vision based applications. The goal of this paper is to present a survey of recent progress and advances in visual feature detection. Firstly we describe the relations among edges, corners and blobs from the psychological view. Secondly we classify the algorithms in detecting edges, corners and blobs into different categories and provide detailed descriptions for representative recent algorithms in each category. Considering that machine learning becomes more involved in visual feature detection, we put more emphasis on machine learning based feature detection methods. Thirdly, evaluation standards and databases are also introduced. Through this survey we would like to present the recent progress in visual feature detection and identify future trends as well as challenges. We survey the recent progress and advances in visual feature detection.The relations among different kinds of features are covered.Representative feature detection algorithms are described.We categorize and discuss the pros/cons for different kinds of visual features.We put some emphasis on future challenges in feature design through this survey. <s> BIB002
In general, a local feature typically has a spatial extent due to its local pixel neighborhood. That is, local features represent a subset of the frame that is semantically meaningful, e.g. corresponds to an object (or a part of an object). Ultimately, it is infeasible to localize all such features, as this would require high-level frame (scene) understanding as a prerequisite BIB001 . As such, feature detection algorithms try to locate these features directly based on the intensity patterns in the input frame. The selection of these local features can indeed greatly impact the overall system performance BIB002 . Ideal features (and hence feature detectors) should typically have the following important qualities BIB001 : (1) Distinctiveness: the intensity patterns underlying the detected features should be rich in variations that can be used for distinguishing features and matching them. (2) Locality: features should be local so as to reduce the chances of getting occluded as well as to allow simple estimation of geometric and photometric deformations between two frames with different views. (3) Quantity: the total number of detected features (i.e. feature density) should be sufficiently (but not excessively) large to reflect the frame's content in a compact form. (4) Accuracy: detected features should be located accurately with respect to different scales, shapes and pixel locations in a frame. (5) Efficiency: features should be identified efficiently, in a short enough time to make them suitable for real-time (i.e. time-critical) applications. (6) Repeatability: given two frames of the same object (or scene) taken under different viewing settings, a high percentage of the features detected in the overlapping visible part should be found in both frames (a simplified measurement of this quality is sketched below). Repeatability is greatly affected by the following two qualities. (7) Invariance: in scenarios where a large deformation is expected (scale, rotation, etc.), the detector algorithm should model this deformation mathematically as precisely as possible so that it minimizes its effect on the extracted features. (8) Robustness: in scenarios where a small deformation is expected (noise, blur, discretization effects, compression artifacts, etc.), the detector should be insensitive to such deformations, i.e. its accuracy should not degrade significantly. Intuitively, a given computer vision application may favor one quality over another BIB001 . Repeatability, arguably the most important quality, is directly dependent on the other qualities (that is, improving one will equally improve repeatability). Nevertheless, regarding the other qualities, compromises typically need to be made. For example, distinctiveness and locality are competing properties (the more local a feature, the less distinctive it becomes, making feature matching more difficult). Efficiency and quantity are another example of such competing qualities. Highly dense features are likely to improve the object/scene recognition task, but this will negatively impact the computation time.
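The repeatability quality in item (6) can be quantified when the geometric relation between two views is known. The sketch below gives a simplified version of such a measurement, assuming a known 3x3 homography H between the views; it uses a nearest-point distance test rather than the region overlap-error criterion of the standard evaluation protocol, so it is only an illustrative proxy, and the detector choice, threshold and names are assumptions rather than part of the surveyed works.

```python
# Simplified repeatability measurement between two views related by a known
# homography H. This is only a proxy for the standard overlap-error protocol.
import cv2
import numpy as np

def repeatability(img1, img2, H, dist_thresh=2.5):
    detector = cv2.SIFT_create()            # any keypoint detector could be plugged in
    kp1 = detector.detect(img1, None)
    kp2 = detector.detect(img2, None)
    if not kp1 or not kp2:
        return 0.0
    # Project keypoints of the first view into the second view via H.
    pts1 = np.float32([k.pt for k in kp1]).reshape(-1, 1, 2)
    proj = cv2.perspectiveTransform(pts1, H).reshape(-1, 2)
    # Keep only projections that fall inside the second image (the "overlapping part").
    h2, w2 = img2.shape[:2]
    inside = (proj[:, 0] >= 0) & (proj[:, 0] < w2) & (proj[:, 1] >= 0) & (proj[:, 1] < h2)
    proj = proj[inside]
    if len(proj) == 0:
        return 0.0
    pts2 = np.float32([k.pt for k in kp2])
    # A projected keypoint counts as repeated if some detection in the second
    # image lies within dist_thresh pixels of it.
    d = np.linalg.norm(proj[:, None, :] - pts2[None, :, :], axis=2)
    return float((d.min(axis=1) < dist_thresh).mean())
```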
Recent advances in features extraction and description algorithms: A comprehensive survey <s> C. Feature Detectors <s> Abstract The wide-baseline stereo problem, i.e. the problem of establishing correspondences between a pair of images taken from different viewpoints is studied. A new set of image elements that are put into correspondence, the so called extremal regions , is introduced. Extremal regions possess highly desirable properties: the set is closed under (1) continuous (and thus projective) transformation of image coordinates and (2) monotonic transformation of image intensities. An efficient (near linear complexity) and practically fast detection algorithm (near frame rate) is presented for an affinely invariant stable subset of extremal regions, the maximally stable extremal regions (MSER). A new robust similarity measure for establishing tentative correspondences is proposed. The robustness ensures that invariants from multiple measurement regions (regions obtained by invariant constructions from extremal regions), some that are significantly larger (and hence discriminative) than the MSERs, may be used to establish tentative correspondences. The high utility of MSERs, multiple measurement regions and the robust metric is demonstrated in wide-baseline experiments on image pairs from both indoor and outdoor scenes. Significant change of scale (3.5×), illumination conditions, out-of-plane rotation, occlusion, locally anisotropic scale change and 3D translation of the viewpoint are all present in the test problems. Good estimates of epipolar geometry (average distance from corresponding points to the epipolar line below 0.09 of the inter-pixel distance) are obtained. <s> BIB001 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> C. Feature Detectors <s> The paper gives a snapshot of the state of the art in affine covariant region detectors, and compares their performance on a set of test images under varying imaging conditions. Six types of detectors are included: detectors based on affine normalization around Harris (Mikolajczyk and Schmid, 2002; Schaffalitzky and Zisserman, 2002) and Hessian points (Mikolajczyk and Schmid, 2002), a detector of `maximally stable extremal regions', proposed by Matas et al. (2002); an edge-based region detector (Tuytelaars and Van Gool, 1999) and a detector based on intensity extrema (Tuytelaars and Van Gool, 2000), and a detector of `salient regions', proposed by Kadir, Zisserman and Brady (2004). The performance is measured against changes in viewpoint, scale, illumination, defocus and image compression. ::: ::: The objective of this paper is also to establish a reference test set of images and performance software, so that future detectors can be evaluated in the same framework. <s> BIB002 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> C. Feature Detectors <s> In this survey, we give an overview of invariant interest point detectors, how they evolvd over time, how they work, and what their respective strengths and weaknesses are. We begin with defining the properties of the ideal local feature detector. This is followed by an overview of the literature over the past four decades organized in different categories of feature extraction methods. We then provide a more detailed analysis of a selection of methods which had a particularly significant impact on the research field. We conclude with a summary and promising future research directions. 
<s> BIB003 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> C. Feature Detectors <s> Local feature detectors and descriptors are widely used in many computer vision applications and various methods have been proposed during the past decade. There have been a number of evaluations focused on various aspects of local features, matching accuracy in particular, however there has been no comparisons considering the accuracy and speed trade-offs of recent extractors such as BRIEF, BRISK, ORB, MRRID, MROGH and LIOP. This paper provides a performance evaluation of recent feature detectors and compares their matching precision and speed in randomized kd-trees setup as well as an evaluation of binary descriptors with efficient computation of Hamming distance. <s> BIB004 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> C. Feature Detectors <s> Numerous techniques and algorithms have been developed and implemented, primarily in software, for object tracking, detection and recognition. A few attempts have been made to implement some of the algorithms in hardware. However, those attempts have not yielded optimal results in terms of accuracy, power and memory requirements. The aim of this paper is to explore and investigate a number of possible algorithms for real-time video surveillance, revealing their various theories, relationships, shortcomings, advantages and disadvantages, and pointing out their unsolved problems of practical interest in principled way, which would be of tremendous value to engineers and researchers trying to decide what algorithm among those many in literature is most suitable to specific application and the particular real-time System-on-Chip (SoC) implementation. <s> BIB005 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> C. Feature Detectors <s> Feature detection is a fundamental and important problem in computer vision and image processing. It is a low-level processing step which serves as the essential part for computer vision based applications. The goal of this paper is to present a survey of recent progress and advances in visual feature detection. Firstly we describe the relations among edges, corners and blobs from the psychological view. Secondly we classify the algorithms in detecting edges, corners and blobs into different categories and provide detailed descriptions for representative recent algorithms in each category. Considering that machine learning becomes more involved in visual feature detection, we put more emphasis on machine learning based feature detection methods. Thirdly, evaluation standards and databases are also introduced. Through this survey we would like to present the recent progress in visual feature detection and identify future trends as well as challenges. We survey the recent progress and advances in visual feature detection.The relations among different kinds of features are covered.Representative feature detection algorithms are described.We categorize and discuss the pros/cons for different kinds of visual features.We put some emphasis on future challenges in feature design through this survey. <s> BIB006 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> C. Feature Detectors <s> The traditional environment maps built by mobile robots include both metric ones and topological ones. 
These maps are navigation-oriented and not adequate for service robots to interact with or serve human users who normally rely on the conceptual knowledge or semantic contents of the environment. Therefore, the construction of semantic maps becomes necessary for building an effective human-robot interface for service robots. This paper reviews recent research and development in the field of visual-based semantic mapping. The main focus is placed on how to extract semantic information from visual data in terms of feature extraction, object/place recognition and semantic representation methods. <s> BIB007
The technical literature is rich with new feature detection and description algorithms, and with surveys that compare their performance and their qualities, such as those mentioned in the earlier section. The reader is referred to some of the elegant surveys from the literature in BIB003 BIB002 . However, no ideal detector exists to date. This is mainly due to the virtually infinite number of possible computer vision applications (that may require one or multiple features), the divergence of imaging conditions (changes in scale, viewpoint, illumination and contrast, image quality, compression, etc.) and possible scenes. The computational efficiency of such detectors becomes even more important when they are considered for real-time applications BIB006 [8] BIB005 . The most important local features include: (1) Edges: pixel patterns at which the intensities abruptly change (with a strong gradient magnitude); (2) Corners: points at which two (or more) edges intersect in the local neighborhood; and (3) Regions: closed sets of connected points with a similar homogeneity criterion, usually the intensity value. One can intuitively note that there is a strong correlation between these local features. For example, multiple edges sometimes surround a region, i.e. tracking the edges defines the region boundaries. Similarly, the intersection of edges defines the corners BIB007 . A summary of the well-known feature detectors can be found in Table 1. The performance of many of the state-of-the-art detectors is compared in Table 2. As reported in many performance comparison surveys in the computer vision literature BIB003 [10] BIB004 , both the MSER BIB001 and the SIFT algorithms have shown excellent performance in terms of invariance and the other feature qualities (see Table 2, the last two rows). Due to these facts, the MSER and SIFT algorithms have been extended into several derivatives with different enhancements (which will be reported in later sections). As such, the following section of this paper reports the algorithmic derivatives of the MSER and SIFT algorithms.
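As a concrete starting point, the sketch below runs off-the-shelf detectors for the three feature types listed above (edges, corners, regions) plus SIFT interest points, using the OpenCV library; the input file name and parameter values are illustrative assumptions, not taken from the surveyed works.

```python
# A minimal sketch of detecting the local feature types discussed above with
# off-the-shelf OpenCV detectors. Assumes opencv-python >= 4.4 (where SIFT is
# part of the main module) and a hypothetical input image "sample.jpg".
import cv2

img = cv2.imread("sample.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Edges: Canny edge map (intensity discontinuities with strong gradients).
edges = cv2.Canny(gray, 100, 200)

# Corners: Harris response map; large values indicate corner-like points.
harris = cv2.cornerHarris(gray.astype("float32"), blockSize=2, ksize=3, k=0.04)

# Regions: MSER extremal regions.
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(gray)

# Blob-like interest points with descriptors: SIFT.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(gray, None)

print(f"MSER regions: {len(regions)}, SIFT keypoints: {len(keypoints)}")
```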
Recent advances in features extraction and description algorithms: A comprehensive survey <s> A. MSER Derivatives <s> This paper introduces an efficient 3D segmentation concept, which is based on extending the well-known Maximally Stable Extremal Region (MSER) detector to the third dimension. The extension allows the detection of stable 3D regions, which we call the Maximally Stable Volumes (MSVs). We present a very efficient way to detect the MSVs in quasi-linear time by analysis of the component tree. Two applications - 3D segmentation within simulated MR brain images and analysis of the 3D fiber network within digitized paper samples - show that reasonably good segmentation results are achieved with low computational effort. <s> BIB001 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> A. MSER Derivatives <s> This paper introduces a novel colour-based affine co-variant region detector. Our algorithm is an extension of the maximally stable extremal region (MSER) to colour. The extension to colour is done by looking at successive time-steps of an agglomerative clustering of image pixels. The selection of time-steps is stabilised against intensity scalings and image blur by modelling the distribution of edge magnitudes. The algorithm contains a novel edge significance measure based on a Poisson image noise model, which we show performs better than the commonly used Euclidean distance. We compare our algorithm to the original MSER detector and a competing colour-based blob feature detector, and show through a repeatability test that our detector performs better. We also extend the state of the art in feature repeatability tests, by using scenes consisting of two planes where one is piecewise transparent. This new test is able to evaluate how stable a feature is against changing backgrounds. <s> BIB002 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> A. MSER Derivatives <s> In this paper we present a new algorithm for computing Maximally Stable Extremal Regions (MSER), as invented by Matas et al. The standard algorithm makes use of a union-find data structure and takes quasi-linear time in the number of pixels. The new algorithm provides exactly identical results in true worst-case linear time. Moreover, the new algorithm uses significantly less memory and has better cache-locality, resulting in faster execution. Our CPU implementation performs twice as fast as a state-of-the-art FPGA implementation based on the standard algorithm. ::: ::: The new algorithm is based on a different computational ordering of the pixels, which is suggested by another immersion analogy than the one corresponding to the standard connected-component algorithm. With the new computational ordering, the pixels considered or visited at any point during computation consist of a single connected component of pixels in the image, resembling a flood-fill that adapts to the grey-level landscape. The computation only needs a priority queue of candidate pixels (the boundary of the single connected component), a single bit image masking visited pixels, and information for as many components as there are grey-levels in the image. This is substantially more compact in practice than the standard algorithm, where a large number of connected components must be considered in parallel. The new algorithm can also generate the component tree of the image in true linear time. 
The result shows that MSER detection is not tied to the union-find data structure, which may open more possibilities for parallelization. <s> BIB003 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> A. MSER Derivatives <s> This paper presents a novel hardware accelerator architecture for the linear-time Maximally Stable Extremal Regions (MSER) detector algorithm. In contrast to the standard MSER algorithm, the linear-time MSER implementation is more suitable for real-time applications of image retrieval in large-scale and high resolution datasets (e.g. satellite images). The linear-time MSER accelerator design is optimized by enhancing its flooding process (which is one of the major drawbacks of the standard linear-time MSER) using a structure that we called stack of pointers, which makes it memory-efficient as it reduces the memory requirement by nearly 90%. The accelerator is configurable and can be integrated with many image processing algorithms, allowing a wide spectrum of potential real-time applications to be realized even on small and power-limited devices. <s> BIB004 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> A. MSER Derivatives <s> This paper presents a novel implementation of the Maximally Stable Extremal Regions (MSER) detector on system-on-chip (SoC) using 65nm CMOS technology. The novel SoC was developed following the Application Specific Integrated Circuit (ASIC) design flow which significantly enhanced its realization and fabrication, and overall performances. The SoC has very low area requirement (around 0.05 mm2) and is capable of detecting both bright and dark MSERs in a single run, while computing simultaneously their associated regions' moments, simplifying its interfacing with other image algorithms (e.g. SIFT and SURF). The novel MSER SoC is power-efficient (requires 2.25 mW) and memory-efficient as it saves more than 31% of the memory space reported in the state-of-the-art MSER implementation on FPGA, making it suitable for mobile devices. With 256×256 resolution and its operating frequency of 133 MHz, the SoC is expected to have a 200 frames/second processing rate, making it suitable (when integrated with other algorithms in the system) for time-critical real-time applications such as visual surveillance. <s> BIB005 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> A. MSER Derivatives <s> Extremal Regions of Extremum Levels (EREL) are regions detected from a set of all extremal regions of an image. Maximally Stable Extremal Regions (MSER) which is a novel affine covariant region detector, detects regions from a same set of extremal regions as well. Although MSER results in regions with almost high repeatability, it is heavily dependent on the union-find approach which is a fairly complicated algorithm, and should be completed sequentially. Furthermore, it detects regions with low repeatability under the blur transformations. The reason for the latter shortcoming is the absence of boundaries information in stability criterion. To tackle these problems we propose to employ prior information about boundaries of regions, which results in a novel region detector algorithm that not only outperforms MSER, but avoids the MSER’s rather complicated steps of union-finding. To achieve that, we introduce Maxima of Gradient Magnitudes (MGMs) and use them to find handful of Extremum Levels (ELs). 
The chosen ELs are then scanned to detect their Extremal Regions (ER). The proposed algorithm which is called Extremal Regions of Extremum Levels (EREL) has been tested on the public benchmark dataset of Mikolajczyk [1]. Our experimental evaluations illustrate that, in many cases EREL achieves higher repeatability scores than MSER even for very low overlap errors. <s> BIB006
The maximally stable extremal regions (MSER) algorithm was proposed by Matas et al. in 2002 . Since then, a number of region detection algorithms have been proposed based on the MSER technique. The following is a list of five MSER derivatives presented in chronological order. (1) N-Dimensional Extension: The algorithm was first extended in 2006 for 3D segmentation BIB001 by extending the neighborhood search and stability criteria to 3D image data instead of 2D intensity data. Later, in 2007, another extension to N-dimensional data spaces was proposed by Vedaldi in , and later the same year, an extension to vector-valued functions that can exploit the three color channels was provided in BIB002 . (2) Linear-Time MSER Algorithm: In 2008, Nister and Stewenius proposed a new processing flow that emulates real flood-filling in BIB003 . The new linear-time MSER algorithm has several advantages over the standard algorithm, such as better cache locality and linear complexity. An initial hardware design was proposed in BIB004 . (3) The Extended MSER (X-MSER) Algorithm: The standard MSER algorithm searches for extremal regions in the input intensity frame only. However, in 2015, the authors of proposed an extension to the depth (space) domain, noting the correlation between depth images and intensity images, and introduced the extended MSER detector, which was patented in . (4) The Parallel MSER Algorithm: One of the major drawbacks of the MSER algorithm is the need to run it twice on every frame to detect both dark and bright extremal regions. To circumvent this issue, the authors proposed a parallel MSER algorithm BIB005 . Parallel in this context refers to the capability of detecting both types of extremal regions in a single run. This algorithmic enhancement showed great advantages over the standard MSER algorithm, such as a considerable reduction in execution time, required hardware resources, and power. This parallel MSER algorithm has a few US patents associated with it (e.g. ). (5) Other MSER derivatives: Other algorithms inspired by the MSER algorithm include the Extremal Regions of the Extremal Levels BIB006 [28] algorithm and the Tree-based Morse Regions (TBMR) .
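To make the "run it twice" drawback mentioned in point (4) concrete, the sketch below shows the conventional two-pass usage that the parallel MSER derivative is designed to avoid: one pass on the original intensities and one on the inverted image. Whether a particular library call already handles both polarities internally is implementation-dependent; the sketch simply makes the inversion explicit, and the file name is a placeholder.

```python
# Conventional two-pass MSER usage: extremal regions of one polarity are
# detected on the original grayscale frame, and regions of the opposite
# polarity are obtained by inverting the intensities and detecting again.
import cv2

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
mser = cv2.MSER_create()

# Pass 1: extremal regions of one polarity from the original intensities.
regions_a, _ = mser.detectRegions(gray)

# Pass 2: regions of the opposite polarity, via intensity inversion.
regions_b, _ = mser.detectRegions(255 - gray)

print(f"pass 1 regions: {len(regions_a)}, pass 2 regions: {len(regions_b)}")
```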
Recent advances in features extraction and description algorithms: A comprehensive survey <s> B. SIFT Derievatives <s> Stable local feature detection and representation is a fundamental component of many image registration and object recognition algorithms. Mikolajczyk and Schmid (June 2003) recently evaluated a variety of approaches and identified the SIFT [D. G. Lowe, 1999] algorithm as being the most resistant to common image deformations. This paper examines (and improves upon) the local image descriptor used by SIFT. Like SIFT, our descriptors encode the salient aspects of the image gradient in the feature point's neighborhood; however, instead of using SIFT's smoothed weighted histograms, we apply principal components analysis (PCA) to the normalized gradient patch. Our experiments demonstrate that the PCA-based local descriptors are more distinctive, more robust to image deformations, and more compact than the standard SIFT representation. We also present results showing that using these descriptors in an image retrieval application results in increased accuracy and faster matching. <s> BIB001 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> B. SIFT Derievatives <s> SIFT has been proven to be the most robust local invariant feature descriptor. SIFT is designed mainly for gray images. However, color provides valuable information in object description and matching tasks. Many objects can be misclassified if their color contents are ignored. This paper addresses this problem and proposes a novel colored local invariant feature descriptor. Instead of using the gray space to represent the input image, the proposed approach builds the SIFT descriptors in a color invariant space. The built Colored SIFT (CSIFT) is more robust than the conventional SIFT with respect to color and photometrical variations. The evaluation results support the potential of the proposed approach. <s> BIB002 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> B. SIFT Derievatives <s> We propose the n -dimensional scale invariant feature transform ( n-SIFT) method for extracting and matching salient features from scalar images of arbitrary dimensionality, and compare this method's performance to other related features. The proposed features extend the concepts used for 2-D scalar images in the computer vision SIFT technique for extracting and matching distinctive scale invariant features. We apply the features to images of arbitrary dimensionality through the use of hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. We analyze the performance of a fully automated multimodal medical image matching technique based on these features, and successfully apply the technique to determine accurate feature point correspondence between pairs of 3-D MRI images and dynamic 3D + time CT data. <s> BIB003 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> B. SIFT Derievatives <s> This article presents a novel scale- and rotation-invariant detector and descriptor, coined SURF (Speeded-Up Robust Features). SURF approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. 
This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (specifically, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper encompasses a detailed description of the detector and descriptor and then explores the effects of the most important parameters. We conclude the article with SURF's application to two challenging, yet converse goals: camera calibration as a special case of image registration, and object recognition. Our experiments underline SURF's usefulness in a broad range of topics in computer vision. <s> BIB004 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> B. SIFT Derievatives <s> If a physical object has a smooth or piecewise smooth boundary, its images obtained by cameras in varying positions undergo smooth apparent deformations. These deformations are locally well approximated by affine transforms of the image plane. In consequence the solid object recognition problem has often been led back to the computation of affine invariant image local features. Such invariant features could be obtained by normalization methods, but no fully affine normalization method exists for the time being. Even scale invariance is dealt with rigorously only by the scale-invariant feature transform (SIFT) method. By simulating zooms out and normalizing translation and rotation, SIFT is invariant to four out of the six parameters of an affine transform. The method proposed in this paper, affine-SIFT (ASIFT), simulates all image views obtainable by varying the two camera axis orientation parameters, namely, the latitude and the longitude angles, left over by the SIFT method. Then it covers the other four parameters by using the SIFT method itself. The resulting method will be mathematically proved to be fully affine invariant. Against any prognosis, simulating all views depending on the two camera orientation parameters is feasible with no dramatic computational load. A two-resolution scheme further reduces the ASIFT complexity to about twice that of SIFT. A new notion, the transition tilt, measuring the amount of distortion from one view to another, is introduced. While an absolute tilt from a frontal to a slanted view exceeding 6 is rare, much higher transition tilts are common when two slanted views of an object are compared (see Figure hightransitiontiltsillustration). The attainable transition tilt is measured for each affine image comparison method. The new method permits one to reliably identify features that have undergone transition tilts of large magnitude, up to 36 and higher. This fact is substantiated by many experiments which show that ASIFT significantly outperforms the state-of-the-art methods SIFT, maximally stable extremal region (MSER), Harris-affine, and Hessian-affine. <s> BIB005 </s> Recent advances in features extraction and description algorithms: A comprehensive survey <s> B. SIFT Derievatives <s> We present a new method to extract scale-invariant features from an image by using a Cosine Modulated Gaussian (CM-Gaussian) filter. Its balanced scale-space atom with minimal spread in scale and space leads to an outstanding scale-invariant feature detection quality, albeit at reduced planar rotational invariance. 
Both sharp and distributed features like corners and blobs are reliably detected, irrespective of various image artifacts and camera parameter variations, except for planar rotation. The CM-Gaussian filters are approximated with the sum of exponentials as a single, fixed-length filter and equal approximation error over all scales, providing constant-time, low-cost image filtering implementations. The approximation error of the corresponding digital signal processing is below the noise threshold. It is scalable with the filter order, providing many quality-complexity trade-off working points. We validate the efficiency of the proposed feature detection algorithm on image registration applications over a wide range of testbench conditions. <s> BIB006
The SIFT algorithm comprises a local feature detector and a local histogram-based descriptor. It detects sets of interest points in an image and, for each point, computes a histogram-based descriptor with 128 values. Since the SIFT algorithm was proposed by Lowe in 2004, a number of algorithms have tried to reduce the width of the SIFT descriptor in order to reduce the descriptor computation and matching time. Other algorithms have used different window sizes and histogram computation patterns around each interest point, either to speed up the computation process or to increase the descriptor's robustness against different transformations. One can note that SIFT is rich with derivatives compared to the MSER algorithm. The reason is that there is not much to be done to the MSER's simple processing flow, unlike SIFT, which is more complicated. A brief overview of the SIFT algorithmic derivatives is given below. (1) ASIFT: Yu and Morel proposed an affine version of the SIFT algorithm in BIB005 , termed ASIFT. This derivative simulates all image views obtainable by varying the latitude and the longitude angles. It then uses the standard SIFT method itself. ASIFT is proven to outperform SIFT and to be fully affine invariant BIB005 . However, the major drawback is the dramatic increase in the computational load. The code of ASIFT can be found in . (2) CSIFT: Another variation of the SIFT algorithm, extended to color space, is CSIFT BIB002 . It basically modifies the SIFT descriptor (in a color invariant space) and is found to be more robust under blur and affine changes, and less robust under illumination changes, as compared to the standard SIFT. (3) n-SIFT: The n-SIFT algorithm is simply a straightforward extension of the standard SIFT algorithm to images (or data) with multiple dimensions BIB003 . The algorithm creates feature vectors by using hyperspherical coordinates for gradients and multidimensional histograms. The features extracted by n-SIFT can be matched efficiently in 3D and 4D images compared to the traditional SIFT algorithm. (4) PCA-SIFT: The PCA-SIFT BIB001 adopts a substitute feature vector derived using principal component analysis (PCA), based on normalized gradient patches instead of the weighted and smoothed histograms of gradients used in the standard SIFT. More importantly, it uses a window size of 41x41 pixels to generate a descriptor of length 39x39x2 = 3042, but it reduces the dimensionality of the descriptor from 3042 to a 20-36 dimensional vector by using PCA, which may be preferable for memory-limited devices. (5) SIFT-SIFER Retrofit: The major difference between SIFT and the SIFT with Error Resilience (SIFER) BIB006 algorithm is that SIFER (with an improvement in accuracy at the cost of computational load) has better scale-space management, using a higher-granularity image pyramid representation, and better scale-tuned filtering, using a cosine modulated Gaussian (CMG) filter. This algorithm improved the accuracy and robustness of the features by 20 percent for some criteria. However, the accuracy comes at the cost of an execution time about two times slower than the SIFT algorithm. (6) Other derivatives: Other SIFT derivatives include the SURF BIB004 , SIFT CS-LBP Retrofit, RootSIFT Retrofit, and CenSurE and STAR algorithms, which are summarized in .
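Since all of these descriptor variants ultimately feed a matching stage, the following sketch shows standard 128-dimensional SIFT extraction and matching with Lowe's ratio test using OpenCV; it is generic, not specific to any of the derivatives above, and the image file names are placeholders.

```python
# Extracting standard SIFT descriptors from two images and matching them with
# Lowe's ratio test. Assumes opencv-python >= 4.4; file names are placeholders.
import cv2

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matcher with L2 norm (appropriate for SIFT's float descriptors),
# keeping the two nearest neighbors for each query descriptor.
bf = cv2.BFMatcher(cv2.NORM_L2)
matches = bf.knnMatch(des1, des2, k=2)

# Lowe's ratio test: keep a match only if it is clearly better than the
# second-best candidate.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences out of {len(matches)} candidates")
```

In practice, the surviving matches would typically be passed to a geometric verification step (e.g. RANSAC) before being used in applications such as mosaicking or wide baseline matching.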
Allocating, Detecting and Mining Sound Structures: An Overview of Technical Tools <s> Adaptivity <s> An efficient and perfectly invertible signal transform feat uring a constant-Q frequency resolution is presented. The proposed approach is based on the idea of the recently introduced nonstationary Gabor frames. Exploiting the properties of the operator corresponding to a family of analysis atoms, this approach overcomes the problems of the classical implementations of constant-Q transforms, in particular, computational intensity and lack of i nvertibility. Perfect reconstruction is guaranteed by using an easy t o calculate dual system in the synthesis step and computation time is kept low by applying FFT-based processing. The proposed method is applied to real-life signals and evaluated in comparison to a related approach, recently introduced specifically for audio signa ls. <s> BIB001 </s> Allocating, Detecting and Mining Sound Structures: An Overview of Technical Tools <s> Adaptivity <s> Signal analysis with classical Gabor frames leads to a fixed time-frequency resolution over the whole time-frequency plane. To overcome the limitations imposed by this rigidity, we propose an extension of Gabor theory that leads to the construction of frames with time-frequency resolution changing over time or frequency. We describe the construction of the resulting nonstationary Gabor frames and give the explicit formula for the canonical dual frame for a particular case, the painless case. We show that wavelet transforms, constant-Q transforms and more general filter banks may be modeled in the framework of nonstationary Gabor frames. Further, we present the results in the finite-dimensional case, which provides a method for implementing the above-mentioned transforms with perfect reconstruction. Finally, we elaborate on two applications of nonstationary Gabor frames in audio signal processing, namely a method for automatic adaptation to transients and an algorithm for an invertible constant-Q transform. <s> BIB002
In classical Gabor frames, as introduced in the previous section, all samples of the STFT are obtained by applying the same window ϕ, shifted along a regular set of sampling points, and taking FFTs of the same length. Exploiting the concept of frames, we can achieve adaptivity of the resolution in either time or frequency. To do so, we relax the regularity of the classical Gabor frames, which leads to nonstationary Gabor frames (NSGT): for $(k, m) \in I_M \times I_M$, we set (i) $\varphi_{k,m} = M_{m b_k} \varphi_k$ for adaptivity in time, and (ii) $\varphi_{k,m} = T_{k a_m} \varphi_m$ for adaptivity in frequency. A detailed mathematical analysis of NSGTs is beyond the scope of this contribution, but we wish to emphasize that both analysis and synthesis can be done in a similar manner as in the regular case; that is, a diagonal frame operator can be achieved and perfect reconstruction is guaranteed by using either dual or tight windows. For all details, see BIB001 BIB002 .
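As a rough computational illustration of adaptivity in time, the sketch below takes windowed FFTs whose window length changes from one analysis position to the next. It is not a full NSGT implementation (no dual or tight windows, no perfect reconstruction), and all parameter values and names are illustrative assumptions.

```python
# Conceptual sketch of time-adaptive analysis: each analysis position gets its
# own window length (short around transients, long elsewhere), and an FFT of
# the corresponding length is taken. This only illustrates the idea of relaxing
# the regular Gabor sampling; it is NOT a complete, invertible NSGT.
import numpy as np

def adaptive_spectra(signal, centers, window_lengths):
    """Windowed FFTs with a per-position window length.

    centers        : sample positions of the analysis windows
    window_lengths : one (even) window length per position
    """
    spectra = []
    for c, L in zip(centers, window_lengths):
        half = L // 2
        start, stop = max(0, c - half), min(len(signal), c + half)
        segment = np.zeros(L)
        segment[: stop - start] = signal[start:stop]
        segment *= np.hanning(L)              # smooth analysis window
        spectra.append(np.fft.rfft(segment))  # one spectrum per position
    return spectra

# Illustrative usage: short windows around a presumed transient at sample 8000.
fs = 16000
x = np.random.randn(fs)                        # stand-in for an audio signal
centers = np.arange(0, fs, 512)
lengths = np.where(np.abs(centers - 8000) < 1024, 256, 2048)
S = adaptive_spectra(x, centers, lengths)
print(len(S), "spectra with FFT sizes", sorted(set(len(s) for s in S)))
```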
Allocating, Detecting and Mining Sound Structures: An Overview of Technical Tools <s> Examples and interpretation of adaptive transforms <s> We examine in some detail Mel Frequency Cepstral Coefficients (MFCCs) the dominant features used for speech recognition and investigate their applicability to modeling music. In particular, we examine two of the main assumptions of the process of forming MFCCs: the use of the Mel frequency scale to model the spectra; and the use of the Discrete Cosine Transform (DCT) to decorrelate the Mel-spectral vectors. We examine the first assumption in the context of speech/music discrimination. Our results show that the use of the Mel scale for modeling music is at least not harmful for this problem, although further experimentation is needed to verify that this is the optimal scale in the general case. We investigate the second assumption by examining the basis vectors of the theoretically optimal transform to decorrelate music and speech spectral vectors. Our results demonstrate that the use of the DCT to decorrelate vectors is appropriate for both speech and music spectra. MFCCs for Music Analysis Of all the human generated sounds which influence our lives, speech and music are arguably the most prolific. Speech has received much focused attention and decades of research in this community have led to usable systems and convergence of the features used for speech analysis. In the music community however, although the field of synthesis is very mature, a dominant paradigm has yet to emerge to solve other problems such as music classification or transcription. Consequently, many representations for music have been proposed (e.g. (Martin1998), (Scheirer1997), (Blum1999)). In this paper, we examine some of the assumptions of Mel Frequency Cepstral Coefficients (MFCCs) the dominant features used for speech recognition and examine whether these assumptions are valid for modeling music. MFCCs have been used by other authors to model music and audio sounds (e.g. (Blum1999)). These works however use cepstral features merely because they have been so successful for speech recognition without examining the assumptions made in great detail. MFCCs (e.g. see (Rabiner1993)) are short-term spectral features. They are calculated as follows (the steps and assumptions made are explained in more detail in the full paper): 1. Divide signal into frames. 2. For each frame, obtain the amplitude spectrum. 3. Take the logarithm. 4. Convert to Mel (a perceptually-based) spectrum. 5. Take the discrete cosine transform (DCT). We seek to determine whether this process is suitable for creating features to model music. We examine only steps 4 and 5 since, as explained in the full paper, the other steps are less controversial. Step 4 calculates the log amplitude spectrum on the so-called Mel scale. This transformation emphasizes lower frequencies which are perceptually more meaningful for speech. It is possible however that the Mel scale may not be optimal for music as there may be more information in say higher frequencies. Step 5 takes the DCT of the Mel spectra. For speech, this approximates principal components analysis (PCA) which decorrelates the components of the feature vectors. We investigate whether this transform is valid for music spectra. Mel vs Linear Spectral Modeling To investigate the effect of using the Mel scale, we examine the performance of a simple speech/music discriminator. 
We use around 3 hours of labeled data from a broadcast news show, divided into 2 hours of training data and 40 minutes of testing data. We convert the data to ‘Mel’ and ‘Linear’ cepstral features and train mixture of Gaussian classifiers for each class. We then classify each segment in the test data using these models. This process is described in more detail in the full paper. We find that for this speech/music classification problem, the results are (statistically) significantly better if Mel-based cepstral features rather than linear-based cepstral features are used. However, whether this is simply because the Mel scale models speech better or because it also models music better is not clear. At worst, we can conclude that using the Mel cepstrum to model music in this speech/music discrimination problem is not harmful. Further tests are needed to verify that the Mel cepstrum is appropriate for modeling music in the general case. Using the DCT to Approximate Principal Components Analysis We additionally investigate the effectiveness of using the DCT to decorrelate Mel spectral features. The mathematically correct way to decorrelate components is to use PCA (or equivalently the KL transform). This transform uses the eigenvalues of the covariance matrix of the data to be modeled as basis vectors. By investigating how closely these vectors approximate cosine functions we can get a feel for how well the DCT approximates PCA. By inspecting the eigenvectors for the Mel log spectra for around 3 hours of speech and 4 hours of music we see that the DCT is an appropriate transform for decorrelating music (and speech) log spectra. Future Work Future work should focus on a more thorough examination the parameters used to generate MFCC features such as the sampling rate of the signal, the frequency scaling (Mel or otherwise) and the number of bins to use when smoothing. Also worthy of investigation is the windowing size and frame rate. Suggested Readings Blum, T, Keislar, D., Wheaton, J. and Wold, E., 1999, Method and article of manufacture for content-based analysis, storage, retrieval, and segmentation of audio information, U.S. Patent 5, 918, 223. Martin, K.. 1998, Toward automatic sound source recognition: identifying musical instruments, Proceedings NATO Computational Hearing Advanced Study Institute. Rabiner, L. and Juang, B., 1993, Fundamentals of Speech Recognition, Prentice-Hall. Scheirer, E. and Slaney, M., 1997, Construction and evaluation of a robust multifeature speech/music discriminator, Proceedings IEEE International Conference on Acoustics, Speech, and Signal Processing. <s> BIB001 </s> Allocating, Detecting and Mining Sound Structures: An Overview of Technical Tools <s> Examples and interpretation of adaptive transforms <s> An efficient and perfectly invertible signal transform feat uring a constant-Q frequency resolution is presented. The proposed approach is based on the idea of the recently introduced nonstationary Gabor frames. Exploiting the properties of the operator corresponding to a family of analysis atoms, this approach overcomes the problems of the classical implementations of constant-Q transforms, in particular, computational intensity and lack of i nvertibility. Perfect reconstruction is guaranteed by using an easy t o calculate dual system in the synthesis step and computation time is kept low by applying FFT-based processing. 
The proposed method is applied to real-life signals and evaluated in comparison to a related approach, recently introduced specifically for audio signa ls. <s> BIB002
We now illustrate the influence of adaptivity on the visual representation of audio signals. First, an analysis of a short excerpt of G. Ligeti's piano concerto is given. This signal has percussive onsets in the piano and Glockenspiel voices and some orchestral background. Figure 1 first shows a regular Gabor (STFT) analysis and, secondly, a representation in which the percussive parts are finely resolved by an adaptive NSGT. Our second example is an excerpt from a duet between violin and piano by J. Zorn. We can see three short segments: a vivid sequence of violin and piano notes, followed by a calm violin melody with accompanying piano, and finally an inharmonic part with a chirp component. For this signal, we show an FFT-based Gabor transform (STFT) and an NSGT-based constant-Q transform in Figure 2 . In both cases the display of the frequency axis is logarithmic. It is obvious that the NSGT, with adaptivity in the frequency domain, provides more accurate resolution of the harmonic components, in particular in low frequency regions. Note that MFCCs, very popular features used in speech and music processing BIB001 , are obtained from an FFT-based STFT using a logarithmic spacing of the frequency bins, while the analysis windows are linearly spaced. Given the new opportunities offered by adaptive NSGTs, it may well be worth reconsidering the underlying basic analysis. Returning to the quest for salient "sound objects" that stand out from their background, these examples show well that the analysis tool influences, even by visual inspection, what may be considered as such. In particular, in the Ligeti example, the zooming-in onto the percussive onsets makes these components more distinguishable from their background. On the other hand, the harmonic parts require fewer coefficients, since they are represented by longer windows. It should be noted that, for further processing, e.g. the extraction of percussive components, this kind of representation is beneficial. Even more impressively, in the low frequency components of the second example, the single harmonics are not resolved at all in the FFT-based transform, while the NSGT-based transform clearly separates them from a soft noise-floor background. Again, apart from pure visual evaluation, frequency separation of single components is necessary for applications such as transposition, cf. BIB002 . More visual and audio examples of adaptivity in both time and frequency can be found at http://www.univie.ac.at/nonstatgab/.
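For readers who wish to reproduce this kind of visual comparison on their own material, the sketch below computes a fixed-resolution STFT, a constant-Q representation and MFCCs with the librosa library. Note that librosa's constant-Q transform is not the invertible NSGT-based implementation of BIB002 and only serves as an approximate stand-in; the file name and parameter values are placeholders.

```python
# Comparing a fixed-resolution STFT with a constant-Q (log-frequency) analysis,
# in the spirit of the Figure 2 comparison, plus MFCCs as mentioned above.
import numpy as np
import librosa

y, sr = librosa.load("example.wav", sr=None, mono=True)

# Fixed-resolution Gabor/STFT analysis: one window length for all frequencies.
S_stft = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))

# Constant-Q analysis: geometrically spaced bins, with longer windows at low
# frequencies, giving better resolution of low harmonics.
S_cqt = np.abs(librosa.cqt(y, sr=sr, hop_length=512,
                           bins_per_octave=24, n_bins=24 * 8))

# MFCCs are derived from an FFT-based mel spectrogram.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

print(S_stft.shape, S_cqt.shape, mfcc.shape)
```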
Allocating, Detecting and Mining Sound Structures: An Overview of Technical Tools <s> Discussion and Future Work <s> In this paper the potential of using nonstationary Gabor transform for beat tracking in music is examined. Nonstationary Gabor transforms are a generalization of the short-time Fourier transform, which allow flexibility in choosing the number of bins per octave, while retaining a perfect inverse transform. In this paper, it is evaluated if these properties can lead to an improved beat tracking in music signals, thus presenting an approach that introduces recent findings in mathematics to music information retrieval. For this, both nonstationary Gabor transforms and short-time Fourier transform are integrated into a simple beat tracking framework. Statistically significant improvements are observed on a large dataset, which motivates to integrate the nonstationary Gabor transform into state of the art approaches for beat tracking and tempo estimation. <s> BIB001 </s> Allocating, Detecting and Mining Sound Structures: An Overview of Technical Tools <s> Discussion and Future Work <s> Most methods to compute content-based similarity between audio samples are based on descriptors representing the spectral envelope or the texture of the audio signal only. This paper describes an approach based on (i) the extraction of spectro-temporal profiles from audio and (ii) non-linear alignment of the profiles to calculate a distance measure. <s> BIB002 </s> Allocating, Detecting and Mining Sound Structures: An Overview of Technical Tools <s> Discussion and Future Work <s> Sparsity in redundant dictionaries has become a forceful paradigm in signal processing over the last two decades. Its basic idea is to represent a signal with as few coefficients as possible using overcomplete sets of expansion functions which are ideally well adapted to the signal class. In audio processing, different collections of windowed Fourier or cosine bases have proven to serve as well adapted dictionaries for most audio signals of relevance for humans, in particular speech and music. Furthermore, they are easy to interpret and reflect physical reality as they expand a signal with respect to the dimensions of time and frequency. <s> BIB003
In this contribution we showed how, even by visual inspection, the choice among various representations that exploit prior knowledge about a signal (class) of interest can influence the resulting analysis. It will and should be the topic of further, and necessarily interdisciplinary, research to scrutinize the influence of these choices on the performance of higher-level processing steps. Some preliminary steps in this direction have been pursued within the research project Audio Miner, cf. http://www.ofai.at/research/impml/projects/audiominer.html and BIB001 BIB003 BIB002 , and have shown promising results. We strongly believe that using appropriate, yet concise, representations of the original data is important to avoid biased results in higher-level processing steps.
Quantum Programming Languages: An Introductory Overview <s> INTRODUCTION <s> An encryption method is presented with the novel property that publicly revealing an encryption key does not thereby reveal the corresponding decryption key. This has two important consequences: (1) Couriers or other secure means are not needed to transmit keys, since a message can be enciphered using an encryption key publicly revealed by the intented recipient. Only he can decipher the message, since only he knows the corresponding decryption key. (2) A message can be “signed” using a privately held decryption key. Anyone can verify this signature using the corresponding publicly revealed encryption key. Signatures cannot be forged, and a signer cannot later deny the validity of his signature. This has obvious applications in “electronic mail” and “electronic funds transfer” systems. A message is encrypted by representing it as a number M, raising M to a publicly specified power e, and then taking the remainder when the result is divided by the publicly specified product, n , of two large secret primer numbers p and q. Decryption is similar; only a different, secret, power d is used, where e * d ≡ 1(mod (p - 1) * (q - 1)). The security of the system rests in part on the difficulty of factoring the published divisor, n . <s> BIB001 </s> Quantum Programming Languages: An Introductory Overview <s> INTRODUCTION <s> A computer is generally considered to be a universal computational device; i.e., it is believed able to simulate any physical computational device with a cost in computation time of at most a polynomial factor: It is not clear whether this is still true when quantum mechanics is taken into consideration. Several researchers, starting with David Deutsch, have developed models for quantum mechanical computers and have investigated their computational properties. This paper gives Las Vegas algorithms for finding discrete logarithms and factoring integers on a quantum computer that take a number of steps which is polynomial in the input size, e.g., the number of digits of the integer to be factored. These two problems are generally considered hard on a classical computer and have been used as the basis of several proposed cryptosystems. We thus give the first examples of quantum cryptanalysis. > <s> BIB002 </s> Quantum Programming Languages: An Introductory Overview <s> INTRODUCTION <s> An inexpensive faucet aerator is provided by three molded parts and a thin metal cup that holds two of the molded parts in assembled relationship. The first molded part is an elongated annulus provided with upstream and downstream recesses separated by an inner ring that helps to break up the liquid flow and serves as an abutment to support a second molded, jet-forming, part in the upstream recess and is arranged to be engaged by the third, molded, part located in the downstream recess and aiding in defining air intake means to the aerator. <s> BIB003 </s> Quantum Programming Languages: An Introductory Overview <s> INTRODUCTION <s> From the foundations of quantum theory to quantum technology - G. Alber quantum information theory, an invitation - R. Werner quantum communication - H. Weinfurter and A. Zeilinger quantum algorithms, applicable algebra and quantum physics - T.H. Beth and M. Rotteler mixed-state entanglement and quantum communication - M. Rotteler and R. Horodecki. <s> BIB004
Quantum theory in its modern form dates back to the year 1926. Within the past eight decades, innumerable applications of this theory have been found, which have had a deep impact on all aspects of technology, even on human life in general. Although this is a fairly long time, the potential of quantum theory for innovative applications still seems inexhaustible. During the past two decades, several completely new applications of quantum physics at the edge between computer science and the new area of quantum information theory BIB003 BIB004 have been discovered. These are based on the observation that certain genuine quantum properties of a single or few quantum particles open the way to technologies not amenable to classical physics. Quantum cryptography is the catchword which characterizes one group of these applications. The one-time pad of cryptography requires the distribution of long keys consisting of a sequence of random bits. This protocol has been proven to be unconditionally secure, provided that the key can be transmitted securely. Quantum key distribution can guarantee that the presence of an eavesdropper will be detected with certainty, at least in principle. Quantum cryptography is now available as a commercial product. Certainly the most spectacular has been the discovery by Peter Shor BIB002 that quantum systems can speed up the computational task of factorizing large integers into primes by many orders of magnitude. Building systems of this kind (which have been dubbed 'quantum computers') would make standard cryptographic protocols such as RSA BIB001 and ElGamal insecure, because these rely on the fact that no classical polynomial-time factoring algorithm is known. The activities of programming and designing algorithms require some sort of notation and a programming model. This applies to both classical and quantum computers. In particular, a notation which is adapted to the specific properties and peculiarities of programming quantum systems is called a 'quantum programming language' (QPL). Therefore, for several years now, the question has been discussed whether conventional programming models and languages are sufficient or whether they should be replaced with new models and languages. It might be argued that this discussion is premature (it has, in fact, jestingly been called 'putting the cart before the horse' ) because sufficiently sized quantum computers which could outperform modern classical PCs in factorizing large integers do not exist and will not exist in the foreseeable future. Some more pessimistic people even argue that they will never exist. Nevertheless, there are at least two good reasons to discuss the issue now. First, quantum computers can be simulated on classical computers, although not efficiently in general, of course. So, at least for small numbers of 'qubits', quantum algorithms can be run on a classical computer. Second, there do exist applications which could be realized on smaller-sized quantum computers, such as the simulation of complex systems [1, p. 204]. Some workers in the field argue that applications of this type might be realizable within a couple of years. This article surveys discussions and current contributions to the young research area of QPLs, which potentially might support the development of quantum algorithms and quantum programs. The rest of the article is organized as follows.
Section 2 summarizes some terminology of quantum theory and explains some basic ideas behind the formalism. There is an ongoing debate on the interpretation of quantum theory. Although this is beyond the scope of the present article, we give some comments in Section 3 because, in some of the publications on QPLs, questions of interpretation are touched upon. General design aspects are discussed in Section 4. Section 5 surveys in detail some of the approaches, such as the use of pseudocode, a procedural approach and an approach based on a conventional programming language. The section also discusses some more recent theoretical work related to lambda calculus, functional programming and linear logic. Finally, Section 6 concludes the article with a summary. The intended audience for this article is computer scientists who are interested in getting some general idea of present attempts to define programming languages for quantum computers. With the exception of Section 2, most parts of the article are kept non-technical; in particular, in Section 5.6, no formalized introductions to categorical terminology, linear logic or formal semantics are given. However, references have been provided for those readers who want to see more details of the issues treated in this article. The present article is a largely extended and updated version of a seminar report on QPLs; see also Refs. [9-11].
Quantum Programming Languages: An Introductory Overview <s> QUANTUM THEORY AND QUANTUM COMPUTATION <s> In May 1935, Albert Einstein, Boris Podolsky and Nathan Rosen published an argument that quantum mechanics fails to provide a complete description of physical reality. Today, 50 years later, the EPR paper and the theoretical and experimental work it inspired remain remarkable for the vivid illustration they provide of one of the most bizarre aspects of the world revealed to us by the quantum theory. <s> BIB001 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM THEORY AND QUANTUM COMPUTATION <s> A state of a composite quantum system is called classically correlated if it can be approximated by convex combinations of product states, and Einstein-Podolsky-Rosen correlated otherwise. Any classically correlated state can be modeled by a hidden-variable theory and hence satisfies all generalized Bell's inequalities. It is shown by an explicit example that the converse of this statement is false. <s> BIB002 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM THEORY AND QUANTUM COMPUTATION <s> From the Publisher: ::: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition,this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. ::: In its new edition,Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity,and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. ::: As in the classic first edition,this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further,the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. ::: Each chapter presents an algorithm,a design technique,an application area,or a related topic. The chapters are not dependent on one another,so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally,the new edition offers a 25% increase over the first edition in the number of problems,giving the book 155 problems and over 900 exercises thatreinforcethe concepts the students are learning. <s> BIB003 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM THEORY AND QUANTUM COMPUTATION <s> We define the model of quantum circuits with density matrices, where non-unitary gates are allowed. Measurements in the middle of the computation, noise and decoherence are implemented in a natural way in this model, which is shown to be equivalent in computational power to standard quantum circuits. 
::: The main result in this paper is a solution for the subroutine problem: The general function that a quantum circuit outputs is a probabilistic function, but using pure state language, such a function can not be used as a black box in other computations. We give a natural definition of using general subroutines, and analyze their computational power. ::: We suggest convenient metrics for quantum computing with mixed states. For density matrices we analyze the so called ``trace metric'', and using this metric, we define and discuss the ``diamond metric'' on superoperators. These metrics enable a formal discussion of errors in the computation. ::: Using a ``causality'' lemma for density matrices, we also prove a simple lower bound for probabilistic functions. <s> BIB004 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM THEORY AND QUANTUM COMPUTATION <s> An inexpensive faucet aerator is provided by three molded parts and a thin metal cup that holds two of the molded parts in assembled relationship. The first molded part is an elongated annulus provided with upstream and downstream recesses separated by an inner ring that helps to break up the liquid flow and serves as an abutment to support a second molded, jet-forming, part in the upstream recess and is arranged to be engaged by the third, molded, part located in the downstream recess and aiding in defining air intake means to the aerator. <s> BIB005 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM THEORY AND QUANTUM COMPUTATION <s> From the foundations of quantum theory to quantum technology - G. Alber quantum information theory, an invitation - R. Werner quantum communication - H. Weinfurter and A. Zeilinger quantum algorithms, applicable algebra and quantum physics - T.H. Beth and M. Rotteler mixed-state entanglement and quantum communication - M. Rotteler and R. Horodecki. <s> BIB006 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM THEORY AND QUANTUM COMPUTATION <s> Optimal implementation of quantum gates is crucial for designing a quantum computer. We consider the matrix representation of an arbitrary multiqubit gate. By ordering the basis vectors using the Gray code, we construct the quantum circuit which is optimal in the sense of fully controlled single-qubit gates and yet is equivalent with the multiqubit gate. In the second step of the optimization, superfluous control bits are eliminated, which eventually results in a smaller total number of the elementary gates. In our scheme the number of controlled NOT gates is O(4(n)) which coincides with the theoretical lower bound. <s> BIB007 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM THEORY AND QUANTUM COMPUTATION <s> Abstract In this paper we give a self-contained introduction to the conceptional and mathematical foundations of quantum information theory. In the first part we introduce the basic notions like entanglement, channels, teleportation, etc. and their mathematical description. The second part is focused on a presentation of the quantitative aspects of the theory. Topics discussed in this context include: entanglement measures, channel capacities, relations between both, additivity and continuity properties and asymptotic rates of quantum operations. Finally, we give an overview on some recent developments and open questions. <s> BIB008
Quantum theory is the theory of physical processes at an atomic and subatomic scale. It is a state theory, which means that the basic notions are the state of a system, the evolution of a system's state in time, observables, and measurement, i.e. the process of measuring observables in a given system state. There are many up-to-date textbooks BIB005 BIB006 and tutorials [14-16] on quantum computation, including a collection of on-line articles on different levels of abstraction. Therefore, we restrict ourselves to a brief summary of terminology and notation, but discuss some points and problems of the physical background. The standard formalism underlying quantum theory defines a general framework which leaves room for empirical choices such as the system's number of degrees of freedom and the 'law of force' (technically: the Hamiltonian). Moreover, quantum theory is a statistical theory: observational results are probabilistic, including the limiting cases of probability 0 or 1. Formally, the arena of quantum theory is a Hilbert space H, a complex vector space with an inner product which is complete with respect to this product. The traditional notation, due to Dirac, for elements of this vector space is |ψ⟩, where ψ is some label. This notation, which is quite popular in the physics community, has many advantages for practical calculations and a few disadvantages and also, occasionally, some potential ambiguities. Readers who prefer an alternative presentation may consult Ref. [18, pp. 531-541], where vectors and matrices are written in block form. However, Mermin's tutorial, which is specifically aimed at readers with no prior familiarity with quantum mechanics, uses Dirac's notation even for classical bits. Usually, in the context of quantum computation, the state space is a finite collection of two-dimensional Hilbert spaces. So, the dimension of the Hilbert space is finite and, to a large extent, elementary linear algebra is all that is needed at this level of abstraction. The theorem of Riesz states that for every vector |φ⟩ ∈ H, there exists exactly one continuous linear functional on H, denoted by ⟨φ|, such that the inner product ⟨φ|ψ⟩ may be regarded as an application of ⟨φ| to the vector |ψ⟩. The linear functionals on H also form a Hilbert space, the dual space H*. In component language, the operations |ψ⟩ ∈ H ↦ ⟨ψ| ∈ H* and vice versa (dual correspondence) are also known as 'lowering' and 'raising' of indices. The states of a system, more precisely, mixed states, are positive linear operators ρ on H with tr ρ = 1 (tr = trace, the sum of the diagonal elements of the matrix ρ_ij representing the operator with respect to some basis). In a closed system, the time evolution of a state ρ is given by a unitary operator U according to ρ' = UρU*. Here, a norm-preserving invertible operator is called unitary. Particularly important is the operation of building larger state spaces from smaller ones. Two quantum systems A and B with Hilbert spaces H_A and H_B, respectively, can be joined into one system A&B. In Hilbert space terminology, the resulting bipartite system is represented by the tensor product H_{A&B} = H_A ⊗ H_B. The dimension of the Hilbert space of a composite system A&B is given by dim H_{A&B} = dim H_A · dim H_B. Traditional textbooks usually identify system states with vectors. In fact, there are special states, called pure states, which informally could be paraphrased as 'states with as little randomness as possible'.
Formally, a pure state ρ can be characterized by tr(ρ²) = 1 and may canonically be represented in the form of a dyad ρ = |ψ⟩⟨ψ|. The main drawback of exclusively using this notion of system state lies in the fact that a composite quantum system can be in a pure state, whereas subsystems can, at the same time, be in mixed states, which means that only partial information is available on the subsystems. Therefore, the notion of mixed states introduces a unifying view. A system with the property that maximal information is available on the system as a whole but no information at all is available on the subsystems is called (maximally) entangled. According to Schrödinger, this is the fundamental property setting quantum physics apart from the classical physics. For many decades, entanglement has been considered a strange and bizarre feature of quantum physics. One of the fundamental new insights of modern quantum information theory is the observation that entanglement serves as a resource for potential applications such as fast factorization of integers into primes. The now generally accepted formal definition of entanglement is due to Werner BIB002 : a state of a composite system is called classically correlated if it can be approximated by convex combinations of product states, and Einstein-Podolsky-Rosen (EPR) correlated, i.e. entangled, otherwise; the name refers to Einstein, Podolsky and Rosen's BIB001 famous popular paper. In many situations, observables can adequately be represented by self-adjoint operators, the eigenvalues of which are the possible measurement values. A measurement yields probabilistically one of these values and, additionally, projects the system state onto the eigenspace of the measured value. In the context of quantum information theory, some of these traditional postulates have turned out to be oversimplified. Generalizations using the notions of quantum operation or quantum channel require some more advanced formalism, which will not be treated here in detail. In-depth introductions to this formalism can be found in most advanced texts, see, for example, Refs. BIB005 BIB006 BIB008 . There are three basic steps in a quantum process: system preparation, system transformation, i.e. unitary time evolution of a closed system, and measurement. A basic task for a physicist who faces the problem of modelling a concrete quantum system is to find a suitable Hilbert space H, representing the number of degrees of freedom, and the unitary operators U (or the Hamiltonian), representing the system's time evolution. Presently, in quantum computation, the most popular model is the qubit or gate model, which may, in the context of this paper, also serve as an example of the general formalism sketched earlier. In this model, a quantum network is a composite system consisting of n qubits. A one-qubit system is a two-level system, for example, a spin-1/2 particle such as an electron or a photon with two polarization states (right/left or vertical/horizontal polarization). The Hilbert space modelling these systems is H_2 = C² and the Hilbert space of a composite system of n qubits is H_n = H_2^⊗n. So, in particular, adding one qubit to a system doubles its dimension: dim H_{n+1} = 2 · dim H_n. A general state of three qubits, for example, reads |ψ⟩ = a_000|000⟩ + a_001|001⟩ + ... + a_111|111⟩. Another way of writing this three-qubit state is |ψ⟩ = a_0|0⟩ + a_1|1⟩ + ... + a_7|7⟩ with the obvious re-interpretation of bit sequences as integers. Applying a unitary operation to |ψ⟩ means to proceed one step in time or, to put it differently, to process all of the numbers 0-7 in one step. Therefore, this capability of quantum systems of processing many integer values simultaneously has been called 'quantum parallelism'.
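As a small numerical illustration of these notions, the following NumPy fragment builds a generic (randomly chosen) three-qubit state vector in C^8, forms the corresponding pure density operator, and applies one unitary step to all eight amplitudes at once; the particular unitary chosen here, a Hadamard on every qubit, is only an example and not tied to any specific algorithm.

```python
import numpy as np

n = 3                                   # three qubits
dim = 2 ** n                            # dim H_n = 2^n = 8

# A generic normalized three-qubit state |psi> = sum_x a_x |x>, x = 0..7.
rng = np.random.default_rng(0)
a = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
psi = a / np.linalg.norm(a)

# The corresponding pure density operator rho = |psi><psi| with tr(rho) = tr(rho^2) = 1.
rho = np.outer(psi, psi.conj())
print(np.trace(rho).real, np.trace(rho @ rho).real)    # both ~ 1.0

# One unitary time step acts on all 2^n amplitudes simultaneously
# ('quantum parallelism'); here U = H (x) H (x) H, a Hadamard on each qubit.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
U = np.kron(np.kron(H, H), H)           # an 8 x 8 unitary matrix
psi_next = U @ psi                      # processes all basis labels 0..7 in one step
print(np.allclose(U.conj().T @ U, np.eye(dim)))        # unitarity check: True
```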
In an n-qubit system, an operator U is represented by a 2^n × 2^n matrix, which obviously gets extremely large even for modest values of n. So, an important question is how this matrix can be broken down into smaller parts. A number of theorems exist which give (partial) answers to this question BIB005 : single-qubit and CNOT gates (discussed subsequently) can be used to implement an arbitrary unitary operation on n qubits. These gates are universal but 'no straightforward method is known to implement these in a fashion which is resistant to errors' [1, p. 194]. But, there exist discrete sets of gates which can be used to perform universal quantum computation using quantum error-correcting codes. Arbitrary unitary operations can be approximated by discrete sets of gates. One such set of gates is: Hadamard gate, phase gate, CNOT gate (controlled NOT, XOR) and T-gate. Figure 1 shows the graphical representations of these gates, their matrix form and their operation on states. More recent work on breaking up large unitaries into more elementary constituents can be found in Ref. BIB007 and references therein. In CLRS-style pseudocode notation BIB003 , a quantum computation in its most basic form can be written as follows:
repeat
    prepare the initial state
    apply a sequence of unitary operations
    measure
until the desired level of statistical confidence has been reached
The traditional gate model relies on the assumption that at any given time, the system is in a pure state. There are many situations, however, which cannot be described adequately, if at all, within this setting. A generalization of the gate model with mixed states has been given by Aharonov et al. BIB004 . In their article a quantum circuit is defined as a directed acyclic graph, where each node represents one gate. The gate itself is represented by a so-called superoperator, a trace-preserving (in general, trace non-increasing), completely positive linear map from mixed states on k qubits to mixed states on l qubits, where k ≠ l in general. Situations which can thus be treated adequately include measurements in the middle of a computation, decoherence and noise, and the so-called subroutine problem. The notion of a superoperator (alternative or closely related notions are quantum operations and channels) is sufficiently general to deal with unitary as well as non-unitary evolution, such as measurement or quantum noise, in a unified formal framework. The physical idea in the background of this formalism is the question of how quantum operations in an open system can be described intrinsically, i.e. without reference to the environment. The article by Aharonov et al. gives a readable account and motivation of this terminology; see also the introductory texts on quantum information theory cited at the beginning of this section.
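In concrete matrix form, the discrete gate set just mentioned and the basic prepare-transform-measure loop can be written down in a few lines of NumPy. The matrices below follow the usual computational-basis conventions and are meant only as an illustration accompanying Figure 1, whose exact layout is not reproduced here; the small two-qubit circuit at the end (a Hadamard on the first qubit followed by a CNOT) already produces a maximally entangled state.

```python
import numpy as np

# Standard computational-basis matrices of the discrete gate set mentioned above.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)         # Hadamard gate
S = np.array([[1, 0], [0, 1j]])                      # phase gate
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])  # T-gate
CNOT = np.array([[1, 0, 0, 0],                       # controlled NOT (XOR),
                 [0, 1, 0, 0],                       # first qubit = control
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

for name, gate in [("H", H), ("S", S), ("T", T), ("CNOT", CNOT)]:
    d = gate.shape[0]
    assert np.allclose(gate.conj().T @ gate, np.eye(d)), name   # every gate is unitary

# A minimal two-qubit circuit: |00>, then H on qubit 1, then CNOT -> Bell state.
ket00 = np.zeros(4)
ket00[0] = 1.0
bell = CNOT @ np.kron(H, np.eye(2)) @ ket00
print(bell)             # amplitudes (1/sqrt2, 0, 0, 1/sqrt2): (|00> + |11>)/sqrt(2)

# Sampling measurement outcomes in the computational basis, repeated as in the
# pseudocode loop above until enough statistics have been gathered.
rng = np.random.default_rng(1)
probs = np.abs(bell) ** 2
counts = np.bincount(rng.choice(4, size=1000, p=probs), minlength=4)
print(counts)           # outcomes 00 and 11 each occur ~500 times; 01 and 10 never
```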
Quantum Programming Languages: An Introductory Overview <s> QUANTUM PROGRAMMING LANGUAGES 137 <s> We have measured the linear polarization correlation of the photons emitted in a radiative atomic cascade of calcium. A high-efficiency source provided an improved statistical accuracy and an ability to perform new tests. Our results, in excellent agreement with the quantum mechanical predictions, strongly violate the generalized Bell's inequalities, and rule out the whole class of realistic local theories. No significant change in results was observed with source-polarizer separations of up to 6.5 m. <s> BIB001 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM PROGRAMMING LANGUAGES 137 <s> A Franson-type test of Bell inequalities by photons 10.9 km apart is presented. Energy-time entangled photon pairs are measured using two-channel analyzers, leading to a violation of the inequalities by 16 standard deviations without subtracting accidental coincidences. Subtracting them, a two-photon interference visibility of 95.5% is observed, demonstrating that distances up to 10 km have no significant effect on entanglement. This sets quantum cryptography with photon pairs as a practical competitor to the schemes based on weak pulses. <s> BIB002
In view of the loss of direct intuition compared with the classical physics, the question is legitimate whether quantum theory is the definitive theory of microphysics or whether there might be a more subtle theory predicting more details than quantum theory does, which is, in some way, 'closer to reality'. Of course, computer science can hardly solve this problem and no one (anyway, no physicist) expects this. But, computer science offers some terminology which is flexible enough to shed some light on the subject from a different perspective. From a computer science perspective, quantum theory is a kind of automata theory: a system has to be initialized ('prepared'), the system dynamics is described by a sequence of states and the final result will be output ('measured'). The state space is a kind of abstraction and it can reasonably be asked in which way the automaton has been realized or implemented. In computer science, implementation is commonly seen as a kind of mapping to a real standard system, for example, a standard hardware, operating system or programming language. In physics, the situation is similar insofar as the Hilbert space is a highly abstracted way of describing experiments. (Peres as quoted in Ref. [1, p. 112]: '. . . quantum phenomena do not occur in a Hilbert space, they occur in a laboratory.') It is, however, not clear whether the question 'Is it possible in the context of theoretical physics to talk about reality itself?' really makes sense. Physics always describes natural phenomena, although in the classical physics, notably classical mechanics, the gap between formal description and sensual perception seems to be small. Amazingly, computer science offers some more subtle terminology, which could help in clarifying the relation between abstract description and implementation. In the context of the specification language LOTOS, Bolognesi and Brinksma [27, p. 39] write in their tutorial: 'In LOTOS the words specification and implementation have a relative meaning, not an absolute one. Given two (syntactically homogeneous) LOTOS specifications S_1 and S_2, we will say that S_2 is an implementation of the specification S_1 when, informally, S_2 gives a more structured and detailed description of the system specified in S_1.' This definition can successfully be applied to physics. In the history of physics, there are many examples of successful refinements. For example, statistical mechanics can be seen as a proper refinement of thermodynamics. All of the results of the latter are reproduced by statistical mechanics and, additionally, there are phenomena such as fluctuations, which can be explained by statistical mechanics. The analogy between quantum mechanics and thermodynamics elucidates Einstein's position towards quantum mechanics, see the Einstein-Born letters [28, letter dated 50/09/15]. A detailed appreciation of Einstein's historical role in the development of quantum mechanics from a perspective of modern quantum information theory has been given by Werner . So, instead of looking for a 'realization' of physical phenomena which obviously are successfully described by the Hilbert space formalism, one should ask whether refinements of the theory exist, which could explain the theory in much the same way as statistical mechanics explains thermodynamics.
The so-called local hidden variable theories were one such attempt to explain the statistical nature of quantum phenomena in much the same way as the stochastic behaviour of classical probabilistic systems can be explained. Throwing dice in the usual manner is influenced by innumerable parameters which cannot be controlled fully. Embodying this idea into a theory of quantum processes led to predictions, which were substantially different from the conventional quantum physics. In fact, the celebrated Bell inequalities state that these theories set stronger bounds on a certain parameter, the Bell correlation, than the quantum theory. The key feature setting physics apart from mathematics or computer science is the existence of a 'supreme referee': the experiment. And, in fact, experiments BIB001 BIB002 say that Bell's inequalities can be violated by quantum systems, thus ruling out the theories with local hidden parameters. Therefore, to summarize, it is an open question whether a proper refinement of quantum theory exists. In this sense, one might say that there does not yet exist an entirely satisfactory explanation of how quantum phenomena are 'realized' in nature. This should not, however, obscure the overwhelming success of quantum theory in its present form: theoretical predictions agree perfectly with experimental results and no contradictions between observational experiences and the mathematical framework of the theory [1, p. 2] are known. This somewhat lengthy discussion should point out that re-formulating and possibly refining quantum theory appear to be a risky matter. Whether a discussion on QPLs can contribute anything to these issues is certainly an open question. Nevertheless, with respect to the perspective on the foundation of quantum theory, the attempts of re-formulating quantum theory (Section 5.6) are certainly the most exciting aspect of this research.
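The 'certain parameter' mentioned above can be made concrete numerically: for the two-qubit singlet state and a standard choice of measurement directions, the CHSH combination of correlations reaches 2√2, whereas local hidden variable models are bounded by 2. The following NumPy fragment is only a textbook-style illustration of this bound, not a model of the cited experiments.

```python
import numpy as np

# Pauli matrices and the two-qubit singlet state (|01> - |10>)/sqrt(2).
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)

def spin(theta):
    """Spin observable along angle theta in the x-z plane (eigenvalues +-1)."""
    return np.cos(theta) * Z + np.sin(theta) * X

def E(theta_a, theta_b):
    """Quantum mechanical correlation <psi| A (x) B |psi> for the singlet state."""
    AB = np.kron(spin(theta_a), spin(theta_b))
    return np.real(singlet.conj() @ AB @ singlet)

a1, a2 = 0.0, np.pi / 2                 # Alice's two measurement directions
b1, b2 = np.pi / 4, 3 * np.pi / 4       # Bob's two measurement directions

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S), 2 * np.sqrt(2))           # |S| = 2*sqrt(2) ~ 2.83 > 2, the local-realistic bound
```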
Quantum Programming Languages: An Introductory Overview <s> General goals of designing QPLs <s> A few conventions for thinking about and writing quantum pseudocode are proposed. The conventions can be used for presenting any quantum algorithm down to the lowest level and are consistent with a quantum random access machine (QRAM) model for quantum computing. In principle a formal version of quantum pseudocode could be used in a future extension of a conventional language. <s> BIB001 </s> Quantum Programming Languages: An Introductory Overview <s> General goals of designing QPLs <s> It is becoming increasingly clear that, if a useful device for quantum computation will ever be built, it will be embodied by a classical computing machine with control over a truly quantum subsystem, this apparatus performing a mixture of classical and quantum computation. This paper investigates a possible approach to the problem of programming such machines: a template high level quantum language is presented which complements a generic general purpose classical language with a set of quantum primitives. The underlying scheme involves a run-time environment which calculates the byte-code for the quantum operations and pipes it to a quantum device controller or to a simulator. This language can compactly express existing quantum algorithms and reduce them to sequences of elementary operations; it also easily lends itself to automatic, hardware independent, circuit simplification. A publicly available preliminary implementation of the proposed ideas has been realised using the language. <s> BIB002 </s> Quantum Programming Languages: An Introductory Overview <s> General goals of designing QPLs <s> This article is a brief and subjective survey of quantum programming language research. 1 Quantum Computation Quantum computing is a relatively young subject. It has its beginnings in 1982, when Paul Benioff and Richard Feynman independently pointed out that a quantum mechanical system can be used to perform computations [11, p.12]. Feynman’s interest in quantum computation was motivated by the fact that it is computationally very expensive to simulate quantum physical systems on classical computers. This is due to the fact that such simulation involves the manipulation is extremely large matrices (whose dimension is exponential in the size of the quantum system being simulated). Feynman conceived of quantum computers as a means of simulating nature much more efficiently. The evidence to this day is that quantum computers can indeed perform certain tasks more efficiently than classical computers. Perhaps the best-known example is Shor’s factoring algorithm, by which a quantum computer can find the prime factors of any integer in probabilistic polynomial time [15]. There is no known classical probabilistic algorithm which can solve this problem in polynomial time. In the ten years since the publication of Shor’s result, there has been an enormous surge of research in quantum algorithms and quantum complexity theory. 2 Quantum Programming Languages Quantum physics involves phenomena, such as superposition and entanglement, whose properties are not always intuitive. These same phenomena give quantum computation its power, and are often at the heart of an interesting quantum algorithm. However, there does not yet seem to be a unifying set of principles by which quantum algorithms are developed; each new algorithm seems to rely on a unique set of “tricks” to achieve its particular goal. 
One of the goals of programming language design is to identify and promote useful “high-level” concepts — abstractions or paradigms which allow humans 2 to think about a problem in a conceptual way, rather than focusing on the details of its implementation. With respect to quantum programming, it is not yet clear what a useful set of abstractions would be. But the study of quantum programming languages provides a setting in which one can explore possible language features and test their usefulness and expressivity. Moreover, the definition of prototypical programming languages creates a unifying formal framework in which to view and analyze existing quantum algorithm. 2.1 Virtual Hardware Models Advances in programming languages are often driven by advances in compiler design, and vice versa. In the case of quantum computation, the situation is complicated by the fact that no practical quantum hardware exists yet, and not much is known about the detailed architecture of any future quantum hardware. To be able to speak of “implementations”, it is therefore necessary to fix some particular, “virtual” hardware model to work with. Here, it is understood that future quantum hardware may differ considerably, but the differences should ideally be transparent to programmers and should be handled automatically by the compiler or operating system. There are several possible virtual hardware models to work with, but fortunately all of them are equivalent, at least in theory. Thus, one may pick the model which fits one’s computational intuitions most closely. Perhaps the most popular virtual hardware model, and one of the easiest to explain, is the quantum circuit model. Here, a quantum circuit is made up from quantum gates in much the same way as a classical logic circuit is made up from logic gates. The difference is that quantum gates are always reversible, and they correspond to unitary transformations over a complex vector space. See e.g. [3] for a succinct introduction to quantum circuits. Of the two basic quantum operations, unitary transformations and measurements, the quantum circuit model emphasizes the former, with measurements always carried out as the very last step in a computation. Another virtual hardware model, and one which is perhaps even better suited for the interpretation of quantum programming languages, is the QRAM model of Knill [9]. Unlike the quantum circuit model, the QRAM models allows unitary transformations and measurements to be freely interleaved. In the QRAMmodel, a quantum device is controlled by a universal classical computer. The quantum device contains a large, but finite number of individually addressable quantum bits, much like a RAM memory chip contains a multitude of classical bits. The classical controller sends a sequence of instructions, which are either of the form “apply unitary transformation U to qubits i and j” or “measure qubit i”. The quantum device carries out these instruction, and responds by making the results of the measurements available. A third virtual hardware model, which is sometimes used in complexity theory, is the quantum Turing machine. Here, measurements are never performed, and the entire operation of the machine, which consists of a tape, head, and finite control, is assumed to be unitary. While this model is theoretically equivalent <s> BIB003 </s> Quantum Programming Languages: An Introductory Overview <s> General goals of designing QPLs <s> We propose the design of a programming language for quantum computing. 
Traditionally, quantum algorithms are frequently expressed at the hardware level, for instance in terms of the quantum circuit model or quantum Turing machines. These approaches do not encourage structured programming or abstractions such as data types. In this paper, we describe the syntax and semantics of a simple quantum programming language with high-level features such as loops, recursive procedures, and structured data types. The language is functional in nature, statically typed, free of run-time errors, and has an interesting denotational semantics in terms of complete partial orders of superoperators. <s> BIB004 </s> Quantum Programming Languages: An Introductory Overview <s> General goals of designing QPLs <s> We define quantum cellular automata as infinite quantum lattice systems with discrete time dynamics, such that the time step commutes with lattice translations and has strictly finite propagation speed. In contrast to earlier definitions this allows us to give an explicit characterization of all local rules generating such automata. The same local rules also generate the global time step for automata with periodic boundary conditions. Our main structure theorem asserts that any quantum cellular automaton is structurally reversible, i.e., that it can be obtained by applying two blockwise unitary operations in a generalized Margolus partitioning scheme. This implies that, in contrast to the classical case, the inverse of a nearest neighbor quantum cellular automaton is again a nearest neighbor automaton. ::: We present several construction methods for quantum cellular automata, based on unitaries commuting with their translates, on the quantization of (arbitrary) reversible classical cellular automata, on quantum circuits, and on Clifford transformations with respect to a description of the single cells by finite Weyl systems. Moreover, we indicate how quantum random walks can be considered as special cases of cellular automata, namely by restricting a quantum lattice gas automaton with local particle number conservation to the single particle sector. <s> BIB005 </s> Quantum Programming Languages: An Introductory Overview <s> General goals of designing QPLs <s> We introduce the language QML, a functional language for quantum computations on finite types. Its design is guided by its categorical semantics: QML programs are interpreted by morphisms in the category FQC of finite quantum computations, which provides a constructive semantics of irreversible quantum computations realisable as quantum gates. QML integrates reversible and irreversible quantum computations in one language, using first order strict linear logic to make weakenings explicit. Strict programs are free from decoherence and hence preserve superpositions and entanglement -which is essential for quantum parallelism. <s> BIB006 </s> Quantum Programming Languages: An Introductory Overview <s> General goals of designing QPLs <s> We present the quantum programming language cQPL which is an extended version of QPL [Sel04b]. It is capable of quantum communication and it can be used to formulate all possible quantum algorithms. Additionally, it possesses a denotational semantics based on a partial order of superoperators and uses fixed points on a generalised Hilbert space to formalise (in addition to all standard features expected from a quantum programming language) the exchange of classical and quantum data between an arbitrary number of participants. 
Additionally, we present the implementation of a cQPL compiler which generates code for a quantum simulator. <s> BIB007 </s> Quantum Programming Languages: An Introductory Overview <s> General goals of designing QPLs <s> We discuss the role of classical control in the context of reversible quantum cellular automata. Employing the structure theorem for quantum cellular automata, we give a general construction scheme to turn an arbitrary cellular automaton with external classical control into an autonomous one, thereby proving the computational equivalence of these two models. We use this technique to construct a universally programmable cellular automaton on a one-dimensional lattice with single cell dimension 12. <s> BIB008
In computer science, language design is a highly controversial matter. On the one hand, a vast world of (classical) programming languages exists and, on the other hand, even the relevance of the subject itself is a matter of debate. Some people consider programming languages as a marginal issue, just as a means of getting a computer to do some useful work, whereas scientists involved in language and system design consider this issue as central to the whole field of computer science. Just to cite one of the pioneers of language and system design, Wirth [33, p. 10]: '. . . I hope, I have clearly expressed my opinion that programming, programming style, programming discipline, and therewith programming languages are still not merely one of many issues in computer science, but pillars.' All of these controversial matters have also to be discussed in the context of QPLs and, of course, many more which are specifically related to quantum physics. In this section, QPL design will be considered from an informal perspective. Here, some general goals which should be achieved will be put into the foreground. The following is a non-exhaustive, subjective commented list of some aspects, which will play a certain role in designing QPLs. Some of these desiderata will certainly be controversial or might be considered as marginal, and some might even turn out not to be realizable. In their article on Q language, Bettelli et al. BIB002 list several desiderata for a QPL. According to these authors, a QPL should fulfil the following requirements:
- completeness: it must be possible to code every quantum algorithm or, more generally, every quantum program;
- classical extension: the quantum language must include a high-level 'classical computing paradigm';
- separability: classical and quantum programming must be kept separated;
- expressivity: the language must provide high-level constructs;
- hardware independence: the language should not rely on details of the quantum hardware.
There may be some other and more specific desiderata. A QPL should or should possibly (i) run on top of a simulator as well as on a real system, (ii) help in discovering new efficient quantum algorithms, (iii) enable a layperson to write quantum programs, (iv) comply with the concept of abstract data types (ADTs), (v) provide high-level language constructs, (vi) support quantum data and quantum control, (vii) support programming in the large and programming communication processes, (viii) be as close as possible to classical language concepts for pragmatic reasons and (ix) support quantum processes completely, including measurement. In the sequel, we give some comments on this list. It should be possible to couple the language, more precisely, the run-time system, to a simulator and potentially replace the simulator with a real quantum computer without the need of changing parts of the program in any way. As stated earlier, quantum computers can (non-efficiently) be simulated by classical computers simply by integrating the basic equations for time evolution of quantum systems. But here, a caveat should be added: this statement tacitly assumes that the quantum system is not composed of parts which are spatially separated. The attempt of simulating a spatially separated quantum system by a classical system, which is also spatially separated, requires additional resources (classical communication) and introduces additional timing constraints which would have no counterpart in reality.
In contrast to the general opinion that programming languages are merely a means of getting a computer to do some useful work, language designers emphasize that programming languages also serve as a means for communication between humans. Therefore, QPLs should enable programmers to reason about structures of quantum algorithms and programs. Ideally, so many people argue, a well-designed QPL should aid in discovering new quantum algorithms. However, a comparison with the situation of classical programming languages suggests that the idea of languages being helpful in this context should be regarded sceptically. An undergraduate student having finished a programming course in Pascal will presumably not be able to re-invent Quicksort, for example. In fact, up to now, none of the approaches seems to have led to a discovery of new quantum algorithms. One of the fundamental goals of classical programming languages is to enable a layperson to write complex programs without a detailed knowledge of the architecture of the hardware or the operating system. In the context of quantum programs, this means that a computer scientist could program quantum computers without a detailed knowledge of the underlying physics. This could possibly be achieved by encapsulating typical quantum operations in a library: this is the idea of ADTs. There remain, however, at least two problems with this approach. First, non-experts will not have an intuitive understanding of elementary quantum operations. So, they will hardly be able to combine segments of quantum programs in a reasonable manner without some kind of formalized support. Second, if an algorithm is encapsulated as a whole such that its effect (not its efficiency, of course) can be understood classically, some information on the probabilities involved must be provided. From this perspective, quantum algorithms look like extremely fast classical probabilistic algorithms. It is, however, very unlikely that within this setting, new quantum algorithms will be discovered. Knill BIB001 proposes a pseudocode notation for quantum programs and the model of a quantum random access machine (QRAM) in which the quantum system is being controlled by a classical computer; this model has been influential in the design of several QPLs, see Section 5. For example, in Selinger's BIB003 BIB004 language QPL/QFC (quantum flow charts), this idea has been put into the slogan 'classical control, quantum data'. One could as well imagine a situation in which both data and control are quantum mechanical. In the proposed language QML BIB006 , this slogan has been modified to 'quantum data and quantum control'. This idea, which permits superposed instructions as well as superposed data, has already been put forward in the context of quantum cellular automata, see Refs. BIB008 BIB005 and references therein. QPLs should also support programming in the large, i.e. they should support some kind of modularization. This is a rather non-trivial point because when composing two quantum systems into one single system, the existence of nonclassical correlations has to be taken into consideration. Obviously, a classical modularization scheme will not work, because in this setting, global memory will be additive instead of being multiplicative. QPLs should also be able to express quantum communication protocols.
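The QRAM picture just mentioned — a classical host issuing 'apply unitary' and 'measure' instructions to a quantum device — can be illustrated with a small, self-contained sketch. The class below is a toy stand-in backed by a state-vector simulator; its name and methods are purely illustrative and are not taken from Knill's proposal or from any of the cited languages. The point of the exercise is the division of labour: the classical program sees only the classical bits returned by measure and can branch on them, while the quantum state itself never leaves the device.

```python
import numpy as np

class ToyQRAM:
    """Toy illustration of the QRAM idea: a classical program sends
    'apply unitary' and 'measure' instructions to a quantum device.
    Here the 'device' is just a state-vector simulator."""

    def __init__(self, n_qubits, seed=0):
        self.n = n_qubits
        self.state = np.zeros(2 ** n_qubits, dtype=complex)
        self.state[0] = 1.0                        # prepare |0...0>
        self.rng = np.random.default_rng(seed)

    def apply(self, U, qubit):
        """Apply a single-qubit unitary U to the given qubit."""
        ops = [np.eye(2)] * self.n
        ops[qubit] = U
        full = ops[0]
        for op in ops[1:]:
            full = np.kron(full, op)
        self.state = full @ self.state

    def measure(self, qubit):
        """Measure one qubit in the computational basis; return 0 or 1
        and project the state (the classical control sees only this bit)."""
        probs = np.abs(self.state) ** 2
        outcomes = [(i >> (self.n - 1 - qubit)) & 1 for i in range(2 ** self.n)]
        p1 = sum(p for p, o in zip(probs, outcomes) if o == 1)
        result = int(self.rng.random() < p1)
        mask = np.array([o == result for o in outcomes])
        self.state = np.where(mask, self.state, 0)
        self.state /= np.linalg.norm(self.state)
        return result

# Classical control, quantum data: the host decides what to do next
# based on the measured bits.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
qram = ToyQRAM(n_qubits=2)
qram.apply(H, qubit=0)
bit = qram.measure(qubit=0)
print("measured", bit)       # 0 or 1, each with probability 1/2
```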
In recent work by Mauerer BIB007 , the language cQPL, a variant of Selinger's language QPL, has been formulated, which extends QPL with communication capabilities. When designing a QPL, it is certainly a good idea to preserve as many classical language features as possible. Consequently, many languages introduce a quantum-if by means of the unitary two-qubit operation CNOT. Although there is nothing wrong with this, it might possibly suggest a too close analogy with classical languages. The point is that the roles of the target and the control bits will be exchanged if the computational basis is replaced by the Bell basis, which consists of suitable linear superpositions of the basis vectors |00⟩, |01⟩, |10⟩ and |11⟩. Moreover, if the target qubit is put into an equally weighted superposition by applying a Hadamard operation, then the resulting two-qubit state is a maximally entangled state, i.e. the state of both qubits is completely undefined. So, in these situations, the analogy to a classical 'If' is lost completely. This is again an example of the counterintuitiveness of quantum mechanics, see, for example, Ref. [1, p. 179] for more details. The area of QPLs is rapidly evolving and some of the approaches are certainly preliminary steps. In the final form of a QPL, the measurement process must certainly be incorporated because it is an integral constituent of quantum theory. It would, in fact, be very easy to compute efficiently the values of a function f : Z → Z with a quantum computer. The crux of the matter is that measuring one value irreversibly destroys the information on all the other values. Therefore, the extraction of information on a function is non-trivial. What can, in fact, be extracted is the information on properties of the function as a whole, such as the period of a periodic function. This is one of the key ingredients of Shor's algorithm.
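How the period enters can be seen from the classical skeleton of Shor's algorithm: once the order r of a randomly chosen a modulo N is known, non-trivial factors follow from a gcd computation. In the sketch below (assuming N is an odd composite with at least two distinct prime factors), the order is found by brute force; this is precisely the step that the quantum computer replaces, and the function names are illustrative only.

```python
import math
import random

def order(a, N):
    """Smallest r > 0 with a^r = 1 (mod N); brute force stands in here for the
    quantum period-finding subroutine."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factorize(N, rng=random.Random(0)):
    """Classical skeleton of Shor's reduction from factoring to order finding."""
    while True:
        a = rng.randrange(2, N)          # draw a randomly (here from 2..N-1)
        g = math.gcd(a, N)
        if g > 1:                        # lucky draw: a already shares a factor with N
            return g, N // g
        r = order(a, N)
        if r % 2 == 0:
            y = pow(a, r // 2, N)
            if y != N - 1:               # i.e. a^(r/2) != -1 (mod N)
                p = math.gcd(y - 1, N)
                q = math.gcd(y + 1, N)
                if 1 < p < N:
                    return p, N // p
                if 1 < q < N:
                    return q, N // q

print(factorize(15))                     # e.g. (3, 5)
```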
Quantum Programming Languages: An Introductory Overview <s> 140 <s> The number of steps any classical computer requires in order to find the prime factors of an l-digit integer N increases exponentially with l, at least using algorithms known at present1. Factoring large integers is therefore conjectured to be intractable classically, an observation underlying the security of widely used cryptographic codes1,2. Quantum computers3, however, could factor integers in only polynomial time, using Shor's quantum factoring algorithm4,5,6. Although important for the study of quantum computers7, experimental demonstration of this algorithm has proved elusive8,9,10. Here we report an implementation of the simplest instance of Shor's algorithm: factorization of N = 15 (whose prime factors are 3 and 5). We use seven spin-1/2 nuclei in a molecule as quantum bits11,12, which can be manipulated with room temperature liquid-state nuclear magnetic resonance techniques. This method of using nuclei to store quantum information is in principle scalable to systems containing many quantum bits13, but such scalability is not implied by the present work. The significance of our work lies in the demonstration of experimental and theoretical techniques for precise control and modelling of complex quantum computers. In particular, we present a simple, parameter-free but predictive model of decoherence effects14 in our system. <s> BIB001 </s> Quantum Programming Languages: An Introductory Overview <s> 140 <s> We develop a type theory and provide a denotational semantics for a simple fragment of the quantum lambda calculus, a formal language for quantum computation based on linear logic. In our semantics, variables inhabit certain Hilbert bundles, and computations are interpreted as the appropriate inner product preserving maps between Hilbert bundles. These bundles and maps form a symmetric monoidal closed category, as expected for a calculus based on linear logic. <s> BIB002 </s> Quantum Programming Languages: An Introductory Overview <s> 140 <s> The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine. <s> BIB003 </s> Quantum Programming Languages: An Introductory Overview <s> 140 <s> The paper develops a model of quantum computing from the perspective of functional programming. The model explains the fundamental ideas of quantum computing at a level of abstraction that is familiar to functional programmers. The model also illustrates some of the inherent difficulties in interpreting quantum mechanics and highlights the differences between quantum computing and traditional (functional or otherwise) computing models. 
<s> BIB004 </s> Quantum Programming Languages: An Introductory Overview <s> 140 <s> Compilers and computer-aided design tools will be essential for quantum computing. We present a computer-aided design flow that transforms a high-level language program representing a quantum computing algorithm into a technology-specific implementation. We trace the significant steps in this flow and illustrate the transformations to the representation of the quantum program. The focus of this paper is on the languages and transformations needed to represent and optimize a quantum algorithm along the design flow. Our software architecture provides significant benefits to algorithm designers, tool builders, and experimentalists. Of particular interest are the trade-offs in performance and accuracy that can be obtained by weighing different optimization and error-correction procedures at given levels in the design hierarchy. <s> BIB005 </s> Quantum Programming Languages: An Introductory Overview <s> 140 <s> These ‘lecture notes’ are based on joint work with Samson Abramsky. I will survey and informally discuss the results of [3, 4, 5, 12, 13] in a pedestrian not too technical way. These include: • ‘The logic of entanglement’, that is, the identification and abstract axiomatization of the ‘quantum information-flow’ which enables protocols such as quantum teleportation. 1 To this means we defined strongly compact closed categories which abstractly capture the behavioral properties of quantum entanglement. • ‘Postulates for an abstract quantum formalism’in which classical informationflow (e.g. token exchange) is part of the formalism. As an example, we provided a purely formal description of quantum teleportation and proved correctness in <s> BIB006 </s> Quantum Programming Languages: An Introductory Overview <s> 140 <s> Elaborating on our joint work with Abramsky in quant-ph/0402130 we further unravel the linear structure of Hilbert spaces into several constituents. Some prove to be very crucial for particular features of quantum theory while others obstruct the passage to a formalism which is not saturated with physically insignificant global phases. ::: First we show that the bulk of the required linear structure is purely multiplicative, and arises from the strongly compact closed tensor which, besides providing a variety of notions such as scalars, trace, unitarity, self-adjointness and bipartite projectors, also provides Hilbert-Schmidt norm, Hilbert-Schmidt inner-product, and in particular, the preparation-state agreement axiom which enables the passage from a formalism of the vector space kind to a rather projective one, as it was intended in the (in)famous Birkhoff & von Neumann paper. ::: Next we consider additive types which distribute over the tensor, from which measurements can be build, and the correctness proofs of the protocols discussed in quant-ph/0402130 carry over to the resulting weaker setting. A full probabilistic calculus is obtained when the trace is moreover linear and satisfies the \em diagonal axiom, which brings us to a second main result, characterization of the necessary and sufficient additive structure of a both qualitatively and quantitatively effective categorical quantum formalism without redundant global phases. Along the way we show that if in a category a (additive) monoidal tensor distributes over a strongly compact closed tensor, then this category is always enriched in commutative monoids. <s> BIB007
Unfortunately, the meaning of 'controlling quantum hardware' is not quite obvious. Many of the existing QPLs have been combined with a simulator based on strongly idealized models of hardware: the system is assumed to be perfectly isolated from the environment so that decoherence effects (i.e. effects destroying interference) will not come into play, unitary operations can be made arbitrarily exact and error correction is not an explicit part of the model. Of course, it is highly non-trivial and seems in fact impossible to incorporate all of these features into a working simulation model. An article on the first implementation of an NMR-based quantum computer BIB001 , which could factorize the number 15 (into 3 and 5, as the article reports), also reports that a complete simulation of the experiment, involving 4^7 × 4^7 parameters, was not feasible because the state space (of the simulation) was too large. Although programming languages are a central subject in their own right, most computer scientists would agree that even more importantly, they also form a part of a larger structure. As a historical example, C and Unix illustrate such a close relationship. The Oberon language and Oberon operating system are a highly remarkable and modern example of a symbiosis of this kind. In an interesting article, which addresses this problem in the context of quantum programming, Svore et al. BIB005 consider the problem of designing languages for a quantum computing system from a larger perspective. Some details will be discussed in Section 5.5. Another ambitious goal in designing QPLs can be described as an attempt to re-formulate quantum theory itself in such a way that the theory embodies high-level structures of theoretical computer science BIB006 BIB007 . Articles along these lines start with ideas of theoretical computer science by extending formal models such that formal reasoning on quantum processes should become possible. The quantum lambda calculus by van Tonder BIB002 BIB003 and qGCL by Sanders and Zuliani , an extension of Dijkstra's Guarded Command Language (GCL), are two examples of this kind. qGCL is an imperative language with a formal (operational) semantics. The language contains mechanisms for stepwise refinement, which make it particularly suitable as a specification language. Presently, research on QPLs seems to focus on concepts of functional programming. One argument in favour of this approach is that functional languages can express the algebraic structure of vector spaces in a natural way BIB004 . Other formalisms of theoretical computer science could as well serve as a starting point for defining new QPLs. In Section 5, some of these approaches will be discussed in more detail.
Quantum Programming Languages: An Introductory Overview <s> First-step towards a QPL: pseudocode <s> From the Publisher: ::: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition,this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. ::: In its new edition,Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity,and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. ::: As in the classic first edition,this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further,the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. ::: Each chapter presents an algorithm,a design technique,an application area,or a related topic. The chapters are not dependent on one another,so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally,the new edition offers a 25% increase over the first edition in the number of problems,giving the book 155 problems and over 900 exercises thatreinforcethe concepts the students are learning. <s> BIB001 </s> Quantum Programming Languages: An Introductory Overview <s> First-step towards a QPL: pseudocode <s> A few conventions for thinking about and writing quantum pseudocode are proposed. The conventions can be used for presenting any quantum algorithm down to the lowest level and are consistent with a quantum random access machine (QRAM) model for quantum computing. In principle a formal version of quantum pseudocode could be used in a future extension of a conventional language. <s> BIB002
In computer science, algorithms are traditionally formulated in one form or another of pseudocode, for example, in a CLRS-like style BIB001 , which may be considered as a first step towards a programming language. Current textbooks on quantum information theory commonly use a form which mixes text in natural language with standard mathematical notation. In an early article, which has had a lot of influence on later work, particularly on the languages QCL and Q language, Knill BIB002 has proposed some form of pseudocode for quantum programming. In principle, it suffices to combine traditional classical control structures with quantum operations. As an illustration, Figure 2 shows Shor's algorithm in a form which will be easily accessible to computer scientists. The effect of this algorithm can be summarized as follows. For a given composite number N, FACTORIZE(N) returns a pair of non-trivial factors of N. The algorithm is probabilistic in two respects. First, in line 6, the value of a is drawn randomly from Z_N. This turns FACTORIZE(N) into a randomized
Quantum Programming Languages: An Introductory Overview <s> QUANTUM PROGRAMMING LANGUAGES 141 <s> In this paper a microscopic quantum mechanical model of computers as represented by Turing machines is constructed. It is shown that for each numberN and Turing machineQ there exists a HamiltonianHNQ and a class of appropriate initial states such that if c is such an initial state, thenψQN(t)=exp(−1H N Qt)ψQN(0) correctly describes at timest3,t6,⋯,t3N model states that correspond to the completion of the first, second, ⋯, Nth computation step ofQ. The model parameters can be adjusted so that for an arbitrary time intervalΔ aroundt3,t6,⋯,t3N, the “machine” part ofψQN(t) is stationary. <s> BIB001 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM PROGRAMMING LANGUAGES 141 <s> It is argued that underlying the Church-Turing hypothesis there is an implicit physical assertion. Here, this assertion is presented explicitly as a physical principle: ‘every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means’. Classical physics and the universal Turing machine, because the former is continuous and the latter discrete, do not obey the principle, at least in the strong form above. A class of model computing machines that is the quantum generalization of the class of Turing machines is described, and it is shown that quantum theory and the ‘universal quantum computer’ are compatible with the principle. Computing machines resembling the universal quantum computer could, in principle, be built and would have many remarkable properties not reproducible by any Turing machine. These do not include the computation of non-recursive functions, but they do include ‘quantum parallelism’, a method by which certain probabilistic tasks can be performed faster by a universal quantum computer than by any classical restriction of it. The intuitive explanation of these properties places an intolerable strain on all interpretations of quantum theory other than Everett’s. Some of the numerous connections between the quantum theory of computation and the rest of physics are explored. Quantum complexity theory allows a physically more reasonable definition of the ‘complexity’ or ‘knowledge’ in a physical system than does classical complexity theory. <s> BIB002 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM PROGRAMMING LANGUAGES 141 <s> A computer is generally considered to be a universal computational device; i.e., it is believed able to simulate any physical computational device with a cost in computation time of at most a polynomial factor: It is not clear whether this is still true when quantum mechanics is taken into consideration. Several researchers, starting with David Deutsch, have developed models for quantum mechanical computers and have investigated their computational properties. This paper gives Las Vegas algorithms for finding discrete logarithms and factoring integers on a quantum computer that take a number of steps which is polynomial in the input size, e.g., the number of digits of the integer to be factored. These two problems are generally considered hard on a classical computer and have been used as the basis of several proposed cryptosystems. We thus give the first examples of quantum cryptanalysis. 
> <s> BIB003 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM PROGRAMMING LANGUAGES 141 <s> A quantum computer promises efficient processing of certain computational tasks that are intractable with classical computer technology. While basic principles of a quantum computer have been demonstrated in the laboratory, scalability of these systems to a large number of qubits, essential for practical applications such as the Shor algorithm, represents a formidable challenge. Most of the current experiments are designed to implement sequences of highly controlled interactions between selected particles (qubits), thereby following models of a quantum computer as a (sequential) network of quantum logic gates. Here we propose a different model of a scalable quantum computer. In our model, the entire resource for the quantum computation is provided initially in form of a specific entangled state (a so-called cluster state) of a large number of qubits. Information is then written onto the cluster, processed, and read out form the cluster by one-particle measurements only. The entangled state of the cluster thus serves as a universal substrate for any quantum computation. Cluster states can be created efficiently in any system with a quantum Ising-type interaction (at very low temperatures) between two-state particles in a lattice configuration. <s> BIB004 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM PROGRAMMING LANGUAGES 141 <s> An inexpensive faucet aerator is provided by three molded parts and a thin metal cup that holds two of the molded parts in assembled relationship. The first molded part is an elongated annulus provided with upstream and downstream recesses separated by an inner ring that helps to break up the liquid flow and serves as an abutment to support a second molded, jet-forming, part in the upstream recess and is arranged to be engaged by the third, molded, part located in the downstream recess and aiding in defining air intake means to the aerator. <s> BIB005 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM PROGRAMMING LANGUAGES 141 <s> What resources are universal for quantum computation? In the standard model, a quantum computer consists of a sequence of unitary gates acting coherently on the qubits making up the computer. This paper shows that a very different model involving only projective measurements, quantum memory, and the ability to prepare the |0>state is also universal for quantum computation. In particular, no coherent unitary dynamics are involved in the computation. <s> BIB006 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM PROGRAMMING LANGUAGES 141 <s> We define quantum cellular automata as infinite quantum lattice systems with discrete time dynamics, such that the time step commutes with lattice translations and has strictly finite propagation speed. In contrast to earlier definitions this allows us to give an explicit characterization of all local rules generating such automata. The same local rules also generate the global time step for automata with periodic boundary conditions. Our main structure theorem asserts that any quantum cellular automaton is structurally reversible, i.e., that it can be obtained by applying two blockwise unitary operations in a generalized Margolus partitioning scheme. This implies that, in contrast to the classical case, the inverse of a nearest neighbor quantum cellular automaton is again a nearest neighbor automaton. 
::: We present several construction methods for quantum cellular automata, based on unitaries commuting with their translates, on the quantization of (arbitrary) reversible classical cellular automata, on quantum circuits, and on Clifford transformations with respect to a description of the single cells by finite Weyl systems. Moreover, we indicate how quantum random walks can be considered as special cases of cellular automata, namely by restricting a quantum lattice gas automaton with local particle number conservation to the single particle sector. <s> BIB007 </s> Quantum Programming Languages: An Introductory Overview <s> QUANTUM PROGRAMMING LANGUAGES 141 <s> We present the SQRAM architecture for quantum computing, which is based on Knill's QRAM model. We detail a suitable instruction set, which implements a universal set of quantum gates, and demonstrate the operation of the SQRAM with Deutsch's quantum algorithm. The compilation of high-level quantum programs for the SQRAM machine is considered; we present templates for quantum assembly code and a method for decomposing matrices for complex quantum operations. The SQRAM simulator and compiler are discussed, along with directions for future work. <s> BIB008
algorithm, even if the order-finding algorithm in line 10 were implemented deterministically. Second, the result returned by the latter is probabilistic as well, if implemented on a quantum computer, because of the measurement action in line 4 of algorithm FIND-ORDER in Figure 3. The embedding of these functions into a classical control structure makes FACTORIZE(N) a kind of Las Vegas algorithm: if there are exactly two prime factors, the correct non-trivial factorization is determined deterministically, apart from the order in which the factors appear, but the run-time is a random variable. Function FIND-ORDER, the core part of the algorithm in Figure 2 and the exclusive task of the quantum computer, determines the order of a with respect to N, i.e. the least integer r such that a^r ≡ 1 (mod N). This is based on the (purely classical) function CONTINUED-FRACTION-EXPANSION(m, p, a, N), which returns the smallest r such that a^r ≡ 1 (mod N) if such an r exists, otherwise −1, using the continued fraction expansion of m/p. Here, with regard to the subject of this article, two points deserve special attention: first, in general, quantum algorithms require some classical pre- and post-processing. Therefore, quantum programming languages should contain a mixture of classical and non-classical language elements. Second, with this notation, the classical language elements have to be interpreted intuitively just as in a conventional procedural language. In order to 'interpret' the quantum operations, however, which appear here in the shape of ADTs, one has to return to the standard formalism of quantum physics. Since these operations act on (quantum) registers, this kind of pseudocode notation may be regarded as a mixture of procedural language elements and some kind of machine language. Of course, operations like FIND-ORDER must further be decomposed into more elementary functions; Figure 3 shows one of the possibilities in pseudocode form. Here, MIX is the Hadamard operation, applied to a quantum register, U_{a,N} is a unitary operation which represents the function x ↦ a^x mod N, and QFT is the quantum Fourier transform, which determines the period of this function. The QFT can be defined using the definition of the classical discrete Fourier transform (DFT). In a given basis, the transformation may be written as |c⟩ = ∑_{j=0}^{2^n−1} x_j |j⟩ ↦ ∑_{k=0}^{2^n−1} y_k |k⟩. Here, n is the number of qubits and the coefficients (y_k) are obtained from the coefficients (x_j) by the usual classical DFT. An implementation of the quantum Fourier transform by means of more elementary operations is shown in Figure 4: the exterior for-loop shows a decomposition into n blocks of unitaries, each of which consists of a sequence of Hadamard and controlled two-qubit operations. Obviously, the complexity of this quantum algorithm is Θ(n^2). This shows the exponential speed-up of the QFT compared with the classical FFT, the complexity of which is Θ(n·2^n). For details, the reader may consult a textbook on quantum information theory, e.g. Ref. [1, p. 217].
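The loop structure of the QFT decomposition just described (Figure 4) translates directly into a gate count. The following sketch is an illustration added here, not code from the article; the gate labels and the helper function are made up. It enumerates the standard QFT gate sequence — one Hadamard per qubit followed by controlled phase rotations on the remaining qubits — and confirms that the number of elementary operations is n(n+1)/2, i.e. Θ(n²).

```python
# Illustrative sketch of the QFT gate sequence described in the text
# (one Hadamard per qubit, then controlled phase rotations on the remaining
# qubits). Gate names are symbolic; no simulator or real API is implied.
def qft_gate_sequence(n):
    gates = []
    for j in range(n):                 # exterior loop: one block per qubit
        gates.append(("H", j))         # Hadamard on qubit j
        for k in range(j + 1, n):      # controlled two-qubit rotations
            # R_m rotates the phase by 2*pi / 2**m, controlled by qubit k
            gates.append((f"R{k - j + 1}", k, j))
    return gates

for n in (3, 8, 16):
    seq = qft_gate_sequence(n)
    print(f"n = {n:2d}: {len(seq)} gates (= n(n+1)/2 = {n * (n + 1) // 2})")
```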
A complete presentation and analysis of Shor's algorithm, in particular the determination of the correctness probabilities, can be found in Refs. BIB005 BIB003 . The pseudocode of Figures 2-4 implicitly assumes that each register will be used in only one mode, either quantum or classical. In his article, Knill goes several steps further. He suggests introducing a unifying framework which provides methods for handling quantum registers, annotations for specifying the extent of entanglement, and methods for initializing, using and measuring quantum registers. In addition, the framework includes meta-operations such as reversing a quantum register, conditioning of quantum registers and converting a classical algorithm to a reversible one. Moreover, the article introduces a notation which allows one to indicate whether a register is possibly in a superposed state. If this is the case, then only restricted operations can be applied to the register, such as preparations, unitary operations and measurements. Otherwise, arbitrary operations are allowed, as is typical for classical processors. The article also provides a set of rules governing how registers are used. For example, an assignment with a quantum register on the right indicates a measurement, and a register appearing on the right of an assignment can experience side effects, i.e. registers are assumed to be passed by reference. Knill illustrates his pseudocode notation with some examples. The controlled two-qubit operation in line 5 in Figure 4 of the present article is denoted in his first variant of the QFT by an underlined if to indicate a quantum conditional. In a second variant of the QFT, a measurement of the amplitudes has been included in the algorithm. This is denoted by an assignment of a register which appears in its quantum form on the right and in its classical form on the left. Another idea in Knill's article is the QRAM model. According to this model, quantum computers are not stand-alone devices but form part of a larger architecture. A conventional classical PC performs the pre- and post-processing and controls the quantum device driver by building the required sequence of unitary operations as a classical data structure, which is then transmitted to the device driver: the quantum system is triggered by the classical PC, so to speak. After the final measurement, the PC can initiate another round with parameters possibly depending on previous measurement results. An essential point of this idea is that, in order to keep coherence times short, the PC should do all the processing that the quantum computer cannot speed up anyway. The article by Knill has been influential in the design of several QPLs, particularly QCL by Ömer and Q language by Bettelli et al. In a recent article, Nagarajan et al. BIB008 describe an elaborated variant of the QRAM model, which they call the sequential quantum random access machine (SQRAM). Some more details will be given in Section 5.6. As an aside, it can be mentioned that there are several other quantum computational models. Quantum Turing machines were investigated at the very beginning of studies of quantum computing by Benioff BIB001 , Deutsch BIB002 and others (see, for example, [1, p. 214] for more references). Usually, these are considered adequate for questions of computability but too general to serve as an underlying model of QPLs. More recently, several variants of the model of measurement-based quantum computation have been proposed BIB004 BIB006 . The relation of this conceptually new computational model to the conventional gate model is the subject of current research. Although there has been considerable work on quantum cellular automata (see Ref.
BIB007 and references therein) and several languages for classical cellular automata have been defined [58-61], no QPL based on this model seems to have been published up to now.
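Returning to the QRAM picture sketched above — a classical host that assembles the gate sequence as ordinary data, a quantum device driver that executes unitaries and measurements, and classical branching on the returned results — the division of labour can be illustrated with a small sketch. The code below is purely illustrative: the device is a stub that returns random measurement outcomes, and all class, function and instruction names are hypothetical rather than taken from Knill's article or any real API.

```python
import random

# Hedged illustration of the QRAM idea: the classical host builds the gate
# sequence as plain data and hands it to a quantum device driver; only the
# measurement results flow back to the classical side.
class StubQuantumDevice:
    """Stand-in for a quantum device driver; returns random measurement bits."""
    def run(self, program):
        results = {}
        for instr in program:
            if instr[0] == "measure":
                _, qubit = instr
                results[qubit] = random.randint(0, 1)
            # Unitary instructions ("apply", gate, qubits...) would be executed
            # coherently by real hardware; the stub simply ignores them.
        return results

def host_round(device, n_qubits):
    # Classical pre-processing: assemble the circuit as a data structure.
    program = [("apply", "H", q) for q in range(n_qubits)]
    program += [("apply", "CNOT", q, (q + 1) % n_qubits) for q in range(n_qubits)]
    program += [("measure", q) for q in range(n_qubits)]
    # The device executes the program; only classical bits come back.
    return device.run(program)

device = StubQuantumDevice()
outcome = host_round(device, n_qubits=4)
# Classical post-processing / branching on the measurement results:
if all(bit == 0 for bit in outcome.values()):
    print("all-zero outcome, start another round with new parameters")
else:
    print("measured bits:", outcome)
```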
Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> A method and apparatus for superimposing printed characters of any such nature as may be transmitted upon a received television image, at the will of the viewer at the receiver. The character information is incrementally transmitted during the vertical blanking interval of the television scanning format. The receiver is especially constructed to have a dynamic shift register, also means to manually select one or none of plural character programs; such as news, stock market, or weather. The characters may be made to crawl horizontally to present an extended message, which crawl may be halted by the viewer. The mandatory display of emergency messages is possible by a control located at the transmitter. <s> BIB001 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> Abstract The λ-calculus is considered a useful mathematical tool in the study of programming languages, since programs can be identified with λ-terms. However, if one goes further and uses βη-conversion to prove equivalence of programs, then a gross simplification is introduced (programs are identified with total functions from values to values ) that may jeopardise the applicability of theoretical results. In this paper we introduce calculi, based on a categorical semantics for computations , that provide a correct basis for proving equivalence of programs for a wide range of notions of computation . <s> BIB002 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> This paper explores the use monads to structure functionalprograms. No prior knowledge of monads or category theory isrequired. Monads increase the ease with which programs may be modified.They can mimic the effect of impure features such as exceptions,state, and continuations; and also provide effects not easilyachieved with such features. The types of a program reflect whicheffects occur. The first section is an extended example of the use of monads. Asimple interpreter is modified to support various extra features:error messages, state, output, and non-deterministic choice. Thesecond section describes the relation between monads and thecontinuation-passing style. The third section sketches how monadsare used in a compiler for Haskell that is written in Haskell. <s> BIB003 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> Abstract Monads have become very popular for structuring functional programs since Wadler introduced their use in 1990. In particular, libraries of combinators are often based on a monadic type. Such libraries share (in part) a common interface, from which numerous benefits flow, such as the possibility to write generic code which works together with any library. But, several interesting and useful libraries are fundamentally incompatible with the monadic interface. In this paper I propose a generalisation of monads, which I call arrows, with significantly wider applicability. The paper shows how many of the techniques of monadic programming generalise to the new setting, and gives examples to show that the greater generality is useful. In particular, three non-monadic libraries for efficient parsing, building graphical user interfaces, and programming active web pages fit naturally into the new framework. 
<s> BIB004 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> From the foundations of quantum theory to quantum technology - G. Alber quantum information theory, an invitation - R. Werner quantum communication - H. Weinfurter and A. Zeilinger quantum algorithms, applicable algebra and quantum physics - T.H. Beth and M. Rotteler mixed-state entanglement and quantum communication - M. Rotteler and R. Horodecki. <s> BIB005 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> We develop a type theory and provide a denotational semantics for a simple fragment of the quantum lambda calculus, a formal language for quantum computation based on linear logic. In our semantics, variables inhabit certain Hilbert bundles, and computations are interpreted as the appropriate inner product preserving maps between Hilbert bundles. These bundles and maps form a symmetric monoidal closed category, as expected for a calculus based on linear logic. <s> BIB006 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine. <s> BIB007 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> The paper develops a model of quantum computing from the perspective of functional programming. The model explains the fundamental ideas of quantum computing at a level of abstraction that is familiar to functional programmers. The model also illustrates some of the inherent difficulties in interpreting quantum mechanics and highlights the differences between quantum computing and traditional (functional or otherwise) computing models. <s> BIB008 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> These ‘lecture notes’ are based on joint work with Samson Abramsky. I will survey and informally discuss the results of [3, 4, 5, 12, 13] in a pedestrian not too technical way. These include: • ‘The logic of entanglement’, that is, the identification and abstract axiomatization of the ‘quantum information-flow’ which enables protocols such as quantum teleportation. 1 To this means we defined strongly compact closed categories which abstractly capture the behavioral properties of quantum entanglement. • ‘Postulates for an abstract quantum formalism’in which classical informationflow (e.g. token exchange) is part of the formalism. 
As an example, we provided a purely formal description of quantum teleportation and proved correctness in <s> BIB009 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> With a view towards models of quantum computation, we define a functional language where all functions are linear operators by construction. A small step operational semantic (and hence an interpreter/simulator) is provided for this language in the form of a term rewrite systems. The linear-algebraic -calculus hereby constructed is linear in a different (yet related) sense to that, say, of the linear -calculus. These various notions of linearity are discussed in the context of quantum programming languages . <s> BIB010 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> This article is a brief and subjective survey of quantum programming language research. 1 Quantum Computation Quantum computing is a relatively young subject. It has its beginnings in 1982, when Paul Benioff and Richard Feynman independently pointed out that a quantum mechanical system can be used to perform computations [11, p.12]. Feynman’s interest in quantum computation was motivated by the fact that it is computationally very expensive to simulate quantum physical systems on classical computers. This is due to the fact that such simulation involves the manipulation is extremely large matrices (whose dimension is exponential in the size of the quantum system being simulated). Feynman conceived of quantum computers as a means of simulating nature much more efficiently. The evidence to this day is that quantum computers can indeed perform certain tasks more efficiently than classical computers. Perhaps the best-known example is Shor’s factoring algorithm, by which a quantum computer can find the prime factors of any integer in probabilistic polynomial time [15]. There is no known classical probabilistic algorithm which can solve this problem in polynomial time. In the ten years since the publication of Shor’s result, there has been an enormous surge of research in quantum algorithms and quantum complexity theory. 2 Quantum Programming Languages Quantum physics involves phenomena, such as superposition and entanglement, whose properties are not always intuitive. These same phenomena give quantum computation its power, and are often at the heart of an interesting quantum algorithm. However, there does not yet seem to be a unifying set of principles by which quantum algorithms are developed; each new algorithm seems to rely on a unique set of “tricks” to achieve its particular goal. One of the goals of programming language design is to identify and promote useful “high-level” concepts — abstractions or paradigms which allow humans 2 to think about a problem in a conceptual way, rather than focusing on the details of its implementation. With respect to quantum programming, it is not yet clear what a useful set of abstractions would be. But the study of quantum programming languages provides a setting in which one can explore possible language features and test their usefulness and expressivity. Moreover, the definition of prototypical programming languages creates a unifying formal framework in which to view and analyze existing quantum algorithm. 2.1 Virtual Hardware Models Advances in programming languages are often driven by advances in compiler design, and vice versa. 
In the case of quantum computation, the situation is complicated by the fact that no practical quantum hardware exists yet, and not much is known about the detailed architecture of any future quantum hardware. To be able to speak of “implementations”, it is therefore necessary to fix some particular, “virtual” hardware model to work with. Here, it is understood that future quantum hardware may differ considerably, but the differences should ideally be transparent to programmers and should be handled automatically by the compiler or operating system. There are several possible virtual hardware models to work with, but fortunately all of them are equivalent, at least in theory. Thus, one may pick the model which fits one’s computational intuitions most closely. Perhaps the most popular virtual hardware model, and one of the easiest to explain, is the quantum circuit model. Here, a quantum circuit is made up from quantum gates in much the same way as a classical logic circuit is made up from logic gates. The difference is that quantum gates are always reversible, and they correspond to unitary transformations over a complex vector space. See e.g. [3] for a succinct introduction to quantum circuits. Of the two basic quantum operations, unitary transformations and measurements, the quantum circuit model emphasizes the former, with measurements always carried out as the very last step in a computation. Another virtual hardware model, and one which is perhaps even better suited for the interpretation of quantum programming languages, is the QRAM model of Knill [9]. Unlike the quantum circuit model, the QRAM models allows unitary transformations and measurements to be freely interleaved. In the QRAMmodel, a quantum device is controlled by a universal classical computer. The quantum device contains a large, but finite number of individually addressable quantum bits, much like a RAM memory chip contains a multitude of classical bits. The classical controller sends a sequence of instructions, which are either of the form “apply unitary transformation U to qubits i and j” or “measure qubit i”. The quantum device carries out these instruction, and responds by making the results of the measurements available. A third virtual hardware model, which is sometimes used in complexity theory, is the quantum Turing machine. Here, measurements are never performed, and the entire operation of the machine, which consists of a tape, head, and finite control, is assumed to be unitary. While this model is theoretically equivalent <s> BIB011 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> The objective of this paper is to develop a functional programming language for quantum computers. We develop a lambda calculus for the classical control model, following the first author's work on quantum flow-charts. We define a call-by-value operational semantics, and we give a type system using affine intuitionistic linear logic. The main results of this paper are the safety properties of the language and the development of a type inference algorithm. <s> BIB012 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> Elaborating on our joint work with Abramsky in quant-ph/0402130 we further unravel the linear structure of Hilbert spaces into several constituents. 
Some prove to be very crucial for particular features of quantum theory while others obstruct the passage to a formalism which is not saturated with physically insignificant global phases. ::: First we show that the bulk of the required linear structure is purely multiplicative, and arises from the strongly compact closed tensor which, besides providing a variety of notions such as scalars, trace, unitarity, self-adjointness and bipartite projectors, also provides Hilbert-Schmidt norm, Hilbert-Schmidt inner-product, and in particular, the preparation-state agreement axiom which enables the passage from a formalism of the vector space kind to a rather projective one, as it was intended in the (in)famous Birkhoff & von Neumann paper. ::: Next we consider additive types which distribute over the tensor, from which measurements can be build, and the correctness proofs of the protocols discussed in quant-ph/0402130 carry over to the resulting weaker setting. A full probabilistic calculus is obtained when the trace is moreover linear and satisfies the \em diagonal axiom, which brings us to a second main result, characterization of the necessary and sufficient additive structure of a both qualitatively and quantitatively effective categorical quantum formalism without redundant global phases. Along the way we show that if in a category a (additive) monoidal tensor distributes over a strongly compact closed tensor, then this category is always enriched in commutative monoids. <s> BIB013 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> We define a strongly normalising proof-net calculus corresponding to the logic of strongly compact closed categories with biproducts. The calculus is a full and faithful representation of the free strongly compact closed category with biproducts on a given category with an involution. This syntax can be used to represent and reason about quantum processes. <s> BIB014 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> We present the SQRAM architecture for quantum computing, which is based on Knill's QRAM model. We detail a suitable instruction set, which implements a universal set of quantum gates, and demonstrate the operation of the SQRAM with Deutsch's quantum algorithm. The compilation of high-level quantum programs for the SQRAM machine is considered; we present templates for quantum assembly code and a method for decomposing matrices for complex quantum operations. The SQRAM simulator and compiler are discussed, along with directions for future work. <s> BIB015 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> We present the quantum programming language cQPL which is an extended version of QPL [Sel04b]. It is capable of quantum communication and it can be used to formulate all possible quantum algorithms. Additionally, it possesses a denotational semantics based on a partial order of superoperators and uses fixed points on a generalised Hilbert space to formalise (in addition to all standard features expected from a quantum programming language) the exchange of classical and quantum data between an arbitrary number of participants. Additionally, we present the implementation of a cQPL compiler which generates code for a quantum simulator. 
<s> BIB016 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> We show that the model of quantum computation based on density matrices and superoperators can be decomposed into a pure classical (functional) part and an effectful part modelling probabilities and measurement. The effectful part can be modelled using a generalisation of monads called arrows. We express the resulting executable model of quantum computing in the Haskell programming language using its special syntax for arrow computations. However, the embedding in Haskell is not perfect: a faithful model of quantum computing requires type capabilities that are not directly expressible in Haskell. <s> BIB017 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> We introduce the language QML, a functional language for quantum computations on finite types. Its design is guided by its categorical semantics: QML programs are interpreted by morphisms in the category FQC of finite quantum computations, which provides a constructive semantics of irreversible quantum computations realisable as quantum gates. QML integrates reversible and irreversible quantum computations in one language, using first order strict linear logic to make weakenings explicit. Strict programs are free from decoherence and hence preserve superpositions and entanglement -which is essential for quantum parallelism. <s> BIB018 </s> Quantum Programming Languages: An Introductory Overview <s> Current research directions <s> Abstract In this paper we give a self-contained introduction to the conceptional and mathematical foundations of quantum information theory. In the first part we introduce the basic notions like entanglement, channels, teleportation, etc. and their mathematical description. The second part is focused on a presentation of the quantitative aspects of the theory. Topics discussed in this context include: entanglement measures, channel capacities, relations between both, additivity and continuity properties and asymptotic rates of quantum operations. Finally, we give an overview on some recent developments and open questions. <s> BIB019
Presently, research on QPLs seems to focus primarily on the functional programming paradigm rather than on imperative or object-oriented languages. Several reasons are given for this approach. First, operations in a Hilbert space are functions in the traditional mathematical sense. Therefore, it is natural to map these to functions in a functional language, i.e. to language constructs which map inputs to outputs without side effects. Second, it is argued that type safety in functional languages is much higher than in imperative languages. This can open the way to realizing systems in which the compiler rather than the run-time system detects violations of the laws of quantum mechanics. Much of the present work on this matter uses the terminology of category theory and linear logic as a technical framework. Good introductions to these fields can be found in Refs. . Another ambitious approach, which is closely related to QPLs but reaches far beyond this field, aims at establishing the new field of 'quantum informatics', a research area different from but related to the more traditional quantum information theory (see Refs. BIB009 BIB013 BIB014 and earlier references therein). During the last eight decades, the mathematical setting of quantum mechanics, originally due to von Neumann and others, has been extended to a rigorous theory, which contains the measurement process as well as a description of 'purely classical' systems in a common formal framework. C*-algebras form the basis of this framework (see, for example, the contribution by Werner in Ref. BIB005 and the article by Keyl BIB019 for detailed state-of-the-art introductions). There still remain at least two problems. First, many physicists feel that, from a physical point of view, neither the existence of two types of time evolution, unitary and measurement, nor the relation between the notions of classical and quantum has yet been satisfactorily 'explained' BIB001 . Second, from a computer scientist's point of view, the Hilbert space formalism describes systems on the level of bits and qubits (0s and 1s in folklore terms), which is far from what is commonly called high-level methods and structures in present-day computer science. Nowadays, the main subjects of classical computer science rely on notions such as modules, ADTs, components, functional languages, process calculi, type systems and various theoretical foundations thereof. Apart from a few applications which directly interface with hardware devices, programming on the bit level is now of marginal importance only. The relation between the traditional mathematical treatment of quantum mechanics, with its operations on the level of qubits, and a classical assembler raises the question of whether there are high-level structures analogous to those of classical computer science which allow some reasoning on this level for quantum systems. Particularly, it is argued by Coecke BIB013 that mappings of the kind f : H → H can have a lot of different meanings, such as operators, (mixed) states etc. In Coecke's article, this has been called 'the lack of types reflecting kinds'. So, the question is whether classical structures can be extended ('quantized') to high-level quantum structures, which are not merely unitary operations acting on qubit states. Moreover, these should be manageable in such a way that useful work can be done, for example, the development of efficient algorithms.
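Coecke's point about 'the lack of types reflecting kinds' can be made concrete with a tiny example (my own illustration, not taken from the cited papers): in an untyped linear-algebra setting, one and the same array can be read either as a state or as an operator on states, and nothing in the formalism — or in the code — tells the two readings apart.

```python
import numpy as np

# The same 2x2 array admits several readings, and nothing in the code
# distinguishes them -- 'the lack of types reflecting kinds'.
M = np.array([[0.5, 0.5],
              [0.5, 0.5]], dtype=complex)

# Reading 1: a state -- here M is the density matrix of the pure state |+>.
is_state = (np.allclose(M, M.conj().T)
            and np.isclose(np.trace(M).real, 1.0)
            and np.all(np.linalg.eigvalsh(M) >= -1e-12))

# Reading 2: an operator acting on states -- here the projector onto |+>.
psi = np.array([1, 0], dtype=complex)   # |0>
projected = M @ psi                      # M used as a map H -> H

print("valid density matrix:", is_state)
print("M|0> =", projected)
```

A type system 'reflecting kinds' would distinguish these two readings statically instead of leaving the interpretation to the programmer.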
Although this work aims primarily at the foundations of quantum physics itself, there are pragmatic goals as well, such as protocol analysis and design, particularly applications to information security [70-72]. A detailed description of all QPLs which have been published so far is far beyond the scope of the present article. Therefore, only a small number of examples will be sketched in the following. We refer the reader to the original articles; a commented literature summary has recently been given by Gay . Computability with functions can be formalized by means of the lambda calculus, which accordingly forms the basis of functional programming. van Tonder BIB006 BIB007 has developed a variant of this calculus for quantum programs which may be regarded as an alternative model of the quantum Turing machine. In its present version, the λq-calculus is based on the vector formalism; classical data and measurements are not treated in the present form of the model. Arrighi and Dowek BIB010 give a formalization of vector spaces and describe an operational semantics for a formal tensor calculus based on term rewrite systems. A brief non-formal account of linear logic can also be found in their article. In linear logic, logical statements are re-interpreted as consumption of resources. In particular, two of the structural rules of classical logic (weakening and contraction) are not available in general. In the context of quantum programming, this is brought into connection with peculiarities related to discarding and cloning quantum states. Some remarks on the different notions of linearity in linear logic and of linearity in vector spaces are also made in the article by Arrighi and Dowek. Most influential has been Selinger's work [10, 18, 74]. He defines two variants of a first-order functional language, a textual form (named QPL) and, alternatively, a QPL in the form of quantum flow charts (named QFC). The language is based on the idea (the 'slogan') of 'classical control and quantum data', which is along the lines of Knill's QRAM model, although the language itself is not based on any special hardware model. Separating control and data in this way means that data can be in a superposed state, whereas the control flow cannot. One of the key points of QPL/QFC is that a superoperator is assigned to each program fragment, mapping input states to output states. Thus, the language is based on the established formalism, mentioned in Section 2, which describes mixed states and operations on states in a general unified setting. Therefore, unitary time evolution and measurements can be dealt with in a common framework, as can situations like, for example, irreversibly discarding a qubit ('measuring a qubit without learning the result' [1, p. 187]). Another innovative feature of the language is its denotational semantics, which is based on complete partial orders of superoperators. QPL/QFC forms the basis for several other articles. One difficulty, mentioned by Selinger BIB011 , is the proper handling of linearity; combining classical and quantum structures in one system requires a combined linear and non-linear type system. In Refs. BIB012 , Valiron and Selinger propose a higher-order QPL based on a linearly typed lambda calculus. The language combines classical data types and measurements as a primitive feature, which is essential for algorithms where unitary operations and measurements are interleaved.
The semantics of the proposed language is operational and the appropriate type system is affine intuitionistic linear logic. The authors also develop a type inference algorithm. Another article which is closely related to Selinger's QPL is the work by Nagarajan et al. BIB015 . The authors extend the QRAM model to a model called SQRAM by an explicit construction of instruction sets for the classical and the quantum component, and they also describe a compiler for a subset of QPL. As an example, they show how Deutsch's algorithm can be expressed in their formalism. The extension of QPL to cQPL by Mauerer BIB016 has already been mentioned. The most distinguishing feature of this language is its ability to describe quantum communication protocols. Therefore, the language, which has a denotational semantics, is suitable for security proofs of communication protocols. A compiler for the language has been developed, which can also be regarded as a QPL compiler. Several experiments with the functional language Haskell as a QPL have been described BIB008 . There is a somewhat vague analogy to the work on Q language insofar as an established standard language is being used as a QPL. However, the analogy ends here: programs written in C++ and Haskell do not have much in common. In Ref. BIB017 , superoperators are introduced as arrows BIB004 , which generalize monads BIB002 BIB003 (an algebraic structure which formalizes the notion of a computation). Vizzotto et al. BIB017 remark that the no-cloning property of quantum systems cannot adequately be represented within this framework, and they state that a better approach would be to continue the work with QML by Altenkirch and Grattage BIB018 . This QPL is a first-order functional language with a denotational semantics. In contrast to Selinger's QPL, the language is based on the idea of 'quantum data and quantum control'. Measurements will be included in a future version of the language. A QML compiler has been implemented in Haskell . Table 1 summarizes some of the features of those QPLs which have been discussed or mentioned in the preceding sections. The reader should be aware that research on quantum
Runtime Adaptive Extensible Embedded Processors — A Survey <s> Introduction <s> System designers can optimize Xtensa for their embedded application by sizing and selecting features and adding new instructions. Xtensa provides an integrated solution that allows easy customization of both hardware and software. This process is simple, fast, and robust. <s> BIB001 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Introduction <s> Lx is a scalable and customizable VLIW processor technology platform designed by Hewlett-Packard and STMicroelectronics that allows variations in instruction issue width, the number and capabilities of structures and the processor instruction set. For Lx we developed the architecture and software from the beginning to support both scalability (variable numbers of identical processing resources) and customizability (special purpose resources). In this paper we consider the following issues. When is customization or scaling beneficial? How can one determine the right degree of customization or scaling for a particular application domain? What architectural compromises were made in the Lx project to contain the complexity inherent in a customizable and scalable processor family? The experiments described in the paper show that specialization for an application domain is effective, yielding large gains in price/performance ratio. We also show how scaling machine resources scales performance, although not uniformly across all applications. Finally we show that customization on an application-by-application basis is today still very dangerous and much remains to be done for it to become a viable solution. <s> BIB002
The ever-increasing demand for high performance at low power in the embedded domain is fueling the trend towards customized embedded processors . A customized processor is designed specifically for an application domain (e.g., network, multimedia etc.), enabling it to offer significantly higher performance compared to its general-purpose counterparts, while consuming much lower energy. This dual improvement in power-performance is achieved by eliminating certain structures (e.g., floating-point unit) that are redundant for the particular application domain, while choosing appropriate dimensions for other structures (e.g., cache, TLB, register file). The elimination of redundant structures cuts down energy/area wastage, and tailor-made dimensioning of required structures improves performance at a reduced power budget. A further step towards customization is instruction-set extensible processors, or extensible processors for short. An extensible processor opens up the opportunity to customize the Instruction-Set Architecture (ISA) through application-specific extension instructions or custom instructions. Each custom instruction encapsulates a frequently occurring complex pattern in the data-flow graph of the application(s). Custom instructions are implemented as Custom Functional Units (CFU) in the data-path of the processor core. As multiple instructions from the base ISA are folded into a single custom instruction, we save fetching/decoding costs and improve code size. More importantly, the CFU can typically achieve significantly lower latency through parallelization and chaining of basic operations (the latency is determined by the critical path in the dataflow graph of the corresponding custom instruction) compared to executing one operation per cycle sequentially in the original processor. On the other hand, as custom instructions are exposed to the programmer, extensible processors offer great flexibility just like any software-programmable general-purpose processor. The large number of commercial extensible processors available in today's market (e.g., Xtensa BIB001 , Lx BIB002 , ARC configurable cores [2], OptimoDE , MIPS CorExtend [18] ) is a testament to their widespread popularity. There are, however, some drawbacks of traditional extensible processors. First, we need to design and fabricate a different customized processor for each application domain. A processor customized for one application domain may fail to provide any tangible performance benefit for a different domain. Soft core processors with extensibility features that are synthesized in FPGAs (e.g., Altera Nios , Xilinx MicroBlaze [21] ) somewhat mitigate this problem, as the customization can be performed post-fabrication. Still, customizable soft cores suffer from lower frequency and higher energy consumption because the entire processor (and not just the CFUs) is implemented in FPGAs. Apart from cross-domain performance problems, extensible processors are also limited by the amount of silicon available for implementation of the CFUs. As embedded systems progress towards highly complex and dynamic applications (e.g., MPEG-4 video encoder/decoder, software-defined radio), the silicon area constraint becomes a primary concern. Moreover, for highly dynamic applications that can switch between different modes (e.g., runtime selection of the encryption standard) with unique custom instruction requirements, a customized processor catering to all scenarios will clearly be a sub-optimal design.
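Before turning to how runtime adaptivity addresses these drawbacks, the latency argument made above — a custom functional unit is bounded by the critical path of the folded dataflow subgraph, whereas the base processor issues the same operations one per cycle — can be made concrete with a small sketch. The subgraph, the one-operation-per-cycle assumption and all names below are hypothetical and not taken from any of the surveyed processors.

```python
# Hedged illustration: estimate the speedup of folding a dataflow subgraph
# into a single custom instruction. Sequential cost = one operation per cycle;
# CFU cost = length of the critical path (operations on the same level run
# in parallel inside the CFU).
def critical_path(dfg, node, memo=None):
    memo = {} if memo is None else memo
    if node not in memo:
        preds = dfg.get(node, [])
        memo[node] = 1 + max((critical_path(dfg, p, memo) for p in preds), default=0)
    return memo[node]

# Example pattern (hypothetical): ((a + b) * (c + d)) - e, written as a DAG
# mapping each operation to the operations producing its inputs.
dfg = {
    "add1": [],            # a + b
    "add2": [],            # c + d
    "mul":  ["add1", "add2"],
    "sub":  ["mul"],       # result - e
}

sequential_cycles = len(dfg)                             # 4 base-ISA instructions
cfu_cycles = max(critical_path(dfg, n) for n in dfg)     # depth of the DAG = 3
print(f"sequential: {sequential_cycles} cycles, custom instruction: {cfu_cycles} cycles, "
      f"speedup ~ {sequential_cycles / cfu_cycles:.2f}x")
```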
Runtime adaptive extensible embedded processors offer a potential solution to all these problems. An adaptive extensible processor can be configured at runtime to change its custom instructions and the corresponding CFUs. Clearly, to achieve runtime adaptivity, the CFUs have to be implemented in some form of reconfigurable logic. The base processor, however, is implemented in ASIC to provide high clock frequency and better energy efficiency. As the CFUs are implemented in reconfigurable logic, these extensible processors offer full flexibility to adapt (post-fabrication) the custom instructions according to the requirements of the application running on the system, and even midway through the execution of the application. Such adaptive extensible processors can be broadly classified into two categories: -Explicit Reconfigurability: This class of processors needs full compiler or programmer support to identify the custom instructions, synthesize them, and finally cluster them into one (or more) configurations that can be switched at runtime. In other words, custom instructions are generated off-line and the application is recompiled to use these custom instructions. -Transparent Reconfigurability: This class of processors does not expose the extensibility feature to the compiler or the programmer. In other words, the extensibility is completely transparent to the user. Instead, the runtime system identifies the custom instructions and synthesizes them while the application is running on the system. These systems are more complex, but may provide better performance as the decisions are taken at runtime. In this article, we will first provide a quick survey of the architecture of explicit runtime adaptive extensible processors, followed by the compiler support required for such processors. Next, we will discuss transparent reconfigurable processors and their runtime systems. Finally, we will conclude this survey by outlining the challenges and opportunities in this domain.
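As a purely illustrative sketch of what 'clustering custom instructions into configurations' under a silicon (here, fabric area) budget might look like, consider the toy selection below. The candidate instructions, their areas and cycle savings are invented for illustration; real tool flows for explicit reconfigurability make this decision off-line with far more detailed cost models, while transparent systems make a similar decision at runtime.

```python
# Toy illustration (not from any of the surveyed systems): choosing which
# candidate custom instructions to load into the reconfigurable fabric for
# the current application phase, subject to an area budget.
def select_configuration(candidates, area_budget):
    # Greedy by estimated benefit (saved cycles) per unit of fabric area.
    ranked = sorted(candidates, key=lambda c: c["saved_cycles"] / c["area"], reverse=True)
    config, used = [], 0
    for c in ranked:
        if used + c["area"] <= area_budget:
            config.append(c["name"])
            used += c["area"]
    return config, used

candidates = [
    {"name": "SAD16",   "area": 40, "saved_cycles": 900},   # hypothetical numbers
    {"name": "DCT8x8",  "area": 70, "saved_cycles": 1200},
    {"name": "AES_RND", "area": 55, "saved_cycles": 500},
    {"name": "FIR64",   "area": 30, "saved_cycles": 450},
]

config, used = select_configuration(candidates, area_budget=100)
print("loaded custom instructions:", config, f"(area used: {used})")
```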
Runtime Adaptive Extensible Embedded Processors — A Survey <s> Architecture <s> This paper explores a novel way to incorporate hardware-programmable resources into a processor microarchitecture to improve the performance of general-purpose applications. Through a coupling of compile-time analysis routines and hardware synthesis tools, we automatically configure a given set of the hardware-programmable functional units (PFUs) and thus augment the base instruction set architecture so that it better meets the instruction set needs of each application. We refer to this new class of general-purpose computers as PRogrammable Instruction Set Computers (PRISC). Although similar in concept, the PRISC approach differs from dynamically programmable microcode because in PRISC we define entirely-new primitive datapath operations. In this paper, we concentrate on the microarchitectural design of the simplest form of PRISC—a RISC microprocessor with a single PFU that only evaluates combinational functions. We briefly discuss the operating system and the programming language compilation techniques that are needed to successfully build PRISC and, we present performance results from a proof-of-concept study. With the inclusion of a single 32-bit-wide PFU whose hardware cost is less than that of a 1 kilobyte SRAM, our study shows a 22% improvement in processor performance on the SPECint92 benchmarks. <s> BIB001 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Architecture <s> A dynamic instruction set computer (DISC) has been developed that supports demand-driven modification of its instruction set. Implemented with partially reconfigurable FPGAs, DISC treats instructions as removable modules paged in and out through partial reconfiguration as demanded by the executing program. Instructions occupy FPGA resources only when needed and FPGA resources can be reused to implement an arbitrary number of performance-enhancing application-specific instructions. DISC further enhances the functional density of FPGAs by physically relocating instruction modules to available FPGA space. <s> BIB002 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Architecture <s> As custom computing machines evolve, it is clear that a major bottleneck is the slow interconnection architecture between the logic and memory. This paper describes the architecture of a custom computing machine that overcomes the interconnection bottleneck by closely integrating a fixed-logic processor, a reconfigurable logic array, and memory into a single chip, called OneChip-98. The OneChip-98 system has a seamless programming model that enables the programmer to easily specify instructions without additional complex instruction decoding hardware. As well, there is a simple scheme for mapping instructions to the corresponding programming bits. To allow the processor and the reconfigurable array to execute concurrently, the programming model utilizes a novel memory-consistency scheme implemented in the hardware. To evaluate the feasibility of the OneChip-98 architecture, a 32-bit MIPS-like processor and several performance enhancement applications were mapped to the Transmogrifier-2 field programmable system. For two typical applications, the 2-dimensional discrete cosine transform and the 64-tap FIR filter, we were capable of achieving a performance speedup of over 30 times that of a stand-alone state-of-the-art processor. 
<s> BIB003 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Architecture <s> Reconfigurable hardware has the potential for significant performance improvements by providing support for application-specific operations. We report our experience with Chimaera, a prototype system that integrates a small and fast reconfigurable functional unit (RFU) into the pipeline of an aggressive, dynamically-scheduled superscalar processor. Chimaera is capable of performing 9-input/1-output operations on integer data. We discuss the Chimaera C compiler that automatically maps computations for execution in the RFU. Chimaera is capable of: (1) collapsing a set of instructions into RFU operations, (2) converting control-flow into RFU operations, and (3) supporting a more powerful fine-grain data-parallel model than that supported by current multimedia extension instruction sets (for integer operations). Using a set of multimedia and communication applications we show that even with simple optimizations, the Chimaera C compiler is able to map 22% of all instructions to the RFU on the average. A variety of computations are mapped into RFU operations ranging from as simple as add/sub-shift pairs to operations of more than 10 instructions including several branches. Timing experiments demonstrate that for a 4-way out-of-order superscalar processor Chimaera results in average performance improvements of 21%, assuming a very aggressive core processor design (most pessimistic RFU latency model) and communication overheads from and to the RFU. <s> BIB004 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Architecture <s> System designers can optimize Xtensa for their embedded application by sizing and selecting features and adding new instructions. Xtensa provides an integrated solution that allows easy customization of both hardware and software. This process is simple, fast, and robust. <s> BIB005 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Architecture <s> This paper describes a new architecture for embedded reconfigurable computing, based on a very-long instruction word (VLIW) processor enhanced with an additional run-time configurable datapath. The reconfigurable unit is tightly coupled with the processor, featuring an application-specific instruction-set extension. Mapping computation intensive algorithmic portions on the reconfigurable unit allows a more efficient elaboration, thus leading to an improvement in both timing performance and power consumption. A test chip has been implemented in a standard 0.18-/spl mu/m CMOS technology. The test of a signal processing algorithmic benchmark showed speedups ranging from 4.3/spl times/ to 13.5/spl times/ and energy consumption reduced up to 92%. <s> BIB006 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Architecture <s> In this paper, we present a polymorphic processor paradigm incorporating both general-purpose and custom computing processing. The proposal incorporates an arbitrary number of programmable units, exposes the hardware to the programmers/designers, and allows them to modify and extend the processor functionality at will. To achieve the previously stated attributes, we present a new programming paradigm, a new instruction set architecture, a microcode-based microarchitecture, and a compiler methodology. 
The programming paradigm, in contrast with the conventional programming paradigms, allows general-purpose conventional code and hardware descriptions to coexist in a program: In our proposal, for a given instruction set architecture, a onetime instruction set extension of eight instructions, is sufficient to implement the reconfigurable functionality of the processor. We propose a microarchitecture based on reconfigurable hardware emulation to allow high-speed reconfiguration and execution. To prove the viability of the proposal, we experimented with the MPEG-2 encoder and decoder and a Xilinx Virtex II Pro FPGA. We have implemented three operations, SAD, DCT, and IDCT. The overall attainable application speedup for the MPEG-2 encoder and decoder is between 2.64-3.18 and between 1.56-1.94, respectively, representing between 93 percent and 98 percent of the theoretically obtainable speedups. <s> BIB007 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Architecture <s> A software-configurable processor combines a traditional RISC processor with a field-programmable instruction extension unit that lets the system designer tailor the processor to a particular application. To add application-specific instructions to the processor, the programmer adds a pragma before a C or C++ function declaration, and the compiler then turns the function into a single instruction <s> BIB008
Temporal Reconfiguration. We start with architectures that enable temporal reconfiguration, but only one custom instruction can exist at any point of time. That is, there is no spatial sharing of the reconfigurable logic among custom instructions. PRISC (PRogrammable Instruction Set Computer) BIB001 is one of the very first architectures to include temporal reconfigurability of the custom functional units. Temporal reconfiguration virtually enlarges the limited reconfigurable hardware, which is tightly attached to the datapath of the core processor. PRISC supports a set of configurations, each of which contains a computation kernel or a custom instruction. At any point of time, there is only one active configuration for the reconfigurable hardware. However, each of the configurations can become active at some point of time through time-multiplexing. Therefore, temporal reconfiguration can extend the computational ability of the reconfigurable hardware at the cost of reconfiguration overhead. Figure 1 shows the Programmable Functional Unit (PFU) in parallel with the other traditional functional units in the datapath of the PRISC processor. PFU data communication is similar to the other functional units. However, the PFU can support only two input operands and one output operand. With this limitation on the number of input and output operands, PRISC cannot implement large custom instructions that could potentially provide more performance benefit through instruction-level parallelism as well as higher latency reduction. Moreover, as each configuration can include only one instruction, PRISC effectively restricts the number of custom instructions per loop body to one BIB001; otherwise, the temporal reconfiguration cost within the loop body will typically outweigh any benefit of custom instructions. OneChip BIB003 reduces the reconfiguration overhead by allowing multiple configurations to be stored in the PFU, but only one configuration is active at any point of time. Moreover, OneChip comprises a superscalar pipeline with a PFU to achieve higher performance for streaming applications. However, OneChip lacks details of how programmers specify or design the hardware that is mapped onto the reconfigurable logic. Spatial and Temporal Reconfiguration. Both PRISC and OneChip allow only one custom instruction per configuration, which can result in high reconfiguration cost, especially if two custom instructions in the same code segment are executed frequently, for example, inside a loop body. Our next set of architectures enables spatial reconfiguration, that is, the reconfigurable hardware can be shared among multiple custom instructions. The combination of spatial and temporal reconfiguration is a powerful feature that partitions the custom instructions into multiple configurations, each of which contains one or more custom instructions. This clustering of multiple custom instructions into a single configuration can significantly reduce the reconfiguration overhead. Chimaera BIB004, which is inspired by PRISC, is one of the original works considering temporal plus spatial reconfiguration of the custom functional units. Chimaera tightly couples a Reconfigurable Functional Unit (RFU) with a superscalar pipeline. The main innovation of the Chimaera RFU is that it uses nine input registers to produce the result in one destination register. Simple compiler support is provided to automatically map groups of normal instructions into custom instructions.
However, the Chimaera compiler lacks support for spatial and temporal partitioning of the custom instructions that would make runtime reconfiguration more efficient. The Stretch S6000 commercial processor BIB008 follows this research trend. Figure 2 shows the Stretch S6000 engine, which incorporates the Tensilica Xtensa LX dual-issue VLIW processor BIB005 and the Stretch Instruction Set Extension Fabric (ISEF). The ISEF is a software-configurable datapath based on programmable logic. It consists of a plane of Arithmetic/Logic Units (AU) and a plane of Multiplier Units (MU) embedded and interlinked in a programmable, hierarchical routing fabric. This configurable fabric acts as a functional unit to the processor. It is built into the processor's datapath and resides alongside the other traditional functional units. The programmer-defined application-specific instructions (Extension Instructions) are implemented in this fabric. When an extension instruction is issued, the processor checks that the corresponding configuration (containing the extension instruction) is loaded into the ISEF. If the required configuration is not present in the ISEF, it is automatically loaded prior to the execution of the user-defined instruction. The ISEF provides high data bandwidth to the core processor through 128-bit wide registers. In addition, a 64KB embedded RAM is included inside the ISEF to store temporary results of computation. With all these features, a single custom instruction can potentially implement a complete inner loop of the application. The Stretch compiler fully unrolls any loop with constant iteration counts.
Fig. 2. Stretch S6000 datapath BIB008
Partial Reconfiguration. With partial reconfiguration, custom instructions that are no longer required can be removed to make space for the new instructions. Moreover, as only a part of the fabric is reconfigured, it saves reconfiguration cost. DISC (Dynamic Instruction Set Computer) BIB002 is one of the earliest attempts for an extensible processor to provide a partial reconfiguration feature. DISC implements each instruction of the instruction set as an independent circuit module. It can page-in and page-out individual instruction modules onto the reconfigurable fabric in a demand-driven manner. DISC supports relocatable circuit modules such that an existing instruction module can be moved inside the fabric to generate enough contiguous space for the incoming instruction module. The drawback of the DISC system is that both standard and custom instructions are implemented in reconfigurable logic, causing significant performance overhead. On the other hand, the host processor is under-utilized as it only performs resource allocation and reconfiguration. Extended Instruction Set RISC (XiRisc) BIB006 follows this line of development by coupling a VLIW datapath with a pipelined run-time reconfigurable datapath. XiRisc has a five-stage pipeline with two symmetrical execution flows called Data Channels. The reconfigurable datapath supports up to four source operands and two destination operands for each custom instruction. Moreover, the reconfigurable hardware can hold internal states for several computations so as to reduce the register pressure. However, configuration caching is missing in XiRisc, leading to high reconfiguration overhead. Moreover, there is a lack of compiler support for designers to automatically generate custom instructions. The Molen polymorphic processor BIB007 incorporates an arbitrary number of reconfigurable functional units. Molen resolves the issue of opcode space explosion for custom functions as well as the data bandwidth limitation of the reconfigurable hardware.
Moreover, the Molen architecture allows two or more independent functions to be executed in parallel in the reconfigurable logic. To achieve these features, Molen requires a new programming paradigm that enables general-purpose instructions and hardware descriptions of custom instructions to coexist in a program. A one-time instruction set extension of eight instructions is added to support the functionality of the reconfigurable hardware. The Molen compiler automatically generates optimized binary code for C applications with pragma annotations for custom instructions. The compiler can also generate appropriate custom instructions for each implementation of the reconfigurable logic. The reconfiguration cost is hidden by scheduling the instructions appropriately such that the configuration corresponding to a custom instruction can be prefetched before that custom instruction is scheduled to execute.
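The demand-driven check-and-load behaviour described above — an extension instruction triggers loading of its configuration only when it is not already resident (as in the S6000 ISEF), or individual instruction modules are paged in and out on demand (as in DISC) — can be approximated by a small configuration-cache model. The sketch below is a conceptual illustration only; the number of slots, the LRU replacement policy, the cycle cost and the kernel trace are assumptions, not parameters of any of the processors above.

```python
from collections import OrderedDict

class ConfigCache:
    """Demand-driven configuration store: keeps the most recently used
    configurations resident and pages the oldest one out when full."""
    def __init__(self, slots, reconfig_cycles):
        self.slots = slots
        self.reconfig_cycles = reconfig_cycles
        self.resident = OrderedDict()       # config id -> None (LRU order)
        self.stall_cycles = 0

    def execute(self, config_id):
        if config_id in self.resident:        # hit: configuration already loaded
            self.resident.move_to_end(config_id)
            return
        if len(self.resident) == self.slots:  # page out the least recently used
            self.resident.popitem(last=False)
        self.resident[config_id] = None       # page in on demand
        self.stall_cycles += self.reconfig_cycles

cache = ConfigCache(slots=2, reconfig_cycles=50_000)
for cfg in ["sad", "dct", "sad", "idct", "sad"]:  # hypothetical kernel trace
    cache.execute(cfg)
print(cache.stall_cycles)  # 150000 cycles lost to reconfiguration on this trace
```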
Runtime Adaptive Extensible Embedded Processors — A Survey <s> Compiler Support <s> We present an efficient framework for dynamic reconfiguration of application-specific instruction-set customization. A key component of this framework is an iterative algorithm for temporal and spatial partitioning of the loop kernels. Our algorithm maximizes performance gain of an application while taking into consideration the dynamic reconfiguration cost. It selects the appropriate custom instruction-sets for the loops and maps them into appropriate configurations. We model the temporal partitioning problem as a k-way graph partitioning problem. A dynamic programming based solution is used for the spatial partitioning. Comprehensive experimental results indicate that our iterative algorithm is highly scalable while producing optimal or near-optimal (99% of the optimal) performance gain. <s> BIB001 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Compiler Support <s> This paper explores runtime reconfiguration of custom instructions in the context of multi-tasking real-time embedded systems. We propose a pseudo-polynomial time algorithm that minimizes processor utilization through customization and runtime reconfiguration, while satisfying all the timing constraints. Our experimental infrastructure consists of Stretch customizable processor supporting runtime reconfiguration as the hardware platform and realistic embedded benchmarks as applications. We observe that runtime reconfiguration of custom instructions can help to reduce the processor utilization by up to 64%. The experimental results also demonstrate that our algorithm is highly scalable and achieves optimal or near optimal (3% difference) processor utilization. <s> BIB002
Most of the runtime adaptive extensible processors lack appropriate compiler support to automate the design flow. However, given the tight time-to-market constraints of embedded systems, compiler support is instrumental in gaining greater acceptance of these architectures. Currently, the burden is entirely on the programmer to select appropriate custom instructions and cluster them into one or more configurations. Choosing an appropriate set of custom instructions for an application is itself a difficult problem. Significant research effort has been invested in developing automated selection techniques for custom instructions. Runtime reconfiguration has the additional complication of both temporal and spatial partitioning of the set of custom instructions in the reconfigurable fabric. We have recently developed an efficient framework BIB001 that starts with an application specified in ANSI-C, automatically selects appropriate custom instructions, and clusters them into one or more configurations (see Figure 3). We first extract a set of compute-intensive candidate loop kernels from the application through profiling. For each candidate loop, one or more Custom Instruction Set (CIS) versions are generated, differing in performance gain and area tradeoffs.
Fig. 4. A set of periodic task graphs and the corresponding schedule BIB002
The candidate CIS versions are then fed into an iterative temporal and spatial partitioning algorithm. We model the temporal partitioning of the custom instructions into different configurations as a k-way graph partitioning problem. A dynamic programming based pseudo-polynomial time algorithm determines the spatial partitioning of the custom instructions within a configuration. The selected CIS versions to be implemented in hardware pass through a datapath synthesis tool. It generates the bitstream corresponding to each configuration (based on the outcome of the temporal partitioning). These bitstreams are used to configure the fabric at runtime. The remaining loops are implemented in software on the core processor. Finally, the source code is modified to exploit the new custom instructions. We also extend our work to include runtime reconfiguration of custom instructions for multiple tasks along with timing constraints BIB002. An application is modeled as a set of periodic task graphs, each associated with a period and a deadline. Multiple CIS versions are generated for each constituent task of a task graph. Each task has many instances in the static non-preemptive schedule over the hyper-period (the least common multiple of the task graph periods), as shown in Figure 4. The objective is to minimize processor utilization by exploiting runtime reconfiguration of the custom instructions while satisfying deadline constraints. To achieve this goal, temporal partitioning divides the schedule into a number of configurations, where an area constraint is imposed on each configuration. For example, Figure 4 illustrates an initial fragment of the schedule and its partitioning into three configurations. Note that each configuration contains a disjoint subsequence of task instances from the original schedule. Temporal partitioning allows a larger virtual area at the cost of reconfiguration overhead. The area within a configuration is spatially partitioned among the task instances assigned to it by choosing an appropriate CIS version for each task instance. A dynamic programming based algorithm is enhanced with various constraints to efficiently solve the problem.
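The spatial-partitioning step — choosing, within a single configuration, which hardware CIS version (if any) of each candidate kernel to implement under an area budget — has the structure of a multiple-choice knapsack, which is why a pseudo-polynomial dynamic program suffices. The sketch below captures only that core recurrence; the version tables, area units and gain figures are invented for illustration and are not taken from BIB001.

```python
def select_cis_versions(kernels, area_budget):
    """kernels: list of lists of (gain_cycles, area) versions per kernel;
    an implicit (0, 0) software version is always available.
    Returns the maximum total gain achievable within the area budget."""
    best = [0] * (area_budget + 1)          # best[a] = max gain using area <= a
    for versions in kernels:
        new_best = list(best)               # software fallback keeps old values
        for gain, area in versions:         # pick at most one version per kernel
            for a in range(area, area_budget + 1):
                new_best[a] = max(new_best[a], best[a - area] + gain)
        best = new_best
    return best[area_budget]

# Hypothetical CIS versions: (cycles saved, area in slices) per loop kernel.
kernels = [
    [(400, 30), (650, 55)],   # kernel 1: small and large hardware versions
    [(300, 25)],              # kernel 2: single hardware version
    [(500, 45), (800, 90)],   # kernel 3
]
print(select_cis_versions(kernels, area_budget=100))  # -> 1200 with these numbers
```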
Runtime Adaptive Extensible Embedded Processors — A Survey <s> Transparent Extensible Processors <s> Application-specific instruction set extensions are an effective way of improving the performance of processors. Critical computation subgraphs can be accelerated by collapsing them into new instructions that are executed on specialized function units. Collapsing the subgraphs simultaneously reduces the length of computation as well as the number of intermediate results stored in the register file. The main problem with this approach is that a new processor must be generated for each application domain. While new instructions can be designed automatically, there is a substantial amount of engineering cost incurred to verify and to implement the final custom processor. In this work, we propose a strategy to transparent customization of the core computation capabilities of the processor without changing its instruction set. A congurable array of function units is added to the baseline processor that enables the acceleration of a wide range of data flow subgraphs. To exploit the array, the microarchitecture performs subgraph identification at run-time, replacing them with new microcode instructions to configure and utilize the array. We compare the effectiveness of replacing subgraphs in the fill unit of a trace cache versus using a translation table during decode, and evaluate the tradeoffs between static and dynamic identification of subgraphs for instruction set customization. <s> BIB001 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Transparent Extensible Processors <s> Instruction set customization is an effective way to improve processor performance. Critical portions of applicationdata-flow graphs are collapsed for accelerated execution on specialized hardware. Collapsing dataflow subgraphs will compress the latency along critical paths and reduces the number of intermediate results stored in the register file. While custom instructions can be effective, the time and cost of designing a new processor for each application is immense. To overcome this roadblock, this paper proposes a flexible architectural framework to transparently integrate custom instructions into a general-purpose processor. Hardware accelerators are added to the processor to execute the collapsed subgraphs. A simple microarchitectural interface is provided to support a plug-and-play model for integrating a wide range of accelerators into a pre-designed and verified processor core. The accelerators are exploited using an approach of static identification and dynamic realization. The compiler is responsible for identifying profitable subgraphs, while the hardware handles discovery, mapping, and execution of compatible subgraphs. This paper presents the design of a plug-and-play transparent accelerator system and evaluates the cost/performance implications of the design. <s> BIB002 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Transparent Extensible Processors <s> We describe a new processing architecture, known as a warp processor, that utilizes a field-programmable gate array (FPGA) to improve the speed and energy consumption of a software binary executing on a microprocessor. Unlike previous approaches that also improve software using an FPGA but do so using a special compiler, a warp processor achieves these improvements completely transparently and operates from a standard binary. 
A warp processor dynamically detects the binary's critical regions, reimplements those regions as a custom hardware circuit in the FPGA, and replaces the software region by a call to the new hardware implementation of that region. While not all benchmarks can be improved using warp processing, many can, and the improvements are dramatically better than those achievable by more traditional architecture improvements. The hardest part of warp processing is that of dynamically reimplementing code regions on an FPGA, requiring partitioning, decompilation, synthesis, placement, and routing tools, all having to execute with minimal computation time and data memory so as to coexist on chip with the main processor. We describe the results of developing our warp processor. We developed a custom FPGA fabric specifically designed to enable lean place and route tools, and we developed extremely fast and efficient versions of partitioning, decompilation, synthesis, technology mapping, placement, and routing. Warp processors achieve overall application speedups of 6.3X with energy savings of 66p across a set of embedded benchmark applications. We further show that our tools utilize acceptably small amounts of computation and memory which are far less than traditional tools. Our work illustrates the feasibility and potential of warp processing, and we can foresee the possibility of warp processing becoming a feature in a variety of computing domains, including desktop, server, and embedded applications. <s> BIB003 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Transparent Extensible Processors <s> Adaptation in embedded processing is key in order to address efficiency. The concept of extensible embedded processors works well if a few a-priori known hot spots exist. However, they are far less efficient if many and possible at-design-time-unknown hot spots need to be dealt with. Our RISPP approach advances the extensible processor concept by providing flexibility through runtime adaptation by what we call "instruction rotation". It allows sharing resources in a highly flexible scheme of compatible components (called atoms and molecules). As a result, we achieve high speed-ups at moderate additional hardware. Furthermore, we can dynamically tradeoff between area and speed-up through runtime adaptation. We present the main components of our platform and discuss by means of an H.264 video codec. <s> BIB004 </s> Runtime Adaptive Extensible Embedded Processors — A Survey <s> Transparent Extensible Processors <s> We are presenting a new concept of an application-specific processor that is capable of transmuting its instruction set according to non-predictive application behavior during run-time. In those scenarios, current (extensible) embedded processors are less efficient since they are not run-time adaptive. We have identified the instruction set selection to be a critical step to perform at run time and hence we focus this paper on that crucial part. Our paradigm conducts as many steps as possible at compile/design time and as little as necessary at run time with the constraint to provide a sufficient flexibility to react to non-predictive application behavior efficiently We provide an in-depth analysis of our scheme and achieve a speed-up of up to 7.19times (average: 3.63times) compared to state-of-the-art adaptive approaches (like [19]). As an application, we have employed a whole H.264 video encoder though our scheme is by principle applicable to many other embedded applications. 
Our results are evaluated by an implementation of the instruction set selection for our transmutable processor on an FPGA platform. <s> BIB005
We now proceed to describe extensible processors that are reconfigured transparently by the runtime system. Configurable Compute Accelerators (CCA): Transparent instruction-set customization supports a plug-and-play model for integrating a wide range of accelerators into a pre-designed and verified processor core. Moreover, instruction-set customization occurs at runtime. An architectural framework for transparent instruction-set customization has been proposed in BIB002. The framework comprises static identification of subgraphs for execution on the CCA BIB001 and runtime selection of the custom instructions to be synthesized onto the CCA, as shown in Figure 5. First, the program is analyzed to identify the most frequent computation subgraphs (custom instructions) to be mapped onto the CCA. Figure 5(a) shows that two subgraphs have been selected. They are treated as normal functions and will be replaced by function calls. At runtime, the first time a selected subgraph is encountered, it is executed in the core pipeline while a hardware engine determines the CCA configuration concurrently. From the second execution onwards, the subgraph is implemented in the CCA as shown in Figure 5(b). Static subgraph extraction and replacement are achieved by adding a few steps to the conventional code generation process, which comprises prepass scheduling, register allocation and postpass scheduling of spill code, as shown in Figure 6. These steps are shaded in gray in the figure. First, given a dataflow graph, subgraph identification selects a set of potential subgraphs, which will later be implemented on the CCA. Subgraph identification is a well-studied problem; interested readers can refer to the literature for a detailed exposition of the solutions. Note that subgraph identification is performed before register allocation to avoid false dependencies within the dataflow graph. After subgraph identification, each selected subgraph is collapsed into a single instruction. However, when collapsing subgraphs, code motion ensures correctness if a subgraph crosses branch boundaries. Before register allocation, each collapsed instruction is expanded so that the register allocator can assign registers to internal values. The advantage of this approach is that even a processor without a CCA can execute the subgraphs (because they are treated as normal functions). More importantly, subgraph expansion ensures that register allocation remains relatively unchanged. After register allocation, each subgraph is compacted to an atomic node and passed on as input to postpass scheduling. When postpass scheduling completes, each subgraph is expanded once again and a function is created for each subgraph along with a function call. WARP: At the other end of the spectrum, we have WARP BIB003, which has been developed with completely transparent instruction-set customization in mind. The WARP processor consists of a main processor with instruction and data caches, an on-chip profiler, a WARP-oriented FPGA and an on-chip computer-aided design (CAD) module. The execution of an application starts only on the main processor. During the execution, the profiler determines the critical kernels of the application. Then, the CAD module invokes the Riverside On-Chip CAD (ROCCAD) tool chain. The ROCCAD tool chain starts with decompilation of the application binary code of software loops into a high-level representation that is more suitable for synthesis. Next, the partitioning algorithm determines the most suitable loops to be implemented in the FPGA.
For the selected kernels, ROCCAD uses behavioral and Register Transfer Level (RTL) synthesis to generate appropriate circuit descriptions. Then, ROCCAD configures the FPGA by using Just-In-Time (JIT) FPGA compilation tools. The JIT compiler performs logic synthesis to optimize the hardware circuit, followed by technology mapping to map the hardware circuit onto the reconfigurable logic fabric. Placement and routing are then performed to complete the JIT compilation. Finally, ROCCAD updates the application binary code to utilize the custom accelerators inside the FPGA. RISPP (Rotating Instruction Set Processing Platform) BIB004 is a recent architecture that offers a unique approach towards runtime customization. RISPP introduces the notions of atoms and molecules for custom instructions. An atom is the basic datapath, while a combination of atoms creates a custom instruction molecule. Atoms can be reused across different custom instruction molecules. Compared to contemporary reconfigurable architectures, RISPP reduces the overhead of partial reconfiguration substantially through an innovative gradual transition of the custom instruction implementations from software into hardware. At compile time, only the potential custom instructions (molecules) are identified, but these molecules are not bound to any datapath in hardware. Instead, a number of possible implementation choices are available, including a purely software implementation. At runtime, the implementation of a molecule can gradually "upgrade" to hardware as and when the atoms it needs become available. If no atom is available for a custom instruction, it is executed in the core pipeline using the software implementation. RISPP requires a fast design space exploration technique at runtime to combine appropriate elementary datapaths and evaluate tradeoffs between performance and hardware area of the custom instructions BIB005. A greedy heuristic is proposed to select the appropriate implementation for each custom instruction.
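To make the notion of gradually "upgrading" a custom instruction concrete, the toy routine below picks, for one custom instruction, the fastest molecule whose required atoms are currently loaded, and falls back to the software implementation otherwise. It is a deliberately simplified, hypothetical illustration of the idea behind RISPP's selection step, not the heuristic of BIB005; the molecule tables and speedup numbers are invented.

```python
from collections import Counter

def pick_molecule(molecules, loaded_atoms):
    """molecules: list of (speedup, required atoms) implementation choices;
    returns the highest-speedup molecule whose atoms are all loaded,
    or the software fallback (speedup 1.0) if none fits yet."""
    available = Counter(loaded_atoms)
    feasible = [
        (speedup, needs) for speedup, needs in molecules
        if all(available[a] >= n for a, n in Counter(needs).items())
    ]
    return max(feasible, key=lambda m: m[0], default=(1.0, []))

# Hypothetical molecules for one custom instruction of a video encoder:
sad_molecules = [
    (2.5, ["abs_diff"]),                            # partial hardware support
    (6.0, ["abs_diff", "abs_diff", "adder_tree"]),  # full hardware support
]

print(pick_molecule(sad_molecules, loaded_atoms=[]))               # software fallback
print(pick_molecule(sad_molecules, loaded_atoms=["abs_diff"]))     # partial upgrade
print(pick_molecule(sad_molecules,
                    loaded_atoms=["abs_diff", "abs_diff", "adder_tree"]))  # full upgrade
```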
SDN in the home: A survey of home network solutions using Software Defined Networking <s> Background <s> Before building the network or its components, first understand the home and the behavior of its human inhabitants. <s> BIB001 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Background <s> Software Defined Networking (SDN) is an exciting technology that enables innovation in how we design and manage networks. Although this technology seems to have appeared suddenly, SDN is part of a long history of efforts to make computer networks more programmable. In this paper, we trace the intellectual history of programmable networks, including active networks, early efforts to separate the control and data plane, and more recent work on OpenFlow and network operating systems. We highlight key concepts, as well as the technology pushes and application pulls that spurred each innovation. Along the way, we debunk common myths and misconceptions about the technologies and clarify the relationship between SDN and related technologies such as network virtualization. <s> BIB002 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Background <s> The idea of programmable networks has recently re-gained considerable momentum due to the emergence of the Software-Defined Networking (SDN) paradigm. SDN, often referred to as a "radical new idea in networking", promises to dramatically simplify network management and enable innovation through network programmability. This paper surveys the state-of-the-art in programmable networks with an emphasis on SDN. We provide a historic perspective of programmable networks from early ideas to recent developments. Then we present the SDN architecture and the OpenFlow standard in particular, discuss current alternatives for implementation and testing of SDN-based protocols and services, examine current and future SDN applications, and explore promising research directions based on the SDN paradigm. <s> BIB003
This section provides the necessary background to understand the general problem of home networking and the nature of SDN that makes this new paradigm an attractive solution to that problem. This is a brief treatment; the interested reader is referred to BIB001, BIB002, Goransson, Black, and Culver (2016), and BIB003.
SDN in the home: A survey of home network solutions using Software Defined Networking <s> How can SDN help? <s> Networks and networked applications depend on several pieces of configuration information to operate correctly. Such information resides in routers, firewalls, and end hosts, among other places. Incorrect information, or misconfiguration, could interfere with the running of networked applications. This problem is particularly acute in consumer settings such as home networks, where there is a huge diversity of network elements and applications coupled with the absence of network administrators. ::: ::: To address this problem, we present NetPrints, a system that leverages shared knowledge in a population of users to diagnose and resolve misconfigurations. Basically, if a user has a working network configuration for an application or has determined how to rectify a problem, we would like this knowledge to be made available automatically to another user who is experiencing the same problem. NetPrints accomplishes this task by applying decision tree based learning on working and nonworking configuration snapshots and by using network traffic based problem signatures to index into configuration changes made by users to fix problems. We describe the design and implementation of NetPrints, and demonstrate its effectiveness in diagnosing a variety of home networking problems reported by users. <s> BIB001 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> How can SDN help? <s> We argue that heterogeneity is hindering technological innovation in the home---homes differ in terms of their devices and how those devices are connected and used. To abstract these differences, we propose to develop a home-wide operating system. A HomeOS can simplify application development and let users easily add functionality by installing new devices or applications. The development of such an OS is an inherently inter-disciplinary exercise. Not only must the abstractions meet the usual goals of being efficient and easy to program, but the underlying primitives must also match how users want to manage and secure their home. We describe the preliminary design of HomeOS and our experience with developing applications for it. <s> BIB002
As mentioned earlier, SDN separates the control plane from the data plane, providing the required abstraction of low-level layers into a logical view that can be understood and programmed by network developers. Providing access to the configuration of network hardware through software programming is essential to allow users to manage their networks via high-level applications that are developed for them by third-party developers. Alternatively, users can outsource network configuration and management to service providers. Viewing the target management functions of each possible application as a separate control slice of the home network, trusted third parties can programmatically control different slices to better manage different functions, such as WiFi configuration, routing improvement, and access control (e.g. configure WiFi channel and power to minimize interference and/or set parental controls). Several other researchers have previously identified the need for applications and services within the home to cope with increasing complexity and heterogeneity. A few works have suggested solutions that are independent of the SDN concept, such as creating a new, separate operating system for the home in which users deal with applications and high-level policies to handle the integration and management of their network BIB002, or using an OSGI (Open Service Gateway Initiative)-based framework to install applications on a residential gateway (Valtchev & Frankov, 2002). However, most of the recent works rely on SDN technology, particularly OpenFlow-based solutions, to address the problem of network management. This is the main focus of this article. Aside from the core management functions, and as a subset of those functions, significant work has been done to automate detection and diagnosis of faults in home networks, and to define the appropriate interaction and interfaces between the users and tools to manage and configure the home network BIB001.
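To illustrate what a per-slice policy pushed by a trusted third party might look like at the flow level, the sketch below constructs OpenFlow-style match/action rules for two of the functions mentioned above: a parental-control rule that drops traffic from a child's device to a blocked destination during certain hours, and a rule that steers a video-call device's traffic into a high-priority queue. The rule layout, the install_flow hook, the MAC addresses and the documentation-range IP address are hypothetical placeholders rather than the API of any real controller.

```python
from datetime import datetime

def parental_control_rule(device_mac, blocked_ip):
    """Drop traffic from one household device towards a blocked destination."""
    return {"match": {"eth_src": device_mac, "ipv4_dst": blocked_ip},
            "actions": [],              # empty action list == drop
            "priority": 200}

def video_priority_rule(device_mac, queue_id):
    """Send a device's traffic to a high-priority queue on the uplink port."""
    return {"match": {"eth_src": device_mac},
            "actions": [{"set_queue": queue_id}, {"output": "uplink"}],
            "priority": 100}

def install_flow(rule):
    # Placeholder for the controller's southbound call (e.g. an OpenFlow
    # flow-mod message); here we just print the rule that would be pushed.
    print("install:", rule)

# Apply the parental-control slice only during assumed homework hours (16-20h).
if 16 <= datetime.now().hour < 20:
    install_flow(parental_control_rule("aa:bb:cc:dd:ee:01", "203.0.113.10"))
install_flow(video_priority_rule("aa:bb:cc:dd:ee:02", queue_id=1))
```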
SDN in the home: A survey of home network solutions using Software Defined Networking <s> Generic theme <s> Wireless home networks are increasingly deployed in people's homes worldwide. Unfortunately, home networks have evolved using protocols designed for backbone and enterprise networks, which are quite different in scale and character to home networks. We believe this evolution is at the heart of widely observed problems experienced by users managing and using their home networks. In this paper we investigate redesign of the home router to exploit the distinct social and physical characteristics of the home. We extract two key requirements from a range of ethnographic studies: users desire greater understanding of and control over their networks' behaviour. We present our design for a home router that focuses on monitoring and controlling network traffic flows, and so provides a platform for building user interfaces that satisfy these two user requirements. We describe and evaluate our prototype which uses NOX and OpenFlow to provide per-flow control, and a custom DHCP implementation to enable traffic isolation and accurate measurement from the IP layer. It also provides finer-grained per-flow control through interception of wireless association and DNS resolution. We evaluate the impact of these modifications, and thus the applicability of flow-based network management in the home. <s> BIB001 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Generic theme <s> Managing a home network is challenging because the underlying infrastructure is so complex. Existing interfaces either hide or expose the network's underlying complexity, but in both cases, the information that is shown does not necessarily allow a user to complete desired tasks. Recent advances in software defined networking, however, permit a redesign of the underlying network and protocols, potentially allowing designers to move complexity further from the user and, in some cases, eliminating it entirely. In this paper, we explore whether the choices of what to make visible to the user in the design of today's home network infrastructure, performance, and policies make sense. We also examine whether new capabilities for refactoring the network infrastructure - changing the underlying system without compromising existing functionality - should cause us to revisit some of these choices. Our work represents a case study of how co-designing an interface and its underlying infrastructure could ultimately improve interfaces for that infrastructure. <s> BIB002 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Generic theme <s> As the number and variety of connected devices increase, most end-users find themselves unable to manage their home networks properly, not having enough time and/or knowledge to do so. In this paper, we propose a new approach to remove this burden from them, by fully virtualizing the home network and delegating its management and operations to the ISP, while keeping end-users in control. We furthermore define the architecture of our software-based Majord'Home solution. Acting as a majordomo of the home, it handles a representation of the home objects and network constraints, automates the connectivity between heterogeneous elements and thus meets the needs of end-users. We finally describe the first version of our on-going implementation as a proof of concept. 
<s> BIB003 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Generic theme <s> Summary ::: Within the Internet service provider landscape, the residential gateway (RGW) plays a key role in service provision. The RGW should be an enabler for the provision of new and better services for residential users, but it is often instead an obstacle for innovation. This paper discusses how to improve the provision of innovative services and to increase the usability of residential networks by upgrading the residential gateway in alignment with the current paradigms of software-defined networking (SDN) and network function virtualization. In this approach, SDN contributes by providing a fine-grained control of the traffic and network function virtualization contributes by outsourcing traditional and specialized network functions running inside the RGW like routing or network address translation to the Internet service provider premises. Based on this approach, a management framework has been designed by considering 2 aspects: the involvement of the residential user in the management tasks through the provision of network management applications and the need to decouple network applications from the underlying SDN controller technology to encourage the development of innovative network applications. In addition, a virtualized management and networking domain has been defined to complement the approach and leverage cloud technologies. The advantages and challenges of this approach are analyzed based on a proof of concept development. <s> BIB004 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Generic theme <s> The residential gateway is a key device in the provision of Internet access to a household or to a small office. Managing a residential network nowadays means configuring the functionality provided by the residential gateway, which is often a task that requires a certain level of technical expertise that most residential users lack. Internet Service Providers sometimes address this usability problem by managing the residential gateway from a central location and offering a way of configuring simple functions such as the password of the Wi-Fi network through a web-based application. In this paper a new user-centric management architecture is proposed, to increase the active engagement of residential users in the management tasks of their own networks, improving the usability of the network and facilitating the provision of new services. In this approach, residential network management applications are split in two components: a front-end handling user interaction and running on the user's preferred device (PC, laptop, smartphone); and a back-end built on top of both the Software Defined Networking (SDN) and the Network Functions Virtualization (NFV) paradigms. The solution takes advantage of the fine-grained control of network traffic and the convenience to communicate network events provided by SDN and the outsourcing of traditional network functions like routing or NAT from the residential gateway to a cloud-based infrastructure managed by the Internet Service Provider. In this paper the advantages and challenges of this approach are discussed, based on the results obtained from a proof of concept system that has been developed to evaluate the feasibility and performance of the proposal. 
The residential network usability is improved by implementing a new user-centric management model. Residential network management applications (RENEMA apps) involve users in managing their own networks. Residential network services (RENESEs) expedite and simplify the development of RENEMA apps. The virtualized management and networking domain (vMANDO) concept hosts the SDN and NFV components. The architecture allows avoiding the manufacturer lock-in effect. <s> BIB005
Apart from the works that focus on a specific aspect of managing home networks, such as bandwidth allocation or security, several articles introduce their own approach to exploiting SDN in home networking from a general perspective. Two of the first works in this category were developed as part of the Homework project (The University of Nottingham, 2012), and aimed at redesigning existing home-network infrastructure (i.e. routers) based on the concepts of SDN to provide the user with better understanding and control as well as novel interfaces (Mortier et al., 2011; BIB001). The authors in BIB002 take the home network as a case study to discuss how SDN can be used to refactor current networks and provide users with the correct level of network visibility and actionable information. The concept of virtualisation is suggested in the remaining works of this generic category: BIB003, Dillon & Winters (2014), BIB004, and BIB005. These four works differ in their proposed architectures, but agree on virtualising the home network and delegating the management and control of the network to someone in the cloud, most probably the Internet Service Provider (ISP). This aims to remove the management burden from the user while preserving the usability of the network.
SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> As Internet service providers increasingly implement and impose "usage caps", consumers need better ways to help them understand and control how devices in the home use up the available network resources or available capacity. Towards this goal, we will demonstrate a system that allows users to monitor and manage their usage caps. The system uses the BISMark firmware running on network gateways to collect usage statistics and report them to a logically centralized controller, which displays usage information. The controller allows users to specify policies about how different people, devices, and applications should consume the usage cap; it implements and enforces these policies via a secure OpenFlow control channel to each gateway device. The demonstration will show various use cases, such as limiting the usage of a particular application, visualizing usage statistics, and allowing users within a single household to "trade" caps with one another. <s> BIB001 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Despite the popularity of home networks, they face a number of systemic problems: (i)Broadband networks are expensive to deploy; and it is not clear how the cost can be shared by several service providers; (ii) Home networks are getting harder to manage as we connect more devices, use new applications, and rely on them for entertainment, communication and work|it is common for home networks to be poorly managed, insecure or just plain broken; and (iii) It is not clear how home networks will steadily improve, after they have been deployed, to provide steadily better service to home users. In this paper we propose slicing home networks as a way to overcome these problems. As a mechanism, slicing allows multiple service providers to share a common infrastructure; and supports many policies and business models for cost sharing. We propose four requirements for slicing home networks: bandwidth and traffic isolation between slices, independent control of each slice, and the ability to modify and improve the behavior of a slice. We explore how these requirements allow cost-sharing, outsourced management of home networks, and the ability to customize a slice to provide higher-quality service. Finally, we describe an initial prototype that we are deploying in homes. <s> BIB002 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Policy-makers, ISPs and content providers are locked in a debate about who can control the Internet traffic that flows into our homes. In this paper we argue that the user, not the ISP or the content provider, should decide how traffic is prioritized to and from the home. Home users know most about their preferences, and if they can express them well to the ISP, then both the ISP and user are better off. To test the idea we built a prototype that lets users express highlevel preferences that are translated to low-level semantics and used to control the network. 
<s> BIB003 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Home networks are becoming increasingly complex, with many household devices (PCs tablets, phones, media gateways, smart TVs) and diverse user applications (browsing, video streaming, peer-to-peer, VoIP, gaming) sharing the single broadband access link. In today's architecture the traffic streams compete for bandwidth on a best-effort basis, resulting in poor quality of experience for users. In this paper, we leverage the emerging paradigm of software defined networking (SDN) to enable the ISP to expose some controls to the users to manage service quality for specific devices and applications in their household. Our contributions are to develop an architecture and interface for delegation of such control to the user, and to demonstrate its value via experiments in a laboratory test-bed using three representative applications: video, web-browsing, and large downloads. <s> BIB004 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> This paper considers SDN, and OpenFlow in particular, as technology to develop the next generation of more flexible, configurable and automated home networks. We identify the problems with the current state of the art in home networking, which includes a lack of user engagement in home network maintenance and configuration, Internet bandwidth limitations, and a lack of ISP reconfiguration and troubleshooting tools. We propose HomeVisor, a novel remote home network management tool. In this paper, we evaluate HomeVisor's ability to outsource control to an entity outside the home network. This includes the overhead of multiple slices within the home, and the effect of controller latency on network performance. <s> BIB005 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Software Defined Networks (SDN) offers the opportunity to drive down costs through increased operational efficiency-network utilization in particular-service creation velocity, and differentiated and personalized network services. This way the CAPEX and OPEX costs for the operator are going to be drawn down and same way costs will be drawn down for the end user. In the context of UNIFY project [1], one of the main objectives is to focus on enablers of such unified production environment and will develop an automated, dynamic service creation platform, leveraging a fine-granular service chaining architecture. <s> BIB006 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Home networks are becoming increasingly rich in devices and applications, but continue to share the broadband link in a neutral way. We believe the time is ripe to personalize the home network experience, allowing a household to differentiate its users (e.g. father's laptop prioritized over kid's iPad) and services (e.g. video streaming prioritized over downloading). In this paper we argue that SDN provides a way to automate self-customization by households, while cloud-based delivery simplifies subscriber management. We develop an architecture comprising a cloud-based front-end portal and SDN-based back-end APIs, and show how these can be used by the subscriber to improve streaming-video (YouTube) quality and video conferencing (Skype) experience, and to permit device-specific parental controls (e.g. 
Facebook access). We prototype and validate our solution in a platform comprising the Floodlight controller and OVS switches. Lastly, we evaluate our solutions via experiments of realistic scenarios to quantify the benefits in terms of improved quality of experience and new features for the user. <s> BIB007 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> In this paper we present an idea of a propriety Software Defined residential Network (SDrN) and we show as a use case, a multicast streaming service that can be hosted on such networks. To verify the feasibility of the service in the context of quality of service, we offer to the providers of online streaming services (in some cases the ISPs themselves), APIs to control and validate the QoS of the users in the service. The QoS control APIs were tested on SDN based simulation environment. <s> BIB008 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Software Defined Network (SDN) has long been a research focus since born from the lab of Stanford University. Researches on traditional home networks are faced with a series of challenges due to the ever more complicated user demands. The application of SDN to the home network is an effective approach in coping with it. Now the research on the SDN based home network is in its preliminary stage. Therefore, for better user experience, it is essential to effectively manage and utilize the resources of the home network. The general slicing strategies don't show much advantage in performance within the home networks due to the increased user demands and applications. In this paper, we introduce an advanced SDN based home network prototype and analyze its compositions and application requirements. By implementing and comparing several slicing strategies in properties, we achieve an optimized slicing strategy according to the specified home network circumstance and our preference. <s> BIB009 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Existing home-networking protocols do not robustly incorporate universal connectivity among multiple homes, which leaves their use restricted to a single home. In addition, even in a single home network, new functional requirements ask for more diversified forms of networking control. This paper presents in-home consumer electronic devices that incorporate the emerging SDN (Software Defined Networking) paradigm. The proposed devices enable ondemand provisioning for protocol-agnostic home networking and thus provide a high degree of flexibility for intra-home networking as well as wider connectivity for inter-home networking. The feasibility of the prototype devices is verified by realizing a multi-home visual-sharing scenario and by supporting diverse future scenarios. <s> BIB010 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> In this paper, we propose to combine the emerging software defined networking (SDN) paradigm with the existing residential broadband infrastructure to enable home users to have dynamic control over their traffic flows. The SDN centralized control technology enables household devices to have virtualized services with quality of service (QoS) guarantee. 
SDN-enabled open application programming interfaces (APIs) allow Internet service providers (ISPs) to perform bandwidth slicing in home networks and implement time-dependent hybrid pricing. Given the requests from household devices for virtualized and non-virtualized services, we formulate a Stackelberg game to characterize the pricing strategy of ISP as well as bandwidth allocation strategy in home networks. In the Stackelberg game, the leader is the ISP and the followers are the home networks. We determine the optimal strategies which provide maximal payoff for the ISP. Numerical results show that our proposed SDN-enabled home network technology with the hybrid pricing scheme provides a better performance than a usage-based pricing scheme tailored for best-effort home networks. <s> BIB011 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> The increasing uptake of smart home appliances, such as lights, smoke-alarms, power switches, baby monitors, and weighing scales, raises privacy and security concerns at unprecedented scale, allowing legitimate and illegitimate entities to snoop and intrude into the family's activities. In this paper we first illustrate these threats using real devices currently available in the market. We then argue that as more such devices emerge, the attack vectors increase, and ensuring privacy/security of the house becomes more challenging. We therefore advocate that device-level protections be augmented with network-level security solutions, that can monitor network activity to detect suspicious behavior. We further propose that software defined networking technology be used to dynamically block/quarantine devices, based on their network activity and on the context within the house such as time-of-day or occupancy-level. We believe our network-centric approach can augment device-centric security for the emerging smart-home. <s> BIB012 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Internet Service Providers (ISPs) have introduced "data caps", or quotas on the amount of data that a customer can download during a billing cycle. Under this model, Internet users who reach a data cap can be subject to degraded performance, extra fees, or even temporary interruption of Internet service. For this reason, users need better visibility into and control over their Internet usage to help them understand what uses up data and control how these quotas are reached. In this paper, we present the design and implementation of a tool, called uCap, to help home users manage Internet data. We conducted a field trial of uCap in 21 home networks in three countries and performed an in-depth qualitative study of ten of these homes. We present the results of the evaluation and implications for the design of future Internet data management tools. <s> BIB013 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Today's residential Internet service is bundled and shared by a multiplicity of household devices and members, causing several performance problems. Customizing broadband sharing to the needs and usage patterns of each individual house has hitherto been difficult for ISPs and home router vendors. In this paper we design, implement, and evaluate a system that allows a third-party to create new services by which subscribers can easily customize Internet sharing within their household. 
Our specific contributions are three-fold: (1) We develop an over-the-top architecture that enables residential Internet customization, and propose new APIs to facilitate service innovation. (2) We identify several use-cases where subscribers benefit from the customization, including: prioritizing quality-of-experience amongst family members; monitoring individual usage volumes in relation to the household quota; and filtering age-appropriate content for selected users. (3) We develop a fully-functional prototype of our system leveraging open-source SDN platforms, deploy it in selected households, and evaluate its usability and performance benefits to demonstrate feasibility and utility in the real world. <s> BIB014 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> In recent years, there has been a rapid growth in the adoption and usage of WiFi enabled networked devices at homes such as laptops, handheld device and wireless entertainment devices. In dense wireless deployments at homes, such as apartment buildings, neighboring home WLANs share the same unlicensed spectrum by deploying consumer-grade access points in their individual homes. In such environments, WiFi networks can suffer from intermittent performance issues such as wireless packet losses, interference from WiFi and non-WiFi sources due to the increasing diversity of devices that share the spectrum. In this paper, we propose a vendor-neutral cloud-based centralized framework called COAP to configure, co-ordinate and manage individual home APs using an open API implemented over the OpenFlow SDN framework. This paper describes the framework and motivates the potential benefits of the framework in home WLANs. <s> BIB015 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> In dense wireless deployments at homes, such as apartment buildings, neighboring home WLANs share the same unlicensed spectrum by deploying consumer-grade access points in their individual homes. In such environments, WiFi networks can suffer from intermittent performance issues such as wireless packet losses, interference from WiFi and non-WiFi sources due to the rapid growth and increasing diversity of devices that share the spectrum. In this paper, we propose a vendor-neutral cloud-based centralized framework called COAP to configure, coordinate and manage individual home APs using an open API implemented by these commodity APs. The framework, implemented using OpenFlow extensions, allows the APs to share various types of information with a centralized controller — interference and traffic phenomenon and various flow contexts, and in turn receive instructions — configuration parameters (e.g., channel) and transmission parameters (through coarse-grained schedules and throttling parameters). This paper describes the framework and associated techniques, applications to motivate its potential benefits, such as, upto 47% reduction in channel congestion and our experiences from having deployed it in actual home environments. <s> BIB016 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> In recent years a lot of new consumer devices have been introduced to the home network. Modern home networks usually consists of multiple heterogeneous communication technologies such as Ethernet, Wi-Fi and power-line communications. 
Today, the user has to manually decide which transmission technology to use as there is no automated optimization across technologies. Load balancing algorithms can improve overall throughput while redundant links also provide the opportunity to switch flows in case of link failures. Current standards either lack real implementation in consumer devices or do not have the flexibility to support all necessary functionality towards creating a convergent hybrid home network. Therefore, we propose an alternative way by using Software-Defined Networking techniques to manage a heterogeneous home network. In this paper we specifically evaluate the ability of OpenFlow-enabled switches to perform link switching both under normal conditions and in case of link failures. Our results show that SDN-based management can be used to improve heterogeneous home networks by utilising redundant links for flow rerouting. However, they also show that improvements are still needed to reduce downtime during link failure or rerouting in case of TCP traffic. <s> BIB017 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Adaptive video streaming techniques were introduced to enable high quality video delivery over HTTP. These schemes propose to choose dynamically the appropriate video rate to match the operating conditions. In home networks, wireless access is the predominant Internet access. Multiple clients/players with different link qualities compete over a limited wireless bandwidth to transfer their video streams. As a result, some users undergo unpredictable degradations of their Quality of Experience (QoE) while others benefit from these perturbations. In this paper we introduce a new technique to address this issue at the gateway without modifying neither the client nor the video server side. We design a framework WNAVS (Wireless Network Assisted Video Streaming) that relies on the deployment of Software Defined Networking (SDN). WNAVS performs a dynamic traffic shaping based on collected network traffic statistics and allocates bandwidth for the clients in real time. We evaluate WNAVS over several metrics: fairness, instability, average video quality as well as the video traffic utilization. Our results demonstrate an improvement for all these parameters. <s> BIB018 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Software defined networking (SDN) provides a centralized control framework with real-time control of network components, residential customer routers in particular, that allows automated per-user bandwidth allocation. However, employing dynamic traffic shaping for efficient bandwidth utilization among residential users is a challenging task. In this context, understanding application usage requirements for each individual user and translating them into network policies requires expertise beyond most residential users. This paper proposes a user-centric traffic optimization scheme by profiling users based on their application trends recorded using generic NetFlow records, in order to provide a better view of per user utilization. We also propose an SDN traffic monitoring and management application for implementing Linux-based hierarchical token bucket (HTB) queues customized for individual user profiles in real-time, according to user-defined priorities. 
The traffic management scheme scales well under both upstream and downstream network congestion by dynamically allocating dedicated bandwidth to users based on their profile priority, resulting in a decreased packet loss and latency for a selected set of high priority users. <s> BIB019 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> In a home network, there are multiple users each running different applications interacting with the network. To enhance the experience of each user, prioritization of various network applications is important. Previous solutions to this problem assigned priorities in a static manner. Even though there has been some efforts to assign priorities dynamically, these solutions only used interactivity of the application to prioritize traffic. We present Contextual Router, which achieves better prioritization by detecting all the flows generated in a home network and assigning priorities in a dynamic manner using various features of flows collected from each user's machine. <s> BIB020 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> With the increasing number of IoT (Internet of Things) devices and advance of smart home technology, we propose an innovative bandwidth allocation framework for IoT enabled smart homes. The application scope of this research assumes a scenario that an ISP (Internet Service Provider) should support thousands of IoT enabled smart homes for a variety of services. Each smart home is equipped with tens of IoT devices with a wide spectrum of functional capabilities. The proposed bandwidth allocation framework is based on the promising software defined networking (SDN) architecture and is responsible for optimizing bandwidth allocation on both internal home traffic and external Internet traffic. The overall system architecture is separated into SDN Smart Home Cloud and Massive Smart Homes, which are interconnected by OpenFlow protocol. We modify the 3GPP LTE QoS Class Identifier (QCI) to adaptive to the services suitable for smart homes. The proposed bandwidth allocation algorithm considers fairness, delay, and service priority at the same time. With this framework, ISP is able to optimize bandwidth allocation by aggregating thousands of classified services of smart homes and thus effectively enhance Quality of Service (QoS) and user experience (QoE). <s> BIB021 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> There has always been a gap of perception between Internet Service Providers (ISPs) and their customers when considering the performance of network service. On one hand, ISPs invest to increase downstream speed of access network infrastructure. On the other hand, users cannot achieve perceived quality of experience (QoE). This paper addresses this problem by introducing a system, Conan, which enables content-aware flow scheduling to improve the QoE of users. Conan exploits to satisfy users' requirements in the access network (LAN), which is the performance bottleneck actually. By leveraging the technique of software defined networking (SDN), Conan are able to specify the expected network capacity for different applications. Automatic application identification is deployed at home gateway to improve the scalability, and flexible bandwidth allocation is realized at LAN for specified applications. 
Using video streaming service optimization as an example, we demonstrate that our system can automatically allocate bandwidth for video flows. <s> BIB022 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> In recent years, the smart home field has caught wide attention and witnessed rapid development. Smart devices, continuously increasing in number, make user management and implementation more difficult while promoting the development of the smart home. How to design an efficient smart home management platform is one of the great challenges the current smart home field faces. This article refers to the core idea of SDN, and proposed the software defined smart home platform, SDSH for short. The design features of virtualization, openness, and centralization can effectively integrate the heterogeneous network devices in the smart home platform, and flexibly adapt to the great difference between family scenes and user demands. At the same time, this article brings up the core technology of SDSH, and discusses the application value of the four core technologies and the new challenges the current technology is facing in a smart home scenario. In the end, regarding the SDSH application scenarios, this article analyzes the household experience innovation brought by this kind of smart home management platform, and the opportunities and challenges the SDSH platform faces. <s> BIB023 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Software-defined Home Networks (SDHN) is a key development trend of smart home which is proposed to realize multi-home visual sharing. With the improved openness and programming ability, SDHN faces increased network threat than traditional home networks. Especially, because of the diversity and heterogeneity of smart home products, multi-stage attack is more convenient to be performed in SDHN. To mitigate multi-stage attack in SDHN, some significant problems are needed to be addressed. The first problem is security assessment along with attack events. The second one is countermeasure selection problem based on security assessment result and security policy. The third one is attack mitigation countermeasure deployment problem according to current network context to meet the countermeasure decision instantly. In this paper, a multi-stage attack mitigation mechanism is proposed for SDHN using Software-Defined Networking (SDN) and Network Function Virtualization (NFV). Firstly, an evidence-driven security assessment method using SDN factors and NFV-based detection is designed to perform security assessment along with observed security events. Secondly, an attack mitigation countermeasure selection method is proposed. The evaluation shows that the proposed mechanism is effective for multi-stage attack mitigation in SDHN1. <s> BIB024 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> While enterprise networks follow best practices and security measures, residential networks often lack these protections. Home networks have constrained resources and lack a dedicated IT staff that can secure and manage the network and systems. At the same time, homes must tackle the same challenges of securing heterogeneous devices when communicating to the Internet. In this work, we explore combining software-defined networking and proxies with commodity residential Internet routers. 
We evaluate a “whole home” proxy solution for the Skype video conferencing application to determine the viability of the approach in practice. We find that we are able to automatically detect when a device is about to use Skype and dynamically intercept all of the Skype communication and route it through a proxy while not disturbing unrelated network flows. Our approach works across multiple operating systems, form factors, and versions of Skype. <s> BIB025 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> Recent advances in wireless networking technologies are leading toward the proliferation of novel home network applications. However, the landscape of emerging scenarios is fragmented due to their varying technological requirements and the heterogeneity of current wireless technologies. We argue that the development of flexible software-defined wireless architectures, including such efforts as the wireless MAC processor, coupled with SDN concepts, will enable the support of both emerging and future home applications. In this article, we first identify problems with managing current home networks composed of separate network segments governed by different technologies. Second, we point out the flaws of current approaches to provide interoperability of these technologies. Third, we present a vision of a software-defined multi-technology network architecture (SDN@home) and demonstrate how a future home gateway (SDN controller) can directly and dynamically program network devices. Finally, we define a new type of flexibility enabled by SDN@home. Wireless protocols and features are no longer tied to specific technologies but can be used by general-purpose wireless SDN devices. This permits satisfaction of the requirements demanded by home owners and service providers under heterogeneous network conditions. <s> BIB026 </s> SDN in the home: A survey of home network solutions using Software Defined Networking <s> Specialised themes <s> We propose to leverage the virtualization possibilities of Network Functions Virtualization (NFV) together with the programmability of Software Defined Networking (SDN) in order to offer a portfolio of IoT-related functions to the residential users. The objectives are to reach economies of scale by offering a reasonably inexpensive customer premises equipment supporting most IoT physical communication options, whereas all self-discovery and the rest of vendor-specific functionality is externalized and implemented by the ISP (Internet Service Provider) or third parties. <s> BIB027
In this category, we can find a few different themes. The general purpose is still to control and manage the home network using ideas and tools from the SDN paradigm, but the emphasis is put on a particular aspect of home networking in each work. We recognised several specialised themes and summarised them into 10 different subcategories, some of which have only one paper each, but the theme is distinct enough to be highlighted and pointed out for further study and research. The most popular subject in this category is QoS and quality of experience (QoE) when using home network applications BIB018 BIB006 BIB019 BIB020 BIB011 BIB007 BIB021 BIB004 BIB008 BIB022 . The target application in these works is generally multimedia and video streaming, and the aim is to optimize bandwidth allocation for different network applications to improve the user experience. This optimisation is mostly based on the user's preferences or profile, but can also be derived from dynamic traffic shaping based on collected traffic statistics BIB018 , automatic identification of applications BIB022 or a proposed bandwidth allocation algorithm BIB021 . Most of the works enable the ISP to control the service quality from the cloud, though a few works depend on a local solution using an in-home SDN controller BIB018 , BIB020 , BIB019 . One work also proposes a novel pricing scheme for ISPs, who can implement time-dependent hybrid pricing through SDN APIs BIB011 . Another distinct theme in this category is to address the issues related to IoT devices in the context of the smart home BIB027 (Nobakht, Sivaraman, & Boreli, 2016) BIB012 BIB023 . This perspective is unique and new to home networking, but its relevance is increasing in modern homes with the rise of the IoT paradigm. All network-enabled devices in the home eventually form an internet of things, and their management can consequently be considered a networking problem; hence, SDN comes to mind. Within the papers on IoT home devices, some works focus on the problem of managing IoT devices, such as finding a device fault easily , integrating the heterogeneous network devices in smart home environments BIB023 , and offering a portfolio of IoT-related functions to home users BIB027 . Another focus point is to propose solutions for smart home and IoT device security (Nobakht et al., 2016) BIB012 . Apart from IoT, targeting the application of home network security is also a common theme. One of the earliest works in the complete set of the surveyed papers proposed that users outsource the management tasks related to security to a third-party controller who has the required expertise and capacity to monitor coordinated activities over the Internet (Feamster, 2010) . Another work proposes a multi-stage attack mitigation mechanism for home networks using SDN BIB024 . A home-level security proxy solution for video conferencing applications (as a case study) is proposed in BIB025 . Finally, a community-based crowdsourced home cyber-security system is proposed in . Because caps on Internet usage are an increasing concern for home users, several works specifically address the problem of managing Internet use through the SDN architecture BIB013 BIB014 BIB001 BIB003 . An early work BIB001 demonstrates a system that collects usage statistics and reports them to a central controller, which displays usage information.
The controller allows users to specify policies and enforces them, where policies dictate how different people, devices, and applications should consume the usage cap. The other works either depend on the ISP, where users are allowed to choose the relative priority of their applications and signal their preference to the ISP BIB003 , or allow a third party to control Internet traffic usage BIB014 , BIB013 . Another group of papers addresses the specific issues arising from managing home WiFi access points BIB015 BIB016 , or in general all multi-technology wireless network devices BIB026 . A few papers adopt the concept of network slicing (Fratczak, BIB005 BIB009 BIB002 ). Network slicing is a promising technique that creates different slices over the same physical home network, so that each slice is independently controllable and can be isolated for different services. The management of slices may be assigned to a third party. Finally, the last four papers in our collection are directed toward four special target applications. The first work proposes the instrumentation of home networks to enable their troubleshooting . This work presents the design requirements of a general-purpose home network logging platform that can record events supporting troubleshooting services for home network users. A second work discusses the idea of multi-home networking BIB010 , enabling on-demand provisioning of networked multi-home multimedia applications using SDN-based in-home consumer electronic devices. The automatic configuration of home networks is also addressed in , which proposes a method where the SDN controller performs auto-recognition and registration of home devices, then manages home devices according to the home network connection state. The final work addresses the problem of heterogeneity in home networks, and evaluates the ability of OpenFlow-enabled switches to manage heterogeneous home networks by utilising redundant links for flow rerouting and performing link switching between wired and wireless technologies both under normal conditions and in case of link failures BIB017 .
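Most of the QoS- and quota-oriented systems above ultimately turn user or ISP preferences into per-device or per-application rate limits that the controller installs on the home gateway. The short Python sketch below illustrates only the allocation arithmetic behind such prioritisation; the device names, priority weights and the 100 Mbps link capacity are invented for illustration, and pushing the resulting limits to the data plane is left abstract because each surveyed system uses its own controller and southbound API.

    # Illustrative sketch only: priority-weighted bandwidth split for home devices.
    # Device names, weights and the link capacity are hypothetical examples.
    LINK_CAPACITY_MBPS = 100.0

    device_priority = {        # higher weight = more important to the household
        "work_laptop": 4.0,
        "smart_tv": 3.0,
        "game_console": 2.0,
        "iot_sensors": 1.0,
    }

    def allocate_bandwidth(priorities, capacity):
        """Split the access link proportionally to per-device priority weights."""
        total = sum(priorities.values())
        return {dev: capacity * w / total for dev, w in priorities.items()}

    for dev, share in allocate_bandwidth(device_priority, LINK_CAPACITY_MBPS).items():
        # A real deployment would install these shares as per-flow rate limits
        # (e.g. queues on the home router) via the controller's southbound API.
        print(f"{dev}: {share:.1f} Mbps")

In the systems surveyed here the weights would come from user profiles, application identification or ISP policy rather than from a hard-coded table.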
Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Agents that learn on-line with partial instance memory reserve some of the previously encountered examples for use in future training episodes. We extend our previous work by combining our method for selecting extreme examples with two incremental learning algorithms, AQ11 and GEM. Using these new systems, AQ11-PM and GEM-PM, and the task of computer intrusion detection, we conducted a lesion study to analyze trade-offs in performance. Results showed that, although our partial-memory model decreased predictive accuracy by 2%, it also decreased memory requirements by 75%, learning time by 75%, and in some cases, concept complexity by 10%, an outcome consistent with earlier results using our partial-memory method and batch learning. <s> BIB001 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Recently, mining data streams with concept drifts for actionable insights has become an important and challenging task for a wide range of applications including credit card fraud protection, target marketing, network intrusion detection, etc. Conventional knowledge discovery tools are facing two challenges: the overwhelming volume of the streaming data, and the concept drifts. In this paper, we propose a general framework for mining concept-drifting data streams using weighted ensemble classifiers. We train an ensemble of classification models, such as C4.5, RIPPER, naive Bayesian, etc., from sequential chunks of the data stream. The classifiers in the ensemble are judiciously weighted based on their expected classification accuracy on the test data under the time-evolving environment. Thus, the ensemble approach improves both the efficiency in learning the model and the accuracy in performing classification. Our empirical study shows that the proposed methods have a substantial advantage over single-classifier approaches in prediction accuracy, and the ensemble framework is effective for a variety of classification models. <s> BIB002 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Learning concepts that change over time is important for a variety of applications in which an intelligent system must acquire and use a behavioral profile. Computer intrusion detection, calendar scheduling, and intelligent user interfaces are three examples. An interesting class of methods for learning such concepts consists of algorithms that maintain a portion of previously encountered examples. Since concepts change over time and these methods store selected examples, mechanisms must exist to identify and remove irrelevant examples of old concepts. In this paper, we describe an incremental rule learner with partial instance memory, called AQ11-PM+WAH, that uses Widmer and Kubat's heuristic to adjust dynamically the window over which it retains and forgets examples. We evaluated this learner using the STAGGER concepts and made direct comparisons to AQ-PM and to AQ11-PM, similar learners with partial instance memory. Results suggest that the forgetting heuristic is not restricted to FLORA2, the learner for which it was originally designed. Overall, results from this study and others suggest learners with partial instance memory converge more quickly to changing target concepts than algorithms that learn solely from new examples.
<s> BIB003 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> We consider strategies for building classifier ensembles for non-stationary environments where the classification task changes during the operation of the ensemble. Individual classifier models capable of online learning are reviewed. The concept of "forgetting" is discussed. Online ensembles and strategies suitable for changing environments are summarized. <s> BIB004 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Induction of a concept description given noisy instances is difficult and is further exacerbated when the concepts may change over time. This paper presents a solution which has been guided by psychological and mathematical results. The method is based on a distributed concept description which is composed of a set of weighted, symbolic characterizations. Two learning processes incrementally modify this description. One adjusts the characterization weights and another creates new characterizations. The latter process is described in terms of a search through the space of possibilities and is shown to require linear space with respect to the number of attribute-value pairs in the description language. The method utilizes previously acquired concept definitions in subsequent learning by adding an attribute for each learned concept to instance descriptions. A program called STAGGER fully embodies this method, and this paper reports on a number of empirical analyses of its performance. Since understanding the relationships between a new learning method and existing ones can be difficult, this paper first reviews a framework for discussing machine learning systems and then describes STAGGER in that framework. <s> BIB005 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> In the real world concepts are often not stable but change with time. Typical examples of this are weather prediction rules and customers' preferences. The underlying data distribution may change as well. Often these changes make the model built on old data inconsistent with the new data, and regular updating of the model is necessary. This problem, known as concept drift, complicates the task of learning a model from data and requires special approaches, different from commonly used techniques, which treat arriving instances as equally important contributors to the final concept. This paper considers different types of concept drift, peculiarities of the problem, and gives a critical review of existing approaches to the problem. A difficult problem with learning in many real-world domains is that the concept of interest may depend on some hidden context, not given explicitly in the form of predictive features. A typical example is weather prediction rules that may vary radically with the season. Another example is the patterns of customers' buying preferences that may change with time, depending on the current day of the week, availability of alternatives, inflation rate, etc. Often the cause of change is hidden, not known a priori, making the learning task more complicated. Changes in the hidden context can induce more or less radical changes in the target concept, which is generally known as concept drift (Widmer and Kubat, 1996).
An effective learner should be able to track such changes and to quickly adapt to them. A difficult problem in handling concept drift is distinguishing between true concept drift and noise. Some algorithms may overreact to noise, erroneously interpreting it as concept drift, while others may be highly robust to noise, adjusting to the changes too slowly. An ideal learner should combine robustness to noise and sensitivity to concept drift (Widmer and Kubat, 1996). In many domains, hidden contexts may be expected to recur. Recurring contexts may be due to cyclic phenomena, such as seasons of the year or may be associated with irregular phenomena, such as inflation rates or market mood (Harries and Sammut, 1998). In such domains, in order to adapt more quickly to concept drift, concept <s> BIB006 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Most of the work in machine learning assumes that examples are generated at random according to some stationary probability distribution. In this work we study the problem of learning when the distribution that generates the examples changes over time. We present a method for detection of changes in the probability distribution of examples. The idea behind the drift detection method is to control the online error-rate of the algorithm. The training examples are presented in sequence. When a new training example is available, it is classified using the actual model. Statistical theory guarantees that while the distribution is stationary, the error will decrease. When the distribution changes, the error will increase. The method controls the trace of the online error of the algorithm. For the actual context we define a warning level, and a drift level. A new context is declared, if in a sequence of examples, the error increases reaching the warning level at example k_w, and the drift level at example k_d. This is an indication of a change in the distribution of the examples. The algorithm learns a new model using only the examples since k_w. The method was tested with a set of eight artificial datasets and a real-world dataset. We used three learning algorithms: a perceptron, a neural network and a decision tree. The experimental results show good performance in detecting drift and in learning the new concept. We also observe that the method is independent of the learning algorithm. <s> BIB007 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Induction of decision rules within the dominance–based rough set approach to the multicriteria and multiattribute classification is considered. Within this framework, we discuss two algorithms: Glance and an extended version of AllRules. The important characteristic of Glance is that it induces the set of all dominance–based rules in an incremental way. On the other hand, AllRules induces in a non–incremental way the set of all robust rules, i.e. based on objects from the set of learning examples. The main aim of this study is to compare both these algorithms. We experimentally evaluate them on several data sets. The results show that Glance and AllRules are complementary algorithms. The first one works very efficiently on data sets described by a low number of condition attributes and a high number of objects. The other one, conversely, works well on data sets characterized by a high number of attributes and a low number of objects.
<s> BIB008 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift. <s> BIB009 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> An emerging problem in Data Streams is the detection of concept drift. This problem is aggravated when the drift is gradual over time. In this work we define a method for detecting concept drift, even in the case of slow gradual change. It is based on the estimated distribution of the distances between classification errors. The proposed method can be used with any learning algorithm in two ways: using it as a wrapper of a batch learning algorithm or implementing it inside an incremental and online algorithm. The experimental results compare our method (EDDM) with a similar one (DDM). The latter uses the error-rate instead of the distance-error-rate. <s> BIB010 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Mining data streams is a challenging task that requires online systems based on incremental learning approaches. This paper describes a classification system based on decision rules that may store up-to-date border examples to avoid unnecessary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering and inconsistent rules classify them by distance as the nearest neighbour algorithm. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another.
<s> BIB012 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Sales prediction is an important problem for different companies involved in manufacturing, logistics, marketing, wholesaling and retailing. Different approaches have been suggested for food sales forecasting. Several researchers, including the authors of this paper, reported on the advantage of one type of technique over the others for a particular set of products. In this paper we demonstrate that besides an already recognized challenge of building accurate predictive models, the evaluation procedures themselves should be considered more carefully. We give illustrative examples to show that e.g. popular MAE and MSE estimates can be intuitive with one type of product and rather misleading with the others. Furthermore, averaging errors across differently behaving products can be also counter intuitive. We introduce new ways to evaluate the performance of wholesales prediction and discuss their biases with respect to different error types. <s> BIB013 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Fuel feeding and inhomogeneity of fuel typically cause process fluctuations in the circulating fluidized bed (CFB) boilers. If control systems fail to compensate the fluctuations, the whole plant will suffer from fluctuations that are reinforced by the closed-loop controls. Accurate estimates of fuel consumption among other factors are needed for control systems operation. In this paper we address a problem of online mass flow prediction. Particularly, we consider the problems of (1) constructing the ground truth, (2) handling noise and abrupt concept drift, and (3) learning an accurate predictor. Last but not least we emphasize the importance of having the domain knowledge concerning the considered case. We demonstrate the performance of OMPF using real data sets collected from the experimental CFB boiler. <s> BIB014 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Sales prediction is a complex task because of a large number of factors affecting the demand. We present a context aware sales prediction approach, which selects the base predictor depending on the structural properties of the historical sales. In the experimental part we show that there exist product subsets on which, using this strategy, it is possible to outperform naive methods. We also show the dependencies between product categorization accuracies and sales prediction accuracies. A case study of a food wholesaler indicates that moving average prediction can be outperformed by intelligent methods, if proper categorization is in place, which appears to be a difficult task. <s> BIB015 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Since the beginning of the Internet age and the increased use of ubiquitous computing devices, the large volume and continuous flow of distributed data have imposed new constraints on the design of learning algorithms. Exploring how to extract knowledge structures from evolving and time-changing data, Knowledge Discovery from Data Streams presents a coherent overview of state-of-the-art research in learning from data streams. The book covers the fundamentals that are imperative to understanding data streams and describes important applications, such as TCP/IP traffic, GPS data, sensor networks, and customer click streams. 
It also addresses several challenges of data mining in the future, when stream mining will be at the core of many applications. These challenges involve designing useful and efficient data mining solutions applicable to real-world problems. In the appendix, the author includes examples of publicly available software and online data sets. This practical, up-to-date book focuses on the new requirements of the next generation of data mining. Although the concepts presented in the text are mainly about data streams, they also are valid for different areas of machine learning and data mining. <s> BIB016 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Concept drift refers to a non-stationary learning problem over time. The training and the application data often mismatch in real-life problems. In this report we present a context of the concept drift problem. We focus on the issues relevant to adaptive training set formation. We present the framework and terminology, and formulate a global picture of concept drift learner design. We start with formalizing the framework for the concept drifting data in Section 1. In Section 2 we discuss the adaptivity mechanisms of the concept drift learners. In Section 3 we overview the principal mechanisms of concept drift learners. In this chapter we give a general picture of the available algorithms and categorize them based on their properties. Section 5 discusses the related research fields and Section 5 groups and presents major concept drift applications. This report is intended to give a bird's-eye view of the concept drift research field, provide a context of the research and position it within a broad spectrum of research fields and applications. <s> BIB017 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Decision rules, which can provide good interpretability and flexibility for data mining tasks, have received very little attention in the stream mining community so far. In this work we introduce a new algorithm to learn rule sets, designed for open-ended data streams. The proposed algorithm is able to continuously learn compact ordered and unordered rule sets. The experimental evaluation shows competitive results in comparison with VFDT and C4.5rules. <s> BIB018 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> This paper presents a new framework for dealing with two main types of concept drift: sudden and gradual drift in labelled data with a decision attribute. The learning examples are processed in batches of the same size. This new framework, called Batch Weighted Ensemble, is based on incorporating a drift detector into the evolving ensemble. Its performance was evaluated experimentally on data sets with different types of concept drift and compared with the performance of a standard Accuracy Weighted Ensemble classifier. The results show that BWE improves evaluation measures like processing time and memory used, and obtains competitive total accuracy. <s> BIB019 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Overload management has become very important in public safety systems that analyse high performance multimedia data streams, especially in the case of detection of terrorist and criminal dangers.
Efficient overload management improves the accuracy of automatic identification of persons suspected of terrorist or criminal activity without requiring interaction with them. We argue that in order to improve the quality of multimedia data stream processing in the public safety arena, the innovative concept of a Multimedia Data Stream Management System (MMDSMS) using load-shedding techniques should be introduced into the infrastructure to monitor and optimize the execution of multimedia data stream queries. In this paper, we present a novel content-centered load shedding framework, based on searching and matching algorithms, for analysing video tuples arriving within multimedia data streams. The framework tracks and registers all symptoms of overload, and either prevents overload before it occurs, or minimizes its effects. We have extended our Continuous Query Language (CQL) syntax to enable this load shedding technique. The effectiveness of the framework has been verified using both artificial and real data video streams collected from monitoring devices. <s> BIB020 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Three block based ensembles, AWE, BWE and ACE, are considered in the perspective of learning from data streams with concept drift. AWE updates the ensemble after processing each successive block of incoming examples, while the other ensembles are additionally extended by different drift detectors. Experiments show that these extensions improve classification accuracy, in particular for sudden changes occurring within the block, as well as reduce computational costs. <s> BIB021 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Introduction <s> Data streams are usually characterized by changes in the underlying distribution generating data. Therefore algorithms designed to work with data streams should be able to detect changes and quickly adapt the decision model. Rules are one of the most interpretable and flexible models for data mining prediction tasks. In this paper we present the Adaptive Very Fast Decision Rules (AVFDR), an on-line, any-time and one-pass algorithm for learning decision rules in the context of time changing data. AVFDR can learn ordered and unordered rule sets. It is able to adapt the decision model via incremental induction and specialization of rules. Detecting local drifts takes advantage of the modularity of rule sets. In AVFDR, each individual rule monitors the evolution of performance metrics to detect concept drift. AVFDR prunes rules that detect drift. This explicit change detection mechanism provides useful information about the dynamics of the process generating data, faster adaption to changes and generates compact rule sets. The experimental evaluation shows this method is able to learn fast and compact rule sets from evolving streams in comparison to alternative methods. <s> BIB022
Data mining is a relatively young and interdisciplinary field of computing science. It is one of the steps in the Knowledge Discovery in Databases (KDD) process that tries to discover patterns and dependencies in large data sets. One subtask of data mining is the classification problem. It identifies class labels to which a new observation belongs using knowledge extracted from labeled training examples. Most of the existing classifiers are created statically. They receive the whole learning set, from which knowledge is extracted. The knowledge is obtained only once and is not updated in the future. Those standard classifiers fail to meet modern challenges such as processing streaming data. Data streams are characterized by a very large, potentially infinite, amount of data. Processing streaming data may be very expensive if the data have to be accessed multiple times. That is why many classifiers try to minimize the number of reads. The second problem with data streams is how many examples to remember. Classifiers may have a full memory (they remember all training data), a partial memory (they memorize some important learning examples), or no memory. Some of the algorithms remember only metadata connected with the learning examples. Data streams can be processed by online classifiers. Those classifiers should have the following qualities BIB004 :
• Single pass through the data. The classifier reads each example only once.
• Limited memory and processing time. Each example should be processed very fast and in a constant period of time.
• Any-time learning. The classifier should provide the best answer at every moment of time.
Processing of data streams is a very popular and interesting research topic. An example of a system designed for stream analysis can be found in BIB020 . While processing streaming data, one may encounter the problem that the environment and the classification task change over time. The concepts of interest may depend on some hidden context BIB009 , which is unknown. Changes in the hidden context can induce more or less radical changes in target concepts, producing what is generally known as concept drift BIB005 . One of the common examples of changing environments is spam detection. The description of how e-mails are assigned to different groups changes with time; it depends on user preferences and on active spammers, who invent new solutions to trick the up-to-date classifier. The problem with concept drift is real and has a wide range of applications. According to Zliobaite BIB017 , application domains can be divided into four main groups: Monitoring and control, Assistance and information, Decision making, and AI and robotics. One of the typical monitoring problems is intrusion detection. The attackers try to invent new ways of overcoming current security systems, which is a source of concept drift. Other examples of concept drift from the Monitoring and control group are fraud detection in the financial sector and traffic management. Applications from the Assistance and information domain mainly organize and/or personalize the flow of information. The cost of a mistake is relatively low. An example of such an application is customer profiling and direct marketing, where customers' needs and interests change with time. Smart home systems, an example from the AI and robotics domain, should also adapt to the changing environment and the user's needs. A wide range of occurrences of the concept drift problem was presented in BIB018 BIB017 .
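To make the three requirements listed above concrete, the following toy Python sketch shows the usual prequential loop of an online classifier: it predicts before it learns, touches each labelled example exactly once, and keeps only constant-size statistics so that an answer is available at any moment. The learner is deliberately trivial (a majority-class predictor) and the three-example stream is invented; the sketch illustrates only the processing protocol, not any of the algorithms surveyed here.

    # Minimal sketch of the online setting: single pass, constant memory, any-time answers.
    from collections import defaultdict

    class MajorityClassLearner:
        """Trivial incremental classifier that keeps only per-class counts."""
        def __init__(self):
            self.counts = defaultdict(int)

        def predict(self, x):
            # Any-time property: some answer is available at every moment.
            return max(self.counts, key=self.counts.get) if self.counts else None

        def learn_one(self, x, y):
            # Single pass: each example is seen once and then discarded.
            self.counts[y] += 1

    stream = [({"len": 120}, "spam"), ({"len": 30}, "ham"), ({"len": 25}, "ham")]
    learner = MajorityClassLearner()
    for x, y in stream:
        y_pred = learner.predict(x)   # predict first (prequential evaluation) ...
        learner.learn_one(x, y)       # ... then update the model with the true label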
Systems designed for specific applications like food sales or CFB boilers were described in BIB013 BIB014 BIB015 . A more formal definition of concept drift may be as follows. At each point of time t, every example is generated by a source S_t, which is a distribution over the data. Concepts are stable if all examples are sampled by the same source, otherwise concept drift exists BIB017 . Two main types of concept drift may be distinguished: sudden (abrupt) and gradual (incremental) BIB006 . When the source at time t is suddenly replaced by another one, a sudden concept drift occurs. For example, John listened to pop music for his whole teenage life, but when he graduated from university he changed his preferences and started to listen only to classical music. A gradual drift would occur if John started to listen to classical music while he was still enjoying pop music, but his interest in pop decreased with time. In this case, the probability of sampling from the first source decreases with time, while the probability of sampling from the second source increases. In some domains previously seen concepts may reappear after some period of time. This type of change is known as a recurring context or recurring concept. Periodic seasonality is not considered to be a concept drift problem. Recurring concepts differ from common seasonality because it is not known when they may reappear BIB017 . Other examples of change worth mentioning are noise and blips BIB012 . Noise is a non-significant change and a good online classifier should not react to it. A blip represents a rare event that should be treated as an outlier and should be discarded. Mining data streams in the presence of concept drift is rather a new topic in the machine learning world, but there already exist algorithms that attempt to solve this problem. For a taxonomy of available concept drift learners see BIB017 . In general, they can be divided into two main groups: trigger-based and evolving. The trigger-based model contains a change detector that indicates a need for model change. The change detection process is separate from classification. Standard actions of classifiers equipped with a detector are as follows: the classifier predicts a label for a received example e; then the true label and the predicted label are submitted to the change detector; if the detector detects a change, the feedback is passed to the classifier; then the classifier is retrained according to the level of change BIB012 . One of the most popular drift detection methods is DDM proposed by Gama et al. in BIB007 . This approach detects changes in the probability distribution of examples. The main idea of this method is to monitor the error-rate produced by a classifier. Statistical theory affirms that the error decreases if the distribution is stable BIB007 . When the error increases, it signifies that the distribution has changed. DDM operates on labeled data that arrive one at a time. Another interesting detector that performs better than DDM for a slow gradual drift is EDDM proposed in BIB010 . It uses the distance between classification errors in order to detect a change. There is also a solution that detects change from data arriving in batches, called Batch Drift Detection Method (BDDM). It was proposed in BIB019 and improved in BIB021 . Evolving methods operate in a different way than trigger-based solutions. They try to build the most accurate classifiers at each moment of time without explicit information about the occurrence of a change.
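The error-rate monitoring performed by DDM can be summarised in a few lines of Python. The sketch below is a simplified illustration of that idea and not the authors' reference implementation; the 30-example minimum and the two- and three-standard-deviation thresholds follow the usual description of the method, and the detector is simply fed one boolean per classified example.

    import math

    class SimpleDriftDetector:
        """DDM-style sketch: track the online error rate and flag a warning or a
        drift when it rises well above the lowest level observed so far."""
        def __init__(self, min_examples=30):
            self.n = 0
            self.errors = 0
            self.p_min = float("inf")
            self.s_min = float("inf")
            self.min_examples = min_examples

        def update(self, prediction_was_wrong):
            self.n += 1
            self.errors += int(prediction_was_wrong)
            p = self.errors / self.n                  # running error rate
            s = math.sqrt(p * (1.0 - p) / self.n)     # its standard deviation
            if p + s < self.p_min + self.s_min:       # remember the best point so far
                self.p_min, self.s_min = p, s
            if self.n < self.min_examples:
                return "stable"                       # too little evidence yet
            if p + s >= self.p_min + 3.0 * self.s_min:
                return "drift"                        # rebuild the model from recent data
            if p + s >= self.p_min + 2.0 * self.s_min:
                return "warning"                      # start buffering recent examples
            return "stable"

In a trigger-based classifier of the kind described above, the "warning" signal would typically start a buffer of recent examples and the "drift" signal would trigger retraining on that buffer.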
The most popular evolving technique for handling concept drift is an ensemble of classifiers BIB017 . An example of such an ensemble is Accuracy Weighted Ensemble (AWE) BIB002 . It is the best representative of block-based ensembles, where component classifiers are constructed from sequentially arriving blocks of training data. When a new block is available, a new classifier is built from it and already existing component classifiers are evaluated. The new classifier usually replaces the worst component in the ensemble. For an overview of available complex methods see BIB016 BIB004 BIB012 BIB006 BIB017 . There also exist hybrid methods that incorporate an explicit drift detector with an ensemble of classifiers. An example of such an approach is Batch Weighted Ensemble (BWE) introduced in BIB019 and improved in BIB021 . BWE uses the Batch Drift Detection Method (BDDM) to detect an occurrence of change and updates its ensemble according to the type of change. Another block ensemble that is combined with an online drift detector is Adaptive Classifiers Ensemble (ACE) proposed in . Besides a drift detection mechanism and many batch learners, this system also contains an online learner. This paper focuses on incremental or online learning. A learning task is incremental if the training examples become available over time, usually one at a time . In this case learning may need to last indefinitely. This type of learning is similar to a human's acquisition of knowledge. People learn all the time and their knowledge is constantly revised based on newly gathered information. The term "incremental" is also applied to learning algorithms. An algorithm is online if, for given training examples, it produces a sequence of hypotheses such that the current hypothesis depends only on the previous one and on the current learning example e . All learning algorithms are applicable to all learning tasks. However, the most natural and flexible way to handle incremental learning tasks is to use incremental learners. Unfortunately, incremental learning is a rather forgotten area in the machine learning world . Nevertheless, there exist many incremental learning algorithms inducing different types of knowledge. An example of an incremental classifier inducing decision rules was described in BIB008 . However, most of the existing solutions are not applicable for processing data streams. One of the most popular incremental methods for mining data streams is Very Fast Decision Trees (VFDT) proposed in . It is an anytime system that builds decision trees using constant memory and constant time per example. VFDT uses the Hoeffding bound to guarantee that its output is asymptotically nearly identical to the result obtained by a batch learner. VFDT was improved in to deal with the concept drift problem. CVFDT uses a sliding window on the incoming data, and old data that fall outside the window are forgotten. Another knowledge representation that has been adapted to processing data streams is decision rules. Decision rules can provide descriptions that are easily interpretable by a human. They are also very flexible and can be quickly updated or removed when a change occurs. Decision rules cover selected parts of the space, so if they become out-of-date there is no need to learn from scratch: only the rules that cover regions affected by the change should be revised. However, according to Gama BIB018 , they have not received enough attention in the stream mining community so far.
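Returning to the block-based ensemble idea described at the beginning of this paragraph, the Python sketch below shows the AWE-style mechanics of adding, re-weighting and replacing components when a new block arrives. For brevity each component is a trivial majority-class model, the weights are plain accuracies on the newest block (the original AWE derives weights from the expected classification error instead), and the ensemble size of five is an arbitrary illustrative choice.

    from collections import Counter

    MAX_MEMBERS = 5
    ensemble = []                                  # list of (weight, model) pairs

    def train_component(block):
        """Trivial stand-in for a real base learner: always predicts the block's majority class."""
        majority = Counter(label for _, label in block).most_common(1)[0][0]
        return lambda x, m=majority: m

    def accuracy(model, block):
        return sum(model(x) == y for x, y in block) / len(block)

    def process_block(block):
        """Build a component on the new block, re-weight all members, drop the weakest."""
        global ensemble
        new_model = train_component(block)
        ensemble = [(accuracy(m, block), m) for _, m in ensemble]
        ensemble.append((accuracy(new_model, block), new_model))
        ensemble = sorted(ensemble, key=lambda wm: wm[0], reverse=True)[:MAX_MEMBERS]

    def predict(x):
        votes = Counter()
        for weight, model in ensemble:
            votes[model(x)] += weight              # weighted majority vote
        return votes.most_common(1)[0][0] if votes else None

    process_block([({"f": 1}, "a"), ({"f": 2}, "a"), ({"f": 3}, "b")])
    print(predict({"f": 1}))                       # -> "a"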
Decision rules can be more effective for mining data streams than other methods. In the case of algorithms based on Hoeffding Trees, the adaptation to change is performed via incremental growth of a tree. However, for sudden change the reaction might be too slow, because it might require rebuilding the whole tree structure. This might be very inefficient. Decision rules are more flexible than trees. A set of decision rules takes advantage of individual rules that can be managed independently BIB022 . Therefore, rules can be altered more easily when a change occurs, or even removed if necessary. For gradual concept drift, the adaptation to change probably has similar complexity for both knowledge representations. Next, decision trees split the data space, whereas decision rules cover parts of it. While processing data instance by instance, a tree might need more changes in the global model, while decision rules are updated independently. On the other hand, the process of incremental rule induction is more sophisticated than the induction of a decision tree. This may be the reason why decision rules are not as popular as decision trees for mining data streams. To the author's best knowledge, there does not exist any survey of incremental rule-based classifiers learning from non-stationary environments. The goal of this paper is to present the key online algorithms proposed for mining data streams in the presence of concept drift. It describes four of the proposed algorithms: FLORA, AQ11-PM+WAH, FACIL and VFDR. Those are the only purely incremental rule-based classifiers mining data streams in the presence of concept drift. First, the FLORA framework is described: the first family of algorithms that flexibly reacts to changes in concepts, can reuse previous knowledge in situations when contexts reappear, and is robust to noise in the data BIB009 . Then, algorithms from the AQ family are presented with their modifications. AQ-PM is a static learner that selects extreme examples from rules' boundaries and stores them in the partial memory for each incoming batch of data. AQ11-PM BIB001 is a combination of the incremental AQ11 algorithm with a partial memory mechanism. AQ11-PM+WAH BIB003 extends it with a heuristic for flexibly adjusting the size of the window of stored examples. The FACIL algorithm behaves similarly to AQ11-PM BIB011 . However, it differs in that the examples stored in the partial memory do not have to be extreme ones. Those three main algorithms were not tested on massive datasets. The newest proposal, called VFDR BIB018 , was tested on huge data streams. It induces ordered or unordered sets of decision rules that are efficient in terms of memory and learning times. This paper is organized as follows. The next section presents the basics of rule induction. Section 3 describes the first incremental rule-based learners for the concept drift problem: the FLORA family. Section 4 is devoted to the AQ family algorithms, e.g., AQ11-PM+WAH. Section 5 presents the FACIL algorithm. Section 6 describes the newest algorithms, VFDR and AVFDR. Section 7 concludes this paper.
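Before moving on to the basics of rule induction, the locality argument made above (individual rules can be monitored and discarded independently) can be sketched in a few lines of Python. This is only a schematic illustration of the general idea behind per-rule drift handling as in AVFDR-style pruning; the 50-example minimum and the 0.4 error threshold are arbitrary values chosen for the example, not constants from any of the surveyed algorithms.

    class MonitoredRule:
        """A rule that carries its own error statistics, so it can be pruned locally."""
        def __init__(self, name, error_threshold=0.4, min_seen=50):
            self.name = name
            self.seen = 0
            self.errors = 0
            self.error_threshold = error_threshold
            self.min_seen = min_seen

        def record(self, was_wrong):
            # Called for every stream example that this particular rule covers.
            self.seen += 1
            self.errors += int(was_wrong)

        def looks_outdated(self):
            # A rule is suspected of covering a drifted region once its own error
            # rate, computed over enough covered examples, exceeds the threshold.
            return self.seen >= self.min_seen and self.errors / self.seen > self.error_threshold

    rule_set = [MonitoredRule("r1"), MonitoredRule("r2"), MonitoredRule("r3")]
    # ... each rule records its own mistakes while the stream is classified ...
    rule_set = [r for r in rule_set if not r.looks_outdated()]   # prune only the degraded rules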
Basics of the Rule Induction
A classification problem relates to an exploration of hypotheses describing so-called concepts. The term concept denotes a set of objects with some common characteristics that distinguish it from other concepts. In order to describe similar features, the terms category or class are also used. Hypotheses are results of supervised learning. They are functions which best describe concepts from the supplied learning examples. Generally, hypotheses assign examples to the appropriate category (class). Those functions can be expressed in different forms. One of the most popular methods of knowledge representation is decision rules. There exist many algorithms that induce decision rules. For reviews see BIB005 BIB004 . Most of the existing classifiers extract knowledge from static data. As input they obtain the whole learning set, from which hypotheses are found. The set of learning examples may be represented in several ways; the most common is a decision table. The collection of objects U can be divided with respect to concept C_k into positive examples E^+ and negative examples E^-. Decision rule r for concept C_k is defined as an expression taking the form: if P then Q. P is the conditional part of the rule (premise; antecedent). For the conditional part, the term description item or description can also be used. Q is the decision part of the rule (conclusion; label) indicating affiliation to concept C_k. In the literature, a decision rule can also take the form P → Q. The conditional part P of a rule r is a conjunction of elementary conditions and is represented in the form P = p_1 ∧ p_2 ∧ ... ∧ p_l, where l is the number of conditions, known as the length of the rule. A single elementary condition p_i (selector) is represented as (at_i rel v_i), where at_i is a conditional attribute, v_i is a value from the domain of attribute at_i, and rel is a relation operator from the set of relations {=, ≠, <, ≤, >, ≥, ∈}. Rule r covers an example when the attributes of the example match the rule's conditions. Rules can cover both positive and negative examples. Examples from the learning set that fulfill the conditional part P of rule r are called its coverage and are indicated by [P]. Rule r is discriminant or certain when it covers only positive examples (no negative examples covered). Thanks to this, the rule distinguishes examples belonging to the class indicated by the rule's decision part. A discriminant rule r is minimal if removing one of its selectors results in negative examples being covered. There also exist other types of decision rules, like probabilistic rules. They do not indicate a single category but return probabilities connected with every decision class label. Probability estimation techniques for rule learners are considered in . The problem of finding a minimal set of rules covering the learning examples is NP-complete. Many heuristic algorithms exist that induce decision rules. One of the most popular techniques is sequential covering. In general, it relies on learning a single rule for a given concept, removing the examples covered by the rule, and repeating this process for the other examples from the same concept. Next, rules for other concepts are generated sequentially. The pseudocode of a sequential covering mechanism is presented as Algorithm 1. The function LearnSingleRule (line 5) depends on the algorithm used; sample realizations can be found in BIB002 BIB003 BIB006 BIB001 .
Algorithm 1: Sequential Covering algorithm
Input: U - a set of learning examples; A - conditional attributes
Output: RS - a set of induced rules

In most of these algorithms, the initial candidate for the conditional part of the rule covers the set of all learning examples, including the negative ones. Then the rule is specialized by adding elementary conditions until the acceptance threshold is reached. Candidates for the elementary conditions of a rule are evaluated with respect to different measures, depending on the algorithm. The most commonly used criteria are as follows:
• Maximizing the number of positive examples covered by the conjunction of elementary conditions in P.
• Maximizing the ratio of covered positive examples to the total number of examples covered.
• Minimizing the number of elementary conditions in P, i.e., minimizing the length of the rule.
Other algorithms use the entropy of information to evaluate the conditional part of the rule. It was introduced by Shannon in . The entropy of information of a given learning set S is defined as E(S) = − Σ_{i=1}^{n_c} p_i · log_2 p_i, where p_i is the probability of class C_i in the set of examples S and n_c is the number of different class labels. The entropy is a cost-type measure: the smaller the value is, the better is the conjunction in P. Another important measure for evaluating the dependence of P and Q is the m-estimate proposed by Cestnik in . The m-estimate is defined as (n_p + m · p_i) / (n + m), where n_p is the number of positive examples covered by P, n is the total number of all examples covered by P, p_i is the prior probability of the class C_k, and m is a constant depending on the data. A special case of the m-estimate is the Laplace estimate, defined as (n_p + 1) / (n + n_c), where n_c is the number of different class labels. More about these measures can be found in . One of the first algorithms based on the sequential covering idea is AQ, proposed by Michalski BIB001 . It operates as follows. At the beginning of each iteration, the currently processed decision class is chosen. Next, sets with positive and negative examples are created with respect to the given class label. Then, a seed is selected randomly from the positive examples. In the next step, a star is generated. A star is a set of all rules that cover the seed and do not cover any of the negative examples. Extending the seed against all negative examples is a multistep procedure. While the star covers negative examples, one of them is selected. Then, all maximally general rules that cover the seed and exclude the negative example are found. The resulting set is called a partial star of the seed against the negative example. Next, a new partial star is generated by intersecting the initial star with the partial star of the seed against the negative example. In the end, the new partial star is trimmed if the number of rules exceeds a user-defined threshold, and the new partial star becomes the star. This threshold was introduced in order to limit the search space, which would grow rapidly with the number of negative examples and with the number of attributes. A typical criterion for trimming is the number of positive examples covered. In case of a tie, the minimum number of selectors is preferred. The procedure of star extension is repeated until the star no longer covers any negative examples. After the star is created, the best rule from the star is chosen according to the user-defined criteria. The rule is added to the current set of rules. This mechanism iteratively induces decision rules until all positive examples from the given decision class are covered. The whole process is rerun for every label of the decision class. For details see BIB001 .
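Returning to the rule evaluation measures introduced above, they are simple enough to state directly as code. The helper functions below are a sketch of the reconstructed formulas; the argument names (n_pos for the positive examples covered by P, n_total for all covered examples, prior_pos for the prior probability of the positive class) are our own choices.

from math import log2

def entropy(class_counts):
    """Shannon entropy of a set of examples, given per-class counts."""
    total = sum(class_counts)
    probs = [c / total for c in class_counts if c > 0]
    return -sum(p * log2(p) for p in probs)

def m_estimate(n_pos, n_total, prior_pos, m):
    """Cestnik's m-estimate of the precision of a rule for the positive class."""
    return (n_pos + m * prior_pos) / (n_total + m)

def laplace_estimate(n_pos, n_total, n_classes):
    """Laplace estimate: the m-estimate with m = n_classes and a uniform prior."""
    return (n_pos + 1) / (n_total + n_classes)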
Another algorithm, CN2, proposed in BIB003 , modifies the AQ algorithm in such a way that it removes the dependence on specific examples and increases the space of searched rules. Unlike the AQ-based system, which induces an unordered set of decision rules, CN2 produces an ordered list of if-then rules. CN2 works in an iterative fashion. In each iteration, it searches for a rule that covers a large number of examples of a single class C_k and few of the other classes. When the best rule according to the entropy measure is found, the algorithm removes the covered examples from the training set and adds the rule to the end of the rule list. This process is repeated until no more satisfactory rules can be found. CN2 searches for new rules by performing a general-to-specific search. At each stage, CN2 retains a size-limited set, or star, S of the best rules found so far. The system examines only specializations of this set, performing a beam search of the space of rules. A rule is specialized by either adding a new elementary condition or removing disjunctive values from one of its selectors. Each rule can be specialized in several ways; CN2 generates and evaluates all of them. In the end, star S is trimmed by removing the rules with the lowest ranking values measured by a given evaluation function, the likelihood ratio statistic. For more details see BIB003 . Another representative of the rule-based algorithms is MODLEM, which was originally introduced by Stefanowski in . Generally, it is based on the scheme of sequential covering and it generates an unordered minimal set of rules for every decision concept. It is particularly well-suited for analyzing data containing a mixture of numerical and qualitative attributes, inconsistent descriptions of objects, or missing attribute values. Searching for the best single rule and selecting the best condition is controlled by a criterion based on an entropy measure. For more details see . An induced set of decision rules can be used for the classification of new incoming examples. These new examples were not used during the learning phase. Their description in terms of conditional attributes is known, and the goal is to determine the correct decision class label. Classification of the new examples is based on matching the description of the new object to the conditional part of a decision rule. Two main matching types can be distinguished: full (strict) and partial (flexible) matching. Full matching takes place when all elementary conditions of a rule match the example's attributes. In the case of partial matching, there must exist at least one elementary condition of a rule that does not match the new object's description. The classification strategy is performed in a different way depending on whether the decision rules are sorted to form an ordered list or constitute an unordered set of rules. In the case of an ordered list of decision rules, only the first rule that matches the example is fired, and the label associated with that rule determines the example's class label. When the first rule covering the example is found, the rest of the rules are not visited. When none of the rules match the example, the default rule is used. Generally, the default rule indicates the majority class, i.e., the largest class in the training set.
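Classification with an ordered rule list therefore reduces to firing the first matching rule and falling back to a default, majority-class rule. The minimal sketch below uses a hypothetical Rule class of our own, restricted to equality selectors for brevity; examples are assumed to be dictionaries mapping attribute names to values.

class Rule:
    """if P then Q: a conjunction of attribute tests and a class label."""
    def __init__(self, conditions, label, support=0):
        self.conditions = conditions    # e.g. {"color": "green", "shape": "circle"}
        self.label = label
        self.support = support          # learning examples satisfying both P and Q

    def covers(self, example):
        # Full (strict) matching: every elementary condition must be satisfied.
        return all(example.get(a) == v for a, v in self.conditions.items())

def classify_ordered(rule_list, example, default_label):
    """Ordered rule list: fire the first rule that covers the example."""
    for rule in rule_list:
        if rule.covers(example):
            return rule.label       # remaining rules are not visited
    return default_label            # default rule: majority class of the training set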
In the case of an unordered set of decision rules using full (strict) matching, three situations are possible: a unique match (to one or more rules from the same class); a match to multiple rules from different classes; or no match to any rule at all. In the latter two situations the suggestion is ambiguous, thus a proper resolution strategy is necessary. One of the solutions is the strategy introduced by Grzymala-Busse . It has been successfully applied in many experiments. Generally, it is based on a voting of matching rules with their supports. The total support for class C_k is defined as sup(C_k) = Σ_{i=1}^{n_r} sup(r_i), where r_i is a matched rule that indicates class C_k, n_r is the number of these rules, and sup(r_i) is the number of learning objects satisfying both the condition and decision parts of the rule r_i. A new object is assigned to the class with the highest total support. In the case of no match, so-called partial or flexible matching is considered, where at least one of the rule's conditions is satisfied by the corresponding attributes in the new object's description x. In this case, a matching factor match(r, x) is introduced as the ratio of conditions matched by object x to all conditions in rule r. The total support is modified to sup(C_k) = Σ_{i=1}^{p} match(r_i, x) · sup(r_i), where p is the number of partially-matched rules, and object x is assigned to the class with the highest value of sup(C_k). Another example of a classification strategy is the proposal of Aijun An in BIB002 . It uses a rule quality measure different from rule support, namely a measure of discrimination Q_MD, expressed in terms of probabilities P of covering positive and negative objects of class C_k; for the exact formula, the technical details of estimating the probabilities, and the adjustment of the formula to prevent division by zero, see BIB002 . Its interpretation says that it measures the extent to which rule r discriminates between positive and negative objects of class C_k. The only difference between these two described classification strategies is the choice of a different rule quality measure, i.e., putting Q_MD in place of sup(r_i). Moreover, classification strategies can be adapted to abstain from a class prediction when the final decision is uncertain. This modification can influence the final classification accuracy of an ensemble consisting of rule-based component classifiers. This idea was inspected by Błaszczyński et al. in . Because of the natural and easy form of representation, decision rules can be inspected and interpreted by a human. They are also more comprehensible than many other knowledge representations. Generally, they provide good interpretability and flexibility for data mining tasks. They take advantage of not being hierarchically structured, so hypotheses can be easily updated when becoming out-of-date without a significant decrease in performance. However, they have not received enough attention in mining data streams.
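The voting strategy of Grzymala-Busse described above can be sketched as follows, reusing the hypothetical Rule class from the previous fragment. The support values sup(r_i) are assumed to be stored with each rule, and the matching factor is computed as the fraction of satisfied conditions; this is an illustrative reconstruction of the two formulas, not a reference implementation.

def matching_factor(rule, example):
    """Ratio of satisfied elementary conditions to all conditions of the rule."""
    satisfied = sum(1 for a, v in rule.conditions.items() if example.get(a) == v)
    return satisfied / len(rule.conditions)

def classify_unordered(rules, example):
    """Voting of matching rules with their supports: full, then partial matching."""
    support = {}
    for rule in rules:                          # full (strict) matching
        if rule.covers(example):
            support[rule.label] = support.get(rule.label, 0.0) + rule.support
    if support:
        return max(support, key=support.get)
    for rule in rules:                          # partial (flexible) matching
        factor = matching_factor(rule, example)
        if factor > 0:
            support[rule.label] = support.get(rule.label, 0.0) + factor * rule.support
    return max(support, key=support.get) if support else None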
FLORA
Effective learning in environments with hidden contexts and concept drifts requires a learning algorithm which fulfills certain conditions BIB014 :
• it can detect context changes without being explicitly informed;
• it can quickly recover from a concept change and adjust its hypotheses;
• it can make use of previous descriptions when concepts reappear.
One of the possible solutions is to trust only the latest examples; this is known as the windowing mechanism. The window of examples may be of a fixed or a flexible size. New examples are added to the window as they arrive and the old ones are removed when appropriate conditions are fulfilled. These activities in the window trigger modifications of the current hypotheses in order to keep them consistent with the examples held in the window. This idea is widely used and constitutes the main essence of the FLORA framework proposed in BIB014 . The FLORA framework is restricted to processing data containing only nominal attributes and can only solve the binary classification problem. In the FLORA framework each concept is represented by three sets comprising rules' antecedents: ADES (Accepted DEScriptors), NDES (Negative DEScriptors) and PDES (Potential DEScriptors). ADES contains descriptions covering only positive examples, and NDES only negative examples. PDES consists of descriptions that match both positive and negative examples. ADES is used to classify new incoming examples, while NDES is used to prevent the over-generalization of ADES. PDES acts as a storage for descriptions that might become relevant in the future BIB014 . Every description item has corresponding counters, which indicate how many positive or negative examples from the current window are covered by the given description. The counters are updated with every modification of the learning window (addition or deletion of a learning example). A description item is held in memory as long as it covers at least one example from the window. The simple FLORA framework is presented as Algorithm 2. The FLORA framework operates as follows. When a new positive example is added to the learning window, three situations are possible: a new description item is added to ADES, descriptions existing in ADES are generalized to match the new example, and/or existing items are moved from NDES to PDES. First, the ADES set is tested in order to find a description covering the incoming positive example (lines 3-6). If there does not exist such an item, a generalization of descriptions from ADES is performed (lines 7-8). If there is no covering item in ADES and there does not exist any generalization that matches the example, the example's full description is added to the ADES set (lines 9-10). Then, the PDES set is searched and the counters of positive examples are incremented for the description items that cover the example (lines 11-13). In the end, the NDES set is visited. Descriptions that match the new positive example are moved to PDES and their counters are updated. When the incoming example is negative, the same situations are possible, but with respect to the NDES set. First, the NDES set is tested in order to find a description covering the incoming negative example. If there does not exist such an item, a generalization of descriptions from NDES is performed.
If there is no covering item in NDES and there does not exist any generalization that matches the example, the example's full description is added to the NDES set. Then, the PDES set is searched and the counters of negative examples are incremented for the description items that cover the example. In the end, the ADES set is visited. Descriptions that match the new negative example are moved to PDES and their counters are updated. When an example is deleted from the learning window, the appropriate counters are decreased. This may result in the removal of a description or its migration from PDES to ADES or NDES, depending on the type of example: negative or positive. If the example to be deleted is positive, first the ADES set is visited. The counters of positive examples are decremented for the description items that match the example. If a counter is equal to 0, then the corresponding description is dropped from ADES. Then the PDES set is tested. The counters of positive examples are decremented for the description items that match the example. If a counter equals 0, then the corresponding description is moved from PDES to NDES. If the example to be deleted is negative, first the NDES set is visited. The counters of negative examples are decremented for the description items that match the example. If a counter is equal to 0, then the corresponding description is dropped from NDES (lines 49-53). Then the PDES set is tested. The counters of negative examples are decremented for the description items that match the example. If a counter equals 0, then the corresponding description is moved from PDES to ADES (lines 54-58).
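The counter bookkeeping just described can be summarised in a short sketch. Only the updates for a positive example entering the window and for a positive example leaving it are shown; the negative case is symmetric and attribute-level generalization is left abstract. The DescriptionItem objects, the make_description factory and the try_generalize helper are hypothetical names of our own, and the sketch follows Algorithm 2 only loosely.

def add_positive_example(example, ades, pdes, ndes, make_description, try_generalize):
    """Sketch of the simple FLORA update for a positive example entering the window."""
    if not any(item.covers(example) for item in ades):
        # try_generalize attempts to extend an existing ADES item (dropping-condition
        # operator) so that it covers the example; it returns False if this is impossible.
        if not try_generalize(ades, example):
            ades.append(make_description(example))   # add the example's full description
    for item in ades:
        if item.covers(example):
            item.pos += 1
    for item in pdes:
        if item.covers(example):
            item.pos += 1
    for item in [d for d in ndes if d.covers(example)]:
        ndes.remove(item)                             # it no longer covers only negatives
        item.pos += 1
        pdes.append(item)

def forget_positive_example(example, ades, pdes, ndes):
    """Sketch of the update when a positive example is dropped from the window."""
    for item in [d for d in ades if d.covers(example)]:
        item.pos -= 1
        if item.pos == 0:
            ades.remove(item)                         # no supporting positive example left
    for item in [d for d in pdes if d.covers(example)]:
        item.pos -= 1
        if item.pos == 0:
            pdes.remove(item)
            ndes.append(item)                         # only negative examples remain covered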
Algorithm 2: simple FLORA algorithm
Input: E - incoming example; ADES - a set with accepted descriptors; PDES - a set with potential descriptors; NDES - a set with negative descriptors
Output: ADES, PDES, NDES - modified description sets

The ADES, NDES, and PDES sets are kept non-redundant and consistent with respect to the examples in the window. FLORA does not implement any specialization operator. If a new example cannot be covered by any description or generalization, its full description is added to ADES or NDES, depending on the type of example: positive or negative. The new incoming example acts as a specific seed, which may be generalized in the future. FLORA uses a generalization operator known as the dropping condition rule, which removes attribute-value pairs from a single description item. The simple FLORA framework assumes that only the latest fixed number of examples are relevant and should be kept in the window. However, the question arises of how many examples are sufficient to describe the current concepts. The authors expanded FLORA with a heuristic for flexible windowing in the FLORA2 algorithm. The motivation for this improvement was the effects of an inappropriate window size: too small a window will not contain a sufficient number of examples to describe a stable concept. On the other hand, too large a window will slow down the reaction to a concept drift. A good heuristic for flexible windowing should shrink the window when a concept drift seems to occur and keep the window size fixed when concepts are stable. Otherwise, the window size should grow until the concepts are stabilized. FLORA2's heuristic, called the Window Adjustment Heuristic (WAH), meets the above requirements. The pseudocode of WAH is presented as Algorithm 3. If a concept drift is detected, the WAH decreases the window size by 20% (lines 1-2). In case of extremely stable concepts, the WAH decreases the window size by 1 unit (lines 3-4). If the current concepts seem stable, the window size remains unchanged (lines 5-6). In the remaining case, when the algorithm assumes that more examples are necessary, the window size is increased by 1 unit (lines 7-8). FLORA2 was tested on an artificial learning problem used by Schlimmer and Granger in BIB001 , the STAGGER concepts. The example space is defined by three attributes: size ∈ {small, medium, large}, color ∈ {red, green, blue}, and shape ∈ {square, circle, triangle}. There also exists a sequence of three target concepts: (1) size = small ∧ color = red, (2) color = green ∧ shape = circle, and (3) size = medium ∨ size = large. FLORA's authors randomly generated 120 training examples and labeled them according to some hidden context. After processing each example, the accuracy of the classification was tested on a separate testing set with 100 examples. The concept was changed after every 40 examples. The obtained results showed that after a sudden change the total accuracy suddenly decreases, but FLORA2 quickly adjusts to the new concepts and approaches 100% accuracy. WAH behaves as expected. A sudden change leads to a short increase in window size, followed by a narrowing of the window and the forgetting of irrelevant examples. In many domains, however, old contexts may reappear after some time. In such a case, it would be a waste of time and effort to relearn an old concept from scratch. This was the reason for inventing the FLORA3 algorithm, which introduces a mechanism for storing and recalling previous concepts. The mechanism is tightly associated with the WAH heuristic.
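The window adjustment heuristic summarised above can be expressed compactly. The sketch below follows the behaviour just described (shrink by 20% on a suspected drift, shrink by 1 when the concept is extremely stable, keep the size when it is stable, grow by 1 otherwise); the predicates deciding what counts as a drift or as extreme stability are deliberately left abstract, since Algorithm 3 itself defines them in terms of the coverage and accuracy of ADES.

def adjust_window_size(window_size, drift_suspected, extremely_stable, stable):
    """Window Adjustment Heuristic (WAH) of FLORA2, schematically."""
    if drift_suspected:
        return int(window_size * 0.8)      # reduce the window by 20%
    if extremely_stable:
        return window_size - 1             # slowly forget redundant old examples
    if stable:
        return window_size                 # keep the current size
    return window_size + 1                 # concept not yet stable: gather more examples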
FLORA3 differs from FLORA2 in that after every stage of learning it checks the current state of the hypotheses in order to decide whether some old concept's descriptions are useful. The main idea is that when a change occurs the system should check which descriptions better explain the examples currently in the window: the new concepts or some old ones. On the other hand, during periods of stability it may be worthwhile to store the current descriptions for future reuse. WAH decides when to store or reexamine the old concepts. If WAH signals a drift, the system examines its storage of old descriptions in order to find the one that fits the current state of the learning window. If one is found that is more appropriate than the current description, it replaces the current one. The procedure for reevaluating old concepts consists of three steps. First, the best candidate is found among all stored concepts that are consistent with the current examples in the window. It is the one with the highest ratio of positive to negative examples matched from the learning window. Then, the best candidate's counters are recalculated to reflect the examples from the learning window. In the last step, the updated best candidate is compared with the current concept description on a measure of fitness. In FLORA3 the measure of fitness is estimated by the relative complexity of the descriptions-the more compact the ADES is, the better. To maintain the efficiency of the learning algorithm, the old concepts are not checked after every new training example. They are only retrieved when WAH suspects a concept drift. Moreover, the best candidate is determined by a simple heuristic measure. For more details see BIB002 . FLORA3 was tested on an artificial situation of recurring context. The dataset consisted of three STAGGER concepts repeated three times in cyclic order: 1-2-3-1-2-3-1-2-3. Training and testing examples were created using the same procedure as for FLORA2. Results showed that storing and reusing old concepts leads to a noticeable improvement in reaction time to the reappearing concepts. In most of the cases FLORA3 relearns faster and obtains higher accuracy levels than the simpler FLORA2. Previous versions of FLORA deal with the main types of concept drift and recurring concepts. However, they were not robust to noise. This is one of the difficulties in incremental learning: distinguishing between real concept drift and slight irregularities that may be treated as noise in the data. Methods that react quickly to every sign of change may overreact to noise. This may result in instability and low accuracy of classification. An ideal learner should combine stability and robustness with flexible and effective tracking of concept change BIB002 . That is why FLORA4 replaces the strict consistency condition, inherited from FLORA2 and FLORA3, with a softer notion of reliability. In FLORA4, statistical confidence intervals around the classification accuracy of every description item are calculated. Decisions on when to move descriptions between sets are made based on the relation between these confidence intervals and the observed class frequencies. Transitions among the description sets are as follows BIB002 ; a minimal code sketch of these transition rules is given below. • A description item is kept in ADES if the lower endpoint of its accuracy confidence interval is greater than the class frequency interval's upper endpoint. • A description item from ADES is moved to PDES when its accuracy interval overlaps with the class frequency interval.
• A description item is dropped from ADES if the upper endpoint of its accuracy interval is lower than the class frequency interval's lower endpoint. • Description items in NDES are kept as long as the lower endpoint of their accuracy confidence interval is greater than the upper endpoint of the class frequency interval computed over negative examples in the window. • There is no migration between NDES and PDES. Unacceptable hypotheses from NDES are deleted. The main effect of this strategy is that generalizations in ADES and NDES may cover some negative or positive examples, respectively. PDES acts as a buffer for descriptions that cover too many negative examples or whose absolute number of covered examples is too small. The rest of the algorithm's mechanisms remain unchanged. FLORA4 was also tested on STAGGER concepts and was compared with FLORA3 and FLORA2. In a noise-free environment FLORA4 is initially a bit slower in reacting to a change than its predecessors. However, eventually it reaches a higher accuracy of classification faster than the previous versions. For different amounts of noise FLORA4 is again a bit slower in reacting to a change than its predecessors, but then soon outperforms them. However, the difference in the classification accuracy is greater than for the noise-free data. FLORA4 was also compared with the IB3 algorithm. For more details see BIB002 .
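The transition rules above can be illustrated with a small sketch. The exact confidence interval construction used by FLORA4 is not reproduced here; the normal-approximation interval below is an illustrative assumption, as is restricting the sketch to the ADES/PDES/drop decision for the positive class.

```python
import math

def normal_interval(successes, total, z=1.96):
    """Approximate confidence interval for a proportion (illustrative choice;
    FLORA4 specifies its own interval construction)."""
    if total == 0:
        return (0.0, 1.0)
    p = successes / total
    half = z * math.sqrt(p * (1.0 - p) / total)
    return (max(0.0, p - half), min(1.0, p + half))

def classify_description(pos_covered, neg_covered, pos_in_window, window_size):
    """Decide where a description item belongs (ADES / PDES / drop),
    following the reliability rules sketched above for the positive class."""
    covered = pos_covered + neg_covered
    acc_low, acc_high = normal_interval(pos_covered, covered)          # item accuracy
    freq_low, freq_high = normal_interval(pos_in_window, window_size)  # class frequency
    if acc_low > freq_high:
        return "ADES"   # clearly better than the class frequency baseline
    if acc_high < freq_low:
        return "DROP"   # clearly worse, remove from ADES
    return "PDES"       # intervals overlap, keep as a potential description
```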
Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Algorithm 5: Method for finding extreme examples <s> Agents that learn on-line with partial instance memory reserve some of the previously encountered examples for use in future training episodes. We extend our previous work by combining our method for selecting extreme examples with two incremental learning algorithms, AQ11 and GEM. Using these new systems, AQ11-PM and GEM-PM, and the task computer intrusion detection, we conducted a lesion study to analyze trade-offs in performance. Results showed that, although our partial-memory model decreased predictive accuracy by 2%, it also decreased memory requirements by 75%, learning time by 75%, and in some cases, concept complexity by 10%, an outcome consistent with earlier results using our partial-memory method and batch learning. <s> BIB001 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Algorithm 5: Method for finding extreme examples <s> Learning concepts that change over time is important for a variety of applications in which an intelligent system must acquire and use a behavioral profile. Computer intrusion detection, calendar scheduling, and intelligent user interfaces are three examples. An interesting class of methods for learning such concepts consists of algorithms that maintain a portion of previously encountered examples. Since concepts change over time and these methods store selected examples, mechanisms must exist to identify and remove irrelevant examples of old concepts. In this paper, we describe an incremental rule learner with partial instance memory, called AQ 11 -PM+WAH, that uses Widmer and Kubat's heuristic to adjust dynamically the window over which it retains and forgets examples. We evaluated this learner using the STAGGER concepts and made direct comparisons to AQ-PM and to AQ 11 - PM, similar learners with partial instance memory. Results suggest that the forgetting heuristic is not restricted to FLORA2 the learner for which it was originally designed. Overall, result from this study and others suggest learners with partial instance memory converge more quickly to changing target concepts than algorithms that learn solely from new examples. <s> BIB002
Input: S_i-current set of learning examples; CR-characteristic rules Output: S_ee-a set of extreme examples. Next, the new extreme examples are selected using the strict matching strategy (line 6). The transformed rule is applied on the current training set. The examples that match the edges of the transformed rule using the strict matching strategy are the extreme ones. In the end, the current extreme examples are combined with the previously obtained ones (line 7). AQ-PM is equipped with implicit forgetting-examples from partial memory are forgotten when they no longer force a boundary. AQ-PM was tested on three problems: STAGGER concepts, blasting cap detection and computer intrusion detection. The algorithm was compared with a simpler version of AQ-PM (baseline), with the partial memory mechanism disabled, and with IB2. The STAGGER concepts dataset consisted of 120 examples with sudden changes after every 40 examples. At each time step, a single training example and 100 testing examples were randomly generated. AQ-PM obtained higher results on total accuracy of classification than its opponents. The values of accuracy are comparable to those obtained by the FLORA system. The size of the memory held by AQ-PM was compared with FLORA2's requirements. Over the entire learning phase, FLORA2 kept 15 examples, while AQ-PM maintained on average 6.6 examples in the partial memory. Blasting cap detection and computer intrusion detection were not evaluated by other researchers, so for the results and more details on these problems see . AQ-PM was extended by combining the method for selecting extreme examples with the incremental learning system AQ11. The resulting AQ11-PM algorithm was described in BIB001 . The AQ11 learning system does not operate in batch mode but incrementally generates new rules from the existing rules and new training examples. The standard AQ11 algorithm has no instance memory. It reads each example only once and drops it after the learning phase. AQ11's learning process consists of three main steps. In the first phase, the algorithm searches for difficult examples in the new training set-the ones that are misclassified. If a rule covers a new negative example, then in the second step, the rule is specialized to be consistent using the AQ11 covering algorithm. In the end, the specialized positive rule is combined with the new positive training examples and AQ is used to generalize them as much as possible without intersecting any of the negative rules and without covering any of the new negative examples. AQ11 uses this same procedure to learn rules incrementally for both the positive and negative class. Furthermore, this process can be adjusted to processing multiple classes. In this case, one class is selected and treated as the positive one, while the other labels are treated as negative. The learning process is performed on such partitions. This division is performed for each class present in the new training set. Because AQ11 has no instance memory, it relies solely on its current set of rules. Its rules are complete and consistent with respect to the current examples only. Like every incremental learner, it can be susceptible to the ordering effect. This can be weakened using a partial instance memory. However, certain applications may require an additional mechanism to remove examples from the partial memory when they become too old. AQ11-PM was also tested on three problems: STAGGER concepts, blasting cap detection and computer intrusion detection.
For STAGGER concepts, the algorithm was compared with the unmodified version of AQ11 and with AQ-PM. The STAGGER concepts dataset was the same as the one created for the AQ-PM evaluation. At each time step, the accuracy of classification and the number of examples in partial memory were recorded. AQ11-PM stores more examples than AQ-PM. However, it was able to achieve higher predictive accuracy on all the target concepts than its predecessor. AQ11-PM outperformed FLORA2 on accuracy of classification on the second and third context, but was weaker on the first one. Regarding memory requirements, both of the AQ family algorithms stored fewer examples during the evaluation than FLORA2. Blasting cap detection and computer intrusion detection were not evaluated by other researchers, so for the results and more details on these problems see BIB001 . The AQ11-PM algorithm was combined with FLORA's window adjustment heuristic (Algorithm 3) to dynamically adjust the window over which it retains and forgets examples. This mechanism helps to deal with changing concepts. The proposal was described in BIB002 . AQ11-PM+WAH was evaluated using STAGGER concepts. It was compared on total accuracy of classification and the number of maintained examples with AQ11, AQ11-PM and AQ-PM. The results suggest that the partial-memory classifiers learn faster than simple incremental systems do. AQ11-PM and AQ11-PM+WAH outperformed AQ-PM on all three concepts. Moreover, AQ-PM, AQ11-PM and AQ11-PM+WAH are competitive with FLORA2 in terms of predictive accuracy. In addition, the AQ systems store fewer examples in memory. For more details see BIB002 .
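The core idea behind the partial-memory model of the AQ-PM family, keeping only the examples that force a rule's boundaries, can be sketched as follows. The sketch assumes numeric attributes and rules represented as hyper-rectangles, which is a simplification of AQ-PM's attribute-value descriptions and of its edge-matching procedure.

```python
def is_extreme(example, rule_intervals, tol=1e-9):
    """Return True if the example lies on the boundary of the rule.

    A rule is represented here as a dict mapping attribute index to a
    (lower, upper) interval; an example is a sequence of numeric values.
    Keeping only such boundary ('extreme') examples is the essence of the
    partial-memory model: interior examples can be forgotten without
    changing the induced description.
    """
    on_some_edge = False
    for j, (low, up) in rule_intervals.items():
        v = example[j]
        if v < low - tol or v > up + tol:
            return False            # not covered by the rule at all
        if abs(v - low) <= tol or abs(v - up) <= tol:
            on_some_edge = True     # touches at least one edge of the hyper-rectangle
    return on_some_edge

def select_extreme_examples(examples, rule_intervals):
    """Partial memory: retain only the examples forcing the rule's boundaries."""
    return [e for e in examples if is_extreme(e, rule_intervals)]
```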
Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FACIL <s> Abstract We present ELEM2, a machine learning system that induces classification rules from a set of data based on a heuristic search over a hypothesis space. ELEM2 is distinguished from other rule induction systems in three aspects. First, it uses a new heuristtic function to guide the heuristic search. The function reflects the degree of relevance of an attribute-value pair to a target concept and leads to selection of the most relevant pairs for formulating rules. Second, ELEM2 handles inconsistent training examples by defining an unlearnable region of a concept based on the probability distribution of that concept in the training data. The unlearnable region is used as a stopping criterion for the concept learning process, which resolves conflicts without removing inconsistent examples. Third, ELEM2 employs a new rule quality measure in its post-pruning process to prevent rules from overfitting the data. The rule quality formula measures the extent to which a rule can discriminate between the positive and negative examples of a class. We describe features of ELEM2, its rule induction algorithm and its classification procedure. We report experimental results that compare ELEM2 with C4.5 and CN2 on a number of datasets. <s> BIB001 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FACIL <s> This paper is a survey of inductive rule learning algorithms that use a separate-and-conquer strategy. This strategy can be traced back to the AQ learning system and still enjoys popularity as can be seen from its frequent use in inductive logic programming systems. We will put this wide variety of algorithms into a single framework and analyze them along three different dimensions, namely their search, language and overfitting avoidance biases. <s> BIB002 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FACIL <s> Most of the work in machine learning assume that examples are generated at random according to some stationary probability distribution. In this work we study the problem of learning when the distribution that generate the examples changes over time. We present a method for detection of changes in the probability distribution of examples. The idea behind the drift detection method is to control the online error-rate of the algorithm. The training examples are presented in sequence. When a new training example is available, it is classified using the actual model. Statistical theory guarantees that while the distribution is stationary, the error will decrease. When the distribution changes, the error will increase. The method controls the trace of the online error of the algorithm. For the actual context we define a warning level, and a drift level. A new context is declared, if in a sequence of examples, the error increases reaching the warning level at example k w , and the drift level at example k d . This is an indication of a change in the distribution of the examples. The algorithm learns a new model using only the examples since k w . The method was tested with a set of eight artificial datasets and a real world dataset. We used three learning algorithms: a perceptron, a neural network and a decision tree. The experimental results show a good performance detecting drift and with learning the new concept. We also observe that the method is independent of the learning algorithm. 
<s> BIB003 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FACIL <s> Induction of decision rules within the dominance–based rough set approach to the multicriteria and multiattribute classification is considered. Within this framework, we discuss two algorithms: Glance and an extended version of AllRules. The important characteristics of Glance is that it induces the set of all dominance–based rules in an incremental way. On the other hand, AllRules induces in a non–incremental way the set of all robust rules, i.e. based on objects from the set of learning examples. The main aim of this study is to compare both these algorithms. We experimentally evaluate them on several data sets. The results show that Glance and AllRules are complementary algorithms. The first one works very efficiently on data sets described by a low number of condition attributes and a high number of objects. The other one, conversely, works well on data sets characterized by a high number of attributes and a low number of objects. <s> BIB004 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FACIL <s> Mining data streams is a challenging task that requires online systems ba- sed on incremental learning approaches. This paper describes a classification system based on decision rules that may store up-to-date border examples to avoid unneces- sary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering and inconsistent rules classify them by distance as the nearest neighbour algorithm. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another. <s> BIB005 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FACIL <s> Mining data streams is a challenging task that requires online systems based on incremental learning approaches. This paper describes a classification system based on decision rules that may store up--to--date border examples to avoid unnecessary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering and inconsistent rules classify them by distance as the nearest neighbor algorithm. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another. <s> BIB006 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FACIL <s> Since the beginning of the Internet age and the increased use of ubiquitous computing devices, the large volume and continuous flow of distributed data have imposed new constraints on the design of learning algorithms. Exploring how to extract knowledge structures from evolving and time-changing data, Knowledge Discovery from Data Streams presents a coherent overview of state-of-the-art research in learning from data streams. The book covers the fundamentals that are imperative to understanding data streams and describes important applications, such as TCP/IP traffic, GPS data, sensor networks, and customer click streams. It also addresses several challenges of data mining in the future, when stream mining will be at the core of many applications. These challenges involve designing useful and efficient data mining solutions applicable to real-world problems. In the appendix, the author includes examples of publicly available software and online data sets. 
This practical, up-to-date book focuses on the new requirements of the next generation of data mining. Although the concepts presented in the text are mainly about data streams, they also are valid for different areas of machine learning and data mining. <s> BIB007 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FACIL <s> Decision rules, which can provide good interpretability and flexibility for data mining tasks, have received very little attention in the stream mining community so far. In this work we introduce a new algorithm to learn rule sets, designed for open-ended data streams. The proposed algorithm is able to continuously learn compact ordered and unordered rule sets. The experimental evaluation shows competitive results in comparison with VFDT and C4.5rules. <s> BIB008 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> FACIL <s> Three block based ensembles, AWE, BWE and ACE, are considered in the perspective of learning from data streams with concept drift. AWE updates the ensemble after processing each successive block of incoming examples, while the other ensembles are additionally extended by different drift detectors. Experiments show that these extensions improve classification accuracy, in particular for sudden changes occurring within the block, as well as reduce computational costs. <s> BIB009
Previous solutions were not designed to process high-rate data streams. In this environment classifiers have to operate continuously, processing each item in real time only once. This imposes memory and time limitations. Moreover, real data streams are susceptible to changes of context, so the proposed methods should track and adapt to the underlying modifications. A new incremental algorithm, FACIL, was proposed in BIB005 . FACIL is an acronym of the words Fast and Adaptive Classifier by Incremental Learning. It induces a set of decision rules from numerical data streams. This approach allows a rule to be inconsistent by storing the positive and negative examples covered by it. Those examples lie very near one another-they are border examples. A rule is inconsistent when it covers both positive and negative examples. The aim of this system is to remember border examples until a minimum purity of the rule is reached. The purity of the rule is defined as the ratio between the number of positive examples covered by the rule and the total number of covered examples. When the value of purity falls below the minimum threshold, the examples associated with the rule are used to create new consistent rules. This approach is similar to the AQ11-PM system; however, it differs in that a rule stores two positive examples per negative one. This guarantees that an impure rule is always modified from both positive and negative examples. Nevertheless, the examples held in memory are not necessarily extreme. Although this proposal suffers from the ordering effect, this does not weaken the learning process. The initial proposal of FACIL operates on m numerical attributes. Every learning example is described by a normalized vector in [0, 1]^m and a discrete value of a class label. A decision rule r is given by a set of m closed intervals [I_jl, I_ju], where l stands for the lower bound and u for the upper bound BIB005 . Rules are separated among different sets according to the appropriate class label. FACIL does not maintain any global window; instead, each rule has a different set of associated examples. Each rule has its own window of border examples. Each rule stores the number of positive and negative examples it covers and also an index of the last covered example. The model is updated every time a new example becomes available. The pseudocode of FACIL is presented as Algorithm 6. FACIL operates as follows. When a new example arrives, the rules associated with the example's class label are checked (lines 1-9) and the generalization necessary to describe the new example is calculated (line 2) from the per-attribute quantities g_j = max(x_ij; I_ju) − min(x_ij; I_jl) and r_j = I_ju − I_jl. The measure of growth favors the rule that involves the smallest changes in the minimum number of attributes. A rule with the minimum value of growth becomes a candidate (lines 3-4). However, the rule is taken into account as a possible candidate only if the new example can be seized with a moderate growth (lines 3-4). This occurs when ∀ j ∈ {1..m}: g_j − r_j ≤ κ, where κ ∈ (0; 1] BIB005 . If the first rule covering the new example is found, then the number of positive examples covered by the rule is increased and the rule's last-covered-example index is updated (lines 5-7). The example is added to the rule's window if the number of negative examples covered by the rule increased by one unit (lines 8-9).
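A small sketch of the growth computation and the moderate-growth test just described follows. The per-attribute quantities g_j and r_j follow the definitions in the text; the aggregate score used to rank candidate rules (a sum of per-attribute enlargements) is an illustrative assumption, since the exact ranking formula is not reproduced here.

```python
def per_attribute_growth(rule, x):
    """Compute g_j (extent after covering x) and r_j (current extent) per attribute.
    rule: list of (lower, upper) intervals; x: list of attribute values in [0, 1]."""
    growth = []
    for (low, up), xj in zip(rule, x):
        g_j = max(xj, up) - min(xj, low)   # extent of the interval after seizing x
        r_j = up - low                      # current extent of the interval
        growth.append((g_j, r_j))
    return growth

def moderate_growth(rule, x, kappa):
    """FACIL's moderate-growth condition: for every attribute, g_j - r_j <= kappa."""
    return all(g - r <= kappa for g, r in per_attribute_growth(rule, x))

def pick_candidate(rules, x, kappa):
    """Among same-class rules, pick the one needing the smallest enlargement to
    cover x (the total enlargement used here is an illustrative aggregate)."""
    best, best_score = None, float("inf")
    for rule in rules:
        if not moderate_growth(rule, x, kappa):
            continue
        score = sum(g - r for g, r in per_attribute_growth(rule, x))
        if score < best_score:
            best, best_score = rule, score
    return best
```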
If none of the rules associated with the example's class label fires for the example (line 10), the rest of the rules, with different class labels, are visited. If a rule with a different label does not cover the example, the intersection with the candidate is checked (line 21). If there exists such an intersection, the candidate is rejected (line 24). When the different-labeled rule covers the example (line 12), its negative support is increased (line 13). Additionally, the example is added to the rule's window of examples (line 14). If the purity of the rule drops below the minimum value given by the user (line 15), new consistent rules are created from the examples associated with the initial rule and added to the model. The old rule is marked as unreliable (line 18) and cannot be used in the generalization process, even for rules with different labels. A window
Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> 14 <s> This paper presents and compares two algorithms of machine learning from examples, ID3 and AQ, and one recent algorithm from the same class, called LEM2. All three algorithms are illustrated using the same example. Production rules induced by these algorithms from the well-known Small Soybean Database are presented. Finally, some advantages and disadvantages of these algorithms are shown. <s> BIB001 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> 14 <s> We consider strategies for building classifier ensembles for non-stationary environments where the classification task changes during the operation of the ensemble. Individual classifier models capable of online learning are reviewed. The concept of ”forgetting” is discussed. Online ensembles and strategies suitable for changing environments are summarized. <s> BIB002 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> 14 <s> Mining data streams is a challenging task that requires online systems ba- sed on incremental learning approaches. This paper describes a classification system based on decision rules that may store up-to-date border examples to avoid unneces- sary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering and inconsistent rules classify them by distance as the nearest neighbour algorithm. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another. <s> BIB003 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> 14 <s> Mining data streams is a challenging task that requires online systems based on incremental learning approaches. This paper describes a classification system based on decision rules that may store up--to--date border examples to avoid unnecessary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering and inconsistent rules classify them by distance as the nearest neighbor algorithm. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another. <s> BIB004 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> 14 <s> We address adaptive classification of streaming data in the presence of concept change. An overview of the machine learning approaches reveals a deficit of methods for explicit change detection. Typically, classifier ensembles designed for changing environments do not have a bespoke change detector. Here we take a systematic look at the types of changes in streaming data and at the current approaches and techniques in online classification. Classifier ensembles for change detection are discussed. An example is carried through to illustrate individual and ensemble change detectors for both unlabelled and labelled data. While this paper does not offer ready-made solutions, it outlines possibilities for novel approaches to classification of streaming data. <s> BIB005 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> 14 <s> Data streams are usually characterized by changes in the underlying distribution generating data. Therefore algorithms designed to work with data streams should be able to detect changes and quickly adapt the decision model. 
Rules are one of the most interpretable and flexible models for data mining prediction tasks. In this paper we present the Adaptive Very Fast Decision Rules (AVFDR), an on-line, any-time and one-pass algorithm for learning decision rules in the context of time changing data. AVFDR can learn ordered and unordered rule sets. It is able to adapt the decision model via incremental induction and specialization of rules. Detecting local drifts takes advantage of the modularity of rule sets. In AVFDR, each individual rule monitors the evolution of performance metrics to detect concept drift. AVFDR prunes rules that detect drift. This explicit change detection mechanism provides useful information about the dynamics of the process generating data, faster adaption to changes and generates compact rule sets. The experimental evaluation shows this method is able to learn fast and compact rule sets from evolving streams in comparison to alternative methods. <s> BIB006 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> 14 <s> Decision rules are one of the most interpretable and flexible models for data mining prediction tasks. Till now, few works presented online, any-time and one-pass algorithms for learning decision rules in the stream mining scenario. A quite recent algorithm, the Very Fast Decision Rules (VFDR), learns set of rules, where each rule discriminates one class from all the other. In this work we extend the VFDR algorithm by decomposing a multi-class problem into a set of two-class problems and inducing a set of discriminative rules for each binary problem. The proposed algorithm maintains all properties required when learning from stationary data streams: online and any-time classifiers, processing each example once. Moreover, it is able to learn ordered and unordered rule sets. The new approach is evaluated on various real and artificial datasets. The new algorithm improves the performance of the previous version and is competitive with the state-of-the-art decision tree learning method for data streams. <s> BIB007
add example e to the negative rule's r_n window. If there exists no rule that covers the example and there is no candidate for generalization, then a maximally specific rule describing the new example is added to the appropriate set of rules. Rules can also be deleted from the appropriate sets (line 30). A rule is removed if it is unreliable and has a support smaller than the support of any rule generated from it. The second condition for rule removal is when the number of times the rule prevented generalization of a different-label rule is greater than its support. FACIL is also equipped with a forgetting mechanism for dropping learning examples (line 31). This mechanism can be either explicit or implicit. Examples which are older than a user-defined threshold are deleted-this is explicit forgetting. Implicit forgetting takes place when examples are no longer relevant-they no longer lie on any rule's boundary. Like every rule-based classifier, FACIL is supplemented with a classification strategy. A new test example is classified by the rules that cover it. Unreliable rules that cover the example are rejected. Reliable rules are used to classify the test example. Consistent rules classify new examples by strict matching. Inconsistent rules act like the nearest neighbor algorithm and classify the new example by its distance. The authors do not explain how exactly this is performed. Probably, they calculate the Euclidean distance between the test example and the rule's boundaries. In the case when no rule covers the example, it is classified with the label associated with the reliable rule with the minimal value of growth and an empty intersection with any other different-label rules. The initial version of FACIL was evaluated on 12 real datasets from the UCI repository and on a synthetic data stream generated from a moving hyperplane. In case of the real data, concept drift is not present. During the experiments the total accuracy of classification, the learning time and the number of induced rules were recorded. FACIL was compared with the C4.5Rules algorithm. In half of the real problems, FACIL obtains better results on classification accuracy. Because FACIL is a single-pass solution, the processing time is always significantly shorter than for the multi-pass C4.5Rules. For the hyperplane data stream, the authors evaluated the computational cost as a function of the number of attributes. FACIL was not compared with any other existing stream mining solution. For detailed results see BIB003 . The initial version of FACIL was extended to process symbolic attributes in BIB004 . The formula for calculating the growth of a rule was changed so as to handle nominal attributes: Growth(r, x) = Σ_{j=1}^{m} ∆(T_j, x_j), where for numeric attributes ∆(T_j, x_j) = min(|I_jl − x_j|; |x_j − I_ju|) and for nominal attributes ∆(T_j, x_j) = 0 if the example's attribute value x_j is covered by the rule, and in the opposite case- The extension of FACIL was tested on a moving hyperplane problem. Again, the authors focused on evaluating the computational cost as a function of the number of attributes. The total accuracy of classification drops with the number of attributes. The processing time increases with the growth of the hyperplane problem. For detailed results see BIB004 .
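The per-attribute distance ∆ and the distance-based classification performed by inconsistent rules can be sketched as follows. The numeric case follows the formula quoted above; the value used for an uncovered nominal value and the aggregation of per-attribute distances into a single rule distance are illustrative assumptions.

```python
def delta(condition, x_j):
    """Per-attribute distance Delta(T_j, x_j) used in the extended Growth formula.
    Numeric attribute: condition is a (lower, upper) interval and the distance is
    min(|lower - x_j|, |x_j - upper|), as in the formula above. Nominal attribute:
    condition is the set of covered values; 0 if covered, otherwise 1 (assumption,
    since the uncovered case is truncated in the text)."""
    if isinstance(condition, tuple):
        low, up = condition
        return min(abs(low - x_j), abs(x_j - up))
    return 0.0 if x_j in condition else 1.0

def rule_distance(rule, x):
    """Aggregate distance of example x to a rule, one Delta term per attribute."""
    return sum(delta(t_j, x_j) for t_j, x_j in zip(rule["conditions"], x))

def classify_by_distance(inconsistent_rules, x):
    """Nearest-rule classification, one way the distance-based strategy for
    inconsistent rules could be realized."""
    best = min(inconsistent_rules, key=lambda r: rule_distance(r, x))
    return best["label"]
```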
Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> Abstract The decision tree output of Quinlan's ID3 algorithm is one of its major weaknesses. Not only can it be incomprehensible and difficult to manipulate, but its use in expert systems frequently demands irrelevant information to be supplied. This report argues that the problem lies in the induction algorithm itself and can only be remedied by radically altering the underlying strategy. It describes a new algorithm, PRISM which, although based on ID3, uses a different induction strategy to induce rules which are modular, thus avoiding many of the problems associated with decision trees. <s> BIB001 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> The CN2 algorithm induces an ordered list of classification rules from examples using entropy as its search heuristic. In this short paper, we describe two improvements to this algorithm. Firstly, we present the use of the Laplacian error estimate as an alternative evaluation function and secondly, we show how unordered as well as ordered rules can be generated. We experimentally demonstrate significantly improved performances resulting from these changes, thus enhancing the usefulness of CN2 as an inductive tool. Comparisons with Quinlan's C4.5 are also made. <s> BIB002 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> Abstract We present ELEM2, a machine learning system that induces classification rules from a set of data based on a heuristic search over a hypothesis space. ELEM2 is distinguished from other rule induction systems in three aspects. First, it uses a new heuristtic function to guide the heuristic search. The function reflects the degree of relevance of an attribute-value pair to a target concept and leads to selection of the most relevant pairs for formulating rules. Second, ELEM2 handles inconsistent training examples by defining an unlearnable region of a concept based on the probability distribution of that concept in the training data. The unlearnable region is used as a stopping criterion for the concept learning process, which resolves conflicts without removing inconsistent examples. Third, ELEM2 employs a new rule quality measure in its post-pruning process to prevent rules from overfitting the data. The rule quality formula measures the extent to which a rule can discriminate between the positive and negative examples of a class. We describe features of ELEM2, its rule induction algorithm and its classification procedure. We report experimental results that compare ELEM2 with C4.5 and CN2 on a number of datasets. <s> BIB003 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> Systems for inducing concept descriptions from examples are valuable tools for assisting in the task of knowledge acquisition for expert systems. This paper presents a description and empirical evaluation of a new induction system, CN2, designed for the efficient induction of simple, comprehensible production rules in domains where problems of poor description language and/or noise may be present. Implementations of the CN2, ID3, and AQ algorithms are compared on three medical classification tasks. <s> BIB004 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> This paper is a survey of inductive rule learning algorithms that use a separate-and-conquer strategy. 
This strategy can be traced back to the AQ learning system and still enjoys popularity as can be seen from its frequent use in inductive logic programming systems. We will put this wide variety of algorithms into a single framework and analyze them along three different dimensions, namely their search, language and overfitting avoidance biases. <s> BIB005 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> An emerging problem in Data Streams is the detection of concept drift. This problem is aggravated when the drift is gradual over time. In this work we deflne a method for detecting concept drift, even in the case of slow gradual change. It is based on the estimated distribution of the distances between classiflcation errors. The proposed method can be used with any learning algorithm in two ways: using it as a wrapper of a batch learning algorithm or implementing it inside an incremental and online algorithm. The experimentation results compare our method (EDDM) with a similar one (DDM). Latter uses the error-rate instead of distance-error-rate. <s> BIB006 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> Mining data streams is a challenging task that requires online systems ba- sed on incremental learning approaches. This paper describes a classification system based on decision rules that may store up-to-date border examples to avoid unneces- sary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering and inconsistent rules classify them by distance as the nearest neighbour algorithm. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another. <s> BIB007 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> Mining data streams is a challenging task that requires online systems based on incremental learning approaches. This paper describes a classification system based on decision rules that may store up--to--date border examples to avoid unnecessary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering and inconsistent rules classify them by distance as the nearest neighbor algorithm. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another. <s> BIB008 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> Sales prediction is an important problem for different companies involved in manufacturing, logistics, marketing, wholesaling and retailing. Different approaches have been suggested for food sales forecasting. Several researchers, including the authors of this paper, reported on the advantage of one type of technique over the others for a particular set of products. In this paper we demonstrate that besides an already recognized challenge of building accurate predictive models, the evaluation procedures themselves should be considered more carefully. We give illustrative examples to show that e.g. popular MAE and MSE estimates can be intuitive with one type of product and rather misleading with the others. Furthermore, averaging errors across differently behaving products can be also counter intuitive. 
We introduce new ways to evaluate the performance of wholesales prediction and discuss their biases with respect to different error types. <s> BIB009 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> Since the beginning of the Internet age and the increased use of ubiquitous computing devices, the large volume and continuous flow of distributed data have imposed new constraints on the design of learning algorithms. Exploring how to extract knowledge structures from evolving and time-changing data, Knowledge Discovery from Data Streams presents a coherent overview of state-of-the-art research in learning from data streams. The book covers the fundamentals that are imperative to understanding data streams and describes important applications, such as TCP/IP traffic, GPS data, sensor networks, and customer click streams. It also addresses several challenges of data mining in the future, when stream mining will be at the core of many applications. These challenges involve designing useful and efficient data mining solutions applicable to real-world problems. In the appendix, the author includes examples of publicly available software and online data sets. This practical, up-to-date book focuses on the new requirements of the next generation of data mining. Although the concepts presented in the text are mainly about data streams, they also are valid for different areas of machine learning and data mining. <s> BIB010 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> Decision rules, which can provide good interpretability and flexibility for data mining tasks, have received very little attention in the stream mining community so far. In this work we introduce a new algorithm to learn rule sets, designed for open-ended data streams. The proposed algorithm is able to continuously learn compact ordered and unordered rule sets. The experimental evaluation shows competitive results in comparison with VFDT and C4.5rules. <s> BIB011 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> This paper presents a new framework for dealing with two main types of concept drift: sudden and gradual drift in labelled data with decision attribute. The learning examples are processed in batches of the same size. This new framework, called Batch Weighted Ensemble, is based on incorporating drift detector into the evolving ensemble. Its performance was evaluated experimentaly on data sets with different types of concept drift and compared with the performance of a standard Accuracy Weighted Ensemble classifier. The results show that BWE improves evaluation measures like processing time, memory used and obtain competitive total accuracy. <s> BIB012 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> VFDR <s> Three block based ensembles, AWE, BWE and ACE, are considered in the perspective of learning from data streams with concept drift. AWE updates the ensemble after processing each successive block of incoming examples, while the other ensembles are additionally extended by different drift detectors. Experiments show that these extensions improve classification accuracy, in particular for sudden changes occurring within the block, as well as reduce computational costs. <s> BIB013
The Very Fast Decision Rules (VFDR) algorithm proposed by Gama and Kosina in BIB011 was also designed for high-speed massive data streams. It reads every learning example only once and induces an ordered or an unordered list of rules. VFDR enables processing both nominal and numeric attributes. The algorithm starts with an empty rule set RS and an empty default rule {} → L. L is a data structure that contains the information necessary for classifying new examples and includes the statistics used for extending the rule. Each rule r is associated with the corresponding data structure L_r. Every L_r (also L) stores: the number of examples covered by rule r, a vector to calculate the probability of observing examples of class c_i, a matrix to calculate the probability of observing value v_i of a nominal attribute at_i per class, and a b-tree to compute the probability per class of observing values greater than v_j for a numerical attribute at_i BIB011 . In general, L_r accumulates sufficient statistics to compute the entropy for every label of a decision class. L_r is updated when its corresponding rule covers a labeled example. The pseudocode of VFDR is presented as Algorithm 7. VFDR operates as follows. When a new learning example e is available, all decision rules are visited. If rule r covers example e (line 2), its corresponding structure L_r is updated (line 3). The Hoeffding bound determines the number of examples after which a rule set RS should be updated, either by extending some existing rule or by inducing a new rule (line 4). The Hoeffding bound guarantees that with probability 1 − δ the true mean of a random variable x with range R will not differ from the estimated mean after n independent observations by more than ε = √(R² ln(1/δ) / (2n)) BIB011 . In the next step, the initial value of the entropy is calculated from the statistics gathered in L_r (line 5). If the value of entropy exceeds the Hoeffding bound, then the rule should be enhanced (line 7). The rule is extended as follows. For each attribute, and for each of this attribute's values that were observed in more than 10% of the examples, the value of the split evaluation function is computed. If the value of this measure for the best split is better than that for not splitting, the rule is extended with a new selector obtained from the best split. The selector that minimizes the entropy of the class labels of the examples covered by the rule is added to the previous elementary conditions of the rule. The class label of the rule is then assigned according to the majority class of observations. VFDR can learn an ordered or unordered set of decision rules. In the former case, every labeled example updates the statistics of the first rule that covers it (lines 14-15). For the latter, every rule that covers the example is updated. Those sets of rules are learned in parallel. In the case when none of the rules covers example e (line 16), the default rule is updated (line 17). Then, if the number of examples in L exceeds the minimum number of examples obtained from the Hoeffding bound, new decision rules are induced from the default rule, using the same rule-growth mechanism as described earlier. VFDR, like every rule-based classifier, is equipped with a classification strategy.
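Before turning to the classification strategies, the Hoeffding-bound test that gates rule expansion can be sketched as a small helper. The way the bound is compared against the difference of entropies below is a simplified illustration of the decision described above.

```python
import math

def hoeffding_bound(value_range, delta, n):
    """epsilon = sqrt(R^2 * ln(1/delta) / (2n)): with probability 1 - delta the
    observed mean of n i.i.d. observations is within epsilon of the true mean."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

def should_expand(entropy_no_split, entropy_best_split, delta, n, value_range=1.0):
    """Expand the rule only when the best split is better than not splitting by
    more than the Hoeffding bound (a simplified version of the test in line 7)."""
    epsilon = hoeffding_bound(value_range, delta, n)
    return (entropy_no_split - entropy_best_split) > epsilon

# Example: after 300 covered examples, entropy 0.90 without a split vs. 0.62 with
# the best candidate selector, at delta = 1e-4:
# should_expand(0.90, 0.62, 1e-4, 300)  -> True, since the gain exceeds epsilon
```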
The simplest strategy uses the stored distribution of classes-an example is classified to the class with the maximum value of probability. A more sophisticated strategy is based on the Bayes rule.
Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Algorithm 7: Very Fast Decision Rules algorithm <s> Decision rules, which can provide good interpretability and flexibility for data mining tasks, have received very little attention in the stream mining community so far. In this work we introduce a new algorithm to learn rule sets, designed for open-ended data streams. The proposed algorithm is able to continuously learn compact ordered and unordered rule sets. The experimental evaluation shows competitive results in comparison with VFDT and C4.5rules. <s> BIB001 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Algorithm 7: Very Fast Decision Rules algorithm <s> Decision rules are one of the most interpretable and flexible models for data mining prediction tasks. Till now, few works presented online, any-time and one-pass algorithms for learning decision rules in the stream mining scenario. A quite recent algorithm, the Very Fast Decision Rules (VFDR), learns set of rules, where each rule discriminates one class from all the other. In this work we extend the VFDR algorithm by decomposing a multi-class problem into a set of two-class problems and inducing a set of discriminative rules for each binary problem. The proposed algorithm maintains all properties required when learning from stationary data streams: online and any-time classifiers, processing each example once. Moreover, it is able to learn ordered and unordered rule sets. The new approach is evaluated on various real and artificial datasets. The new algorithm improves the performance of the previous version and is competitive with the state-of-the-art decision tree learning method for data streams. <s> BIB002 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Algorithm 7: Very Fast Decision Rules algorithm <s> Data streams are usually characterized by changes in the underlying distribution generating data. Therefore algorithms designed to work with data streams should be able to detect changes and quickly adapt the decision model. Rules are one of the most interpretable and flexible models for data mining prediction tasks. In this paper we present the Adaptive Very Fast Decision Rules (AVFDR), an on-line, any-time and one-pass algorithm for learning decision rules in the context of time changing data. AVFDR can learn ordered and unordered rule sets. It is able to adapt the decision model via incremental induction and specialization of rules. Detecting local drifts takes advantage of the modularity of rule sets. In AVFDR, each individual rule monitors the evolution of performance metrics to detect concept drift. AVFDR prunes rules that detect drift. This explicit change detection mechanism provides useful information about the dynamics of the process generating data, faster adaption to changes and generates compact rule sets. The experimental evaluation shows this method is able to learn fast and compact rule sets from evolving streams in comparison to alternative methods. <s> BIB003
Input: e-a new learning example; RS-current set of rules; ordered-flag indicating induction of an ordered set of rules; S_min-a minimum number of examples from the Hoeffding bound; δ-threshold for the probability used in the Hoeffding bound; SEF-a split evaluation function; Output: RS-modified set of decision rules; 1 foreach rule r ∈ RS do ... 20 Return RS. The Naive Bayes strategy assumes attribute independence given the class and uses the prior distribution of the classes together with the conditional probabilities of the attribute-value pairs given the class. As a result, for each testing example e = (v_1, ..., v_j), the probability that example e belongs to decision class c_k is P(c_k|e) ∝ P(c_k) ∏_j P(v_j|c_k) BIB001 . Thanks to this strategy, more of the information available with each rule is exploited. An example is classified to the class with the maximum value of the posterior probability. In the case of the ordered set of rules, only the first rule that covers an example is fired. Using the unordered set of rules, the results returned by all rules that match the example are combined using weighted voting. This type of voting assumes that not all voters are equal. Instead, they are diversified by assigning them different weights. The authors did not provide information on how weights are assigned to each of the decision rules. VFDR was tested on six different data streams: disjunctive concepts, hyperplane, LED, SEA, STAGGER, and Waveform. The authors tested two different classification strategies. Usage of the Bayes theorem improves the predictive capabilities of the algorithm. The authors also compared an ordered versus an unordered set of rules. The experimental evaluation showed that the unordered rule set is more competitive than the ordered one with respect to the accuracy of classification. In the end, VFDR (with the Bayes classification strategy and an unordered set of rules) was compared with VFDT and C4.5Rules. VFDR is much more efficient than C4.5Rules in terms of memory and processing time. It also obtained competitive results against VFDT. For more details see BIB001 . The initial version of VFDR was extended to deal with multi-class problems in BIB002 . The proposed algorithm, VFDR-MC, decomposes a multi-class problem into a set of two-class problems and induces a set of discriminative rules for each binary problem. VFDR-MC applies a one-versus-all strategy in which the examples of one class are positive and the others are negative. It considers a rule expansion for each of the classes observed with the current rule. The expansion of a rule is different for the default rule and for an already existing rule. It also depends on the type of the generated rule set: ordered or unordered. The default rule is expanded to a new rule with a literal for which a gain function, adopted from the FOIL classifier, obtains the best value. The rule's decision class is indicated by the class with minimum frequency among those that satisfy the Hoeffding bound condition. The ordered strategy stops after finding the first candidate rule that is better than the previous one. The unordered strategy checks all possible expansions for every decision class. In the case of extending a rule that already exists, the procedure also depends on whether an ordered or unordered set of decision rules is induced. In the case of an ordered set, only literals for the positive class are tested. For an unordered set of decision rules, the class of the expanded rule is maintained as positive for the first calculations of the gain measure.
Next, computations with the other classes set as the positive one are performed. This allows producing more than one rule, but not always for all the available decision classes. VFDR-MC was tested on six different data streams: KDDCup99, covtype, hyperplane, SEA, LED, and Random Tree. The authors observed that the unordered version generally obtained better classification accuracy than the ordered one. Moreover, unordered VFDR-MC mostly outperforms the base version of VFDR on multi-class data sets. The learning time for the ordered rule set is almost the same as for the creation of the Hoeffding Tree. In the case of an unordered set of decision rules, the learning time grows with the number of rules. For more details see BIB002 . VFDR was also improved in order to handle time-changing data. The resulting algorithm, Adaptive Very Fast Decision Rules (AVFDR), was described in BIB003 . AVFDR extends VFDR-MC with explicit drift detection. Each rule in the set of decision rules is equipped with a drift detection method, which tracks the performance of the rule during learning. The applied drift detector is presented as Algorithm 8. For every learning example covered by the rule, the rule updates its error of classification. Moreover, the drift detector manages two additional statistics: error_min and stddev_min. Those registers are updated if, for a given learning example e, error_e + stddev_e < error_min + stddev_min. The flag indicating the type of change for a given rule can take one of three values: None, Warning or Drift. If the rule reaches the warning level, then the rule's learning process is stopped until the flag is set to None again. In the case of the Drift level, the rule is so weak that it is removed from the set of decision rules. This helps to keep the final set of decision rules effective and up-to-date. For more details see BIB003 .
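The one-versus-all decomposition used by VFDR-MC, described above, can be sketched as follows; the representation of examples and class labels is only illustrative.

```python
def one_vs_all_views(example, label, classes):
    """Decompose a multi-class example into one binary view per class, as in the
    one-versus-all strategy of VFDR-MC: the chosen class is 'positive',
    all remaining classes are collapsed into 'negative'."""
    views = {}
    for c in classes:
        views[c] = (example, "positive" if label == c else "negative")
    return views

# Example: a stream example labeled 'circle' with classes {'circle', 'square', 'triangle'}
# yields three binary views; discriminative rules are then induced per view.
# one_vs_all_views(x, "circle", ["circle", "square", "triangle"])
```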
Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Algorithm 8: AVFDR Drift Detection Method <s> This paper presents a new framework for dealing with two main types of concept drift: sudden and gradual drift in labelled data with decision attribute. The learning examples are processed in batches of the same size. This new framework, called Batch Weighted Ensemble, is based on incorporating drift detector into the evolving ensemble. Its performance was evaluated experimentaly on data sets with different types of concept drift and compared with the performance of a standard Accuracy Weighted Ensemble classifier. The results show that BWE improves evaluation measures like processing time, memory used and obtain competitive total accuracy. <s> BIB001 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Algorithm 8: AVFDR Drift Detection Method <s> Data streams are usually characterized by changes in the underlying distribution generating data. Therefore algorithms designed to work with data streams should be able to detect changes and quickly adapt the decision model. Rules are one of the most interpretable and flexible models for data mining prediction tasks. In this paper we present the Adaptive Very Fast Decision Rules (AVFDR), an on-line, any-time and one-pass algorithm for learning decision rules in the context of time changing data. AVFDR can learn ordered and unordered rule sets. It is able to adapt the decision model via incremental induction and specialization of rules. Detecting local drifts takes advantage of the modularity of rule sets. In AVFDR, each individual rule monitors the evolution of performance metrics to detect concept drift. AVFDR prunes rules that detect drift. This explicit change detection mechanism provides useful information about the dynamics of the process generating data, faster adaption to changes and generates compact rule sets. The experimental evaluation shows this method is able to learn fast and compact rule sets from evolving streams in comparison to alternative methods. <s> BIB002
Input: r - tested decision rule; e - current learning example;
Output: flag ∈ {None, Warning, Drift} - flag indicating the type of change
1 flag = None;
2 compute the classification error error_e for the given learning example e together with its standard deviation stddev_e;
3 if (error_e + stddev_e) < (error_min + stddev_min) then
4   error_min = error_e;
5   stddev_min = stddev_e;
6 if (error_e + stddev_e) ≥ (error_min + 3 * stddev_min) then
7   flag = Drift;
8 else if (error_e + stddev_e) ≥ (error_min + 2 * stddev_min) then
9   flag = Warning;
10 return flag
AVFDR was tested on five artificial data streams: Hyperplane, SEA, LED, RBF, and Waveform, and six real datasets: KDDCup99, Covtype, Elec, Airlines, Connect-4, and Activity. The results obtained on artificial data show that AVFDR works best in changing environments, while the classification accuracy of VFDR's base version decreases over time. On the real datasets, the unordered variant AVFDR_u obtains competitive classification accuracy with a smaller induced model. For more details see BIB002 .
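The detector itself is straightforward to implement. The sketch below is a minimal Python rendering of Algorithm 8, assuming that the per-rule error and its standard deviation are estimated from running counts of covered and misclassified examples (binomial estimate), and that a minimum number of covered examples is required before signaling, as is common for DDM-style detectors; the exact bookkeeping in AVFDR may differ, and the class and method names are illustrative only.

```python
import math

class RuleDriftDetector:
    """Per-rule drift detector in the spirit of Algorithm 8 (AVFDR).

    Tracks the rule's running classification error and flags Warning/Drift
    when the error deviates from the best (minimal) level seen so far.
    """

    def __init__(self, min_examples=30):
        self.min_examples = min_examples  # guard before signaling (assumed, DDM-style)
        self.covered = 0                  # examples covered by the rule
        self.mistakes = 0                 # misclassified covered examples
        self.error_min = float("inf")
        self.stddev_min = float("inf")

    def update(self, correctly_classified):
        """Update the statistics with one covered example and return the flag."""
        self.covered += 1
        if not correctly_classified:
            self.mistakes += 1

        # Running error estimate and its standard deviation (binomial assumption).
        error_e = self.mistakes / self.covered
        stddev_e = math.sqrt(error_e * (1.0 - error_e) / self.covered)

        # Register the best (minimal) error level observed so far.
        if error_e + stddev_e < self.error_min + self.stddev_min:
            self.error_min = error_e
            self.stddev_min = stddev_e

        # Do not signal until the rule has covered enough examples.
        if self.covered < self.min_examples:
            return "None"

        # Flag the rule according to how far the current error drifted.
        if error_e + stddev_e >= self.error_min + 3.0 * self.stddev_min:
            return "Drift"      # the rule should be removed from the rule set
        if error_e + stddev_e >= self.error_min + 2.0 * self.stddev_min:
            return "Warning"    # the rule's learning is suspended
        return "None"
```

In AVFDR the returned flag drives the maintenance of the rule set: a rule at the Warning level stops being refined until its flag drops back to None, and a rule at the Drift level is pruned, which keeps the model compact and up to date.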
Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Conclusions <s> Agents that learn on-line with partial instance memory reserve some of the previously encountered examples for use in future training episodes. We extend our previous work by combining our method for selecting extreme examples with two incremental learning algorithms, AQ11 and GEM. Using these new systems, AQ11-PM and GEM-PM, and the task computer intrusion detection, we conducted a lesion study to analyze trade-offs in performance. Results showed that, although our partial-memory model decreased predictive accuracy by 2%, it also decreased memory requirements by 75%, learning time by 75%, and in some cases, concept complexity by 10%, an outcome consistent with earlier results using our partial-memory method and batch learning. <s> BIB001 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Conclusions <s> Learning concepts that change over time is important for a variety of applications in which an intelligent system must acquire and use a behavioral profile. Computer intrusion detection, calendar scheduling, and intelligent user interfaces are three examples. An interesting class of methods for learning such concepts consists of algorithms that maintain a portion of previously encountered examples. Since concepts change over time and these methods store selected examples, mechanisms must exist to identify and remove irrelevant examples of old concepts. In this paper, we describe an incremental rule learner with partial instance memory, called AQ 11 -PM+WAH, that uses Widmer and Kubat's heuristic to adjust dynamically the window over which it retains and forgets examples. We evaluated this learner using the STAGGER concepts and made direct comparisons to AQ-PM and to AQ 11 - PM, similar learners with partial instance memory. Results suggest that the forgetting heuristic is not restricted to FLORA2 the learner for which it was originally designed. Overall, result from this study and others suggest learners with partial instance memory converge more quickly to changing target concepts than algorithms that learn solely from new examples. <s> BIB002 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Conclusions <s> On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift. <s> BIB003 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Conclusions <s> Mining data streams is a challenging task that requires online systems ba- sed on incremental learning approaches. 
This paper describes a classification system based on decision rules that may store up-to-date border examples to avoid unnecessary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering and inconsistent rules classify them by distance as the nearest neighbour algorithm. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another. <s> BIB004 </s> Incremental Rule-Based Learners for Handling Concept Drift: An Overview <s> Conclusions <s> Decision rules, which can provide good interpretability and flexibility for data mining tasks, have received very little attention in the stream mining community so far. In this work we introduce a new algorithm to learn rule sets, designed for open-ended data streams. The proposed algorithm is able to continuously learn compact ordered and unordered rule sets. The experimental evaluation shows competitive results in comparison with VFDT and C4.5rules. <s> BIB005
Mining data streams has recently become a very popular topic of research. Data streams are susceptible to changes in the hidden context, producing what is generally known as concept drift. There are two main types of concept drift: sudden and gradual. However, there are also other types, such as recurring context, and two cases to which a good classifier should be resistant: blips and noise. Learning from non-stationary environments is a rather new discipline, but there already exist algorithms that attempt to solve this problem. They can be divided into two main groups: trigger-based and evolving methods. In this paper, four key rule-based online algorithms proposed for mining data streams in the presence of concept drift were presented. First, FLORA was described: the first family of algorithms that flexibly react to changes in concepts, can reuse previous hypotheses when a context reappears, and are robust to noise in data BIB003 . Then, algorithms from the AQ family were presented together with their modifications. AQ-PM is a static learner that selects extreme examples from rules' boundaries for each incoming batch of data and stores them in the partial memory. AQ11-PM BIB001 combines the incremental AQ11 algorithm with a partial memory mechanism. AQ11-PM+WAH BIB002 extends it with a heuristic that flexibly adjusts the size of the window of stored examples. The FACIL algorithm operates similarly to AQ11-PM BIB004 . However, it differs in that the examples stored in the partial memory do not have to be extreme ones. Those three main algorithms were not tested on huge datasets. For massive high-speed data streams, a new algorithm called VFDR was proposed in BIB005 . It induces an ordered or an unordered set of decision rules that is efficient in terms of memory and processing time. These solutions use the same representation of knowledge, namely decision rules, but they operate in different ways. The four algorithms can be compared on several criteria, such as the type of data they handle. FLORA is restricted to nominal attributes, whereas AQ11-PM+WAH, FACIL and VFDR process both nominal and numerical attributes. On the other hand, FLORA, AQ11-PM+WAH and FACIL are designed to deal with concept drift, whereas VFDR is suitable only for stationary environments. Moreover, FLORA was designed and tested on different types of concept drift: sudden, recurring, and noise. Unfortunately, the first three solutions were not tested on massive data streams with concept drift. Two of them (FLORA and AQ11-PM+WAH) were tested on the STAGGER concepts with 120 learning examples, whereas FACIL was evaluated on the moving hyperplane problem. FLORA and AQ11-PM+WAH solve the binary classification problem, although the latter can be extended to the multi-class setting. FACIL and VFDR have no restrictions on the number of decision classes. The four proposals also differ in the type of memory that they maintain. FLORA remembers only a window of the most recent examples. AQ11-PM+WAH keeps a partial memory with extreme examples that lie on the boundaries of induced decision rules. Additionally, the WAH heuristic introduces a global learning window outside which old examples are forgotten. FACIL also maintains a partial memory, but the stored examples do not have to be extreme ones. Every decision rule has its own window of learning examples. Moreover, FACIL remembers more examples than its predecessor (it stores two positive examples per negative one).
On the other hand, VFDR has no instance memory; it only maintains a set of decision rules with their corresponding data structures L_r containing all the necessary statistics. Knowledge is also represented in different ways. FLORA stores the conditional parts of rules in three description sets: ADES, PDES, and NDES. AQ11-PM+WAH induces a classical unordered set of decision rules. In FACIL, rules consist of all conditional attributes, which define an m-dimensional space of intervals. VFDR is the only algorithm that can induce either an unordered or an ordered set of decision rules, and its rules are kept as short as possible. Another criterion that distinguishes the four described algorithms is the way the induced decision rules are used to classify new examples. Moreover, all algorithms were evaluated in different setups and on different data sets, so the reported results cannot be compared with each other. It is therefore difficult to state which of the described algorithms is the best: they were introduced at different times and tested on different data sets. It would be interesting to compare these solutions on many data streams containing different types of concept drift with respect to the total classification accuracy, the memory usage and the processing time. Nowadays the MOA environment, a framework for data stream mining, is very helpful for this purpose. It contains a collection of machine learning algorithms, data generators and tools for evaluation. More about this project can be found in the literature and on the MOA project website 2 . MOA can be easily extended not only with new mining algorithms, but also with new stream generators or evaluation measures. Unfortunately, the implementations of FLORA, AQ11-PM+WAH, FACIL, and VFDR are not publicly available, hindering such a comparison at present.
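To make the suggested comparison concrete, the following framework-agnostic Python sketch shows how such a prequential (test-then-train) evaluation could be organized to collect the three criteria mentioned above: total classification accuracy, model size and processing time. The `learner` object and its `predict`/`learn` methods are illustrative placeholders, not the API of MOA or of any of the reviewed algorithms.

```python
import time

def prequential_evaluation(stream, learner, model_size_fn):
    """Test-then-train loop over a labeled data stream.

    stream        -- iterable of (attributes, true_class) pairs
    learner       -- object with predict(x) and learn(x, y) methods (placeholder API)
    model_size_fn -- function returning the current model size, e.g. the rule count
    """
    seen, correct = 0, 0
    start = time.perf_counter()
    for x, y in stream:
        # Test first: predict the class before revealing the true label.
        if learner.predict(x) == y:
            correct += 1
        seen += 1
        # Then train on the same example.
        learner.learn(x, y)
    elapsed = time.perf_counter() - start
    return {
        "accuracy": correct / max(seen, 1),
        "model_size": model_size_fn(),
        "time_seconds": elapsed,
    }
```

Running this loop for each algorithm on the same generated streams (e.g. SEA or hyperplane data with injected sudden or gradual drift) would yield directly comparable accuracy, memory and time figures; MOA provides equivalent, ready-made prequential evaluators and stream generators.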
Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> We predict regulatory targets for 14 Arabidopsis microRNAs (miRNAs) by identifying mRNAs with near complementarity. Complementary sites within predicted targets are conserved in rice. Of the 49 predicted targets, 34 are members of transcription factor gene families involved in developmental patterning or cell differentiation. The near-perfect complementarity between plant miRNAs and their targets suggests that many plant miRNAs act similarly to small interfering RNAs and direct mRNA cleavage. The targeting of developmental transcription factors suggests that many plant miRNAs function during cellular differentiation to clear key regulatory transcripts from daughter cell lineages. <s> BIB001 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> MicroRNAs (miRNAs) are short RNA molecules that regulate gene expression by binding to target messenger RNAs and by controlling protein production or causing RNA cleavage. To date, functions have been assigned to only a few of the hundreds of identified miRNAs, in part because of the difficulty in identifying their targets. The short length of miRNAs and the fact that their complementarity to target sequences is imperfect mean that target identification in animal genomes is not possible by standard sequence comparison methods. Here we screen conserved 3′ UTR sequences from the Drosophila melanogaster genome for potential miRNA targets. The screening procedure combines a sequence search with an evaluation of the predicted miRNA–target heteroduplex structures and energies. We show that this approach successfully identifies the five previously validated let-7, lin-4, and bantam targets from a large database and predict new targets for Drosophila miRNAs. Our target predictions reveal striking clusters of functionally related targets among the top predictions for specific miRNAs. These include Notch target genes for miR-7, proapoptotic genes for the miR-2 family, and enzymes from a metabolic pathway for miR-277. We experimentally verified three predicted targets each for miR-7 and the miR-2 family, doubling the number of validated targets for animal miRNAs. Statistical analysis indicates that the best single predicted target sites are at the border of significance; thus, target predictions should be considered as tentative until experimentally validated. We identify features shared by all validated targets that can be used to evaluate target predictions for animal miRNAs. Our initial evaluation and experimental validation of target predictions suggest functions for two miRNAs. For others, the screen suggests plausible functions, such as a role for miR-277 as a metabolic switch controlling amino acid catabolism. Cross-genome comparison proved essential, as it allows reduction of the sequence search space. Improvements in genome annotation and increased availability of cDNA sequences from other genomes will allow more sensitive screens. An increase in the number of confirmed targets is expected to reveal general structural features that can be used to improve their detection. While the screen is likely to miss some targets, our study shows that valid targets can be identified from sequence alone. 
<s> BIB002 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> Recent experiments have shown that the genomes of organisms such as worm, fly, human and mouse encode hundreds of microRNA genes. Many of these microRNAs are thought to regulate the translational expression of other genes by binding to partially complementary sites in messenger RNAs. Phenotypic and expression analysis suggest an important role of microRNAs during development. Therefore, it is of fundamental importance to identify microRNA targets. However, no experimental or computational high-throughput method for target site identification in animals has been published yet. Our main result is a new computational method which is designed to identify microRNA target sites. This method recovers with high specificity known microRNA target sites which previously have been defined experimentally. Based on these results, we present a simple model for the mechanism of microRNA target site recognition. Our model incorporates both kinetic and thermodynamic components of target recognition. When we applied our method to a set of 74 Drosophila melanogaster microRNAs, searching 3' UTR sequences of a predefined set of fly mRNAs for target sites which were evolutionary conserved between Drosophila melanogaster and Drosophila pseudoobscura, we found that a number of key developmental body patterning genes such as hairy and fushi-tarazu are likely to be translationally regulated by microRNAs. <s> BIB003 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> Summary We present rna22 , a method for identifying microRNA binding sites and their corresponding heteroduplexes. Rna22 does not rely upon cross-species conservation, is resilient to noise, and, unlike previous methods, it first finds putative microRNA binding sites in the sequence of interest, then identifies the targeting microRNA. Computationally, we show that rna22 identifies most of the currently known heteroduplexes. Experimentally, with luciferase assays, we demonstrate average repressions of 30% or more for 168 of 226 tested targets. The analysis suggests that some microRNAs may have as many as a few thousand targets, and that between 74% and 92% of the gene transcripts in four model genomes are likely under microRNA control through their untranslated and amino acid coding regions. We also extended the method's key idea to a low-error microRNA-precursor-discovery scheme; our studies suggest that the number of microRNA precursors in mammalian genomes likely ranges in the tens of thousands. <s> BIB004 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> Computational microRNA (miRNA) target prediction is a field in flux. Here we present a guide through five widely used mammalian target prediction programs. We include an analysis of the performance of these individual programs and of various combinations of these programs. For this analysis we compiled several benchmark data sets of experimentally supported miRNA-target gene interactions. Based on the results, we provide a discussion on the status of target prediction and also suggest a stepwise approach toward predicting and selecting miRNA targets for experimental testing. 
<s> BIB005 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> MicroRNAs (miRNAs) are small noncoding RNAs that control gene expression by inducing RNA cleavage or translational inhibition. Most human miRNAs are intragenic and are transcribed as part of their hosting transcription units. We hypothesized that the expression profiles of miRNA host genes and of their targets are inversely correlated and devised a novel procedure, HOCTAR (host gene oppositely correlated targets), which ranks predicted miRNA target genes based on their anti-correlated expression behavior relative to their respective miRNA host genes. HOCTAR is the first tool for systematic miRNA target prediction that utilizes the same set of microarray experiments to monitor the expression of both miRNAs (through their host genes) and candidate targets. We applied the procedure to 178 human intragenic miRNAs and found that it performs better than currently available prediction softwares in pinpointing previously validated miRNA targets. The high-scoring HOCTAR predicted targets were enriched in Gene Ontology categories, which were consistent with previously published data, as in the case of miR-106b and miR-93. By means of overexpression and loss-of-function assays, we also demonstrated that HOCTAR is efficient in predicting novel miRNA targets and we identified, by microarray and qRT-PCR procedures, 34 and 28 novel targets for miR-26b and miR-98, respectively. Overall, we believe that the use of HOCTAR significantly reduces the number of candidate miRNA targets to be tested compared to the procedures based solely on target sequence recognition. Finally, our data further confirm that miRNAs have a significant impact on the mRNA levels of most of their targets. <s> BIB006 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> MicroRNAs (miRNAs) are a class of short endogenously expressed RNA molecules that regulate gene expression by binding directly to the messenger RNA of protein coding genes. They have been found to confer a novel layer of genetic regulation in a wide range of biological processes. Computational miRNA target prediction remains one of the key means used to decipher the role of miRNAs in development and disease. Here we introduce the basic idea behind the experimental identification of miRNA targets and present some of the most widely used computational miRNA target identification programs. The review includes an assessment of the prediction quality of these programs and their combinations. Contact: p.alexiou@fleming.gr Supplementary information: Supplementary data are available at Bioinformatics online. <s> BIB007 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> RNA interference (RNAi) is one of the most significant recent breakthroughs in biomedical sciences. In 2006, Drs. Fire and Mello were awarded the Nobel Price for Physiology or Medicine for their discovery of gene silencing by double-stranded RNA. Basic scientists have used RNAi as a tool to study gene regulation, signal transduction and disease mechanisms, while preclinical drug development has gained from its use in target validation and lead optimization. RNAi has also shown promise in therapeutic applications, and several synthetic RNA molecules have entered clinical trials. 
The family of short regulatory RNA molecules, including small interfering RNAs (siRNAs) and micro-RNAs (miRNAs), offers many possibilities for the innovative mind. When conventional small molecule inhibitors cannot be used, RNAi technology offers the possibility for sequence-specific targeting and subsequent target gene knockdown. Currently the major challenges related to RNAi -based drug development include delivery, off-target effects, activation of the immune system and RNA degradation. Although many of the expectations related to drug development have not been met thus far, these physiologically important molecules are used in several applications. This review summarizes recent patent applications concerning micro-RNA biology. Despite the somewhat unclear intellectual property right (IPR) status for RNAi, there are many possibilities for new inventions, and much remains to be learned from the physiology behind gene regulation by short RNA molecules. <s> BIB008 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> MicroRNAs (miRNAs) suppress gene expression by inhibiting translation, promoting mRNA decay or both. Each miRNA may regulate hundreds of genes to control the cell's response to developmental and other environmental cues. The best way to understand the function of a miRNA is to identify the genes that it regulates. Target gene identification is challenging because miRNAs bind to their target mRNAs by partial complementarity over a short sequence, suppression of an individual target gene is often small, and the rules of targeting are not completely understood. Here we review computational and experimental approaches to the identification of miRNA-regulated genes. The examination of changes in gene expression that occur when miRNA expression is altered and biochemical isolation of miRNA-associated transcripts complement target prediction algorithms. Bioinformatic analysis of over-represented pathways and nodes in protein-DNA interactomes formed from experimental candidate miRNA gene target lists can focus attention on biologically significant target genes. <s> BIB009 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> The liver-expressed microRNA-122 (miR-122) is essential for hepatitis C virus (HCV) RNA accumulation in cultured liver cells, but its potential as a target for antiviral intervention has not been assessed. We found that treatment of chronically infected chimpanzees with a locked nucleic acid (LNA)-modified oligonucleotide (SPC3649) complementary to miR-122 leads to long-lasting suppression of HCV viremia, with no evidence of viral resistance or side effects in the treated animals. Furthermore, transcriptome and histological analyses of liver biopsies demonstrated derepression of target mRNAs with miR-122 seed sites, down-regulation of interferon-regulated genes, and improvement of HCV-induced liver pathology. The prolonged virological response to SPC3649 treatment without HCV rebound holds promise of a new antiviral therapy with a high barrier to resistance. <s> BIB010 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> Proper coordination of cholesterol biosynthesis and trafficking is essential to human health. The sterol regulatory element-binding proteins (SREBPs) are key transcription regulators of genes involved in cholesterol biosynthesis and uptake. 
We show here that microRNAs (miR-33a/b) embedded within introns of the SREBP genes target the adenosine triphosphate-binding cassette transporter A1 (ABCA1), an important regulator of high-density lipoprotein (HDL) synthesis and reverse cholesterol transport, for posttranscriptional repression. Antisense inhibition of miR-33 in mouse and human cell lines causes up-regulation of ABCA1 expression and increased cholesterol efflux, and injection of mice on a western-type diet with locked nucleic acid-antisense oligonucleotides results in elevated plasma HDL. Our findings indicate that miR-33 acts in concert with the SREBP host genes to control cholesterol homeostasis and suggest that miR-33 may represent a therapeutic target for ameliorating cardiometabolic diseases. <s> BIB011 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> Dominant negative genetic disorders, in which a mutant allele of a gene causes disease in the presence of a second, normal copy, have been challenging since there is no cure and treatments are only to alleviate the symptoms. Current therapies involving pharmacological and biological drugs are not suitable to target mutant genes selectively due to structural indifference of the normal variant of their targets from the disease-causing mutant ones. In instances when the target contains single nucleotide polymorphism (SNP), whether it is an enzyme or structural or receptor protein are not ideal for treatment using conventional drugs due to their lack of selectivity. Therefore, there is a need to develop new approaches to accelerate targeting these previously inaccessible targets by classical therapeutics. Although there is a cooling trend by the pharmaceutical industry for the potential of RNA interference (RNAi), RNAi and other RNA targeting drugs (antisense, ribozyme, etc.) still hold their promise as the only drugs that provide an opportunity to target genes with SNP mutations found in dominant negative disorders, genes specific to pathogenic tumor cells, and genes that are critical for mediating the pathology of various other diseases. Because of its exquisite specificity and potency, RNAi has attracted a considerable interest as a new class of therapeutic for genetic diseases including amyotrophic lateral sclerosis, Huntington’s disease (HD), Alzheimer’s disease (AD), Parkinson’s disease (PD), spinocerebellar ataxia, dominant muscular dystrophies, and cancer. In this review, progress and challenges in developing RNAi therapeutics for genetic diseases will be discussed. <s> BIB012 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> RNA interference (RNAi) is a robust gene silencing mechanism that degrades mRNAs complementary to the antisense strands of double-stranded, short interfering RNAs (siRNAs). As a therapeutic strategy, RNAi has an advantage over small-molecule drugs, as virtually all genes are susceptible to targeting by siRNA molecules. This advantage is, however, counterbalanced by the daunting challenge of achieving safe, effective delivery of oligonucleotides to specific tissues in vivo. Lipid-based carriers of siRNA therapeutics can now target the liver in metabolic diseases and are being assessed in clinical trials for the treatment of hypercholesterolemia. 
For this indication, a chemically modified oligonucleotide that targets endogenous small RNA modulators of gene expression (microRNAs) is also under investigation in clinical trials. Emerging 'self-delivery' siRNAs that are covalently linked to lipophilic moieties show promise for the future development of therapies. Besides the liver, inflammation of the adipose tissue in patients with obesity and type 2 diabetes mellitus may be an attractive target for siRNA therapeutics. Administration of siRNAs encapsulated within glucan microspheres can silence genes in inflammatory phagocytic cells, as can certain lipid-based carriers of siRNA. New technologies that combine siRNA molecules with antibodies or other targeting molecules also appear encouraging. Although still at an early stage, the emergence of RNAi-based therapeutics has the potential to markedly influence our clinical future. <s> BIB013 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> Since its discovery in 1998, RNA interference (RNAi) has revolutionized basic and clinical research. Small RNAs, including small interfering RNA (siRNA), short hairpin RNA (shRNA) and microRNA (miRNA), mediate RNAi effects through either cleavage-dependent or cleavage-independent RNA inducible silencing complex (RISC) effector processes. As a result of its efficacy and potential, RNAi has been elevated to the status of "blockbuster therapeutic" alongside recombinant protein and monoclonal antibody. RNAi has already contributed to our understanding of neoplasia and has great promise for anti-cancer therapeutics, particularly so for personalized cancer therapy. Despite this potential, several hurdles have to be overcome for successful development of RNAi-based pharmaceuticals. This review will discuss the potential for, challenges to, and the current status of RNAi-based cancer therapeutics. <s> BIB014 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> MicroRNAs (miRNAs) provide new therapeutic targets for many diseases, while their myriad roles in development and cellular processes make them fascinating to study. We still do not fully understand the molecular mechanisms by which miRNAs regulate gene expression nor do we know the complete repertoire of mRNAs each miRNA regulates. However, recent progress in the development of effective strategies to block miRNAs suggests that anti-miRNA drugs may soon be used in the clinic. <s> BIB015 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> miRNA target genes prediction represents a crucial step in miRNAs functional characterization. In this context, the challenging issue remains predictions accuracy and recognition of false positive results. In this article myMIR, a web based system for increasing reliability of miRNAs predicted targets lists, is presented. myMIR implements an integrated pipeline for computing ranked miRNA::target lists and provides annotations for narrowing them down. The system relies on knowledge base data, suitably integrated in order to extend the functional characterization of targeted genes to miRNAs, by highlighting the search on over-represented annotation terms. Validation results show a dramatic reduction in the quantity of predictions and an increase in the sensitivity, when compared to other methods. 
This improves the predictions accuracy and allows the formulation of novel hypotheses on miRNAs functional involvement. <s> BIB016 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> An emerging new category of therapeutic agents based on ribonucleic acid has emerged and shown very promising in vitro, animal and pre-clinical results, known as small interfering RNAs (siRNAs), microRNAs mimics (miRNA mimics) and their derivates. siRNAs are small RNA molecules that promote potent and specific silencing of mutant, exogenous or aberrant genes through a mechanism known as RNA interference. These agents have called special attention to medicine since they have been used to experimentally treat a series of neurological conditions with distinct etiologies such as prion, viral, bacterial, fungal, genetic disorders and others. siRNAs have also been tested in other scenarios such as: control of anxiety, alcohol consumption, drug-receptor blockage and inhibition of pain signaling. Although in a much earlier stage, miRNAs mimics, anti-miRs and small activating RNAs (saRNAs) also promise novel therapeutic approaches to control gene expression. In this review we intend to introduce clinicians and medical researchers to the most recent advances in the world of siRNA- and miRNA-mediated gene control, its history, applications in cells, animals and humans, delivery methods (an yet unsolved hurdle), current status and possible applications in future clinical practice. <s> BIB017 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> Rarely a new research area has gotten such an overwhelming amount of attention as have microRNAs. Although several basic questions regarding their biological principles still remain to be answered, many specific characteristics of microRNAs in combination with compelling therapeutic efficacy data and a clear involvement in human disease have triggered the biotechnology community to start exploring the possibilities of viewing microRNAs as therapeutic entities. This review serves to provide some general insight into some of the current microRNAs targets, how one goes from the initial bench discovery to actually developing a therapeutically useful modality, and will briefly summarize the current patent landscape and the companies that have started to explore microRNAs as the next drug target. <s> BIB018 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> Neurodegenerative diseases are typically late-onset, progressive disorders that affect neural function and integrity. Although most attention has been focused on the genetic underpinnings of familial disease, mechanisms are likely to be shared with more predominant sporadic forms, which can be influenced by age, environment, and genetic inputs. Previous work has largely addressed the roles of select protein-coding genes; however, disease pathogenesis is complicated and can be modulated through not just protein-coding genes, but also regulatory mechanisms mediated by the exploding world of small non-coding RNAs. Here, we focus on emerging roles of miRNAs in age-associated events impacting long-term brain integrity and neurodegenerative disease. 
<s> BIB019 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> Analysis of data from The Cancer Genome Atlas generates a pan-cancer network of 143 recurrent miRNA-target relationships. The identified miRNAs were frequently regulated by genetic and epigenetic alterations in cancer. The work also reveals that some miRNAs might coordinately regulate cancer pathways, such as miR-29 regulation of TET1 and TDG mRNAs, encoding components from the active DNA demethylation pathway. <s> BIB020 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> MicroRNA (miRNA) are non-coding regulatory RNA usually consisting of 20-24 nucleotides. Over the past decade, increases and decreases in miRNA expression have been shown to associate with various types of disease, including cancer. The first two known miRNA aberrations resulted from altered expression of DLEU2 and C13orf25 in hematological malignancies. DLEU2, which encodes miR-15a and miR-16-1, was discovered from 13q14 deletion in chronic lymphocytic leukemia, while C13orf25, which encodes six mature miRNA (miR-17, miR-18, miR-19a, miR-19b, miR-20a and miR-92a), was identified from 13q31 amplification in aggressive B-cell lymphomas. These miRNA were downregulated or upregulated in accordance with genomic deletion or amplification, which suggests that they contribute to tumorigenesis through altered regulation of target oncogenes or tumor suppressors. Consistent with that idea, miR-15a/16-1 is known to regulate Bcl2 in chronic lymphocytic leukemia, and miR-17-92 regulates the tumor suppressors p21, Pten and Bim in aggressive B-cell lymphomas. Dysregulation of other miRNA, including miR-21, miR-29, miR-150 and miR-155, have also been shown to play crucial roles in the pathogenesis of aggressive transformed, high-grade and refractory lymphomas. Addition of miRNA dysregulation to the original genetic events likely enhances tumorigenicity of malignant lymphoma through activation of one or more signaling pathways. <s> BIB021 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> MicroRNAs (miRNA) are a group of naturally occurring, small, noncoding, and single-strand RNA molecules that regulate gene expression at the posttranscriptional and translational levels. By controlling the expression of oncogenic and tumor suppressor proteins, miRNAs are believed to play an important role in pathologic processes associated with malignant progression including tumor cell proliferation, apoptosis, differentiation, angiogenesis, invasion, and metastasis. However, relatively few studies have investigated the influence of chemopreventive agents on miRNA expression and their regulation of target genes. Given the significance of miRNAs in modulating gene expression, such research can provide insight into the pleiotropic biologic effects that chemopreventive agents often display and a deeper understanding of their mechanism of action to inhibit carcinogenesis. In addition, miRNAs can provide useful biomarkers for assessing antineoplastic activity of these agents in preclinical and clinical observations. In this review, we summarize recent publications that highlight a potentially important role of miRNAs in cancer chemoprevention research. 
<s> BIB022 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Introduction <s> MicroRNAs (miRNAs) are small non-coding RNAs which play a key role in the post-transcriptional regulation of many genes. Elucidating miRNA-regulated gene networks is crucial for the understanding of mechanisms and functions of miRNAs in many biological processes, such as cell proliferation, development, differentiation and cell homeostasis, as well as in many types of human tumors. To this aim, we have recently presented the biclustering method HOCCLUS2, for the discovery of miRNA regulatory networks. Experiments on predicted interactions revealed that the statistical and biological consistency of the obtained networks is negatively affected by the poor reliability of the output of miRNA target prediction algorithms. Recently, some learning approaches have been proposed to learn to combine the outputs of distinct prediction algorithms and improve their accuracy. However, the application of classical supervised learning algorithms presents two challenges: i) the presence of only positive examples in datasets of experimentally verified interactions and ii) unbalanced number of labeled and unlabeled examples. We present a learning algorithm that learns to combine the score returned by several prediction algorithms, by exploiting information conveyed by (only positively labeled/) validated and unlabeled examples of interactions. To face the two related challenges, we resort to a semi-supervised ensemble learning setting. Results obtained using miRTarBase as the set of labeled (positive) interactions and mirDIP as the set of unlabeled interactions show a significant improvement, over competitive approaches, in the quality of the predictions. This solution also improves the effectiveness of HOCCLUS2 in discovering biologically realistic miRNA:mRNA regulatory networks from large-scale prediction data. Using the miR-17-92 gene cluster family as a reference system and comparing results with previous experiments, we find a large increase in the number of significantly enriched biclusters in pathways, consistent with miR-17-92 functions. The proposed approach proves to be fundamental for the computational discovery of miRNA regulatory networks from large-scale predictions. This paves the way to the systematic application of HOCCLUS2 for a comprehensive reconstruction of all the possible multiple interactions established by miRNAs in regulating the expression of gene networks, which would be otherwise impossible to reconstruct by considering only experimentally validated interactions. <s> BIB023
MicroRNAs (miRNAs) are abundant and short endogenous noncoding RNAs made of 19-23 nt that bind to target mRNAs, typically resulting in degradation and translational repression of mRNAs. The fine-tuning of gene regulation in biological processes and disease pathways by these small RNAs has recently attracted significant attention; the number of related articles has grown exponentially over the past decade (Supplementary Figure S1). MiRNAs are used to study signal transduction and the pathogenesis of genetic BIB012 BIB017 , neurodegenerative BIB019 and metabolic diseases BIB013 and cancer BIB014 BIB020 . They are also used in preclinical drug development for target validation and lead optimization, and a few synthetic miRNAs have entered clinical trials BIB008 . Development of novel miRNA-directed therapeutics is already under way BIB018 BIB015 and miRNA-based targeting in cancer is not far behind BIB021 BIB022 . MiRNAs account for about 1% of human genes and are shown to regulate >60% of genes. On average, miRNAs bind to hundreds of target sites BIB009 , with some having a few thousand sites BIB004 . The number of known miRNAs has substantially increased during the past few years, and based on release 21 of the miRBase database, it currently stands at >35 000 in >200 species. Unfortunately, the annotation of their targets falls behind, as only about 1000 miRNAs (3% of known miRNAs) have validated targets. Moreover, the number of curated targets per miRNA (Supplementary Table S1) is far lower than their estimated count. Traditionally, the targets are annotated using low-throughput experimental biochemical assays including quantitative polymerase chain reaction (qPCR), luciferase assays and western blots. In recent years, a few high-throughput experimental methods to annotate miRNA targets were developed. They include microarrays and RNA sequencing, which use gene expression levels, and pulsed SILAC (pSILAC; stable isotope labeling by/with amino acids in cell culture), which focuses on protein expression levels. These annotations are performed by assuming that miRNA targets (genes or proteins) with a large reduction in expression levels in miRNA-overexpressed cells are functional (i.e. they are downregulated) BIB006 . One drawback of such an approach to annotate miRNA targets is that it requires a threshold on the expression changes, which may vary depending on the specific miRNA-mRNA pair, cell type, culture conditions, etc. Another drawback is that these experiments are done per single miRNA and are difficult to scale to cover all known miRNAs. Lastly, these annotations are at the gene level, i.e. they indicate whether a given mRNA interacts with a given miRNA, in contrast to the duplex level, i.e. whether a given fragment on the mRNA (binding site) interacts with a given miRNA. The latter is motivated by the fact that knowledge of the binding sites is important for the development of gene therapeutics BIB010 BIB011 . Cross-linking immunoprecipitation (CLIP)-based techniques attracted attention in recent years, as they can specify the sites targeted by miRNAs.
However, these methods are not miRNA specific, which means that they find binding sites of the Argonaute (Ago) protein that facilitates miRNA:mRNA binding but without coupling them to specific miRNAs. In parallel to the experimental efforts, dozens of computational miRNAs target predictors, which find targets from the mRNA and miRNA sequences, have been developed since the first method was released in 2003 BIB002 (Supplementary Figure S2) . The underlying principle is to use data generated by (usually lowthroughput) experimental methods to build predictive models, which in turn can be used to perform high-throughput predictions for specific miRNAs of interest that lack the experimental data. The results generated by these (base) predictors can be filtered or combined together by meta predictors, i.e. methods that refine predictions of the base methods such as Pio's approach and myMIR BIB023 BIB016 . However, the meta predictors often lack integration with the base predictive models (they were developed separately from the base methods and require manual collection of the predictions from the base methods) and they rely on availability of results generated by multiple base methods, which makes them more challenging to use. The targets can be also predicted computationally by ranking the gene expression or CLIPbased data, but in this case the inputs are the experimental data, which limits their applications. In this review we focus on the computational miRNAs target predictors that require only the knowledge of the miRNA and mRNA sequences (sequence-based miRNA target prediction), excluding the meta methods. The field of sequence-based miRNA target prediction has reached maturity, as evidenced by the declining trend in the development efforts (Supplementary Figure S2 ). After the initial spike in 2005 when eight methods were developed, more recent years have seen on average only three new methods per year. These predictors differ on many aspects including their underlying predictive methodology (mechanistic details of miRNA-mRNA binding that they consider including use of complementarity of base pairing, site accessibility and evolutionary conservation), empirical evaluation (data sets and evaluation procedures; type of predictive model they use), usability (availability and ease of use) popularity and impact and predictive performance. Availability of many difficult-to-compare methods makes it challenging for the end users to select a proper tool and prompts the need for contributions that summarize and evaluate these methods to guide the users and to help the developers to revitalize this field. Supplementary Table S2 compares existing reviews of the miRNA target predictors based on the inclusion of discussion and analysis of the abovementioned aspects. We observe that these reviews summarized the latest miRNA target predictors at the time of their publication and compared or at least described the methodology used by these predictors. Most of these contributions also discussed availability of predictors and some aspects of their usability, focusing on the species that they were designed for. However, other important aspects of usability, such as the number of input parameters (that determines flexibility of use for an expert user), the format of the input miRNAs and genes, the ability to predict for novel miRNA sequences, the format of the outputs and the number of predicted targets (which differs substantially between methods), were omitted. 
They also neglected to discuss popularity and impact of the predictors and details concerning their evaluation. Only three relatively older reviews provided comparative evaluation. The first review by Rajewsky assessed nine methods on 113 experimentally annotated miRNA-target pairs, but only in Drosophila BIB003 . Review from 2006 by Sethupathy BIB005 used a small set of 84 annotated miRNA-target pairs and lacked assessment on the nonfunctional pairs (whether these methods can correctly recognize lack of interaction). The latest comparative review from 2009 by Alexiou BIB007 used 150 miRNA-target duplexes but considered only relatively old methods that were published in 2007 or earlier. Moreover, the evaluation criteria included only sensitivity and precision, which does not cover quality of prediction of the nonfunctional pairs. To summarize, prior reviews of the sequence-based miRNA target prediction methods suffer from lack of or limited and outdated empirical evaluation, inclusion of a relatively small set of predictors, lack of or shallow treatment of certain aspects, such as usability and impact of the prediction methods, evaluation procedures and practical insights for the end users and developers. To this end, we provide a comprehensive and practical summary of this field. We introduce and discuss 38 base predictors of miRNA targets in animals including recent methods. The focus on animals is motivated by an observation that predictions of targets in plants are relatively easy and are considered a solved problem BIB001 . We provide analysis from all key perspectives that are relevant to the end users and developers including overview of the mechanistic basis of miRNA-mRNA interaction and how this information is incorporated into the underlying predictive methodologies. We also give detailed summary of evaluation, usability and popularity/impact of the 38 predictors. As one often omitted dimension, we discuss the scope of the outputs, i.e. whether a given method provides propensity score (probability of binding) or only a binary outcome (binding versus nonbinding), and whether it predicts positions of the miRNA binding site on the target gene. We are the first to conduct an empirical comparative assessment on both low-throughput and high-throughput experimental data for the predictions at the miRNA:mRNA duplex and gene levels. We use four benchmark data sets and consider seven representative methods including recent predictors. We systematically evaluate both binary and (for the first time) real-valued propensity to compare multiple methods. Moreover, we use our in-depth analytical and empirical review to provide practical insights for the end users and developers.
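As background for the methods discussed in the remainder of this review, the short sketch below illustrates the most basic ingredient shared by sequence-based predictors: scanning a 3'UTR for sites complementary to the miRNA seed (nucleotides 2-8 counted from the 5' end of the miRNA). This is a deliberately simplified, hypothetical example with made-up sequences; real predictors combine such seed matching with additional evidence, such as site accessibility, context features and evolutionary conservation.

```python
# Minimal illustration of canonical seed matching, not any specific predictor.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_match_sites(mirna, utr, seed_start=1, seed_end=8):
    """Return 0-based positions in the 3'UTR that match the miRNA seed.

    mirna -- miRNA sequence, 5'->3', RNA alphabet (A, C, G, U)
    utr   -- 3'UTR sequence of the candidate target mRNA, 5'->3'
    The seed is taken as miRNA nucleotides 2-8 (0-based indices 1..7).
    """
    seed = mirna[seed_start:seed_end]
    # The site on the mRNA, read 5'->3', is the reverse complement of the seed.
    site = "".join(COMPLEMENT[nt] for nt in reversed(seed))
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

# Example (illustrative sequences only):
# seed_match_sites("UGAGGUAGUAGGUUGUAUAGUU", "ACUACCUCA")  ->  [1]
```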
Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> microRNAs (miRNAs) regulate mRNA translation and mRNA decay in plants and animals (49). Hundreds of human miRNAs are now known (4-6, 15, 19). In animals, miRNAs regulate thousands of genes with spatial and temporal specificity, helping to ensure the accuracy of gene expression programs (17, 38, 47). Understanding the precise biological functions of animal miRNAs will require the identification of their multiple targets and the pathways that they control. ::: ::: Animal miRNAs generally share limited sequence complementarity with their targets. miRNA target recognition involves complementary base pairing of the target with the 5′ end (positions 1 to 8) of the miRNA guide strand seed region. However, the extent of seed region complementarity is not precisely determined and can be modified by 3′ pairing (8). Computational methods have been used to predict human miRNA targets (31, 34, 37, 45, 52). Most predicted miRNA target recognition sites lie in 3′ untranslated regions (3′UTRs), although coding region sites (CDS) may also be used (8, 36). Current estimates are that 30% or more of human mRNAs are regulated by miRNAs (36). ::: ::: While thousands of miRNA targets have been predicted, relatively few have been experimentally validated. Available methods for validation are laborious and not easily amenable to high-throughput methodologies (4). Since a single miRNA can regulate hundreds of targets, the biological pathways regulated by miRNAs are not always obvious from an examination of their targets. There is a clear need for high-throughput, low-cost methods to experimentally determine miRNA targets, validate computational predictions, and decipher miRNA function. ::: ::: One method to experimentally identify miRNA targets and their functions is microarray analysis (50). Although miRNAs may silence their targets via translational blocking (16), they also regulate target transcript levels. miRNAs in transfected cells down-regulate hundreds of mRNAs detectable by microarray profiling (38). These down-regulated transcripts have expression patterns that are complementary to that of the introduced miRNA and are also highly enriched within their 3′UTRs with hexamer, heptamer, and octamer motifs complementary to miRNA seed regions. This regulation resembles the “off-target” silencing of imperfectly matched targets by small interfering RNAs (siRNAs) (28, 29). Thus, both miRNAs and siRNAs can target partially complementary transcripts for degradation, resulting in transcript changes that can be monitored using microarrays. In fact, changes in transcript levels due to miRNA activity have been observed directly in vivo. The let-7 and lin-4 miRNAs trigger the degradation of their target mRNAs (2). Also, the depletion of miRNAs in mice and zebrafish led to the up-regulation of target mRNAs that were measured on microarrays (18, 35). ::: ::: A potential advantage of using microarrays to analyze miRNA targets is the utility of expression profiles for predicting gene function (25). In this study, we have explored the use of miRNA expression profiles to analyze miRNA targets and functions. Included in our analysis are several miRNAs reported to have a role in cancer (9-12, 14, 21, 22, 27, 32, 40, 41). We use this approach to show that a family of miRNAs sharing seed region identity with miRNA-16 (miR-16) negatively regulates cell cycle progression from G0/G1 to S. 
<s> BIB001 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> A global decrease in microRNA (miRNA) levels is often observed in human cancers, indicating that small RNAs may have an intrinsic function in tumour suppression. To identify miRNA components of tumour suppressor pathways, we compared miRNA expression profiles of wild-type and p53-deficient cells. Here we describe a family of miRNAs, miR-34a-c, whose expression reflected p53 status. Genes encoding miRNAs in the miR-34 family are direct transcriptional targets of p53, whose induction by DNA damage and oncogenic stress depends on p53 both in vitro and in vivo. Ectopic expression of miR-34 induces cell cycle arrest in both primary and tumour-derived cell lines, which is consistent with the observed ability of miR-34 to downregulate a programme of genes promoting cell cycle progression. The p53 network suppresses tumour formation through the coordinated activation of multiple transcriptional targets, and miR-34 may act in concert with other effectors to inhibit inappropriate cell proliferation. <s> BIB002 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> MicroRNAs (miRNAs) are an important class of small noncoding RNAs capable of regulating other genes’ expression. Much progress has been made in computational target prediction of miRNAs in recent years. More than 10 miRNA target prediction programs have been established, yet, the prediction of animal miRNA targets remains a challenging task. We have developed miRecords, an integrated resource for animal miRNA–target interactions. The Validated Targets component of this resource hosts a large, high-quality manually curated database of experimentally validated miRNA–target interactions with systematic documentation of experimental support for each interaction. The current release of this database includes 1135 records of validated miRNA–target interactions between 301 miRNAs and 902 target genes in seven animal species. The Predicted Targets component of miRecords stores predicted miRNA targets produced by 11 established miRNA target prediction programs. miRecords is expected to serve as a useful resource not only for experimental miRNA researchers, but also for informatics scientists developing the next-generation miRNA target prediction programs. The miRecords is available at http:// miRecords.umn.edu/miRecords. <s> BIB003 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> ‘miR2Disease’, a manually curated database, aims at providing a comprehensive resource of microRNA deregulation in various human diseases. The current version of miR2Disease documents 1939 curated relationships between 299 human microRNAs and 94 human diseases by reviewing more than 600 published papers. Around one-seventh of the microRNA–disease relationships represent the pathogenic roles of deregulated microRNA in human disease. Each entry in the miR2Disease contains detailed information on a microRNA–disease relationship, including a microRNA ID, the disease name, a brief description of the microRNA–disease relationship, an expression pattern of the microRNA, the detection method for microRNA expression, experimentally verified target gene(s) of the microRNA and a literature reference. 
miR2Disease provides a user-friendly interface for a convenient retrieval of each entry by microRNA ID, disease name, or target gene. In addition, miR2Disease offers a submission page that allows researchers to submit established microRNA–disease relationships that are not documented. Once approved by the submission review committee, the submitted records will be included in the database. miR2Disease is freely available at http://www.miR2Disease.org. <s> BIB004 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> MicroRNAs (miRNAs) and short interfering RNAs (siRNAs) are classes of regulatory small RNA molecules, ranging from 18 to 24 nucleotides in length, whose roles in development and disease are becoming increasingly recognized. They function by altering the stability or translational efficiency of messenger RNAs (mRNAs) with which they share sequence complementarity, and are predicted to affect up to one-third of all human genes. Computer algorithms and microarray data estimate the presence of nearly 1000 human miRNAs, and direct examination of candidate miRNAs has validated their involvement in various cancers, disorders of neuronal development, cardiac hypertrophy, and skin diseases such as psoriasis. This article reviews the history of miRNA and siRNA discovery, key aspects of their biogenesis and mechanism of action, and known connections to human health, with an emphasis on their roles in skin development and disease. Learning objectives After completing this learning activity, participants should be able to summarize the relevance of microRNAs in development and disease, explain the molecular steps of how small RNAs regulate their targets within the human cell, and discuss the role of small RNAs in the diagnosis and treatment of disease. <s> BIB005 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> Animal microRNAs (miRNAs) regulate gene expression by inhibiting translation and/or by inducing degradation of target messenger RNAs. It is unknown how much translational control is exerted by miRNAs on a genome-wide scale. We used a new proteomic approach to measure changes in synthesis of several thousand proteins in response to miRNA transfection or endogenous miRNA knockdown. In parallel, we quantified mRNA levels using microarrays. Here we show that a single miRNA can repress the production of hundreds of proteins, but that this repression is typically relatively mild. A number of known features of the miRNA-binding site such as the seed sequence also govern repression of human protein synthesis, and we report additional target sequence characteristics. We demonstrate that, in addition to downregulating mRNA levels, miRNAs also directly repress translation of hundreds of genes. Finally, our data suggest that a miRNA can, by direct or indirect effects, tune protein synthesis from thousands of genes. <s> BIB006 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> Current methods for system-wide gene expression analysis detect changes in mRNA abundance, but neglect regulation at the level of translation. Pulse labeling with stable isotopes has been used to measure protein turnover rates, but this does not directly provide information about translation rates. 
Here, we developed pulsed stable isotope labeling by amino acids in cell culture (pSILAC) with two heavy isotope labels to directly quantify protein translation on a proteome-wide scale. We applied the method to cellular iron homeostasis as a model system and demonstrate that it can confidently identify proteins that are translationally regulated by iron availability. <s> BIB007 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> In recent years, the discovery of small ncRNAs (noncoding RNAs) has unveiled a slew of powerful riboregulators of gene expression. So far, many different types of small ncRNAs have been described. Of these, miRNAs (microRNAs), siRNAs (small interfering RNAs), and piRNAs (Piwi-interacting RNAs) have been studied in more detail. A significant fraction of genes in most organisms and tissues is targets of these small ncRNAs. Because these tiny RNAs are turning out to be important regulators of gene and genome expression, their aberrant expression profiles are expected to be associated with cellular dysfunction and disease. In fact, an ever-increasing number of studies have implicated miRNAs and siRNAs in human health and disease ranging from metabolic disorders to diseases of various organ systems as well as various forms of cancer. Nevertheless, despite the flurry of research on these small ncRNAs, many aspects of their biology still remain to be understood. The following discussion focuses on some aspects of the biogenesis and function of small ncRNAs with major emphasis on miRNAs since these are the most widespread endogenous small ncRNAs that have been called "micromanagers" of gene expression. Their emerging significance in toxicology is also discussed. <s> BIB008 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> AbstractmirSVR is a new machine learning method for ranking microRNA target sites by a down-regulation score. The algorithm trains a regression model on sequence and contextual features extracted from miRanda-predicted target sites. In a large-scale evaluation, miRanda-mirSVR is competitive with other target prediction methods in identifying target genes and predicting the extent of their downregulation at the mRNA or protein levels. Importantly, the method identifies a significant number of experimentally determined non-canonical and non-conserved sites. <s> BIB009 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> As the relevant literature and the number of experiments increase at a super linear rate, databases that curate and collect experimentally verified microRNA (miRNA) targets have gradually emerged. These databases attempt to provide efficient access to this wealth of experimental data, which is scattered in thousands of manuscripts. Aim of TarBase 6.0 (http://www.microrna.gr/tarbase) is to face this challenge by providing a significant increase of available miRNA targets derived from all contemporary experimental techniques (gene specific and high-throughput), while incorporating a powerful set of tools in a user-friendly interface. TarBase 6.0 hosts detailed information for each miRNA–gene interaction, ranging from miRNA- and gene-related facts to information specific to their interaction, the experimental validation methodologies and their outcomes. 
All database entries are enriched with function-related data, as well as general information derived from external databases such as UniProt, Ensembl and RefSeq. DIANA microT miRNA target prediction scores and the relevant prediction details are available for each interaction. TarBase 6.0 hosts the largest collection of manually curated experimentally validated miRNA–gene interactions (more than 65 000 targets), presenting a 16.5–175-fold increase over other available manually curated databases. <s> BIB010 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> The Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) is an international public repository for high-throughput microarray and next-generation sequence functional genomic data sets submitted by the research community. The resource supports archiving of raw data, processed data and metadata which are indexed, cross-linked and searchable. All data are freely available for download in a variety of formats. GEO also provides several web-based tools and strategies to assist users to query, analyse and visualize data. This article reports current status and recent database developments, including the release of GEO2R, an R-based web application that helps users analyse GEO data. <s> BIB011 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> MicroRNAs, or miRNAs, post-transcriptionally repress the expression of protein-coding genes. The human genome encodes over 1000 miRNA genes that collectively target the majority of messenger RNAs (mRNAs). Base pairing of the so-called miRNA ‘seed’ region with mRNAs identifies many thousands of putative targets. Evaluating the strength of the resulting mRNA repression remains challenging, but is essential for a biologically informative ranking of potential miRNA targets. To address these challenges, predictors may use thermodynamic, evolutionary, probabilistic or sequence-based features. We developed an open-source software library, miRmap, which for the first time comprehensively covers all four approaches using 11 predictor features, 3 of which are novel. This allowed us to examine feature correlations and to compare their predictive power in an unbiased way using high-throughput experimental data from immunopurification, transcriptomics, proteomics and polysome fractionation experiments. Overall, target site accessibility appears to be the most predictive feature. Our novel feature based on PhyloP, which evaluates the significance of negative selection, is the best performing predictor in the evolutionary category. We combined all the features into an integrated model that almost doubles the predictive power of TargetScan. miRmap is freely available from http://cegg.unige.ch/mirmap. <s> BIB012 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> MicroRNAs (miRNAs) are small non-coding RNA molecules capable of negatively regulating gene expression to control many cellular mechanisms. The miRTarBase database (http://mirtarbase.mbc.nctu.edu.tw/) provides the most current and comprehensive information of experimentally validated miRNA-target interactions. 
The database was launched in 2010 with data sources for >100 published studies in the identification of miRNA targets, molecular networks of miRNA targets and systems biology, and the current release (2013, version 4) includes significant expansions and enhancements over the initial release (2010, version 1). This article reports the current status of and recent improvements to the database, including (i) a 14-fold increase to miRNA-target interaction entries, (ii) a miRNA-target network, (iii) expression profile of miRNA and its target gene, (iv) miRNA target-associated diseases and (v) additional utilities including an upgrade reminder and an error reporting/user feedback system. <s> BIB013 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Benchmark data sets <s> Motivation: Research interests in microRNAs have increased rapidly in the past decade. Many studies have showed that microRNAs have close relationships with various human cancers, and they potentially could be used as cancer indicators in diagnosis or as a suppressor for treatment purposes. There are several databases that contain microRNA–cancer associations predicted by computational methods but few from empirical results. Despite the fact that abundant experiments investigating microRNA expressions in cancer cells have been carried out, the results have remain scattered in the literature. We propose to extract microRNA–cancer associations by text mining and store them in a database called miRCancer. Results: The text mining is based on 75 rules we have constructed, which represent the common sentence structures typically used to state microRNA expressions in cancers. The microRNA–cancer association database, miRCancer, is updated regularly by running the text mining algorithm against PubMed. All miRNA–cancer associations are confirmed manually after automatic extraction. miRCancer currently documents 878 relationships between 236 microRNAs and 79 human cancers through the processing of426 000 published articles. Availability: miRCancer is freely available on the web at http://mircan <s> BIB014
There are five databases of experimentally validated and curated miRNA targets (Supplementary Table S1). Only three of them provide the information necessary to characterize the miRNA:mRNA duplexes: TarBase, miRecords and miRTarBase. miRTarBase 4.5 stores the largest number of miRNA:target pairs (>5000) BIB013 , with a large amount of new data from sequencing efforts available in TarBase v6.0 BIB010 . miRecords includes 2574 interactions BIB003 . miR2Disease BIB004 and miRCancer BIB014 focus on selected diseases associated with miRNAs and likewise do not include information about miRNA:mRNA duplexes. We developed four benchmark data sets using the miRTarBase repository, gene expression data from the Gene Expression Omnibus (GEO) and pSILAC. miRTarBase provides the largest number of positive (functional) and negative (nonfunctional) miRNA:mRNA complexes; the functional miRNA-mRNA interactions are defined as those where the mRNA is downregulated by the corresponding miRNA. GEO is the largest source of microarray, sequencing and other forms of high-throughput genomics data BIB011 . pSILAC is a technique for quantitative proteomics BIB007 . Our data sets cover human and mouse, which is motivated by research interests in using miRNAs in human health-related applications BIB008 BIB005 and by our objective to include the largest possible number of predictors, as relatively few methods work on other species. The first data set, called TEST_duplex, is used to assess target site prediction at the duplex level. We selected targets that were validated by at least one of the low-throughput experimental methods that are considered to provide strong evidence: qPCR, luciferase assay or western blot. We focused on recently released targets to limit overlap between our benchmark data and the data used to develop the evaluated predictors. The functional targets deposited to miRTarBase after 2012 (after the newest method included in our evaluation was published) and all nonfunctional duplexes from human and mouse were included; we used all nonfunctional targets because of their small number. The second data set, TEST_gene, focuses on the evaluation at the gene level. We selected miRNAs that have both functional and nonfunctional genes in miRTarBase and for which the functional genes were validated after 2012. Furthermore, we extend our evaluation to analyze whether the current methods are capable of predicting at the cell level, using two additional data sets that rely on annotations from high-throughput methods. The TEST_geo data set is based on results from three microarray-based experiments: GSE6838, GSE7864 and GSE8501. The interactions for 25 miRNAs were annotated by contrasting the expression arrays collected before miRNA transfection with those collected 24 h after miRNA mimics were transfected BIB001 BIB002 . As recommended in BIB009 BIB012 , we remove the genes whose expression magnitudes are below the median in the control transfection experiments. The TEST_psilac data set was originally developed in a proteomic study that used the pSILAC technique BIB007 BIB006 . Previous studies assume that genes that are more repressed (characterized by a larger drop in expression levels) are more likely to be targeted by the transfected miRNA. These studies use a certain fraction of the genes with the largest drop in expression levels (repressed genes) as functional targets and the same fraction of the genes whose expression levels increased by the largest margin (overexpressed genes) as nonfunctional BIB009 .
Instead of using an arbitrary fraction value to define the functional and nonfunctional targets, we vary this value between 1% and 50%. A detailed summary of the four data sets is shown in Supplementary Table S3. The TEST_duplex and TEST_gene data sets are given in Supplementary Tables S4 and S5, respectively. The comprehensiveness of our tests stems from the fact that we consider targets as gene segments (TEST_duplex data set), genes (TEST_gene and TEST_geo data sets) and proteins (TEST_psilac data set). We also use different sources of information to perform the annotations, including low-throughput assays (TEST_duplex and TEST_gene data sets), microarrays (TEST_geo data set) and pSILAC (TEST_psilac data set).
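To make the cell-level annotation procedure concrete, the sketch below illustrates how the TEST_geo filtering and the TEST_psilac fraction-based labeling described above could be implemented. This is an illustrative reconstruction rather than the code used to build the benchmark data sets; the input table layout, the column name control_transfection and the example fraction of 20% are assumptions made only for this example.

import pandas as pd

def filter_geo_genes(expr, control_col="control_transfection"):
    # TEST_geo: keep only genes whose expression in the control (mock)
    # transfection is at or above the median, following the recommendation
    # of BIB009 and BIB012.
    median_ctrl = expr[control_col].median()
    return expr[expr[control_col] >= median_ctrl]

def label_psilac_genes(log2_fold_change, fraction=0.05):
    # TEST_psilac: the `fraction` most repressed genes (largest expression
    # drop after miRNA transfection) are labeled functional (1) and the
    # `fraction` most overexpressed genes are labeled nonfunctional (0);
    # in the evaluation this fraction is varied between 1% and 50%.
    n = max(1, int(round(fraction * len(log2_fold_change))))
    ranked = log2_fold_change.sort_values()  # most repressed genes first
    functional = ranked.index[:n]
    nonfunctional = ranked.index[-n:]
    return pd.DataFrame({"gene": list(functional) + list(nonfunctional),
                         "label": [1] * n + [0] * n})

# Toy example: log2 fold changes of protein output after transfecting one miRNA.
fold_changes = pd.Series({"geneA": -2.1, "geneB": -0.3, "geneC": 0.1,
                          "geneD": 1.4, "geneE": -1.0, "geneF": 0.8})
print(label_psilac_genes(fold_changes, fraction=0.2))

With a 20% fraction on this toy series, geneA (the strongest repression) is labeled functional and geneD (the strongest increase) nonfunctional, mirroring how repressed and overexpressed genes are assigned to the positive and negative classes.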
Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> We predict regulatory targets for 14 Arabidopsis microRNAs (miRNAs) by identifying mRNAs with near complementarity. Complementary sites within predicted targets are conserved in rice. Of the 49 predicted targets, 34 are members of transcription factor gene families involved in developmental patterning or cell differentiation. The near-perfect complementarity between plant miRNAs and their targets suggests that many plant miRNAs act similarly to small interfering RNAs and direct mRNA cleavage. The targeting of developmental transcription factors suggests that many plant miRNAs function during cellular differentiation to clear key regulatory transcripts from daughter cell lineages. <s> BIB001 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> MicroRNAs (miRNAs) are short RNA molecules that regulate gene expression by binding to target messenger RNAs and by controlling protein production or causing RNA cleavage. To date, functions have been assigned to only a few of the hundreds of identified miRNAs, in part because of the difficulty in identifying their targets. The short length of miRNAs and the fact that their complementarity to target sequences is imperfect mean that target identification in animal genomes is not possible by standard sequence comparison methods. Here we screen conserved 3′ UTR sequences from the Drosophila melanogaster genome for potential miRNA targets. The screening procedure combines a sequence search with an evaluation of the predicted miRNA–target heteroduplex structures and energies. We show that this approach successfully identifies the five previously validated let-7, lin-4, and bantam targets from a large database and predict new targets for Drosophila miRNAs. Our target predictions reveal striking clusters of functionally related targets among the top predictions for specific miRNAs. These include Notch target genes for miR-7, proapoptotic genes for the miR-2 family, and enzymes from a metabolic pathway for miR-277. We experimentally verified three predicted targets each for miR-7 and the miR-2 family, doubling the number of validated targets for animal miRNAs. Statistical analysis indicates that the best single predicted target sites are at the border of significance; thus, target predictions should be considered as tentative until experimentally validated. We identify features shared by all validated targets that can be used to evaluate target predictions for animal miRNAs. Our initial evaluation and experimental validation of target predictions suggest functions for two miRNAs. For others, the screen suggests plausible functions, such as a role for miR-277 as a metabolic switch controlling amino acid catabolism. Cross-genome comparison proved essential, as it allows reduction of the sequence search space. Improvements in genome annotation and increased availability of cDNA sequences from other genomes will allow more sensitive screens. An increase in the number of confirmed targets is expected to reveal general structural features that can be used to improve their detection. While the screen is likely to miss some targets, our study shows that valid targets can be identified from sequence alone. 
<s> BIB002 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> The Vienna RNA secondary structure server provides a web interface to the most frequently used functions of the Vienna RNA software package for the analysis of RNA secondary structures. It currently offers prediction of secondary structure from a single sequence, prediction of the consensus secondary structure for a set of aligned sequences and the design of sequences that will fold into a predefined structure. All three services can be accessed via the Vienna RNA web server at http://rna.tbi.univie.ac.at/. <s> BIB003 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> The abbreviated name, ‘mfold web server’, describes a number of closely related software applications available on the World Wide Web (WWW) for the prediction of the secondary structure of single stranded nucleic acids. The objective of this web server is to provide easy access to RNA and DNA folding and hybridization software to the scientific community at large. By making use of universally available web GUIs (Graphical User Interfaces), the server circumvents the problem of portability of this software. Detailed output, in the form of structure plots with or without reliability information, single strand frequency plots and ‘energy dot plots’, are available for the folding of single sequences. A variety of ‘bulk’ servers give less information, but in a shorter time and for up to hundreds of sequences at once. The portal for the mfold web server is http://www.bioinfo.rpi.edu/applications/mfold. This URL will be referred to as ‘MFOLDROOT’. <s> BIB004 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Given that microRNAs select their targets by nucleotide base-pairing, it follows that it should be possible to find microRNA targets computationally. There has been considerable progress, but assessing success and biological significance requires a move into the 'wet' lab. <s> BIB005 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> The Sfold web server provides user-friendly access to Sfold, a recently developed nucleic acid folding software package, via the World Wide Web (WWW). The software is based on a new statistical sampling paradigm for the prediction of RNA secondary structure. One of the main objectives of this software is to offer computational tools for the rational design of RNA-targeting nucleic acids, which include small interfering RNAs (siRNAs), antisense oligonucleotides and trans-cleaving ribozymes for gene knock-down studies. The methodology for siRNA design is based on a combination of RNA target accessibility prediction, siRNA duplex thermodynamic properties and empirical design rules. Our approach to target accessibility evaluation is an original extension of the underlying RNA folding algorithm to account for the likely existence of a population of structures for the target mRNA.
In addition to the application modules Sirna, Soligo and Sribo for siRNAs, antisense oligos and ribozymes, respectively, the module Srna offers comprehensive features for statistical representation of sampled structures. Detailed output in both graphical and text formats is available for all modules. The Sfold server is available at http://sfold.wadsworth.org and http://www.bioinfo.rpi.edu/applications/sfold. <s> BIB006 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> MicroRNAs (miRNAs) are short RNAs that post-transcriptionally regulate the expression of target genes by binding to the target mRNAs. Although a large number of animal miRNAs has been defined, only a few targets are known. In contrast to plant miRNAs, which usually bind nearly perfectly to their targets, animal miRNAs bind less tightly, with a few nucleotides being unbound, thus producing more complex secondary structures of miRNA/target duplexes. Here, we present a program, RNA-hybrid, that predicts multiple potential binding sites of miRNAs in large target RNAs. In general, the program finds the energetically most favorable hybridization sites of a small RNA in a large RNA. Intramolecular hybridizations, that is, base pairings between target nucleotides or between miRNA nucleotides are not allowed. For large targets, the time complexity of the algorithm is linear in the target length, allowing many long targets to be searched in a short time. Statistical significance of predicted targets is assessed with an extreme value statistics of length normalized minimum free energies, a Poisson approximation of multiple binding sites, and the calculation of effective numbers of orthologous targets in comparative studies of multiple organisms. We applied our method to the prediction of Drosophila miRNA targets in 3'UTRs and coding sequence. RNAhybrid, with its accompanying programs RNAcalibrate and RNAeffective, is available for download and as a Web tool on the Bielefeld Bioinformatics Server (http://bibiserv.techfak.uni-bielefeld.de/rnahybrid/). <s> BIB007 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Background: MicroRNAs (miRNAs) mediate a form of translational regulation in animals. Hundreds of animal miRNAs have been identified, but only a few of their targets are known. Prediction of miRNA targets for translational regulation is challenging, since the interaction with the target mRNA usually occurs via incomplete and interrupted base pairing. Moreover, the rules that govern such interactions are incompletely defined. Results: MovingTargets is a software program that allows a researcher to predict a set of miRNA targets that satisfy an adjustable set of biological constraints. We used MovingTargets to identify a high-likelihood set of 83 miRNA targets in Drosophila, all of which adhere to strict biological constraints. We tested and verified 3 of these predictions in cultured cells, including a target for the Drosophila let-7 homolog. In addition, we utilized the flexibility of MovingTargets by relaxing the biological constraints to identify and validate miRNAs targeting tramtrack, a gene also known to be subject to translational control dependent on the RNA binding protein Musashi. Conclusion: MovingTargets is a flexible tool for the accurate prediction of miRNA targets in Drosophila.
MovingTargets can be used to conduct a genome-wide search of miRNA targets using all Drosophila miRNAs and potential targets, or it can be used to conduct a focused search for miRNAs targeting a specific gene. In addition, the values for a set of biological constraints used to define a miRNA target are adjustable, allowing the software to incorporate the rules used to characterize a miRNA target as these rules are experimentally determined and interpreted. <s> BIB008 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> MicroRNAs are small noncoding RNAs that serve as posttranscriptional regulators of gene expression in higher eukaryotes. Their widespread and important role in animals is highlighted by recent estimates that 20%-30% of all genes are microRNA targets. Here, we report that a large set of genes involved in basic cellular processes avoid microRNA regulation due to short 3'UTRs that are specifically depleted of microRNA binding sites. For individual microRNAs, we find that coexpressed genes avoid microRNA sites, whereas target genes and microRNAs are preferentially expressed in neighboring tissues. This mutually exclusive expression argues that microRNAs confer accuracy to developmental gene-expression programs, thus ensuring tissue identity and supporting cell-lineage decisions. <s> BIB009 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> microRNAs are small noncoding genes that regulate the protein production of genes by binding to partially complementary sites in the mRNAs of targeted genes. Here, using our algorithm PicTar, we exploit cross-species comparisons to predict, on average, 54 targeted genes per microRNA above noise in Drosophila melanogaster. Analysis of the functional annotation of target genes furthermore suggests specific biological functions for many microRNAs. We also predict combinatorial targets for clustered microRNAs and find that some clustered microRNAs are likely to coordinately regulate target genes. Furthermore, we compare microRNA regulation between insects and vertebrates. We find that the widespread extent of gene regulation by microRNAs is comparable between flies and mammals but that certain microRNAs may function in clade-specific modes of gene regulation. One of these microRNAs (miR-210) is predicted to contribute to the regulation of fly oogenesis. We also list specific regulatory relationships that appear to be conserved between flies and mammals. Our findings provide the most extensive microRNA target predictions in Drosophila to date, suggest specific functional roles for most microRNAs, indicate the existence of coordinate gene regulation executed by clustered microRNAs, and shed light on the evolution of microRNA function across large evolutionary distances. All predictions are freely accessible at our searchable Web site http://pictar.bio.nyu.edu. <s> BIB010 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> We present a new microRNA target prediction algorithm called TargetBoost, and show that the algorithm is stable and identifies more true targets than do existing algorithms. 
TargetBoost uses machine learning on a set of validated microRNA targets in lower organisms to create weighted sequence motifs that capture the binding characteristics between microRNAs and their targets. Existing algorithms require candidates to have (1) near-perfect complementarity between microRNAs' 5' end and their targets; (2) relatively high thermodynamic duplex stability; (3) multiple target sites in the target's 3' UTR; and (4) evolutionary conservation of the target between species. Most algorithms use one of the two first requirements in a seeding step, and use the three others as filters to improve the method's specificity. The initial seeding step determines an algorithm's sensitivity and also influences its specificity. As all algorithms may add filters to increase the specificity, we propose that methods should be compared before such filtering. We show that TargetBoost's weighted sequence motif approach is favorable to using both the duplex stability and the sequence complementarity steps. (TargetBoost is available as a Web tool from http://www.interagon.com/demo/.). <s> BIB011 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> The DINAMelt web server simulates the melting of one or two single-stranded nucleic acids in solution. The goal is to predict not just a melting temperature for a hybridized pair of nucleic acids, but entire equilibrium melting profiles as a function of temperature. The two molecules are not required to be complementary, nor must the two strand concentrations be equal. Competition among different molecular species is automatically taken into account. Calculations consider not only the heterodimer, but also the two possible homodimers, as well as the folding of each single-stranded molecule. For each of these five molecular species, free energies are computed by summing Boltzmann factors over every possible hybridized or folded state. For temperatures within a user-specified range, calculations predict species mole fractions together with the free energy, enthalpy, entropy and heat capacity of the ensemble. Ultraviolet (UV) absorbance at 260 nm is simulated using published extinction coefficients and computed base pair probabilities. All results are available as text files and plots are provided for species concentrations, heat capacity and UV absorbance versus temperature. This server is connected to an active research program and should evolve as new theory and software are developed. The server URL is http://www.bioinfo. rpi.edu/applications/hybrid/. <s> BIB012 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Summary We present rna22 , a method for identifying microRNA binding sites and their corresponding heteroduplexes. Rna22 does not rely upon cross-species conservation, is resilient to noise, and, unlike previous methods, it first finds putative microRNA binding sites in the sequence of interest, then identifies the targeting microRNA. Computationally, we show that rna22 identifies most of the currently known heteroduplexes. Experimentally, with luciferase assays, we demonstrate average repressions of 30% or more for 168 of 226 tested targets. 
The analysis suggests that some microRNAs may have as many as a few thousand targets, and that between 74% and 92% of the gene transcripts in four model genomes are likely under microRNA control through their untranslated and amino acid coding regions. We also extended the method's key idea to a low-error microRNA-precursor-discovery scheme; our studies suggest that the number of microRNA precursors in mammalian genomes likely ranges in the tens of thousands. <s> BIB013 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> BackgroundThe accurate prediction of a comprehensive set of messenger RNAs (targets) regulated by animal microRNAs (miRNAs) remains an open problem. In particular, the prediction of targets that do not possess evolutionarily conserved complementarity to their miRNA regulators is not adequately addressed by current tools.ResultsWe have developed MicroTar, an animal miRNA target prediction tool based on miRNA-target complementarity and thermodynamic data. The algorithm uses predicted free energies of unbound mRNA and putative mRNA-miRNA heterodimers, implicitly addressing the accessibility of the mRNA 3' untranslated region. MicroTar does not rely on evolutionary conservation to discern functional targets, and is able to predict both conserved and non-conserved targets. MicroTar source code and predictions are accessible at http://tiger.dbs.nus.edu.sg/microtar/, where both serial and parallel versions of the program can be downloaded under an open-source licence.ConclusionMicroTar achieves better sensitivity than previously reported predictions when tested on three distinct datasets of experimentally-verified miRNA-target interactions in C. elegans, Drosophila, and mouse. <s> BIB014 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> MicroRNAs are key regulators of gene expression, but the precise mechanisms underlying their interaction with their mRNA targets are still poorly understood. Here, we systematically investigate the role of target-site accessibility, as determined by base-pairing interactions within the mRNA, in microRNA target recognition. We experimentally show that mutations diminishing target accessibility substantially reduce microRNA-mediated translational repression, with effects comparable to those of mutations that disrupt sequence complementarity. We devise a parameter-free model for microRNA-target interaction that computes the difference between the free energy gained from the formation of the microRNA-target duplex and the energetic cost of unpairing the target to make it accessible to the microRNA. This model explains the variability in our experiments, predicts validated targets more accurately than existing algorithms, and shows that genomes accommodate site accessibility by preferentially positioning targets in highly accessible regions. Our study thus demonstrates that target accessibility is a critical factor in microRNA function. <s> BIB015 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> MicroRNAs (miRNAs) are small noncoding RNAs that repress protein synthesis by binding to target messenger RNAs. We investigated the effect of target secondary structure on the efficacy of repression by miRNAs. 
Using structures predicted by the Sfold program, we model the interaction between an miRNA and a target as a two-step hybridization reaction: nucleation at an accessible target site followed by hybrid elongation to disrupt local target secondary structure and form the complete miRNA-target duplex. This model accurately accounts for the sensitivity to repression by let-7 of various mutant forms of the Caenorhabditis elegans lin-41 3′ untranslated region and for other experimentally tested miRNA-target interactions in C. elegans and Drosophila melanogaster. These findings indicate a potent effect of target structure on target recognition by miRNAs and establish a structure-based framework for genome-wide identification of animal miRNA targets. <s> BIB016 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Summary: AU-rich elements (AREs), present in mRNA 3′-UTRs, are potent posttranscriptional regulatory signals that can rapidly effect changes in mRNA stability and translation, thereby dramatically altering gene expression with clinical and developmental consequences. In human cell lines, the TNFα ARE enhances translation relative to mRNA levels upon serum starvation, which induces cell-cycle arrest. An in vivo crosslinking-coupled affinity purification method was developed to isolate ARE-associated complexes from activated versus basal translation conditions. We surprisingly found two microRNP-related proteins, fragile-X-mental-retardation-related protein 1 (FXR1) and Argonaute 2 (AGO2), that associate with the ARE exclusively during translation activation. Through tethering and shRNA-knockdown experiments, we provide direct evidence for the translation activation function of both FXR1 and AGO2 and demonstrate their interdependence for upregulation. This novel cell-growth-dependent translation activation role for FXR1 and AGO2 allows new insights into ARE-mediated signaling and connects two important posttranscriptional regulatory systems in an unexpected way. <s> BIB017 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Background: MicroRNAs (miRs) are small noncoding RNAs that bind to complementary/partially complementary sites in the 3' untranslated regions of target genes to regulate protein production of the target transcript and to induce mRNA degradation or mRNA cleavage. The ability to perform accurate, high-throughput identification of physiologically active miR targets would enable functional characterization of individual miRs. Current target prediction methods include traditional approaches that are based on specific base-pairing rules in the miR's seed region and implementation of cross-species conservation of the target site, and machine learning (ML) methods that explore patterns that contrast true and false miR-mRNA duplexes. However, in the case of the traditional methods research shows that some seed region matches that are conserved are false positives and that some of the experimentally validated target sites are not conserved. Results: We present HuMiTar, a computational method for identifying common targets of miRs, which is based on a scoring function that considers base-pairing for both seed and non-seed positions for human miR-mRNA duplexes.
Our design shows that certain non-seed miR nucleotides, such as 14, 18, 13, 11, and 17, are characterized by a strong bias towards formation of Watson-Crick pairing. We contrasted HuMiTar with several representative competing methods on two sets of human miR targets and a set of ten glioblastoma oncogenes. Comparison with the two best performing traditional methods, PicTar and TargetScanS, and a representative ML method that considers the non-seed positions, NBmiRTar, shows that HuMiTar predictions include majority of the predictions of the other three methods. At the same time, the proposed method is also capable of finding more true positive targets as a trade-off for an increased number of predictions. Genome-wide predictions show that the proposed method is characterized by 1.99 signal-to-noise ratio and linear, with respect to the length of the mRNA sequence, computational complexity. The ROC analysis shows that HuMiTar obtains results comparable with PicTar, which are characterized by high true positive rates that are coupled with moderate values of false positive rates.ConclusionThe proposed HuMiTar method constitutes a step towards providing an efficient model for studying translational gene regulation by miRs. <s> BIB018 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> BackgroundMicroRNAs are small endogenously expressed non-coding RNA molecules that regulate target gene expression through translation repression or messenger RNA degradation. MicroRNA regulation is performed through pairing of the microRNA to sites in the messenger RNA of protein coding genes. Since experimental identification of miRNA target genes poses difficulties, computational microRNA target prediction is one of the key means in deciphering the role of microRNAs in development and disease.ResultsDIANA-microT 3.0 is an algorithm for microRNA target prediction which is based on several parameters calculated individually for each microRNA and combines conserved and non-conserved microRNA recognition elements into a final prediction score, which correlates with protein production fold change. Specifically, for each predicted interaction the program reports a signal to noise ratio and a precision score which can be used as an indication of the false positive rate of the prediction.ConclusionRecently, several computational target prediction programs were benchmarked based on a set of microRNA target genes identified by the pSILAC method. In this assessment DIANA-microT 3.0 was found to achieve the highest precision among the most widely used microRNA target prediction programs reaching approximately 66%. The DIANA-microT 3.0 prediction results are available online in a user friendly web server at http://www.microrna.gr/microT <s> BIB019 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> BackgroundMicroRNAs (miRNAs) are single-stranded non-coding RNAs known to regulate a wide range of cellular processes by silencing the gene expression at the protein and/or mRNA levels. Computational prediction of miRNA targets is essential for elucidating the detailed functions of miRNA. However, the prediction specificity and sensitivity of the existing algorithms are still poor to generate meaningful, workable hypotheses for subsequent experimental testing. 
Constructing a richer and more reliable training data set and developing an algorithm that properly exploits this data set would be the key to improve the performance current prediction algorithms.ResultsA comprehensive training data set is constructed for mammalian miRNAs with its positive targets obtained from the most up-to-date miRNA target depository called miRecords and its negative targets derived from 20 microarray data. A new algorithm SVMicrO is developed, which assumes a 2-stage structure including a site support vector machine (SVM) followed by a UTR-SVM. SVMicrO makes prediction based on 21 optimal site features and 18 optimal UTR features, selected by training from a comprehensive collection of 113 site and 30 UTR features. Comprehensive evaluation of SVMicrO performance has been carried out on the training data, proteomics data, and immunoprecipitation (IP) pull-down data. Comparisons with some popular algorithms demonstrate consistent improvements in prediction specificity, sensitivity and precision in all tested cases. All the related materials including source code and genome-wide prediction of human targets are available at http://compgenomics.utsa.edu/svmicro.html.ConclusionsA 2-stage SVM based new miRNA target prediction algorithm called SVMicrO is developed. SVMicrO is shown to be able to achieve robust performance. It holds the promise to achieve continuing improvement whenever better training data that contain additional verified or high confidence positive targets and properly selected negative targets are available. <s> BIB020 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Considering accessibility of the 3′UTR is believed to increase the precision of microRNA target predictions. We show that, contrary to common belief, ranking by the hybridization energy or by the sum of the opening and hybridization energies, used in currently available algorithms, is not an efficient way to rank predictions. Instead, we describe an algorithm which also considers only the accessible binding sites but which ranks predictions according to over-representation. When compared with experimentally validated and refuted targets in the fruit fly and human, our algorithm shows a remarkable improvement in precision while significantly reducing the computational cost in comparison with other free energy based methods. In the human genome, our algorithm has at least twice higher precision than other methods with their default parameters. In the fruit fly, we find five times more validated targets among the top 500 predictions than other methods with their default parameters. Furthermore, using a common statistical framework we demonstrate explicitly the advantages of using the canonical ensemble instead of using the minimum free energy structure alone. We also find that ‘naive’ global folding sometimes outperforms the local folding approach. <s> BIB021 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Summary MicroRNAs (miRNAs) are endogenous ∼22-nucleotide RNAs that play important gene-regulatory roles by pairing to the mRNAs of protein-coding genes to direct their repression. 
Repression of these regulatory targets leads to decreased translational efficiency and/or decreased mRNA levels, but the relative contributions of these two outcomes have been largely unknown, particularly for endogenous targets expressed at low-to-moderate levels. Here, we use ribosome profiling to measure the overall effects on protein production and compare these to simultaneously measured effects on mRNA levels. For both ectopic and endogenous miRNA regulatory interactions, lowered mRNA levels account for most (≥84%) of the decreased protein production. These results show that changes in mRNA levels closely reflect the impact of miRNAs on gene expression and indicate that destabilization of target mRNAs is the predominant reason for reduced protein output. <s> BIB022 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> microRNAs (miRNAs) are small endogenous RNA molecules that are implicated in many biological processes through post-transcriptional regulation of gene expression. The DIANA-microT Web server provides a user-friendly interface for comprehensive computational analysis of miRNA targets in human and mouse. The server has now been extended to support predictions for two widely studied species: Drosophila melanogaster and Caenorhabditis elegans. In the updated version, the Web server enables the association of miRNAs to diseases through bibliographic analysis and provides insights for the potential involvement of miRNAs in biological processes. The nomenclature used to describe mature miRNAs along different miRBase versions has been extensively analyzed, and the naming history of each miRNA has been extracted. This enables the identification of miRNA publications regardless of possible nomenclature changes. User interaction has been further refined allowing users to save results that they wish to analyze further. A connection to the UCSC genome browser is now provided, enabling users to easily preview predicted binding sites in comparison to a wide array of genomic tracks, such as single nucleotide polymorphisms. The Web server is publicly accessible in www.microrna.gr/microT-v4. <s> BIB023 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> In animals, RNA binding proteins (RBPs) and microRNAs (miRNAs) post-transcriptionally regulate the expression of virtually all genes by binding to RNA. Recent advances in experimental and computational methods facilitate transcriptome-wide mapping of these interactions. It is thought that the combinatorial action of RBPs and miRNAs on target mRNAs form a posttranscriptional regulatory code. We provide a database that supports the quest for deciphering this regulatory code. Within doRiNA, we are systematically curating, storing and integrating binding site data for RBPs and miRNAs. Users are free to take a target (mRNA) or regulator (RBP and/or miRNA) centric view on the data. We have implemented a database framework with short query response times for complex searches (e.g. asking for all targets of a particular combination of regulators). All search results can be browsed, inspected and analyzed in conjunction with a huge selection of other genome-wide data, because our database is directly linked to a local copy of the UCSC genome browser. At the time of writing, doRiNA encompasses RBP data for the human, mouse and worm genomes. 
For computational miRNA target site predictions, we provide an update of PicTar predictions. <s> BIB024 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> The last few decades observed an increasing interest in development and application of 1-dimensional (1D) descriptors of protein structure. These descriptors project 3D structural features onto 1D strings of residue-wise structural assignments. They cover a wide-range of structural aspects including conformation of the backbone, burying depth/solvent exposure and flexibility of residues, and inter-chain residue-residue contacts. We perform first-of-its-kind comprehensive comparative review of the existing 1D structural descriptors. We define, review and categorize ten structural descriptors and we also describe, summarize and contrast over eighty computational models that are used to predict these descriptors from the protein sequences. We show that the majority of the recent sequence-based predictors utilize machine learning models, with the most popular being neural networks, support vector machines, hidden Markov models, and support vector and linear regressions. These methods provide high-throughput predictions and most of them are accessible to a non-expert user via web servers and/or stand-alone software packages. We empirically evaluate several recent sequence-based predictors of secondary structure, disorder, and solvent accessibility descriptors using a benchmark set based on CASP8 targets. Our analysis shows that the secondary structure can be predicted with over 80% accuracy and segment overlap (SOV), disorder with over 0.9 AUC, 0.6 Matthews Correlation Coefficient (MCC), and 75% SOV, and relative solvent accessibility with PCC of 0.7 and MCC of 0.6 (0.86 when homology is used). We demonstrate that the secondary structure predicted from sequence without the use of homology modeling is as good as the structure extracted from the 3D folds predicted by top-performing template-based methods. <s> BIB025 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Background: Machine learning based miRNA-target prediction algorithms often fail to obtain a balanced prediction accuracy in terms of both sensitivity and specificity due to lack of the gold standard of negative examples, miRNA-targeting site context specific relevant features and efficient feature selection process. Moreover, all the sequence, structure and machine learning based algorithms are unable to distribute the true positive predictions preferentially at the top of the ranked list; hence the algorithms become unreliable to the biologists. In addition, these algorithms fail to obtain considerable combination of precision and recall for the target transcripts that are translationally repressed at protein level. Methodology/Principal Finding: In the proposed article, we introduce an efficient miRNA-target prediction system MultiMiTar, a Support Vector Machine (SVM) based classifier integrated with a multiobjective metaheuristic based feature selection technique. The robust performance of the proposed method is mainly the result of using high quality negative examples and selection of biologically relevant miRNA-targeting site context specific features.
The features are selected by using a novel feature selection technique AMOSA-SVM, which integrates the multi objective optimization technique Archived Multi-Objective Simulated Annealing (AMOSA) and SVM. Conclusions/Significance: MultiMiTar is found to achieve a much higher Matthews correlation coefficient (MCC) of 0.583 and average class-wise accuracy (ACA) of 0.8 compared to the other target prediction methods for a completely independent test data set. The obtained MCC and ACA values of these algorithms range from −0.269 to 0.155 and 0.321 to 0.582, respectively. Moreover, it shows a more balanced result in terms of precision and sensitivity (recall) for the translationally repressed data set as compared to all the other existing methods. An important aspect is that the true positive predictions are distributed preferentially at the top of the ranked list, which makes MultiMiTar reliable for the biologists. MultiMiTar is now available as an online tool at www.isical.ac.in/~bioinfo_miu/multimitar.htm. MultiMiTar software can be downloaded from www.isical.ac.in/~bioinfo_miu/multimitar-download.htm. <s> BIB026 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> BACKGROUND: Many computational microRNA target prediction tools are focused on several key features, including complementarity to the 5' seed of miRNAs and evolutionary conservation. While these features allow for successful target identification, not all miRNA target sites are conserved and adhere to canonical seed complementarity. Several studies have propagated the use of energy features of mRNA:miRNA duplexes as an alternative feature. However, different independent evaluations reported conflicting results on the reliability of energy-based predictions. Here, we reassess the usefulness of energy features for mammalian target prediction, aiming to relax or eliminate the need for perfect seed matches and conservation requirement. METHODOLOGY/PRINCIPAL FINDINGS: We detect significant differences of energy features at experimentally supported human miRNA target sites and at genome-wide sites of AGO protein interaction. This trend is confirmed on datasets that assay the effect of miRNAs on mRNA and protein expression changes, and a simple linear regression model leads to significant correlation of predicted versus observed expression change. Compared to 6-mer seed matches as baseline, application of our energy-based model leads to ∼3-5-fold enrichment on highly down-regulated targets, and allows for prediction of strictly imperfect targets with enrichment above baseline. CONCLUSIONS/SIGNIFICANCE: In conclusion, our results indicate significant promise for energy-based miRNA target prediction that includes a broader range of targets without having to use conservation or impose stringent seed match rules. <s> BIB027 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> A number of web tools are available for the prediction and identification of target microRNAs (miRNAs). The choice, availability, validity and selection of an optimal yet appropriate tool are a challenge for the design of high throughput assays with promising miRNA targets. The current trends and challenges for target microRNA (miRNA) prediction, identification and selection are described in this review.
<s> BIB028 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Summary: Although small non-coding RNAs, such as microRNAs, have well-established functions in the cell, long non-coding RNAs (lncRNAs) have only recently started to emerge as abundant regulators of cell physiology, and their functions may be diverse. A small number of studies describe interactions between small and lncRNAs, with lncRNAs acting either as inhibitory decoys or as regulatory targets of microRNAs, but such interactions are still poorly explored. To facilitate the study of microRNA–lncRNA interactions, we implemented miRcode: a comprehensive searchable map of putative microRNA target sites across the complete GENCODE annotated transcriptome, including 10 419 lncRNA genes in the current version. ::: ::: Availability: http://www.mircode.org ::: ::: Contact: es.ug@nossral.kire ::: ::: Supplementary Information: Supplementary data are available at Bioinformatics online. <s> BIB029 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> BackgroundMicroRNA (miRNA) target genes tend to have relatively long and conserved 3' untranslated regions (UTRs), but to what degree these characteristics contribute to miRNA targeting is poorly understood. Different high-throughput experiments have, for example, shown that miRNAs preferentially regulate genes with both short and long 3' UTRs and that target site conservation is both important and irrelevant for miRNA targeting.ResultsWe have analyzed several gene context-dependent features, including 3' UTR length, 3' UTR conservation, and messenger RNA (mRNA) expression levels, reported to have conflicting influence on miRNA regulation. By taking into account confounding factors such as technology-dependent experimental bias and competition between transfected and endogenous miRNAs, we show that two factors - target gene expression and competition - could explain most of the previously reported experimental differences. Moreover, we find that these and other target site-independent features explain about the same amount of variation in target gene expression as the target site-dependent features included in the TargetScan model.ConclusionsOur results show that it is important to consider confounding factors when interpreting miRNA high throughput experiments and urge special caution when using microarray data to compare average regulatory effects between groups of genes that have different average gene expression levels. <s> BIB030 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> MicroRNAs (miRNAs) are small endogenous RNA molecules that regulate gene expression through mRNA degradation and/or translation repression, affecting many biological processes. DIANA-microT web server (http://www.microrna.gr/webServer) is dedicated to miRNA target prediction/functional analysis, and it is being widely used from the scientific community, since its initial launch in 2009. DIANA-microT v5.0, the new version of the microT server, has been significantly enhanced with an improved target prediction algorithm, DIANAmicroT-CDS. It has been updated to incorporate miRBase version 18 and Ensembl version 69. 
The in silico-predicted miRNA–gene interactions in Homo sapiens, Mus musculus, Drosophila melanogaster and Caenorhabditis elegans exceed 11 million in total. The web server was completely redesigned, to host a series of sophisticated workflows, which can be used directly from the on-line web interface, enabling users without the necessary bioinformatics infrastructure to perform advanced multi-step functional miRNA analyses. For instance, one available pipeline performs miRNA target prediction using different thresholds and meta-analysis statistics, followed by pathway enrichment analysis. DIANAmicroT web server v5.0 also supports a complete integration with the Taverna Workflow Management System (WMS), using the in-house developed DIANA-Taverna Plug-in. This plug-in provides ready-to-use modules for miRNA target prediction and functional analysis, which can be used to form advanced high-throughput analysis pipelines. <s> BIB031 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Pattern recognition is concerned with the development of systems that learn to solve a given problem using a set of example instances, each represented by a number of features. These problems include clustering, the grouping of similar instances; classification, the task of assigning a discrete label to a given instance; and dimensionality reduction, combining or selecting features to arrive at a more useful representation. The use of statistical pattern recognition algorithms in bioinformatics is pervasive. Classification and clustering are often applied to high-throughput measurement data arising from microarray, mass spectrometry and next-generation sequencing experiments for selecting markers, predicting phenotype and grouping objects or genes. Less explicitly, classification is at the core of a wide range of tools such as predictors of genes, protein function, functional or genetic interactions, etc., and used extensively in systems biology. A course on pattern recognition (or machine learning) should therefore be at the core of any bioinformatics education program. In this review, we discuss the main elements of a pattern recognition course, based on material developed for courses taught at the BSc, MSc and PhD levels to an audience of bioinformaticians, computer scientists and life scientists. We pay attention to common problems and pitfalls encountered in applications and in interpretation of the results obtained. <s> BIB032 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> MicroRNAs (miRNAs) have been emerged as a novel class of endogenous posttranscriptional regulators in a variety of animal and plant species. One challenge facing miRNA research is to accurately identify the target mRNAs, because of the very limited sequence complementarity between miRNAs and their target sites, and the scarcity of experimentally validated targets to guide accurate prediction. In this paper, we propose a new method called SuperMirTar that exploits super vised distance learning to predict miRNA targets. 
Specifically, we use the experimentally supported miRNA-mRNA pairs as a training set to learn a distance metric function that minimizes the distances between miRNAs and mRNAs with validated interactions, then use the learned function to calculate the distances of test miRNA-mRNA interactions, and those with smaller distances than a predefined threshold are regarded as true interactions. We carry out performance comparison between the proposed approach and seven existing methods on independent datasets; the results show that our method achieves superior performance and can effectively narrow the gap between the number of predicted miRNA targets and the number of experimentally validated ones. <s> BIB033 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Motivation: The massive spread of repetitive elements in the human genome presents a substantial challenge to the organism, as such elements may accidentally contain seemingly functional motifs. A striking example is offered by the roughly one million copies of Alu repeats in the genome, of which ~0.5% reside within genes’ untranslated regions (UTRs), presenting ~30 000 novel potential targets for highly conserved microRNAs (miRNAs). Here, we examine the functionality of miRNA targets within Alu elements in 3′UTRs in the human genome. ::: ::: Results: Using a comprehensive dataset of miRNA overexpression assays, we show that mRNAs with miRNA targets within Alus are significantly less responsive to the miRNA effects compared with mRNAs that have the same targets outside Alus. Using Ago2-binding mRNA profiling, we confirm that the miRNA machinery avoids miRNA targets within Alus, as opposed to the highly efficient binding of targets outside Alus. We propose three features that prevent potential miRNA sites within Alus from being recognized by the miRNA machinery: (i) Alu repeats that contain miRNA targets and genuine functional miRNA targets appear to reside in distinct mutually exclusive territories within 3′UTRs; (ii) Alus have tight secondary structure that may limit access to the miRNA machinery; and (iii) A-to-I editing of Alu-derived mRNA sequences may divert miRNA targets. The combination of these features is proposed to allow toleration of Alu insertions into mRNAs. Nonetheless, a subset of miRNA targets within Alus appears not to possess any of the aforementioned features, and thus may represent cases where Alu insertion in the genome has introduced novel functional miRNA targets. ::: ::: Contact: moshe.oren@weizmann.ac.il or Pilpel@weizmann.ac.il ::: ::: Supplementary information: Supplementary data are available at Bioinformatics online. <s> BIB034 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Most of what is presently known about how miRNAs regulate gene expression comes from studies that characterized the regulatory effect of miRNA binding sites located in the 3' untranslated regions (UTR) of mRNAs. In recent years, there has been increasing evidence that miRNAs also bind in the coding region (CDS), but the implication of these interactions remains obscure because they have a smaller impact on mRNA stability compared with miRNA-target interactions that involve 3' UTRs. Here we show that miRNA-complementary sites that are located in both CDS and 3'-UTRs are under selection pressure and share the same sequence and structure properties. 
Analyzing recently published data of ribosome-protected fragment profiles upon miRNA transfection from the perspective of the location of miRNA-complementary sites, we find that sites located in the CDS are most potent in inhibiting translation, while sites located in the 3' UTR are more efficient at triggering mRNA degradation. Our study suggests that miRNAs may combine targeting of CDS and 3' UTR to flexibly tune the time scale and magnitude of their post-transcriptional regulatory effects. <s> BIB035 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Considered miRNA target predictors <s> Finding microRNA targets in the coding region is difficult due to the overwhelming signal encoding the amino acid sequence. Here, we introduce an algorithm (called PACCMIT-CDS) that finds potential microRNA targets within coding sequences by searching for conserved motifs that are complementary to the microRNA seed region and also overrepresented in comparison with a background model preserving both codon usage and amino acid sequence. Precision and sensitivity of PACCMIT-CDS are evaluated using PAR-CLIP and proteomics data sets. Thanks to the properly constructed background, the new algorithm achieves a lower rate of false positives and better ranking of predictions than do currently available algorithms, which were designed to find microRNA targets within 3' UTRs. <s> BIB036
We selected several representative predictors for the empirical evaluation. The selected methods have to be conveniently accessible to the end users via a web server or a precomputed database. They also have to cover human and mouse, predict target sites (to perform evaluation at the duplex level) and provide a propensity (probability) of the interaction. Using these filters we selected eight methods (see Supplementary Table S6). We use the latest versions of these methods, except for PicTar2, which is substantially different from PicTar and no longer qualifies as a sequence-based predictor. PicTar 2005 was first published in 2005; five methods, including TargetScan 6.2, miRanda 2010, EIMMo3, miREE and mirTarget2 v4, were proposed or updated between 2010 and 2012; and two in 2013: DIANA-microT-CDS and miRmap v1.1. We excluded miREE from the evaluation because this method did not predict any targets on our TEST_duplex and TEST_gene data sets. The remaining seven methods use a diverse set of predictive models: four use heuristic scoring functions and three use machine learning models, including a Bayesian classifier, a support vector machine (SVM) and regression. miRmap was built using gene expression data, while the other methods were derived from low-throughput experimentally validated data. We collected predictions for these methods using either their online web servers or downloadable precomputed predictions. We recorded their predicted binding targets (sequences or positions) and the corresponding propensities. We consider 38 sequence-based methods, from the earliest predictor that was published in 2003 to the latest method that was released in 2013; a chronological list of these methods is shown in Table 1. We exclude the meta methods (because they are inconvenient to use and require the availability of results from the base methods) and approaches that rely on experimental data. Most of the miRNA target predictors were developed by different research groups, and several groups continue to maintain and update their algorithms. Cohen's group at EMBL proposed the first miRNA target predictor in 2003 BIB002 and updated it in 2005 BIB009. TargetScan and TargetScanS were developed by Bartel at MIT and Burge at Cambridge. Another popular tool, DIANA-microT, created by the Hatzigeorgiou group, has recently been updated to version 5.0 BIB019 BIB023 BIB031. Rajewsky's lab published their predictor PicTar in 2005 and updated it in 2011 BIB010 BIB024.

Predictive methodologies and mechanistic basis of miRNA-mRNA interaction

Table 1 summarizes the types of predictive models and the underlying details of the miRNA-mRNA interactions that they use to predict miRNA targets. There are two categories of predictive models: heuristic and empirical. The heuristic models use screening algorithms that search positions along the mRNA sequence and scoring functions that filter targets by combining the values of several inputs in an ad hoc manner. Early predictors applied heuristic approaches owing to the lack of a sufficient amount of data to build empirical knowledge-based models. Even today the scoring function-based designs are dominant (19 of 38 methods) because of their easy setup, flexibility to integrate different types of inputs and computational efficiency. The empirical models are inferred from a training data set.
Given the success of machine learning-based models in bioinformatics BIB025 BIB032 and the growing size of the experimental data, since 2006 progressively more predictors use empirical machine learning models, including SVMs, decision trees and artificial neural networks (ANNs). The predictive models use inputs that are derived from the knowledge of mechanistic details of the miRNA-mRNA interactions. The most commonly used predictive input is the complementarity of the base pairing between miRNA and mRNA. In contrast to the near-perfect base pairing in plants BIB001, animal miRNAs usually bind mRNAs with only some positions paired BIB005. Complementarity of the base pairing in the seed region (the first eight nucleotides at the 5′ end of the miRNA) is particularly important; only six methods did not consider it. In comparison, 15 methods did not consider complementarity in the nonseed region. The major types of complementarity in the seed include the 6-mer (six consecutive matches at positions two through seven from the 5′ end of the miRNA), 7-mer-A1 (extends the 6-mer with an adenine (A) at the target position opposite the first miRNA nucleotide, i.e. at the 3′ end of the site), 7-mer-m8 (seven consecutive matches at miRNA positions two through eight) and 8-mer (combines 7-mer-m8 and 7-mer-A1); these match types are illustrated in the short sketch after this paragraph. Some methods consider binding of the first eight nucleotides as important but do not restrict it to particular seed types. Moreover, several predictors (HuMiTar BIB018, TargetMiner, MultiMiTar BIB026, miREE and SuperMirTar BIB033) also suggest specific positions that are more useful for the prediction. These methods, except for HuMiTar, use machine learning models and empirical feature selection to find these positions. Another exception is that TargetBoost BIB011, RNA22 BIB013 and SVMicrO BIB020 use patterns of complementarity generated from native miRNA:mRNA complexes, rather than focusing on the seed types. The site accessibility and evolutionary conservation inputs are used to increase specificity. The accessibility is relevant because the miRNA:mRNA interaction requires binding of a relatively large RNA-induced silencing complex BIB028. This input is quantified with the content of adenine and uracil nucleotides (AU content) and with free energy, which estimates the stability of the mRNA sequences. Most target predictors use existing software, such as the Vienna RNA package BIB003, mFold BIB004, DINAMelt BIB012 and sFold BIB006, to calculate the free energy. The authors of RNAhybrid claim that their own approach prevents intramolecular base pairing and bulge loops, which leads to improved estimates of the free energy BIB007; this approach was also used in the predictor by Stark et al. BIB009 and in SuperMirTar BIB033. Most predictors calculate the free energy of the miRNA-target duplexes. However, several methods (MicroTar BIB014, STarMir BIB015, PITA BIB016, TargetMiner, SVMicrO BIB020, PACMIT BIB021 and miREE) calculate the arguably more relevant relative energy, which is the hybridization energy gained by miRNA:mRNA binding minus the disruption energy lost by opening up the local mRNA structure of the target. Several studies found that enriched AU content in mRNA 3′ untranslated regions (UTRs) is important for interaction with miRNAs BIB034 BIB017. This was exploited in 2003 in TargetScan, even before experimental data to that effect was published. Since then several methods have used this information (see 'AU %' column in Table 1).
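To make the seed-match nomenclature concrete, the following minimal Python sketch classifies an 8-nt window of an mRNA (read 5′→3′) against a miRNA (read 5′→3′) as an 8-mer, 7-mer-m8, 7-mer-A1 or 6-mer site. This is an illustrative sketch, not code from any of the surveyed predictors; the function names, the fixed 8-nt window convention and the strict Watson-Crick matching are our assumptions.

```python
# Minimal sketch of seed-match classification (8-mer, 7-mer-m8, 7-mer-A1, 6-mer).
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def revcomp(rna):
    """Reverse complement of an RNA string (5'->3' in, 5'->3' out)."""
    return "".join(COMPLEMENT[nt] for nt in reversed(rna))

def classify_seed_match(mirna, site):
    """Classify an 8-nt mRNA window (5'->3') against a miRNA (5'->3').

    Seed positions are counted from the miRNA 5' end: a 6-mer requires
    Watson-Crick pairing at positions 2-7, a 7-mer-m8 at positions 2-8,
    a 7-mer-A1 adds an adenine opposite miRNA position 1, and an 8-mer
    combines both requirements.
    """
    assert len(site) == 8, "expects an 8-nt target window"
    match_2_7 = site[1:7] == revcomp(mirna[1:7])   # miRNA positions 2-7
    match_2_8 = site[0:7] == revcomp(mirna[1:8])   # miRNA positions 2-8
    a1 = site[7] == "A"                            # A opposite miRNA position 1
    if match_2_8 and a1:
        return "8-mer"
    if match_2_8:
        return "7-mer-m8"
    if match_2_7 and a1:
        return "7-mer-A1"
    if match_2_7:
        return "6-mer"
    return None

# Example: a canonical let-7a site.
print(classify_seed_match("UGAGGUAGUAGGUUGUAUAGUU", "CUACCUCA"))  # -> "8-mer"
```

Real predictors layer additional inputs on top of such a match (conservation, accessibility, free energy), but the window classification above captures the seed-type definitions referred to throughout Table 1.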
Use of the evolutionary conservation of miRNA targets is motivated by the premise that 'similar' species should share common miRNAs and their targets. However, this leads to omission of the nonconserved targets BIB014 BIB027. The value of including target conservation remains an open question; Table 1 reveals that conservation is used less frequently in recent years. Still, methods that search for targets in long coding sequences (CDSs) use conservation to improve specificity BIB008 BIB029 BIB035 BIB036. Based on the observation that targeting by multiple sites enhances mRNA regulation BIB022 BIB030, 17 of the 38 methods increase the propensity of binding to a target gene with multiple predicted sites (see 'Multiple sites' column in Table 1).
Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Criteria for empirical evaluation <s> MicroRNAs (miRNAs) are short RNA molecules that regulate gene expression by binding to target messenger RNAs and by controlling protein production or causing RNA cleavage. To date, functions have been assigned to only a few of the hundreds of identified miRNAs, in part because of the difficulty in identifying their targets. The short length of miRNAs and the fact that their complementarity to target sequences is imperfect mean that target identification in animal genomes is not possible by standard sequence comparison methods. Here we screen conserved 3′ UTR sequences from the Drosophila melanogaster genome for potential miRNA targets. The screening procedure combines a sequence search with an evaluation of the predicted miRNA–target heteroduplex structures and energies. We show that this approach successfully identifies the five previously validated let-7, lin-4, and bantam targets from a large database and predict new targets for Drosophila miRNAs. Our target predictions reveal striking clusters of functionally related targets among the top predictions for specific miRNAs. These include Notch target genes for miR-7, proapoptotic genes for the miR-2 family, and enzymes from a metabolic pathway for miR-277. We experimentally verified three predicted targets each for miR-7 and the miR-2 family, doubling the number of validated targets for animal miRNAs. Statistical analysis indicates that the best single predicted target sites are at the border of significance; thus, target predictions should be considered as tentative until experimentally validated. We identify features shared by all validated targets that can be used to evaluate target predictions for animal miRNAs. Our initial evaluation and experimental validation of target predictions suggest functions for two miRNAs. For others, the screen suggests plausible functions, such as a role for miR-277 as a metabolic switch controlling amino acid catabolism. Cross-genome comparison proved essential, as it allows reduction of the sequence search space. Improvements in genome annotation and increased availability of cDNA sequences from other genomes will allow more sensitive screens. An increase in the number of confirmed targets is expected to reveal general structural features that can be used to improve their detection. While the screen is likely to miss some targets, our study shows that valid targets can be identified from sequence alone. <s> BIB001 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Criteria for empirical evaluation <s> A new paradigm of gene expression regulation has emerged recently with the discovery of microRNAs (miRNAs). Most, if not all, miRNAs are thought to control gene expression, mostly by base pairing with miRNA-recognition elements (MREs) found in their messenger RNA (mRNA) targets. Although a large number of human miRNAs have been reported, many of their mRNA targets remain unknown. Here we used a combined bioinformatics and experimental approach to identify important rules governing miRNA-MRE recognition that allow prediction of human miRNA targets. We describe a computational program, "DIANA-microT", that identifies mRNA targets for animal miRNAs and predicts mRNA targets, bearing single MREs, for human and mouse miRNAs. 
<s> BIB002 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Criteria for empirical evaluation <s> MicroRNAs (miRNAs) are short RNAs that post-transcriptionally regulate the expression of target genes by binding to the target mRNAs. Although a large number of animal miRNAs has been defined, only a few targets are known. In contrast to plant miRNAs, which usually bind nearly perfectly to their targets, animal miRNAs bind less tightly, with a few nucleotides being unbound, thus producing more complex secondary structures of miRNA/target duplexes. Here, we present a program, RNA-hybrid, that predicts multiple potential binding sites of miRNAs in large target RNAs. In general, the program finds the energetically most favorable hybridization sites of a small RNA in a large RNA. Intramolecular hybridizations, that is, base pairings between target nucleotides or between miRNA nucleotides are not allowed. For large targets, the time complexity of the algorithm is linear in the target length, allowing many long targets to be searched in a short time. Statistical significance of predicted targets is assessed with an extreme value statistics of length normalized minimum free energies, a Poisson approximation of multiple binding sites, and the calculation of effective numbers of orthologous targets in comparative studies of multiple organisms. We applied our method to the prediction of Drosophila miRNA targets in 3'UTRs and coding sequence. RNAhybrid, with its accompanying programs RNAcalibrate and RNAeffective, is available for download and as a Web tool on the Bielefeld Bioinformatics Server (http://bibiserv.techfak.uni-bielefeld.de/rnahybrid/). <s> BIB003 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Criteria for empirical evaluation <s> microRNAs are small noncoding genes that regulate the protein production of genes by binding to partially complementary sites in the mRNAs of targeted genes. Here, using our algorithm PicTar, we exploit cross-species comparisons to predict, on average, 54 targeted genes per microRNA above noise in Drosophila melanogaster. Analysis of the functional annotation of target genes furthermore suggests specific biological functions for many microRNAs. We also predict combinatorial targets for clustered microRNAs and find that some clustered microRNAs are likely to coordinately regulate target genes. Furthermore, we compare microRNA regulation between insects and vertebrates. We find that the widespread extent of gene regulation by microRNAs is comparable between flies and mammals but that certain microRNAs may function in clade-specific modes of gene regulation. One of these microRNAs (miR-210) is predicted to contribute to the regulation of fly oogenesis. We also list specific regulatory relationships that appear to be conserved between flies and mammals. Our findings provide the most extensive microRNA target predictions in Drosophila to date, suggest specific functional roles for most microRNAs, indicate the existence of coordinate gene regulation executed by clustered microRNAs, and shed light on the evolution of microRNA function across large evolutionary distances. All predictions are freely accessible at our searchable Web site http://pictar.bio.nyu.edu. 
<s> BIB004 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Criteria for empirical evaluation <s> BackgroundMicroRNAs (miRs) are small noncoding RNAs that bind to complementary/partially complementary sites in the 3' untranslated regions of target genes to regulate protein production of the target transcript and to induce mRNA degradation or mRNA cleavage. The ability to perform accurate, high-throughput identification of physiologically active miR targets would enable functional characterization of individual miRs. Current target prediction methods include traditional approaches that are based on specific base-pairing rules in the miR's seed region and implementation of cross-species conservation of the target site, and machine learning (ML) methods that explore patterns that contrast true and false miR-mRNA duplexes. However, in the case of the traditional methods research shows that some seed region matches that are conserved are false positives and that some of the experimentally validated target sites are not conserved.ResultsWe present HuMiTar, a computational method for identifying common targets of miRs, which is based on a scoring function that considers base-pairing for both seed and non-seed positions for human miR-mRNA duplexes. Our design shows that certain non-seed miR nucleotides, such as 14, 18, 13, 11, and 17, are characterized by a strong bias towards formation of Watson-Crick pairing. We contrasted HuMiTar with several representative competing methods on two sets of human miR targets and a set of ten glioblastoma oncogenes. Comparison with the two best performing traditional methods, PicTar and TargetScanS, and a representative ML method that considers the non-seed positions, NBmiRTar, shows that HuMiTar predictions include majority of the predictions of the other three methods. At the same time, the proposed method is also capable of finding more true positive targets as a trade-off for an increased number of predictions. Genome-wide predictions show that the proposed method is characterized by 1.99 signal-to-noise ratio and linear, with respect to the length of the mRNA sequence, computational complexity. The ROC analysis shows that HuMiTar obtains results comparable with PicTar, which are characterized by high true positive rates that are coupled with moderate values of false positive rates.ConclusionThe proposed HuMiTar method constitutes a step towards providing an efficient model for studying translational gene regulation by miRs. <s> BIB005 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Criteria for empirical evaluation <s> BackgroundVirtually all currently available microRNA target site prediction algorithms require the presence of a (conserved) seed match to the 5' end of the microRNA. Recently however, it has been shown that this requirement might be too stringent, leading to a substantial number of missed target sites.ResultsWe developed TargetSpy, a novel computational approach for predicting target sites regardless of the presence of a seed match. It is based on machine learning and automatic feature selection using a wide spectrum of compositional, structural, and base pairing features covering current biological knowledge. 
Our model does not rely on evolutionary conservation, which allows the detection of species-specific interactions and makes TargetSpy suitable for analyzing unconserved genomic sequences.In order to allow for an unbiased comparison of TargetSpy to other methods, we classified all algorithms into three groups: I) no seed match requirement, II) seed match requirement, and III) conserved seed match requirement. TargetSpy predictions for classes II and III are generated by appropriate postfiltering. On a human dataset revealing fold-change in protein production for five selected microRNAs our method shows superior performance in all classes. In Drosophila melanogaster not only our class II and III predictions are on par with other algorithms, but notably the class I (no-seed) predictions are just marginally less accurate. We estimate that TargetSpy predicts between 26 and 112 functional target sites without a seed match per microRNA that are missed by all other currently available algorithms.ConclusionOnly a few algorithms can predict target sites without demanding a seed match and TargetSpy demonstrates a substantial improvement in prediction accuracy in that class. Furthermore, when conservation and the presence of a seed match are required, the performance is comparable with state-of-the-art algorithms. TargetSpy was trained on mouse and performs well in human and drosophila, suggesting that it may be applicable to a broad range of species. Moreover, we have demonstrated that the application of machine learning techniques in combination with upcoming deep sequencing data results in a powerful microRNA target site prediction tool http://www.targetspy.org. <s> BIB006 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Criteria for empirical evaluation <s> BackgroundMicroRNAs (miRNAs) are single-stranded non-coding RNAs known to regulate a wide range of cellular processes by silencing the gene expression at the protein and/or mRNA levels. Computational prediction of miRNA targets is essential for elucidating the detailed functions of miRNA. However, the prediction specificity and sensitivity of the existing algorithms are still poor to generate meaningful, workable hypotheses for subsequent experimental testing. Constructing a richer and more reliable training data set and developing an algorithm that properly exploits this data set would be the key to improve the performance current prediction algorithms.ResultsA comprehensive training data set is constructed for mammalian miRNAs with its positive targets obtained from the most up-to-date miRNA target depository called miRecords and its negative targets derived from 20 microarray data. A new algorithm SVMicrO is developed, which assumes a 2-stage structure including a site support vector machine (SVM) followed by a UTR-SVM. SVMicrO makes prediction based on 21 optimal site features and 18 optimal UTR features, selected by training from a comprehensive collection of 113 site and 30 UTR features. Comprehensive evaluation of SVMicrO performance has been carried out on the training data, proteomics data, and immunoprecipitation (IP) pull-down data. Comparisons with some popular algorithms demonstrate consistent improvements in prediction specificity, sensitivity and precision in all tested cases. 
All the related materials including source code and genome-wide prediction of human targets are available at http://compgenomics.utsa.edu/svmicro.html.ConclusionsA 2-stage SVM based new miRNA target prediction algorithm called SVMicrO is developed. SVMicrO is shown to be able to achieve robust performance. It holds the promise to achieve continuing improvement whenever better training data that contain additional verified or high confidence positive targets and properly selected negative targets are available. <s> BIB007 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Criteria for empirical evaluation <s> Considering accessibility of the 3′UTR is believed to increase the precision of microRNA target predictions. We show that, contrary to common belief, ranking by the hybridization energy or by the sum of the opening and hybridization energies, used in currently available algorithms, is not an efficient way to rank predictions. Instead, we describe an algorithm which also considers only the accessible binding sites but which ranks predictions according to over-representation. When compared with experimentally validated and refuted targets in the fruit fly and human, our algorithm shows a remarkable improvement in precision while significantly reducing the computational cost in comparison with other free energy based methods. In the human genome, our algorithm has at least twice higher precision than other methods with their default parameters. In the fruit fly, we find five times more validated targets among the top 500 predictions than other methods with their default parameters. Furthermore, using a common statistical framework we demonstrate explicitly the advantages of using the canonical ensemble instead of using the minimum free energy structure alone. We also find that ‘naive’ global folding sometimes outperforms the local folding approach. <s> BIB008 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Criteria for empirical evaluation <s> BackgroundMicroRNAs (miRNAs) play an essential task in gene regulatory networks by inhibiting the expression of target mRNAs. As their mRNA targets are genes involved in important cell functions, there is a growing interest in identifying the relationship between miRNAs and their target mRNAs. So, there is now a imperative need to develop a computational method by which we can identify the target mRNAs of existing miRNAs. Here, we proposed an efficient machine learning model to unravel the relationship between miRNAs and their target mRNAs.ResultsWe present a novel computational architecture MTar for miRNA target prediction which reports 94.5% sensitivity and 90.5% specificity. We identified 16 positional, thermodynamic and structural parameters from the wet lab proven miRNA:mRNA pairs and MTar makes use of these parameters for miRNA target identification. It incorporates an Artificial Neural Network (ANN) verifier which is trained by wet lab proven microRNA targets. A number of hitherto unknown targets of many miRNA families were located using MTar. The method identifies all three potential miRNA targets (5' seed-only, 5' dominant, and 3' canonical) whereas the existing solutions focus on 5' complementarities alone.ConclusionMTar, an ANN based architecture for identifying functional regulatory miRNA-mRNA interaction using predicted miRNA targets. 
The area of target prediction has received a new momentum with the function of a thermodynamic model incorporating target accessibility. This model incorporates sixteen structural, thermodynamic and positional features of residues in miRNA: mRNA pairs were employed to select target candidates. So our novel machine learning architecture, MTar is found to be more comprehensive than the existing methods in predicting miRNA targets, especially human transcritome. <s> BIB009
We used a comprehensive set of evaluation measures to assess the predictions of the miRNA:target duplexes and miRNA-gene pairs. Each prediction takes two forms: a binary value that indicates whether a given duplex or miRNA-gene pair is predicted to be functional, and the real-valued probability (propensity) of a given predicted interaction. The binary predictions were assessed using seven measures defined in terms of true positives (TP) and true negatives (TN), the counts of correctly predicted functional and nonfunctional miRNA targets, respectively, and false positives (FP) and false negatives (FN), the counts of incorrectly predicted functional and nonfunctional miRNA targets, respectively. The values of the Matthews Correlation Coefficient (MCC) range between −1 and 1, with 0 for random predictions and higher values denoting more accurate predictions. MCC provides a robust measurement for skewed data sets (when the numbers of positive and negative outcomes are unbalanced), which is the case with our TEST_duplex data set. The Signal-to-Noise Ratio (SNR) of correctly over incorrectly predicted functional targets was calculated in several prior works BIB001 BIB002 BIB004 BIB003. We computed the SNR of predicted functional (SNR+) and also nonfunctional samples (SNR−) to provide a complete set of measures. Given the skewed counts of native (true) functional and nonfunctional samples in our data sets, we normalized the SNR values using P_duplex (P_gene) and N_duplex (N_gene), the numbers of native (true) functional and nonfunctional duplexes (genes) in the TEST_duplex (TEST_gene) data set. The overall count of predicted functional targets is assessed using the Predicted-to-Native positive Ratio (PNR) = predicted_functional_count / true_functional_count. PNR indicates whether a given predictor overpredicts (PNR > 1) or underpredicts (PNR < 1) the number of functional miRNA targets. The real-valued propensities were assessed using the receiver operating characteristic (ROC) curve, which represents the relation between the true-positive rate (TPR) = TP/(TP + FN) and the false-positive rate (FPR) = FP/(FP + TN). The ROC curves reflect a trade-off between sensitivity and specificity, providing comprehensive information about the predictive performance. We compute the area under the ROC curve (AUC), which ranges between 0 (for a method that ranks every functional target below the nonfunctional ones) and 1 (for a perfect predictor), with 0.5 denoting a random predictor. Except for the PNR and SNR−, which we introduced, and the normalization of the SNR+ and SNR− values, which is motivated by the unbalanced nature of the benchmark data sets, the other criteria were used to evaluate some of the prior predictors BIB005 BIB007 BIB008 BIB006 BIB009 (see column 'Criteria' in Table 2). We also evaluate the statistical significance of differences in predictive performance between predictors. We randomly choose 50% of a given data set, calculate the predictive performance and repeat this 10 times. The corresponding 10 pairs of results (used to compare a given pair of predictors) are evaluated with the Student's t-test if the distributions are normal; otherwise we use the Mann-Whitney test. The distribution type is verified using the Anderson-Darling test at a P-value threshold of 0.05.
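For concreteness, the sketch below computes the measures described above (MCC, normalized SNR+ and SNR−, PNR and a rank-based AUC) from binary predictions and propensities, and implements the subsampling-based significance test. It is an illustrative reconstruction, not the survey's own evaluation code: in particular, rescaling SNR+ and SNR− by the native positive-to-negative ratio is our reading of the normalization described above, and the use of a paired t-test via scipy is an assumption.

```python
import math
from scipy import stats

def confusion_counts(y_true, y_pred):
    """TP, TN, FP, FN from binary labels (1 = functional, 0 = nonfunctional)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient; returns 0 for degenerate denominators."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def normalized_snr(tp, tn, fp, fn, n_pos, n_neg):
    """SNR+ and SNR- rescaled by the native class ratio (assumed normalization)."""
    snr_plus = (tp / fp) * (n_neg / n_pos) if fp else float("inf")
    snr_minus = (tn / fn) * (n_pos / n_neg) if fn else float("inf")
    return snr_plus, snr_minus

def pnr(tp, fp, n_pos):
    """Predicted-to-native positive ratio: all predicted functional over native functional."""
    return (tp + fp) / n_pos

def auc(y_true, propensity):
    """Rank-based AUC: probability that a functional target outscores a nonfunctional one."""
    pos = [s for t, s in zip(y_true, propensity) if t == 1]
    neg = [s for t, s in zip(y_true, propensity) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def compare_predictors(perf_a, perf_b):
    """Significance test over per-subsample performance values of two predictors.

    Uses Student's t-test when both samples pass an Anderson-Darling normality
    check at the 5% level, and the Mann-Whitney test otherwise, mirroring the
    protocol described above (the paired t-test variant is an assumption).
    """
    def looks_normal(x):
        res = stats.anderson(x, dist="norm")
        return res.statistic < res.critical_values[2]  # index 2 = 5% level
    if looks_normal(perf_a) and looks_normal(perf_b):
        return stats.ttest_rel(perf_a, perf_b).pvalue
    return stats.mannwhitneyu(perf_a, perf_b).pvalue
```

The class-ratio rescaling matters because, on a heavily unbalanced test set, the raw TP/FP ratio would favor methods simply because nonfunctional samples are scarce; the correction (or an equivalent one) makes SNR values comparable across data sets with different class balance.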
Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Evaluation protocols <s> BackgroundMicroRNAs (miRNAs) are small noncoding RNAs, which play significant roles as posttranscriptional regulators. The functions of animal miRNAs are generally based on complementarity for their 5' components. Although several computational miRNA target-gene prediction methods have been proposed, they still have limitations in revealing actual target genes.ResultsWe implemented miTarget, a support vector machine (SVM) classifier for miRNA target gene prediction. It uses a radial basis function kernel as a similarity measure for SVM features, categorized by structural, thermodynamic, and position-based features. The latter features are introduced in this study for the first time and reflect the mechanism of miRNA binding. The SVM classifier produces high performance with a biologically relevant data set obtained from the literature, compared with previous tools. We predicted significant functions for human miR-1, miR-124a, and miR-373 using Gene Ontology (GO) analysis and revealed the importance of pairing at positions 4, 5, and 6 in the 5' region of a miRNA from a feature selection experiment. We also provide a web interface for the program.ConclusionmiTarget is a reliable miRNA target gene prediction tool and is a successful application of an SVM classifier. Compared with previous tools, its predictions are meaningful by GO analysis and its performance can be improved given more training examples. <s> BIB001 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Evaluation protocols <s> MicroRNAs (miRNAs) are ~22-nt RNA segments that are involved in the regulation of protein expression primarily by binding to one or more target sites on an mRNA transcript and inhibiting translation. MicroRNAs are likely to factor into multiple developmental pathways, multiple mechanisms of gene regulation, and underlie an array of inherited disease processes and phenotypic determinants. Several computational programs exist to predict miRNA targets in mammals, fruit flies, worms, and plants. However, to date, there is no systematic collection and description of miRNA targets with experimental support. We describe a database, TarBase, which houses a manually curated collection of experimentally tested miRNA targets, in human/mouse, fruit fly, worm, and zebrafish, distinguishing between those that tested positive and those that tested negative. Each positive target site is described by the miRNA that binds it, the gene in which it occurs, the nature of the experiments that were conducted to test it, the sufficiency of the site to induce translational repression and/or cleavage, and the paper from which all these data were extracted. Additionally, the database is functionally linked to several other useful databases such as Gene Ontology (GO) and UCSC Genome Browser. TarBase reveals significantly more experimentally supported targets than even recent reviews claim, thereby providing a comprehensive data set from which to assess features of miRNA targeting that will be useful for the next generation of target prediction programs. TarBase can be accessed at http://www.diana.pcbi.upenn.edu/tarbase. 
<s> BIB002 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Evaluation protocols <s> BackgroundMicroRNAs (miRs) are small noncoding RNAs that bind to complementary/partially complementary sites in the 3' untranslated regions of target genes to regulate protein production of the target transcript and to induce mRNA degradation or mRNA cleavage. The ability to perform accurate, high-throughput identification of physiologically active miR targets would enable functional characterization of individual miRs. Current target prediction methods include traditional approaches that are based on specific base-pairing rules in the miR's seed region and implementation of cross-species conservation of the target site, and machine learning (ML) methods that explore patterns that contrast true and false miR-mRNA duplexes. However, in the case of the traditional methods research shows that some seed region matches that are conserved are false positives and that some of the experimentally validated target sites are not conserved.ResultsWe present HuMiTar, a computational method for identifying common targets of miRs, which is based on a scoring function that considers base-pairing for both seed and non-seed positions for human miR-mRNA duplexes. Our design shows that certain non-seed miR nucleotides, such as 14, 18, 13, 11, and 17, are characterized by a strong bias towards formation of Watson-Crick pairing. We contrasted HuMiTar with several representative competing methods on two sets of human miR targets and a set of ten glioblastoma oncogenes. Comparison with the two best performing traditional methods, PicTar and TargetScanS, and a representative ML method that considers the non-seed positions, NBmiRTar, shows that HuMiTar predictions include majority of the predictions of the other three methods. At the same time, the proposed method is also capable of finding more true positive targets as a trade-off for an increased number of predictions. Genome-wide predictions show that the proposed method is characterized by 1.99 signal-to-noise ratio and linear, with respect to the length of the mRNA sequence, computational complexity. The ROC analysis shows that HuMiTar obtains results comparable with PicTar, which are characterized by high true positive rates that are coupled with moderate values of false positive rates.ConclusionThe proposed HuMiTar method constitutes a step towards providing an efficient model for studying translational gene regulation by miRs. <s> BIB003 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Evaluation protocols <s> BackgroundVirtually all currently available microRNA target site prediction algorithms require the presence of a (conserved) seed match to the 5' end of the microRNA. Recently however, it has been shown that this requirement might be too stringent, leading to a substantial number of missed target sites.ResultsWe developed TargetSpy, a novel computational approach for predicting target sites regardless of the presence of a seed match. It is based on machine learning and automatic feature selection using a wide spectrum of compositional, structural, and base pairing features covering current biological knowledge. 
Our model does not rely on evolutionary conservation, which allows the detection of species-specific interactions and makes TargetSpy suitable for analyzing unconserved genomic sequences.In order to allow for an unbiased comparison of TargetSpy to other methods, we classified all algorithms into three groups: I) no seed match requirement, II) seed match requirement, and III) conserved seed match requirement. TargetSpy predictions for classes II and III are generated by appropriate postfiltering. On a human dataset revealing fold-change in protein production for five selected microRNAs our method shows superior performance in all classes. In Drosophila melanogaster not only our class II and III predictions are on par with other algorithms, but notably the class I (no-seed) predictions are just marginally less accurate. We estimate that TargetSpy predicts between 26 and 112 functional target sites without a seed match per microRNA that are missed by all other currently available algorithms.ConclusionOnly a few algorithms can predict target sites without demanding a seed match and TargetSpy demonstrates a substantial improvement in prediction accuracy in that class. Furthermore, when conservation and the presence of a seed match are required, the performance is comparable with state-of-the-art algorithms. TargetSpy was trained on mouse and performs well in human and drosophila, suggesting that it may be applicable to a broad range of species. Moreover, we have demonstrated that the application of machine learning techniques in combination with upcoming deep sequencing data results in a powerful microRNA target site prediction tool http://www.targetspy.org. <s> BIB004 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Evaluation protocols <s> BackgroundMicroRNAs (miRNAs) play an essential task in gene regulatory networks by inhibiting the expression of target mRNAs. As their mRNA targets are genes involved in important cell functions, there is a growing interest in identifying the relationship between miRNAs and their target mRNAs. So, there is now a imperative need to develop a computational method by which we can identify the target mRNAs of existing miRNAs. Here, we proposed an efficient machine learning model to unravel the relationship between miRNAs and their target mRNAs.ResultsWe present a novel computational architecture MTar for miRNA target prediction which reports 94.5% sensitivity and 90.5% specificity. We identified 16 positional, thermodynamic and structural parameters from the wet lab proven miRNA:mRNA pairs and MTar makes use of these parameters for miRNA target identification. It incorporates an Artificial Neural Network (ANN) verifier which is trained by wet lab proven microRNA targets. A number of hitherto unknown targets of many miRNA families were located using MTar. The method identifies all three potential miRNA targets (5' seed-only, 5' dominant, and 3' canonical) whereas the existing solutions focus on 5' complementarities alone.ConclusionMTar, an ANN based architecture for identifying functional regulatory miRNA-mRNA interaction using predicted miRNA targets. The area of target prediction has received a new momentum with the function of a thermodynamic model incorporating target accessibility. This model incorporates sixteen structural, thermodynamic and positional features of residues in miRNA: mRNA pairs were employed to select target candidates. 
So our novel machine learning architecture, MTar is found to be more comprehensive than the existing methods in predicting miRNA targets, especially human transcritome. <s> BIB005 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Evaluation protocols <s> AbstractmirSVR is a new machine learning method for ranking microRNA target sites by a down-regulation score. The algorithm trains a regression model on sequence and contextual features extracted from miRanda-predicted target sites. In a large-scale evaluation, miRanda-mirSVR is competitive with other target prediction methods in identifying target genes and predicting the extent of their downregulation at the mRNA or protein levels. Importantly, the method identifies a significant number of experimentally determined non-canonical and non-conserved sites. <s> BIB006 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Evaluation protocols <s> Background ::: Machine learning based miRNA-target prediction algorithms often fail to obtain a balanced prediction accuracy in terms of both sensitivity and specificity due to lack of the gold standard of negative examples, miRNA-targeting site context specific relevant features and efficient feature selection process. Moreover, all the sequence, structure and machine learning based algorithms are unable to distribute the true positive predictions preferentially at the top of the ranked list; hence the algorithms become unreliable to the biologists. In addition, these algorithms fail to obtain considerable combination of precision and recall for the target transcripts that are translationally repressed at protein level. ::: Methodology/Principal Finding ::: In the proposed article, we introduce an efficient miRNA-target prediction system MultiMiTar, a Support Vector Machine (SVM) based classifier integrated with a multiobjective metaheuristic based feature selection technique. The robust performance of the proposed method is mainly the result of using high quality negative examples and selection of biologically relevant miRNA-targeting site context specific features. The features are selected by using a novel feature selection technique AMOSA-SVM, that integrates the multi objective optimization technique Archived Multi-Objective Simulated Annealing (AMOSA) and SVM. ::: Conclusions/Significance ::: MultiMiTar is found to achieve much higher Matthew’s correlation coefficient (MCC) of 0.583 and average class-wise accuracy (ACA) of 0.8 compared to the others target prediction methods for a completely independent test data set. The obtained MCC and ACA values of these algorithms range from −0.269 to 0.155 and 0.321 to 0.582, respectively. Moreover, it shows a more balanced result in terms of precision and sensitivity (recall) for the translationally repressed data set as compared to all the other existing methods. An important aspect is that the true positive predictions are distributed preferentially at the top of the ranked list that makes MultiMiTar reliable for the biologists. MultiMiTar is now available as an online tool at www.isical.ac.in/~bioinfo_miu/multimitar.htm. MultiMiTar software can be downloaded from www.isical.ac.in/~bioinfo_miu/multimitar-download.htm. 
<s> BIB007 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Evaluation protocols <s> The Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) is an international public repository for high-throughput microarray and next-generation sequence functional genomic data sets submitted by the research community. The resource supports archiving of raw data, processed data and metadata which are indexed, cross-linked and searchable. All data are freely available for download in a variety of formats. GEO also provides several web-based tools and strategies to assist users to query, analyse and visualize data. This article reports current status and recent database developments, including the release of GEO2R, an R-based web application that helps users analyse GEO data. <s> BIB008
Benchmark data sets used to develop and test the predictors and the corresponding evaluation procedures are summarized in Table 2. Many early methods were designed/evaluated using data only from Drosophila owing to the limited availability of validated miRNA targets. However, even some early predictors (TargetScan BIB008, DIANA-microT, miRanda BIB002 and TargetScanS) considered higher eukaryotes. More recent methods generally cover more species. Interestingly, in 14 cases predictors were validated on test data sets but there was no mention of the data used to design these predictive models. This may mean that the test data was used in the design, e.g. to set thresholds and parameters. HuMiTar was the first method that was properly tested on a data set independent from the training set BIB003. Even with the currently available relatively large number of validated miRNA targets, only a few recent predictors (TargetMiner, TargetSpy BIB004, Mtar BIB005, MultiMiTar BIB007 and miREE) were trained and tested on different (independent) data sets. Moreover, the sizes of some training data sets are relatively small (a few dozen samples) and some data sets are unbalanced and have more artificial nonfunctional targets than functional targets; some data sets use only a few validated nonfunctional targets. A particularly challenging aspect is the low number of experimentally validated nonfunctional samples, i.e. mRNAs validated not to interact with a given miRNA. Several early methods used artificial nonfunctional targets created by either shuffling miRNA sequences or randomizing mRNAs; these approaches were criticized for generating unrealistic samples. More recent attempts scan the mRNA transcripts in which validated target sites or Ago-binding sites are masked, and use the target segments with at least 4-mer matches in the seed region, or with one mismatch or G:U wobble in the 6-mer seed, as the nonfunctional samples BIB006 BIB005 BIB001 (a minimal sketch of this type of scan is given after this paragraph). This approach assumes that the knowledge of functional targets or Ago-binding sites is complete, while in fact these computationally generated nonfunctional miRNA-mRNA pairs could be functional. Some recent methods label as nonfunctional the genes that are overexpressed when particular miRNA mimics are added to cells, but data from this limited number of miRNAs may be biased. These various attempts to generate the benchmark data sets may result in mislabeling, overfitting of the training data sets and unrealistic (possibly inflated) evaluations of predictive performance. We also analyze the evaluation procedures. The early predictors were evaluated primarily based on the SNR between the number of predicted targets in functional genes and in true or artificial nonfunctional genes. PicTar was the first to report sensitivity, based on only 19 native targets. TargetBoost and miTarget were the first to use more informative ROC curves, but with the caveat of using artificial nonfunctional targets. The criteria used to evaluate predictive quality vary widely between methods. Some measures are biased by the composition of the data set (e.g. accuracy and precision) and provide an incomplete picture (e.g. sensitivity without specificity and vice versa). This makes comparisons across predictors virtually impossible. The standards used to compare between methods are also relatively low, as in most cases the evaluation did not include statistical tests. On the positive side, the assessment of several methods included experimental validation of targets.
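As a minimal illustration of how such artificial nonfunctional samples can be harvested, the sketch below scans a 3′UTR in which validated target sites are assumed to have been masked (replaced by 'N') and keeps 6-mer seed windows that pair with miRNA positions 2-7 except for one mismatch or G:U wobble. The masking convention, the single-defect cutoff and the exclusion of perfect matches are our assumptions rather than the exact procedure of any cited study.

```python
# Watson-Crick pairs; a G:U wobble or any other pair counts as a defect below.
PAIR = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}

def near_seed_windows(mirna, utr, max_defects=1):
    """Yield (start, window) for candidate nonfunctional 6-mer seed windows.

    `mirna` and `utr` are 5'->3' RNA strings; validated sites in `utr` are
    assumed to be masked with 'N' so they can never be emitted. A window is
    kept if it pairs with miRNA positions 2-7 except for 1..max_defects
    positions, where a defect is a mismatch or a G:U wobble.
    """
    seed = mirna[1:7]  # miRNA positions 2-7, 5'->3'
    for start in range(len(utr) - 5):
        window = utr[start:start + 6]
        if "N" in window:
            continue
        # The window (5'->3') pairs antiparallel with the seed, so window
        # position i faces seed position 5 - i.
        defects = sum(1 for i in range(6) if (seed[5 - i], window[i]) not in PAIR)
        if 1 <= defects <= max_defects:  # perfect matches are excluded as potentially functional
            yield start, window

# Example with a hypothetical, partially masked UTR and the let-7a sequence.
candidates = list(near_seed_windows("UGAGGUAGUAGGUUGUAUAGUU", "GGCUACCUCUAGNNNNNNCAACCUCAGG"))
```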
The authors of the RNA22 method performed a large-scale validation and claimed that 168 of 226 tested targets were repressed; however, they did not determine whether these targets were bound by the specific miRNAs. Some, primarily older, methods also included a functional analysis of the predicted targets. Table 3 shows that miRNA target predictors are available to the end users as web servers, stand-alone packages, precomputed data sets and upon request. The 21 methods that are provided as web servers are convenient for ad hoc (occasional) users. The 13 stand-alone packages are suitable for users who anticipate high-throughput use and/or who would like to include them in their local software platforms; most of these are also available as web servers. Precomputed results, which are convenient to collect, are provided for 10 methods. However, these predictions may not be updated in a timely manner and do not include results for novel miRNAs.
Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> MicroRNAs (miRNAs) are short RNAs that post-transcriptionally regulate the expression of target genes by binding to the target mRNAs. Although a large number of animal miRNAs has been defined, only a few targets are known. In contrast to plant miRNAs, which usually bind nearly perfectly to their targets, animal miRNAs bind less tightly, with a few nucleotides being unbound, thus producing more complex secondary structures of miRNA/target duplexes. Here, we present a program, RNA-hybrid, that predicts multiple potential binding sites of miRNAs in large target RNAs. In general, the program finds the energetically most favorable hybridization sites of a small RNA in a large RNA. Intramolecular hybridizations, that is, base pairings between target nucleotides or between miRNA nucleotides are not allowed. For large targets, the time complexity of the algorithm is linear in the target length, allowing many long targets to be searched in a short time. Statistical significance of predicted targets is assessed with an extreme value statistics of length normalized minimum free energies, a Poisson approximation of multiple binding sites, and the calculation of effective numbers of orthologous targets in comparative studies of multiple organisms. We applied our method to the prediction of Drosophila miRNA targets in 3'UTRs and coding sequence. RNAhybrid, with its accompanying programs RNAcalibrate and RNAeffective, is available for download and as a Web tool on the Bielefeld Bioinformatics Server (http://bibiserv.techfak.uni-bielefeld.de/rnahybrid/). <s> BIB001 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Comprehensive identification of all functional elements encoded in the human genome is a fundamental need in biomedical research. Here, we present a comparative analysis of the human, mouse, rat and dog genomes to create a systematic catalogue of common regulatory motifs in promoters and 3' untranslated regions (3' UTRs). The promoter analysis yields 174 candidate motifs, including most previously known transcription-factor binding sites and 105 new motifs. The 3'-UTR analysis yields 106 motifs likely to be involved in post-transcriptional regulation. Nearly one-half are associated with microRNAs (miRNAs), leading to the discovery of many new miRNA genes and their likely target genes. Our results suggest that previous estimates of the number of human miRNA genes were low, and that miRNAs regulate at least 20% of human genes. The overall results provide a systematic view of gene regulation in the human, which will be refined as additional mammalian genomes become available. <s> BIB002 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Regulation of post-transcriptional gene expression by microRNAs (miRNA) has so far been validated for only a few mRNA targets. Based on the large number of miRNA genes and the possibility that one miRNA might influence gene expression of several targets simultaneously, the quantity of ribo-regulated genes is expected to be much higher. Here, we describe the web tool MicroInspector that will analyse a user-defined RNA sequence, which is typically an mRNA or a part of an mRNA, for the occurrence of binding sites for known and registered miRNAs.
The program allows variation of temperature, the setting of energy values as well as the selection of different miRNA databases to identify miRNA-binding sites of different strength. MicroInspector could spot the correct sites for miRNA-interaction in known target mRNAs. Using other mRNAs, for which such an interaction has not yet been described, we discovered frequently potential miRNA binding sites of similar quality, which can now be analysed experimentally. The MicroInspector program is easy to use and does not require specific computer skills. The service can be accessed via the MicroInspector web server at http://www.imbb.forth.gr/microinspector. <s> BIB003 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> We present a new microRNA target prediction algorithm called TargetBoost, and show that the algorithm is stable and identifies more true targets than do existing algorithms. TargetBoost uses machine learning on a set of validated microRNA targets in lower organisms to create weighted sequence motifs that capture the binding characteristics between microRNAs and their targets. Existing algorithms require candidates to have (1) near-perfect complementarity between microRNAs' 5' end and their targets; (2) relatively high thermodynamic duplex stability; (3) multiple target sites in the target's 3' UTR; and (4) evolutionary conservation of the target between species. Most algorithms use one of the two first requirements in a seeding step, and use the three others as filters to improve the method's specificity. The initial seeding step determines an algorithm's sensitivity and also influences its specificity. As all algorithms may add filters to increase the specificity, we propose that methods should be compared before such filtering. We show that TargetBoost's weighted sequence motif approach is favorable to using both the duplex stability and the sequence complementarity steps. (TargetBoost is available as a Web tool from http://www.interagon.com/demo/.). <s> BIB004 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> MicroRNAs are small noncoding RNAs that serve as posttranscriptional regulators of gene expression in higher eukaryotes. Their widespread and important role in animals is highlighted by recent estimates that 20%-30% of all genes are microRNA targets. Here, we report that a large set of genes involved in basic cellular processes avoid microRNA regulation due to short 3'UTRs that are specifically depleted of microRNA binding sites. For individual microRNAs, we find that coexpressed genes avoid microRNA sites, whereas target genes and microRNAs are preferentially expressed in neighboring tissues. This mutually exclusive expression argues that microRNAs confer accuracy to developmental gene-expression programs, thus ensuring tissue identity and supporting cell-lineage decisions. <s> BIB005 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> microRNAs are small noncoding genes that regulate the protein production of genes by binding to partially complementary sites in the mRNAs of targeted genes. Here, using our algorithm PicTar, we exploit cross-species comparisons to predict, on average, 54 targeted genes per microRNA above noise in Drosophila melanogaster.
Analysis of the functional annotation of target genes furthermore suggests specific biological functions for many microRNAs. We also predict combinatorial targets for clustered microRNAs and find that some clustered microRNAs are likely to coordinately regulate target genes. Furthermore, we compare microRNA regulation between insects and vertebrates. We find that the widespread extent of gene regulation by microRNAs is comparable between flies and mammals but that certain microRNAs may function in clade-specific modes of gene regulation. One of these microRNAs (miR-210) is predicted to contribute to the regulation of fly oogenesis. We also list specific regulatory relationships that appear to be conserved between flies and mammals. Our findings provide the most extensive microRNA target predictions in Drosophila to date, suggest specific functional roles for most microRNAs, indicate the existence of coordinate gene regulation executed by clustered microRNAs, and shed light on the evolution of microRNA function across large evolutionary distances. All predictions are freely accessible at our searchable Web site http://pictar.bio.nyu.edu. <s> BIB006 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Summary We present rna22 , a method for identifying microRNA binding sites and their corresponding heteroduplexes. Rna22 does not rely upon cross-species conservation, is resilient to noise, and, unlike previous methods, it first finds putative microRNA binding sites in the sequence of interest, then identifies the targeting microRNA. Computationally, we show that rna22 identifies most of the currently known heteroduplexes. Experimentally, with luciferase assays, we demonstrate average repressions of 30% or more for 168 of 226 tested targets. The analysis suggests that some microRNAs may have as many as a few thousand targets, and that between 74% and 92% of the gene transcripts in four model genomes are likely under microRNA control through their untranslated and amino acid coding regions. We also extended the method's key idea to a low-error microRNA-precursor-discovery scheme; our studies suggest that the number of microRNA precursors in mammalian genomes likely ranges in the tens of thousands. <s> BIB007 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> BackgroundThe accurate prediction of a comprehensive set of messenger RNAs (targets) regulated by animal microRNAs (miRNAs) remains an open problem. In particular, the prediction of targets that do not possess evolutionarily conserved complementarity to their miRNA regulators is not adequately addressed by current tools.ResultsWe have developed MicroTar, an animal miRNA target prediction tool based on miRNA-target complementarity and thermodynamic data. The algorithm uses predicted free energies of unbound mRNA and putative mRNA-miRNA heterodimers, implicitly addressing the accessibility of the mRNA 3' untranslated region. MicroTar does not rely on evolutionary conservation to discern functional targets, and is able to predict both conserved and non-conserved targets. 
MicroTar source code and predictions are accessible at http://tiger.dbs.nus.edu.sg/microtar/, where both serial and parallel versions of the program can be downloaded under an open-source licence.ConclusionMicroTar achieves better sensitivity than previously reported predictions when tested on three distinct datasets of experimentally-verified miRNA-target interactions in C. elegans, Drosophila, and mouse. <s> BIB008 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> BackgroundMicroRNAs have emerged as important regulatory genes in a variety of cellular processes and, in recent years, hundreds of such genes have been discovered in animals. In contrast, functional annotations are available only for a very small fraction of these miRNAs, and even in these cases only partially.ResultsWe developed a general Bayesian method for the inference of miRNA target sites, in which, for each miRNA, we explicitly model the evolution of orthologous target sites in a set of related species. Using this method we predict target sites for all known miRNAs in flies, worms, fish, and mammals. By comparing our predictions in fly with a reference set of experimentally tested miRNA-mRNA interactions we show that our general method performs at least as well as the most accurate methods available to date, including ones specifically tailored for target prediction in fly. An important novel feature of our model is that it explicitly infers the phylogenetic distribution of functional target sites, independently for each miRNA. This allows us to infer species-specific and clade-specific miRNA targeting. We also show that, in long human 3' UTRs, miRNA target sites occur preferentially near the start and near the end of the 3' UTR.To characterize miRNA function beyond the predicted lists of targets we further present a method to infer significant associations between the sets of targets predicted for individual miRNAs and specific biochemical pathways, in particular those of the KEGG pathway database. We show that this approach retrieves several known functional miRNA-mRNA associations, and predicts novel functions for known miRNAs in cell growth and in development.ConclusionWe have presented a Bayesian target prediction algorithm without any tunable parameters, that can be applied to sequences from any clade of species. The algorithm automatically infers the phylogenetic distribution of functional sites for each miRNA, and assigns a posterior probability to each putative target site. The results presented here indicate that our general method achieves very good performance in predicting miRNA target sites, providing at the same time insights into the evolution of target sites for individual miRNAs. Moreover, by combining our predictions with pathway analysis, we propose functions of specific miRNAs in nervous system development, inter-cellular communication and cell growth. The complete target site predictions as well as the miRNA/pathway associations are accessible on the ElMMo web server. <s> BIB009 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> MicroRNAs are key regulators of gene expression, but the precise mechanisms underlying their interaction with their mRNA targets are still poorly understood. 
Here, we systematically investigate the role of target-site accessibility, as determined by base-pairing interactions within the mRNA, in microRNA target recognition. We experimentally show that mutations diminishing target accessibility substantially reduce microRNA-mediated translational repression, with effects comparable to those of mutations that disrupt sequence complementarity. We devise a parameter-free model for microRNA-target interaction that computes the difference between the free energy gained from the formation of the microRNA-target duplex and the energetic cost of unpairing the target to make it accessible to the microRNA. This model explains the variability in our experiments, predicts validated targets more accurately than existing algorithms, and shows that genomes accommodate site accessibility by preferentially positioning targets in highly accessible regions. Our study thus demonstrates that target accessibility is a critical factor in microRNA function. <s> BIB010 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> MicroRNA.org (http://www.microrna.org) is a comprehensive resource of microRNA target predictions and expression profiles. Target predictions are based on a development of the miRanda algorithm which incorporates current biological knowledge on target rules and on the use of an up-to-date compendium of mammalian microRNAs. MicroRNA expression profiles are derived from a comprehensive sequencing project of a large set of mammalian tissues and cell lines of normal and disease origin. Using an improved graphical interface, a user can explore (i) the set of genes that are potentially regulated by a particular microRNA, (ii) the implied cooperativity of multiple microRNAs on a particular mRNA and (iii) microRNA expression profiles in various tissues. To facilitate future updates and development, the microRNA.org database structure and software architecture is flexibly designed to incorporate new expression and target discoveries. The web resource provides users with functional information about the growing number of microRNAs and their interaction with target genes in many species and facilitates novel discoveries in microRNA gene regulation. <s> BIB011 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Summary: AU-rich elements (AREs), present in mRNA 3′-UTRs, are potent posttranscriptional regulatory signals that can rapidly effect changes in mRNA stability and translation, thereby dramatically altering gene expression with clinical and developmental consequences. In human cell lines, the TNFα ARE enhances translation relative to mRNA levels upon serum starvation, which induces cell-cycle arrest. An in vivo crosslinking-coupled affinity purification method was developed to isolate ARE-associated complexes from activated versus basal translation conditions. We surprisingly found two microRNP-related proteins, fragile-X-mental-retardation-related protein 1 (FXR1) and Argonaute 2 (AGO2), that associate with the ARE exclusively during translation activation. Through tethering and shRNA-knockdown experiments, we provide direct evidence for the translation activation function of both FXR1 and AGO2 and demonstrate their interdependence for upregulation.
This novel cell-growth-dependent translation activation role for FXR1 and AGO2 allows new insights into ARE-mediated signaling and connects two important posttranscriptional regulatory systems in an unexpected way. <s> BIB012 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> MicroRNAs (miRNAs) are small noncoding RNAs that repress protein synthesis by binding to target messenger RNAs. We investigated the effect of target secondary structure on the efficacy of repression by miRNAs. Using structures predicted by the Sfold program, we model the interaction between an miRNA and a target as a two-step hybridization reaction: nucleation at an accessible target site followed by hybrid elongation to disrupt local target secondary structure and form the complete miRNA-target duplex. This model accurately accounts for the sensitivity to repression by let-7 of various mutant forms of the Caenorhabditis elegans lin-41 3¢ untranslated region and for other experimentally tested miRNA-target interactions in C. elegans and Drosophila melanogaster. These findings indicate a potent effect of target structure on target recognition by miRNAs and establish a structure-based framework for genome-wide identification of animal miRNA targets. <s> BIB013 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> BackgroundMicroRNAs (miRs) are small noncoding RNAs that bind to complementary/partially complementary sites in the 3' untranslated regions of target genes to regulate protein production of the target transcript and to induce mRNA degradation or mRNA cleavage. The ability to perform accurate, high-throughput identification of physiologically active miR targets would enable functional characterization of individual miRs. Current target prediction methods include traditional approaches that are based on specific base-pairing rules in the miR's seed region and implementation of cross-species conservation of the target site, and machine learning (ML) methods that explore patterns that contrast true and false miR-mRNA duplexes. However, in the case of the traditional methods research shows that some seed region matches that are conserved are false positives and that some of the experimentally validated target sites are not conserved.ResultsWe present HuMiTar, a computational method for identifying common targets of miRs, which is based on a scoring function that considers base-pairing for both seed and non-seed positions for human miR-mRNA duplexes. Our design shows that certain non-seed miR nucleotides, such as 14, 18, 13, 11, and 17, are characterized by a strong bias towards formation of Watson-Crick pairing. We contrasted HuMiTar with several representative competing methods on two sets of human miR targets and a set of ten glioblastoma oncogenes. Comparison with the two best performing traditional methods, PicTar and TargetScanS, and a representative ML method that considers the non-seed positions, NBmiRTar, shows that HuMiTar predictions include majority of the predictions of the other three methods. At the same time, the proposed method is also capable of finding more true positive targets as a trade-off for an increased number of predictions. Genome-wide predictions show that the proposed method is characterized by 1.99 signal-to-noise ratio and linear, with respect to the length of the mRNA sequence, computational complexity. 
The ROC analysis shows that HuMiTar obtains results comparable with PicTar, which are characterized by high true positive rates that are coupled with moderate values of false positive rates.ConclusionThe proposed HuMiTar method constitutes a step towards providing an efficient model for studying translational gene regulation by miRs. <s> BIB014 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Computational microRNA (miRNA) target prediction is one of the key means for deciphering the role of miRNAs in development and disease. Here, we present the DIANA-microT web server as the user interface to the DIANA-microT 3.0 miRNA target prediction algorithm. The web server provides extensive information for predicted miRNA:target gene interactions with a user-friendly interface, providing extensive connectivity to online biological resources. Target gene and miRNA functions may be elucidated through automated bibliographic searches and functional information is accessible through Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways. The web server offers links to nomenclature, sequence and protein databases, and users are facilitated by being able to search for targeted genes using different nomenclatures or functional features, such as the genes possible involvement in biological pathways. The target prediction algorithm supports parameters calculated individually for each miRNA:target gene interaction and provides a signal-to-noise ratio and a precision score that helps in the evaluation of the significance of the predicted results. Using a set of miRNA targets recently identified through the pSILAC method, the performance of several computational target prediction programs was assessed. DIANA-microT 3.0 achieved there with 66% the highest ratio of correctly predicted targets over all predicted targets. The DIANA-microT web server is freely available at www.microrna.gr/microT. <s> BIB015 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Computational identification of putative microRNA (miRNA) targets is an important step towards elucidating miRNA functions. Several miRNA target-prediction algorithms have been developed followed by publicly available databases of these predictions. Here we present a new database offering miRNA target predictions of several binding types, identified by our recently developed modular algorithm RepTar. RepTar is based on identification of repetitive elements in 3′-UTRs and is independent of both evolutionary conservation and conventional binding patterns (i.e. Watson–Crick pairing of ‘seed’ regions). The modularity of RepTar enables the prediction of targets with conventional seed sites as well as rarer targets with non-conventional sites, such as sites with seed wobbles (G-U pairing in the seed region), 3′-compensatory sites and the newly discovered centered sites. Furthermore, RepTar’s independence of conservation enables the prediction of cellular targets of the less evolutionarily conserved viral miRNAs. Thus, the RepTar database contains genome-wide predictions of human and mouse miRNAs as well as predictions of cellular targets of human and mouse viral miRNAs. These predictions are presented in a user-friendly database, which allows browsing through the putative sites as well as conducting simple and advanced queries including data intersections of various types. 
The RepTar database is available at http://reptar.ekmd.huji.ac.il. <s> BIB016 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> BackgroundMicroRNAs (miRNAs) play an essential task in gene regulatory networks by inhibiting the expression of target mRNAs. As their mRNA targets are genes involved in important cell functions, there is a growing interest in identifying the relationship between miRNAs and their target mRNAs. So, there is now a imperative need to develop a computational method by which we can identify the target mRNAs of existing miRNAs. Here, we proposed an efficient machine learning model to unravel the relationship between miRNAs and their target mRNAs.ResultsWe present a novel computational architecture MTar for miRNA target prediction which reports 94.5% sensitivity and 90.5% specificity. We identified 16 positional, thermodynamic and structural parameters from the wet lab proven miRNA:mRNA pairs and MTar makes use of these parameters for miRNA target identification. It incorporates an Artificial Neural Network (ANN) verifier which is trained by wet lab proven microRNA targets. A number of hitherto unknown targets of many miRNA families were located using MTar. The method identifies all three potential miRNA targets (5' seed-only, 5' dominant, and 3' canonical) whereas the existing solutions focus on 5' complementarities alone.ConclusionMTar, an ANN based architecture for identifying functional regulatory miRNA-mRNA interaction using predicted miRNA targets. The area of target prediction has received a new momentum with the function of a thermodynamic model incorporating target accessibility. This model incorporates sixteen structural, thermodynamic and positional features of residues in miRNA: mRNA pairs were employed to select target candidates. So our novel machine learning architecture, MTar is found to be more comprehensive than the existing methods in predicting miRNA targets, especially human transcritome. <s> BIB017 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Animal miRNAs are a large class of small regulatory RNAs that are known to directly and negatively regulate the expression of a large fraction of all protein encoding genes. The identification and characterization of miRNA targets is thus a fundamental problem in biology. miRNAs regulate target genes by binding to 3′ untranslated regions (3′UTRs) of target mRNAs, and multiple binding sites for the same miRNA in 3′UTRs can strongly enhance the degree of regulation. Recent experiments have demonstrated that a large fraction of miRNA binding sites reside in coding sequences. Overall, miRNA binding sites in coding regions were shown to mediate smaller regulation than 3′UTR binding. However, possible interactions between target sites in coding sequences and 3′UTRs have not been studied. Using transcriptomics and proteomics data of ten miRNA mis-expression experiments as well as transcriptome-wide experimentally identified miRNA target sites, we found that mRNA and protein expression of genes containing target sites both in coding regions and 3′UTRs were in general mildly but significantly more regulated than those containing target sites in 3′UTRs only. These effects were stronger for conserved target sites of length 7–8 nt in coding regions compared to non-conserved sites. 
Combined with our other finding that miRNA target sites in coding regions are under negative selection, our results shed light on the functional importance of miRNA targeting in coding regions. <s> BIB018 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Background ::: Machine learning based miRNA-target prediction algorithms often fail to obtain a balanced prediction accuracy in terms of both sensitivity and specificity due to lack of the gold standard of negative examples, miRNA-targeting site context specific relevant features and efficient feature selection process. Moreover, all the sequence, structure and machine learning based algorithms are unable to distribute the true positive predictions preferentially at the top of the ranked list; hence the algorithms become unreliable to the biologists. In addition, these algorithms fail to obtain considerable combination of precision and recall for the target transcripts that are translationally repressed at protein level. ::: Methodology/Principal Finding ::: In the proposed article, we introduce an efficient miRNA-target prediction system MultiMiTar, a Support Vector Machine (SVM) based classifier integrated with a multiobjective metaheuristic based feature selection technique. The robust performance of the proposed method is mainly the result of using high quality negative examples and selection of biologically relevant miRNA-targeting site context specific features. The features are selected by using a novel feature selection technique AMOSA-SVM, that integrates the multi objective optimization technique Archived Multi-Objective Simulated Annealing (AMOSA) and SVM. ::: Conclusions/Significance ::: MultiMiTar is found to achieve much higher Matthew’s correlation coefficient (MCC) of 0.583 and average class-wise accuracy (ACA) of 0.8 compared to the others target prediction methods for a completely independent test data set. The obtained MCC and ACA values of these algorithms range from −0.269 to 0.155 and 0.321 to 0.582, respectively. Moreover, it shows a more balanced result in terms of precision and sensitivity (recall) for the translationally repressed data set as compared to all the other existing methods. An important aspect is that the true positive predictions are distributed preferentially at the top of the ranked list that makes MultiMiTar reliable for the biologists. MultiMiTar is now available as an online tool at www.isical.ac.in/~bioinfo_miu/multimitar.htm. MultiMiTar software can be downloaded from www.isical.ac.in/~bioinfo_miu/multimitar-download.htm. <s> BIB019 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Summary: Although small non-coding RNAs, such as microRNAs, have well-established functions in the cell, long non-coding RNAs (lncRNAs) have only recently started to emerge as abundant regulators of cell physiology, and their functions may be diverse. A small number of studies describe interactions between small and lncRNAs, with lncRNAs acting either as inhibitory decoys or as regulatory targets of microRNAs, but such interactions are still poorly explored. To facilitate the study of microRNA–lncRNA interactions, we implemented miRcode: a comprehensive searchable map of putative microRNA target sites across the complete GENCODE annotated transcriptome, including 10 419 lncRNA genes in the current version. 
::: ::: Availability: http://www.mircode.org ::: ::: Contact: es.ug@nossral.kire ::: ::: Supplementary Information: Supplementary data are available at Bioinformatics online. <s> BIB020 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> MicroRNAs, or miRNAs, post-transcriptionally repress the expression of protein-coding genes. The human genome encodes over 1000 miRNA genes that collectively target the majority of messenger RNAs (mRNAs). Base pairing of the so-called miRNA ‘seed’ region with mRNAs identifies many thousands of putative targets. Evaluating the strength of the resulting mRNA repression remains challenging, but is essential for a biologically informative ranking of potential miRNA targets. To address these challenges, predictors may use thermodynamic, evolutionary, probabilistic or sequence-based features. We developed an open-source software library, miRmap, which for the first time comprehensively covers all four approaches using 11 predictor features, 3 of which are novel. This allowed us to examine feature correlations and to compare their predictive power in an unbiased way using high-throughput experimental data from immunopurification, transcriptomics, proteomics and polysome fractionation experiments. Overall, target site accessibility appears to be the most predictive feature. Our novel feature based on PhyloP, which evaluates the significance of negative selection, is the best performing predictor in the evolutionary category. We combined all the features into an integrated model that almost doubles the predictive power of TargetScan. miRmap is freely available from http://cegg.unige.ch/mirmap. <s> BIB021 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> MicroRNAs (miRNAs) are small endogenous RNA molecules that regulate gene expression through mRNA degradation and/or translation repression, affecting many biological processes. DIANA-microT web server (http://www.microrna.gr/webServer) is dedicated to miRNA target prediction/functional analysis, and it is being widely used from the scientific community, since its initial launch in 2009. DIANA-microT v5.0, the new version of the microT server, has been significantly enhanced with an improved target prediction algorithm, DIANAmicroT-CDS. It has been updated to incorporate miRBase version 18 and Ensembl version 69. The in silico-predicted miRNA–gene interactions in Homo sapiens, Mus musculus, Drosophila melanogaster and Caenorhabditis elegans exceed 11 million in total. The web server was completely redesigned, to host a series of sophisticated workflows, which can be used directly from the on-line web interface, enabling users without the necessary bioinformatics infrastructure to perform advanced multi-step functional miRNA analyses. For instance, one available pipeline performs miRNA target prediction using different thresholds and meta-analysis statistics, followed by pathway enrichment analysis. DIANAmicroT web server v5.0 also supports a complete integration with the Taverna Workflow Management System (WMS), using the in-house developed DIANA-Taverna Plug-in. This plug-in provides ready-to-use modules for miRNA target prediction and functional analysis, which can be used to form advanced high-throughput analysis pipelines. 
<s> BIB022 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Usability and impact <s> Finding microRNA targets in the coding region is difficult due to the overwhelming signal encoding the amino acid sequence. Here, we introduce an algorithm (called PACCMIT-CDS) that finds potential microRNA targets within coding sequences by searching for conserved motifs that are complementary to the microRNA seed region and also overrepresented in comparison with a background model preserving both codon usage and amino acid sequence. Precision and sensitivity of PACCMIT-CDS are evaluated using PAR-CLIP and proteomics data sets. Thanks to the properly constructed background, the new algorithm achieves a lower rate of false positives and better ranking of predictions than do currently available algorithms, which were designed to find microRNA targets within 3' UTRs. <s> BIB023
Table 3 legend: We summarize availability, ease of use and impact/popularity. Aspects that a given method does not offer are marked as missing; '$' denotes unknown, i.e. the information was not available in the paper or on the web server. 'Availability' focuses on the type of implementation available to the end user: stand-alone (s), web server (ws), precomputed results (p) and upon request (ur), and provides the corresponding URLs. The links shown in bold font did not work. 'Ease of use' covers aspects related to the scope of a given method and the ease of running it, including the number of input parameters of the corresponding web servers, the target regions and species that can be predicted, the approximate number of predicted targets, the format in which the searched genes are provided and the ability to predict for new miRNAs. 'Target region' indicates where a given method searches for targets: untranslated region (UTR), coding DNA segment (CDS) and open reading frame (ORF). The covered species are chicken (c), drosophila (d), chimpanzee (e), dog (g), human (h), mouse (m), nematode (n), opossum (o), rat (r), cow (w), thale cress (t), zebra fish (z) and vertebrate (V). The estimated count of predicted targets per miRNA per gene, or per miRNA only (for predictors that do not allow inputting a target gene; denoted by *), is given in the 'Number of targets' column; counts were estimated based on the corresponding papers or by testing the web servers. The possible formats of the input genes are by name, by sequence, or by either name or sequence; 'none' denotes that searching particular genes is not allowed. 'New miRNA' shows whether a given method can predict targets of new miRNAs; methods that allow inputting miRNA sequences can be used to predict for new miRNAs and are annotated accordingly. 'Impact/popularity' is assessed using the number of times a given method was highlighted and considered in the 15 review papers listed in Supplementary Table S2.
The ease of use is affected by the use and number of parameters, the scope of predictions, the format of inputs and the ability to predict for novel miRNAs. The prediction methods rely on parameters that control how the prediction is performed, e.g. the seed size, the number of allowed guanine-uracil wobbles and mismatches, the selection of mRNA regions that are searched and the cutoffs for free energy and predicted propensity score. These parameters are usually set based on the experience of the designer or user of a given method, or are optimized empirically using a data set. Eleven methods hardcode and hide these parameters from the users, which arguably makes them easier to use but also reduces the ability of the end users to tune the models for specific needs or projects. RNAhybrid BIB001 offers eight (the most) parameters for tuning; RepTar and PITA BIB010 BIB016 have seven and five parameters, respectively; and eight predictors allow adjusting between one and four parameters. Importantly, these predictors provide default values for the parameters, so they can be seamlessly used even by layman users. A 'user-friendly' method should allow predictions for a wide range of species and target types. Most of the early methods only allow predictions in the 3′ UTRs, except for RNAhybrid BIB001 , miRanda BIB011 , DIANA-microT-CDS BIB022 and PACCMIT-CDS BIB023 , which also search coding DNA sequences (CDSs), and TargetScanS and Xie's method BIB002 , which consider open reading frames (ORFs) and promoters, respectively. As more miRNA targets were discovered beyond the 3′ UTRs BIB012 BIB018 , several newer programs (RNA22 BIB007 , STarMir BIB013 , Mtar BIB017 and miRcode BIB020 ) predict in the 3′ UTRs, CDSs and 5′ UTRs. A few methods (RNAhybrid BIB001 , MicroInspector BIB003 , MicroTar BIB008 and MIRZA [100]) do not limit the species for which they predict. They accept target genes as RNA sequences or provide stand-alone packages where users can prepare their own mRNA database. Most of the other predictors are constrained to human, mouse, fly and worm. The latter two were the first two species that were used to study miRNA targets. Seven methods consider a more restrictive set of species including human and mouse, and four of them also predict for rat or chicken. Four recent methods (HuMiTar BIB014 , TargetMiner , MultiMiTar BIB019 and miRcode BIB020 ) focus on human mRNAs, and TargetBoost BIB004 works only in worms.
Next, we analyze the format of the inputs. The target genes can be specified by name or identifier, by the mRNA sequence, or are preloaded such that the user is not allowed to enter them. Entering the name (e.g. GenBank Accession, NCBI gene ID and/or name) is arguably convenient, but it also limits the prediction to the mRNAs that are available in the considered reference database(s). Allowing the user to provide the mRNA sequence alleviates this drawback. Six predictors (MicroInspector BIB003 , STarMir BIB013 , PITA BIB010 , MultiMiTar BIB019 , miREE and miRmap BIB021 ) accept either the name or the sequence, while 3 and 11 programs accept only sequences or only names, respectively. The miRNAs can be inputted in two formats: by name and/or by sequence. Again, although it may be convenient to specify miRNAs by their names, accepting only names is a rather substantial drawback, as it does not allow predictions for novel miRNAs, which are nowadays discovered at a rapid pace. Six methods that offer web servers (TargetScan , DIANA-microT BIB022 , MicroInspector BIB003 , PITA BIB010 , miREE and miRmap BIB021 ) accept either the miRNA name or the sequence, while 3 and 10 only take the sequences or the names, respectively. Table 3 reveals that 12 methods can predict targets of novel miRNAs. When considering the outputs, the number of predicted targets varies widely between methods. Table 3 reports that while most methods predict a few targets per gene per miRNA, some predict hundreds, and miRanda BIB011 generates hundreds of thousands of targets per miRNA. One way to measure the impact/popularity of a given method is to analyze its inclusion in prior reviews. Considering the 16 reviews ( Supplementary Table S2 ), 29 of the 38 methods were included in at least one review and 11 in five or more. Moreover, five reviews highlighted/recommended certain predictors. TargetScan and TargetScanS were recommended in three and four reviews, respectively; DIANA-microT BIB015 and RNAhybrid BIB001 twice, and the EMBL method BIB005 , PicTar BIB006 , EIMMo BIB009 and PITA BIB013 once. We also calculated the average citation counts per year since a given predictor was proposed, using the Web of Knowledge. Table 3 reveals that 21 of the 38 methods receive on average >10 citations per year and all methods published before 2008 receive at least five citations per year. Three early methods receive >100 citations every year.
TargetScan/TargetScanS is on the extreme end (400+ citations per year), and this could be attributed to its popularity and convenient availability, the fact that empirical studies often compare to this predictor, and because it is widely used in practical applications.
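To make the user-tunable parameters discussed above more tangible (seed size, allowed G:U wobbles and mismatches, searched regions and a free-energy cutoff), the sketch below defines a generic, hypothetical scanning interface. The default values are arbitrary examples, the names are not taken from any particular predictor, and the duplex free-energy estimator is assumed to be supplied by the caller (e.g. a wrapper around an RNA-folding library).

```python
# Hypothetical illustration of typical tunable parameters of sequence-based scanners.
from dataclasses import dataclass

RC = {"A": "U", "U": "A", "G": "C", "C": "G"}

@dataclass
class ScanParams:
    seed_length: int = 7           # typical 6-8 nt seed
    max_gu_wobbles: int = 1        # G:U pairs tolerated in the seed
    max_mismatches: int = 0        # mismatches tolerated in the seed
    energy_cutoff: float = -14.0   # example kcal/mol duplex free-energy cutoff
    regions: tuple = ("3UTR",)     # may also include "CDS" and/or "5UTR"

def seed_compatible(mirna, site, p):
    """Check a site against the seed (miRNA positions 2..seed_length+1)."""
    seed_rc = "".join(RC[b] for b in reversed(mirna[1:1 + p.seed_length]))
    gu, mm = 0, 0
    for s, m in zip(site, seed_rc):
        if s == m:
            continue
        if (m, s) in {("C", "U"), ("A", "G")}:  # G:U wobble
            gu += 1
        else:
            mm += 1
    return gu <= p.max_gu_wobbles and mm <= p.max_mismatches

def scan_targets(mirna, transcript_regions, p, duplex_energy):
    """transcript_regions: dict region -> sequence; duplex_energy: caller-supplied
    function estimating the miRNA:site free energy."""
    hits = []
    for region in p.regions:
        seq = transcript_regions.get(region, "")
        for i in range(len(seq) - p.seed_length + 1):
            site = seq[i:i + p.seed_length]
            if seed_compatible(mirna, site, p) and duplex_energy(mirna, site) <= p.energy_cutoff:
                hits.append((region, i, site))
    return hits
```

Exposing such parameters with sensible defaults is what allows both layman use and tuning for specific projects, as discussed above.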
Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Empirical evaluation of representative miRNA target predictors <s> microRNAs are small noncoding genes that regulate the protein production of genes by binding to partially complementary sites in the mRNAs of targeted genes. Here, using our algorithm PicTar, we exploit cross-species comparisons to predict, on average, 54 targeted genes per microRNA above noise in Drosophila melanogaster. Analysis of the functional annotation of target genes furthermore suggests specific biological functions for many microRNAs. We also predict combinatorial targets for clustered microRNAs and find that some clustered microRNAs are likely to coordinately regulate target genes. Furthermore, we compare microRNA regulation between insects and vertebrates. We find that the widespread extent of gene regulation by microRNAs is comparable between flies and mammals but that certain microRNAs may function in clade-specific modes of gene regulation. One of these microRNAs (miR-210) is predicted to contribute to the regulation of fly oogenesis. We also list specific regulatory relationships that appear to be conserved between flies and mammals. Our findings provide the most extensive microRNA target predictions in Drosophila to date, suggest specific functional roles for most microRNAs, indicate the existence of coordinate gene regulation executed by clustered microRNAs, and shed light on the evolution of microRNA function across large evolutionary distances. All predictions are freely accessible at our searchable Web site http://pictar.bio.nyu.edu. <s> BIB001 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Empirical evaluation of representative miRNA target predictors <s> BackgroundMicroRNAs have emerged as important regulatory genes in a variety of cellular processes and, in recent years, hundreds of such genes have been discovered in animals. In contrast, functional annotations are available only for a very small fraction of these miRNAs, and even in these cases only partially.ResultsWe developed a general Bayesian method for the inference of miRNA target sites, in which, for each miRNA, we explicitly model the evolution of orthologous target sites in a set of related species. Using this method we predict target sites for all known miRNAs in flies, worms, fish, and mammals. By comparing our predictions in fly with a reference set of experimentally tested miRNA-mRNA interactions we show that our general method performs at least as well as the most accurate methods available to date, including ones specifically tailored for target prediction in fly. An important novel feature of our model is that it explicitly infers the phylogenetic distribution of functional target sites, independently for each miRNA. This allows us to infer species-specific and clade-specific miRNA targeting. We also show that, in long human 3' UTRs, miRNA target sites occur preferentially near the start and near the end of the 3' UTR.To characterize miRNA function beyond the predicted lists of targets we further present a method to infer significant associations between the sets of targets predicted for individual miRNAs and specific biochemical pathways, in particular those of the KEGG pathway database. 
We show that this approach retrieves several known functional miRNA-mRNA associations, and predicts novel functions for known miRNAs in cell growth and in development.ConclusionWe have presented a Bayesian target prediction algorithm without any tunable parameters, that can be applied to sequences from any clade of species. The algorithm automatically infers the phylogenetic distribution of functional sites for each miRNA, and assigns a posterior probability to each putative target site. The results presented here indicate that our general method achieves very good performance in predicting miRNA target sites, providing at the same time insights into the evolution of target sites for individual miRNAs. Moreover, by combining our predictions with pathway analysis, we propose functions of specific miRNAs in nervous system development, inter-cellular communication and cell growth. The complete target site predictions as well as the miRNA/pathway associations are accessible on the ElMMo web server. <s> BIB002 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Empirical evaluation of representative miRNA target predictors <s> AbstractmirSVR is a new machine learning method for ranking microRNA target sites by a down-regulation score. The algorithm trains a regression model on sequence and contextual features extracted from miRanda-predicted target sites. In a large-scale evaluation, miRanda-mirSVR is competitive with other target prediction methods in identifying target genes and predicting the extent of their downregulation at the mRNA or protein levels. Importantly, the method identifies a significant number of experimentally determined non-canonical and non-conserved sites. <s> BIB003 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Empirical evaluation of representative miRNA target predictors <s> MicroRNAs, or miRNAs, post-transcriptionally repress the expression of protein-coding genes. The human genome encodes over 1000 miRNA genes that collectively target the majority of messenger RNAs (mRNAs). Base pairing of the so-called miRNA ‘seed’ region with mRNAs identifies many thousands of putative targets. Evaluating the strength of the resulting mRNA repression remains challenging, but is essential for a biologically informative ranking of potential miRNA targets. To address these challenges, predictors may use thermodynamic, evolutionary, probabilistic or sequence-based features. We developed an open-source software library, miRmap, which for the first time comprehensively covers all four approaches using 11 predictor features, 3 of which are novel. This allowed us to examine feature correlations and to compare their predictive power in an unbiased way using high-throughput experimental data from immunopurification, transcriptomics, proteomics and polysome fractionation experiments. Overall, target site accessibility appears to be the most predictive feature. Our novel feature based on PhyloP, which evaluates the significance of negative selection, is the best performing predictor in the evolutionary category. We combined all the features into an integrated model that almost doubles the predictive power of TargetScan. miRmap is freely available from http://cegg.unige.ch/mirmap. 
<s> BIB004 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Empirical evaluation of representative miRNA target predictors <s> MicroRNAs (miRNAs) are small endogenous RNA molecules that regulate gene expression through mRNA degradation and/or translation repression, affecting many biological processes. DIANA-microT web server (http://www.microrna.gr/webServer) is dedicated to miRNA target prediction/functional analysis, and it is being widely used from the scientific community, since its initial launch in 2009. DIANA-microT v5.0, the new version of the microT server, has been significantly enhanced with an improved target prediction algorithm, DIANAmicroT-CDS. It has been updated to incorporate miRBase version 18 and Ensembl version 69. The in silico-predicted miRNA–gene interactions in Homo sapiens, Mus musculus, Drosophila melanogaster and Caenorhabditis elegans exceed 11 million in total. The web server was completely redesigned, to host a series of sophisticated workflows, which can be used directly from the on-line web interface, enabling users without the necessary bioinformatics infrastructure to perform advanced multi-step functional miRNA analyses. For instance, one available pipeline performs miRNA target prediction using different thresholds and meta-analysis statistics, followed by pathway enrichment analysis. DIANAmicroT web server v5.0 also supports a complete integration with the Taverna Workflow Management System (WMS), using the in-house developed DIANA-Taverna Plug-in. This plug-in provides ready-to-use modules for miRNA target prediction and functional analysis, which can be used to form advanced high-throughput analysis pipelines. <s> BIB005
We empirically evaluate seven representative target sequence-based predictors, i.e. methods that predict targets from miRNA and mRNA sequences, that are conveniently available to the end users, that predict for human and mouse, and that provide a sufficiently rich set of outputs. The selection criteria are discussed in the 'Materials and Methods' section and in Supplementary Table S6 . They include older (PicTar 2005 BIB001 ) and newer (TargetScan 6.2 , DIANA-microT-CDS BIB005 , miRanda 2010 BIB003 , EIMMo3 BIB002 , mirTarget2 v4 and miRmap v1.1 BIB004 ) approaches that use a variety of types of predictive models. The predictions, which were collected using their web servers or precomputed predictions, consist of binding targets (mRNA sequences and/or positions of the binding site on the mRNA) and the corresponding propensities (real-valued scores that quantify the probability of the miRNA:target interaction). Table 4 and Supplementary Table S7 summarize results of the assessment at the gene level (to predict mRNAs that interact with a given miRNA) on the TEST_gene data set and at the duplex level (to predict whether a given fragment of mRNA interacts with a given miRNA) on the TEST_duplex data set. A given miRNA:target pair was predicted as functional if the target was predicted using the corresponding miRNA; the remaining targets were assumed to be predicted as nonfunctional and the corresponding propensity was set to 0. When assessing the gene-level predictions, we scored a given gene using the sum of propensities among all its predicted target sites for a given miRNA. Because these seven methods were initially published before 2012, we use experimentally validated miRNA targets that were published after 2012 to perform the empirical assessment. This limits the bias caused by a potential overlap between our benchmark data and the data used to develop a given method. Considering the predictions of the miRNA:mRNA duplexes, TargetScan and DIANA-microT secure the highest AUC values of 0.674 and 0.673, respectively. Moreover, DIANA-microT has the highest MCC, which improves over the second best TargetScan by 0.073 [relative improvement of (0.273-0.200)/0.200*100% = 36.8%]. TargetScan offers the highest sensitivity, i.e. it correctly predicts the largest fraction of the functional duplexes. On the other hand, PicTar has the highest specificity, i.e. it correctly predicts the largest fraction of the nonfunctional duplexes. This means that functional targets predicted by PicTar are likely to be functional. DIANA-microT offers the highest SNR of correct to incorrect functional predictions (SNR+). TargetScan has the highest SNR- (SNR for the nonfunctional predictions), relatively good SNR+ and very good PNR (ratio of the number of predicted to native functional duplexes). The PNR value of TargetScan reveals that it only slightly underpredicts, by 3.8%, the number of functional duplexes. The other methods, except for miRmap and EIMMo, underpredict the functional duplexes by a large margin. We illustrate the relation between predictive quality (SNR values) and the outputted propensities, binned into 10 intervals, in Supplementary Figure S3A . The number of predicted duplexes and their SNR values in each interval are denoted by the size and color of the bubbles (dark blue for accurate predictions), respectively. Alternating red and blue bubbles for a given predictor indicate that values of its propensity do not correlate with the underlying predictive quality.
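The measures and the gene-level scoring used in this assessment can be summarized in a short sketch. The SNR+, SNR- and PNR formulas below follow our reading of the definitions given in the text (TP/FP, TN/FN and predicted/native functional counts, respectively), the gene-level score is the sum of site-level propensities with 0 for unpredicted genes, and the binning helper mirrors the bubble-plot analysis; all names are illustrative.

```python
# Hedged sketch of the assessment measures and of the propensity-bin analysis.
from collections import defaultdict
from math import ceil, sqrt

def metrics(labels, predictions):
    """labels and predictions are 0/1 lists (1 = functional)."""
    tp = sum(1 for l, p in zip(labels, predictions) if l == 1 and p == 1)
    fp = sum(1 for l, p in zip(labels, predictions) if l == 0 and p == 1)
    tn = sum(1 for l, p in zip(labels, predictions) if l == 0 and p == 0)
    fn = sum(1 for l, p in zip(labels, predictions) if l == 1 and p == 0)
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {
        "Sen": tp / (tp + fn) if tp + fn else 0.0,
        "Spe": tn / (tn + fp) if tn + fp else 0.0,
        "Prec": tp / (tp + fp) if tp + fp else 0.0,
        "MCC": (tp * tn - fp * fn) / denom if denom else 0.0,
        "SNR+": tp / fp if fp else float("inf"),
        "SNR-": tn / fn if fn else float("inf"),
        "PNR": (tp + fp) / (tp + fn) if tp + fn else float("inf"),
    }

def gene_score(site_propensities):
    """Gene-level score for a miRNA: sum of propensities of its predicted sites
    (an empty list, i.e. no predicted site, yields 0)."""
    return sum(site_propensities)

def counts_per_propensity_bin(labels, propensities, n_bins=10):
    """Counts of functional/nonfunctional pairs per propensity bin: bin 0 holds
    propensity 0 (predicted nonfunctional), bins 1..n_bins cover (0, 1]."""
    bins = defaultdict(lambda: [0, 0])  # bin -> [functional, nonfunctional]
    for label, prop in zip(labels, propensities):
        b = 0 if prop == 0 else min(n_bins, max(1, ceil(prop * n_bins)))
        bins[b][0 if label == 1 else 1] += 1
    return dict(bins)
```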
All methods have blue bubbles for propensity of 0, which means that they predict the nonfunctional duplexes well. However, predicted functional targets (propensity > 0) are often inaccurate (red bubbles), particularly for lower values of propensity. DIANA-microT predicts well when its propensity is > 0.7, and miRmap and TargetScan when it is > 0.4 and 0.8, respectively. Analysis of statistical significance reveals that the differences in the AUC values (results above the diagonal in Supplementary Table S7 ) are not statistically significant between TargetScan, DIANA-microT and miRmap. However, these three predictors are significantly better than the other four methods (P-value 0.001). Table 5 analyzes the anticipated predictive performance at the duplex level based on information that is available before the prediction is performed, including the nucleotide composition of the seed region and the overall size of the input miRNA sequences. The hints summarized in this Table could guide the selection of a predictor based on the miRNA sequence. Most methods, especially TargetScan, DIANA-microT and miRmap, predict well for medium-sized (22 nucleotides long) miRNAs. The predictions for longer miRNAs are generally less accurate. Considering the nucleotide content in the seed region, the same three methods provide high-quality predictions for miRNAs when the seeds have 2 adenines or 2 guanines, and < 2 cytosines. DIANA-microT also predicts well for < 2 adenines and > 2 uracils, and miRmap for < 2 adenines. Overall, we recommend TargetScan, DIANA-microT and miRmap because their AUCs are > 0.7 for specific types of miRNAs. The overall prediction quality is higher, and the ranking of the methods is slightly different, for the predictions on the TEST_gene data set when compared with the TEST_duplex data set ( Table 4 ). TargetScan secures the highest AUC, while EIMMo moves up to the second place and provides the highest MCC. TargetScan improves in AUC over the second best EIMMo by 0.023 (relative improvement of 3.2%) and over miRmap by 0.043 (relative improvement of 4.8%). miRmap offers the highest sensitivity and TargetScan provides arguably the best balance between sensitivity and specificity (both scores are high and similar). MirTarget2 is the most conservative method given its highest specificity, precision and SNR+, i.e. it predicts only a few functional targets but with a high success rate. The PNR values reveal that TargetScan predicts exactly the right number of functional genes and EIMMo only 5.3% too few. Supplementary Figure S3B shows the relation between predictive quality (SNR values) and the propensities generated by the prediction methods. Interestingly, predictions associated with higher propensities are more likely to be accurate, as evidenced by the presence of (dark) blue bubbles. As a highlight, EIMMo predicts well in every propensity bin, and the targets predicted by TargetScan and miRanda with propensities > 0.3 and 0.4, respectively, are characterized by high SNR values. Analysis of the statistical significance of differences in the AUC values (results below the diagonal in Supplementary Table S7 ) reveals that TargetScan's results are significantly better (P-value 0.001) compared with the other predictors. AUCs of EIMMo and miRmap are not significantly different from each other and are significantly higher than the AUCs of the other four methods (P-value 0.001).
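The Table 5-style stratification described above can be sketched as follows. The sketch assumes scikit-learn is available and that the predictions are stored as hypothetical (miRNA sequence, predicted score, label) records; the length and seed-composition bins follow the Table 5 legend.

```python
# Sketch of stratifying predictive quality by miRNA length and seed composition.
from collections import defaultdict
from sklearn.metrics import roc_auc_score

def length_bin(mirna):
    n = len(mirna)
    return "short(<22)" if n < 22 else ("medium(=22)" if n == 22 else "long(>22)")

def seed_count_bin(mirna, nucleotide):
    c = mirna[1:7].count(nucleotide)  # 6-mer seed, miRNA positions 2-7
    return "low(<2)" if c < 2 else ("medium(=2)" if c == 2 else "high(>2)")

def auc_by_group(records, group_fn):
    """records: iterable of (mirna_seq, predicted_score, label_0_or_1)."""
    groups = defaultdict(lambda: ([], []))
    for mirna, score, label in records:
        scores, labels = groups[group_fn(mirna)]
        scores.append(score)
        labels.append(label)
    out = {}
    for group, (scores, labels) in groups.items():
        if len(set(labels)) == 2:  # AUC requires both classes in the group
            out[group] = roc_auc_score(labels, scores)
    return out

# Example usage: AUC stratified by miRNA length, and by adenine count in the seed
# aucs_by_length = auc_by_group(records, length_bin)
# aucs_by_seed_a = auc_by_group(records, lambda m: seed_count_bin(m, "A"))
```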
We also analyze the relation between predictive performance at the gene level and the number of target sites predicted in a given gene (Supplementary Figure S3C). Most methods, except for MirTarget2 and miRanda, can predict three or more target sites per gene for a given miRNA. We observe that predictive quality for genes for which at least two sites are predicted is better (bubbles have a darker blue color), particularly for EIMMo, TargetScan and miRanda. This suggests that for these predictors a higher number of predicted sites could be used as a marker of higher predictive quality.
Table 5 note: The compositional characteristics include the size of the miRNA and the count of each nucleotide type in the seed region. The sizes are divided into short (<22 nt), medium (=22 nt) and long (>22 nt). The count of nucleotides in the seeds of miRNAs is grouped into low (<2 nt), medium (=2 nt) and high (>2 nt). The AUC values obtained by a given predictor are coded as: '-' for [0, 0.55], '=' for (0.55, 0.6], '+' for (0.6, 0.7] and '++' for (0.7, 1.0].
Table 4 note: We evaluate seven representative target predictors. We measure area under the ROC curve (AUC), Matthews correlation coefficient (MCC), sensitivity (Sen.), specificity (Spe.), precision (Prec.), signal-to-noise ratio for predicted functional (SNR+) and predicted nonfunctional targets (SNR-) and the predicted-to-native functional target ratio (PNR). Methods are sorted in descending order by their AUC values. The best value of each measurement across all the predictors is given in bold font.
Predictions at the transcriptome/proteome scale on the TEST_geo and TEST_psilac data sets are evaluated at different thresholds that define the fraction of the most repressed and most overexpressed genes that are annotated as functional and nonfunctional, respectively (Figure 1). AUCs are generally higher at the gene level (TEST_geo data set) than at the protein level (TEST_psilac data set). Considering the three gene-level data sets, the ranking of the methods on the TEST_psilac data set is the same as on the TEST_gene data set, and slightly different on the TEST_geo data set. Based on the microarray data, miRmap achieves the best AUC, which is comparable with the AUC of TargetScan and EIMMo. These three predictors have AUCs > 0.7 when evaluated on the top 4% of genes with the largest expression changes; using this threshold, on average each miRNA targets 176 mRNAs. We note that miRmap was originally trained and tested on two of the three microarrays from the TEST_geo data set, so its predictive quality on this data set could be overestimated. Considering the pSILAC data, only TargetScan provides AUC > 0.7 when using the top 1% of proteins for which expression levels change most; this threshold results in an annotation where on average each miRNA regulates 39 proteins. Overall, the AUC values decrease when more ambiguous genes (genes for which expression changes are weaker) are included, i.e. the fraction of the included repressed and overexpressed genes is higher. Assessment based on an additional quality index (Supplementary Figure S4A and B) leads to similar conclusions. TargetScan, EIMMo and miRmap secure the highest values of this index.
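The sketch below illustrates the threshold-based annotation used for the transcriptome/proteome-scale assessment described above: after a miRNA transfection, a given fraction of the most repressed genes is labelled functional and the same fraction of the most overexpressed genes nonfunctional, and the AUC is recomputed as that fraction is relaxed. The data structures and fraction values are assumptions; the actual TEST_geo and TEST_psilac protocols may differ in detail.

```python
# Illustrative sketch: expression-change-based annotation of functional and
# nonfunctional targets, and AUC as a function of the included gene fraction.
from sklearn.metrics import roc_auc_score


def annotate_by_expression(log_fold_change, fraction):
    """log_fold_change: dict gene -> expression change after miRNA transfection
    (negative = repressed). The top `fraction` most repressed genes are labelled
    functional (1) and the top `fraction` most overexpressed nonfunctional (0);
    the remaining, more ambiguous genes are left out of the evaluation."""
    ranked = sorted(log_fold_change.items(), key=lambda kv: kv[1])
    k = max(1, int(fraction * len(ranked)))
    labels = {g: 1 for g, _ in ranked[:k]}          # most repressed
    labels.update({g: 0 for g, _ in ranked[-k:]})   # most overexpressed
    return labels


def auc_vs_fraction(log_fold_change, gene_propensity,
                    fractions=(0.01, 0.02, 0.04, 0.08)):
    """gene_propensity: dict gene -> predictor score (0.0 if not predicted)."""
    results = {}
    for f in fractions:
        labels = annotate_by_expression(log_fold_change, f)
        genes = list(labels)
        y_true = [labels[g] for g in genes]
        y_score = [gene_propensity.get(g, 0.0) for g in genes]
        results[f] = roc_auc_score(y_true, y_score)
    return results
```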
Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Discussion <s> Computational microRNA (miRNA) target prediction is a field in flux. Here we present a guide through five widely used mammalian target prediction programs. We include an analysis of the performance of these individual programs and of various combinations of these programs. For this analysis we compiled several benchmark data sets of experimentally supported miRNA-target gene interactions. Based on the results, we provide a discussion on the status of target prediction and also suggest a stepwise approach toward predicting and selecting miRNA targets for experimental testing. <s> BIB001 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Discussion <s> In recent years, microRNAs (miRNAs) have emerged as a major class of regulatory genes, present in most metazoans and important for a diverse range of biological functions. Because experimental identification of miRNA targets is difficult, there has been an explosion of computational target predictions. Although the initial round of predictions resulted in very diverse results, subsequent computational and experimental analyses suggested that at least a certain class of conserved miRNA targets can be confidently predicted and that this class of targets is large, covering, for example, at least 30% of all human genes when considering about 60 conserved vertebrate miRNA gene families. Most recent approaches have also shown that there are correlations between domains of miRNA expression and mRNA levels of their targets. Our understanding of miRNA function is still extremely limited, but it may be that by integrating mRNA and miRNA sequence and expression data with other comparative genomic data, we will be able to gain global and yet specific insights into the function and evolution of a broad layer of post-transcriptional control. <s> BIB002 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Discussion <s> MicroRNAs (miRNAs) are a class of short endogenously expressed RNA molecules that regulate gene expression by binding directly to the messenger RNA of protein coding genes. They have been found to confer a novel layer of genetic regulation in a wide range of biological processes. Computational miRNA target prediction remains one of the key means used to decipher the role of miRNAs in development and disease. Here we introduce the basic idea behind the experimental identification of miRNA targets and present some of the most widely used computational miRNA target identification programs. The review includes an assessment of the prediction quality of these programs and their combinations. Contact: p.alexiou@fleming.gr Supplementary information: Supplementary data are available at Bioinformatics online. <s> BIB003 </s> Comprehensive overview and assessment of computational prediction of microRNA targets in animals <s> Discussion <s> Background: microRNAs (miRNAs) are tiny endogenous RNAs that have been discovered in animals and plants, and direct the post-transcriptional regulation of target mRNAs for degradation or translational repression via binding to the 3'UTRs and the coding exons. To gain insight into the biological role of miRNAs, it is essential to identify the full repertoire of mRNA targets (target genes). A number of computer programs have been developed for miRNA-target prediction.
These programs essentially focus on potential binding sites in 3'UTRs, which are recognized by miRNAs according to specific base-pairing rules. Results: Here, we introduce a novel method for miRNA-target prediction that is entirely independent of existing approaches. The method is based on the hypothesis that transcription of a miRNA and its target genes tend to be co-regulated by common transcription factors. This hypothesis predicts the frequent occurrence of common cis-elements between promoters of a miRNA and its target genes. That is, our proposed method first identifies putative cis-elements in a promoter of a given miRNA, and then identifies genes that contain common putative cis-elements in their promoters. In this paper, we show that a significant number of common cis-elements occur in ~28% of experimentally supported human miRNA-target data. Moreover, we show that the prediction of human miRNA-targets based on our method is statistically significant. Further, we discuss the random incidence of common cis-elements, their consensus sequences, and the advantages and disadvantages of our method. Conclusions: This is the first report indicating prevalence of transcriptional regulation of a miRNA and its target genes by common transcription factors and the predictive ability of miRNA-targets based on this property. <s> BIB004 </s>
We reviewed 38 miRNA target predictors from all significant perspectives including their prediction models, availability, impact, user friendliness, and the protocols and measures that are used to evaluate their predictive performance. We found that standardized evaluation procedures are urgently needed because currently predictors are evaluated using different measures, different test protocols and vastly different data sets. This hinders comparison among these methods and appropriate selection by the end users. To this end, we empirically and systematically compared seven representative predictors on four benchmark data sets, considering prediction of miRNA:mRNA duplexes, target genes and proteins. We found that although certain methods, like TargetScan and miRmap, offer high overall predictive quality, there is no universally best predictor. For instance, PicTar and MirTarget2 provide predictions with high specificity and a low number of FP (incorrectly predicted functional genes/duplexes). Thus, these two methods are suitable for users that would like to obtain a small subset of accurately predicted functional duplexes or genes. EIMMo predicts well at the gene level. We observe that the count of functional target sites or genes predicted by TargetScan is the closest to the native count (PNR value close to 1), and thus, this method should be used to accurately estimate the number of miRNA targets. We found that genes predicted as functional based on a higher number of sites are more likely to be accurate, particularly for the EIMMo and TargetScan predictors. Finally, the benchmark data sets and empirical results that we provide are useful to develop and comparatively assess future prediction methods. We observe that predictions at the duplex level are characterized by lower predictive quality than the predictions of target genes. This agrees with the intuition that predicting target sites should be more difficult than predicting target genes, which offer more input information (longer mRNA sequence). Moreover, our estimates of the predictive performance are often lower than the estimates from the original publications. Possible reasons are as follows: (i) we use experimentally validated data, which is likely more challenging than the artificial data that were used to assess previous predictors; (ii) the nonfunctional validated duplexes that we use have relatively many Watson-Crick (WC) base pairs in the seed regions (83% have at least six pairs, see Supplementary Table S8). These sites were likely hypothesized to be functional, refuted and thus annotated as nonfunctional. This is why they have such seeds, which in turn makes them more challenging to separate from the functional duplexes when compared with a more 'random' site; and (iii) miRanda, PicTar, EIMMo and MirTarget2 provide only precomputed predictions, which may not include the most up-to-date miRNA and transcript databases. Unfortunately, we could not compare results with the previous reviews BIB001 BIB003 BIB002 because they did not consider a balanced selection of measurements (e.g. only provided sensitivity and precision, which ignore TN), and such one-sided evaluation would not be meaningful. Our review offers in-depth insights that could be used by the end users to select prediction methods based on their predictive performance (Table 4) and their input miRNAs (Table 5). We also provide several practical observations that consider specifics of applications of interest.
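To make the seed-pairing argument in reason (ii) above concrete, the short sketch below counts Watson-Crick pairs between a miRNA seed and a candidate site. Taking the seed as miRNA nucleotides 2-8 and aligning the site's 3' end against miRNA position 1 are common simplifying conventions assumed here for illustration, not definitions taken from this survey.

```python
# Illustrative sketch: counting Watson-Crick (WC) pairs between a miRNA seed and
# the corresponding target site (both sequences written 5'->3').
WC_PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}


def seed_wc_pairs(mirna, site, seed_start=2, seed_end=8):
    """Counts WC pairs between miRNA positions seed_start..seed_end and the site.
    The duplex is antiparallel, so the site is reversed before position-wise
    comparison; this assumes the site's 3' end faces miRNA position 1."""
    seed = mirna.upper().replace("T", "U")[seed_start - 1:seed_end]
    site_rev = site.upper().replace("T", "U")[::-1][seed_start - 1:seed_end]
    return sum((a, b) in WC_PAIRS for a, b in zip(seed, site_rev))


# Example: a site carrying a perfect 7-nucleotide seed match (prints 7)
print(seed_wc_pairs("UGAGGUAGUAGGUUGUAUAGUU",      # let-7a-like miRNA
                    "AAAACUAUACAACCUACUACCUCA"))   # target site, 5'->3'
```

Under such a count, most of the nonfunctional validated duplexes mentioned above would still score six or seven, which is precisely what makes them hard to separate from functional ones.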
Arguably, the commonly considered characteristics of the applications of the miRNA target predictors include the need to consider novel miRNAs and to focus on certain regions in the mRNA, to predict a more complete or a smaller and more accurate list of targets, to predict for a large set of miRNAs, to tweak desired parameters of the miRNA-mRNA interaction and to generate propensities for the predicted interactions. We address these characteristics as follows:
• Only some methods can predict targets for novel miRNAs (see 'New miRNA' column in Table 3).
• Applications that focus on particular regions (e.g. 5'UTR, CDS, promoters) should use predictors that were designed to consider these regions (see 'target region' column in Table 3).
• Some methods generate few and potentially more accurate targets, while some predict a larger and more complete set of targets that may include more FP (see 'Number of targets' column in Table 3). Users should choose an appropriate method depending on whether they look for a more complete or a more accurate set of targets.
• When predicting for a large number of miRNAs, the downloadable precomputed results or methods that provide APIs should be used (see 'batch search' in the 'Note' column in Supplementary Table S6).
• The end users should apply predictors with a tunable seed type parameter, such as PITA, when searching for targets that use a particular seed type. Also, when aiming to find targets with a low number of WC pairs in the seed region, only some predictors that consider such targets, like miREE, can be used.
• When predicting the target site, the methods that can only predict target genes cannot be used (see 'Target site tracking' column in Supplementary Table S6).
• Only some predictors provide predictions with the associated propensities of the interaction; many methods only provide binary (functional versus nonfunctional) predictions (see 'Score' column in Supplementary Table S6).
Although undoubtedly computational miRNA target predictors are useful and their predictive performance is relatively good, we suggest several areas where further improvements are possible:
• Current methods use many different predictive models. In contrast to other areas of bioinformatics, the empirical (knowledge-based) models do not outperform the heuristic models. This could be due to the low quantity of training data, the use of artificial training data (randomly generated nonfunctional targets) and the unbalanced nature of the data (low number of nonfunctional targets). Thus, one of the future aims should be to improve the quality and quantity of the training data.
• Further improvements in predictive quality could be attained by finding and using not yet known characteristics of miRNA:target interactions. For instance, recently cis-elements were used to connect primary miRNAs to their potential targets BIB004, and Gene Ontology annotations and protein-protein interaction networks
Key Points
• Computational miRNA target prediction from sequence is essential to characterize miRNA functions and to develop miRNA-based therapeutics.
• We comprehensively summarize 38 miRNA target predictors and empirically evaluate seven methods on four benchmark data sets that annotate targets at the binding site, gene and protein levels.
• Current miRNA target prediction methods substantially vary in their predictive methodology, usability and predictive performance.
• We offer insights for the end users to select appropriate methods according to their specific application and we discuss advantages and disadvantages of the considered predictors.
• New miRNA target predictors are needed, particularly focusing on the high-throughput predictions, improved predictive performance and provision of an expanded range of predicted outputs.
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Introduction <s> Due to the introduction of newer technologies like Long Term Evolution (LTE) in already deployed cellular access networks, changes in the energy-efficiency of networks consisting predominantly of macro base station sites (BSSs) can be expected. An investigation has been performed for two prominent energy metrics of cellular networks: Power per Unit Area (PUA) and Energy per bit and Unit Area (EbUA). Analytical relations have been developed that express the influence of parameters such as BSs' transmit (Tx) powers, inter-site distances (ISDs), and a number of heterogeneous macro or LTE micro BSSs on the PUA and EbUA. It has been shown that appropriate selection of these parameters can ensure significant energy savings. Besides the possibility of finding an optimal trade-off among ISDs and Tx powers of macro BSSs, which will minimize PUA and maximize EbUA, adding micro LTE BSs to such heterogeneous networks contributes to the improvement of network energy efficiency. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Introduction <s> In the present scenario, an energy efficiency has become a matter of prime importance for wireless networks. To meet the demands of an increased capacity, an improved data rate, and a better quality of the service of the next-generation networks, there is a need to adopt energy-efficient architectures. Along with these requirements, it is also our social responsibility to reduce the carbon footprint by reducing the power consumption in a wireless network. Hence, a green communication is an urgent need. In this paper, we have surveyed various techniques for the power optimization of the upcoming 5G networks. The primary focus is on the use of relays and small cells to improve the energy efficiency of the network. We have discussed the various scenarios of relaying for the next-generation networks. Along with this, the importance of simultaneous wireless power and information transfer, massive multiple input multiple output, and millimeter waves has been analyzed for 5G networks. <s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Introduction <s> The emerging 5G wireless networks will pose extreme requirements such as high throughput and low latency. Caching as a promising technology can effectively decrease latency and provide customized services based on group users behaviour (GUB). In this paper, we carry out the energy efficiency analysis in the cache-enabled hyper cellular networks (HCNs), where the macro cells and small cells (SCs) are deployed heterogeneously with the control and user plane (C/U) split. Benefiting from the assistance of macro cells, a novel access scheme is proposed according to both user interest and fairness of service, where the SCs can turn into semi- sleep mode. Expressions of coverage probability, throughput and energy efficiency (EE) are derived analytically as the functions of key parameters, including the cache ability, search radius and backhaul limitation. Numerical results show that the proposed scheme in HCNs can increase the network coverage probability by more than 200% compared with the single- tier networks. The network EE can be improved by 54% than the nearest access scheme, with larger research radius and higher SC cache capacity under lower traffic load. 
Our performance study provides insights into the efficient use of cache in the 5G software defined networking (SDN). <s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Introduction <s> The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in 5G cellular networks. While massive MIMO will reduce the transmission power at the expense of higher computational cost, the question remains as to which (computation or transmission power) is more important in the energy efficiency of 5G small cell networks. Thus, the main objective in this article is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50 percent of the energy is consumed by the computation power at 5G small cell BSs. Moreover, the computation power of a 5G small cell BS can approach 800 W when massive MIMO (e.g., 128 antennas) is deployed to transmit high volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks. <s> BIB004 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Introduction <s> Although the promising 5G cell network technology has increased the transmitting rate greatly, it has also brought some challenges. The energy efficiency has become an important topic in 5G networks. In this paper, the energy efficiency of small cell networks is analyzed, and the existing objective functions are classified in order to minimize the energy consumption, and to maximize the energy efficiency, harvested energy, and energy-aware transmission. Commonly used metrics were analyzed on equipment, base station, and network levels, respectively. Moreover, the methods for energy efficiency improvement were introduced according to above-mentioned metrics. Afterward, the relationships between energy efficiency, spectrum efficiency, and space efficiency were discussed. In order to improve efficiency on equipment, base station, and network levels, the energy and spectrum market is proposed and guidelines for the future research on metrics, methods, and market are presented. The proposed market was verified by simulations, and the simulation results have shown that the proposed market improves the energy efficiencies effectively. <s> BIB005 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Introduction <s> Along with spectral efficiency (SE), energy efficiency (EE) is a key performance metric for the design of 5G and beyond 5G (B5G) wireless networks. At the same time, infrastructure sharing among multiple operators has also emerged as a new trend in wireless communication networks. This paper presents an optimization framework for EE and SE maximization in a network, where radio resources are shared among multiple operators. We define a heterogeneous service level agreement (SLA) framework for a shared network, in which the constraints of different operators are handled by two different multi-objective optimization approaches namely the utility profile and scalarization methods. Pareto-optimal solutions are obtained by merging these approaches with the theory of generalized fractional programming. The approach applies to both noise-limited and interference-limited systems, with single-carrier or multi-carrier transmission. Extensive numerical results illustrate the effect of the operator specific SLA requirements on the global spectral and EE. 
Three network scenarios are considered in the numerical results, each one corresponding to a different SLA, with different operator-specific EE and SE constraints. <s> BIB006 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Introduction <s> To avoid the latency from RRC state transition procedure, legacy network maintains UE as RRC connected state for pre-defined time duration even without any traffic arrival. However, it consumes UE battery rapidly because UE should monitor PDCCH, and send CQI feedback periodically. In this paper, we design the RRC connection control to enhance energy efficiency with moderate control signaling overhead. In RRC INACTIVE state, the newly introduced in NR, both network and UE save UE context including bearer configuration and security even after UE released from network. Owing to the saved UE context, the RRC state transition from RRC INACTIVE to RRC CONNECTED requires fewer number of CN signalling. And thus network can release UE to RRC INACTIVE more aggressively with shorter timer. Furthermore, we propose the connectionless data transmission in RRC INACTIVE without RRC state transition to RRC CONNECTED. In our performance analysis, UE energy consumption can be improved by 50% for the modem only and 18% reduction for the total device including display. Furthermore, when the small data or background (keep-alive) traffic is transferred in RRC INACTIVE, the energy efficiency is increased up to double. <s> BIB007 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Introduction <s> Using a network of cache enabled small cells, traffic during peak hours can be reduced by proactively fetching the content that is most likely to be requested. In this paper, we aim to explore the impact of proactive caching on an important metric for future generation networks, namely, energy efficiency (EE). We argue that, exploiting the spatial repartitions of users in addition to the correlation in their content popularity profiles, can result in considerable improvement of the achievable EE. In this paper, the optimization of EE is decoupled into two related subproblems. The first one addresses the issue of content popularity modeling. While most existing works assume similar popularity profiles for all users, we consider an alternative framework in which, users are clustered according to their popularity profiles. In order to showcase the utility of the proposed clustering, we use a statistical model selection criterion, namely, Akaike information criterion. Using stochastic geometry, we derive a closed-form expression of the achievable EE and we find the optimal active small cell density vector that maximizes it. The second subproblem investigates the impact of exploiting the spatial repartitions of users. After considering a snapshot of the network, we formulate a combinatorial problem that optimizes content placement in order to minimize the transmission power. Numerical results show that the clustering scheme considerably improves the cache hit probability and consequently the EE, compared with an unclustered approach. Simulations also show that the small base station allocation algorithm improves the energy efficiency and hit probability. <s> BIB008
Advances in telecommunication systems around the world have always been pushing the wireless infrastructure to be more resilient and scalable. Ever-growing data rates and the demand for the highest quality of service have been strong constraints when energy conservation needs to be considered. Data rates as high as 1 Gbps have been foreseen with the advent of 5G. In addition, with an explosive number of heterogeneous devices coming online, including sensors for home security, tablets, and wearable health monitors, the computational power of base stations must increase. An estimated 50% increase in the computing power of baseband units has been predicted to handle this traffic burst BIB004. Thus, the focus on energy efficiency needs to include optimization of computational complexity in addition to optimization of transmission power. An estimated 75% of the Information and Communications Technology (ICT) industry is expected to be wireless by 2020, and today 5% of the world's carbon footprint comes from this industry alone. A consensus between academia and industry dictates that the foreseen 1000× capacity gain must be achieved with either the present energy consumption or lower. Thanks to energy-efficiency efforts worldwide, energy consumption in the 5G realm, in terms of bits/joule, has been considered an important design parameter. In the 4th generation (4G), the concept of small cells was introduced to increase coverage and capacity. Accordingly, BIB001 conducted an analysis of energy consumption per unit area for a heterogeneous deployment of cells in fourth generation networks. With 5G, small cells are inevitable in deployments due to their advantage of improved traffic handling within a smaller area as well as the shorter cell ranges that result from the use of higher frequencies. Yet, the increasing number of base stations translates into more energy consumption, although the increase in consumption will not be linear. Small cells, or in other words densification, call for sophisticated management of resources. Most recently, intelligent resource allocation and control techniques utilizing machine learning algorithms have been suggested to help next generation radios in their autonomous reconfiguration for improving data rates, energy efficiency and interference mitigation. Overall, the emerging sophistication on both the User Equipment (UE) and network sides has increased energy consumption, and thus objective functions have been devised to maximize energy efficiency, harvested energy and energy-aware transmission BIB005. Many of the existing energy efficiency improvement techniques include the use of green energy sources for base stations, modifying the coverage area of a base station depending upon the load level, putting lightly loaded base stations to sleep and load balancing by handing over the UEs to the macro base station. A survey on these technologies for the 5G Radio Access Network (RAN) can be found in BIB002. This survey aims to contribute towards a greener and more sustainable telecommunications ecosystem by reviewing and bringing together some of the latest ideas and techniques for energy conservation at the base station and network levels. A high-level diagram of the areas addressed is shown in Figure 1. A few of the prominent examples include the introduction of a newer Radio Resource Control (RRC) state for context signalling and cutting down on the redundant state changes BIB007.
Advanced clustering and caching techniques on the RAN side have been widely appreciated for reducing the latency of delivering data requested by a group of users and for preventing the network from being clogged by a huge number of requests for the same content BIB008 BIB003. A case study of commercial resource sharing among different operators shows promising results in terms of reduced deployment costs and good data rates with minimal interference among the operators BIB006. The upcoming sections introduce the basics of energy efficiency, justify the need for gauging energy consumption and then present the most recent research works on optimization at different levels of the architecture. This survey is unique in its holistic approach to energy efficiency, covering the radio, core and computing sides of 5G. This paper also differs from existing surveys in the literature BIB004 BIB001 BIB005, as it focuses on works published in the last few years, the majority of which address concepts specific to the new 5G standard.
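As a rough illustration of why the caching techniques mentioned above relieve the network, the sketch below estimates the fraction of requests a small cell can serve locally when it caches the most popular items under a Zipf popularity profile; the Zipf assumption, the skew parameter and the catalogue size are generic placeholders rather than values from the cited studies.

```python
# Illustrative sketch: fraction of content requests served from a local cache
# when popularity follows a Zipf distribution and the top items are cached.
def zipf_hit_ratio(n_items, cache_size, alpha=0.8):
    """Probability that a request hits the cache, assuming Zipf(alpha) popularity
    over n_items and a cache holding the cache_size most popular items."""
    weights = [1.0 / (rank ** alpha) for rank in range(1, n_items + 1)]
    return sum(weights[:cache_size]) / sum(weights)


# Example: caching 5% of a 10,000-item catalogue
print(f"cache hit ratio: {zipf_hit_ratio(10_000, 500):.2f}")
```

Every hit avoids a backhaul retrieval and the associated re-transmissions, which is where the latency and energy benefits discussed above come from.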
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Background on Energy Efficiency <s> In this paper heterogeneous wireless cellular networks based on two-tier architecture consisting of macrocells and femtocells are considered. Methods of femtocells deployment and management are explored in order to determine their effects on performance of wireless cellular networks. Thus, network performance parameters are described and analytically calculated for different two-tier network architectures. A specific approach is presented in the paper, where calculations of the network performance parameters are supported with some of the results obtained using an appropriate simulation tool. In such a manner, energy efficiency of the considered two-tier network architectures is studied by introducing a number of so called green metrics. It is clearly shown that significant energy efficiency, as well as throughput, improvements can be achieved by adopting heterogeneous architecture for wireless cellular networks. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Background on Energy Efficiency <s> This paper studies interoperability concerns arising from coupling energy-aware radio resource and topology management techniques which are developed to minimise the energy consumption of current and future mobile broadband systems. This paper also proposes an Energy-aware Network Management middleware that harmonises the joint operation of energy-aware radio resource and topology management schemes enhancing the system QoS as well as energy efficiency. <s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Background on Energy Efficiency <s> Although the promising 5G cell network technology has increased the transmitting rate greatly, it has also brought some challenges. The energy efficiency has become an important topic in 5G networks. In this paper, the energy efficiency of small cell networks is analyzed, and the existing objective functions are classified in order to minimize the energy consumption, and to maximize the energy efficiency, harvested energy, and energy-aware transmission. Commonly used metrics were analyzed on equipment, base station, and network levels, respectively. Moreover, the methods for energy efficiency improvement were introduced according to above-mentioned metrics. Afterward, the relationships between energy efficiency, spectrum efficiency, and space efficiency were discussed. In order to improve efficiency on equipment, base station, and network levels, the energy and spectrum market is proposed and guidelines for the future research on metrics, methods, and market are presented. The proposed market was verified by simulations, and the simulation results have shown that the proposed market improves the energy efficiencies effectively. <s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Background on Energy Efficiency <s> Today many users with their smart mobile devices enjoy the benefits of broadband Internet services. This is primarily enabled by pushing computing, control, data storage and processing into the cloud. However, the cloud encounters growing limitations, such as reduced latency, high mobility, high scalability and real-time execution in order to meet the computing and intelligent networking demands for the next 5G mobile and wireless network. 
A new paradigm called Fog Computing and Networking, or briefly Fog has emerged to resolve these limits. Fog distributes computing, data processing, and networking services closer to the end users. It is an architecture where distributed edge and user devices collaborate with each other and with the clouds to carry out computing, control, networking, and data management tasks. Fog applied in 5G network can significantly improve network performance in terms of spectral and energy efficiency, enable direct device-to-device wireless communications, and support the growing trend of network function virtualization and separation of network control intelligence from radio network hardware. This paper evaluates the quality of cloud and fog computing and networking orchestrated services in 5G mobile and wireless network in terms of energy efficiency. <s> BIB004 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Background on Energy Efficiency <s> In next generation wireless networks along with the overwhelming demand of high data rate and network capacity, the user demands ubiquitous connectivity with the network. In order to fulfill the demand of anywhere at any time data services, the network operators have to install more and more base stations that eventually leads towards high power consumption. For this, the potential solution is derived from 5G network that proposes a heterogeneous environment of wireless access networks. More particularly, deployment of Femto and Pico cell under the umbrella of Macro cell base stations (BS). Such networking strategy will result high network capacity and energy efficiency along with better network coverage. In this article, an analysis of energy efficiency has been carried out by using two-tier and three tier network configurations. The simulation results demonstrate that rational deployment of small cells improves the energy efficiency of wireless network. <s> BIB005 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Background on Energy Efficiency <s> Energy efficiency is a major requirement for next generation mobile networks both as an end to reduce operational expenses and to increase the systems' ecological friendliness. Another integral part of 5G networks is the increased density of the deployment of small radius base stations, such as femtocells. Based on the design principle that demands a system to be active and transmitting only when and where it is needed, we evaluate the energy savings harvested when sleep mode techniques are enforced in dense femtocell deployments. We present our novel variations of sleep mode combined with hybrid access strategies and we estimate capacity and energy benefits. Our simulations show significant advantages in performance and energy efficiency. <s> BIB006 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Background on Energy Efficiency <s> As a new promising of higher data rates and to enable the Internet of Things (IoT), the thirst of energy efficiency in communication networks has become an important milestone in the design and operation. With the emergence of the 5G of wireless networks and the deployment of billions of base stations to the connected devices, the requirement for system design and energy efficiency management will become more attractive. 
In addition, in the next era of cellular, the energy efficiency is the most important requirement determined by the needs in reducing the carbon footprint of communications, and also in extending the life of the terminal battery. Nevertheless, the new challenge has emerged especially in the backbone of the networks. Therefore, the aim of this paper is to present the potential of 5G system to meet the increasing needs in devices and explosive capacity without causing any significant energy consumption based on functional split architecture particularly for 5G backhaul. <s> BIB007 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Background on Energy Efficiency <s> Cell switch-off (CSO) is an important approach to reducing energy consumption in cellular networks during off-peak periods. CSO addresses the research question of which cells to switch off when. Whereas online CSO, based on immediate user demands and channel states, is problematic to implement and difficult to model, off-line CSO is more practical and tractable. Furthermore, it is known that regular cell layouts generally provide the best coverage and spectral efficiency, which leads us to prefer regular static (off-line) CSO. We introduce sector-based regular CSO patterns for the first time. We organize the existing and newly introduced patterns using a systematic nomenclature; studying 26 patterns in total. We compare these patterns in terms of energy efficiency and the average number of users supported, via a combination of analysis and simulation. We also compare the performance of CSO with two benchmark algorithms. We show that the average number of users can be captured by one parameter. Moreover, we find that the distribution of the number of users is close to Gaussian, with a tractable variance. Our results demonstrate that several patterns that activate only one out of three sectors are particularly beneficial; such CSO patterns have not been studied before. <s> BIB008 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Background on Energy Efficiency <s> Given the rising concerns on carbon emission and increasing operating expense pressure, mobile network operators and device vendors are actively driving the energy-efficient network evolution. Energy efficiency (EE) has been determined as one of the key objectives of the 5G system. To realise sustainable 5G, various new technologies have been proposed to reduce conventional energy consumption. Meanwhile, green energy sources are explored to reduce the dependence on conventional energy. This study makes a survey on recent academia and industry research on the EE of the 5G system from perspectives of radio resource management, architecture and deployment paradigm, green energy harvesting and smart grid integration. Typical green 5G enabling technologies are presented and discussed comprehensively. Moreover, the latest progress on EE in 3GPP is also investigated. Given the broad research areas, only those critical open issues and challenges are presented to inspire further investigations. Finally, the authors identify several research directions as the way forward to realise the green 5G system. <s> BIB009
A formal relationship between energy efficiency and Signal to Interference plus Noise Ratio (SINR) has been presented in the literature using the bit/joule notion. Meanwhile, Reference BIB003 lays the foundation for energy efficiency in different parts of the network, including base stations and the core network. In the literature, energy saving and the use of green energy resources have been the two mainstream approaches to offer energy efficiency. Among the energy saving techniques, cell switch-off techniques have been widely exploited. For instance, in the EU FP7 ABSOLUTE project, an energy-aware middleware has been proposed that uses capacity-based thresholds for the activation of base stations BIB002. In several other studies, data offloading has been considered as an energy-efficient approach. Furthermore, the authors in BIB009 have put together several techniques, not only for reducing the energy consumption from traditional energy sources but also surveying newer Energy Efficiency (EE) schemes in the End-to-End (E2E) system. One notable proposal is the implementation of a 3rd Generation Partnership Project (3GPP) compliant EE manager that would be responsible for monitoring energy demands in an E2E session and for implementing the policies needed to cater to the ongoing energy demand. In addition to energy saving approaches, simultaneous wireless energy transfer has recently been studied. Furthermore, local caching techniques have proved to be beneficial for relieving the load on the backhaul network by storing content locally and limiting re-transmissions, hence reducing energy consumption. Similarly, a cloud-based RAN has been envisioned as a possible solution for computational redistribution in BIB003. Many of the tasks previously performed by a base station (BS) would be moved to a data center, and only decision making for Radio Frequency (RF) chains as well as baseband-to-RF conversion would be left to the base stations. Traffic patterns and demands would then be catered for ahead of time, and redundant BSs would be put into sleep mode according to BIB004. Furthermore, full duplex Device-to-Device (D2D) communication with uplink channel reuse has been considered to improve SINR and transmission power constraints. A 36% gain in energy efficiency has been demonstrated using the full duplex scheme with an enhanced self-interference mitigation mechanism instead of half duplex [14]. As machine learning penetrates deeper into the operation of wireless networks, Reference suggests that machine learning algorithms would greatly help to predict hot spots so that other resources can be switched off when not needed. Although energy efficiency is treated as a key performance indicator in the upcoming 5G standard, it remains a global ambition that cannot be declared as a specific actionable item on either the operator or vendor side. A divide-and-conquer approach has been applied to the entire network, and improvements have been targeted at the component, equipment or network level, employing newer algorithms on both the BS and UE sides. This discussion suggests that operators would have the leverage to tune their networks for a balance between quality of service and energy consumption. In the following sections, we introduce recent works on energy efficiency in 5G, as highlighted in Table 1, before discussing open issues and challenges.
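To make the bit/joule notion concrete, the sketch below evaluates a commonly used link-level energy-efficiency expression: the Shannon rate achieved at a given SINR divided by a linear base-station power model. Both the model and the numeric parameters (static power, power-amplifier efficiency) are generic textbook-style assumptions, not figures taken from the works cited above.

```python
# Illustrative sketch: energy efficiency in bits/joule from the Shannon rate and
# a linear base-station power model P_in = P_static + P_tx / pa_efficiency.
import math


def energy_efficiency(bandwidth_hz, sinr_linear, p_tx_w,
                      p_static_w=130.0, pa_efficiency=0.3):
    """Returns (rate_bps, consumed_power_w, ee_bits_per_joule).
    p_static_w and pa_efficiency are placeholder values for a macro BS."""
    rate = bandwidth_hz * math.log2(1.0 + sinr_linear)   # achievable rate (bit/s)
    power = p_static_w + p_tx_w / pa_efficiency          # consumed power (W)
    return rate, power, rate / power


# Example: 20 MHz carrier, 10 dB SINR, 20 W transmit power
rate, power, ee = energy_efficiency(20e6, 10 ** (10 / 10), 20.0)
print(f"rate = {rate / 1e6:.1f} Mbps, power = {power:.0f} W, EE = {ee / 1e3:.1f} kbit/J")
```

Expressions of this form make explicit why reducing static and computation power can matter as much as reducing transmit power: the denominator is dominated by terms that do not scale with traffic.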
EE improvement by a Centralized BB processing design BIB007
Analytical modelling of EE for a heterogeneous network BIB005
Energy Efficiency Metrics for Heterogeneous Wireless Cellular Networks BIB001
Incentive based sleeping mechanism for densely deployed femto cells BIB006
Sector based switching technique BIB008
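The sleeping and switching entries in the list above all revolve around deactivating lightly loaded cells. The sketch below shows a deliberately simplified load-threshold heuristic of that kind; the threshold, the offloading check and the data structures are illustrative assumptions and do not reproduce any of the cited schemes.

```python
# Illustrative sketch: a simplified capacity-threshold heuristic that puts lightly
# loaded small cells to sleep when the umbrella macro cell can absorb their traffic.
def plan_sleep(small_cells, macro_load, macro_capacity, sleep_threshold=0.2):
    """small_cells: dict cell_id -> (load_mbps, capacity_mbps). Returns the cells
    to put to sleep, greedily offloading their traffic to the macro cell."""
    to_sleep = []
    for cell_id, (load, capacity) in sorted(small_cells.items(),
                                            key=lambda kv: kv[1][0]):
        if load / capacity < sleep_threshold and macro_load + load <= macro_capacity:
            to_sleep.append(cell_id)
            macro_load += load   # the macro cell absorbs the offloaded traffic
    return to_sleep


cells = {"SC1": (5.0, 100.0), "SC2": (35.0, 100.0), "SC3": (12.0, 100.0)}
print(plan_sleep(cells, macro_load=200.0, macro_capacity=300.0))
# SC1 (5% load) and SC3 (12% load) are put to sleep; SC2 stays active.
```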
A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> As a promising downlink multiple access scheme for future radio access (FRA), this paper discusses the concept and practical considerations of non-orthogonal multiple access (NOMA) with a successive interference canceller (SIC) at the receiver side. The goal is to clarify the benefits of NOMA over orthogonal multiple access (OMA) such as OFDMA adopted by Long-Term Evolution (LTE). Practical considerations of NOMA, such as multi-user power allocation, signalling overhead, SIC error propagation, performance in high mobility scenarios, and combination with multiple input multiple output (MIMO) are discussed. Using computer simulations, we provide system-level performance of NOMA taking into account practical aspects of the cellular system and some of the key parameters and functionalities of the LTE radio interface such as adaptive modulation and coding (AMC) and frequency-domain scheduling. We show under multiple configurations that the system-level performance achieved by NOMA is higher by more than 30% compared to OMA. <s> BIB001 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Dynamic adaptation of the base stations on/off activity or transmit power, according to space and time traffic variations, are measures accepted in the most contemporary resource management approaches dedicated to improving energy efficiency of cellular access networks. Practical implementation of both measures results in changes to instantaneous base station power consumption. In this paper, extensive analyses presenting influence of the transmit power scaling and on/off switching on instantaneous macro base stations power consumption are given. Based on real on-site measurements performed on a set of macro base stations of different access technologies and production years, we developed linear power consumption models. These models are developed by means of linear regression and precisely model the influence of transmit power on instantaneous power consumption for the second, third and fourth generations of macro base stations. In order to estimate the potential energy savings of transmit power scaling and on/off switching for base stations of different generations, statistical analyses of measured power consumptions are performed. Also, transient times and variations of base stations instantaneous power consumption during transient periods initiated with on/off switching and transmit power scaling are presented. Since the developed power consumption models have huge confidence follow measured results, they can be used as general models for expressing the relationship between transmitted and consumed power for macro base stations of different technologies and generations. <s> BIB002 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Mobility, resource constraints and unreliable wireless links of mobile P2P networks will cause high data access latency and the communication overhead. Cooperative caching is widely seen as an effective solution to improve the overall system performance in mobile P2P networks. In this paper we present a novel cooperative caching scheme for mobile P2P networks. 
In our scheme the caching space of each node is divided into three parts: locale caching, cooperative caching and path caching, which respectively store the requested data objects of the nodes, the hot data objects in the networks and the data objects path. We also put forward the cache replacement strategy according to our scheme. Proposed cache replacement strategy not only takes into account the need of the nodes, but also pays attention to collaborative work between nodes. We evaluate the performance of our scheme by using NS-2. The experimental results show that the cache hit ratio is effectively increased and the average hops count is reduced. <s> BIB003 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> This paper focuses on energy efficiency aspects and related benefits of radio-access-network-as-a-service (RANaaS) implementation (using commodity hardware) as architectural evolution of LTE-advanced networks toward 5G infrastructure. RANaaS is a novel concept introduced recently, which enables the partial centralization of RAN functionalities depending on the actual needs as well as on network characteristics. In the view of future definition of 5G systems, this cloud-based design is an important solution in terms of efficient usage of network resources. The aim of this paper is to give a vision of the advantages of the RANaaS, to present its benefits in terms of energy efficiency and to propose a consistent system-level power model as a reference for assessing innovative functionalities toward 5G systems. The incremental benefits through the years are also discussed in perspective, by considering technological evolution of IT platforms and the increasing matching between their capabilities and the need for progressive virtualization of RAN functionalities. The description is complemented by an exemplary evaluation in terms of energy efficiency, analyzing the achievable gains associated with the RANaaS paradigm. <s> BIB004 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> This monograph presents a unified framework for energy efficiency maximization in wireless networks via fractional programming theory. The definition of energy efficiency is introduced, with reference to single-user and multi-user wireless networks, and it is observed how the problem of resource allocation for energy efficiency optimization is naturally cast as a fractional program. An extensive review of the state-of-the-art in energy efficiency optimization by fractional programming is provided, with reference to centralized and distributed resource allocation schemes. A solid background on fractional programming theory is provided. The key-notion of generalized concavity is presented and its strong connection with fractional functions described. A taxonomy of fractional problems is introduced, and for each class of fractional problem, general solution algorithms are described, discussing their complexity and convergence properties. The described theoretical and algorithmic framework is applied to solve energy efficiency maximization problems in practical wireless networks. 
A general system and signal model is developed which encompasses many relevant special cases, such as one-hop and two-hop heterogeneous networks, multi-cell networks, small-cell networks, device-to-device systems, cognitive radio systems, and hardware-impaired networks, wherein multiple-antennas and multiple subcarriers are possibly employed. Energy-efficient resource allocation algorithms are developed, considering both centralized, cooperative schemes, as well as distributed approaches for self-organizing networks. Finally, some remarks on future lines of research are given, stating some open problems that remain to be studied. It is shown how the described framework is general enough to be extended in these directions, proving useful in tackling future challenges that may arise in the design of energy-efficient future wireless networks. <s> BIB005 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Recent trend of network communication is leading towards the innovation of high speed wireless broadband technology. The scheduling of real-time traffic in certain network will give high impact on the system, so the most efficient scheduling is crucial. This paper proposes an energy-efficient resource allocation scheduler with QoS aware support for LTE network. The ultimate aim is to promote and achieve the green wireless LTE network and environmental friendly. Some related works on green LTE networks are also being discussed. <s> BIB006 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Traditional wireless networks mainly rely on macro cell deployments, meanwhile with the advances in forth generation networks, the recent architectures of LTE and LTE-A support Heterogeneous Networks (HetNets) that employ a mix of macro and small cells. Small cells aim at increasing coverage and capacity. Coverage both at cell edges and indoor environments can be significantly improved by relays and small cells. Capacity is inherently limited because of the limited spectrum, and although 4G wireless networks have been able to provide a considerable amount of increase in capacity, it has always been challenging to keep up with the growing user demands. In particular, the high volume of traffic resulting from video uploads or downloads is the major reason for the ever growing user demand. In the Internet, content caching at locations closer to the users have been a successful approach to enhance resource utilization. Very recently, content caching within the wireless network has been considered for 4G networks. In this paper, we propose an Integer Linear Programming (ILP)-based energy-efficient content placement approach for small cells. The proposed model, namely minimize Uplink Power and Caching Power (minUPCA), jointly minimizes uplink and caching powers. We compare the performance of minUPCA with a scheme that only aims to minimize uplink power. Our results show that minUPCA provides a compromise between the uplink energy budget of the User Equipment (UE) and the caching energy budget of the Small Cell Base Station (SCBS). <s> BIB007 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> In this paper we evaluate the energy efficiency of a 5G radio access network (RAN) based on LTE technology when comparing two small cell deployment strategies to enhance the RAN capacity. 
Specifically, we compare densifying a 3-sector macrocell RAN with small cells against first upgrading to a 6-sector macrocell RAN before densifying with small cells. The latter strategy has been used in urban areas by 4G network operators. The energy consumption gain (ECG) is used as a figure of merit in this paper. The radio base station power consumption is estimated by using a realistic power consumption model. Our results show that deploying a small cell overlay in a 3-sector macrocell RAN is more energy efficient than deploying a small cell overlay in a 6-sector macrocell RAN even though the latter uses fewer small cells. Further energy savings can be achieved by implementing an adaptive sectorisation technique. An energy saving of 25% is achieved for 6-sectors when progressively decreasing the number of active sectors from 6 to 1 in accordance with the temporal average traffic load. Irrespective, the 3-sector option with or without incorporating the adaptive sectorisation technique is always more energy efficient. <s> BIB008 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> A number of merits could be brought by network function virtualization (NFV) such as scalability, on demand allocation of resources, and the efficient utilization of network resources. In this paper, we introduce a framework for designing an energy efficient architecture for 5G mobile network function virtualization. In the proposed architecture, the main functionalities of the mobile core network which include the packet gateway (P-GW), serving gateway (S-GW), mobility management entity (MME), policy control and charging role function, and the home subscriber server (HSS) functions are virtualized and provisioned on demand. We also virtualize the functions of the base band unit (BBU) of the evolved node B (eNB) and offload them from the mobile radio side. We leverage the capabilities of gigabit passive optical networks (GPON) as the radio access technology to connect the remote radio head (RRH) to new virtualized BBUs. We consider the IP/WDM backbone network and the GPON based access network as the hosts of virtual machines (VMs) where network functions will be implemented. Two cases were investigated; in the first case, we considered virtualization in the IP/WDM network only (since the core network is typically the location that supports virtualization) and in the second case we considered virtualization in both the IP/WDM and GPON access network. Our results indicate that we can achieve energy savings of 22% on average with virtualization in both the IP/WDM network and GPON access network compared to the case where virtualization is only done in the IP/WDM network. <s> BIB009 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> In this paper, we investigate the interference management problem in a full-duplex cellular network from a spectrum resource allocation perspective. In order to maximize the full-duplex network throughput, we propose an interference area based resource allocation algorithm, which can pair the downlink UE and uplink UE with limited mutual interference. The simulation results verify the efficiency of the proposed interference area based resource allocation algorithm in the investigated full-duplex cellular network. 
<s> BIB010 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> The emerging 5G wireless networks will pose extreme requirements such as high throughput and low latency. Caching as a promising technology can effectively decrease latency and provide customized services based on group user behaviour (GUB). In this paper, we carry out the energy efficiency analysis in cache-enabled hyper cellular networks (HCNs), where the macro cells and small cells (SCs) are deployed heterogeneously with the control and user plane (C/U) split. Benefiting from the assistance of macro cells, a novel access scheme is proposed according to both user interest and fairness of service, where the SCs can turn into semi-sleep mode. Expressions of coverage probability, throughput and energy efficiency (EE) are derived analytically as functions of key parameters, including the cache ability, search radius and backhaul limitation. Numerical results show that the proposed scheme in HCNs can increase the network coverage probability by more than 200% compared with single-tier networks. The network EE can be improved by 54% compared with the nearest access scheme, with a larger search radius and higher SC cache capacity under lower traffic load. Our performance study provides insights into the efficient use of cache in 5G software defined networking (SDN). <s> BIB011 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Wireless networks have made huge progress over the past three decades. Nevertheless, emerging fifth-generation (5G) networks are under pressure to continue in this direction at an even more rapid pace, at least for the next ten to 20 years. This pressure is exercised by rigid requirements as well as emerging technology trends that are aimed at introducing improvements to the 5G wireless world. <s> BIB012 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> In this paper, we study the joint resource allocation algorithm design for downlink and uplink multicarrier transmission assisted by a shared user equipment (UE)-side distributed antenna system (SUDAS). The proposed SUDAS simultaneously utilizes licensed frequency bands and unlicensed frequency bands (e.g., millimeter wave bands) to enable a spatial multiplexing gain for single-antenna UEs to improve energy efficiency and system throughput of 5th generation (5G) outdoor-to-indoor communication. The design of the UE selection, the time allocation to uplink and downlink, and the transceiver processing matrix is formulated as a non-convex optimization problem for the maximization of the end-to-end system energy efficiency (bits/Joule). The proposed problem formulation takes into account minimum data rate requirements for delay-sensitive UEs and the circuit power consumption of all transceivers. In order to design a tractable resource allocation algorithm, we first show that the optimal transmitter precoding and receiver post-processing matrices jointly diagonalize the end-to-end communication channel for both downlink and uplink communication via SUDAS. Subsequently, the matrix optimization problem is converted to an equivalent scalar optimization problem for multiple parallel channels, which is solved by an asymptotically globally optimal iterative algorithm.
Besides, we propose a suboptimal algorithm which finds a locally optimal solution of the non-convex optimization problem. Simulation results illustrate that the proposed resource allocation algorithms for SUDAS achieve a significant performance gain in terms of system energy efficiency and spectral efficiency compared to conventional baseline systems by offering multiple parallel data streams for single-antenna UEs. <s> BIB013 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> 5G wireless technology is paving the way to revolutionize future ubiquitous and pervasive networking, wireless applications, and user quality of experience. To realize its potential, 5G must provide considerably higher network capacity, enable massive device connectivity with reduced latency and cost, and achieve considerable energy savings compared to existing wireless technologies. The main objective of this article is to explore the potential of NFV in enhancing 5G radio access networks' functional, architectural, and commercial viability, including increased automation, operational agility, and reduced capital expenditure. The ETSI NFV Industry Specification Group has recently published drafts focused on standardization and implementation of NFV. Harnessing the potential of 5G and network functions virtualization, we discuss how NFV can address critical 5G design challenges through service abstraction and virtualized computing, storage, and network resources. We describe NFV implementation with network overlay and SDN technologies. In our discussion, we cover the first steps in understanding the role of NFV in implementing CoMP, D2D communication, and ultra densified networks. <s> BIB014 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Several critical benefits are encompassed by the concept of NFV when this concept is brought under the roof of 5G such as scalability, high level of flexibility, efficient utilisation of network resources, cost and power reduction, and on demand allocation of network resources. NFV could reduce the cost for installing and maintaining network equipment through consolidating the hardware resources. By deploying NFV, network resources could be shared between different users and several network functions in a facile and flexible way. Beside this the network resources could be rescaled and allocated to each function in the network. As a result, the NFV can be customised according the precise demands, so that all the network components and users could be handled and accommodated efficiently. In this paper we extend the virtualization framework that was introduced in our previous work to include a large range of virtual machine workloads with the presence of mobile core network virtual machine intra communication. In addition, we investigate a wide range of traffic reduction factors which are caused by base band virtual machines (BBUVM) and their effect on the power consumption. We used two general scenarios to group our finding, the first one is virtualization in both IP over WDM (core network) and GPON (access network) while the second one is only in IP over WDM network (core network). We illustrate that the virtualization in IP over WDM and GPON can achieve power saving around (16.5% – 19.5%) for all cases compared to the case where no NFV is deployed, while the virtualization in IP over WDM records around (13.5% – 16.5%). 
<s> BIB015 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Network Function Virtualization (NFV) enables mobile operators to virtualize their network entities as Virtualized Network Functions (VNFs), offering fine-grained on-demand network capabilities. VNFs can be dynamically scale-in/out to meet the performance desire and other dynamic behaviors. However, designing the auto-scaling algorithm for desired characteristics with low operation cost and low latency, while considering the existing capacity of legacy network equipment, is not a trivial task. In this paper, we propose a VNF Dynamic Auto Scaling Algorithm (DASA) considering the tradeoff between performance and operation cost. We develop an analytical model to quantify the tradeoff and validate the analysis through extensive simulations. The results show that the DASA can significantly reduce operation cost given the latency upper-bound. Moreover, the models provide a quick way to evaluate the cost- performance tradeoff and system design without wide deployment, which can save cost and time. <s> BIB016 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> In cloud computing paradigm, virtual resource autoscaling approaches have been intensively studied recent years. Those approaches dynamically scale in/out virtual resources to adjust system performance for saving operation cost. However, designing the autoscaling algorithm for desired performance with limited budget, while considering the existing capacity of legacy network equipment, is not a trivial task. In this paper, we propose a Deadline and Budget Constrained Autoscaling (DBCA) algorithm for addressing the budget-performance tradeoff. We develop an analytical model to quantify the tradeoff and cross-validate the model by extensive simulations. The results show that the DBCA can significantly improve system performance given the budget upper-bound. In addition, the model provides a quick way to evaluate the budget-performance tradeoff and system design without wide deployment, saving on cost and time. <s> BIB017 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Ultra-dense networks can further improve the spectrum efficiency (SE) and the energy efficiency (EE). However, the interference avoidance and the green design are becoming more complex due to the intrinsic densification and scalability. It is known that the much denser small cells are deployed, the more cooperation opportunities exist among them. In this paper, we characterize the cooperative behaviors in the Nash bargaining cooperative game-theoretic framework, where we maximize the EE performance with a certain sacrifice of SE performance. We first analyze the relationship between the EE and the SE, based on which we formulate the Nash-product EE maximization problem. We achieve the closed-form sub-optimal SE equilibria to maximize the EE performance with and without the minimum SE constraints. We finally propose a CE2MG algorithm, and numerical results verify the improved EE and fairness of the presented CE2MG algorithm compared with the non-cooperative scheme. <s> BIB018 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Wireless cellular networks have seen dramatic growth in number of mobile users. 
As a result, data requirements, and hence the base-station power consumption has increased significantly. It in turn adds to the operational expenditures and also causes global warming. The base station power consumption in long-term evolution (LTE) has, therefore, become a major challenge for vendors to stay green and profitable in competitive cellular industry. It necessitates novel methods to devise energy efficient communication in LTE. Importance of the topic has attracted huge research interests worldwide. Energy saving (ES) approaches proposed in the literature can be broadly classified in categories of energy efficient resource allocation, load balancing, carrier aggregation, and bandwidth expansion. Each of these methods has its own pros and cons leading to a tradeoff between ES and other performance metrics resulting into open research questions. This paper discusses various ES techniques for the LTE systems and critically analyses their usability through a comprehensive comparative study. <s> BIB019 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> In this paper, device-to-device (D2D) communication and small cell technology are introduced into cellular networks to form three layers of heterogeneous network (HetNet). The resource allocation problem of D2D users and small cellular users (SCUEs) is studied in this network, and a resource allocation method under satisfying the communication quality of macro cellular users, D2D users and SCUEs is proposed. Firstly, in order to reduce the computational complexity, regional restrictions on macro base station and users are conducted; Then, in order to improve the system throughput, a resource allocation method based on interference control is proposed. The simulation results show that the proposed method can effectively reduce the computational complexity and improve the overall system throughput. <s> BIB020 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> The Orthogonal Frequency Division Multiplexing (OFDM) has been widely used in the next generation networks. With the increasing of the wireless equipment, the problem of energy consumption for the wireless network has become a big challenge. Power control is the key of the network management, while power allocations and channel assignments have been investigated for maximizing energy efficiency in each cell in the OFDM-based cellular network. The optimal problem of maximizing energy efficiency of networks has been formulated as a non-linear fractional program. The dual decomposition and sub-gradient iteration have been used to solve it. Furthermore, a numerical simulation has been proposed to verify the algorithm proposed in this paper. The simulation results show that the maximum energy efficiency in each cell can be obtained. <s> BIB021 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Content caching is an efficient technique to reduce delivery latency and system congestion during peak-traffic times by bringing data closer to end users. Existing works consider caching only at higher layers separated from physical layer. In this paper, we study wireless caching networks by taking into account cache capability when designing the signal transmission. 
In particular, we investigate multi-layer caching and their performance in edge-caching wireless networks where both base station (BS) and users are capable of storing content data in their local cache. Two notable uncoded and coded caching strategies are studied. Firstly, we propose a coded caching strategy that is applied to arbitrary value of cache size. The required backhaul and access rates are given as a function of the BS and user cache size. Secondly, closed-form expressions for the system energy efficiency (EE) corresponding to the two caching methods are derived. Thirdly, the system EE is maximized via precoding vectors design and optimization while satisfying the user request rate. Finally, numerical results are presented to verify the effectiveness of the two caching methods. <s> BIB022 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Along with spectral efficiency (SE), energy efficiency (EE) is a key performance metric for the design of 5G and beyond 5G (B5G) wireless networks. At the same time, infrastructure sharing among multiple operators has also emerged as a new trend in wireless communication networks. This paper presents an optimization framework for EE and SE maximization in a network, where radio resources are shared among multiple operators. We define a heterogeneous service level agreement (SLA) framework for a shared network, in which the constraints of different operators are handled by two different multi-objective optimization approaches namely the utility profile and scalarization methods. Pareto-optimal solutions are obtained by merging these approaches with the theory of generalized fractional programming. The approach applies to both noise-limited and interference-limited systems, with single-carrier or multi-carrier transmission. Extensive numerical results illustrate the effect of the operator specific SLA requirements on the global spectral and EE. Three network scenarios are considered in the numerical results, each one corresponding to a different SLA, with different operator-specific EE and SE constraints. <s> BIB023 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> This paper focuses on resource allocation in energy-cooperation enabled two-tier heterogeneous networks (HetNets) with non-orthogonal multiple access (NOMA), where base stations (BSs) are powered by both renewable energy sources and the conventional grid. Each BS can serve multiple users at the same time and frequency band. To deal with the fluctuation of renewable energy harvesting, we consider that renewable energy can be shared between BSs via the smart grid. In such networks, user association and power control need to be re-designed, since existing approaches are based on OMA. Therefore, we formulate a problem to find the optimum user association and power control schemes for maximizing the energy efficiency of the overall network, under quality-of-service constraints. To deal with this problem, we first propose a distributed algorithm to provide the optimal user association solution for the fixed transmit power. Furthermore, a joint user association and power control optimization algorithm is developed to determine the traffic load in energy-cooperation enabled NOMA HetNets, which achieves much higher energy efficiency performance than existing schemes. 
Our simulation results demonstrate the effectiveness of the proposed algorithm, and show that NOMA can achieve higher energy efficiency performance than OMA in the considered networks. <s> BIB024 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Non-orthogonal multiple access (NOMA) has been recently considered as a promising multiple access technique for fifth generation (5G) mobile networks as an enabling technology to meet the demands of low latency, high reliability, massive connectivity, and high throughput. The two dominant types of NOMA are power-domain and code-domain. The key feature of power-domain NOMA is to allow different users to share the same time, frequency, and code, but with different power levels. In code-domain NOMA, different spread-spectrum codes are assigned to different users and are then multiplexed over the same time-frequency resources. This paper concentrates on power-domain NOMA. In power-domain NOMA, Successive Interference Cancellation (SIC) is employed at the receiver. In this paper, the optimum received uplink power levels using a SIC detector are determined analytically for any number of transmitters. The optimum uplink received power levels using the SIC decoder in NOMA strongly resemble the μ-law encoding used in pulse code modulation (PCM) speech companders. <s> BIB025 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Energy efficiency is likely to be the litmus test for the sustainability of upcoming 5G networks. Before the new generation of cellular networks is ready to roll out, their architecture designers are motivated to leverage the SDN technology for the sake of its offered flexibility, scalability, and programmability to achieve the 5G KPI of 10 times lower energy consumption. In this paper, we present Proofs-of-Concept of Energy Management and Monitoring Applications (EMMAs) in the context of three challenging, realistic case studies, along with an SDN/NFV-based MANO architecture to manage converged fronthaul/backhaul 5G transport networks. <s> BIB026 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Next-generation wireless networks are expected to support extremely high data rates and radically new applications, which require a new wireless radio technology paradigm. The challenge is that of assisting the radio in intelligent adaptive learning and decision making, so that the diverse requirements of next-generation wireless networks can be satisfied. Machine learning is one of the most promising artificial intelligence tools, conceived to support smart radio terminals. Future smart 5G mobile terminals are expected to autonomously access the most meritorious spectral bands with the aid of sophisticated spectral efficiency learning and inference, in order to control the transmission power, while relying on energy efficiency learning/inference and simultaneously adjusting the transmission protocols with the aid of quality of service learning/inference. Hence we briefly review the rudimentary concepts of machine learning and propose their employment in the compelling applications of 5G networks, including cognitive radios, massive MIMOs, femto/small cells, heterogeneous networks, smart grid, energy harvesting, device-to-device communications, and so on.
Our goal is to assist the readers in refining the motivation, problem formulation, and methodology of powerful machine learning algorithms in the context of future networks in order to tap into hitherto unexplored applications and services. <s> BIB027 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Spurred by both economic and environmental concerns, energy efficiency (EE) has now become one of the key pillars for the fifth generation (5G) mobile communication networks. To maximize the downlink EE of the 5G ultra-dense network (UDN), we formulate a constrained EE maximization problem and translate it into a convex representation based on fractional programming theory. To solve this problem, we first adopt a centralized algorithm to reach the optimum based on Dinkelbach’s procedure. To improve the efficiency and reduce the computational complexity, we further propose a distributed iterative resource allocation algorithm based on the alternating direction method of multipliers (ADMM). For the proposed distributed algorithm, the local and dual variables are updated by each base station (BS) in parallel and independently, and the global variables are updated through the coordination and information exchange among BSs. Moreover, as the noise may lead to imperfect information exchange among BSs, the global variables update may be subject to failure. To cope with this problem, we propose a robust distributed algorithm, for which the global variable is only updated when the information exchange is successful. We prove that this modified robust distributed algorithm converges to the optimal solution of the primal problem almost surely. Simulation results validate our proposed centralized and distributed algorithms. In particular, the proposed robust distributed algorithm can effectively eliminate the impact of noise and converge to the optimal value at the cost of a small increase in computational complexity. <s> BIB028 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Energy and spectral efficiencies are key metrics to assess the performance of networks and compare different configurations or techniques. There are many ways to define those metrics, and the performance indicators used in their calculation can also be measured in different ways. Using an LTE-A network, we measure different performance indicators and the metrics' outputs are compared. Modifying the transmitted output power, the bandwidth, and the number of base stations, different network configurations are also compared. As expected, the measurements show that increasing the bandwidth increases the throughput more than it increases the energy consumption. Results clearly show that using inappropriate indicators can be misleading. The power indicator should include all energy consumed and the throughput should be dependent on the traffic, taking into account the idle time of the network, if any. There is a need to include more performance indicators into the metrics, especially those related to quality of service. <s> BIB029 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> To avoid the latency of the RRC state transition procedure, a legacy network keeps the UE in the RRC CONNECTED state for a pre-defined time duration even without any traffic arrival.
However, this consumes the UE battery rapidly because the UE must monitor the PDCCH and send CQI feedback periodically. In this paper, we design the RRC connection control to enhance energy efficiency with moderate control signaling overhead. In the RRC INACTIVE state, newly introduced in NR, both the network and the UE save the UE context, including bearer configuration and security, even after the UE is released from the network. Owing to the saved UE context, the RRC state transition from RRC INACTIVE to RRC CONNECTED requires fewer CN signalling messages. Thus, the network can release the UE to RRC INACTIVE more aggressively with a shorter timer. Furthermore, we propose connectionless data transmission in RRC INACTIVE without an RRC state transition to RRC CONNECTED. In our performance analysis, UE energy consumption is reduced by 50% for the modem alone and by 18% for the total device including the display. Furthermore, when small data or background (keep-alive) traffic is transferred in RRC INACTIVE, the energy efficiency can be as much as doubled. <s> BIB030 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Using a network of cache-enabled small cells, traffic during peak hours can be reduced by proactively fetching the content that is most likely to be requested. In this paper, we aim to explore the impact of proactive caching on an important metric for future generation networks, namely, energy efficiency (EE). We argue that exploiting the spatial repartitions of users, in addition to the correlation in their content popularity profiles, can result in considerable improvement of the achievable EE. In this paper, the optimization of EE is decoupled into two related subproblems. The first one addresses the issue of content popularity modeling. While most existing works assume similar popularity profiles for all users, we consider an alternative framework in which users are clustered according to their popularity profiles. In order to showcase the utility of the proposed clustering, we use a statistical model selection criterion, namely the Akaike information criterion. Using stochastic geometry, we derive a closed-form expression of the achievable EE and we find the optimal active small cell density vector that maximizes it. The second subproblem investigates the impact of exploiting the spatial repartitions of users. After considering a snapshot of the network, we formulate a combinatorial problem that optimizes content placement in order to minimize the transmission power. Numerical results show that the clustering scheme considerably improves the cache hit probability and consequently the EE, compared with an unclustered approach. Simulations also show that the small base station allocation algorithm improves the energy efficiency and hit probability. <s> BIB031 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> In this paper, we study delay-aware cooperative online content caching with limited caching space and unknown content popularity in dense small cell wireless networks. We propose a Cooperative Online Content cAching algorithm (COCA) that decides in which BS the requested content should be cached with consideration of three important factors: the residual cache space in each small cell base station (SBS), the number of coordinated connections each SBS establishes with other SBSs, and the number of served users in the coverage area of each SBS.
In addition, due to limited storage space in the cache, the proposed COCA algorithm eliminates the least recently used (LRU) contents to free up the space. We compare the delay performance of the proposed COCA algorithm with the existing offline cooperative caching schemes through simulations. Simulation results demonstrate that the proposed COCA algorithm has a better delay performance than the existing offline algorithms. <s> BIB032 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> We propose and compare different potential placement schemes for baseband functions and mobile edge computing in terms of their energy efficiency. Simulation results show that NFV-enabled flexible placement reduces power consumption by more than 20% compared with traditional solutions. <s> BIB033 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> The current base station centric cellular network architecture hinders the implementation of effective sleep techniques, often resulting in energy-inefficient mobile networks. The efforts towards 5G and network densification, however, open new possibilities and may, at last, allow the integration of sleep modes without any QoS degradation. In this paper, we consider heterogeneous networks in which data and control planes are split and independent, referred to as SDHN. We present an energy consumption metric that can be used to evaluate the radio access power consumption and the associated energy efficiency of these networks. Compared with other metrics in the literature, the proposal accounts for both the coverage area as well as the traffic load, and it is relatively simple to use. The proposed metric is applied to evaluate the power consumption performance of an LTE SDHN in an urban indoor scenario. Results confirm that sleep modes in such architectures can effectively cut power consumption and improve energy efficiency while preserving QoS. <s> BIB034 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> Recently, Fog-RANs have been introduced as the evolution of Cloud Radio Access Networks (CRAN) for enabling edge computing in 5G systems. By alleviating the fronthaul burden for data transfer, transport delays are expected to be greatly reduced. However, in order to support envisioned 5G real-time and delay-sensitive applications, tailored radio resource and interference management schemes become necessary. Therefore, this paper investigates the issues of user scheduling and beamforming for an energy-efficient Fog-RAN. We formulate the energy efficiency maximization problem, taking into account the local user clustering constraint specific to Fog-RANs. Given the difficulty of this non-convex optimization problem, we propose a strategy where the energy-efficient user scheduling is split into two parts: first, we solve an equivalent sum-rate maximization problem, then the most energy-efficient FogAPs are activated in a greedy manner. To meet the requirement of low computational complexity of FogAPs, local beamforming is performed given fixed user scheduling. Simulation results show that the proposed scheme not only provides similar levels of user rates and fairness, but also largely outperforms the baseline scheme in terms of system energy efficiency.
<s> BIB035 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> NOMA has been recognized as a highly promising FRA technology to satisfy the requirements of the fifth generation era on high spectral efficiency and massive connectivity. Since the EE has become a growing concern in FRA from both the industrial and societal perspectives, this article discusses the sustainability issues of NOMA. We first thoroughly examine the theoretical power regions of NOMA to show the minimum transmission power with fixed data rate requirement, demonstrating the EE performance advantage of NOMA over orthogonal multiple access. Then we explore the role of energy-aware resource allocation and grant-free transmission in further enhancing the EE performance of NOMA. Based on this exploration, a hybrid NOMA strategy that reaps the joint benefits of resource allocation and grantfree transmission is investigated to simultaneously accomplish high throughput, large connectivity, and low energy cost. Finally, we identify some important and interesting future directions for NOMA designers to follow in the next decade. <s> BIB036 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> By analytically showing that index coding (IC) is more power efficient than superposition coding (SC) when appropriate caching contents are available for a pair of users, we propose a sub-optimal joint user clustering and power allocation scheme for a single-cell downlink non-orthogonal multiple access network with caching memory at the receivers that alternate between IC and SC. Simulation studies demonstrate that the proposed scheme significantly reduces the transmission power when compared with the benchmark scheme that only allows SC. <s> BIB037 </s> A Survey on Recent Trends and Open Issues in Energy Efficiency of 5G <s> Optimization Scope Problem Addressed Citation <s> In this paper, network function virtualization (NVF) is identified as a promising key technology that can contribute to energy-efficiency improvement in 5G networks. An optical network supported architecture is proposed and investigated in this work to provide the wired infrastructure needed in 5G networks and to support NFV towards an energy efficient 5G network. In this architecture the mobile core network functions as well as baseband function are virtualized and provided as VMs. The impact of the total number of active users in the network, backhaul/fronthaul configurations and VM inter-traffic are investigated. A mixed integer linear programming (MILP) optimization model is developed with the objective of minimizing the total power consumption by optimizing the VMs location and VMs servers’ utilization. The MILP model results show that virtualization can result in up to 38% (average 34%) energy saving. The results also reveal how the total number of active users affects the baseband virtual machines (BBUVMs) optimal distribution whilst the core network virtual machines (CNVMs) distribution is affected mainly by the inter-traffic between the VMs. For real-time implementation, two heuristics are developed, an Energy Efficient NFV without CNVMs inter-traffic (EENFVnoITr) heuristic and an Energy Efficient NFV with CNVMs inter-traffic (EENFVwithITr) heuristic, both produce comparable results to the optimal MILP results. Finally, a Genetic algorithm is developed for further verification of the results. <s> BIB038
Optimization scope, problem addressed, and citation:

On interdependence among transmit and consumed power of macro base station technologies BIB002
Utilization of Nash product for maximizing cooperative EE BIB018
Energy Efficiency in Wireless Networks via Fractional Programming Theory BIB005
Energy efficiency maximization oriented resource allocation in 5G ultra-dense network: Centralized and distributed algorithms BIB028
Comparison of Spectral and Energy Efficiency Metrics Using Measurements in a LTE-A Network BIB029
Energy Management in LTE Networks BIB019
Energy-efficient resource allocation scheduler with QoS aware supports for green LTE network BIB006
Interference-area-based resource allocation for full-duplex communications BIB010
A resource allocation method for D2D and small cellular users in HetNet BIB020
Highly Energy-Efficient Resource Allocation in Power Telecommunication Network BIB021
EE enhancement with RRC Connection Control for 5G New Radio (NR) BIB030
Proactive caching based on the content popularity on small cells BIB031
Cooperative Online Caching in Small Cell Networks with Limited Cache Size and Unknown Content Popularity BIB032
Economical Energy Efficiency: An Advanced Performance Metric for 5G Systems
Energy-efficient design for edge-caching wireless networks: When is coded-caching beneficial? BIB022
Content caching in small cells with optimized UL and caching power BIB007
An effective cooperative caching scheme for mobile P2P networks BIB003
EE analysis of heterogeneous cache enabled 5G hyper cellular networks BIB011
EE at the network level: Motivation for infrastructure sharing based on current energy consumption figures BIB012
Energy efficiency in 5G access networks: Small cell densification and high order sectorisation BIB008
EE at the network level: Energy-Efficient User Association and Beamforming for 5G Fog Radio Access Networks BIB035
Global energy and spectral efficiency maximization in a shared noise-limited environment BIB023
EE Resource Allocation in NOMA BIB024
Concept and practical considerations of non-orthogonal multiple access (NOMA) for future radio access BIB001
Optimum received power levels of UL NOMA signals for EE improvement BIB025
Spectral efficient nonorthogonal multiple access schemes (NOMA vs RAMA)
Non-Orthogonal Multiple Access: Achieving Sustainable Future Radio Access BIB036
Mode Selection Between Index Coding and Superposition Coding in Cache-based NOMA Networks BIB037
Use case of shared UE side distributed antenna System for indoor usage BIB013
Optimized Energy Aware 5G Network Function Virtualization BIB038
Energy Efficient Network Function Virtualization in 5G Networks BIB009
Network Function Virtualization in 5G BIB014
A Framework for Energy Efficient NFV in 5G Networks BIB015
Energy efficient Placement of Baseband Functions and Mobile Edge Computing in 5G Networks BIB033
Energy Efficiency Benefits of RAN-as-a-Service Concept for a Cloud-Based 5G Mobile Network Infrastructure BIB004
Dynamic Auto Scaling Algorithm (DASA) for 5G Mobile Networks BIB016
Design and Analysis of Deadline and Budget Constrained Autoscaling (DBCA) Algorithm for 5G Mobile Networks BIB017
EE using SDN technology: Impact of software defined networking (SDN) paradigm on EE BIB026
EE gains from the separated control and data planes in a heterogeneous network BIB034
EE using ML techniques: Machine Learning Paradigms for Next-Generation Wireless Networks BIB027
Switch-on/off policies for energy harvesting small cells through distributed Q-learning
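Several of the works summarized above (e.g., BIB005, BIB021, BIB023, BIB028) cast energy-efficiency maximization as a fractional program, i.e., a ratio of achievable rate to consumed power, and solve it with Dinkelbach-style iterations. The following minimal sketch illustrates the generic procedure for a single link; the bandwidth, channel gain, noise power, circuit power, and power budget are illustrative assumptions and are not parameters taken from any of the cited papers.

```python
import math

def dinkelbach_ee(B=10e6, g=1e-7, N0=1e-13, Pc=1.0, Pmax=10.0,
                  tol=1e-8, max_iter=100):
    """Maximize EE(p) = B*log2(1 + g*p/N0) / (Pc + p) over 0 <= p <= Pmax
    using Dinkelbach's procedure (all parameter values are illustrative)."""
    rate = lambda p: B * math.log2(1.0 + g * p / N0)
    lam = 0.0   # current estimate of the achievable EE (bit/Joule)
    p = Pmax
    for _ in range(max_iter):
        # Inner problem: argmax_p rate(p) - lam*(Pc + p); for a single link the
        # stationarity condition gives p = B/(lam*ln2) - N0/g, clipped to [0, Pmax].
        p = Pmax if lam <= 0.0 else min(max(B / (lam * math.log(2)) - N0 / g, 0.0), Pmax)
        new_lam = rate(p) / (Pc + p)   # Dinkelbach update of the EE estimate
        if abs(new_lam - lam) <= tol * new_lam:
            lam = new_lam
            break
        lam = new_lam
    return p, lam

if __name__ == "__main__":
    p_opt, ee_opt = dinkelbach_ee()
    print(f"optimal transmit power: {p_opt:.3f} W, EE: {ee_opt / 1e6:.2f} Mbit/Joule")
```

The same outer loop carries over to the multi-carrier and multi-cell formulations in the cited works; only the inner rate-minus-weighted-power subproblem changes.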
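For the uplink power-domain NOMA entries (e.g., BIB025, BIB036), the qualitative point that SIC-based reception calls for geometrically spaced received power levels can be shown with a short calculation. The sketch below assumes equal SINR targets for all users and perfect interference cancellation; it is the textbook feasibility condition, not the exact analysis of BIB025.

```python
def sic_min_received_powers(K=4, sinr_target_db=10.0, noise_power=1.0):
    """Minimum received power levels for K uplink NOMA users decoded by SIC
    (strongest first), all meeting the same SINR target under perfect SIC.
    Recursion: P_k = gamma * (N + sum of powers of users decoded after k),
    which yields the geometric profile P_k = gamma * N * (1 + gamma)^(K - k)."""
    gamma = 10 ** (sinr_target_db / 10.0)
    powers = []
    interference = 0.0                  # residual power of users decoded later
    for k in range(K, 0, -1):           # start from the last-decoded (weakest) user
        p_k = gamma * (noise_power + interference)
        powers.append(p_k)
        interference += p_k
    powers.reverse()                     # index 0 = first-decoded (strongest) user
    return powers

if __name__ == "__main__":
    for i, p in enumerate(sic_min_received_powers(), start=1):
        print(f"user {i}: minimum received power = {p:.1f} (noise units)")
```

The rapidly growing power levels of the earlier-decoded users are what make energy-aware power allocation, as surveyed above, central to sustainable NOMA designs.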
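On the caching side, BIB032 evicts least-recently-used content when a small cell's cache is full. The following minimal sketch shows only that LRU eviction step for a single small-cell cache; the class name and hit/miss bookkeeping are illustrative and do not reproduce the cooperative placement logic of COCA.

```python
from collections import OrderedDict

class SmallCellCache:
    """Minimal LRU content cache for a small-cell base station: on a miss the
    content is fetched over the backhaul and, if the cache is full, the least
    recently used item is evicted (illustrative sketch only)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()          # content_id -> cached flag

    def request(self, content_id):
        if content_id in self.store:        # cache hit: refresh recency
            self.store.move_to_end(content_id)
            return "hit"
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict least recently used content
        self.store[content_id] = True       # fetch over backhaul and cache it
        return "miss"

if __name__ == "__main__":
    cache = SmallCellCache(capacity=3)
    for cid in ["a", "b", "c", "a", "d", "b"]:
        print(cid, cache.request(cid))
```

In the surveyed schemes, each avoided backhaul fetch translates into lower transmission energy, which is how cache hit probability feeds into the EE expressions of BIB011, BIB022 and BIB031.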
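The last entry above refers to switch-on/off policies learned through distributed Q-learning for energy-harvesting small cells. As a rough illustration of how such a policy could be learned, the sketch below runs tabular Q-learning over a toy state space of battery and traffic levels; the environment dynamics, reward weights, and discretization are invented for illustration and are not taken from the cited work.

```python
import random

# States: (battery_level, traffic_level) bins. Actions: 0 = sleep, 1 = active.
BATTERY_LEVELS, TRAFFIC_LEVELS, ACTIONS = 5, 3, 2
ALPHA, DISCOUNT, EPSILON, EPISODE_STEPS = 0.1, 0.9, 0.1, 200

Q = {((b, t), a): 0.0 for b in range(BATTERY_LEVELS)
     for t in range(TRAFFIC_LEVELS) for a in range(ACTIONS)}

def step(state, action):
    """Toy environment: harvesting refills the battery, being active drains it
    and serves traffic; sleeping saves energy but leaves traffic unserved."""
    battery, traffic = state
    harvest = random.choice([0, 1])              # random energy arrival
    if action == 1 and battery > 0:              # active: serve traffic
        reward = 1.0 * traffic - 0.5             # utility minus energy cost
        battery = max(battery - 1, 0)
    else:                                        # sleeping (or battery depleted)
        reward = -0.5 * traffic                  # penalty for unserved load
    battery = min(battery + harvest, BATTERY_LEVELS - 1)
    traffic = random.randrange(TRAFFIC_LEVELS)   # next-slot traffic level
    return (battery, traffic), reward

def choose_action(state):
    if random.random() < EPSILON:                # epsilon-greedy exploration
        return random.randrange(ACTIONS)
    return max(range(ACTIONS), key=lambda a: Q[(state, a)])

def train(episodes=500):
    for _ in range(episodes):
        state = (BATTERY_LEVELS - 1, random.randrange(TRAFFIC_LEVELS))
        for _ in range(EPISODE_STEPS):
            action = choose_action(state)
            next_state, reward = step(state, action)
            best_next = max(Q[(next_state, a)] for a in range(ACTIONS))
            Q[(state, action)] += ALPHA * (reward + DISCOUNT * best_next
                                           - Q[(state, action)])
            state = next_state

if __name__ == "__main__":
    train()
    for t in range(TRAFFIC_LEVELS):   # learned policy for a mid-level battery
        a = max(range(ACTIONS), key=lambda x: Q[((2, t), x)])
        print(f"traffic level {t}: {'active' if a else 'sleep'}")
```

In a distributed deployment each small cell would maintain its own Q-table of this kind and learn when sleeping is worthwhile given its local harvesting and load statistics.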