Dataset preview. Each row pairs a query abstract (Query Text) with 13 ranked candidate abstracts (Ranking 1 to Ranking 13) and 14 float scores (score_0 to score_13). Column types and observed ranges:

Query Text   string   lengths 9 to 138k
Ranking 1    string   lengths 10 to 36.2k
Ranking 2    string   lengths 9 to 138k
Ranking 3    string   lengths 10 to 14.7k
Ranking 4    string   lengths 9 to 36.2k
Ranking 5    string   lengths 9 to 138k
Ranking 6    string   lengths 9 to 36.2k
Ranking 7    string   lengths 10 to 138k
Ranking 8    string   lengths 9 to 36.2k
Ranking 9    string   lengths 9 to 36.2k
Ranking 10   string   lengths 9 to 6.54k
Ranking 11   string   lengths 9 to 36.2k
Ranking 12   string   lengths 13 to 7.98k
Ranking 13   string   lengths 21 to 6.07k
score_0      float64  1 to 1.25
score_1      float64  0 to 0.25
score_2      float64  0 to 0.25
score_3      float64  0 to 0.25
score_4      float64  0 to 0.24
score_5      float64  0 to 0.24
score_6      float64  0 to 0.24
score_7      float64  0 to 0.21
score_8      float64  0 to 0.21
score_9      float64  0 to 0.21
score_10     float64  0 to 0.2
score_11     float64  0 to 0
score_12     float64  0 to 0
score_13     float64  0 to 0
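To make the layout concrete, here is a minimal sketch of how one row with this schema might be loaded and inspected in Python. The file name rankings.parquet is hypothetical, and pairing score_1 through score_13 with Ranking 1 through Ranking 13 (leaving score_0 for the query itself) is an assumption read off the column order, not something the preview states.

```python
# Hedged loading sketch; "rankings.parquet" is a hypothetical artifact name.
import pandas as pd

df = pd.read_parquet("rankings.parquet")

ranking_cols = [f"Ranking {i}" for i in range(1, 14)]
score_cols = [f"score_{i}" for i in range(14)]

row = df.iloc[0]
print("Query:", row["Query Text"][:80])
# Assumed alignment: score_i annotates Ranking i (score_0 left for the query).
for col, score_col in zip(ranking_cols, score_cols[1:]):
    print(f"{score_col} = {row[score_col]:.6f}  {row[col][:60]}")
```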
Row 1 query: Learning Vocabularies over a Fine Quantization A novel similarity measure for bag-of-words type large scale image retrieval is presented. The similarity function is learned in an unsupervised manner, requires no extra space over the standard bag-of-words method and is more discriminative than both L2-based soft assignment and Hamming embedding. The novel similarity function achieves mean average precision that is superior to any result published in the literature on the standard Oxford 5k, Oxford 105k and Paris datasets/protocols. We study the effect of a fine quantization and very large vocabularies (up to 64 million words) and show that the performance of specific object retrieval increases with the size of the vocabulary. This observation is in contradiction with previously published results. We further demonstrate that the large vocabularies increase the speed of the tf-idf scoring step.
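As an aside on the tf-idf scoring step this abstract accelerates, a toy inverted-index scorer is sketched below. This is the generic bag-of-words baseline, not the paper's learned similarity; the integer "visual words" stand in for quantized descriptors.

```python
# Toy inverted-index tf-idf scorer (baseline sketch, not the paper's method).
import math
from collections import Counter, defaultdict

def build_index(docs):
    index = defaultdict(list)                     # word -> [(doc_id, tf), ...]
    for doc_id, words in enumerate(docs):
        for w, tf in Counter(words).items():
            index[w].append((doc_id, tf))
    idf = {w: math.log(len(docs) / len(p)) for w, p in index.items()}
    return index, idf

def score(query, index, idf, n_docs):
    scores = [0.0] * n_docs
    for w, qtf in Counter(query).items():         # only query words touch the index
        for doc_id, tf in index.get(w, []):
            scores[doc_id] += qtf * tf * idf.get(w, 0.0) ** 2
    return scores

docs = [[1, 2, 2, 3], [2, 3, 4], [1, 1, 5]]       # toy "visual word" documents
index, idf = build_index(docs)
print(score([1, 3], index, idf, len(docs)))
```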
Spatial consistency of dense features within interest regions for efficient landmark recognition Recently, feature grouping has been proposed as a method for improving retrieval results for logos and web images. This relies on the idea that a group of features matching over a local region in an image is more discriminative than a single feature match. In this paper, we evolve this concept further and apply it to the more challenging task of landmark recognition. We propose a novel combination of dense sampling of SIFT features with interest regions which represent the more salient parts of the image in greater detail. In place of conventional dense sampling used in category recognition, which computes features on a regular grid at a number of fixed scales, we allow the sampling density and scale to vary based on the scale of the interest region. We develop new techniques for exploring stronger geometric constraints inside the feature groups and computing the match score. The spatial information is stored efficiently in an inverted index structure. The proposed approach considers part-based matching of interest regions instead of matching entire images using a histogram under bag-of-words. This helps reduce the influence of background clutter and works better under occlusion. Experiments reveal that directing more attention to the salient regions of the image and applying the proposed geometric constraints vastly improves recognition rates for reasonable vocabulary sizes.
Efficient Large-Scale Similarity Search Using Matrix Factorization We consider the image retrieval problem of finding the images in a dataset that are most similar to a query image. Our goal is to reduce the number of vector operations and memory for performing a search without sacrificing accuracy of the returned images. We adopt a group testing formulation and design the decoding architecture using either dictionary learning or eigendecomposition. The latter is a plausible option for small-to-medium sized problems with high-dimensional global image descriptors, whereas dictionary learning is applicable in large-scale scenarios. We evaluate our approach for global descriptors obtained from both SIFT and CNN features. Experiments with standard image search benchmarks, including the Yahoo100M dataset comprising 100 million images, show that our method gives accuracy comparable (and sometimes superior) to exhaustive search while requiring only 10% of the vector operations and memory. Moreover, for the same search complexity, our method gives significantly better accuracy compared to approaches based on dimensionality reduction or locality sensitive hashing.
Image retrieval with reciprocal and shared nearest neighbors Content-based image retrieval systems typically rely on a similarity measure between image vector representations, such as in bag-of-words, to rank the database images in decreasing order of expected relevance to the query. However, the inherent asymmetry of k-nearest neighborhoods is not properly accounted for by traditional similarity measures, possibly leading to a loss of retrieval accuracy. This paper addresses this issue by proposing similarity measures that use neighborhood information to assess the relationship between images. First, we extend previous work on k-reciprocal nearest neighbors to produce new measures that improve over the original primary metric. Second, we propose measures defined on sets of shared nearest neighbors for reranking the shortlist. Both these methods are simple, yet they significantly improve the accuracy of image search engines on standard benchmark datasets.
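A minimal sketch of the k-reciprocal test this abstract builds on, assuming a precomputed pairwise distance matrix; the paper's full measures go further than this mutual-membership check.

```python
# k-reciprocal nearest neighbours: i and j are k-reciprocal when each appears
# in the other's k-nearest list (a sketch, not the paper's full measure).
import numpy as np

def k_reciprocal(dist, k):
    nn = np.argsort(dist, axis=1)[:, 1:k + 1]   # skip self at rank 0
    in_topk = np.zeros_like(dist, dtype=bool)
    rows = np.arange(dist.shape[0])[:, None]
    in_topk[rows, nn] = True
    return in_topk & in_topk.T                  # mutual membership

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 3))
d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
print(k_reciprocal(d, k=2))
```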
Fine-Grained Image Search Large-scale image search has been attracting lots of attention from both academic and commercial fields. The conventional bag-of-visual-words (BoVW) model with inverted index is verified efficient at retrieving near-duplicate images, but it is less capable of discovering fine-grained concepts in the query and returning semantically matched search results. In this paper, we suggest that instance search should return not only near-duplicate images, but also fine-grained results, which is usually the actual intention of a user. We propose a new and interesting problem named fine-grained image search, which means that we prefer those images containing the same fine-grained concept with the query. We formulate the problem by constructing a hierarchical database and defining an evaluation method. We thereafter introduce a baseline system using fine-grained classification scores to represent and co-index images so that the semantic attributes are better incorporated in the online querying stage. Large-scale experiments reveal that promising search results are achieved with reasonable time and memory consumption. We hope this paper will be the foundation for future work on image search. We also expect more follow-up efforts along this research topic and look forward to commercial fine-grained image search engines.
BSIFT: toward data-independent codebook for large scale image search. The Bag-of-Words (BoW) model based on the Scale Invariant Feature Transform (SIFT) has been widely used in large-scale image retrieval applications. Feature quantization by vector quantization plays a crucial role in the BoW model: it generates visual words from the high-dimensional SIFT features so as to adapt to the inverted file structure for scalable retrieval. Traditional feature quantization approaches suffer from several issues, such as the necessity of visual codebook training, limited reliability, and update inefficiency. To avoid these problems, in this paper a novel feature quantization scheme is proposed that efficiently quantizes each SIFT descriptor to a descriptive and discriminative bit-vector, called binary SIFT (BSIFT). Our quantizer is independent of image collections. In addition, by taking the first 32 bits of BSIFT as the code word, the generated BSIFT naturally lends itself to the classic inverted file structure for image indexing. Moreover, the quantization error is reduced by feature filtering, code word expansion, and query sensitive mask shielding. Without any explicit codebook for quantization, our approach can be readily applied to image search in resource-limited scenarios. We evaluate the proposed algorithm for large scale image search on two public image data sets. Experimental results demonstrate the index efficiency and retrieval accuracy of our approach.
Total recall II: Query expansion revisited Most effective particular object and image retrieval approaches are based on the bag-of-words (BoW) model. All state-of-the-art retrieval results have been achieved by methods that include a query expansion that brings a significant boost in performance. We introduce three extensions to automatic query expansion: (i) a method capable of preventing tf-idf failure caused by the presence of sets of correlated features (confusers), (ii) an improved spatial verification and re-ranking step that incrementally builds a statistical model of the query object and (iii) we learn relevant spatial context to boost retrieval performance. The three improvements of query expansion were evaluated on standard Paris and Oxford datasets according to a standard protocol, and state-of-the-art results were achieved.
Improving image-based localization by active correspondence search We propose a powerful pipeline for determining the pose of a query image relative to a point cloud reconstruction of a large scene consisting of more than one million 3D points. The key component of our approach is an efficient and effective search method to establish matches between image features and scene points needed for pose estimation. Our main contribution is a framework for actively searching for additional matches, based on both 2D-to-3D and 3D-to-2D search. A unified formulation of search in both directions allows us to exploit the distinct advantages of both strategies, while avoiding their weaknesses. Due to active search, the resulting pipeline is able to close the gap in registration performance observed between efficient search methods and approaches that are allowed to run for multiple seconds, without sacrificing run-time efficiency. Our method achieves the best registration performance published so far on three standard benchmark datasets, with run-times comparable or superior to the fastest state-of-the-art methods.
Topology preserving hashing for similarity search Binary hashing has been widely used for efficient similarity search. Learning efficient codes has become a research focus and remains a challenge. In many cases, real-world data lies on a low-dimensional manifold, which should be taken into account to capture meaningful neighbors with hashing. The importance of a manifold is its topology, which represents the neighborhood relationships between its subregions and the relative proximities between the neighbors of each subregion, e.g. the relative ranking of neighbors of each subregion. Most existing hashing methods try to preserve the neighborhood relationships by mapping similar points to close codes, while ignoring the neighborhood rankings. Moreover, most hashing methods fall short of providing a good ranking for query results since they use Hamming distance as the similarity metric, and in practice many results often share the same distance to a query. In this paper, we propose a novel hashing method to solve these two issues jointly. The proposed method is referred to as Topology Preserving Hashing (TPH). TPH is distinct from prior work in that it preserves the neighborhood rankings of data points in Hamming space. The learning stage of TPH is formulated as a generalized eigendecomposition problem with closed form solutions. Experimental comparisons with other state-of-the-art methods on three noted image benchmarks demonstrate the efficacy of the proposed method.
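For context, plain Hamming-distance ranking over packed binary codes takes only a few lines, and running it on toy codes makes the tie problem the abstract targets easy to see (many codes share a distance):

```python
# Baseline Hamming ranking over packed 32-bit codes (toy data); note the ties.
import numpy as np

codes = np.random.default_rng(1).integers(0, 256, size=(64, 4), dtype=np.uint8)
query = codes[0]

xor = np.bitwise_xor(codes, query)                # differing bits per byte
hamming = np.unpackbits(xor, axis=1).sum(axis=1)  # popcount per code
print(np.sort(hamming))                           # repeated values = tied ranks
```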
BLOGS: Balanced local and global search for non-degenerate two view epipolar geometry This work considers the problem of estimating the epipolar geometry between two cameras without needing a prespecified set of correspondences. It is capable of resolving the epipolar geometry for cases when the views differ significantly in terms of baseline and rotation, resulting in a large number of features in one image that have no correspondence in the other image. We conditionally characterize the probability space of correspondences based on Joint Feature Distributions (JFD). We seek to maximize the probabilistic support of the putative correspondence set over a number of MCMC iterations, guided by proposal distributions based on similarity or JFD. Similarity-based guidance provides large (global) movements through correspondence space, and JFD-based guidance provides small (local) movements around the best known epipolar geometry the algorithm has found so far. We also propose a simple and novel method to rule out, at each iteration, correspondences that lead to degenerate configurations, thus speeding up convergence. We compare our algorithm with LO-RANSAC, NAPSAC, MAPSAC and BEEM, the current state-of-the-art competing methods, on a dataset that has significantly more change in baseline, rotation, and scale than those used in the current literature. We quantitatively benchmark the performance using manually specified ground truth corresponding point pairs. We find that our approach can achieve results of similar quality to the current state of the art in 10 times fewer iterations. We are also able to tolerate up to 90% outlier correspondences.
Meta-Recognition: The Theory and Practice of Recognition Score Analysis In this paper, we define meta-recognition, a performance prediction method for recognition algorithms, and examine the theoretical basis for its post-recognition score analysis form through the use of the statistical extreme value theory (EVT). The ability to predict the performance of a recognition system based on its outputs for each match instance is desirable for a number of important reasons, including automatic threshold selection for determining matches and non-matches, and automatic algorithm selection or weighting for multi-algorithm fusion. The emerging body of literature on post-recognition score analysis has been largely constrained to biometrics, where the analysis has been shown to successfully complement or replace image quality metrics as a predictor. We develop a new statistical predictor based upon the Weibull distribution, which produces accurate results on a per instance recognition basis across different recognition problems. Experimental results are provided for two different face recognition algorithms, a fingerprint recognition algorithm, a SIFT-based object recognition system, and a content-based image retrieval system.
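A loose sketch of the EVT idea, using SciPy to fit a Weibull to the tail of a score distribution and ask how extreme the top score is. The synthetic scores, the tail size, and the use of weibull_min are illustrative choices, not the paper's exact procedure.

```python
# Hedged EVT sketch: is the top score an outlier under a Weibull tail model?
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
scores = rng.normal(0.3, 0.1, size=1000)   # stand-in recognition scores
top = scores.max()
tail = np.sort(scores)[-50:-1]             # tail sample, excluding the top score

shape, loc, scale = stats.weibull_min.fit(tail)
p = stats.weibull_min.cdf(top, shape, loc=loc, scale=scale)
print(f"P(tail model <= top score) = {p:.3f}")  # near 1 suggests a true match
```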
Comparative evaluation of Received Signal-Strength Index (RSSI) based indoor localization techniques for construction jobsites This paper evaluates the accuracy of several RSSI-based localization techniques on a live jobsite and compares them to results obtained in an operating building. RSSI-based localization algorithms were tested due to their relative low cost and potential for accuracy. Four different localization algorithms (MinMax, Maximum Likelihood, Ring Overlapping Circle RSSI and k-Nearest Neighbor) were evaluated at both locations. The results indicate that the tested localization algorithms performed less well on the construction jobsite than they did in the operating building. The simple MinMax algorithm has better performance than other algorithms, with average errors as low as 1.2m with a beacon density of 0.186/m². The Ring Overlapping Circle RSSI algorithm was also shown to have good results and avoids implementation difficulties of other algorithms. k-Nearest Neighbor algorithms, previously explored by other construction researchers, have good accuracy in some test cases but may be particularly sensitive to beacon positioning.
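A minimal sketch of the MinMax algorithm under a log-distance path-loss model; tx_power and the path-loss exponent are illustrative constants, not values from the study.

```python
# MinMax sketch: intersect per-beacon bounding squares, take the centroid.
import numpy as np

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    return 10 ** ((tx_power - rssi) / (10 * n))   # log-distance path loss

def minmax(beacons, rssi):
    d = np.array([rssi_to_distance(r) for r in rssi])
    lo = (beacons - d[:, None]).max(axis=0)       # largest lower edge per axis
    hi = (beacons + d[:, None]).min(axis=0)       # smallest upper edge per axis
    return (lo + hi) / 2                          # centroid of the intersection

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
print(minmax(beacons, rssi=[-65.0, -70.0, -68.0]))
```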
Training-Free, Generic Object Detection Using Locally Adaptive Regression Kernels We present a generic detection/localization algorithm capable of searching for a visual object of interest without training. The proposed method operates using a single example of an object of interest to find similar matches, does not require prior knowledge (learning) about objects being sought, and does not require any preprocessing step or segmentation of a target image. Our method is based on the computation of local regression kernels as descriptors from a query, which measure the likeness of a pixel to its surroundings. Salient features are extracted from said descriptors and compared against analogous features from the target image. This comparison is done using a matrix generalization of the cosine similarity measure. We illustrate optimality properties of the algorithm using a naive-Bayes framework. The algorithm yields a scalar resemblance map, indicating the likelihood of similarity between the query and all patches in the target image. By employing nonparametric significance tests and nonmaxima suppression, we detect the presence and location of objects similar to the given query. The approach is extended to account for large variations in scale and rotation. High performance is demonstrated on several challenging data sets, indicating successful detection of objects in diverse contexts and under different imaging conditions.
Calibration of Rotating Sensors This paper reports on a method for calibrating rotating sensors, namely rotating sensor-line cameras and laser range-finders. Both are used together to accurately reconstruct 3D environments, such as large buildings. One of the important steps in the 3D reconstruction pipeline is the fusion of data. This requires an understanding of spatial relationships among the acquired data. Sensor calibration is the key to accurate 3D models.
Row 1 scores (score_0 to score_13): 1.027572, 0.025, 0.015038, 0.013044, 0.006824, 0.004822, 0.002306, 0.000972, 0.000282, 0.000073, 0.00002, 0.000003, 0, 0
Row 2 query: Learning to Combine Mid-Level Cues for Object Proposal Generation In recent years, region proposals have replaced sliding windows in support of object recognition, offering more discriminating shape and appearance information through improved localization. One powerful approach for generating region proposals is based on minimizing parametric energy functions with parametric maxflow. In this paper, we introduce Parametric Min-Loss (PML), a novel structured learning framework for parametric energy functions. While PML is generally applicable to different domains, we use it in the context of region proposals to learn to combine a set of mid-level grouping cues to yield a small set of object region proposals with high recall. Our learning framework accounts for multiple diverse outputs, and is complemented by diversification seeds based on image location and color. This approach casts perceptual grouping and cue combination in a novel structured learning framework which yields baseline improvements on VOC 2012 and COCO 2014.
Online Object Tracking with Proposal Selection Tracking-by-detection approaches are some of the most successful object trackers in recent years. Their success is largely determined by the detector model they learn initially and then update over time. However, under challenging conditions where an object can undergo transformations, e.g., severe rotation, these methods are found to be lacking. In this paper, we address this problem by formulating it as a proposal selection task and making two contributions. The first one is introducing novel proposals estimated from the geometric transformations undergone by the object, and building a rich candidate set for predicting the object location. The second one is devising a novel selection strategy using multiple cues, i.e., detection score and edgeness score computed from state-of-the-art object edges and motion boundaries. We extensively evaluate our approach on the visual object tracking 2014 challenge and online tracking benchmark datasets, and show the best performance.
Improving object detection with deep convolutional networks via Bayesian optimization and structured prediction Object detection systems based on the deep convolutional neural network (CNN) have recently made ground-breaking advances on several object detection benchmarks. While the features learned by these high-capacity neural networks are discriminative for categorization, inaccurate localization is still a major source of error for detection. Building upon high-capacity CNN architectures, we address the localization problem by 1) using a search algorithm based on Bayesian optimization that sequentially proposes candidate regions for an object bounding box, and 2) training the CNN with a structured loss that explicitly penalizes the localization inaccuracy. In experiments, we demonstrate that each of the proposed methods improves the detection performance over the baseline method on PASCAL VOC 2007 and 2012 datasets. Furthermore, the two methods are complementary and significantly outperform the previous state-of-the-art when combined.
Object Detection via a Multi-region and Semantic Segmentation-Aware CNN Model. We propose an object detection system that relies on a multi-region deep convolutional neural network (CNN) that also encodes semantic segmentation-aware features. The resulting CNN-based representation aims at capturing a diverse set of discriminative appearance factors and exhibits localization sensitivity that is essential for accurate object localization. We exploit the above properties of our recognition module by integrating it on an iterative localization mechanism that alternates between scoring a box proposal and refining its location with a deep CNN regression model. Thanks to the efficient use of our modules, we detect objects with very high localization accuracy. On the detection challenges of PASCAL VOC2007 and PASCAL VOC2012 we achieve mAP of 78.2% and 73.9% respectively, surpassing any other published work by a significant margin.
Segmentation as selective search for object recognition For object recognition, the current state-of-the-art is based on exhaustive search. However, to enable the use of more expensive features and classifiers and thereby progress beyond the state-of-the-art, a selective search strategy is needed. Therefore, we adapt segmentation as a selective search by reconsidering segmentation: We propose to generate many approximate locations over few and precise object delineations because (1) an object whose location is never generated cannot be recognised and (2) appearance and immediate nearby context are most effective for object recognition. Our method is class-independent and is shown to cover 96.7% of all objects in the Pascal VOC 2007 test set using only 1,536 locations per image. Our selective search enables the use of the more expensive bag-of-words method which we use to substantially improve the state-of-the-art by up to 8.5% for 8 out of 20 classes on the Pascal VOC 2010 detection challenge.
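A small sketch of the coverage metric behind a claim like "96.7% of all objects": a ground-truth box counts as covered when some proposal reaches an IoU threshold. The 0.5 threshold and the box format are assumptions; the paper's exact protocol may differ.

```python
# Proposal recall sketch: boxes are (x1, y1, x2, y2); IoU threshold assumed 0.5.
def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def recall(gt_boxes, proposals, thr=0.5):
    hits = sum(any(iou(g, p) >= thr for p in proposals) for g in gt_boxes)
    return hits / len(gt_boxes)

gt = [(10, 10, 50, 50), (60, 60, 90, 90)]
props = [(12, 8, 48, 52), (0, 0, 30, 30)]
print(recall(gt, props))   # 0.5: only the first object is covered
```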
Reduced Analytic Dependency Modeling: Robust Fusion for Visual Recognition This paper addresses the robustness issue of information fusion for visual recognition. Analyzing limitations in existing fusion methods, we discover two key factors affecting the performance and robustness of a fusion model under different data distributions, namely (1) data dependency and (2) fusion assumption on posterior distribution. Considering these two factors, we develop a new framework to model dependency based on probabilistic properties of posteriors without any assumption on the data distribution. Making use of the range characteristics of posteriors, the fusion model is formulated as an analytic function multiplied by a constant with respect to the class label. With the analytic fusion model, we give an equivalent condition to the independent assumption and derive the dependency model from the marginal distribution property. Since the number of terms in the dependency model increases exponentially, the Reduced Analytic Dependency Model (RADM) is proposed based on the convergent property of analytic function. Finally, the optimal coefficients in the RADM are learned by incorporating label information from training data to minimize the empirical classification error under regularized least square criterion, which ensures the discriminative power. Experimental results from robust non-parametric statistical tests show that the proposed RADM method statistically significantly outperforms eight state-of-the-art score-level fusion methods on eight image/video datasets for different tasks of digit, flower, face, human action, object, and consumer video recognition.
Lighting and pose robust face sketch synthesis Automatic face sketch synthesis has important applications in law enforcement and digital entertainment. Although great progress has been made in recent years, previous methods only work under well controlled conditions and often fail when there are variations of lighting and pose. In this paper, we propose a robust algorithm for synthesizing a face sketch from a face photo taken under a different lighting condition and in a different pose than the training set. It synthesizes local sketch patches using a multiscale Markov Random Field (MRF) model. The robustness to lighting and pose variations is achieved in three steps. Firstly, shape priors specific to facial components are introduced to reduce artifacts and distortions caused by variations of lighting and pose. Secondly, new patch descriptors and metrics which are more robust to lighting variations are used to find candidates of sketch patches given a photo patch. Lastly, a smoothing term measuring both intensity compatibility and gradient compatibility is used to match neighboring sketch patches on the MRF network more effectively. The proposed approach significantly improves the performance of the state-of-the-art method. Its effectiveness is shown through experiments on the CUHK face sketch database and celebrity photos collected from the web.
Action and Event Recognition with Fisher Vectors on a Compact Feature Set Action recognition in uncontrolled video is an important and challenging computer vision problem. Recent progress in this area is due to new local features and models that capture spatio-temporal structure between local features, or human-object interactions. Instead of working towards more complex models, we focus on the low-level features and their encoding. We evaluate the use of Fisher vectors as an alternative to bag-of-word histograms to aggregate a small set of state-of-the-art low-level descriptors, in combination with linear classifiers. We present a large and varied set of evaluations, considering (i) classification of short actions in five datasets, (ii) localization of such actions in feature-length movies, and (iii) large-scale recognition of complex events. We find that for basic action recognition and localization MBH features alone are enough for state-of-the-art performance. For complex events we find that SIFT and MFCC features provide complementary cues. On all three problems we obtain state-of-the-art results, while using fewer features and less complex models.
The feature and spatial covariant kernel: adding implicit spatial constraints to histogram In this paper, we are motivated to augment the holistic histogram representation with implicit spatial constraints. To be more concrete, we aim at finding a good match function for the problem of object/scene categorization which considers spatial constraints against heavy clutter and occlusion. Our solution is a partial match kernel under the histogram representation which varies simultaneously at both the feature and spatial resolutions, named the Feature and Spatial Covariant (FESCO) kernel. Both the FESCO kernel and its late fusion alternative achieve better match accuracy than Spatial Pyramid Match [13] and Pyramid Match [11]. We also apply the keypoint features to video indexing; on a large-scale TRECVID dataset of over 300 hours of video, to the best of our knowledge, this approach achieves the state-of-the-art result for a single feature.
Objects in Context In the task of visual object categorization, semantic context can play the very important role of reducing ambiguity in objects' visual appearance. In this work we propose to incorporate semantic object context as a post-processing step into any off-the-shelf object categorization model. Using a conditional random field (CRF) framework, our approach maximizes object label agreement according to contextual relevance. We compare two sources of context: one learned from training data and another queried from Google Sets. The overall performance of the proposed framework is evaluated on the PASCAL and MSRC datasets. Our findings conclude that incorporating context into object categorization greatly improves categorization accuracy.
Complex events detection using data-driven concepts Automatic event detection in a large collection of unconstrained videos is a challenging and important task. The key issue is to describe long, complex videos with high level semantic descriptors, which should find the regularity of events in the same category while distinguishing those from different categories. This paper proposes a novel unsupervised approach to discover data-driven concepts from multi-modality signals (audio, scene and motion) to describe the high level semantics of videos. Our method consists of three main components: first, we learn the low-level features separately for the three modalities. Second, we discover the data-driven concepts based on the statistics of learned features mapped to a low dimensional space using deep belief nets (DBNs). Finally, a compact and robust sparse representation is learned to jointly model the concepts from all three modalities. Extensive experimental results on a large in-the-wild dataset show that our proposed method significantly outperforms state-of-the-art methods.
Intelligent multi-camera video surveillance: A review Intelligent multi-camera video surveillance is a multidisciplinary field related to computer vision, pattern recognition, signal processing, communication, embedded computing and image sensors. This paper reviews the recent development of relevant technologies from the perspectives of computer vision and pattern recognition. The covered topics include multi-camera calibration, computing the topology of camera networks, multi-camera tracking, object re-identification, multi-camera activity analysis and cooperative video surveillance both with active and static cameras. Detailed descriptions of their technical challenges and comparison of different solutions are provided. It emphasizes the connection and integration of different modules in various environments and application scenarios. According to the most recent works, some problems can be jointly solved in order to improve the efficiency and accuracy. With the fast development of surveillance systems, the scales and complexities of camera networks are increasing and the monitored environments are becoming more and more complicated and crowded. This paper discusses how to face these emerging challenges.
A Parallel Hardware Architecture for Scale and Rotation Invariant Feature Detection This paper proposes a parallel hardware architecture for image feature detection based on the scale invariant feature transform algorithm and applied to the simultaneous localization and mapping problem. The work also proposes specific hardware optimizations considered fundamental to embed such a robotic control system on-a-chip. The proposed architecture is completely stand-alone; it reads the input data directly from a CMOS image sensor and provides the results via a field-programmable gate array coupled to an embedded processor. The results may either be used directly in an on-chip application or accessed through an Ethernet connection. The system is able to detect features at up to 30 frames per second (320×240 pixels) and has accuracy similar to a PC-based implementation. The achieved system performance is at least one order of magnitude better than a PC-based solution, a result achieved by investigating the impact of several hardware-orientated optimizations on performance, area and accuracy.
Keypoint Detection Based on the Unimodality Test of HOGs. We present a new method for keypoint detection. The main drawback of existing methods is their lack of robustness to image distortions. Small variations of the image lead to big differences in keypoint localizations. The present work shows a way of determining singular points in an image using histograms of oriented gradients (HOGs). Although HOGs are commonly used as keypoint descriptors, they have not been used in the detection stage before. We show that the unimodality of HOGs can be used as a measure of significance of the interest points. We show that keypoints detected using HOGs present higher robustness to image distortions, and we compare the results with existing methods, using the repeatability criterion.
Row 2 scores (score_0 to score_13): 1.203888, 0.101944, 0.026213, 0.01938, 0.004731, 0.000974, 0.000244, 0.000075, 0.000018, 0.000001, 0, 0, 0, 0
Row 3 query: 3D Object Recognition in Cluttered Scenes with Local Surface Features: A Survey 3D object recognition in cluttered scenes is a rapidly growing research area. Based on the types of features used, 3D object recognition methods can broadly be divided into two categories: global or local feature based methods. Intensive research has been done on local surface feature based methods as they are more robust to the occlusion and clutter frequently present in real-world scenes. This paper presents a comprehensive survey of existing local surface feature based 3D object recognition methods. These methods generally comprise three phases: 3D keypoint detection, local surface feature description, and surface matching. This paper covers an extensive literature survey of each phase of the process. It also lists a number of popular and contemporary databases together with their relevant attributes.
A Novel Multi-Purpose Matching Representation of Local 3D Surfaces: A Rotationally Invariant, Efficient, and Highly Discriminative Approach With an Adjustable Sensitivity In this paper, a novel approach to local 3D surface matching representation suitable for a range of 3D vision applications is introduced. Local 3D surface patches around key points on the 3D surface are represented by 2D images such that the representing 2D images enjoy certain characteristics which positively impact the matching accuracy, robustness, and speed. First, the proposed representation is complete, in the sense that there is no information loss during its computation. Second, the 2D representations are strictly invariant to all 3DoF rotations. To make optimal use of surface information, the sensitivity of the representations to surface information is adjustable. This also provides the proposed matching representation with the means to optimally adjust to a particular class of problems/applications or an acquisition technology. Each 2D matching representation is a sequence of adjustable integral kernels, where each kernel is efficiently computed from a triple of precise 3D curves (profiles) formed by intersecting three concentric spheres with the 3D surface. Robust techniques for sampling the profiles and establishing correspondences among them were devised. Based on the proposed matching representation, two techniques for the detection of key points are presented. The first is suitable for static images, while the second is suitable for 3D videos. The approach was tested on the Face Recognition Grand Challenge v2.0, the 3D Twins Expression Challenge, and the Bosphorus data sets, and superior face recognition performance was achieved. In addition, the proposed approach was used in object class recognition and tested on a Kinect data set.
Difference of Normals as a Multi-scale Operator in Unorganized Point Clouds A novel multi-scale operator for unorganized 3D point clouds is introduced. The Difference of Normals (DoN) provides a computationally efficient, multi-scale approach to processing large unorganized 3D point clouds. The application of DoN in the multi-scale filtering of two different real-world outdoor urban LIDAR scene datasets is quantitatively and qualitatively demonstrated. In both datasets the DoN operator is shown to segment large 3D point clouds into scale-salient clusters, such as cars, people, and lamp posts towards applications in semi-automatic annotation, and as a pre-processing step in automatic object recognition. The application of the operator to segmentation is evaluated on a large public dataset of outdoor LIDAR scenes with ground truth annotations.
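The operator itself is tiny once normals exist; below is a minimal sketch, assuming unit normals have already been estimated at a small and a large support radius (the estimation step, the expensive part, is omitted).

```python
# Difference of Normals sketch: normals at two radii are assumed precomputed.
import numpy as np

def don_magnitude(normals_small, normals_large):
    don = (normals_small - normals_large) / 2.0
    return np.linalg.norm(don, axis=1)         # in [0, 1] for unit normals

n_small = np.array([[0.0, 0.0, 1.0], [0.6, 0.0, 0.8]])
n_large = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
print(don_magnitude(n_small, n_large) > 0.25)  # True marks scale-salient points
```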
Aligning 2.5D Scene Fragments With Distinctive Local Geometric Features and Voting-Based Correspondences Aligning 2.5D views has been extensively explored in the past decades, where most prior works have concentrated on object data with complex structures. This paper presents a method to align real-world scene scans with challenging features such as noise, poor geometric information, and highly repeatable patterns. Our method consists of two modules: pairwise and multiview alignments. Key to the proposed pairwise alignment method is the rotational contour signature geometric feature and voting-based correspondence selection algorithm. The former promises strong discriminative power for 2.5D scene data, while the latter affords high-quality correspondences via a voting process for all raw feature matches using L2 distance and point pair affinity constraints. For the multiview alignment method, we first use a connected graph algorithm to establish the connections of all 2.5D views for coarse merging; then, we propose a shape-growing iterative closest point algorithm for further refinement. Experiments are conducted on scene point cloud datasets addressing both the indoor and outdoor scenarios, whereby we demonstrate that the proposed pairwise alignment method clearly outperforms the state of the art. Moreover, the proposed multiview alignment method manages to put multiple unordered 2.5D scene fragments into a unified coordinate system automatically, accurately, and efficiently.
Variable Dimensional Local Shape Descriptors for Object Recognition in Range Data We propose a new set of highly descriptive local shape descriptors (LSDs) for model-based object recognition and pose determination in input range data. Object recognition is performed in three phases: point matching, where point correspondences are established between range data and the complete model using local shape descriptors; pose recovery, where a computationally robust algorithm generates a rough alignment between the model and its instance in the scene, if such an instance is present; and pose refinement. While previously developed LSDs take a minimalist approach, in that they try to construct low dimensional and compact descriptors, we use high (up to 9) dimensional descriptors as the key to more accurate and robust point correspondence. Our strategy significantly simplifies the computational burden of the pose recovery phase by investing more time in the point matching phase. Experiments with Lidar and dense stereo range data illustrate the effectiveness of the approach by providing a higher percentage of correct matches in the candidate point matches list than a leading minimalist technique. Consequently, the number of RANSAC iterations required for recognition and pose determination is drastically smaller in our approach.
Voting-Based Pose Estimation For Robotic Assembly Using A 3d Sensor We propose a voting-based pose estimation algorithm applicable to 3D sensors, which are fast replacing their 2D counterparts in many robotics, computer vision, and gaming applications. It was recently shown that a pair of oriented 3D points, which are points on the object surface with normals, in a voting framework enables fast and robust pose estimation. Although oriented surface points are discriminative for objects with sufficient curvature changes, they are not compact and discriminative enough for many industrial and real-world objects that are mostly planar. As edges play the key role in 2D registration, depth discontinuities are crucial in 3D. In this paper, we investigate and develop a family of pose estimation algorithms that better exploit this boundary information. In addition to oriented surface points, we use two other primitives: boundary points with directions and boundary line segments. Our experiments show that these carefully chosen primitives encode more information compactly and thereby provide higher accuracy for a wide class of industrial parts and enable faster computation. We demonstrate a practical robotic bin-picking system using the proposed algorithm and a 3D sensor.
Harris 3D: a robust extension of the Harris operator for interest point detection on 3D meshes With the increasing amount of 3D data and the ability of capture devices to produce low-cost multimedia data, the capability to select relevant information has become an interesting research field. In 3D objects, the aim is to detect a few salient structures which can be used, instead of the whole object, for applications like object registration, retrieval, and mesh simplification. In this paper, we present an interest point detector for 3D objects based on the Harris operator, which has been used with good results in computer vision applications. We propose an adaptive technique to determine the neighborhood of a vertex, over which the Harris response on that vertex is calculated. Our method is robust to several transformations, which can be seen in the high repeatability values obtained using the SHREC feature detection and description benchmark. In addition, we show that Harris 3D outperforms the results obtained by recent effective techniques such as Heat Kernel Signatures.
Comparing images using the Hausdorff distance The Hausdorff distance measures the extent to which each point of a model set lies near some point of an image set and vice versa. Thus, this distance can be used to determine the degree of resemblance between two objects that are superimposed on one another. Efficient algorithms for computing the Hausdorff distance between all possible relative positions of a binary image and a model are presented. The focus is primarily on the case in which the model is only allowed to translate with respect to the image. The techniques are extended to rigid motion. The Hausdorff distance computation differs from many other shape comparison methods in that no correspondence between the model and the image is derived. The method is quite tolerant of small position errors such as those that occur with edge detectors and other feature extraction methods. It is shown that the method extends naturally to the problem of comparing a portion of a model against an image.
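For reference, the symmetric Hausdorff distance between two point sets can be computed with SciPy's directed_hausdorff; the point sets below are toy data.

```python
# Symmetric Hausdorff distance = max of the two directed distances.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.1, 0.1], [1.1, 0.0]])

d_ab = directed_hausdorff(a, b)[0]   # max over a of min distance to b
d_ba = directed_hausdorff(b, a)[0]
print(max(d_ab, d_ba))
```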
Reconstructing a textured CAD model of an urban environment using vehicle-borne laser range scanners and line cameras In this paper, a novel method is presented for generating a textured CAD model of an outdoor urban environment using a vehicle-borne sensor system. In data measurement, three single-row laser range scanners and six line cameras are mounted on a measurement vehicle, which has been equipped with a GPS/INS/Odometer-based navigation system. Laser range and line images are measured as the vehicle moves forward. They are synchronized with the navigation system so they can be geo-referenced to a world coordinate system. Generation of the CAD model is conducted in two steps. A geometric model is first generated using the geo-referenced laser range data, where urban features, such as buildings, ground surfaces, and trees are extracted in a hierarchical way. Different urban features are represented using different geometric primitives, such as a planar face, a triangulated irregular network (TIN), and a triangle. The texture of the urban features is generated by projecting and resampling line images onto the geometric model. An outdoor experiment is conducted, and a textured CAD model of a real urban environment is reconstructed in a fully automatic mode.
The MOPED framework: Object recognition and pose estimation for manipulation We present MOPED, a framework for Multiple Object Pose Estimation and Detection that seamlessly integrates single-image and multi-image object recognition and pose estimation in one optimized, robust, and scalable framework. We address two main challenges in computer vision for robotics: robust performance in complex scenes, and low latency for real-time operation. We achieve robust performance with Iterative Clustering Estimation (ICE), a novel algorithm that iteratively combines feature clustering with robust pose estimation. Feature clustering quickly partitions the scene and produces object hypotheses. The hypotheses are used to further refine the feature clusters, and the two steps iterate until convergence. ICE is easy to parallelize, and easily integrates single- and multi-camera object recognition and pose estimation. We also introduce a novel object hypothesis scoring function based on M-estimator theory, and a novel pose clustering algorithm that robustly handles recognition outliers. We achieve scalability and low latency with an improved feature matching algorithm for large databases, a GPU/CPU hybrid architecture that exploits parallelism at all levels, and an optimized resource scheduler. We provide extensive experimental results demonstrating state-of-the-art performance in terms of recognition, scalability, and latency in real-world robotic applications.
Modeling the World from Internet Photo Collections There are billions of photographs on the Internet, comprising the largest and most diverse photo collection ever assembled. How can computer vision researchers exploit this imagery? This paper explores this question from the standpoint of 3D scene modeling and visualization. We present structure-from-motion and image-based rendering algorithms that operate on hundreds of images downloaded as a result of keyword-based image search queries like "Notre Dame" or "Trevi Fountain." This approach, which we call Photo Tourism, has enabled reconstructions of numerous well-known world sites. This paper presents these algorithms and results as a first step towards 3D modeling of the world's well-photographed sites, cities, and landscapes from Internet imagery, and discusses key open problems and challenges for the research community.
Simultaneous Camera Pose and Correspondence Estimation with Motion Coherence Traditionally, the camera pose recovery problem has been formulated as one of estimating the optimal camera pose given a set of point correspondences. This critically depends on the accuracy of the point correspondences and would have problems in dealing with ambiguous features such as edge contours and high visual clutter. Joint estimation of camera pose and correspondence attempts to improve performance by explicitly acknowledging the chicken and egg nature of the pose and correspondence problem. However, such joint approaches for the two-view problem are still few and even then, they face problems when scenes contain largely edge cues with few corners, due to the fact that epipolar geometry only provides a "soft" point to line constraint. Viewed from the perspective of point set registration, the point matching process can be regarded as the registration of points while preserving their relative positions (i.e. preserving scene coherence). By demanding that the point set should be transformed coherently across views, this framework leverages on higher level perceptual information such as the shape of the contour. While thus potentially allowing registration of non-unique edge points, the registration framework in its traditional form is subject to substantial point localization error and is thus not suitable for estimating camera pose. In this paper, we introduce an algorithm which jointly estimates camera pose and correspondence within a point set registration framework based on motion coherence, with the camera pose helping to localize the edge registration, while the "ambiguous" edge information helps to guide camera pose computation. The algorithm can compute camera pose over large displacements and by utilizing the non-unique edge points can recover camera pose from what were previously regarded as feature-impoverished SfM scenes. Our algorithm is also sufficiently flexible to incorporate high dimensional feature descriptors and works well on traditional SfM scenes with adequate numbers of unique corners.
A Dense Stereo Matching Using Two-Pass Dynamic Programming with Generalized Ground Control Points A method for solving dense stereo matching problem is presented in this paper. First, a new generalized ground control points (GGCPs) scheme is introduced, where one or more disparity candidates for the true disparity of each pixel are assigned by local matching using the oriented spatial filters. By allowing "all" pixels to have multiple candidates for their true disparities, GGCPs not only guarantee to provide a sufficient number of starting pixels needed for guiding the subsequent matching process, but also remarkably reduce the risk of false match, improving the previous GCP-based approaches where the number of the selected control points tends to be inversely proportional to the reliability. Second, by employing a two-pass dynamic programming technique that performs optimization both along and across the scanlines, we solve the typical inter-scanline inconsistency problem. Moreover, combined with the GGCPs, the stability and efficiency of the optimization are improved significantly. Experimental results for the standard data sets show that the proposed algorithm achieves comparable results to the state-of-the-arts with much less computational cost.
Mixture Distributions for Weakly Supervised Classification in Remote Sensing Images For its simplicity and efficiency, the bag-of-words representation based on appearance features is widely used in image and text classification. Its drawback is that shape patterns of the image are neglected. This paper presents a novel image classification approach using a bag-of-words representation of textons while taking into account spatial information. A generative probabilistic modeling of the distribution of textons is proposed. The parameters of the mixture's components are estimated using an EM algorithm. We show that the number of classes in a database can be found automatically and exactly by MDL. This modeling gives very good results for the task of weakly supervised classification in satellite images.
Row 3 scores (score_0 to score_13): 1.015616, 0.015714, 0.014286, 0.011825, 0.00908, 0.005714, 0.002454, 0.000324, 0.000084, 0.000023, 0.000005, 0, 0, 0
Row 4 query: The method for image retrieval based on multi-factors correlation utilizing block truncation coding. In this paper, we propose multi-factors correlation (MFC) to describe the image, comprising structure element correlation (SEC), gradient value correlation (GVC), and gradient direction correlation (GDC). First, the RGB color space image is converted to a bitmap image and a mean color component image utilizing block truncation coding (BTC). Then, the three correlations are used to extract the image feature. The structure elements can effectively represent the bitmap generated by BTC, and SEC can effectively denote the bitmap's structure and the correlation of the blocks in the bitmap. GVC and GDC can effectively denote the gradient relation, which is computed from the mean color component image. Formed by SEC, GVC, and GDC, the image feature vectors can effectively represent the image. Finally, the results demonstrate that the method has better performance than other image retrieval methods in the experiments.
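A minimal sketch of classic block truncation coding on one grayscale block, the step that yields the bitmap and mean component this abstract starts from; this is textbook BTC, not the paper's full MFC pipeline.

```python
# Classic BTC on one block: bitmap + two levels preserving mean and variance.
import numpy as np

def btc_block(block):
    m, s = block.mean(), block.std()
    bitmap = block > m
    q, n = bitmap.sum(), block.size           # q = pixels above the mean
    if q in (0, n):
        return bitmap, m, m                   # flat block: one level suffices
    low = m - s * np.sqrt(q / (n - q))
    high = m + s * np.sqrt((n - q) / q)
    return bitmap, low, high

block = np.array([[2, 9], [12, 1]], dtype=float)
bitmap, low, high = btc_block(block)
print(np.where(bitmap, high, low))            # reconstructed block
```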
A novel method for image retrieval based on structure elements' descriptor In this paper, structure elements' descriptor (SED), a novel texture descriptor, is proposed. SED can effectively describe images and represent image local features. Moreover, SED can extract and describe color and texture features. The image structure elements' histogram (SEH) is computed by SED, and the HSV color space is used (quantized to 72 bins). SEH integrates the advantages of both statistical and structural texture description methods, and it can represent the spatial correlation of color and texture. The results demonstrate that the method has a better performance than other image retrieval methods in the experiments.
Visual word spatial arrangement for image retrieval and classification We present word spatial arrangement (WSA), an approach to represent the spatial arrangement of visual words under the bag-of-visual-words model. It lies in a simple idea which encodes the relative position of visual words by splitting the image space into quadrants using each detected point as origin. WSA generates compact feature vectors and is flexible for being used for image retrieval and classification, for working with hard or soft assignment, requiring no pre/post processing for spatial verification. Experiments in the retrieval scenario show the superiority of WSA in relation to Spatial Pyramids. Experiments in the classification scenario show a reasonable compromise between those methods, with Spatial Pyramids generating larger feature vectors, while WSA provides adequate performance with much more compact features. As WSA encodes only the spatial information of visual words and not their frequency of occurrence, the results indicate the importance of such information for visual categorization.
Image retrieval based on multi-texton histogram This paper presents a novel image feature representation method, called multi-texton histogram (MTH), for image retrieval. MTH integrates the advantages of co-occurrence matrix and histogram by representing the attribute of co-occurrence matrix using histogram. It can be considered as a generalized visual attribute descriptor but without any image segmentation or model training. The proposed MTH method is based on Julesz's textons theory, and it works directly on natural images as a shape descriptor. Meanwhile, it can be used as a color texture descriptor and leads to good performance. The proposed MTH method is extensively tested on the Corel dataset with 15000 natural images. The results demonstrate that it is much more efficient than representative image feature descriptors, such as the edge orientation auto-correlogram and the texton co-occurrence matrix. It has good discrimination power of color, texture and shape features.
Histograms of Oriented Gradients for Human Detection We study the question of feature sets for robust visual object recognition, adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of Histograms of Oriented Gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
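For reference, a HOG descriptor with the kind of parameters the abstract's conclusions point to (fine orientation binning, coarse spatial binning, block normalization) can be computed with scikit-image; the random image is a stand-in for a detection window.

```python
# HOG descriptor via scikit-image; 9 orientations, 8x8 cells, 2x2 blocks.
import numpy as np
from skimage.feature import hog

image = np.random.default_rng(3).random((128, 64))   # stand-in 128x64 window
features = hog(image,
               orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2),
               block_norm="L2-Hys")
print(features.shape)   # (3780,) for a 128x64 window with these settings
```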
Photorealistic Scene Reconstruction by Voxel Coloring A novel scene reconstruction technique is presented, different from previous approaches in its ability to cope with large changes in visibility and its modeling of intrinsic scene color and texture information. The method avoids image correspondence problems by working in a discretized scene space whose voxels are traversed in a fixed visibility ordering. This strategy takes full account of occlusions and allows the input cameras to be far apart and widely distributed about the environment. The algorithm identifies a special set of invariant voxels which together form a spatial and photometric reconstruction of the scene, fully consistent with the input images. The approach is evaluated with images from both inward-facing and outward-facing cameras.
Surface reconstruction from unorganized points We describe and demonstrate an algorithm that takes as input an unorganized set of points {x_1, ..., x_n} ⊂ IR^3 on or near an unknown manifold M, and produces as output a simplicial surface that approximates M. Neither the topology, the presence of boundaries, nor the geometry of M are assumed to be known in advance — all are inferred automatically from the data. This problem naturally arises in a variety of practical situations such as range scanning an object from multiple view points, recovery of biological shapes from two-dimensional slices, and interactive surface sketching.
Human recognition using 3D ear images. This paper proposes an ear recognition technique which makes use of 3D along with co-registered 2D ear images. It presents a two-step matching technique to compare two 3D ears. In the first step, it computes salient 3D data points from 3D ear images with the help of local 2D feature points of co-registered 2D ear images. Subsequently, it uses these salient 3D points to coarsely align 3D ear images. In the second step, it performs final matching of coarsely aligned 3D ear images by using a Generalized Procrustes Analysis (GPA) and Iterative Closest Point (ICP) based matching technique (GPA-ICP). The proposed technique has been tested on 1780 images of 404 subjects (two or more images per subject) from the University of Notre Dame public database, Collection J2 (UND-J2), which consists of co-registered 2D and 3D ear images with scale and pose variations. It has achieved a verification accuracy of 98.30% with an equal error rate of 1.8%.
Video Stabilization Using Scale-Invariant Features Video stabilization is one of the important video processing techniques for removing unwanted camera vibration from a video sequence. In this paper, we present a practical method to remove annoying shaky motion and reconstruct a stabilized video sequence with good visual quality. Scale-invariant (SIFT) features, proven to be invariant to image scale and rotation, are applied to estimate the camera motion. The unwanted vibrations are separated from the intentional camera motion with a combination of Gaussian kernel filtering and parabolic fitting. It is demonstrated that our method not only effectively removes the high-frequency 'noise' motion, but also minimizes the missing area as much as possible. To reconstruct the undefined areas resulting from motion compensation, we adopt a mosaicing method with dynamic programming. The proposed method has been confirmed to be effective over a wide variety of videos.
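A minimal sketch of the smoothing step described above, assuming per-frame inter-frame translations (dx, dy) have already been estimated from SIFT matches; the paper additionally uses parabolic fitting, which is omitted here, and the function name and sigma value are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def stabilize_trajectory(dx, dy, sigma=15.0):
    """Accumulate per-frame camera motion into a trajectory, low-pass it
    with a Gaussian kernel, and return the per-frame corrections that
    cancel the high-frequency shake while keeping intentional motion."""
    traj_x, traj_y = np.cumsum(dx), np.cumsum(dy)
    smooth_x = gaussian_filter1d(traj_x, sigma)
    smooth_y = gaussian_filter1d(traj_y, sigma)
    return smooth_x - traj_x, smooth_y - traj_y
```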
Epipolar Geometry of Panoramic Cameras This paper presents fundamental theory and design of central panoramic cameras. Panoramic cameras combine a convex hyperbolic or parabolic mirror with a perspective camera to obtain a large field of view. We show how to design a panoramic camera with a tractable geometry and we propose a simple calibration method. We derive the image formation function for such a camera. The main contribution of the paper is the derivation of the epipolar geometry between a pair of panoramic cameras. We show...
Learning to detect unseen object classes by between-class attribute transfer We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, "Animals with Attributes", of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes.
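A direct-attribute-prediction style sketch of the idea (not the authors' exact model): attribute classifiers are trained on seen classes, and an unseen class is scored by how well the predicted attributes match its human-specified signature. All names are ours, and the sketch assumes each attribute takes both values among the seen classes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_attribute_classifiers(X_train, y_train, A_seen):
    """A_seen[c, a] = 1 if seen class c has attribute a; each training
    image inherits the attribute labels of its class."""
    return [LogisticRegression(max_iter=1000).fit(X_train, A_seen[y_train, a])
            for a in range(A_seen.shape[1])]

def predict_unseen(X_test, attribute_clfs, A_unseen, eps=1e-6):
    # per-image posterior probability for every attribute
    P = np.column_stack([c.predict_proba(X_test)[:, 1] for c in attribute_clfs])
    # log-likelihood of each unseen class's binary attribute signature
    scores = P @ np.log(A_unseen.T + eps) + (1 - P) @ np.log(1 - A_unseen.T + eps)
    return scores.argmax(axis=1)
```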
IrisNet: an internet-scale architecture for multimedia sensors Most current sensor network research explores the use of extremely simple sensors on small devices called motes and focuses on overcoming the resource constraints of these devices. In contrast, our research explores the challenges of multimedia sensors and is motivated by the fact that multimedia devices, such as cameras, are rapidly becoming inexpensive, yet their use in a sensor network presents a number of unique challenges. For example, the data rates involved with multimedia sensors are orders of magnitude greater than those for sensor motes and this data cannot easily be processed by traditional sensor network techniques that focus on scalar data. In addition, the richness of the data generated by multimedia sensors makes them useful for a wide variety of applications. This paper presents an overview of IRISNET, a sensor network architecture that enables the creation of a planetary-scale infrastructure of multimedia sensors that can be shared by a large number of applications. To ensure the efficient collection of sensor readings, IRISNET enables the application-specific processing of sensor feeds on the significant computation resources that are typically attached to multimedia sensors. IRISNET enables the storage of sensor readings close to their source by providing a convenient and extensible distributed XML database infrastructure. Finally, IRISNET provides a number of multimedia processing primitives that enable the effective processing of sensor feeds in-network and at-sensor.
Complex events detection using data-driven concepts Automatic event detection in a large collection of unconstrained videos is a challenging and important task. The key issue is to describe long, complex videos with high-level semantic descriptors, which should find the regularity of events in the same category while distinguishing those from different categories. This paper proposes a novel unsupervised approach to discover data-driven concepts from multi-modality signals (audio, scene and motion) to describe the high-level semantics of videos. Our method consists of three main components: first, we learn the low-level features separately from the three modalities. Second, we discover the data-driven concepts based on the statistics of the learned features mapped to a low-dimensional space using deep belief nets (DBNs). Finally, a compact and robust sparse representation is learned to jointly model the concepts from all three modalities. Extensive experimental results on a large in-the-wild dataset show that our proposed method significantly outperforms state-of-the-art methods.
Autonomous Detection Of Volcanic Plumes On Outer Planetary Bodies We experimentally evaluated the efficacy of various autonomous supervised classification techniques for detecting transient geophysical phenomena. We demonstrated methods of detecting volcanic plumes on the planetary satellites Io and Enceladus using spacecraft images from the Voyager, Galileo, New Horizons, and Cassini missions. We successfully detected 73-95% of known plumes in images from all four mission datasets. Additionally, we showed that the same techniques are applicable to differentiating geologic features, such as plumes and mountains, which exhibit similar appearances in images.
scores (score_0 through score_13): 1.1, 0.033333, 0.011111, 0.006667, 0.000068, 0, 0, 0, 0, 0, 0, 0, 0, 0
Internet image archaeology: automatically tracing the manipulation history of photographs on the web We propose a system for automatically detecting the ways in which images have been copied and edited or manipulated. We draw upon these manipulation cues to construct probable parent-child relationships between pairs of images, where the child image was derived through a series of visual manipulations on the parent image. Through the detection of these relationships across a plurality of images, we can construct a history of the image, called the visual migration map (VMM), which traces the manipulations applied to the image through past generations. We propose to apply VMMs as part of a larger internet image archaeology system (IIAS), which can process a given set of related images and surface many interesting instances of images from within the set. In particular, the image closest to the "original" photograph might be among the images with the most descendants in the VMM. Or, the images that are most deeply descended from the original may exhibit unique differences and changes in the perspective being conveyed by the author. We evaluate the system across a set of photographs crawled from the web and find that many types of image manipulations can be automatically detected and used to construct plausible VMMs. These maps can then be successfully mined to find interesting instances of images and to suppress uninteresting or redundant ones, leading to a better understanding of how images are used over different times, sources, and contexts.
Exploitation and Exploration Balanced Hierarchical Summary For Landmark Images While we have made significant progress in image understanding and search, how to meet the ultimate goal of satisfying both exploration and exploitation in one single system is still an open challenge. In the context of landmark images, it means that a system should not only be able to help users quickly locate the photo they are interested in (exploitation) but also to discover parts of the landmark which they have never seen before (exploration), which is a common request as evidenced by many recent multimedia studies. To the best of our knowledge, existing systems mainly focus on either exploration (e.g. photo browsing) or exploitation (e.g. representative photo identification), while users' needs for exploration and exploitation are dynamically mixed. In this paper, we tackle the challenge by organizing landmark images in a hierarchical summary which gives users the flexibility to conduct both exploration and exploitation. In the hierarchical summary construction, we introduce two principles: a coherence principle and a diversity principle. Behind these two principles, the intrinsic concept is "detail level", which measures how much detail an image reflects for a certain landmark. A new objective function is derived from the definition of both the exploration and the exploitation experience on detail level. The problem of finding the optimal hierarchical summary is formulated as searching over a space of trees for the one that achieves the best objective score. Extensive quantitative experimental results and comprehensive user studies show that the optimized hierarchical summary is able to satisfy both experiences simultaneously.
Partial-Duplicate Clustering and Visual Pattern Discovery on Web Scale Image Database In this paper, we study the problem of discovering visual patterns and partial-duplicate images, which is fundamental to visual concept representation and image parsing, but very challenging when the database is extremely large, such as billions of images indexed by a commercial search engine. Although extensive research with sophisticated algorithms has been conducted for either partial-duplicate clustering or visual pattern discovery, most of it cannot be easily extended to this scale, since both are clustering problems in nature and require pairwise comparisons. To tackle this computational challenge, we introduce a novel and highly parallelizable framework to discover partial-duplicate images and visual patterns in a unified way in distributed computing systems. We emphasize the nested property of local features, and propose the generalized nested feature (GNF) as a mid-level representation for regions and local patterns. Initial coarse clusters are then discovered by GNFs, upon which n-gram GNF is defined to represent co-occurrent visual patterns. After that, efficient merging and refining algorithms are used to get the partial-duplicate clusters, and logical combinations of probabilistic GNF models are leveraged to represent the visual patterns of partially duplicate images. Extensive experiments show the parallelizable property and effectiveness of the algorithms on both partial-duplicate clustering and visual pattern discovery. With 2000 machines, it takes about 8 minutes to process 1 million images and about 400 minutes for 40 million, which is quite efficient compared to previous methods.
An efficient near-duplicate video shot detection method using shot-based interest points We propose a shot-based interest point selection approach for effective and efficient near-duplicate search over a large collection of video shots. The basic idea is to eliminate the local descriptors with lower frequencies among the selected video frames from a shot to ensure that the shot representation is compact and discriminative. Specifically, we propose an adaptive frame selection strategy called furthest point voronoi (FPV) to produce the shot frame set according to the shot content and frame distribution. We describe a novel strategy named reference extraction (RE) to extract the shot interest descriptors from a keyframe with the support of the selected frame set. We demonstrate the effectiveness and efficiency of the proposed approaches with extensive experiments.
Partition min-hash for partial duplicate image discovery In this paper, we propose Partition min-Hash (PmH), a novel hashing scheme for discovering partial-duplicate images from a large database. Unlike the standard min-Hash algorithm, which assumes a bag-of-words image representation, our approach utilizes the fact that duplicate regions among images are often localized. By theoretical analysis, simulation, and empirical study, we show that PmH outperforms standard min-Hash in terms of precision and recall, while being orders of magnitude faster. When combined with the state-of-the-art Geometric min-Hash algorithm, our approach speeds up hashing by 10 times without losing precision or recall. When given a fixed time budget, our method achieves much higher recall than the state-of-the-art.
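A toy version of the idea, assuming features are given as (x, y, word_id) with coordinates normalized to [0, 1): the image is divided into grid cells and each cell is min-hashed separately, so duplicate regions that are localized still produce colliding sketches. The hashing and partition details here are ours, not the paper's exact scheme.

```python
import random

def minhash(word_ids, num_hashes, seed=0):
    """Standard min-hash sketch of a set of visual-word ids."""
    rng = random.Random(seed)
    params = [(rng.randrange(1, 2**31), rng.randrange(2**31))
              for _ in range(num_hashes)]
    prime = 2**31 - 1
    return [min((a * w + b) % prime for w in word_ids) for a, b in params]

def partition_minhash(features, grid=4, num_hashes=2):
    """Hash each spatial cell's words separately; `features` is a list of
    (x, y, word_id) tuples with coordinates in [0, 1)."""
    sketches = {}
    for cell in range(grid * grid):
        words = [w for x, y, w in features
                 if int(x * grid) + grid * int(y * grid) == cell]
        if words:
            sketches[cell] = tuple(minhash(words, num_hashes))
    return sketches
```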
Scalable logo recognition in real-world images In this paper we propose a highly effective and scalable framework for recognizing logos in images. At the core of our approach lies a method for encoding and indexing the relative spatial layout of local features detected in the logo images. Based on the analysis of the local features and the composition of basic spatial structures, such as edges and triangles, we can derive a quantized representation of the regions in the logos and minimize the false positive detections. Furthermore, we propose a cascaded index for scalable multi-class recognition of logos. For the evaluation of our system, we have constructed and released a logo recognition benchmark which consists of manually labeled logo images, complemented with non-logo images, all posted on Flickr. The dataset consists of a training, validation, and test set with 32 logo classes. We thoroughly evaluate our system with this benchmark and show that our approach effectively recognizes different logo classes with high precision.
An Image-Based Approach to Video Copy Detection With Spatio-Temporal Post-Filtering This paper introduces a video copy detection system which efficiently matches individual frames and then verifies their spatio-temporal consistency. The approach for matching frames relies on a recent local feature indexing method, which is at the same time robust to significant video transformations and efficient in terms of memory usage and computation time. We match either keyframes or uniformly sampled frames. To further improve the results, a verification step robustly estimates a spatio-temporal model between the query video and the potentially corresponding video segments. Experimental results evaluate the different parameters of our system and measure the trade-off between accuracy and efficiency. We show that our system obtains excellent results for the TRECVID 2008 copy detection task.
Object Retrieval With Large Vocabularies And Fast Spatial Matching In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale" image corpora.
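A minimal sketch of the bag-of-words scoring backbone such a system is built on: an inverted index maps each visual word to the images containing it, and query/database tf-idf vectors are compared by accumulating per-word contributions. The function names and exact weighting are ours, for illustration only.

```python
import math
from collections import Counter, defaultdict

def build_index(database):
    """database: {image_id: list of visual-word ids}.
    Returns an inverted index of (image, tf) postings and idf weights."""
    index, doc_freq = defaultdict(list), Counter()
    for img, words in database.items():
        for w, n in Counter(words).items():
            index[w].append((img, n / len(words)))
            doc_freq[w] += 1
    idf = {w: math.log(len(database) / d) for w, d in doc_freq.items()}
    return index, idf

def tfidf_score(query_words, index, idf):
    """Dot product of tf-idf vectors, accumulated via the inverted index
    so only images sharing a word with the query are ever touched."""
    scores = Counter()
    for w, n in Counter(query_words).items():
        q_weight = (n / len(query_words)) * idf.get(w, 0.0)
        for img, tf in index.get(w, []):
            scores[img] += q_weight * tf * idf.get(w, 0.0)
    return scores.most_common()
```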
Detecting Irregularities in Images and in Video We address the problem of detecting irregularities in visual data, e.g., detecting suspicious behaviors in video sequences, or identifying salient patterns in images. The term "irregular" depends on the context in which the "regular" or "valid" are defined. Yet, it is not realistic to expect explicit definition of all possible valid configurations for a given context. We pose the problem of determining the validity of visual data as a process of constructing a puzzle: We try to compose a new observed image region or a new video segment ("the query") using chunks of data ("pieces of puzzle") extracted from previous visual examples ("the database"). Regions in the observed data which can be composed using large contiguous chunks of data from the database are considered very likely, whereas regions in the observed data which cannot be composed from the database (or can be composed, but only using small fragmented pieces) are regarded as unlikely/suspicious. The problem is posed as an inference process in a probabilistic graphical model. We show applications of this approach to identifying saliency in images and video, for detecting suspicious behaviors and for automatic visual inspection for quality assurance.
Nested sparse quantization for efficient feature coding Many state-of-the-art methods in object recognition extract features from an image and encode them, followed by a pooling step and classification. Within this processing pipeline, often the encoding step is the bottleneck, for both computational efficiency and performance. We present a novel assignment-based encoding formulation. It allows for the fusion of assignment-based encoding and sparse coding into one formulation. We also use this to design a new, very efficient, encoding. At the heart of our formulation lies a quantization into a set of k-sparse vectors, which we denote as sparse quantization. We design the new encoding as two nested, sparse quantizations. Its efficiency stems from leveraging bit-wise representations. In a series of experiments on standard recognition benchmarks, namely Caltech 101, PASCAL VOC 07 and ImageNet, we demonstrate that our method achieves results that are competitive with the state-of-the-art, and requires orders of magnitude less time and memory. Our method is able to encode one million images using 4 CPUs in a single day, while maintaining a good performance.
Online multiclass learning by interclass hypothesis sharing We describe a general framework for online multiclass learning based on the notion of hypothesis sharing. In our framework sets of classes are associated with hypotheses. Thus, all classes within a given set share the same hypothesis. This framework includes as special cases commonly used constructions for multiclass categorization such as allocating a unique hypothesis for each class and allocating a single common hypothesis for all classes. We generalize the multiclass Perceptron to our framework and derive a unifying mistake bound analysis. Our construction naturally extends to settings where the number of classes is not known in advance but, rather, is revealed along the online learning process. We demonstrate the merits of our approach by comparing it to previous methods on both synthetic and natural datasets.
Efficient Image Feature Combination with Hierarchical Scheme for Content-Based Image Management System This paper proposes an efficient image feature combination based on a local descriptor and a hierarchical indexing scheme, obtained by clustering with a global descriptor, for content-based image management systems such as image identification and identical-image grouping. As features for image retrieval, we consider both a global feature, which captures general information about the overall image for fast retrieval, and a local feature, which is based on feature points and has high matching accuracy for fine matching of images. The developed local feature is invariant to image scale and rotation, addition of noise, and change in illumination; thus, it performs reliable matching between different views of a scene across affine transformation. The method first searches with the global feature among the image clusters of the database, and then does fine searching with the local feature only among the images in the candidate cluster. In order to decrease computation time, we apply conventional clustering methods to group images with similar characteristics together, so that search can be performed hierarchically by fine matching within a partial database of candidate images. This overcomes the drawback of exhaustive matching time between similar images when using only the local descriptor.
3d Indoor Environment Modeling By A Mobile Robot With Omnidirectional Stereo And Laser Range Finder This paper deals with the generation of 3D environment models. The model is expected to be used for location recognition by robots and users. For such a use, very precise models are not necessary. We therefore develop a method of generating 3D environment models relatively simply and quickly. We use omnidirectional stereo as a primary sensor and additionally use a laser range finder. The model is composed of layered contours of free spaces, with textures extracted from images. Results of modeling and an application of the model to robot localization are presented.
Has something changed here? Autonomous difference detection for security patrol robots This paper presents a system for autonomous change detection with a security patrol robot. In an initial step, a reference model of the environment is created, and changes are then detected with respect to the reference model as differences in coloured 3D point clouds, which are obtained from a 3D laser range scanner and a CCD camera. The suggested approach introduces several novel aspects, including a registration method that utilizes local visual features to determine point correspondences (thus essentially working without an initial pose estimate) and the 3D-NDT representation with adaptive cell size to efficiently represent both the spatial and colour aspects of the reference model. Apart from a detailed description of the individual parts of the difference detection system, a qualitative experimental evaluation in an indoor lab environment is presented, which demonstrates that the suggested system is able to register and detect changes in spatial 3D data and also to detect changes that occur in colour space and are not observable using range values only.
scores (score_0 through score_13): 1.046133, 0.028571, 0.028571, 0.014311, 0.00898, 0.003308, 0.001085, 0.000134, 0.00001, 0, 0, 0, 0, 0
Incremental algorithms for finding the convex hulls of circles and the lower envelopes of parabolas The existing O(n log n) algorithms for finding the convex hulls of circles and the lower envelope of parabolas follow the divide-and-conquer paradigm. The difficulty with developing incremental algorithms for these problems is that the introduction of a new circle or parabola can cause \Theta(n) structural changes, leading to \Theta(n^2) total structural changes during the running of the algorithm. In this note we examine the geometry of these problems and show that, if the circles or...
Fast Algorithms for Large-State-Space HMMs with Applications to Web Usage Analysis In applying Hidden Markov Models to the analysis of massive data streams, it is often necessary to use an artificially reduced set of states; this is due in large part to the fact that the basic HMM estimation algorithms have a quadratic dependence on the size of the state set. We present algorithms that reduce this computational bottleneck to linear or near-linear time, when the states can be embedded in an underlying grid of parameters. This type of state representation arises in many domains; in particular, we show an application to traffic analysis at a high-volume Web site.
Multiresolution Markov models for signal and image processing Reviews a significant component of the rich field of statistical multiresolution (MR) modeling and processing. These MR methods have found application and permeated the literature of a widely scattered set of disciplines, and one of our principal objectives is to present a single, coherent picture of this framework. A second goal is to describe how this topic fits into the even larger field of MR methods and concepts, in particular making ties to topics such as wavelets and multigrid methods. A third goal is to provide several alternate viewpoints for this body of work, as the methods and concepts we describe intersect with a number of other fields. The principal focus of our presentation is the class of MR Markov processes defined on pyramidally organized trees. The attractiveness of these models stems from both the very efficient algorithms they admit and their expressive power and broad applicability. We show how a variety of methods and models relate to this framework, including models for self-similar and 1/f processes. We also illustrate how these methods have been used in practice.
Distance transformations in digital images A distance transformation converts a binary digital image, consisting of feature and non-feature pixels, into an image where all non-feature pixels have a value corresponding to the distance to the nearest feature pixel. Computing these distances is in principle a global operation. However, global operations are prohibitively costly. Therefore algorithms that consider only small neighborhoods, but still give a reasonable approximation of the Euclidean distance, are necessary. In the first part of this paper optimal distance transformations are developed. Local neighborhoods of sizes up to 7×7 pixels are used. First real-valued distance transformations are considered, and then the best integer approximations of them are computed. A new distance transformation is presented, that is easily computed and has a maximal error of about 2%. In the second part of the paper six different distance transformations, both old and new, are used for a few different applications. These applications show both that the choice of distance transformation is important, and that any of the six transformations may be the right choice.
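A small sketch of the classic two-pass chamfer transform this paper develops, using the 3-4 integer weights it proposes (maximal error around 2%); dividing by 3 at the end approximates Euclidean distance. This is a straightforward reimplementation for illustration, not the paper's code, and the function name is ours.

```python
import numpy as np

def chamfer_distance_transform(feature_mask, a=3, b=4):
    """Two-pass 3x3 chamfer distance transform: weight a for edge
    neighbours, b for diagonal neighbours."""
    h, w = feature_mask.shape
    big = 10**9
    d = np.where(feature_mask, 0, big).astype(np.int64)
    # forward pass: top-left to bottom-right
    for y in range(h):
        for x in range(w):
            if y > 0:
                d[y, x] = min(d[y, x], d[y-1, x] + a)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y-1, x-1] + b)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y-1, x+1] + b)
            if x > 0:
                d[y, x] = min(d[y, x], d[y, x-1] + a)
    # backward pass: bottom-right to top-left
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y, x] = min(d[y, x], d[y+1, x] + a)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y+1, x+1] + b)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y+1, x-1] + b)
            if x < w - 1:
                d[y, x] = min(d[y, x], d[y, x+1] + a)
    return d / a  # divide by the edge weight to approximate Euclidean
```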
Comparison of Graph Cuts with Belief Propagation for Stereo, using Identical MRF Parameters Recent stereo algorithms have achieved impressive results by modelling the disparity image as a Markov Random Field (MRF). An important component of an MRF-based approach is the inference algorithm used to find the most likely setting of each node in the MRF. Algorithms have been proposed which use Graph Cuts or Belief Propagation for inference. These stereo algorithms differ in both the inference algorithm used and the formulation of the MRF. It is unknown whether to attribute the responsibility for differences in performance to the MRF or the inference algorithm. We address this through controlled experiments by comparing the Belief Propagation algorithm and the Graph Cuts algorithm on the same MRFs, which have been created for calculating stereo disparities. We find that the labellings produced by the two algorithms are comparable. The solutions produced by Graph Cuts have a lower energy than those produced with Belief Propagation, but this does not necessarily lead to increased performance relative to the ground-truth.
Segmentation Based Disparity Estimation Using Color And Depth Information The well-known cooperative stereo uses a two-dimensional rectangular window for local block matching and a three-dimensional box-shaped volume for a global optimization procedure. In many cases, appropriate selections of these matching regions can provide satisfactory matching results. This paper presents a new method for iteratively modifying the sizes and shapes of matching regions based on color and depth information. This algorithm computes the aggregated matching costs with two ideas. The first idea is to select matching regions based on object boundaries to avoid projective distortion. This provides reliable matching scores as well as prevention of the foreground fattening phenomenon. The second idea is to iteratively modify the segmentation map by merging the regions where the disparities are likely to be the same. Experimental results show that the proposed algorithm provides a more accurate disparity map than other algorithms. Especially, the computed disparity map shows the advantage of our algorithm in disparity discontinuity regions.
A fusion method of data association and virtual detection for minimizing track loss and false track In this paper, we present a method to track multiple moving vehicles using global nearest neighborhood (GNN) data association (DA) based on 2D global position, and virtual detection based on motion tracking. Unlike single-target tracking, multiple-target tracking needs to associate observation-to-track pairs. DA is a process to determine which measurements are used to update each track. We use GNN data association so as not to lose tracks and not to connect incorrect measurements. GNN is a simple, robust, and optimal technique for intelligent vehicle applications with a stereo vision system that can reliably estimate the position of a vehicle. However, an incomplete detection and recognition technique brings low track maintenance due to missed detections and false alarms. A complementary virtual detection method is therefore added to the GNN method. Virtual detection is used to recover a missed detection by motion tracking when the track has been maintained for some period. Motion tracking estimates a virtual region of interest (ROI) for the missed detection using a pyramidal Lucas-Kanade feature tracker. Next, GNN associates the lost tracks and virtual measurements if the measurement exists in the validation gate. Our experimental results show that our tracking method works well in a stereo vision system with incomplete detection and recognition ability.
Real-time stereo matching based on fast belief propagation. In this paper, a global optimum stereo matching algorithm based on improved belief propagation is presented which is demonstrated to generate high quality results while maintaining real-time performance. These results are achieved using a foundation based on the hierarchical belief propagation architecture combined with a novel asymmetric occlusion handling model, as well as parallel graphical processing. Compared to the other real-time methods, the experimental results on Middlebury data show the efficiency of our approach.
Fast unambiguous stereo matching using reliability-based dynamic programming. An efficient unambiguous stereo matching technique is presented in this paper. Our main contribution is to introduce a new reliability measure to dynamic programming approaches in general. For stereo vision application, the reliability of a proposed match on a scanline is defined as the cost difference between the globally best disparity assignment that includes the match and the globally best assignment that does not include the match. A reliability-based dynamic programming algorithm is derived accordingly, which can selectively assign disparities to pixels when the corresponding reliabilities exceed a given threshold. The experimental results show that the new approach can produce dense (> 70 percent of the unoccluded pixels) and reliable (error rate < 0.5 percent) matches efficiently (< 0.2 sec on a 2GHz P4) for the four Middlebury stereo data sets.
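For illustration, a bare-bones scanline dynamic program in the same spirit (absolute-difference matching cost plus a linear disparity-smoothness penalty). The paper's reliability measure would additionally compare, for each match, the best path through it against the best path avoiding it; that part is omitted here and the parameter values are arbitrary.

```python
import numpy as np

def scanline_dp_disparity(left_row, right_row, max_disp=16, lam=2.0):
    """Minimum-cost disparity path along one scanline via DP."""
    left_row = np.asarray(left_row, dtype=float)
    right_row = np.asarray(right_row, dtype=float)
    w, D = len(left_row), max_disp + 1
    cost = np.full((w, D), 1e9)
    for d in range(D):  # unary cost: absolute intensity difference
        cost[d:, d] = np.abs(left_row[d:] - right_row[:w - d])
    disps = np.arange(D)
    smooth = lam * np.abs(disps[:, None] - disps[None, :])
    acc, back = cost.copy(), np.zeros((w, D), dtype=int)
    for x in range(1, w):
        # total[d, d_prev] = accumulated cost of entering disparity d
        total = acc[x - 1][None, :] + smooth
        back[x] = total.argmin(axis=1)
        acc[x] += total.min(axis=1)
    path = np.empty(w, dtype=int)  # backtrack the optimal path
    path[-1] = acc[-1].argmin()
    for x in range(w - 1, 0, -1):
        path[x - 1] = back[x, path[x]]
    return path
```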
Real-time stereo on GPGPU using progressive multi-resolution adaptive windows We introduce a new GPGPU-based real-time dense stereo matching algorithm. The algorithm is based on a progressive multi-resolution pipeline which includes background modeling and dense matching with adaptive windows. For applications in which only moving objects are of interest, this approach effectively reduces the overall computation cost quite significantly, and preserves the high definition details. Running on an off-the-shelf commodity graphics card, our implementation achieves 36 fps stereo matching on 1024x768 stereo video with a fine 256-pixel disparity range. This is effectively the same as 7200M disparity evaluations per second. For scenes where the static background assumption holds, our approach outperforms all published alternative algorithms in terms of speed, by a large margin. We envision a number of potential applications such as real-time motion capture, as well as tracking, recognition and identification of moving objects in multi-camera networks.
Design, Architecture and Control of a Mobile Site-Modeling Robot A distributed, modular, heterogeneous architecture is presented that illustrates an approach to solving and integrating common tasks in mobile robotics, such as path planning, localization, sensor fusion, environmental modeling, and motion control. Experimental results are shown for an autonomous navigation task to confirm the applicability of our approach.
Shape-Based Object Localization for Descriptive Classification Discriminative tasks, including object categorization and detection, are central components of high-level computer vision. However, sometimes we are interested in a finer-grained characterization of the object's properties, such as its pose or articulation. In this paper we develop a probabilistic method (LOOPS) that can learn a shape and appearance model for a particular object class, and be used to consistently localize constituent elements (landmarks) of the object's outline in test images. This localization effectively projects the test image into an alternative representational space that makes it particularly easy to perform various descriptive tasks. We apply our method to a range of object classes in cluttered images and demonstrate its effectiveness in localizing objects and performing descriptive classification, descriptive ranking, and descriptive clustering.
Multiscale Keypoint Analysis based on Complex Wavelets
Video Snapshots: Creating High-Quality Images from Video Clips. We describe a unified framework for generating a single high-quality still image ("snapshot") from a short video clip. Our system allows the user to specify the desired operations for creating the output image, such as super-resolution, noise and blur reduction, and selection of best focus. It also provides a visual summary of activity in the video by incorporating saliency-based objectives in the snapshot formation process. We show examples on a number of different video clips to illustrate the utility and flexibility of our system.
scores (score_0 through score_13): 1.075017, 0.025022, 0.017401, 0.009531, 0.001721, 0.000046, 0.000026, 0.000017, 0.000011, 0.000005, 0.000001, 0, 0, 0
A fast dual method for HIK SVM learning Histograms are used in almost every aspect of computer vision, from visual descriptors to image representations. The Histogram Intersection Kernel (HIK) and SVM classifiers are shown to be very effective in dealing with histograms. This paper presents three contributions concerning HIK SVM classification. First, instead of being limited to integer histograms, we present a proof that HIK is a positive definite kernel for non-negative real-valued feature vectors. This proof reveals some interesting properties of the kernel. Second, we propose ICD, a deterministic and highly scalable dual space HIK SVM solver. ICD is faster than, and has accuracy similar to, general-purpose SVM solvers and two recently proposed stochastic fast HIK SVM training methods. Third, we empirically show that ICD is not sensitive to the C parameter in SVM. ICD achieves high accuracies using its default parameters on many datasets. This is a very attractive property because many vision problems are too large to choose SVM parameters using cross-validation.
Efficient and Effective Visual Codebook Generation Using Additive Kernels Common visual codebook generation methods used in a bag of visual words model, for example, k-means or Gaussian Mixture Model, use the Euclidean distance to cluster features into visual code words. However, most popular visual descriptors are histograms of image measurements. It has been shown that with histogram features, the Histogram Intersection Kernel (HIK) is more effective than the Euclidean distance in supervised learning tasks. In this paper, we demonstrate that HIK can be used in an unsupervised manner to significantly improve the generation of visual codebooks. We propose a histogram kernel k-means algorithm which is easy to implement and runs almost as fast as the standard k-means. The HIK codebooks have consistently higher recognition accuracy over k-means codebooks by 2-4% in several benchmark object and scene recognition data sets. The algorithm is also generalized to arbitrary additive kernels. Its speed is thousands of times faster than a naive implementation of the kernel k-means algorithm. In addition, we propose a one-class SVM formulation to create more effective visual code words. Finally, we show that the standard k-median clustering method can be used for visual codebook generation and can act as a compromise between the HIK / additive kernel and the k-means approaches.
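Both of the HIK papers above build on the histogram intersection kernel itself, which is a one-liner; the sketch below also shows that scikit-learn's SVC accepts it directly as a callable kernel. This is a generic usage pattern on toy data, not the papers' specialized solvers.

```python
import numpy as np
from sklearn.svm import SVC

def hik(X, Y):
    """Histogram Intersection Kernel: K[i, j] = sum_d min(X[i, d], Y[j, d]),
    positive definite for non-negative real-valued histograms."""
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    return np.array([[np.minimum(x, y).sum() for y in Y] for x in X])

# usage: an SVM with HIK on synthetic non-negative "histograms"
rng = np.random.default_rng(0)
X = rng.random((60, 20))
y = rng.integers(0, 2, size=60)
clf = SVC(kernel=hik).fit(X, y)
```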
Combining color-based invariant gradient detector with HoG descriptors for robust image detection in scenes under cast shadows In this work we present a robust detection method for outdoor scenes under cast shadows, using color-based invariant gradients in combination with HoG local features. The method achieves good detection rates in urban scene classification and person detection, outperforming traditional methods based on intensity gradient detectors, which are sensitive to illumination variations but not to cast shadows. The method uses color-based invariant gradients that emphasize material changes and extract relevant, invariant features for detection while neglecting shadow contours. This method allows training and detecting objects and scenes independently of scene illumination and of cast and self shadows. Moreover, it allows training in one shot, that is, when the robot visits the scene for the first time.
Large-scale image categorization with explicit data embedding Kernel machines rely on an implicit mapping of the data such that non-linear classification in the original space corresponds to linear classification in the new space. As kernel machines are difficult to scale to large training sets, it has been proposed to perform an explicit mapping of the data and to learn directly linear classifiers in the new space. In this paper, we consider the problem of learning image categorizers on large image sets (e.g. > 100k images) using bag-of-visual-words (BOV) image representations and Support Vector Machine classifiers. We experiment with three approaches to BOV embedding: 1) kernel PCA (kPCA), 2) a modified kPCA we propose for additive kernels and 3) random projections for shift-invariant kernels. We report experiments on 3 datasets: Caltech101, VOC07 and ImageNet. An important conclusion is that simply square-rooting BOV vectors - which corresponds to an exact mapping for the Bhattacharyya kernel - already leads to large improvements, often quite close to the best results obtained with additive kernels. Another conclusion is that, although it is possible to go beyond additive kernels, the embedding comes at a much higher cost.
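The square-rooting trick mentioned above is simple enough to show in full: L1-normalize each BOV histogram and take the element-wise square root, after which a linear classifier on the embedded vectors corresponds to the Bhattacharyya kernel on the originals. The function name and the synthetic data below are ours.

```python
import numpy as np
from sklearn.svm import LinearSVC

def bhattacharyya_embedding(bov):
    """L1-normalize each bag-of-visual-words histogram and square-root it;
    the linear kernel on the result equals the Bhattacharyya kernel."""
    bov = np.asarray(bov, dtype=float)
    bov /= np.maximum(bov.sum(axis=1, keepdims=True), 1e-12)
    return np.sqrt(bov)

# linear SVM on the embedded features (synthetic word counts)
rng = np.random.default_rng(0)
X = rng.integers(0, 10, size=(200, 50))
y = rng.integers(0, 2, size=200)
clf = LinearSVC().fit(bhattacharyya_embedding(X), y)
```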
Learning and using taxonomies for fast visual categorization The computational complexity of current visual categorization algorithms scales linearly at best with the number of categories. The goal of simultaneously classifying N_cat = 10^4 to 10^5 visual categories requires sub-linear classification costs. We explore algorithms for automatically building classification trees which have, in principle, log N_cat complexity. We find that a greedy algorithm that recursively splits the set of categories into the two minimally confused subsets achieves 5-20 fold speedups at a small cost in classification performance. Our approach is independent of the specific classification algorithm used. A welcome by-product of our algorithm is a very reasonable taxonomy of the Caltech-256 dataset.
Kernel Codebooks for Scene Categorization This paper introduces a method for scene categorization by modeling ambiguity in the popular codebook approach. The codebook approach describes an image as a bag of discrete visual codewords, where the frequency distributions of these words are used for image categorization. There are two drawbacks to the traditional codebook model: codeword uncertainty and codeword plausibility. Both of these drawbacks stem from the hard assignment of visual features to a single codeword. We show that allowing a degree of ambiguity in assigning codewords improves categorization performance for three state-of-the-art datasets.
Performance evaluation of local colour invariants In this paper, we compare local colour descriptors to grey-value descriptors. We adopt the evaluation framework of Mikolajczyk and Schmid. We modify the framework in several ways. We decompose the evaluation framework to the level of local grey-value invariants on which common region descriptors are based. We compare the discriminative power and invariance of grey-value invariants to that of colour invariants. In addition, we evaluate the invariance of colour descriptors to photometric events such as shadow and highlights. We measure the performance over an extended range of common recording conditions including significant photometric variation. We demonstrate the intensity-normalized colour invariants and the shadow invariants to be highly distinctive, while the shadow invariants are more robust to both changes of the illumination colour, and to changes of the shading and shadows. Overall, the shadow invariants perform best: they are most robust to various imaging conditions while maintaining discriminative power. When plugged into the SIFT descriptor, they show to outperform other methods that have combined colour information and SIFT. The usefulness of C-colour-SIFT for realistic computer vision applications is illustrated for the classification of object categories from the VOC challenge, for which a significant improvement is reported.
Estimating the number of people in crowded scenes by MID based foreground segmentation and head-shoulder detection This paper proposes a novel method to address the problem of estimating the number of people in surveillance scenes with people gathering and waiting. The proposed method combines a MID (mosaic image difference) based foreground segmentation algorithm and a HOG (histograms of oriented gradients) based head-shoulder detection algorithm to provide an accurate estimation of people counts in the observed area. In our framework, the MID-based foreground segmentation module provides active areas for the head-shoulder detection module to detect heads and count the number of people. Numerous experiments are conducted and convincing results demonstrate the effectiveness of our method.
Scene recognition on the semantic manifold A new architecture, denoted spatial pyramid matching on the semantic manifold (SPMSM), is proposed for scene recognition. SPMSM is based on a recent image representation on a semantic probability simplex, which is now augmented with a rough encoding of spatial information. A connection between the semantic simplex and a Riemannian manifold is established, so as to equip the architecture with a similarity measure that respects the manifold structure of the semantic space. It is then argued that the closed-form geodesic distance between two manifold points is a natural measure of similarity between images. This leads to a conditionally positive definite kernel that can be used with any SVM classifier. An approximation of the geodesic distance reveals connections to the well-known Bhattacharyya kernel, and is explored to derive an explicit feature embedding for this kernel, by simple square-rooting. This enables a low-complexity SVM implementation, using a linear SVM on the embedded features. Several experiments are reported, comparing SPMSM to state-of-the-art recognition methods. SPMSM is shown to achieve the best recognition rates in the literature for two large datasets (MIT Indoor and SUN) and rates equivalent or superior to the state-of-the-art on a number of smaller datasets. In all cases, the resulting SVM also has much smaller dimensionality and requires much fewer support vectors than previous classifiers. This guarantees much smaller complexity and suggests improved generalization beyond the datasets considered.
Good Practice in Large-Scale Learning for Image Classification We benchmark several SVM objective functions for large-scale image classification. We consider one-versus-rest, multiclass, ranking, and weighted approximate ranking SVMs. A comparison of online and batch methods for optimizing the objectives shows that online methods perform as well as batch methods in terms of classification accuracy, but with a significant gain in training speed. Using stochastic gradient descent, we can scale the training to millions of images and thousands of classes. Our experimental evaluation shows that ranking-based algorithms do not outperform the one-versus-rest strategy when a large number of training examples are used. Furthermore, the gap in accuracy between the different algorithms shrinks as the dimension of the features increases. We also show that learning through cross-validation the optimal rebalancing of positive and negative examples can result in a significant improvement for the one-versus-rest strategy. Finally, early stopping can be used as an effective regularization strategy when training with online algorithms. Following these "good practices," we were able to improve the state of the art on a large subset of 10K classes and 9M images of ImageNet from 16.7 percent Top-1 accuracy to 19.1 percent.
Learning visual object definitions by observing human activities Humanoid robots, while moving in our everyday environments, necessarily need to recognize objects. Providing robust object definitions for every single object in our environments is challenging and impossible in practice. In this work, we build upon the fact that objects have different uses and humanoid robots, while co-existing with humans, should have the ability of observing humans using the different objects and learn the corresponding object definitions. We present an object recognition algorithm, FOCUS, for Finding Object Classifications through Use and Structure. FOCUS learns structural properties (visual features) of objects by knowing first the object's affordance properties and observing humans interacting with that object with known activities. FOCUS combines an activity recognizer, flexible and robust to any environment, which captures how an object is used with a low-level visual feature processor. The relevant features are then associated with an object definition which is then used for object recognition. The strength of the method relies on the fact that we can define multiple aspects of an object model, i.e., structure and use, that are individually robust but insufficient to define the object, but can do so jointly. We present the FOCUS approach in detail, which we have demonstrated in a variety of activities, objects, and environments. We show illustrating empirical evidence of the efficacy of the method.
Autonomous visual navigation of a mobile robot using a human-guided experience Information on the surrounding environment is necessary for a robot to move autonomously. Many previous robots use a given map and landmarks. Making such a map is, however, tedious work for the user. Therefore this paper proposes a navigation strategy which requires minimum user assistance. In this strategy, the user first guides a mobile robot to a destination by remote control. During this movement, the robot observes the surrounding environment to make a map. Once the map is generated, the robot computes and follows the shortest path to the destination autonomously. To realize this navigation strategy, we develop: (1) a method of map generation by integrating multiple observation results considering the uncertainties in observation and motion, (2) a fast robot localization method which does not use explicit feature correspondence, and (3) a method of selecting effective viewing directions using the history of observation during the guided movement. Experimental results using a real robot show the feasibility of the proposed strategy.
Recognizing complex events using large margin joint low-level event model In this paper we address the challenging problem of complex event recognition by using low-level events. In this problem, each complex event is captured by a long video in which several low-level events happen. The dataset contains several videos, and due to the large number of videos and the complexity of the events, the available annotation for the low-level events is very noisy, which makes the detection task even more challenging. To tackle these problems we model the joint relationship between the low-level events in a graph, where we consider a node for each low-level event, and whenever there is a correlation between two low-level events the graph has an edge between the corresponding nodes. In addition, to decrease the effect of weak and/or irrelevant low-level event detectors, we consider the presence/absence of low-level events as hidden variables and learn a discriminative model using a latent SVM formulation. Using our learned model for complex event recognition, we can also apply it to improving the detection of the low-level events in video clips, which enables us to discover a conceptual description of the video. Thus our model can perform complex event recognition and explain a video in terms of low-level events in a single framework. We have evaluated our proposed method on the most challenging multimedia event detection dataset. The experimental results reveal that the proposed method performs well compared to the baseline method. Further, our conceptual-description results show that our model is learned well enough to handle the noisy annotation and to surpass the low-level event detectors that are trained directly on the raw features.
Hybrid social media network Analysis and recommendation of multimedia information can be greatly improved if we know the interactions between the content, user, and concept, which can be easily observed from the social media networks. However, there are many heterogeneous entities and relations in such networks, making it difficult to fully represent and exploit the diverse array of information. In this paper, we develop a hybrid social media network, through which the heterogeneous entities and relations are seamlessly integrated and a joint inference procedure across the heterogeneous entities and relations can be developed. The network can be used to generate personalized information recommendation in response to specific targets of interests, e.g., personalized multimedia albums, target advertisement and friend/topic recommendation. In the proposed network, each node denotes an entity and the multiple edges between nodes characterize the diverse relations between the entities (e.g., friends, similar contents, related concepts, favorites, tags, etc). Given a query from a user indicating his/her information needs, a propagation over the hybrid social media network is employed to infer the utility scores of all the entities in the network while learning the edge selection function to activate only a sparse subset of relevant edges, such that the query information can be best propagated along the activated paths. Driven by the intuition that much redundancy exists among the diverse relations, we have developed a robust optimization framework based on several sparsity principles. We show significant performance gains of the proposed method over the state of the art in multimedia retrieval and recommendation using data crawled from social media sites. To the best of our knowledge, this is the first model supporting not only aggregation but also judicious selection of heterogeneous relations in the social media networks.
scores (score_0 through score_13): 1.028877, 0.030303, 0.018182, 0.005062, 0.003306, 0.001222, 0.000319, 0.000107, 0.000016, 0.000002, 0, 0, 0, 0
Minimum correspondence sets for improving large-scale augmented paper Augmented Paper (AP) is an important area of Augmented Reality (AR). Many AP systems rely on visual features for paper document identification. Although promising, these systems can hardly support large sets of documents (i.e. one million documents) because of the high memory and time cost of handling high-dimensional features. On the other hand, general large-scale image identification techniques are not well customized to AP, costing unnecessarily more resources to achieve the identification accuracy required by AP. To address this mismatch between AP and image identification techniques, we propose a novel large-scale image identification technique well geared to AP. At its core is a geometric verification scheme based on Minimum visual-word Correspondence Sets (MICSs). A MICS is a set of visual word (i.e. quantized visual feature) correspondences, each of which contains a minimum number of correspondences that are sufficient for deriving a transformation hypothesis between a captured document image and an indexed image. Our method selects appropriate MICSs to vote in a Hough space of transformation parameters, and uses a robust dense region detection algorithm to locate the possible transformation models in the space. The models are then utilized to verify all the visual word correspondences to precisely identify the matching indexed image. By taking advantage of unique geometric constraints in AP, our method can significantly reduce the time and memory cost while achieving high accuracy. As shown in evaluations with two AP systems called FACT and EMM, over a dataset with more than 1M images, our method achieves 100% identification accuracy and 0.67% registration error for FACT. For EMM, our method outperforms the state-of-the-art image identification approach by achieving a 4% improvement in detection rate and almost perfect precision, while saving 40% and 70% of the memory and time cost, respectively.
A tool for authoring unambiguous links from printed content to digital media Embedded Media Markers (EMMs) are nearly transparent icons printed on paper documents that link to associated digital media. By using the document content for retrieval, EMMs are less visually intrusive than barcodes and other glyphs while still providing an indication for the presence of links. An initial implementation demonstrated good overall performance but exposed difficulties in guaranteeing the creation of unambiguous EMMs. We developed an EMM authoring tool that supports the interactive authoring of EMMs via visualizations that show the user which areas on a page may cause recognition errors and automatic feedback that moves the authored EMM away from those areas. The authoring tool and the techniques it relies on have been applied to corpora with different visual characteristics to explore the generality of our approach.
Embedded media barcode links: optimally blended barcode overlay on paper for linking to associated media Embedded Media Barcode Links, or simply EMBLs, are optimally blended iconic barcode marks, printed on paper documents, that signify the existence of multimedia associated with that part of the document content (Figure 1). EMBLs are used for multimedia retrieval with a camera phone. Users take a picture of an EMBL-signified document patch using a cell phone, and the multimedia associated with the EMBL-signified document location is displayed on the phone. Unlike a traditional barcode, which requires an exclusive space, the EMBL construction algorithm acts as an agent to negotiate with a barcode reader for maximum user and document benefits. Because of this negotiation, EMBLs are optimally blended with content and thus interfere less with the original document layout and can be moved closer to a media-associated location. Retrieval of media associated with an EMBL is based on barcode identification of a captured EMBL. Therefore, EMBL retains nearly all barcode identification advantages, such as accuracy, speed, and scalability. Moreover, EMBL takes advantage of users' knowledge of traditional barcodes. Unlike an Embedded Media Marker (EMM), which requires underlying document features for marker identification, EMBL has no requirement on the underlying features. This paper discusses the procedures for EMBL construction and optimization. It also gives experimental results that strongly support the EMBL construction and optimization ideas.
Embedded media markers: marks on paper that signify associated media Embedded Media Markers, or simply EMMs, are nearly transparent iconic marks printed on paper documents that signify the existence of media associated with that part of the document. EMMs also guide users' camera operations for media retrieval. Users take a picture of an EMM-signified document patch using a cell phone, and the media associated with the EMM-signified document location is displayed on the phone. Unlike bar codes, EMMs are nearly transparent and thus do not interfere with the document appearance. Retrieval of media associated with an EMM is based on image local features of the captured EMM-signified document patch. This paper describes a technique for semi-automatically placing an EMM at a location in a document, in such a way that it encompasses sufficient identification features with minimal disturbance to the original document.
Distinctive Image Features from Scale-Invariant Keypoints This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
A flexible new technique for camera calibration We propose a flexible technique to easily calibrate a camera. It only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one more step from laboratory environments to real world use.
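As a rough, self-contained sketch of plane-based calibration in the spirit of the technique above, the snippet below uses OpenCV's implementation: it synthesizes three views of a planar grid with a known camera (standing in for detected chessboard corners) and recovers the intrinsics from the plane-image correspondences. The grid size, intrinsics and poses are arbitrary choices for the demo.

```python
import cv2
import numpy as np

K_true = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
grid = np.array([[x, y, 0] for y in range(6) for x in range(8)], np.float32) * 0.03

obj_pts, img_pts = [], []
for rx in (-0.3, 0.0, 0.3):                    # three plane orientations
    rvec = np.array([rx, 0.2, 0.0])            # rotation as a Rodrigues vector
    tvec = np.array([-0.1, -0.1, 0.5])
    proj, _ = cv2.projectPoints(grid, rvec, tvec, K_true, None)
    obj_pts.append(grid)
    img_pts.append(proj.astype(np.float32))

# Closed-form solution followed by nonlinear refinement, as in the procedure above.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, (640, 480), None, None)
print(K.round(1))                              # should be close to K_true
```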
A conference key distribution system Encryption is used in a communication system to safeguard information in the transmitted messages from anyone other than the intended receiver(s). To perform the encryption and decryption, the transmitter and receiver(s) ought to have matching encryption and decryption keys. A clever way to generate these keys is to use the public key distribution system invented by Diffie and Hellman. That system, however, admits only one pair of communication stations to share a particular pair of encryption and decryption keys. The public key distribution system is generalized to a conference key distribution system (CKDS) which admits any group of stations to share the same encryption and decryption keys. The analysis reveals two important aspects of any conference key distribution system. One is the multitap resistance, which is a measure of the information security in the communication system. The other is the separation of the problem into two parts: the choice of a suitable symmetric function of the private keys and the choice of a suitable one-way mapping thereof. We have also shown how to use CKDS in connection with public key ciphers and an authorization scheme.
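To make the ring-based flavor of conference key agreement concrete, here is a toy Python sketch in the spirit of the system described above: each station passes partial exponentiations around a ring until every station holds g^(r1·r2·...·rN) mod p. The group parameters are illustrative only (a Mersenne prime and an unverified generator); this is a didactic sketch, not a secure implementation.

```python
import random

p = 2**127 - 1        # a Mersenne prime; real systems use vetted group parameters
g = 3                 # toy generator, not checked to generate a large subgroup
N = 4                 # number of conference stations

secrets = [random.randrange(2, p - 1) for _ in range(N)]

# Round 0: station i sends g^{r_i} to its ring successor.
msgs = [pow(g, r, p) for r in secrets]

# Rounds 1..N-2: each station raises the value received from its predecessor
# to its own secret and forwards it around the ring.
for _ in range(N - 2):
    msgs = [pow(msgs[(i - 1) % N], secrets[i], p) for i in range(N)]

# Final hop: applying one's own secret yields g^{r_0 r_1 ... r_{N-1}} mod p,
# identical at every station because exponent multiplication commutes.
keys = [pow(msgs[(i - 1) % N], secrets[i], p) for i in range(N)]
assert len(set(keys)) == 1
print("shared conference key established")
```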
Human recognition using 3D ear images. This paper proposes an ear recognition technique which makes use of 3D along with co-registered 2D ear images. It presents a two-step matching technique to compare two 3D ears. In the first step, it computes salient 3D data points from 3D ear images with the help of local 2D feature points of co-registered 2D ear images. Subsequently, it uses these salient 3D points to coarsely align 3D ear images. In the second step, it performs final matching of coarsely aligned 3D ear images by using a Generalized Procrustes Analysis (GPA) and Iterative Closest Point (ICP) based matching technique (GPA-ICP). The proposed technique has been tested on 1780 images of 404 subjects (two or more images per subject) of University of Notre Dame public database-Collection J2 (UND-J2) which consists of co-registered 2D and 3D ear images with scale and pose variations. It has achieved a verification accuracy of 98.30% with an equal error rate of 1.8%.
Object tracking: A survey The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, and identify new trends. Object tracking, in general, is a challenging problem. Difficulties in tracking objects can arise due to abrupt object motion, changing appearance patterns of both the object and the scene, nonrigid object structures, object-to-object and object-to-scene occlusions, and camera motion. Tracking is usually performed in the context of higher-level applications that require the location and/or shape of the object in every frame. Typically, assumptions are made to constrain the tracking problem in the context of a particular application. In this survey, we categorize the tracking methods on the basis of the object and motion representations used, provide detailed descriptions of representative methods in each category, and examine their pros and cons. Moreover, we discuss the important issues related to tracking including the use of appropriate image features, selection of motion models, and detection of objects.
ITEMS: intelligent travel experience management system An intelligent travel experience management system, abbreviated as ITEMS, is proposed to help tourists organize and present the digital travel contents in an automatic and efficient manner. Readily available metadata are adopted to reduce the overhead of user intervention and manual annotation. Robust image similarity metrics are also incorporated to utilize the powerful searching capability of WWW search engines. The proposed system automatically identifies the inherent geo-information of personal media, and accordingly integrates media with map and text-based schedule to facilitate travel experience management and presentation. We show several prototype systems in two application scenarios and demonstrate the effectiveness of the proposed methodology.
High-quality passive facial performance capture using anchor frames We present a new technique for passive and markerless facial performance capture based on anchor frames. Our method starts with high resolution per-frame geometry acquisition using state-of-the-art stereo reconstruction, and proceeds to establish a single triangle mesh that is propagated through the entire performance. Leveraging the fact that facial performances often contain repetitive subsequences, we identify anchor frames as those which contain similar facial expressions to a manually chosen reference expression. Anchor frames are automatically computed over one or even multiple performances. We introduce a robust image-space tracking method that computes pixel matches directly from the reference frame to all anchor frames, and thereby to the remaining frames in the sequence via sequential matching. This allows us to propagate one reconstructed frame to an entire sequence in parallel, in contrast to previous sequential methods. Our anchored reconstruction approach also limits tracker drift and robustly handles occlusions and motion blur. The parallel tracking and mesh propagation offer low computation times. Our technique will even automatically match anchor frames across different sequences captured on different occasions, propagating a single mesh to all performances.
Minimal correlation classification When the description of the visual data is rich and consists of many features, a classification based on a single model can often be enhanced using an ensemble of models. We suggest a new ensemble learning method that encourages the base classifiers to learn different aspects of the data. Initially, a binary classification algorithm such as Support Vector Machine is applied and its confidence values on the training set are considered. Following the idea that ensemble methods work best when the classification errors of the base classifiers are not related, we serially learn additional classifiers whose output confidences on the training examples are minimally correlated. Finally, these uncorrelated classifiers are assembled using the GentleBoost algorithm. Presented experiments in various visual recognition domains demonstrate the effectiveness of the method.
Integrating Representative and Discriminative Models for Object Category Detection We propose a novel approach for shape-based segmentation based on a specially designed level set function format. This format permits us to better control the process of object registration which is an important part in the shapebased segmentation framework. ...
Boosting k-NN for Categorization of Natural Scenes The k-nearest neighbors (k-NN) classification rule has proven extremely successful in countless computer vision applications. For example, image categorization often relies on uniform voting among the nearest prototypes in the space of descriptors. In spite of its good generalization properties and its natural extension to multi-class problems, the classic k-NN rule suffers from high variance when dealing with sparse prototype datasets in high dimensions. A few techniques have been proposed to improve k-NN classification, which rely on either deforming the nearest-neighborhood relationship by learning a distance function or modifying the input space by means of subspace selection. From the computational standpoint, many methods have been proposed for speeding up nearest neighbor retrieval, both for multidimensional vector spaces and for non-vector spaces induced by computationally expensive distance measures. In this paper, we propose a novel boosting approach for generalizing the k-NN rule, providing a new k-NN boosting algorithm, called UNN (Universal Nearest Neighbors), for the induction of leveraged k-NN. We emphasize that UNN is a formal boosting algorithm in the original boosting terminology. Our approach consists of redefining the voting rule as a strong classifier that linearly combines predictions from the k closest prototypes. The k nearest neighbor examples therefore act as weak classifiers, and their weights, called leveraging coefficients, are learned by UNN so as to minimize a surrogate risk that upper-bounds the empirical misclassification rate over the training data. These leveraging coefficients allow us to distinguish the most relevant prototypes for a given class. Indeed, UNN does not affect the k-nearest-neighborhood relationship, but rather acts on top of k-NN search. We carried out experiments comparing UNN to k-NN, support vector machines (SVM) and AdaBoost on categorization of natural scenes, using state-of-the-art image descriptors (Gist and Bag-of-Features) on real images from Oliva and Torralba (Int. J. Comput. Vis. 42(3):145–175, 2001), Fei-Fei and Perona (IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pp. 524–531, 2005), and Xiao et al. (IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3485–3492, 2010). The results display the ability of UNN to compete with or beat the other contenders, while achieving comparatively small training and testing times.
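The leveraged voting rule at the heart of the abstract above is easy to state in code. The sketch below applies per-prototype leveraging coefficients when the k nearest prototypes vote; the coefficients here are placeholders (uniform weights reduce to classic k-NN), whereas UNN would learn them by boosting to minimize the surrogate risk.

```python
import numpy as np

def leveraged_knn_predict(x, prototypes, labels, alphas, n_classes, k=5):
    """prototypes: (n, d); labels: (n,); alphas: (n,) leveraging coefficients."""
    d2 = np.sum((prototypes - x) ** 2, axis=1)   # squared Euclidean distances
    nn = np.argsort(d2)[:k]                      # indices of the k closest prototypes
    scores = np.zeros(n_classes)
    for j in nn:                                 # weighted (not uniform) voting
        scores[labels[j]] += alphas[j]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] > 0).astype(int)
alpha = np.ones(100)                             # uniform weights = classic k-NN
print(leveraged_knn_predict(np.array([1.0, 0.0]), X, y, alpha, n_classes=2))
```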
1.2
0.2
0.1
0.022222
0.000017
0
0
0
0
0
0
0
0
0
Mobile product image search by automatic query object extraction Mobile product image search aims at identifying a product, or retrieving similar products from a database, based on a photo captured with a mobile phone camera. Application of traditional image retrieval methods (e.g. bag-of-words) to mobile visual search has been shown to be effective in identifying duplicate/near-duplicate photos, and near-planar, textured objects such as landmarks and book/CD covers. However, retrieving more general product categories is still a challenging research problem due to variations in viewpoint, illumination, scale, the existence of blur and background clutter in the query image, etc. In this paper, we propose a new approach that can simultaneously extract the product instance from the query, identify the instance, and retrieve visually similar product images. Based on the observation that good query segmentation helps improve retrieval accuracy and good search results provide good priors for segmentation, we formulate our approach in an iterative scheme to improve both query segmentation and retrieval accuracy. To this end, a weighted object mask voting algorithm is proposed based on a spatially-constrained model, which allows robust localization and segmentation of the query object, and achieves significantly better retrieval accuracy than previous methods. We show the effectiveness of our approach by applying it to a large, real-world product image dataset and a new object category dataset.
Recognizing Products: A Per-Exemplar Multi-Label Image Classification Approach Large-scale instance-level image retrieval aims at retrieving specific instances of objects or scenes. Simultaneously retrieving multiple objects in a test image adds to the difficulty of the problem, especially if the objects are visually similar. This paper presents an efficient approach for per-exemplar multi-label image classification, which targets the recognition and localization of products in retail store images. We achieve runtime efficiency through the use of discriminative random forests, deformable dense pixel matching and genetic algorithm optimization. Cross-dataset recognition is performed, where our training images are taken in ideal conditions with only one single training image per product label, while the evaluation set is taken using a mobile phone in real-life scenarios in completely different conditions. In addition, we provide a large novel dataset and labeling tools for products image search, to motivate further research efforts on multi-label retail products image classification. The proposed approach achieves promising results in terms of both accuracy and runtime efficiency on 680 annotated images of our dataset, and 885 test images of GroZi-120 dataset. We make our dataset of 8350 different product images and the 680 test images from retail stores with complete annotations available to the wider community.
Scalable Face Image Retrieval with Identity-Based Quantization and Multireference Reranking State-of-the-art image retrieval systems achieve scalability by using a bag-of-words representation and textual retrieval methods, but their performance degrades quickly in the face image domain, mainly because they produce visual words with low discriminative power for face images and ignore the special properties of faces. The leading features for face recognition can achieve good retrieval performance, but these features are not suitable for inverted indexing as they are high-dimensional and global and thus not scalable in either computational or storage cost. In this paper, we aim to build a scalable face image retrieval system. For this purpose, we develop a new scalable face representation using both local and global features. In the indexing stage, we exploit special properties of faces to design new component-based local features, which are subsequently quantized into visual words using a novel identity-based quantization scheme. We also use a very small Hamming signature (40 bytes) to encode the discriminative global feature for each face. In the retrieval stage, candidate images are first retrieved from the inverted index of visual words. We then use a new multireference distance to rerank the candidate images using the Hamming signature. On a one million face database, we show that our local features and global Hamming signatures are complementary: the inverted index based on local features provides candidate images with good recall, while the multireference reranking with global Hamming signatures leads to good precision. As a result, our system is not only scalable but also outperforms the linear scan retrieval system using the state-of-the-art face recognition feature in terms of quality.
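The reranking stage described above boils down to comparing compact binary signatures by Hamming distance. A minimal sketch, assuming the 40-byte signatures have already been extracted elsewhere:

```python
import os

def hamming(a: bytes, b: bytes) -> int:
    # Popcount of the XOR of two equal-length binary signatures.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def rerank(query_sig: bytes, candidates: dict) -> list:
    """candidates: image id -> 40-byte signature; returns ids sorted by distance."""
    return sorted(candidates, key=lambda i: hamming(query_sig, candidates[i]))

q = os.urandom(40)                               # stand-in for a real signature
db = {i: os.urandom(40) for i in range(5)}       # candidates from the inverted index
print(rerank(q, db))
```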
Robust Text Detection In Natural Images With Edge-Enhanced Maximally Stable Extremal Regions Detecting text in natural images is an important prerequisite for many content-based image analysis tasks. In this paper, we propose a novel text detection algorithm which employs edge-enhanced Maximally Stable Extremal Regions (MSERs) as basic letter candidates. These candidates are then filtered using geometric and stroke-width information to exclude non-text objects. Letters are paired to identify text lines, which are subsequently separated into words. We evaluate our system using the ICDAR competition dataset and our mobile document database. The experimental results demonstrate the excellent performance of the proposed method.
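A rough sketch of the letter-candidate stage from the abstract above, using OpenCV's MSER detector (API as in recent OpenCV releases) on a synthetic image; the geometric thresholds are illustrative guesses, and the edge-enhancement and stroke-width filters are omitted.

```python
import cv2
import numpy as np

img = np.full((120, 320), 255, np.uint8)
cv2.putText(img, "TEXT", (20, 80), cv2.FONT_HERSHEY_SIMPLEX, 2, 0, 5)

mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(img)

letters = []
for (x, y, w, h) in bboxes:
    aspect = w / float(h)
    if 0.1 < aspect < 3.0 and 8 < h < 100:       # crude geometric filter
        letters.append((x, y, w, h))
print(len(letters), "letter candidates")
```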
Associative Hierarchical Random Fields This paper makes two contributions: the first is the proposal of a new model, the associative hierarchical random field (AHRF), together with a novel algorithm for its optimization; the second is the application of this model to the problem of semantic segmentation. Most methods for semantic segmentation are formulated as a labeling problem for variables that might correspond to either pixels or segments such as superpixels. It is well known that the generation of superpixel segmentations is not unique. This has motivated many researchers to use multiple superpixel segmentations for problems such as semantic segmentation or single view reconstruction. These superpixels have not yet been combined in a principled manner; this is a difficult problem, as they may overlap or be nested in such a way that the segmentations form a segmentation tree. Our new hierarchical random field model allows information from all of the multiple segmentations to contribute to a global energy. MAP inference in this model can be performed efficiently using powerful graph cut based move making algorithms. Our framework generalizes much of the previous work based on pixels or segments, and the resulting labelings can be viewed both as a detailed segmentation at the pixel level, or at the other extreme, as a segment selector that pieces together a solution like a jigsaw, selecting the best segments from different segmentations as pieces. We evaluate its performance on some of the most challenging data sets for object class segmentation, and show that this ability to perform inference using multiple overlapping segmentations leads to state-of-the-art results.
Unsupervised discovery of co-occurrence in sparse high dimensional data An efficient min-Hash based algorithm for discovery of dependencies in sparse high-dimensional data is presented. The dependencies are represented by sets of features co-occurring with high probability and are called co-ocsets. Sparse high-dimensional descriptors, such as bag-of-words, have proven very effective in the domain of image retrieval. To maintain high efficiency even for very large data collections, features are assumed independent. We show experimentally that co-ocsets are not rare, i.e. the independence assumption is often violated, and that they may ruin retrieval performance if present in the query image. Two methods for managing co-ocsets in such cases are proposed. Both methods significantly outperform the state-of-the-art in image retrieval; one is also significantly faster.
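The min-Hash machinery the abstract above relies on can be illustrated in a few lines: estimate the Jaccard similarity of the occurrence sets of two visual words from the fraction of agreeing min-hashes, so that frequently co-occurring word pairs (candidate co-ocsets) can be found without exhaustive pairwise set intersection. Hash parameters and data are synthetic.

```python
import random

random.seed(0)
H = 64                                            # number of min-hash functions
PRIME = 2**31 - 1
hashers = [(random.randrange(1, PRIME), random.randrange(PRIME)) for _ in range(H)]

def minhash(s):
    return [min((a * x + b) % PRIME for x in s) for a, b in hashers]

word_a = set(range(0, 800))       # ids of images containing visual word A
word_b = set(range(200, 1000))    # ids of images containing visual word B

sig_a, sig_b = minhash(word_a), minhash(word_b)
est = sum(x == y for x, y in zip(sig_a, sig_b)) / H
true = len(word_a & word_b) / len(word_a | word_b)
print(f"estimated Jaccard {est:.2f} vs true {true:.2f}")
```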
Unsupervised discovery of mid-level discriminative patches The goal of this paper is to discover a set of discriminative patches which can serve as a fully unsupervised mid-level visual representation. The desired patches need to satisfy two requirements: 1) to be representative, they need to occur frequently enough in the visual world; 2) to be discriminative, they need to be different enough from the rest of the visual world. The patches could correspond to parts, objects, "visual phrases", etc. but are not restricted to be any one of them. We pose this as an unsupervised discriminative clustering problem on a huge dataset of image patches. We use an iterative procedure which alternates between clustering and training discriminative classifiers, while applying careful cross-validation at each step to prevent overfitting. The paper experimentally demonstrates the effectiveness of discriminative patches as an unsupervised mid-level visual representation, suggesting that it could be used in place of visual words for many tasks. Furthermore, discriminative patches can also be used in a supervised regime, such as scene classification, where they demonstrate state-of-the-art performance on the MIT Indoor-67 dataset.
Learning Where To Classify In Multi-View Semantic Segmentation There is an increasing interest in semantically annotated 3D models, e.g. of cities. The typical approaches start with the semantic labelling of all the images used for the 3D model. Such labelling tends to be very time-consuming, though. The inherent redundancy among the overlapping images calls for more efficient solutions. This paper proposes an alternative approach that exploits the geometry of a 3D mesh model obtained from multi-view reconstruction. Instead of clustering similar views, we predict the best view before the actual labelling. For this, we find the single image part that best supports the correct semantic labelling of each face of the underlying 3D mesh. Moreover, our single-image approach may be surprising because it tends to increase the accuracy of the model labelling when compared to approaches that fuse the labels from multiple images. As a matter of fact, we even go a step further, and only explicitly label a subset of faces (e.g. 10%), to subsequently fill in the labels of the remaining faces. This leads to a further reduction of computation time, again combined with a gain in accuracy. Compared to a process that starts from the semantic labelling of the images, our method to semantically label 3D models yields accelerations of about 2 orders of magnitude. We tested our multi-view semantic labelling on a variety of street scenes.
Latent Semantic Minimal Hashing for Image Retrieval. Hashing-based similarity search is an important technique for large-scale query-by-example image retrieval systems, since it provides fast search with computation and memory efficiency. However, it is challenging to design compact codes that represent the original features with good performance. Recently, many unsupervised hashing methods have been proposed that focus on preserving the geometric structure similarity of the data in the original feature space, but they have not yet fully refined image features or explored the latent semantic feature embedding in the data simultaneously. To address this problem, in this paper a novel joint binary code learning method is proposed that combines image features with latent semantic features under minimum encoding loss, referred to as latent semantic minimal hashing. The latent semantic feature is learned via matrix decomposition to refine the original feature, thereby making the learned feature more discriminative. Moreover, a minimum encoding loss is combined with the latent semantic feature learning process simultaneously, so as to guarantee that the obtained binary codes are discriminative as well. Extensive experiments on several well-known large databases demonstrate that the proposed method outperforms most state-of-the-art hashing methods.
Distinctive Image Features from Scale-Invariant Keypoints This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
A Multi-State Constraint Kalman Filter For Vision-Aided Inertial Navigation In this paper, we present an Extended Kalman Filter (EKF)-based algorithm for real-time vision-aided inertial navigation. The primary contribution of this work is the derivation of a measurement model that is able to express the geometric constraints that arise when a static feature is observed from multiple camera poses. This measurement model does not require including the 3D feature position in the state vector of the EKF and is optimal, up to linearization errors. The vision-aided inertial navigation algorithm we propose has computational complexity only linear in the number of features, and is capable of high-precision pose estimation in large-scale real-world environments. The performance of the algorithm is demonstrated in extensive experimental results, involving a camera/IMU system localizing within an urban area.
A new approach for fingerprint verification based on wide baseline matching using local interest points and descriptors This article proposes a new approach to automatic fingerprint verification that is based not on the standard ridge-minutiae framework but on a general-purpose wide-baseline matching methodology. Instead of detecting and matching the standard structural features, in the proposed approach local interest points are detected in the fingerprint, local descriptors are then computed in the neighborhood of these points, and afterwards these descriptors are compared using local and global matching procedures. The final verification is carried out by a Bayes classifier. It is important to remark that the local interest points do not correspond to minutiae or singular points, but to local maxima in a scale-space representation of the fingerprint images. The proposed system has four variants that are validated using the FVC2004 test protocol. The best variant, which uses an enhanced fingerprint image, SDoG interest points and SIFT descriptors, achieves an FRR of 20.9% and an FAR of 5.7% on the FVC2004-DB1 test database, without using any minutia or singular-point information.
Panoramic Depth Imaging: Single Standard Camera Approach In this paper we present a panoramic depth imaging system. The system is mosaic-based, which means that we use a single rotating camera and assemble the captured images into a mosaic. Due to an offset of the camera's optical center from the rotational center of the system, we are able to capture the motion parallax effect, which enables stereo reconstruction. The camera rotates on a circular path with a step defined by an angle equivalent to one pixel column of the captured image. The equation for depth estimation can be easily derived from the system geometry. To find the corresponding points on a stereo pair of panoramic images, the epipolar geometry needs to be determined. It can be shown that the epipolar geometry is very simple if the reconstruction is based on a symmetric pair of stereo panoramic images. We get a symmetric pair of stereo panoramic images when we take symmetric pixel columns on the left and on the right side of the captured image's center column. The epipolar lines of a symmetric pair of panoramic images are image rows. The search space on the epipolar line can be additionally constrained. The focus of the paper is mainly on the system analysis. Results of the stereo reconstruction procedure and the quality evaluation of the generated depth images are quite promising. The system performs well for reconstruction of small indoor spaces. Our final goal is to develop a system for automatic navigation of a mobile robot in a room.
Ghost detection and removal for high dynamic range images: Recent advances High dynamic range (HDR) image generation and display technologies are becoming increasingly popular in various applications. A standard and commonly used approach to obtain an HDR image is the multiple exposures' fusion technique which consists of combining multiple images of the same scene with varying exposure times. However, if the scene is not static during the sequence acquisition, moving objects manifest themselves as ghosting artefacts in the final HDR image. Detecting and removing ghosting artefacts is an important issue for automatically generating HDR images of dynamic scenes. The aim of this paper is to provide an up-to-date review of the recently proposed methods for ghost-free HDR image generation. Moreover, a classification and comparison of the reviewed methods is reported to serve as a useful guide for future research on this topic.
1.030905
0.018514
0.014785
0.007086
0.004541
0.001869
0.000878
0.000344
0.000107
0.000023
0.000004
0
0
0
Modeling Coverage in Camera Networks: A Survey Modeling the coverage of a sensor network is an important step in a number of design and optimization techniques. The nature of vision sensors presents unique challenges in deriving such models for camera networks. A comprehensive survey of geometric and topological coverage models for camera networks from the literature is presented. The models are analyzed and compared in the context of their intended applications, and from this treatment the properties of a hypothetical inclusively general model of each type are derived.
Evaluating the fuzzy coverage model for 3D multi-camera network applications An intuitive three-dimensional task-oriented coverage model for 3D multi-camera networks based on fuzzy sets is presented. The model captures the vagueness inherent in the concept of visual coverage, with a specific target of the feature detection and matching task. The coverage degree predicted by the model is validated against various multi-camera network configurations using the SIFT feature detection and description algorithm.
Visual coverage using autonomous mobile robots for search and rescue applications. This paper focuses on visual sensing of 3D large-scale environments. Specifically, we consider a setting where a group of robots equipped with a camera must fully cover a surrounding area. To address this problem we propose a novel descriptor for visual coverage that aims at measuring the visual information of an area based on a regular discretization of the environment into voxels. Moreover, we propose an autonomous cooperative exploration approach which controls the robot movements so as to maximize information accuracy (defined based on our visual coverage descriptor) while minimizing movement costs. Finally, we define a simulation scenario based on real visual data and on widely used robotic tools (such as ROS and Stage) to empirically evaluate our approach. Experimental results show that the proposed method outperforms a baseline random approach and an uncoordinated one, thus being a valid solution for visual coverage in large-scale outdoor scenarios.
Leveraging 3D City Models for Rotation Invariant Place-of-Interest Recognition Given a cell phone image of a building we address the problem of place-of-interest recognition in urban scenarios. Here, we go beyond what has been shown in earlier approaches by exploiting the nowadays often available 3D building information (e.g. from extruded floor plans) and massive street-level image data for database creation. Exploiting vanishing points in query images and thus fully removing 3D rotation from the recognition problem allows then to simplify the feature invariance to a purely homothetic problem, which we show enables more discriminative power in feature descriptors than classical SIFT. We rerank visual word based document queries using a fast stratified homothetic verification that in most cases boosts the correct document to top positions if it was in the short list. Since we exploit 3D building information, the approach finally outputs the camera pose in real world coordinates ready for augmenting the cell phone image with virtual 3D information. The whole system is demonstrated to outperform traditional approaches on city scale experiments for different sources of street-level image data and a challenging set of cell phone images.
A randomized art-gallery algorithm for sensor placement This paper describes a placement strategy to compute a set of “good” locations where visual sensing will be most effective. Throughout this paper it is assumed that a polygonal 2-D map of a workspace is given as input. This polygonal map, also known as a floor plan or layout, is used to compute a set of locations where expensive sensing tasks (such as 3-D image acquisition) could be executed. A map-building robot, for example, can visit these locations in order to build a full 3-D model of the workspace. The sensor placement strategy relies on a randomized algorithm that solves a variant of the art-gallery problem: find the minimum set of guards inside a polygonal workspace from which the entire workspace boundary is visible. To better take into account the limitations of physical sensors, the algorithm computes a set of guards that satisfies incidence and range constraints. Although the computed set of guards is not guaranteed to have minimum size, the algorithm does compute with high probability a set whose size is within a factor O((n + h) log(c (n + h))) of the optimal size c, where n is the number of edges in the input polygonal map and h the number of obstacles in its interior (holes).
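Stripped of visibility computation, the randomized strategy above is a greedy set-cover loop: sample candidate guard positions, determine which boundary pieces each one sees, and repeatedly keep the candidate covering the most still-uncovered pieces. In this sketch visibility is faked with random subsets; a real implementation would ray-cast against the polygon under the incidence and range constraints.

```python
import random

random.seed(1)
boundary = set(range(100))                       # discretized boundary pieces
candidates = {g: set(random.sample(range(100), 25)) for g in range(40)}

uncovered, guards = set(boundary), []
while uncovered:
    best = max(candidates, key=lambda g: len(candidates[g] & uncovered))
    gain = candidates[best] & uncovered
    if not gain:                                 # nothing left that any candidate sees
        break
    guards.append(best)
    uncovered -= gain
print(len(guards), "guards,", len(uncovered), "pieces left uncovered")
```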
Determining an initial image pair for fixing the scale of a 3D reconstruction from an image sequence Algorithms for metric 3D reconstruction of scenes from calibrated image sequences always require an initialization phase for fixing the scale of the reconstruction. Usually this is done by selecting two frames from the sequence and fixing the length of their baseline. In this paper a quality measure, based on the uncertainty of the reconstructed scene points, for the selection of such a stable image pair is proposed. Based on this quality measure, a fully automatic initialization phase for simultaneous localization and mapping algorithms is derived. The proposed algorithm runs in real-time and results for synthetic as well as real image sequences are shown.
Scene Summarization for Online Image Collections We formulate the problem of scene summarization as selecting a set of images that efficiently represents the visual content of a given scene. The ideal summary presents the most interesting and important aspects of the scene with minimal redundancy. We propose a solution to this problem using multi-user image collections from the Internet. Our solution examines the distribution of images in the collection to select a set of canonical views to form the scene summary, using clustering techniques on visual features. The summaries we compute also lend themselves naturally to the browsing of image collections, and can be augmented by analyzing user-specified image tag data. We demonstrate the approach using a collection of images of the city of Rome, showing the ability to automatically decompose the images into separate scenes, and identify canonical views for each scene.
A survey of glove-based input Clumsy intermediary devices constrain our interaction with computers and their applications. Glove-based input devices let us apply our manual dexterity to the task. We provide a basis for understanding the field by describing key hand-tracking technologies and applications using glove-based input. The bulk of development in glove-based input has taken place very recently, and not all of it is easily accessible in the literature. We present a cross-section of the field to date. Hand-tracking devices may use the following technologies: position tracking, optical tracking, marker systems, silhouette analysis, magnetic tracking or acoustic tracking. Actual glove technologies on the market include: Sayre glove, MIT LED glove, Digital Data Entry Glove, DataGlove, Dexterous HandMaster, Power Glove, CyberGlove and Space Glove. Various applications of glove technologies include projects into the pursuit of natural interfaces, systems for understanding signed languages, teleoperation and robotic control, computer-based puppetry, and musical performance.
Comparing images using the Hausdorff distance The Hausdorff distance measures the extent to which each point of a model set lies near some point of an image set and vice versa. Thus, this distance can be used to determine the degree of resemblance between two objects that are superimposed on one another. Efficient algorithms for computing the Hausdorff distance between all possible relative positions of a binary image and a model are presented. The focus is primarily on the case in which the model is only allowed to translate with respect to the image. The techniques are extended to rigid motion. The Hausdorff distance computation differs from many other shape comparison methods in that no correspondence between the model and the image is derived. The method is quite tolerant of small position errors such as those that occur with edge detectors and other feature extraction methods. It is shown that the method extends naturally to the problem of comparing a portion of a model against an image.
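The definition above translates directly into code. A brute-force sketch of the directed and symmetric Hausdorff distances between two point sets follows; the paper's contribution, computing this efficiently over all relative positions, is not attempted here.

```python
import numpy as np

def directed_hausdorff(A, B):
    # max over a in A of the distance from a to its nearest point in B
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(A, B):
    # Symmetric version: resemblance in both directions.
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
B = np.array([[0.1, 0.0], [1.0, 0.2], [2.0, 2.0]])
print(hausdorff(A, B))
```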
Part-based statistical models for object classification and detection We propose using simple mixture models to define a set of mid-level binary local features based on binary oriented edge input. The features capture natural local structures in the data and yield very high classification rates when used with a variety of classifiers trained on small training sets, exhibiting robustness to degradation with clutter. Of particular interest are the use of the features as variables in simple statistical models for the objects thus enabling likelihood based classification. Pre-training decision boundaries between classes, a necessary component of non-parametric techniques, is thus avoided. Class models are trained separately with no need to access data of other classes. Experimental results are presented for handwritten character recognition, classification of deformed LaTeX symbols involving hundreds of classes, and side view car detection.
The foundations of cost-sensitive learning Extracting rules from RBFs is not a trivial task because of nonlinear functions or high input dimensionality. In such cases, some of the hidden units of the RBF network have a tendency to be "shared" across several output classes or even may not contribute ...
Bisection approach for pixel labelling problem This paper formulates pixel labelling as a series of two-category classification. Unlike existing techniques, which assign a determinate label to each pixel, we assign a label set to each pixel and shrink the label set step by step. Determinate labelling is achieved within log"2n (n is size of label set) steps. In each step, we bisect the label set into two subsets and discard the one with higher cost of assigning it to the pixel. Simultaneous labelling of an image is carried out by minimizing an energy function that can be minimized via graph cut algorithm. Based on the bisection approach, we propose a bitwise algorithm for pixel labelling, which set one bit of each pixel's label in each step. We apply the proposed algorithm to stereo matching and image restoration. Experimental results demonstrate that both good performance and high efficiency are achieved.
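A toy illustration of the bisection idea above, on unary costs alone: each step halves every pixel's surviving label interval and keeps the cheaper half, reaching a label in log2(n) steps. The paper performs each step jointly over the image with a graph-cut energy; here pixels are treated independently, so the result simply matches the per-pixel minimum.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_labels = 6, 8                        # n_labels assumed a power of two
cost = rng.random((n_pixels, n_labels))

lo = np.zeros(n_pixels, dtype=int)
hi = np.full(n_pixels, n_labels, dtype=int)      # current label interval [lo, hi)
while (hi - lo).max() > 1:                       # log2(n_labels) iterations
    mid = (lo + hi) // 2
    for p in range(n_pixels):
        left = cost[p, lo[p]:mid[p]].min()       # cheapest label in lower half
        right = cost[p, mid[p]:hi[p]].min()      # cheapest label in upper half
        if left <= right:
            hi[p] = mid[p]                       # discard the costlier half
        else:
            lo[p] = mid[p]
print(lo, "vs brute force", cost.argmin(axis=1))
```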
New image descriptors based on color, texture, shape, and wavelets for object and scene image classification. This paper presents new image descriptors based on color, texture, shape, and wavelets for object and scene image classification. First, a new three Dimensional Local Binary Patterns (3D-LBP) descriptor, which produces three new color images, is proposed for encoding both color and texture information of an image. The 3D-LBP images together with the original color image then undergo the Haar wavelet transform with further computation of the Histograms of Oriented Gradients (HOG) for encoding shape and local features. Second, a novel H-descriptor, which integrates the 3D-LBP and the HOG of its wavelet transform, is presented to encode color, texture, shape, as well as local information. Feature extraction for the H-descriptor is implemented by means of Principal Component Analysis (PCA) and Enhanced Fisher Model (EFM) and classification by the nearest neighbor rule for object and scene image classification. And finally, an innovative H-fusion descriptor is proposed by fusing the PCA features of the H-descriptors in seven color spaces in order to further incorporate color information. Experimental results using three datasets, the Caltech 256 object categories dataset, the UIUC Sports Event dataset, and the MIT Scene dataset, show that the proposed new image descriptors achieve better image classification performance than other popular image descriptors, such as the Scale Invariant Feature Transform (SIFT), the Pyramid Histograms of visual Words (PHOW), the Pyramid Histograms of Oriented Gradients (PHOG), Spatial Envelope, Color SIFT four Concentric Circles (C4CC), Object Bank, the Hierarchical Matching Pursuit, as well as LBP.
Autonomous Detection Of Volcanic Plumes On Outer Planetary Bodies We experimentally evaluated the efficacy of various autonomous supervised classification techniques for detecting transient geophysical phenomena. We demonstrated methods of detecting volcanic plumes on the planetary satellites Io and Enceladus using spacecraft images from the Voyager, Galileo, New Horizons, and Cassini missions. We successfully detected 73-95% of known plumes in images from all four mission datasets. Additionally, we showed that the same techniques are applicable to differentiating geologic features, such as plumes and mountains, which exhibit similar appearances in images.
1.0525
0.05
0.05
0.004545
0.000561
0.000009
0.000002
0
0
0
0
0
0
0
Accurate visual odometry from a rear parking camera As an increasing number of automatic safety and navigation features are added to modern vehicles, the crucial job of providing real-time localisation is predominantly performed by a single sensor, GPS, despite its well-known failings, particularly in urban environments. Various attempts have been made to supplement GPS to improve localisation performance, but these usually require additional specialised and expensive sensors. Offering increased value to vehicle OEMs, we show that it is possible to use just the video stream from a rear parking camera to produce smooth and locally accurate visual odometry in real-time. We use an efficient whole image alignment approach based on ESM, taking account of both the difficulties and advantages of the fact that a parking camera views only the road surface directly behind a vehicle. Visual odometry is complementary to GPS in offering localisation information at 30 Hz which is smooth and highly accurate locally, whilst GPS is coarse but offers absolute measurements. We demonstrate our system in a large scale experiment covering real urban driving. We also present real-time fusion of our visual estimation with automotive GPS to generate a commodity-cost localisation solution which is smooth, accurate and drift free in global coordinates.
Multi-task Learning of Visual Odometry Estimators.
Semi-parametric models for visual odometry This paper introduces a novel framework for estimating the motion of a robotic car from image information, a scenario widely known as visual odometry. Most current monocular visual odometry algorithms rely on a calibrated camera model and recover relative rotation and translation by tracking image features and applying geometrical constraints. This approach has some drawbacks: translation is recovered up to a scale, it requires camera calibration which can be tricky under certain conditions, and uncertainty estimates are not directly obtained. We propose an alternative approach that involves the use of semi-parametric statistical models as means to recover scale, infer camera parameters and provide uncertainty estimates given a training dataset. As opposed to conventional non-parametric machine learning procedures, where standard models for egomotion would be neglected, we present a novel framework in which the existing parametric models and powerful non-parametric Bayesian learning procedures are combined. We devise a multiple output Gaussian Process (GP) procedure, named Coupled GP, that uses a parametric model as the mean function and a non-stationary covariance function to map image features directly into vehicle motion. Additionally, this procedure is also able to infer joint uncertainty estimates (full covariance matrices) for rotation and translation. Experiments performed using data collected from a single camera under challenging conditions show that this technique outperforms traditional methods in trajectories of several kilometers.
Parallel, real-time monocular visual odometry We present a real-time, accurate, large-scale monocular visual odometry system for real-world autonomous outdoor driving applications. The key contributions of our work are a series of architectural innovations that address the challenge of robust multithreading even for scenes with large motions and rapidly changing imagery. Our design is extensible for three or more parallel CPU threads. The system uses 3D-2D correspondences for robust pose estimation across all threads, followed by local bundle adjustment in the primary thread. In contrast to prior work, epipolar search operates in parallel in other threads to generate new 3D points at every frame. This significantly boosts robustness and accuracy, since only extensively validated 3D points with long tracks are inserted at keyframes. Fast-moving vehicles also necessitate immediate global bundle adjustment, which is triggered by our novel keyframe design in parallel with pose estimation in a thread-safe architecture. To handle inevitable tracking failures, a recovery method is provided. Scale drift is corrected only occasionally, using a novel mechanism that detects (rather than assumes) local planarity of the road by combining information from triangulated 3D points and the inter-image planar homography. Our system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Evaluations are presented on the challenging KITTI dataset for autonomous driving, where we achieve better rotation and translation accuracy than other state-of-the-art systems.
Fast relocalisation and loop closing in keyframe-based SLAM In this paper we present for the first time a relocalisation method for keyframe-based SLAM that can deal with severe viewpoint change, at frame rate, in maps containing thousands of keyframes. As this method relies on local features, it permits interoperability between cameras, allowing a camera to relocalise in a map built by a different camera. We also perform loop closing (detection + correction), at keyframe rate, in loops containing hundreds of keyframes. For both relocalisation and loop closing, we propose a bag-of-words place recognizer with ORB features, which is able to recognize places in less than 39 ms, including feature extraction, in databases containing 10K images (without geometrical verification). We evaluate the performance of this recognizer on four different datasets, achieving high recall and no false matches, and obtaining better results than the state of the art in place recognition while being an order of magnitude faster.
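The feature side of the recognizer above, ORB keypoints with binary descriptors matched under Hamming distance, can be sketched with OpenCV as below; the bag-of-words index and geometric verification used for recognition at scale are not shown, and the test images are synthetic stand-ins.

```python
import cv2
import numpy as np

img1 = np.zeros((240, 320), np.uint8)
cv2.rectangle(img1, (60, 60), (140, 140), 255, -1)
cv2.circle(img1, (220, 120), 40, 180, -1)
img2 = np.roll(img1, 15, axis=1)                 # shifted copy as a second "view"

orb = cv2.ORB_create(nfeatures=500)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
if d1 is not None and d2 is not None:
    matches = matcher.match(d1, d2)
    print(len(matches), "cross-checked ORB matches")
```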
Vision-Based Mobile Robot Localization And Mapping Using Scale-Invariant Features A key component of a mobile robot system is the ability to localize itself accurately and build a map of the environment simultaneously. In this paper, a vision-based mobile robot localization and mapping algorithm is described which uses scale-invariant image features as landmarks in unmodified dynamic environments. These 3D landmarks are localized and robot ego-motion is estimated by matching them, taking into account the feature viewpoint variation. With our Triclops stereo vision system, experiments show that these features are robustly matched between views, 3D landmarks are tracked, robot pose is estimated and a 3D map is built.
Visual odometry learning for unmanned aerial vehicles This paper addresses the problem of using visual information to estimate vehicle motion (a.k.a. visual odometry) from a machine learning perspective. The vast majority of current visual odometry algorithms are heavily based on geometry, using a calibrated camera model to recover relative translation (up to scale) and rotation by tracking image features over time. Our method eliminates the need for a parametric model by jointly learning how image structure and vehicle dynamics affect camera motion. This is achieved with a Gaussian Process extension, called Coupled GP, which is trained in a supervised manner to infer the underlying function mapping optical flow to relative translation and rotation. Matched image features parameters are used as inputs and linear and angular velocities are the outputs in our non-linear multi-task regression problem. We show here that it is possible, using a single uncalibrated camera and establishing a first-order temporal dependency between frames, to jointly estimate not only a full 6 DoF motion (along with a full covariance matrix) but also relative scale, a non-trivial problem in monocular configurations. Experiments were performed with imagery collected with an unmanned aerial vehicle (UAV) flying over a deserted area at speeds of 100-120 km/h and altitudes of 80-100 m, a scenario that constitutes a challenge for traditional visual odometry estimators.
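As a stripped-down, hypothetical stand-in for the learning approach above, the sketch below fits a Gaussian Process regressor mapping a single optical-flow statistic to forward speed, returning predictive uncertainty alongside the estimate. The Coupled GP of the paper handles the multi-output 6-DoF case with a parametric mean; none of that structure is reproduced here, and the data are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
mean_flow = rng.uniform(0, 5, size=(80, 1))      # mean flow magnitude (pixels)
speed = 2.0 * mean_flow[:, 0] + rng.normal(0, 0.1, 80)   # synthetic ground truth

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(mean_flow, speed)

mu, sigma = gp.predict([[3.0]], return_std=True) # estimate with uncertainty
print(f"predicted speed {mu[0]:.2f} +/- {sigma[0]:.2f}")
```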
Rawseeds ground truth collection systems for indoor self-localization and mapping A trustable and accurate ground truth is a key requirement for benchmarking self-localization and mapping algorithms; on the other hand, collection of ground truth is a complex and daunting task, and its validation is a challenging issue. In this paper we propose two techniques for indoor ground truth collection, developed in the framework of the European project Rawseeds, which are mutually independent and also independent on the sensors onboard the robot. These techniques are based, respectively, on a network of fixed cameras, and on a network of fixed laser scanners. We show how these systems are implemented and deployed, and, most importantly, we evaluate their performance; moreover, we investigate the possible fusion of their outputs.
Distributed message passing for large scale graphical models In this paper we propose a distributed message-passing algorithm for inference in large scale graphical models. Our method can handle large problems efficiently by distributing and parallelizing the computation and memory requirements. The convergence and optimality guarantees of recently developed message-passing algorithms are preserved by introducing new types of consistency messages, sent between the distributed computers. We demonstrate the effectiveness of our approach in the task of stereo reconstruction from high-resolution imagery, and show that inference is possible with more than 200 labels in images larger than 10 MPixels.
Robust Estimation for an Inverse Problem Arising in Multiview Geometry We propose a new approach to the problem of robust estimation for a class of inverse problems arising in multiview geometry. Inspired by recent advances in the statistical theory of recovering sparse vectors, we define our estimator as a Bayesian maximum a posteriori with a multivariate Laplace prior on the vector describing the outliers. This leads to an estimator in which the fidelity to the data is measured by the L∞-norm while the regularization is done by the L1-norm. The proposed procedure is fairly fast since the outlier removal is done by solving one linear program (LP). An important difference compared to existing algorithms is that for our estimator it is not necessary to specify either the number or the proportion of the outliers; only an upper bound on the maximal measurement error for the inliers should be specified. We present theoretical results assessing the accuracy of our procedure, as well as numerical examples illustrating its efficiency on synthetic and real data.
Unscented FastSLAM: A Robust and Efficient Solution to the SLAM Problem The Rao-Blackwellized particle filter (RBPF) and FastSLAM have two important limitations, which are the derivation of the Jacobian matrices and the linear approximations of nonlinear functions. These can make the filter inconsistent. Another challenge is to reduce the number of particles while maintaining the estimation accuracy. This paper provides a robust new algorithm based on the scaled unscented transformation called unscented FastSLAM (UFastSLAM). It overcomes the important drawbacks of the previous frameworks by directly using nonlinear relations. This approach improves the filter consistency and state estimation accuracy, and requires smaller number of particles than the FastSLAM approach. Simulation results in large-scale environments and experimental results with a benchmark dataset are presented, demonstrating the superiority of the UFastSLAM algorithm.
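The scaled unscented transform that UFastSLAM builds on replaces Jacobian-based linearization with deterministic sigma points pushed through the nonlinear function. A compact sketch follows (standard textbook formulation, not the paper's full filter).

```python
import numpy as np

def unscented_transform(mu, P, f, alpha=1.0, beta=2.0, kappa=0.0):
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)
    sigma = np.vstack([mu, mu + S.T, mu - S.T])  # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    Y = np.array([f(s) for s in sigma])          # propagate through nonlinearity
    mean = wm @ Y
    diff = Y - mean
    cov = (wc[:, None] * diff).T @ diff
    return mean, cov

f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])  # polar -> Cartesian
m, C = unscented_transform(np.array([1.0, 0.5]), np.diag([0.01, 0.04]), f)
print(m, "\n", C)
```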
Localization and Matching Using the Planar Trifocal Tensor With Bearing-Only Data This paper addresses the robot and landmark localization problem from bearing-only data in three views, simultaneously with the robust association of this data. The localization algorithm is based on the 1-D trifocal tensor, which relates linearly the observed data and the robot localization parameters. The aim of this work is to bring this useful geometric construction from computer vision closer to robotic applications. One contribution is the evaluation of two linear approaches to estimating the 1-D tensor: the commonly used approach that needs seven bearing-only correspondences and another one that uses only five correspondences plus two calibration constraints. The results in this paper show that the inclusion of these constraints provides a simpler and faster solution and better estimation of robot and landmark locations in the presence of noise. Moreover, a new method that makes use of scene planes and requires only four correspondences is presented. This proposal improves the performance of the two previously mentioned methods in typical man-made scenarios with dominant planes, while it gives similar results in other cases. The three methods are evaluated with simulation tests as well as with experiments that perform automatic real data matching in conventional and omnidirectional images. The results show sufficient accuracy and stability to be used in robotic tasks such as navigation, global localization or initialization of simultaneous localization and mapping (SLAM) algorithms.
Maximally stable local description for scale selection Scale and affine-invariant local features have shown excellent performance in image matching, object and texture recognition. This paper optimizes keypoint detection to achieve stable local descriptors, and therefore, an improved image representation. The technique performs scale selection based on a region descriptor, here SIFT, and chooses regions for which this descriptor is maximally stable. Maximal stability is obtained, when the difference between descriptors extracted for consecutive scales reaches a minimum. This scale selection technique is applied to multi-scale Harris and Laplacian points. Affine invariance is achieved by an integrated affine adaptation process based on the second moment matrix. An experimental evaluation compares our detectors to Harris-Laplace and the Laplacian in the context of image matching as well as of category and texture classification. The comparison shows the improved performance of our detector.
Fast Visual Retrieval Using Accelerated Sequence Matching We present an approach to represent, match, and index various types of visual data, with the primary goal of enabling effective and computationally efficient searches. In this approach, an image/video is represented by an ordered list of feature descriptors. Similarities between such representations are then measured by the approximate string matching technique. This approach unifies visual appearance and the ordering information in a holistic manner with joint consideration of visual-order consistency between the query and the reference instances, and can be used for automatically identifying local alignments between two pieces of visual data. This capability is essential for tasks such as video copy detection where only small portions of the query and the reference videos are similar. To deal with large volumes of data, we further show that this approach can be significantly accelerated along with a dedicated indexing structure. Extensive experiments on various visual retrieval and classification tasks demonstrate the superior performance of the proposed techniques compared to existing solutions.
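The matching step can be illustrated with a classic local-alignment recurrence over two visual-word sequences; this is a generic Smith-Waterman-style sketch, not the paper's accelerated variant, and the scoring constants are assumptions:

```python
import numpy as np

def local_alignment_score(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman-style local alignment between two ordered lists of
    quantized feature descriptors (visual words). Local alignment is what
    lets small shared portions be found inside longer sequences, as in
    video copy detection."""
    H = np.zeros((len(a) + 1, len(b) + 1))
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0, H[i - 1, j - 1] + s,
                          H[i - 1, j] + gap, H[i, j - 1] + gap)
    return H.max()   # best local alignment score
```

The quadratic cost of this recurrence is exactly what the paper's indexing structure and acceleration techniques are designed to avoid at scale.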
(Relevance scores for this row, score_0–score_13: 1.045934, 0.035737, 0.020448, 0.006584, 0.003346, 0.001672, 0.000756, 0.000145, 0.000036, 0.000016, 0.000006, 0.000002, 0, 0)
Contour detection and hierarchical image segmentation. This paper investigates two fundamental problems in computer vision: contour detection and image segmentation. We present state-of-the-art algorithms for both of these tasks. Our contour detector combines multiple local cues into a globalization framework based on spectral clustering. Our segmentation algorithm consists of generic machinery for transforming the output of any contour detector into a hierarchical region tree. In this manner, we reduce the problem of image segmentation to that of contour detection. Extensive experimental evaluation demonstrates that both our contour detection and segmentation methods significantly outperform competing algorithms. The automatically generated hierarchical segmentations can be interactively refined by user-specified annotations. Computation at multiple image resolutions provides a means of coupling our system to recognition applications.
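As a rough sketch of the globalization step, the spectral machinery reduces to generalized eigenvectors of a graph Laplacian built from pixel affinities. The sketch below assumes a dense symmetric affinity matrix `W` with positive degrees and omits the multiscale cue combination and the region-tree construction:

```python
import numpy as np

def spectral_globalization(W, k=4):
    """Globalize local contour cues: given a pixel affinity matrix W
    (high affinity = no contour between pixels), compute the leading
    generalized eigenvectors of the graph Laplacian; their spatial
    gradients provide the global boundary signal."""
    d = W.sum(axis=1)                      # assumes all degrees > 0
    L = np.diag(d) - W
    # Solve L v = lambda D v via the normalized form D^-1/2 L D^-1/2.
    Dih = np.diag(1.0 / np.sqrt(d))
    vals, vecs = np.linalg.eigh(Dih @ L @ Dih)
    return Dih @ vecs[:, 1:k + 1]          # skip the trivial eigenvector
```

The hierarchical region tree is then obtained downstream by thresholding the combined contour signal at all levels, which is what reduces segmentation to contour detection.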
A polynomial algorithm for submap isomorphism of general maps Combinatorial maps explicitly encode the orientations of edges around vertices and have been used in many fields. In this paper, we address the problem of searching for patterns in model maps by introducing the concept of the symbol graph. A symbol graph is constructed and stored for each model map during preprocessing. Furthermore, an algorithm for submap isomorphism is presented based on symbol-sequence searching in the symbol graphs. The computational complexity of this algorithm is quadratic in the worst case if the preprocessing step is neglected.
Multi-Cue Mid-Level Grouping Region proposal methods provide richer object hypotheses than sliding windows with dramatically fewer proposals, yet they still number in the thousands. This large quantity of proposals typically results from a diversification step that propagates bottom-up ambiguity in the form of proposals to the next processing stage. In this paper, we take a complementary approach in which mid-level knowledge is used to resolve bottom-up ambiguity at an earlier stage to allow a further reduction in the number of proposals. We present a method for generating regions using the mid-level grouping cues of closure and symmetry. In doing so, we combine mid-level cues that are typically used only in isolation, and leverage them to produce fewer but higher quality proposals. We emphasize that our model is mid-level by learning it on a limited number of objects while applying it to different objects, thus demonstrating that it is transferable to other objects. In our quantitative evaluation, we (1) establish the usefulness of each grouping cue by demonstrating incremental improvement, and (2) demonstrate improvement on two leading region proposal methods with a limited budget of proposals.
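One plausible reading of the closure cue can be sketched directly: score a candidate region by how well detected edges support its boundary. The function below is an illustrative assumption, not the paper's learned model:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def closure_score(region_mask, edge_map):
    """Closure cue sketch: average edge strength along a candidate
    region's boundary (higher = better-closed contour).
    region_mask: boolean (H, W); edge_map: float (H, W) in [0, 1]."""
    m = region_mask.astype(bool)
    boundary = m & ~binary_erosion(m)      # pixels on the region rim
    return float(edge_map[boundary].mean()) if boundary.any() else 0.0
```

In the paper's setting, such cue scores are combined and learned jointly, so that ambiguity is resolved before proposals are emitted rather than by emitting thousands of them.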
A framework for measuring sharpness in natural images captured by digital cameras based on reference image and local areas. Image quality is a vital criterion that guides the technical development of digital cameras. Traditionally, the image quality of digital cameras has been measured using test-targets and/or subjective tests. Subjective tests should be performed using natural images. It is difficult to establish the relationship between the results of artificial test targets and subjective data, however, because of the different test image types. We propose a framework for objective image quality metrics applied to natural images captured by digital cameras. The framework uses reference images captured by a high-quality reference camera to find image areas with appropriate structural energy for the quality attribute. In this study, the framework was set to measure sharpness. Based on the results, the mean performance for predicting subjective sharpness was clearly higher than that of the state-of-the-art algorithm and test-target sharpness metrics.
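A much-simplified version of the framework's logic can be sketched as follows: use the reference image to pick structurally rich local areas, then compare gradient energy there. All names and the gradient-energy proxy are assumptions; the published metric is more elaborate:

```python
import numpy as np

def sharpness_vs_reference(img, ref, block=32, top_frac=0.2):
    """Toy reference-based sharpness: find blocks with high structural
    energy in the reference camera's image, then measure the test image's
    gradient energy in those same areas. img, ref: 2-D grayscale arrays
    of equal size."""
    def grad_energy(a):
        gy, gx = np.gradient(a.astype(float))
        return gx**2 + gy**2
    e_ref, e_img = grad_energy(ref), grad_energy(img)
    H, W = ref.shape
    scores = []
    for y in range(0, H - block + 1, block):
        for x in range(0, W - block + 1, block):
            sl = (slice(y, y + block), slice(x, x + block))
            scores.append((e_ref[sl].sum(), e_img[sl].sum()))
    scores.sort(key=lambda t: -t[0])
    keep = scores[:max(1, int(top_frac * len(scores)))]
    # Sharpness proxy: test energy relative to reference energy in the
    # structurally richest areas.
    return sum(i for _, i in keep) / max(sum(r for r, _ in keep), 1e-12)
```

Restricting the measurement to areas with appropriate structural energy is what lets a single metric behave consistently across different natural scenes.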
Deep Epitomic Convolutional Neural Networks. Deep convolutional neural networks have recently proven extremely competitive in challenging image recognition tasks. This paper proposes the epitomic convolution as a new building block for deep neural networks. An epitomic convolution layer replaces a pair of consecutive convolution and max-pooling layers found in standard deep convolutional neural networks. The main version of the proposed model uses mini-epitomes in place of filters and computes responses invariant to small translations by epitomic search instead of max-pooling over image positions. The topographic version of the proposed model uses large epitomes to learn filter maps organized in translational topographies. We show that error back-propagation can successfully learn multiple epitomic layers in a supervised fashion. The effectiveness of the proposed method is assessed in image classification tasks on standard benchmarks. Our experiments on Imagenet indicate improved recognition performance compared to standard convolutional neural networks of similar architecture. Our models pre-trained on Imagenet perform excellently on Caltech-101. We also obtain competitive image classification results on the small-image MNIST and CIFAR-10 datasets.
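The essential difference from convolution followed by max-pooling can be shown in a few lines: the max is taken over positions inside the (larger) epitome rather than over image positions. This single-patch, single-epitome sketch ignores channels, strides, and learning:

```python
import numpy as np

def epitomic_response(patch, epitome):
    """Response of one mini-epitome to one image patch: search all
    filter-sized windows inside the larger epitome and keep the best
    correlation, giving invariance to small translations."""
    ph, pw = patch.shape
    eh, ew = epitome.shape
    best = -np.inf
    for dy in range(eh - ph + 1):
        for dx in range(ew - pw + 1):
            f = epitome[dy:dy + ph, dx:dx + pw]
            best = max(best, float((patch * f).sum()))
    return best
```

Because the search is over epitome positions, one epitomic layer can replace a convolution/max-pooling pair while remaining trainable by back-propagation.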
Spatial Statistics of Image Features for Performance Comparison When matching images for applications such as mosaicking and homography estimation, the distribution of features across the overlap region affects the accuracy of the result. This paper uses the spatial statistics of these features, measured by Ripley's K-function, to assess whether feature matches are clustered together or spread around the overlap region. A comparison of the performances of a dozen state-of-the-art feature detectors is then carried out using analysis of variance and a large image database. Results show that SFOP introduces significantly less aggregation than the other detectors tested. When the detectors are rank-ordered by this performance measure, the order is broadly similar to those obtained by other means, suggesting that the ordering reflects genuine performance differences. Experiments on stitching images into mosaics confirm that better coverage values yield better quality outputs.
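Ripley's K-function itself is straightforward to estimate. The sketch below uses the naive estimator without edge correction (the study's analysis is more careful); under complete spatial randomness K(r) is approximately pi * r^2, so larger values indicate clustered matches:

```python
import numpy as np

def ripley_k(points, radii, area):
    """Naive Ripley's K-function estimate for a 2-D point pattern.
    points: (n, 2) array of match locations; area: size of the overlap
    region; radii: iterable of distances r at which to evaluate K."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self-pairs
    return np.array([area * (d <= r).sum() / (n * (n - 1)) for r in radii])
```

Comparing the estimated K(r) against the pi * r^2 baseline is how a detector's tendency to aggregate matches can be quantified across an image database.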
Nonparametric Scene Parsing via Label Transfer. While there has been a lot of recent work on object recognition and image understanding, the focus has been on carefully establishing mathematical models for images, scenes and objects. In this paper, we propose a novel, nonparametric approach for object recognition and scene parsing using a new technique we name label transfer. For an input image, our system first retrieves its nearest neighbors from a large database containing fully annotated images. Then, the system establishes dense correspondences between the input image and each of the nearest neighbors using the dense SIFT flow algorithm [27], which aligns two images based on local image structures. Finally, based on the dense scene correspondences obtained from the SIFT flow, our system warps the existing annotations, and integrates multiple cues in a Markov random field framework to segment and recognize the query image. Promising experimental results have been achieved by our nonparametric scene parsing system on challenging databases. Compared to existing object recognition approaches that require training classifiers or appearance models for each object category, our system is easy to implement, has few parameters, and embeds contextual information naturally in the retrieval/alignment procedure.
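The retrieval-and-voting skeleton of label transfer is easy to sketch once the dense-correspondence step is abstracted away. Here the SIFT-flow warping is assumed to have already been applied to the neighbours' annotation maps, and all names are illustrative:

```python
import numpy as np

def label_transfer_vote(query_desc, db_descs, db_label_maps, k=5):
    """Retrieve the k nearest database images by a global descriptor and
    let their (already warped) annotation maps vote per pixel. The MRF
    integration of the full system is replaced by a plain majority vote."""
    dists = np.linalg.norm(db_descs - query_desc, axis=1)
    nn = np.argsort(dists)[:k]
    votes = np.stack([db_label_maps[i] for i in nn])   # (k, H, W)
    H, W = votes.shape[1:]
    out = np.zeros((H, W), dtype=int)
    for y in range(H):
        for x in range(W):
            vals, counts = np.unique(votes[:, y, x], return_counts=True)
            out[y, x] = vals[np.argmax(counts)]        # majority label
    return out
```

The appeal of the nonparametric route is visible even in this skeleton: no per-category classifier is trained, and adding new annotated images to the database immediately extends the system.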
Semantic Image Segmentation via Deep Parsing Network This paper addresses semantic image segmentation by incorporating rich information into a Markov Random Field (MRF), including high-order relations and a mixture of label contexts. Unlike previous works that optimized MRFs using iterative algorithms, we solve the MRF by proposing a Convolutional Neural Network (CNN), namely the Deep Parsing Network (DPN), which enables deterministic end-to-end computation in a single forward pass. Specifically, DPN extends a contemporary CNN architecture to model unary terms, and additional layers are carefully devised to approximate the mean field algorithm (MF) for pairwise terms. It has several appealing properties. First, different from recent works that combined CNN and MRF, where many iterations of MF were required for each training image during back-propagation, DPN is able to achieve high performance by approximating one iteration of MF. Second, DPN represents various types of pairwise terms, making many existing works its special cases. Third, DPN makes MF easier to parallelize and speed up on a Graphics Processing Unit (GPU). DPN is thoroughly evaluated on the PASCAL VOC 2012 dataset, where a single DPN model yields a new state-of-the-art segmentation accuracy of 77.5%.
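The quantity DPN approximates, a mean-field update with pairwise smoothing, can be sketched on a grid MRF with Potts-style 4-neighbour terms. The wrap-around borders from np.roll and the single shared pairwise weight are simplifications of the paper's richer pairwise terms:

```python
import numpy as np

def mean_field_step(unary, Q, pairwise_w):
    """One mean-field update for a grid MRF with a Potts pairwise cost
    pairwise_w between 4-neighbours. unary (costs) and Q (beliefs) are
    (H, W, L) arrays with L labels."""
    msg = np.zeros_like(Q)
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        msg += np.roll(Q, shift, axis=axis)   # aggregate neighbour beliefs
    # Potts expectation: sum over neighbours of (1 - Q_j(l)) per label l.
    disagree = msg.sum(axis=-1, keepdims=True) - msg
    logits = -unary - pairwise_w * disagree
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # renormalized beliefs
```

Since this update is built from shifts, sums, and a softmax, it maps naturally onto convolutional layers, which is the observation DPN exploits to run it in a single forward pass.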
Manhattan Junction Catalogue for Spatial Reasoning of Indoor Scenes Junctions are strong cues for understanding the geometry of a scene. In this paper, we consider the problem of detecting junctions and using them for recovering the spatial layout of an indoor scene. Junction detection has always been challenging due to missing and spurious lines. We work in a constrained Manhattan world setting where the junctions are formed by only line segments along the three principal orthogonal directions. Junctions can be classified into several categories based on the number and orientations of the incident line segments. We provide a simple and efficient voting scheme to detect and classify these junctions in real images. Indoor scenes are typically modeled as cuboids and we formulate the problem of the cuboid layout estimation as an inference problem in a conditional random field. Our formulation allows the incorporation of junction features and the training is done using structured prediction techniques. We outperform other single view geometry estimation methods on standard datasets.
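A deliberately naive version of the voting idea: a candidate junction is typed by which of the three principal Manhattan directions contribute a nearby segment endpoint. The segment representation and tolerance are assumptions; the paper's voting scheme is more refined:

```python
import numpy as np

def junction_type(point, segments, tol=5.0):
    """Classify a candidate junction by the set of principal directions
    with supporting evidence. segments: list of (endpoint_a, endpoint_b,
    axis_label) with axis_label in {0, 1, 2}; a direction votes if some
    segment of that label ends near the candidate point."""
    present = set()
    for a, b, axis in segments:
        if min(np.linalg.norm(point - a), np.linalg.norm(point - b)) < tol:
            present.add(axis)
    return sorted(present)   # e.g. two directions -> 'L', three -> 'Y'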
Integrated feature selection and higher-order spatial feature extraction for object categorization In computer vision, the bag-of-visual-words image representation has been shown to yield good results. Recent work has shown that modeling the spatial relationship between visual words further improves performance. Previous work extracts higher-order spatial features exhaustively. However, these spatial features are expensive to compute. We propose a novel method that simultaneously performs feature selection and feature extraction. Higher-order spatial features are progressively extracted based on selected lower-order ones, thereby avoiding exhaustive computation. The method can be based on any additive feature selection algorithm such as boosting. Experimental results show that the method is computationally much more efficient than previous approaches, without sacrificing accuracy.
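The progressive scheme can be sketched with any additive selector; here a greedy correlation-based pick stands in for boosting, and an elementwise product of presence vectors stands in for a true spatial co-occurrence feature (both substitutions are assumptions):

```python
import numpy as np

def select_and_grow(word_feats, y, rounds=10):
    """Joint selection/extraction sketch: greedily select the feature most
    correlated with the labels y, and after each selection add 2nd-order
    candidates built only from the selected feature, never the full
    O(V^2) set. word_feats: dict name -> binary presence vector."""
    pool = dict(word_feats)
    chosen = []
    for _ in range(rounds):
        def score(k):
            c = np.corrcoef(pool[k], y)[0, 1]
            return abs(float(np.nan_to_num(c)))
        name = max(pool, key=score)
        chosen.append(name)
        base = pool.pop(name)
        for other, vec in word_feats.items():
            pair = (name, other)
            if other != name and pair not in pool:
                pool[pair] = base * vec   # candidate higher-order feature
    return chosen
```

Because pair candidates only ever grow out of already-selected features, the number of higher-order features actually computed stays proportional to the number of selection rounds.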
Structure Is a Visual Class Invariant The problem of learning the class identity of visual objects has received considerable attention recently. With rare exception, all of the work to date assumes low variation in appearance, which limits them to a single depictive style usually photographic. The same object depicted in other styles -- as a drawing, perhaps -- cannot be identified reliably. Yet humans are able to name the object no matter how it is depicted, and even recognise a real object having previously seen only a drawing. This paper describes a classifier which is unique in being able to learn class identity no matter how the class instances are depicted. The key to this is our proposition that topological structure is a class invariant. Practically, we depend on spectral graph analysis of a hierarchical description of an image to construct a feature vector of fixed dimension. Hence structure is transformed to a feature vector, which can be classified using standard methods. We demonstrate the classifier on several diverse classes.
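The structure-to-vector step can be illustrated with the simplest spectral signature: Laplacian eigenvalues of a graph describing the image, padded to fixed length. This is a generic sketch of the idea, not the paper's exact hierarchical construction:

```python
import numpy as np

def spectral_signature(adj, k=16):
    """Fixed-length structural feature from a graph: the k smallest
    Laplacian eigenvalues (zero-padded). The values depend on topology
    rather than appearance, so drawings and photographs of the same
    object can map to similar vectors."""
    d = adj.sum(axis=1)
    L = np.diag(d) - adj
    vals = np.sort(np.linalg.eigvalsh(L))[:k]
    return np.pad(vals, (0, max(0, k - len(vals))))
```

Once structure is encoded as a fixed-dimension vector, any standard classifier can be trained on it, which is what makes the depiction-invariant claim testable with off-the-shelf methods.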
From 3D Point Clouds to Pose-Normalised Depth Maps We consider the problem of generating either pairwise-aligned or pose-normalised depth maps from noisy 3D point clouds in relatively unrestricted poses. Our system is deployed in a 3D face alignment application and consists of the following four stages: (i) data filtering, (ii) nose tip identification and sub-vertex localisation, (iii) computation of the (relative) face orientation, (iv) generation of either a pose aligned or a pose normalised depth map. We generate an implicit radial basis function (RBF) model of the facial surface and this is employed within all four stages of the process. For example, in stage (ii), construction of novel invariant features is based on sampling this RBF over a set of concentric spheres to give a spherically-sampled RBF (SSR) shape histogram. In stage (iii), a second novel descriptor, called an isoradius contour curvature signal, is defined, which allows rotational alignment to be determined using a simple process of 1D correlation. We test our system on both the University of York (UoY) 3D face dataset and the Face Recognition Grand Challenge (FRGC) 3D data. For the more challenging UoY data, our SSR descriptors significantly outperform three variants of spin images, successfully identifying nose vertices at a rate of 99.6%. Nose localisation performance on the higher quality FRGC data, which has only small pose variations, is 99.9%. Our best system successfully normalises the pose of 3D faces at rates of 99.1% (UoY data) and 99.6% (FRGC data).
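A crude analogue of the SSR histogram conveys the idea: count surface points between concentric spheres around a candidate nose vertex. The real descriptor samples an implicit RBF model of the surface rather than raw points, so this is a simplification:

```python
import numpy as np

def ssr_style_histogram(points, center, radii):
    """Spherically-sampled shape histogram sketch: occupancy of the shells
    between consecutive concentric spheres around a candidate vertex.
    points: (N, 3) surface points; radii: increasing shell boundaries."""
    d = np.linalg.norm(points - center, axis=1)
    hist, _ = np.histogram(d, bins=radii)
    return hist / max(1, len(points))   # normalized shell occupancy
```

Because the shells are concentric spheres, the signature is rotation-invariant by construction, which is what makes it suitable for vertex identification under unrestricted pose.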
Discrete-continuous optimization for large-scale structure from motion Recent work in structure from motion (SfM) has successfully built 3D models from large unstructured collections of images downloaded from the Internet. Most approaches use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the number of images grows, and can drift or fall into bad local minima. We present an alternative formulation for SfM based on finding a coarse initial solution using a hybrid discrete-continuous optimization, and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and the points, including noisy geotags and vanishing point estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it can produce models that are similar to or better than those produced with incremental bundle adjustment, but more robustly and in a fraction of the time.
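The two-stage pattern, discrete initialization followed by continuous refinement, can be shown generically. Here a plain search over a discrete candidate set stands in for the MRF stage, and scipy's Levenberg-Marquardt least squares stands in for bundle adjustment; both substitutions are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def discrete_continuous_fit(residual_fn, candidates):
    """Pick the best parameter vector from a discrete candidate set, then
    polish it with continuous Levenberg-Marquardt. residual_fn(x) must
    return at least as many residuals as x has parameters (an 'lm'
    requirement); candidates: iterable of 1-D parameter arrays."""
    x0 = min(candidates, key=lambda x: np.sum(residual_fn(x) ** 2))
    return least_squares(residual_fn, x0, method='lm').x
```

The benefit mirrors the paper's: the discrete stage lands the continuous solver near a good basin, so refinement neither drifts nor falls into the bad local minima that plague purely incremental approaches.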
Combined 2D-3D categorization and classification for multimodal perception systems In this article we describe an object perception system for autonomous robots performing everyday manipulation tasks in kitchen environments. The perception system gains its strength by exploiting the fact that the robots perform the same kinds of tasks with the same objects over and over again. It does so by learning the object representations necessary for recognition and reconstruction in the context of pick-and-place tasks. The system employs a library of specialized perception routines that solve different, well-defined perceptual sub-tasks and can be combined into composite perceptual activities, including the construction of an object model database, multimodal object classification, and object model reconstruction for grasping. We evaluate the effectiveness of our methods and give examples of application scenarios using our personal robotic assistants acting in a human living environment.
(Relevance scores for this row, score_0–score_13: 1.003614, 0.00625, 0.003785, 0.003347, 0.003142, 0.002171, 0.001136, 0.000574, 0.000311, 0.000071, 0.000012, 0.000003, 0, 0)