Columns:
  doc_id      string (length 4-40)
  title       string (length 7-300)
  abstract    string (length 2-10k)
  corpus_id   uint64 (range 171-251M)
834ed024ed283727bfa2b823b989131534458de2
A Fast Minutiae-Based Fingerprint Recognition System
The spectral minutiae representation is a method to represent a minutiae set as a fixed-length feature vector, which is invariant to translation, and in which rotation and scaling become translations, so that they can be easily compensated for. These characteristics enable the combination of fingerprint recognition systems with template protection schemes that require a fixed-length feature vector as input. Based on the spectral minutiae features, this paper introduces two feature reduction algorithms: the Column Principal Component Analysis and the Line Discrete Fourier Transform feature reductions, which can efficiently compress the template size with a reduction rate of 94%. With reduced features, we can also achieve a fast minutiae-based matching algorithm. This paper presents the performance of the spectral minutiae fingerprint recognition system and shows a matching speed of 125,000 comparisons per second on a PC with an Intel Pentium D processor at 2.80 GHz and 1 GB of RAM. This fast operation renders our system suitable as a preselector for a large-scale fingerprint identification system, thus significantly reducing the time to perform matching, especially in systems operating at a geographical level (e.g., police patrolling) or in complex critical environments (e.g., airports).
13,472,907
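The paper above compresses real-valued spectral-minutiae templates before matching. As a minimal sketch of PCA-style template compression (a generic reduction, not the authors' exact Column-PCA or Line-DFT algorithms; the `reduction_rate` parameter and the distance-based scoring are illustrative assumptions):

```python
import numpy as np

def fit_pca_compressor(templates, reduction_rate=0.94):
    """Fit a linear compressor on a (num_templates, dim) matrix of
    real-valued feature vectors, keeping roughly dim * (1 - reduction_rate)
    principal components."""
    mu = templates.mean(axis=0)
    # Principal directions from the SVD of the centered template matrix.
    _, _, vt = np.linalg.svd(templates - mu, full_matrices=False)
    k = max(1, min(len(vt), round(templates.shape[1] * (1.0 - reduction_rate))))
    return mu, vt[:k]

def compress(template, mu, components):
    """Project a template onto the retained principal subspace."""
    return components @ (template - mu)

def score(c1, c2):
    """Similarity of two compressed templates (negative distance)."""
    return -np.linalg.norm(c1 - c2)
```

Matching compressed templates costs one short vector distance, which is the kind of saving that makes comparison rates on the order of the abstract's 125,000 per second plausible for preselection.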
543a005dd1c6118c73e099e65119ae10c790969e
The Effect of Image Resolution on the Performance of a Face Recognition System
In this paper we investigate the effect of image resolution on the error rates of a face verification system. We do not restrict ourselves to the face recognition algorithm only, but we also consider the face registration. In our face recognition system, the face registration is done by finding landmarks in a face image and subsequent alignment based on these landmarks. To investigate the effect of image resolution we performed experiments where we varied the resolution. We investigate the effect of the resolution on the face recognition part, the registration part and the entire system. This research also confirms that accurate registration is of vital importance to the performance of the face recognition algorithm. The results of our face recognition system are optimal on face images with a resolution of 32 × 32 pixels.
13,855,616
77e61add817020078c024427807d09011319b756
Manually annotated characteristic descriptors: Measurability and variability
In this paper we study the measurability and variability of manually annotated characteristic descriptors on a forensically relevant face dataset. Characteristic descriptors are facial features (landmarks, shapes, etc.) that can be used during forensic case work. With respect to measurability, we observe that a significant proportion of these descriptors cannot be determined in images representative of forensic case work. For landmarks, closed and open shapes, and other forensic facial features, the variability mostly depends on the image quality. Up to 50% of all considered evidential values are either positively or negatively influenced by annotator variability. However, when considering images with the lowest quality, we found that more than 70% of the evidential value intervals could in principle yield the wrong conclusion.
20,042,402
650914a5cd2161e68d55abe9406b7476f78f777d
Biometric Authentication for a Mobile Personal Device
Secure access is a prerequisite for a mobile personal device (MPD) in a personal network (PN). An authentication method using biometrics, specifically face, is proposed in this paper. A fast face detection and registration method based on a Viola-Jones detector is implemented, and a face-authentication method based on subspace metrics is developed. Experiments show that the authentication method is effective with an equal error rate (EER) of 1.2%, despite its simplicity.
15,464,421
f11b7bb77ff20ca267a95581995c478fee1ae4b0
Pseudo Identities Based on Fingerprint Characteristics
This paper presents the integrated project TURBINE which is funded under the EU 7th research framework programme. This research is a multi-disciplinary effort on privacy enhancing technology, combining innovative developments in cryptography and fingerprint recognition. The objective of this project is to provide a breakthrough in electronic authentication for various applications in the physical world and on the Internet. On the one hand it will provide secure identity verification thanks to fingerprint recognition. On the other hand it will reliably protect the biometric data through advanced cryptography technology. In concrete terms, it will provide the assurance that (i) the data used for the authentication, generated from the fingerprint, cannot be used to restore the original fingerprint sample, (ii) the individual will be able to create different "pseudo-identities" for different applications with the same fingerprint, whilst ensuring that these different identities (and hence the related personal data) cannot be linked to each other, and (iii) the individual is enabled to revoke a biometric identifier (pseudo-identity) for a given application in case it should not be used anymore.
14,154,206
c4087a37f5add82446ec28ab9865d01ed67f6abc
A high quality finger vascular pattern dataset collected using a custom designed capturing device
The number of finger vascular pattern datasets available to the research community is scarce; therefore, a new finger vascular pattern dataset containing 1440 images is presented. This dataset is unique in its kind, as the images are of high resolution and have a known pixel density. Furthermore, this is the first dataset which contains the age, gender and handedness of the participating volunteers as meta data. The images have been captured using a custom designed capturing device. The various aspects of designing this capturing device are addressed in this paper as well. To confirm whether this new dataset is in fact an important contribution, some performance figures in terms of EER of several published state-of-the-art algorithms using this new dataset and an existing dataset from Peking University are presented. Using this new dataset, EERs down to 0.4% have been achieved.
724,483
6cc38f053c83fb885f4b7b1aa4bd185395583b2e
Adaptive interpolation of discrete-time signals that can be modeled as autoregressive processes
This paper presents an adaptive algorithm for the restoration of lost sample values in discrete-time signals that can locally be described by means of autoregressive processes. The only restrictions are that the positions of the unknown samples should be known and that they should be embedded in a sufficiently large neighborhood of known samples. The estimates of the unknown samples are obtained by minimizing the sum of squares of the residual errors that involve estimates of the autoregressive parameters. A statistical analysis shows that, for a burst of lost samples, the expected quadratic interpolation error per sample converges to the signal variance when the burst length tends to infinity. The method is in fact the first step of an iterative algorithm, in which in each iteration step the current estimates of the missing samples are used to compute the new estimates. Furthermore, the feasibility of implementation in hardware for real-time use is established. The method has been tested on artificially generated autoregressive processes as well as on digitized music and speech signals.
17,149,340
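The restoration step described above admits a compact linear-algebra sketch. Assuming the AR coefficients `a` (with `a[0] = 1`) have already been estimated from the surrounding known samples (the paper refines this estimate iteratively), minimizing the summed squared residual over the unknown samples is one linear solve:

```python
import numpy as np

def ar_interpolate(x, missing, a):
    """Restore x[missing] by minimizing the summed squared prediction
    residual e[n] = sum_k a[k] * x[n - k], with a[0] == 1."""
    x = np.asarray(x, dtype=float)
    a = np.asarray(a, dtype=float)
    p = len(a) - 1
    # b[m] = sum_k a[k] * a[k + m]: autocorrelation of the AR polynomial.
    b = np.array([a[:len(a) - m] @ a[m:] for m in range(p + 1)])
    bval = lambda m: b[abs(m)] if abs(m) <= p else 0.0
    t = list(missing)
    gaps = set(t)
    # Normal equations: sum_j b(ti - tj) x[tj] = -sum_{known n} b(ti - n) x[n].
    A = np.array([[bval(ti - tj) for tj in t] for ti in t])
    rhs = np.array([-sum(bval(ti - n) * x[n]
                         for n in range(max(0, ti - p), min(len(x), ti + p + 1))
                         if n not in gaps)
                    for ti in t])
    x_hat = x.copy()
    x_hat[t] = np.linalg.solve(A, rhs)
    return x_hat
```

The abstract's requirement that unknowns be embedded in a sufficiently large neighborhood of known samples appears here as the half-width-`p` window around each gap position.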
f5493eecad1877d3d9cdf16bde765ebf19b07b66
A Real Helper Data Scheme
The helper data scheme utilizes a secret key to protect biometric templates. The current helper data scheme requires binary feature representations, which introduce quantization errors and thus reduce the capacity of the biometric channel. For spectral-minutiae-based fingerprint recognition systems, Shannon theory shows that the current helper data scheme cannot yield more than 6 secret bits. A 6-bit secret key is too short to secure the storage of biometric templates. Therefore, we propose a new helper data scheme without quantization. A basic realization is to convert the real-valued feature vector into a phase vector. Applying the spectral minutiae method to the FVC2000-DB2 fingerprint database, our new helper data scheme together with repetition codes and BCH codes allows at least 76 secret bits.
6,470,266
37e3dd3535d6d89256df38593f2858811957cb20
Likelihood-ratio-based biometric verification
The paper presents results on optimal similarity measures for biometric verification based on fixed-length feature vectors. First, we show that the verification of a single user is equivalent to the detection problem, which implies that, for single-user verification, the likelihood ratio is optimal. Second, we show that, under some general conditions, decisions based on posterior probabilities and likelihood ratios are equivalent and result in the same receiver operating curve. However, in a multi-user situation, these two methods lead to different average error rates. As a third result, we prove theoretically that, for multi-user verification, the use of the likelihood ratio is optimal in terms of average error rates. The superiority of this method is illustrated by experiments in fingerprint verification. It is shown that error rates below 10^-3 can be achieved when using multiple fingerprints for template construction.
11,928,430
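Under a Gaussian model of fixed-length feature vectors, the optimal single-user test the paper argues for is a threshold on the log-likelihood ratio. A minimal sketch, assuming the user and population statistics are available from training data:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def log_lr(x, mu_user, cov_within, mu_pop, cov_total):
    """log p(x | genuine user) - log p(x | impostor population)."""
    return (mvn.logpdf(x, mean=mu_user, cov=cov_within)
            - mvn.logpdf(x, mean=mu_pop, cov=cov_total))

def verify(x, mu_user, cov_within, mu_pop, cov_total, threshold=0.0):
    # The threshold trades false accepts against false rejects; by the
    # Neyman-Pearson lemma no other score gives a better trade-off.
    return log_lr(x, mu_user, cov_within, mu_pop, cov_total) >= threshold
```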
4e8a8da80495f5fa65ed60ae910d643e227af7ce
A Bayesian model for predicting face recognition performance using image quality
Quality of a pair of facial images is a strong indicator of the uncertainty in a decision about identity based on that image pair. In this paper, we describe a Bayesian approach to model the relation between image quality (e.g., pose, illumination, noise, sharpness) and the corresponding face recognition performance. Experimental results based on the MultiPIE data set show that our model can accurately aggregate verification samples into groups for which the verification performance varies fairly consistently. Our model does not require similarity scores and can predict face recognition performance using only image quality information. Such a model has many applications. As an illustrative application, we show improved verification performance when the decision threshold automatically adapts according to the quality of the facial images.
11,609,948
62c5827bd5f180d50ba15bffe79b48b3d546d67b
Bit Rates in Audio Source Coding
The goal is to introduce and solve the audio coding optimization problem. Psychoacoustic results such as masking and excitation pattern models are combined with results from rate distortion theory to formulate the audio coding optimization problem. The solution of the audio optimization problem is a masked error spectrum, prescribing how quantization noise must be distributed over the audio spectrum to obtain a minimal bit rate and inaudible coding errors. This result can not only be used to estimate performance bounds, but can also be directly applied in audio coding systems. Subband coding applications to magnetic recording and transmission are discussed in some detail. Performance bounds for this type of subband coding system are derived.
1,607,000
acb4a2957e7131b442f87ac6e4fa2d3c1f31357e
Robust 3D face recognition in the presence of realistic occlusions
Facial occlusions pose significant problems for automatic face recognition systems. In this work, we propose a novel occlusion-resistant three-dimensional (3D) facial identification system. We show that, under extreme occlusions due to hair, hands, and eyeglasses, typical 3D face recognition systems exhibit poor performance. In order to deal with occlusions, our proposed system employs occlusion-resistant registration, occlusion detection, and regional classifiers. A two-step registration module first detects the nose region on the curvedness-weighted convex shape index map, and then performs fine alignment using a nose-based Iterative Closest Point (ICP) algorithm. Occluded areas are determined automatically via a generic face model. After non-facial parts introduced by occlusions are removed, a variant of Gappy Principal Component Analysis (Gappy PCA) is used to restore the full face from occlusion-free facial surfaces. Experimental results obtained on realistically occluded facial images from the Bosphorus 3D face database show that, with the use of score-level fusion of regional Linear Discriminant Analysis (LDA) classifiers, the proposed method improves rank-1 identification accuracy significantly: from 76.12% to 94.23%.
6,320,176
0126f1566a5a9ba051137afb6c1fe28f93584def
Robust Biometric Score Fusion by Naive Likelihood Ratio via Receiver Operating Characteristics
This paper presents a novel method of fusing multiple biometrics on the matching score level. We estimate the likelihood ratios of the fused biometric scores via the individual receiver operating characteristics (ROCs), which construct the Naive Bayes classifier. Using a limited number of operating points on the ROC, we are able to realize reliable and robust estimation of the Naive Bayes probability without explicit estimation of the genuine and impostor score distributions. Different from previous work, the method takes into consideration a particular characteristic of the matching score: its quantitative value is already an indication of the sample's likelihood of being genuine. This characteristic is integrated into the proposed method to improve the fusion performance while reducing the inherent algorithmic complexity. We demonstrate by experiments that the proposed method is reliable and robust, and suitable for a wide range of matching score distributions in realistic data and public databases.
18,329,936
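A sketch of the histogram flavour of this construction: per-bin log-LRs are estimated from genuine and impostor score masses between consecutive thresholds (the chord slopes of the empirical ROC), then summed across modalities for naive-Bayes fusion. The bin edges and smoothing constant are illustrative assumptions, and any monotonicity post-processing the paper applies is omitted:

```python
import numpy as np

def fit_bin_llr(gen, imp, edges):
    """Per-bin log likelihood ratio from genuine / impostor score masses;
    each bin corresponds to the ROC chord between two operating points."""
    pg, _ = np.histogram(gen, bins=edges)
    pi, _ = np.histogram(imp, bins=edges)
    pg = pg / pg.sum()
    pi = pi / pi.sum()
    eps = 1e-6  # avoid log(0) in empty bins
    return np.log(pg + eps) - np.log(pi + eps)

def fuse(scores, llr_tables, edges_list):
    """Naive-Bayes fusion over modalities: sum the per-modality log-LRs."""
    total = 0.0
    for s, table, edges in zip(scores, llr_tables, edges_list):
        i = np.clip(np.digitize(s, edges) - 1, 0, len(table) - 1)
        total += table[i]
    return total
```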
1fb860f43d806cd4164c7d09233103e898acd446
Biometric Authentication for a Mobile Personal Device
Secure access is a prerequisite for a mobile personal device (MPD) in a personal network (PN). An authentication method using biometrics, specifically face, is proposed in this paper. A fast face detection and registration method based on a Viola-Jones detector is implemented, and a face-authentication method based on subspace metrics is developed. Experiments show that the authentication method is effective with an equal error rate (EER) of 1.2%, despite its simplicity.
62,308,683
a28703301e099d5f1c536289d79937faab37aeed
Likelihood-ratio-based verification in high-dimensional spaces.
The increase of the dimensionality of data sets often leads to problems during estimation, which are denoted as the curse of dimensionality. One of the problems of second-order statistics (SOS) estimation in high-dimensional data is that the resulting covariance matrices are not full rank, so their inversion, for example, needed in verification systems based on the likelihood ratio, is an ill-posed problem, known as the singularity problem. A classical solution to this problem is the projection of the data onto a lower dimensional subspace using principal component analysis (PCA) and it is assumed that any further estimation on this dimension-reduced data is free from the effects of the high dimensionality. Using theory on SOS estimation in high-dimensional spaces, we show that the solution with PCA is far from optimal in verification systems if the high dimensionality is the sole source of error. For moderate dimensionality, it is already outperformed by solutions based on Euclidean distances and it breaks down completely if the dimensionality becomes very high. We propose a new method, the fixed-point eigenwise correction, which does not have these disadvantages and performs close to optimal.
54,557,474
c6263b89716a1fa8bb1d6a75e2f5849758a990dd
Verification Under Increasing Dimensionality
Verification decisions are often based on second order statistics estimated from a set of samples. Ongoing growth of computational resources allows for considering more and more features, increasing the dimensionality of the samples. If the dimensionality is of the same order as the number of samples used in the estimation or even higher, then the accuracy of the estimate decreases significantly. In particular, the eigenvalues of the covariance matrix are estimated with a bias and the estimates of the eigenvectors differ considerably from the real eigenvectors. We show how a classical approach of verification in high dimensions is severely affected by these problems, and we show how bias correction methods can reduce these problems.
9,782,167
be4a695cf64310281c9936175e775e8f22795d6a
A Bootstrap Approach to Eigenvalue Correction
Eigenvalue analysis is an important aspect in many data modeling methods. Unfortunately, the eigenvalues of the sample covariance matrix (sample eigenvalues) are biased estimates of the eigenvalues of the covariance matrix of the data generating process (population eigenvalues). We present a new method based on bootstrapping to reduce the bias in the sample eigenvalues: the eigenvalue estimates are updated in several iterations, where in each iteration synthetic data is generated to determine how to update the population eigenvalue estimates. Comparison of the bootstrap eigenvalue correction with a state-of-the-art correction method by Karoui shows that, depending on the type of population eigenvalue distribution, sometimes the Karoui method performs better and sometimes our bootstrap method does.
9,899,033
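One plausible fixed-point realization of the bootstrap idea, not necessarily the authors' exact update rule: simulate Gaussian data from the current population-eigenvalue estimate, observe how biased its sample eigenvalues come out, and shift the estimate to cancel that bias.

```python
import numpy as np

def bootstrap_eig_correction(sample_eigs, n, iters=20, draws=10, seed=0):
    """sample_eigs: descending eigenvalues of a covariance matrix estimated
    from n observations. Returns bias-reduced population estimates."""
    rng = np.random.default_rng(seed)
    obs = np.asarray(sample_eigs, dtype=float)
    est = obs.copy()
    p = len(est)
    for _ in range(iters):
        sim = np.zeros(p)
        for _ in range(draws):
            # Synthetic data whose population spectrum is the current estimate.
            X = rng.standard_normal((n, p)) * np.sqrt(np.maximum(est, 0.0))
            sim += np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]
        sim /= draws
        # If the simulated sample spectrum overshoots the observed one,
        # the population estimate is lowered, and vice versa.
        est += obs - sim
    return est
```

At the fixed point, data generated from the estimated population spectrum reproduces the observed sample eigenvalues on average, which is the consistency the bias correction is after.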
1b0c6a1d2b74f15e1c7458683e67d09b38c9ccec
Model-based reconstruction for illumination variation in face images
We propose a novel method to correct for arbitrary illumination variation in face images. The main purpose is to improve recognition results for face images taken under uncontrolled illumination conditions. We correct the illumination variation in a face image using a face shape model, which allows us to estimate the face shape in the image. Using this face shape, we can reconstruct the face image under frontal illumination. These reconstructed images improve the results in face identification. We experimented with face images acquired both under different controlled illumination conditions in a laboratory and under uncontrolled illumination conditions.
8,141,720
aa73d01cc2a30b54098f4ceba883a144e6f9523c
Beyond the eye of the beholder: On a forensic descriptor of the eye region
The task of forensic facial experts is to assess the likelihood that a suspect is depicted on crime scene images. They typically (a) use morphological analysis when comparing parts of the facial region, and (b) combine this partial evidence into a final judgment. Facial parts can be considered as soft biometric modalities and have been studied in the biometric community in recent years. In this paper we focus on the region around the eye from a forensic perspective by applying the FISWG feature list of the eye modality. We compare existing work from the soft biometric perspective, based on a texture descriptor, with our approach.
10,262,205
3c486ca382f7aca7d83344e8492589bcc17a5659
Maximum Key Size and Classification Performance of Fuzzy Commitment for Gaussian Modeled Biometric Sources
Template protection techniques are used within biometric systems in order to protect the stored biometric template against privacy and security threats. A large portion of template protection techniques are based on extracting a key from, or binding a key to, the binary vector derived from the biometric sample. The size of the key plays an important role, as the achieved privacy and security mainly depend on the entropy of the key. In the literature, it can be observed that there is a large variation in the reported key lengths at similar classification performance of the same template protection system, even when based on the same biometric modality and database. In this work, we determine the analytical relationship between the classification performance of the fuzzy commitment scheme and the theoretical maximum key size, given a Gaussian biometric source as input. We show the effect of system parameters such as the biometric source capacity, the number of feature components, the number of enrollment and verification samples, and the target performance on the maximum key size. Furthermore, we provide an analysis of the effect of feature interdependencies on the estimated maximum key size and classification performance. Both the theoretical analysis and an experimental evaluation using the MCYT fingerprint database showed that feature interdependencies have a large impact on performance and key size estimates. This property can explain the large deviation in reported key sizes in the literature.
8,075,051
4e8f301dbedc9063831da1306b294f2bd5b10477
Discriminating Power of FISWG Characteristic Descriptors Under Different Forensic Use Cases
FISWG characteristic descriptors are facial features that can be used for evidence evaluation during forensic case work. In this paper we investigate the discriminating power of a biometric system that uses these characteristic descriptors as features under different forensic use cases. We show that in every forensic use case we can find characteristic descriptors that exhibit moderate to low discriminating power. In all but one use case, a commercial face recognition system outperforms the characteristic descriptors. However, for low-resolution surveillance camera images, some (combinations of) characteristic descriptors yield better results than commercial systems.
983,326
c05822fdddfec2531143000c5b52f791f63f6b00
Identification Performance of Evidential Value Estimation for Fingermarks
Law enforcement agencies around the world use biometrics and fingerprints to solve and fight crime. Forensic experts are needed to record fingermarks at crime scenes and to ensure those captured are of evidential value. This process needs to be automated and streamlined as much as possible to improve efficiency and reduce workload. It has previously been demonstrated that it is possible to estimate a fingermark's evidential value automatically for image captures taken with a mobile phone or other devices, such as a scanner or a high-quality camera. Here we study the relationship between a fingermark being of evidential value and its correct and certain identification, and whether it is possible to achieve identification despite the mark not having sufficient evidential value. Subsequently, we also investigate the influence of the capture device used and whether a mobile phone is a viable option. Our results show that automatic identification is possible for 126 of the 1,428 fingermarks captured by a mobile phone, of which 116 were marked as having evidential value by experts and 123 by an automated algorithm.
4,866,958
1fe8b8dc1271b0cb5ce37f21be5809546597cfdf
Performances of the likelihood-ratio classifier based on different data modelings
The classical likelihood ratio classifier easily collapses in many biometric applications, especially with independent training and test subjects. The reason lies in the inaccurate estimation of the underlying user-specific feature density. Firstly, the feature density estimation suffers from an insufficient number of user-specific samples during the enrollment phase. Even if more enrollment samples are available, it is most likely that they are not reliable enough. Furthermore, it may happen that enrolled samples do not obey the Gaussian density model. Therefore, it is crucial to properly estimate the underlying user-specific feature density in the above situations. In this paper, we give an overview of several data modeling methods. Furthermore, we propose a discretized density based data model. Experimental results on the FRGC face data set have shown reasonably good performance with our proposed model.
1,485,146
a71ce884e2c6fd8b47e3ef189b7be7251d540291
Pitfall of the Detection Rate Optimized Bit Allocation within template protection and a remedy
One of the requirements of a biometric template protection system is that the protected template ideally should not leak any information about the biometric sample or its derivatives. In the literature, several proposed template protection techniques are based on binary vectors. Hence, they require the extraction of a binary representation from the real-valued biometric sample. In this work we focus on the Detection Rate Optimized Bit Allocation (DROBA) quantization scheme that extracts multiple bits per feature component while maximizing the overall detection rate. The allocation strategy has to be stored as auxiliary data for reuse in the verification phase and is considered public. This implies that the auxiliary data should not leak any information about the extracted binary representation. Experiments in our work show that the original DROBA algorithm, as known in the literature, creates auxiliary data that leaks a significant amount of information. We show how an adversary is able to exploit this information and significantly increase its success rate of obtaining a false accept. Fortunately, the information leakage can be mitigated by restricting the allocation freedom of the DROBA algorithm. We propose a method based on population statistics and empirically illustrate its effectiveness. All the experiments are based on the MCYT fingerprint database using two different texture-based feature extraction algorithms.
7,018,442
130c62f6cd9d6b6fb9d260d5708b3fa4d603143f
A concatenated coding scheme for biometric template protection
Cryptography may mitigate the privacy problem in biometric recognition systems. However, cryptographic technologies lack error-tolerance, and biometric samples cannot be reproduced exactly, raising a robustness problem. A biometric template protection system needs a good feature extraction algorithm to be a good classifier. However, even an effective feature extractor can give a very low-quality biometric channel (i.e., a high Bit Error Rate (BER)). Using the Spectral Minutiae method to identify fingerprints is one example, which gives a BER of 40-50% on most of the matching channels. Therefore, we propose a concatenated coding scheme based on erasure codes to achieve a robust and secure biometric recognition system. The key idea is to transmit more packets than needed for decoding and allow erasure-encoded packets suffering a high BER to be discarded. The erasure decoder can reconstruct the secret key by collecting enough surviving packets. Applying the spectral minutiae method to the FVC2000-DB2 fingerprint database, the unprotected system achieves an EER of 3.7% and our proposed coding scheme reaches an EER of 4.6% with a 798-bit secret key.
10,927,421
33636053d0f288d509204fd30b115d9d4ce172f1
The spectral relevance of glottal-pulse parameters
The paper analyses how variations of the parameters of the Liljencrants-Fant (1985) model of glottal flow influence the speech spectrum, in order to determine the spectral relevance of these parameters. The effects of small parameter variations are described analytically. This analysis also gives an indication to what extent the LF parameters can be estimated reliably from the speech spectrum. The effects of larger parameter variations are discussed with the help of figures. Results are presented for a number of sets of estimated glottal-pulse parameters that were taken from the literature. The main conclusion is that the LF model, which, given the fundamental period, is a three-parameter model, actually operates as a one- or a two-parameter model.
16,096,392
7bec9f7fe9c8f13b7514ea5de4c3f8fec2afdd2b
Hybrid fusion for biometrics: Combining score-level and decision-level fusion
A general framework of fusion at decision level, which works on ROCs instead of matching scores, is investigated. Under this framework, we further propose a hybrid fusion method, which combines the score-level and decision-level fusions, taking advantage of both fusion modes. The hybrid fusion adaptively tunes itself between the two levels of fusion, and improves the final performance over the original two levels. The proposed hybrid fusion is simple and effective for combining different biometrics.
5,611,425
769fe6803435feb6a395d84b953a7b081d4eeb4f
Binary Representations of Fingerprint Spectral Minutiae Features
A fixed-length binary representation of a fingerprint has the advantages of fast operation and small template storage. For many biometric template protection schemes, a binary string is also required as input. The spectral minutiae representation is a method to represent a minutiae set as a fixed-length real-valued feature vector. In order to be able to apply the spectral minutiae representation with a template protection scheme, we introduce two novel methods to quantize the spectral minutiae features into binary strings: Spectral Bits and Phase Bits. The experiments on the FVC2002 database show that the binary representations can even outperform the spectral minutiae real-valued features.
9,338,253
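The exact Spectral Bits and Phase Bits constructions are defined in the paper; purely as a hypothetical illustration of the general recipe (threshold each magnitude coefficient against a population statistic; take bits from the quadrant of each complex coefficient):

```python
import numpy as np

def magnitude_bits(mag, pop_mean):
    """One bit per coefficient: is the magnitude above the population mean?
    (Hypothetical thresholding; the paper's Spectral Bits may differ.)"""
    return (mag > pop_mean).astype(np.uint8)

def quadrant_bits(spec):
    """Two bits per complex coefficient from the signs of Re and Im.
    (Hypothetical; the paper's Phase Bits may quantize phase differently.)"""
    return np.stack([spec.real > 0, spec.imag > 0], axis=-1).astype(np.uint8)

def hamming_distance(b1, b2):
    # Binary templates compare via Hamming distance, which is what makes
    # them compatible with fuzzy-commitment-style template protection.
    return int(np.count_nonzero(b1 != b2))
```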
e6d9d3a2f1560e507a24b8cfe3d2f4369c79e0f6
Impact of eye detection error on face recognition performance
The locations of the eyes are the most commonly used features to perform face normalisation (i.e. alignment of facial features), which is an essential preprocessing stage of many face recognition systems. In this study, the authors study the sensitivity of open source implementations of five face recognition algorithms to misalignment caused by eye localisation errors. They investigate the ambiguity in the location of the eyes by comparing the difference between two independent manual eye annotations. They also study the error characteristics of automatic eye detectors present in two commercial face recognition systems. Furthermore, they explore the impact of using different eye detectors for training/enrolment and query phases of a face recognition system. These experiments provide an insight into the influence of eye localisation errors on the performance of face recognition systems and recommend a strategy for the design of training and test sets of a face recognition algorithm.
1,634,788
2727b5eb800a89570c4c54b2b3bc726be29ed170
Designing a Low-Resolution Face Recognition System for Long-Range Surveillance
Most face recognition systems deal well with high-resolution facial images, but perform much worse on low-resolution facial images. In low-resolution face recognition, there is a specific but realistic surveillance scenario: a surveillance camera monitoring a large area. In this scenario, usually the gallery images are of high-resolution and the probe images are of various low-resolutions depending on the distances between the subject and the camera. In this paper, we design a low-resolution face recognition system for this scenario. We use a state-of-the-art mixed-resolution classifier to deal with the resolution mismatch between the gallery and probe images. We also set up experiments to explore the best training configuration for probe images of various resolutions. Our experimental results show that one classifier which is trained on images of various resolutions covering the whole range has promising results in the long-range surveillance scenario. This system has at least as good performance as combining multiple face recognition systems that are optimised for different resolutions.
4,853,840
a90fd922047d3c8262e0905783172898dc181b42
The relation between the secrecy rate of biometric template protection and biometric recognition performance
A theoretical result relating the maximum achievable security of the family of biometric template protection systems known as key-binding systems to the recognition performance of a biometric recognition system that is optimal in Neyman-Pearson sense is derived. The relation allows for the computation of the maximum achievable key length from the Receiver Operating Characteristic (ROC) of the optimal biometric recognition system. Illustrative examples that demonstrate how the shape of the ROC impacts the security of a template protection system are presented and discussed.
10,084,551
02330cf6800a803784db7d6944fc4930ae30e1d5
Reducing audible spectral discontinuities
A common problem in diphone synthesis is discussed, viz., the occurrence of audible discontinuities at diphone boundaries. Informal observations show that spectral mismatch is most likely the cause of this phenomenon. We first set out to find an objective spectral measure for discontinuity. To this end, several spectral distance measures are related to the results of a listening experiment. Then, we studied the feasibility of extending the diphone database with context-sensitive diphones to reduce the occurrence of audible discontinuities. The number of additional diphones is limited by clustering consonant contexts that have a similar effect on the surrounding vowels on the basis of the best performing distance measure. A listening experiment has shown that the addition of these context-sensitive diphones significantly reduces the amount of audible discontinuities.
10,742,723
7f819275132a6026e7323015d1c55f1fe4779248
Grip-Pattern Verification for Smart Gun Based on Maximum-Pairwise Comparison and Mean-Template Comparison
In our biometric verification system for a smart gun, the rightful user of a gun is authenticated by grip-pattern recognition. In this work, verification is done using two types of comparison methods. One is mean-template comparison, where the matching score between a test image and a subject is computed by comparing the test image to the mean of the training samples of this subject. The other is maximum-pairwise comparison, where the matching score between a test image and a subject is selected as the maximum among all the similarity scores resulting from comparing the test image to each training sample of this subject. Experimental results show that a much lower false-acceptance rate can be obtained at the required false-rejection rate of our system using maximum-pairwise comparison than using mean-template comparison.
15,311,664
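Both comparison rules reduce to a few lines; negative Euclidean distance stands in here for whatever similarity measure the grip-pattern system actually uses:

```python
import numpy as np

def mean_template_score(test, enroll):
    """Compare the test image to the mean of the subject's training samples."""
    return -np.linalg.norm(test - enroll.mean(axis=0))

def max_pairwise_score(test, enroll):
    """Take the best match over all of the subject's training samples."""
    return max(-np.linalg.norm(test - e) for e in enroll)
```

Keeping every training sample, as max-pairwise does, avoids averaging away intra-subject variation in the enrollment set, at the price of more comparisons per verification.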
68b73d5d9a6e76d6611af0c6b71483768571716b
Fingerprint Verification Using Spectral Minutiae Representations
Most fingerprint recognition systems are based on the use of a minutiae set, which is an unordered collection of minutiae locations and orientations suffering from various deformations such as translation, rotation, and scaling. The spectral minutiae representation introduced in this paper is a novel method to represent a minutiae set as a fixed-length feature vector, which is invariant to translation, and in which rotation and scaling become translations, so that they can be easily compensated for. These characteristics enable the combination of fingerprint recognition systems with template protection schemes that require a fixed-length feature vector. This paper introduces algorithms for two representation methods: the location-based spectral minutiae representation and the orientation-based spectral minutiae representation. Both algorithms are evaluated using two correlation-based spectral minutiae matching algorithms. We present the performance of our algorithms on three fingerprint databases. We also show how the performance can be improved by using a fusion scheme and singular points.
8,746,517
aa255b96f6fbba52c2b42cc3cd291494b1d93268
Binary Biometrics: An Analytic Framework to Estimate the Performance Curves Under Gaussian Assumption
In recent years, the protection of biometric data has gained increased interest from the scientific community. Methods such as the fuzzy commitment scheme, helper-data system, fuzzy extractors, fuzzy vault, and cancelable biometrics have been proposed for protecting biometric data. Most of these methods use cryptographic primitives or error-correcting codes (ECCs) and use a binary representation of the real-valued biometric data. Hence, the difference between two biometric samples is given by the Hamming distance (HD) or bit errors between the binary vectors obtained from the enrollment and verification phases, respectively. If the HD is smaller (larger) than the decision threshold, then the subject is accepted (rejected) as genuine. Because of the use of ECCs, this decision threshold is limited to the maximum error-correcting capacity of the code, consequently limiting the false rejection rate (FRR) and false acceptance rate tradeoff. A method to improve the FRR consists of using multiple biometric samples in either the enrollment or verification phase. The noise is suppressed, hence reducing the number of bit errors and decreasing the HD. In practice, the number of samples is empirically chosen without fully considering its fundamental impact. In this paper, we present a Gaussian analytical framework for estimating the performance of a binary biometric system given the number of samples being used in the enrollment and the verification phase. The detection error tradeoff curve that combines the false acceptance and false rejection rates is estimated to assess the system performance. The analytic expressions are validated using the Face Recognition Grand Challenge v2 and Fingerprint Verification Competition 2000 biometric databases.
16,648,199
3a57c3adfe8e8c1fa3276bf26b42bad949ca6ba1
The centroid of the symmetrical Kullback-Leibler distance
This paper discusses the computation of the centroid induced by the symmetrical Kullback-Leibler distance. It is shown that it is the unique zeroing argument of a function which only depends on the arithmetic and the normalized geometric mean of the cluster. An efficient algorithm for its computation is presented. Speech spectra are used as an example.
15,490,536
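Taking the symmetrical Kullback-Leibler distance between positive spectra as d(p, q) = sum_i (p_i - q_i)(ln p_i - ln q_i) (an assumption; the paper may impose a normalization), setting the gradient of the summed distance to zero gives, per bin, a/c - ln(c/g) = 1, with a the arithmetic and g the geometric mean of the cluster, which the Lambert W function solves in closed form:

```python
import numpy as np
from scipy.special import lambertw

def skl(p, q):
    """Symmetrical Kullback-Leibler distance between positive spectra."""
    return float(np.sum((p - q) * (np.log(p) - np.log(q))))

def skl_centroid(P):
    """Centroid of the rows of P: per bin, c = a / W(e * a / g),
    since a/c - ln(c/g) = 1 rearranges to (a/c) * e**(a/c) = e * a / g."""
    a = P.mean(axis=0)                    # arithmetic mean per bin
    g = np.exp(np.log(P).mean(axis=0))    # geometric mean per bin
    return a / lambertw(np.e * a / g).real

# Tiny check: the centroid should not beat itself as a minimizer.
P = np.abs(np.random.default_rng(0).normal(1.0, 0.2, (5, 8))) + 0.1
c = skl_centroid(P)
```

Consistent with the abstract, the centroid depends on the cluster only through its arithmetic and geometric means, and a/g >= 1 (AM-GM) keeps the Lambert W argument at or above e, so the real branch always applies.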
4156e519e6774bd4fd31ae74f96b20f58736b6cb
Regional fusion for high-resolution palmprint recognition using spectral minutiae representation
The spectral minutiae representation (SMC) has recently been proposed as a novel method for minutiae-based fingerprint recognition, which is invariant to minutiae translation and rotation and presents low computational complexity. As high-resolution palmprint recognition is also mainly based on minutiae sets, SMC has been applied to palmprints and used in full-to-full palmprint matching. However, the performance of that approach was still limited. As one of the main reasons for this is the much bigger size of a palmprint compared with a fingerprint, the authors propose a division of the palmprint into smaller regions. Then, to further improve the performance of spectral minutiae-based palmprint matching, in this work the authors present anatomically inspired regional fusion while using SMC for palmprints. Firstly, the authors consider three regions of the palm, namely interdigital, thenar and hypothenar, which have inspiration in anatomic cues. Then, the authors apply SMC to region-to-region palmprint comparison and study regional discriminability when using the method. After that, the authors implement regional fusion at score level by combining the scores of different regional comparisons in the palm with two fusion methods, that is, sum rule and logistic regression. The authors evaluate region-to-region comparison and regional fusion based on spectral minutiae matching on a public high-resolution palmprint database, THUPALMLAB. Both manual segmentation and automatic segmentation are performed to obtain the three palm regions for each palm. Essentially using the complex SMC, the authors obtain results on region-to-region comparison which show that the hypothenar and interdigital regions outperform the thenar region. More importantly, the authors achieve significant performance improvements by regional fusion using regions segmented both manually and automatically. One main advantage of the approach the authors took is that human examiners can segment the palm into the three regions without prior knowledge of the system, which makes the segmentation process easy to incorporate into protocols such as those used in forensic science.
7,764,996
d56d334da173e971e7ee2b1b9ec66da1fdb0bb00
Biometric Authentication System on Mobile Personal Devices
We propose a secure, robust, and low-cost biometric authentication system on the mobile personal device for the personal network. The system consists of the following five key modules: 1) face detection; 2) face registration; 3) illumination normalization; 4) face verification; and 5) information fusion. For the complicated face authentication task on the devices with limited resources, the emphasis is largely on the reliability and applicability of the system. Both theoretical and practical considerations are taken. The final system is able to achieve an equal error rate of 2% under challenging testing protocols. The low hardware and software cost makes the system well adaptable to a large range of security applications.
14,079,193
894f540ed8e603a51c22c7040a5485dff856ae25
Effect of calibration data on forensic likelihood ratio from a face recognition system
A biometric system used for forensic evaluation requires a conversion of the score to a likelihood ratio. A likelihood ratio can be computed as the ratio of the probability of a score given the prosecution hypothesis is true and the probability of a score given the defense hypothesis is true. In this paper we study two different approaches to forensic likelihood ratio computation in the context of forensic face recognition. These approaches differ in the databases they use to obtain the score distribution under the prosecution and the defense hypothesis and therefore consider slightly different interpretations of these hypotheses. The goal of this study is to quantify the effect of these approaches on the resultant likelihood ratio in the context of evidence evaluation from a face recognition system. A state-of-the-art commercial face recognition system is employed for facial image comparison and computation of scores. A simple forensic case is simulated by randomly selecting a small subset from the FRGC database. Images in this subset are used to estimate the score distribution under the prosecution and the defense hypothesis, and the effect of different approaches to likelihood ratio computation is demonstrated and explained. It is observed that there is significant variation in the resultant likelihood ratios when the databases used to model the prosecution and defense hypotheses are varied.
870,961
31072f36bcb4169fa3f45d74557d41cab4af75bd
Low-resolution face alignment and recognition using mixed-resolution classifiers
A very common case for law enforcement is recognition of suspects from a long distance or in a crowd. This is an important application for low-resolution face recognition (in the authors' case, face region below 40 × 40 pixels in size). Normally, high-resolution images of the suspects are used as references, which will lead to a resolution mismatch of the target and reference images since the target images are usually taken at a long distance and are of low resolution. Most existing methods that are designed to match high-resolution images cannot handle low-resolution probes well. In this study, they propose a novel method especially designed to compare low-resolution images with high-resolution ones, which is based on the log-likelihood ratio (LLR). In addition, they demonstrate the difference in recognition performance between real low-resolution images and images down-sampled from high-resolution ones. Misalignment is one of the most important issues in low-resolution face recognition. Two approaches - matching-score-based registration and extended training of images with various alignments - are introduced to handle the alignment problem. Their experiments on real low-resolution face databases show that their methods outperform the state-of-the-art.
5,883,275
d3f5431bc06f78a57c7cabb63fc486908d8dfcae
Multiple component predictive coding framework of still images
In this paper, we propose a multiple component predictive coding framework. We first separate the reconstructed image into several subcomponents, and then predict each subcomponent independently but encode them together. To separate an image into multiple subcomponents, we also propose a fast operator-based image separation algorithm. With the help of the multicomponent prediction strategy, our prediction results achieve superior performance to the H.264/AVC intra frame prediction method for images containing rich textures. By adopting the residue coding method used in H.264/AVC, we compare the compression efficacy of our proposed algorithm with the state-of-the-art JPEG2000 and H.264/AVC intra frame compression algorithms in the experimental part. The numerical results show that our algorithm is better than both the H.264/AVC intra frame coding algorithm and the JPEG2000 algorithm for images with ample textures.
17,231,085
20c58d07ad60e6ed26f22c08d77678ba69f41003
A fast content-dependent interpolation approach via adaptive filtering
Improving the subjective quality and reducing the computational complexity of interpolation algorithms are important issues in video and network signal processing. To this end, we propose a fast adaptive image interpolation algorithm that classifies pixels and uses different linear interpolation kernels that are adaptive to the class of a pixel. Pixels are classified into regions relevant to the perception of an image, either in a texture region, an edge region, or a smooth region. Image interpolation is performed with Neville filters, which can be efficiently implemented by a lifting scheme. Since linear interpolation tends to over-smooth pixels in edge regions and texture regions, we apply the Laplacian operator to enhance the pixels in those regions. The results of simulations show that the proposed algorithm not only reduces the computational complexity of the process, but also improves the visual quality of the interpolated images.
18,067,948
35a7d41b56379065c3e8ce8abbcdfe0dc513d97b
Distortion-optimized transmission of multiple description coded images over noisy channels with feedback
Transmission of compressed images over noisy channels is challenging because the encoded bitstreams are sensitive to channel errors. Multiple description coding (MDC) schemes encode a given image into multiple independent descriptions and then transmit them over separate channels; this way, packet loss in one (or some) channel(s) can be compensated by the received packets in other channels. However, image content cannot be recovered when all descriptions are lost simultaneously. Instead of using error correction codes, we propose to transmit MDC descriptions over communication channels with feedback. The NACK-only, SR-ARQ scheme is applied as the transport protocol. Upon receiving a NACK, the sender follows the proposed optimal transmission strategy to select the next packet to transmit such that the end-to-end distortion of the received image is minimized.
17,158,020
0a900362927833c3bb34ac433c1b8afdd3eb6595
Parameter estimation of a fractional Brownian motion in a white noise using wavelets
To discriminate the fractal parameter of a fractional Brownian motion (fBm) embedded in white noise is equivalent to discriminating the composite singularity formed by superimposing a peak singularity upon a Dirac singularity. We use the autocorrelation of the wavelet transform coefficients to characterize the composite singularity, formalizing this problem as a nonlinear optimization problem. We modify the internal penalty function method to efficiently estimate the parameters of the fBm in the white noise.
121,870,259
5f17b0f9368940841833ad543e934115c8d9c017
Subband Weighting With Pixel Connectivity for 3-D Wavelet Coding
Performing optimal bit-allocation with 3-D wavelet coding methods is difficult because energy is not conserved after applying the motion-compensated temporal filtering (MCTF) process and the spatial wavelet transform. The problem cannot be solved by extending the 2-D wavelet coefficients weighting method directly and then applying the result to 3-D wavelet coefficients, since this approach does not consider the complicated pixel connectivity that results from the lifting-based MCTF process. In this paper, we propose a novel weighting method, which takes account of the pixel connectivity, to solve the problem and derive the effect of the quantization error of a subband on the reconstruction error of a group of pictures. We employ the proposed method on a 2-D+t structure with different temporal filters, namely the 5-3 filter and the 9-7 filter. Experiments on various coding parameters and sequences show that the proposed approach improves the bit-allocation performance over that obtained by using the weightings derived without considering the pixel connectivity in the MCTF process.
17,691,218
8d8d4cfe96e18119cd647462e502d8a51788c4f8
Gridding spot centers of smoothly distorted microarray images
We use an optimization technique to accurately locate a distorted grid structure in a microarray image. By assuming that spot centers deviate smoothly from a checkerboard grid structure, we show that the process of gridding spot centers can be formulated as a constrained optimization problem, where the constraint is imposed on the variations of the transform parameters. We demonstrate the accuracy of our algorithm on two sets of microarray images. One set consists of images from the Stanford Microarray Database; we compare our centers with those annotated in the Database. The other set consists of oligonucleotide images, and we compare our results with those obtained by GenePix Pro 5.0. Our experiments were performed completely automatically.
6,264,840
c98ff2588ae4f61329304ed244cf17be7b072a6e
Shape from texture: estimation of planar surface orientation through the ridge surfaces of continuous wavelet transform
In this correspondence, a method is proposed for estimating the surface orientation of a planar texture under perspective projection based on the ridge of a two-dimensional (2-D) continuous wavelet transform (CWT). We show that an analytical solution of the surface orientation can be derived from the scales of the ridge surface. A comparative study with an existing method is given.
9,865,521
18c10ebf202a3b2aed59ddf5204e399bfe439123
An asymmetric subspace watermarking method for copyright protection
We present an asymmetric watermarking method for copyright protection that uses different matrix operations to embed and extract a watermark. It allows for the public release of all information, except the secret key. We investigate the conditions for a high detection probability, a low false positive probability, and the possibility of unauthorized users successfully hacking into our system. The robustness of our method is demonstrated by the simulation of various attacks.
236,445,281
ec9ecdbeb356b3c0707336ffb76b86498c3e440b
An ARQ-based diversity system for transmission of EZW compressed images over noisy channels
Transmission of compressed images is challenging because the communication channels can be noisy and the compressed bitstreams are sensitive to channel errors. Most of the methods proposed in the literature do not guarantee the quality of the received image over noisy channels. We propose an ARQ-based diversity system as a solution, in which the use of the diversity system avoids problems when one or more channels get congested, and the ARQ scheme guarantees the quality of the received image. A specific ARQ-based diversity system is designed for the transmission of images compressed using EZW (embedded zerotree wavelet). Experimental performance results are shown.
37,935,616
2369cccf762a427bbd06c0654f33aaf736a6861b
A multi-channel channel-optimized scheme for EZW using rate-distortion functions
We develop a multi-channel channel-optimized scheme for embedded zerotree wavelet (EZW) image compression in a noisy transmission environment. A block-based modification for EZW is applied to improve the robustness of EZW, and to produce several coded bitstreams for transmission over multiple channels with different noise conditions. Then the respective channel noise is considered in the rate-distortion analysis, and the resultant rate-distortion functions are used for optimal bit allocation among the coded bitstreams. The case of burst noise is analyzed as an example.
9,915,798
0023ebb30e797dc426b8b3681d1aafece394136e
Multiridge detection and time-frequency reconstruction
The ridges of the wavelet transform, the Gabor transform, or any time-frequency representation of a signal contain crucial information on the characteristics of the signal. Indeed, they mark the regions of the time-frequency plane where the signal concentrates most of its energy. We introduce a new algorithm to detect and identify these ridges. The procedure is based on an original form of Markov chain Monte Carlo algorithm especially adapted to the present situation. We show that this detection algorithm is especially useful for noisy signals with multiridge transforms. It is a common practice among practitioners to reconstruct a signal from the skeleton of a transform of the signal (i.e., the restriction of the transform to the ridges). After reviewing several known procedures, we introduce a new reconstruction algorithm, and we illustrate its efficiency on speech signals and its robustness and stability on chirps perturbed by synthetic noise at different SNRs.
16,382,203
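The paper's detector is an MCMC scheme; as a baseline it helps to see the naive alternative it improves on, the per-time modulus maximum of the transform. A sketch with an FFT-implemented Morlet CWT (the wavelet choice and normalization are illustrative assumptions):

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """Continuous wavelet transform via the FFT, analytic Morlet wavelet."""
    n = len(x)
    omega = 2.0 * np.pi * np.fft.fftfreq(n)   # angular frequency per sample
    X = np.fft.fft(x)
    W = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Fourier transform of the scaled Morlet (positive frequencies only).
        psi_hat = np.pi**-0.25 * np.exp(-0.5 * (s * omega - w0)**2) * (omega > 0)
        W[i] = np.fft.ifft(X * np.conj(psi_hat)) * np.sqrt(s)
    return W

def naive_ridge(W):
    """Per-time index of the scale with maximum modulus."""
    return np.abs(W).argmax(axis=0)

# Example: a chirp concentrates its energy along a single drifting ridge.
t = np.linspace(0.0, 1.0, 1024)
x = np.cos(2 * np.pi * (50 * t + 40 * t**2))
ridge = naive_ridge(morlet_cwt(x, scales=np.geomspace(2, 64, 48)))
```

The per-time argmax breaks down exactly where the paper's MCMC detector is aimed: noisy signals and multiridge transforms, where several ridges compete at the same time instant.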
5968d463f2d3bc38670fd5b02dac0bec71577ba9
Robust block-based EZW image compression with channel noise optimized rate-distortion functions
We apply dynamic bit allocation to the block-based EZW algorithm for robust image compression. To optimize the performance of the bit allocation, the effect of the channel noise is taken into account. The robustness of our method was evaluated.
9,447,110
2adde6e8c5d746224554c042ea7cbd2f7dc85389
Image denoising using wavelet Bayesian network models
A number of techniques have been developed to deal with image denoising, which is regarded as the simplest inverse problem. In this paper, we propose an approach that constructs a Bayesian network from the wavelet coefficients of a single image, such that different Bayesian networks can be obtained from different input images. Then, we utilize the maximum a posteriori (MAP) estimator to derive the wavelet coefficients. Constructing a graphical model usually requires a large number of training images. However, we demonstrate that by using certain wavelet properties, namely, interscale data dependency, decorrelation between wavelet coefficients, and sparsity of the wavelet representation, a robust Bayesian network can be constructed from one image to resolve the denoising problem. Our experiment results show that, in terms of peak signal-to-noise ratio (PSNR) performance, the proposed approach outperforms state-of-the-art algorithms on several images with various amounts of white Gaussian noise.
7,470,113
6e13807936a09349db1da4338b2aa4ab54494ff8
Re-weighting the morphological diversity
Signal separation has a fundamental role in many image applications, such as noise removal (white noise, reflections, rain, etc.), segmentation, and inpainting. To perform signal separation, morphological component analysis (MCA) has been widely deployed in plenty of applications [1], [2], [3]. MCA uses dictionaries to model the morphologies of subcomponents, but the coherence between dictionaries may cause defects in the obtained subcomponents [4], [5]. In this article, we replace the sparse coding of MCA with weighted sparse coding; by assigning heavier weights to dictionaries' highly coherent atoms, the defects present in the obtained subcomponents are reduced. The experimental results show that the proposed signal separation algorithm achieves a significant performance gain over MCA.
17,707,834
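The substitution described above amounts to a weighted l1 sparse-coding step. A minimal ISTA sketch, where the weight vector `w` would carry heavier entries for atoms that are highly coherent across dictionaries (the weighting policy itself is the paper's contribution and is assumed given here):

```python
import numpy as np

def weighted_ista(x, D, w, lam=0.1, iters=200):
    """Solve min_z 0.5 * ||x - D z||^2 + lam * sum_i w_i * |z_i|
    by iterative shrinkage-thresholding with per-atom weights w."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(iters):
        g = z + D.T @ (x - D @ z) / L    # gradient step on the data term
        # Weighted soft-threshold: coherent atoms shrink harder.
        z = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0.0)
    return z
```

In an MCA-style separation, each dictionary D_k codes one morphology, and the recovered subcomponent is the reconstruction D_k @ z_k from its own coefficients.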
348002956d729d66df019119b69443ed1726ff2d
Subjective and Objective Comparison of Advanced Motion Compensation Methods for Blocking Artifact Reduction in a 3-D Wavelet Coding System
We compare, both objectively and subjectively, the performance of various advanced motion compensation methods, including overlapped block motion compensation (OBMC) and control grid interpolation (CGI), in a 3-D wavelet-based coding system. The motion vectors of the methods are obtained by using a sequence of 1-D dynamic programming algorithms that minimizes the cost function. Our experiment results indicate that an OBMC sequence usually has a higher peak signal-to-noise ratio (PSNR) than other methods, while a CGI sequence usually contains the fewest blocking artifacts. We provide a simple framework that combines the OBMC and CGI sequences. The proposed hybrid method removes more than 50% of the blocking artifacts of an OBMC sequence, while simultaneously maintaining a high PSNR performance
8,392,728
58f4d41863be83e04f6d06822ae542f736ad8279
Message Passing Using the Cover Text as Secret Key
Conventional secret message passing methods embed a message in the cover-text, so the receiver must use the stego-text to extract the message content. In contrast, this paper proposes a new paradigm in which the receiver does not necessarily require the stego-text to retrieve the message content. Under the proposed approach, the sender can produce keys without modifying the cover-image, and the intended recipient can use the keys and an image that resembles the cover-image to recover the message. This feature has the potential to generate many new secret message passing applications that would probably be impossible under the current widely used paradigm. The performance criteria of the new paradigm are presented. We propose a subspace approach to implement the paradigm, demonstrate that the criteria can be satisfied, and consider some interesting application scenarios.
14,911,101
9a54b8e35145b697a25a5407729e93a05e3a7260
Characterization of signals by the ridges of their wavelet transforms
The characterization and the separation of amplitude and frequency modulated signals is a classical problem of signal analysis and signal processing. We present a couple of new algorithmic procedures for the detection of ridges in the modulus of the (continuous) wavelet transform of one-dimensional (1-D) signals. These detection procedures are shown to be robust to additive white noise. We also derive and test a new reconstruction procedure. The latter uses only information from the restriction of the wavelet transform to a sample of points from the ridge. This provides a very efficient way to code the information contained in the signal.
16,854,689
f3974687aa1378ec04e3dfc86a243078f285c4fc
Multi-Objective Optimization and Characterization of Pareto Points for Scalable Coding
In this paper, we formulate the optimal bit-allocation problem for a scalable image/video codec as a graph-based constrained vector-valued optimization problem with many optimal solutions, referred to as Pareto points. Pareto points are generally derived using weighted-sum scalarization; however, it has yet to be determined whether all Pareto points can be derived using this approach. This paper addresses that issue. Presented as a theorem, our results indicate that as long as the rate-distortion function of each resolution is strictly decreasing and convex, and the Pareto points form a continuous curve, all Pareto points can be derived using scalarization. The theorem is verified using the state-of-the-art scalable coding method H.264/SVC and a scalability extension of High Efficiency Video Coding (HEVC). We highlight a number of easily interpretable Pareto points that represent a good trade-off between candidate resolutions. The proximity point is defined as the Pareto point closest to the ideal performance for each resolution. We also model the Pareto points as a function of the total bit rate and demonstrate that Pareto points at other target bit rates can be predicted.
65,207,852
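To make the scalarization idea concrete, here is a toy sweep over two strictly decreasing, convex rate-distortion curves; each weight yields one Pareto-optimal bit split. The curves and grid are invented for illustration and are not taken from H.264/SVC or HEVC.

```python
# Weighted-sum scalarization over two toy convex R-D curves: sweeping the
# weight w traces out Pareto-optimal splits of a total bit budget.
import numpy as np

def rd_low(r):  return 100.0 / (1.0 + r)    # toy R-D curve, low resolution
def rd_high(r): return 200.0 / (1.0 + r)    # toy R-D curve, high resolution

def pareto_points(total_rate, weights):
    splits = np.linspace(0.0, total_rate, 1001)
    points = []
    for w in weights:
        cost = w * rd_low(splits) + (1 - w) * rd_high(total_rate - splits)
        r1 = splits[cost.argmin()]
        points.append((r1, total_rate - r1))
    return points

print(pareto_points(100.0, np.linspace(0.05, 0.95, 7)))
```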
7e7e8886f2a2149257c15d3107b095ebb0a2b77d
Interlayer Bit Allocation for Scalable Video Coding
In this paper, we present a theoretical analysis of the distortion in multilayer coding structures. Specifically, we analyze the prediction structure used to achieve temporal, spatial, and quality scalability of scalable video coding (SVC) and show that the average peak signal-to-noise ratio (PSNR) of SVC is a weighted combination of the bit rates assigned to all the streams. Our analysis utilizes the end user's preference for certain resolutions. We also propose a rate-distortion (R-D) optimization algorithm and compare its performance with that of a state-of-the-art scalable bit allocation algorithm. The reported experimental results demonstrate that the R-D algorithm significantly outperforms the compared approach in terms of the average PSNR.
17,483,691
971815e7c0cfc24e46689227a70919bf712abcf4
A backward wavelet remesher for level of detail control and scalable coding
Multi-resolution and wavelet analysis have generated considerable interest in the field of mesh surface representation. In this paper, we propose a backward, coarse-to-fine framework that derives a semi-regular approximation of an original mesh, and demonstrate its effectiveness on level-of-detail and scalable coding applications. The framework is flexible and simple because the position of a new vertex at a finer resolution can be derived in a closed form, based on the affine combination of a subdivision scheme, the original mesh, and "new" information about the wavelet coefficients. We report the results of experiments on both applications, and compare the scalable coding results with those of other methods.
7,643,433
abb8e3c727651480230fe0e441334caaac14731d
Mixture of Gaussian Blur Kernel Representation for Blind Image Restoration
Blind image restoration is a nonconvex problem involving the restoration of images using unknown blur kernels. The success of the restoration process depends on three factors: first, the amount of prior information concerning the image and blur kernel; second, the algorithm used to perform the restoration; and third, the initial guesses made by the algorithm. Prior information about an image can often be used to restore the sharpness of edges. In contrast, there is no consensus concerning the use of prior information on blur kernels, due to the complex nature of image blurring processes. In this paper, we model a blur kernel as a linear combination of basic two-dimensional (2-D) patterns. To illustrate this process, we constructed a dictionary comprising atoms of Gaussian functions derived from the Kronecker product of 1-D Gaussian sequences. Our results show that the proposed method is more robust than other state-of-the-art methods in a noisy environment, as measured by the improvement in signal-to-noise ratio (ISNR). The approach also proved more stable than the other methods, exhibiting a steady increase in ISNR as the number of iterations grows.
24,450,010
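The Gaussian dictionary construction in the abstract is easy to sketch: each 2-D atom is the Kronecker (outer) product of two 1-D Gaussian sequences. The kernel size and width grid below are assumptions.

```python
# Build a dictionary of 2-D Gaussian atoms from outer products of 1-D
# Gaussian sequences, as the abstract describes.
import numpy as np

def gaussian_1d(size, sigma):
    x = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return g / np.linalg.norm(g)

def gaussian_atom_dictionary(size=15, sigmas=(0.5, 1, 2, 4, 8)):
    atoms = [np.outer(gaussian_1d(size, sx), gaussian_1d(size, sy)).ravel()
             for sx in sigmas for sy in sigmas]
    D = np.stack(atoms, axis=1)                 # (size*size, n_atoms)
    return D / np.linalg.norm(D, axis=0)        # unit-norm columns

D = gaussian_atom_dictionary()
```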
ea4d52de5de3a9e30b015584545e6901a8e9a286
Wavelet Bayesian Network Image Denoising
From the perspective of the Bayesian approach, the denoising problem is essentially a prior probability modeling and estimation task. In this paper, we propose an approach that exploits a hidden Bayesian network, constructed from wavelet coefficients, to model the prior probability of the original image. Then, we use the belief propagation (BP) algorithm, which estimates a coefficient based on all the coefficients of an image, as the maximum a posteriori (MAP) estimator to derive the denoised wavelet coefficients. We show that if the network is a spanning tree, the standard BP algorithm can perform MAP estimation efficiently. Our experimental results demonstrate that, in terms of peak signal-to-noise ratio and perceptual quality, the proposed approach outperforms state-of-the-art algorithms on several images, particularly in the textured regions, with various amounts of white Gaussian noise.
14,411,101
c393c2d9163527a87a3c8ec5b1fe639e80f727ea
Segmenting Microarray Image Spots using an Active Contour Approach
Inspired by the work of Paragios and Deriche, which unifies boundary-based and region-based image partition approaches, we integrate the active contour (snake) model and the Fisher criterion to capture, respectively, the boundary and region information of microarray images. We then use the proposed algorithm to automatically segment the spots in microarray images, and compare our results with those obtained by commercial software.
9,969,780
df0c2aac653920d44c132a076878e8a3c60bb020
Parameter estimation and denoising of 2-D noisy fractional Brownian motion using non-orthogonal wavelets
Fractional Brownian motion (fBm) is a non-stationary stochastic model that has a 1/f spectrum and a statistical self-similarity property. We extend the methods proposed by Hwang to an isotropic 2-D noisy fBm image. The extension is not straightforward; although one can obtain the fractal parameter of an isotropic fBm by averaging the fractal parameters estimated in several directions by means of the 1-D fractal parameter estimation algorithm, this approach does not perform well in practice. Hwang showed that more than 1000 sampled points are required for robust 1-D fractal parameter estimation. For a medium-sized image (say, 256 by 256 or smaller), there are not enough pixels in each direction for robust 1-D estimation. Thus, alternative methods must be developed so that robust fractal estimation can be achieved from a small noisy fBm image. In this paper, we show that the wavelet transform of an isotropic fBm image at each scale is a two-dimensional weakly stationary process in both the horizontal and vertical directions. Thus, robust fractal parameter estimation can be obtained from the two-dimensional wavelet coefficients, even for a small noisy fBm image. We propose a fractal parameter estimation algorithm that formulates the robust estimation problem as the characterization of a composite singularity from the autocorrelation of the wavelet transforms of a noisy fBm image.
14,161,268
d6ce565daf94c6c81b089ecd664b81b1662e86b5
Segmentation of 3D textured images using continuous wavelet transform
A common assumption of the shape-from-texture problem is that a perceived image contains only one type of texture with the same surface orientation. Unfortunately, a natural image is often composed of more than one texture. In order to solve the shape-from-texture problem in a practical manner, we need to segment 3D textured images. In this paper, we propose a new algorithm for this task. We estimate the local surface orientations from the scales of the ridge points of the continuous wavelet transform. The local surface orientations are then used as features for texture segmentation. Textured images synthesized from Brodatz's album and several natural images demonstrate the performance of our method.
36,516,430
19b1e69bae34e5ced379b8de048a845d877e7052
Wavelet analysis for brain-function imaging
The authors present a new algorithmic procedure for the analysis of brain images. This procedure is specifically designed to image the activity and functional organization of the brain. The authors' results are tested on data collected and previously analyzed with the technique known as in vivo optical imaging of intrinsic signals. The authors' procedure enhances the applicability of this technique and facilitates the extension of the underlying ideas to other imaging problems (e.g., functional MRI). The authors' thrust is twofold. First, they give a systematic method to control the blood vessel artifacts which typically reduce the dynamic range of the image. They propose a mathematical model for the vibrations in time of the veins and arteries, and they design a new method for cleaning the images of the vessels with the highest time variations. This procedure is based on the analysis of the singularities of the images. The use of the wavelet transform is of crucial importance in characterizing the singularities and reconstructing appropriate versions of the original images. The second important component of the authors' work is the analysis of the time evolution of the fine structure of the images. They show that, once the images have been cleaned of the blood vessel vibrations/variations, the principal component of the time evolutions of the signals is due to the functional activity following the stimuli. The part of the brain where this function takes place can be localized and delineated with precision.
17,877,150
0eaf333093ba53d9d9616c3a87b3b50302d7f280
Enhancing image watermarking methods with/without reference images by optimization on second-order statistics
The watermarking method has emerged as an important tool for content tracing, authentication, and data hiding in multimedia applications. We propose a watermarking strategy in which the watermark of a host is selected from the robust features of the estimated forged images of the host. The forged images are obtained from Monte Carlo simulations of potential pirate attacks on the host image. The solution of applying an optimization technique to the second-order statistics of the features of the forged images gives two orthogonal spaces. One of them characterizes most of the variations in the modifications of the host. Our watermark is embedded in the other space that most potential pirate attacks do not touch. Thus, the embedded watermark is robust. Our watermarking method uses the same framework for watermark detection with a reference and blind detection. We demonstrate the performance of our method under various levels of attacks.
14,096,368
83c8edd9a5f28b8a00085d7c2866a6d1f5c7861e
Singularities and noise discrimination with wavelets
One can detect and characterize the singularities of a signal from the evolution of the wavelet transform coefficients across scales. The authors discriminate signal information from noise by using some prior knowledge of the properties of singularities. The wavelet transform of the signal is processed in order to remove the singularities created by the noise. The authors restore a sharp signal where part of the noise has been suppressed. Examples in one and two dimensions are shown.
122,650,243
91240ba62b19bcfc1cf7bf5ee92f131482246329
Variational calculus approach to multiresolution image mosaic
Image mosaicing combines two or more images and has found many applications in computer vision, image processing, and computer graphics. A common goal is to join two or more images such that the boundary around the seam line is invisible and the mosaic image is distorted as little as possible from the original images. We propose a new image mosaicing method based on wavelet multiresolution analysis and variational calculus. We first project the images into wavelet spaces. The projected images at each wavelet space are then blended. In the blending, variational calculus techniques are applied to balance the smoothness around the seam line against the fidelity of the combined image to the original images. A mosaic image is finally obtained by summing the blended images over the wavelet spaces. Experimental results based on our method are demonstrated.
1,586,690
af07eb756a082f6abb874218903e759889124109
EMD Revisited: A New Understanding of the Envelope and Resolving the Mode-Mixing Problem in AM-FM Signals
Empirical mode decomposition (EMD) is an adaptive and data-driven approach for analyzing multicomponent nonlinear and nonstationary signals. The stop criterion, envelope technique, and mode-mixing problem are the most important topics that need to be addressed in order to improve the EMD algorithm. In this paper, we study the envelope technique and the mode-mixing problem caused by separating multicomponent AM-FM signals with the EMD algorithm. We present a new necessary condition on the envelope that questions the current assumption that the envelope passes through the extreme points of an intrinsic mode function (IMF). Then, we present a solution to the mode-mixing problem that occurs when multicomponent AM-FM signals are separated. We experiment on several signals, including simulated signals and real-life signals, to demonstrate the efficacy of the proposed method in resolving the mode-mixing problem.
15,492,900
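For context, one textbook sifting step of EMD looks as follows: cubic-spline envelopes through the local extrema, then subtract their mean. This is the standard construction whose envelope assumption the paper above questions; the extrema handling and guards are simplified.

```python
# One textbook EMD sifting step: cubic-spline envelopes through local
# extrema, then subtract the envelope mean from the signal.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x):
    t = np.arange(x.size)
    maxi = argrelextrema(x, np.greater)[0]
    mini = argrelextrema(x, np.less)[0]
    if maxi.size < 4 or mini.size < 4:
        return x                                 # too few extrema to sift
    upper = CubicSpline(maxi, x[maxi])(t)
    lower = CubicSpline(mini, x[mini])(t)
    return x - (upper + lower) / 2.0
```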
64901f469f9277063dafb80a7a146c89ddf34a6c
Constrained Null Space Component Analysis for Semiblind Source Separation Problem
The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how they are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the well-known ICA framework. We introduce an alternative approach based on the null space component analysis (NCA) framework, referred to as the c-NCA approach. We also present the c-NCA algorithm, which uses signal-dependent semidefinite operators, defined by a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we show that the source estimation of the c-NCA algorithm converges, with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators to the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus it can take advantage of solvers developed in the optimization community for solving the BSS problem. As examples, we demonstrate that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.
22,976,688
c57a96693f3ae4f792038a2810f4b4a9d9445e59
Shape from texture based on the ridge of continuous wavelet transform
We propose a new shape-from-texture method based on the ridge of the continuous wavelet transform. This method determines the orientations of a planar surface in a direct way under the perspective projection model. The variations of the image projected from a planar surface can be accurately characterized by the ridge of the continuous wavelet transform. The ridges of a 1-D signal and a 2-D image are represented as a ridge curve and a ridge plane, respectively. Ridges represent the energy concentration in the time-frequency plane where the energy attains a local maximum. We show that the ridge of the projected image is a parabolic plane with a rotation angle equal to the tilt angle of the planar surface. The ridge is then rotated by that angle such that the slant effect appears along the X-axis and plays no role along the Y-axis. As a result, the rotated ridge plane can be regarded as a plane composed of many 1-D ridge curves. The slant angle of the 2-D image is thus obtained from the derived slant angle of the 1-D signal. A voting method and a curve fitting method are developed to obtain the slant angle of the 1-D signal. Several synthetic and real-world images demonstrate the robustness and accuracy of our method.
41,482,586
635314b782c704d9acf3c60f5ac6873ff4990a59
Enhancing image watermarking methods by second order statistics
A pirate attack on an image aims to create an invisible modification of it. We propose a watermarking strategy which applies optimization techniques to the second order statistics of perceptually unaltered modifications of an image. The solution gives two orthogonal spaces. One of them characterizes most of the variations in the modification of the image. Our watermark is embedded in the other space that most potential pirate attacks do not touch. Thus, the embedded watermark is robust. We also show that our method is able to enhance many existing watermarking strategies. We also demonstrate the performance of our method.
37,349,787
d322e1ea7822c4f9f25fe55ad159437a5e3bc2ee
Estimation of 2-D noisy fractional Brownian motion and its applications using wavelets
The two-dimensional (2-D) fractional Brownian motion (fBm) model is useful in describing natural scenes and textures. Most fractal estimation algorithms for 2-D isotropic fBm images are simple extensions of the one-dimensional (1-D) fBm estimation method. This method does not perform well when the image size is small (say, 32x32). We propose a new algorithm that estimates the fractal parameter from the decay of the variance of the wavelet coefficients across scales. Our method places no restriction on the wavelets. Also, it provides a robust parameter estimation for small noisy fractal images. For image denoising, a Wiener filter is constructed by our algorithm using the estimated parameters and is then applied to the noisy wavelet coefficients at each scale. We show that the averaged power spectrum of the denoised image is isotropic and is a nearly 1/f process. The performance of our algorithm is shown by numerical simulation for both the fractal parameter and the image estimation. Applications to coastline detection and texture segmentation in a noisy environment are also demonstrated.
2,399,769
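The variance-decay idea in the abstract can be sketched in a few lines: regress the log2 variance of detail coefficients against scale and read the fractal parameter off the slope. The 2^(2j(H+1)) scaling constant used here is a toy assumption; the paper derives the precise relation.

```python
# Estimate a Hurst-like fractal parameter from the decay of the
# diagonal-subband variance across dyadic scales (toy scaling model).
import numpy as np
import pywt

def estimate_hurst_2d(img, wavelet="db2", level=4):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    scales, log_var = [], []
    # coeffs[1] holds the coarsest details, coeffs[-1] the finest.
    for k, (cH, cV, cD) in enumerate(coeffs[1:]):
        scales.append(level - k)                 # dyadic scale index j
        log_var.append(np.log2(np.var(cD)))
    slope = np.polyfit(scales, log_var, 1)[0]
    return slope / 2.0 - 1.0                     # invert Var ~ 2^(2j(H+1))
```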
b40134cd8000e791fbf665b2b82eaa95995c1610
An Automatic Eye Wink Interpretation System for the Disable
This paper proposes an automatic eye wink interpretation system for severely handicapped people. First, we apply the support vector machine (SVM) and a template matching algorithm to detect the eyes and then track the eye winks. Next, we convert the sequence of eye winks into a binary code sequence. Finally, we use dynamic programming to translate the eye wink sequence into a command for the human-machine interface. In the experiments, our system demonstrates very good performance and high accuracy.
17,056,432
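A toy version of the final translation step: match the detected wink code against a small command codebook by edit distance, a plain dynamic program standing in for the paper's translator. The codebook itself is invented for illustration.

```python
# Decode a wink code to the nearest command by edit distance (DP).
def edit_distance(a, b):
    dp = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
          for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1,
                           dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return dp[-1][-1]

COMMANDS = {"101": "yes", "110": "no", "111": "help"}   # hypothetical codes

def decode(winks):
    return COMMANDS[min(COMMANDS, key=lambda c: edit_distance(winks, c))]

print(decode("100"))   # nearest codeword wins
```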
917cfb7349af2d1f6f08a0c69851645e1a61c8c5
Wavelet based active contour model for object tracking
We propose an integrated wavelet-based framework for an active contour model (snake) which is used for the motion tracking of deformable objects in a sequence of images. The input image frame is decomposed into a multiresolution representation using a wavelet transform. First, the wavelet transform coefficients are used in a multiresolution motion estimation process to find the initial contour in the frame. The presented multiresolution motion estimation method allows larger movement of the tracked object than the traditional image-based motion estimation. Secondly, the wavelet transform modulus at each scale is considered in the energy function of the active contour model. The application of biological cell tracking using the proposed method is shown as an example.
14,702,960
290f343cae6110689518e2ddd678db66d36e60c3
A proximal method for the K-SVD dictionary learning
In this paper, we propose a dictionary updating method and show numerically that it can converge to a dictionary that outperforms the dictionary derived by the K-SVD method. The proposed method is based on the proximal point approach used in the convex optimization algorithm. We incorporate the approach into the well-known MOD and combine the result with the K-SVD method to obtain the proposed method. We analyze the complexity of the proposed method and compare it with that of the K-SVD method. The results of experiments demonstrate that our method outperforms K-SVD with only a slight increase in the execution time.
16,296,064
1216109ab9c5c8054edccc746ff7b251aad2a81b
Timing acquisition for fractal modulation in Gaussian white and 1/f channels
We propose a clock acquisition algorithm for timing recovery in fractal modulation over additive white Gaussian noise (AWGN) and 1/f channels. Our acquisition algorithm exclusively uses the data redundancy inherently built into fractal modulation to locate the beginning of the timing of all the subbands simultaneously. Different diversity techniques can then be applied once the beginning of the data block has been found. Our acquisition functions are derived using the maximum-likelihood method; maximizing the acquisition function with a serial search algorithm yields the timing point. Simulations have been conducted to evaluate the mean acquisition time of our algorithm.
15,849,648
6cb80010acfd354255b0ec40ce0b5756bc4a59eb
Extending depth of field in noisy light field photography
We propose a robust depth of field (DOF) extension algorithm based on the refocusing property of a light field photograph and the depth-from-defocus approach of multi-focus image fusion. The main components of the algorithm are depth estimation and all-in-focus image estimation. By making use of the redundancy of a light field photograph, we make both estimations robust in a noisy environment. For noise levels below 25 dB, the proposed algorithm remains robust and outperforms other methods in both PSNR and SSIM. We conclude that the algorithm is robust to high noise.
41,920,355
a7213f330387943f97ab90889a0afde0aee203cf
Mode Selection and Optimal Rate Control for Video Coding using an and-or Tree Representation
We propose an AND-OR tree-based approach for structuring a variety of mode selections in a hybrid video coding system. The proposed approach can systematically analyze the input sequence and transform it into an AND-OR tree representation to allocate bits to each node of the tree. The motion vector of our system is estimated in the wavelet domain, where wavelet coefficients in different scales can be obtained by using different wavelets. We demonstrate the performance of the new features, and compare our codec's performance with that of an H.264/AVC-like implementation.
11,815,680
dda252f6b8c55d6bc050bae9d7e1ad3d1544baab
Iris Recognition Based on Matching Pursuits
We propose a novel dynamic programming matching pursuit (DPMP) algorithm for iris recognition. The method modifies the matching pursuit (MP) algorithm to select the most representative atoms for iris recognition. In the experiments, we demonstrate that our system achieves (1) better performance for both personal identification and verification, (2) a better ROC curve, and (3) less computation than conventional MP-based iris recognition.
750,876
191615a1008118285585d05678754b085bdf9781
Design Error-Resilient Multiple Substreams 3D Coder Including Receiver Post-Processing in Analysis
We propose an error control scheme for video communications over lossy channels. The proposed algorithm incorporates receiver post-processing into the analysis and coordinates multiple encoded streams, handling both error concealment and error protection to achieve robust transmission. Unlike previous methods, our algorithm focuses on jointly optimizing the distortion of multiple substreams with concealment over error-prone channels. The algorithm minimizes the expected rate-distortion function to achieve the optimal FEC result. In the experiments, we demonstrate the effectiveness of our method using a 3-D SPIHT algorithm. Simulation results show that the proposed protection strategy achieves about 2 dB higher peak signal-to-noise ratio than the conventional method.
9,030,564
fe38ccac5b852aa7d2e16243f03f7cde784ea374
Deriving 3D shape properties by using backward wavelet remesher
It is important to determine the 3D shape properties of a population of 3D mesh models in biomedical imaging. In contrast to conventional 3D shape analysis techniques, which focus on applications such as shape matching and shape retrieval, we propose in this paper a strategy capable of collecting statistical information from multiple triangular mesh models. Our method operates in a coarse-to-fine fashion based on wavelet synthesis; hence, its analysis result is invariant to the triangular tiling of the input mesh model. This characteristic enables us to compare multiple mesh models simultaneously. The experimental results show that our method can extract 3D shape components at each observation scale and is efficient in estimating an average shape and visualizing the 3D shape variability of multiple triangular mesh models.
3,790,495
2ca3d24745929f4e79595ebeec8503ba67cd17b1
A surface-constrained volumetric alignment method for image atlasing
In this paper, we propose a prototype system that incorporates 3D shape information into a conventional TPS-based (thin-plate spline) volumetric registration method for image atlasing. Our method consists of two phases. The first phase registers and warps the 3D mesh surface models describing the tissue shape boundary of the input image volumes, and the second aligns the input image volumes with the aid of the boundary constraints suggested by the first. The proposed volumetric registration method is driven and constrained by the pre-registered 3D mesh surface model. Experiments show that using our framework for volumetric image registration and warping achieves performance comparable to or better than that of a well-known benchmark method.
15,357,905
67c779d7eba38beaeaa30dbe51c9c0f7a0804813
A Sampling-Based Gem Algorithm with Classification for Texture Synthesis
Research on texture synthesis has made substantial progress in recent years, and many patch-based sampling algorithms now produce quality results in an acceptable computation time. However, when such algorithms are applied, whether they provide good results for specific textures, and why they do so, are questions that have yet to be answered. In this article, we deal specifically with the second question by modeling the synthesis problem as one of learning from incomplete data, and propose an algorithm that is a generalization of the patch-based approach. Through this algorithm, we demonstrate that the solution of patch-based sampling approaches is an approximation of finding the maximum-likelihood optimum by the generalized expectation-maximization (GEM) algorithm.
18,483,132
69edc4dcc24d4bb7b9b495f8f893632209677aa5
Error concealment protection for loss resilient bitplane-coded video communications
In this paper, we propose an error control scheme for video communications over lossy channels. The proposed algorithm uses an error concealment protection (ECP) approach to coordinate multiple encoded streams, handling error concealment to achieve robust transmission. Unlike previous methods, our algorithm focuses on jointly optimizing the distortion of multiple substreams with concealment over error-prone channels. The algorithm minimizes the expected rate-distortion function to achieve the optimal FEC result. In the experiments, we demonstrate the effectiveness of our method using a 3-D SPIHT algorithm. Simulation results show that the proposed protection strategy achieves about 2 dB higher peak signal-to-noise ratio than the conventional method.
12,583,901
4be6c5f166e2b544cd2655caf1d4931615e7c9f7
3D thin-plate spline registration for Drosophila brain surface model
With the progress of model averaging algorithms, scientists in the field of brain research have an increasing demand for methods capable of registering and warping source data to a pre-registered standard atlas. Here, we propose a thin-plate spline (TPS) based surface registration method to facilitate the registration and warping of Drosophila brain data. Our contributions are twofold. First, the proposed method performs TPS-based registration in the parameterization domain, and hence no longer needs a rigid transformation to globally align and scale the input models. Second, the resulting well-registered surface model can act as a boundary constraint for further volumetric registration schemes. Experiments show that the proposed method is effective: for models with a bounding-box diagonal of 750 voxels, the average surface-to-surface distance is reduced to about 0.1 voxels after registration.
17,578,294
8fbda76a1f9a9cc6d5ae18047540b829ec62ff16
Analysis of singularities from modulus maxima of complex wavelets
Complex-valued wavelets are normally used to measure instantaneous frequencies, while real wavelets are normally used to detect singularities. We prove that the wavelet modulus maxima with a complex-valued wavelet can detect and characterize singularities. This is an extension of the previous wavelet work of Mallat and Hwang on modulus maxima using a real wavelet. With this extension, we can simultaneously detect instantaneous frequencies and singularities from the wavelet modulus maxima of a complex-valued wavelet. Some results of singularity detection with the modulus maxima from a real wavelet and an analytic complex-valued wavelet are compared. We also demonstrate that singularity detection methods can be employed to detect the corners of a planar object.
207,878,685
af2756d8fdea66c5e79923c06d95cc50e046a821
Very low-bit video coding based on gain-shape VQ and matching pursuits
We show that the techniques of gain-shape VQ can be used to optimize the dictionary of a matching pursuit codec. The performance of our method was evaluated in various settings.
19,952,311
7756c24837b1f9ca3fc5be4ce7b4de0fcf9de8e6
Singularity detection and processing with wavelets
The mathematical characterization of singularities with Lipschitz exponents is reviewed. Theorems that estimate local Lipschitz exponents of functions from the evolution across scales of their wavelet transform are reviewed. It is then proven that the local maxima of the wavelet transform modulus detect the locations of irregular structures and provide numerical procedures to compute their Lipschitz exponents. The wavelet transform of singularities with fast oscillations has a particular behavior that is studied separately. The local frequency of such oscillations is measured from the wavelet transform modulus maxima. It has been shown numerically that one- and two-dimensional signals can be reconstructed, with a good approximation, from the local maxima of their wavelet transform modulus. As an application, an algorithm is developed that removes white noises from signals by analyzing the evolution of the wavelet transform maxima across scales. In two dimensions, the wavelet transform maxima indicate the location of edges in images.
2,661,011
efea38b4c4ac33cc8a06750eae547db5bdcaca42
A Proximal Method for Dictionary Updating in Sparse Representations
In this paper, we propose a new dictionary updating method for sparse dictionary learning. Our method imposes the ℓ0 norm constraint on coefficients as well as a proximity regularization on the distance of dictionary modifications in the dictionary updating process. We show that the derived dictionary updating rule is a generalization of the K-SVD method. We study the convergence and the complexity of the proposed method. We also compare its performance with that of other methods.
14,861,770
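A closed-form sketch of a proximal dictionary update in the spirit of the abstract: a MOD-style least-squares step plus a proximity penalty on the distance from the previous dictionary. The penalty weight mu and the column renormalization are assumptions, not the paper's exact rule.

```python
# MOD-style update with a proximal term: minimize
#   ||Y - D X||_F^2 + mu ||D - D_old||_F^2  over D, in closed form.
import numpy as np

def proximal_dictionary_update(Y, D_old, X, mu=1.0):
    A = Y @ X.T + mu * D_old
    B = X @ X.T + mu * np.eye(X.shape[0])
    D = np.linalg.solve(B, A.T).T               # D = A B^{-1} (B symmetric)
    return D / np.maximum(np.linalg.norm(D, axis=0), 1e-12)
```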
7be4b74af2d762ab8725ebb4e74b16932e16ba6d
Complex-valued wavelet transform applications in planar shape prototype generation and recognition
Prototype generation has been widely used in industrial design, medical imaging, computer animation, pattern recognition, and computer vision. We propose a new hierarchical-tree-based algorithm for generating a prototype contour from a class of similar planar solid objects. Our algorithm uses the wavelet moduli of a complex-valued wavelet to extract the contour features. Features of different objects are organized as a binary tree, in which the root of a subtree corresponds to the prototype contour for the contours of its children. We show that this tree can be updated efficiently by modifying only the local subcontours. An application of our method to prototype generation and pattern recognition is demonstrated.
28,579,469
eb693e644b261fdd5bfdfea69e892cfca17be753
Matching pursuits low bit rate video coding with codebooks adaptation
We propose a codebook adaptation algorithm for matching pursuit low bit rate video compression. The matching pursuit dictionary is a set of basis functions; in practice, each function is approximated as a vector. According to the incoming data, the basis functions are modified in a manner similar to the minimum-point-finding algorithm in stochastic regression. Although matching pursuit low bit rate compression has been studied intensively, basis adaptation within the dictionary is a new approach. We demonstrate the performance of our adaptation method on MPEG-4 video sequences.
13,531,898
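For reference, the unadapted baseline that such codebook-adaptation schemes modify is plain matching pursuit over a fixed dictionary; unit-norm dictionary columns are assumed.

```python
# Plain matching pursuit: greedily peel off the best-correlated atom.
import numpy as np

def matching_pursuit(y, D, n_atoms=10):
    residual, code = y.astype(float).copy(), []
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = int(np.abs(corr).argmax())
        code.append((k, corr[k]))
        residual -= corr[k] * D[:, k]           # D columns are unit-norm
    return code, residual
```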
1f39bed74d9b544816fdf9f31e74b5b3682ebe87
Combined Error Concealment and Error Correction in Rate-Distortion Analysis for Multiple Substream Transmissions
We propose a new framework for multiple scalable bitstream video communications over lossy channels. The major feature of the framework is that the encoder estimates the effects of postprocessing concealment and includes those effects in the rate-distortion analysis. Based on the framework, we develop a rate-distortion optimization algorithm to generate multiple scalable bitstreams. The algorithm maximizes the expected peak signal-to-noise ratio by optimally assigning forward error control codes and transmission schemes in a constrained bandwidth. The framework is a general approach motivated by previous methods that perform concealment in the decoder, as in our special case. Simulations show that the proposed approach can be implemented efficiently and that it outperforms previous methods by more than 2 dB
15,179,432
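The expected-distortion optimization these error-control abstracts share can be illustrated with a toy single-layer FEC assignment: choose the data/parity split of n packets that minimizes expected distortion under a binomial loss model. The loss rate, R-D curve, and failure distortion below are invented for illustration.

```python
# Brute-force (n, k) FEC split minimizing expected distortion under
# i.i.d. packet loss: a block decodes iff at most n - k packets are lost.
from math import comb

def p_fail(n, k, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n - k + 1, n + 1))

def best_split(n=32, p=0.1, d_fail=40.0):
    def expected_distortion(k):
        pf = p_fail(n, k, p)
        return pf * d_fail + (1 - pf) * (100.0 / k)   # toy R-D curve
    return min(range(1, n + 1), key=expected_distortion)

print(best_split())   # number of data packets; the rest carry parity
```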
138a4c741f0e70a2295ae5fbfc22483d8bbc2440
Analysis on multiresolution mosaic images
Image mosaicing is the act of combining two or more images and is used in many applications in computer vision, image processing, and computer graphics. It aims to combine images such that no obstructive boundaries exist around overlapped regions and to create a mosaic image that exhibits as little distortion as possible from the original images. In the proposed technique, the to-be-combined images are first projected into wavelet subspaces. The images projected into the same wavelet space are then blended. Our blending function is derived from an energy minimization model which balances the smoothness around the overlapped region and the fidelity of the blended image to the original images. Experiment results and subjective comparison with other methods are given.
207,878,783
1d2774b410589521d9d7a07e90c973060b24d7a5
Estimating particulate matter using COTS cameras
Particulate pollution has become increasingly critical and threatening to human health. Although a number of approaches have been attempted for particulate pollution monitoring, they are either expensive, unscalable, or require the deployment of yet another sensing infrastructure. In this study, by combining advanced image dehazing and support vector machine techniques, we propose a novel particulate matter sensing approach using commercial off-the-shelf cameras. Using a Raspberry Pi-based testbed, we conducted a half-year measurement campaign and a comprehensive analysis of our approach. We show that our approach is effective: the 80th-percentile estimation error is below 20 and 30 μg/m3 for PM2.5 and PM10 estimation, respectively. Moreover, the proposed approach can easily be applied to existing camera surveillance infrastructure, as long as the photos contain both long-range and near-view objects.
35,744,188
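A heavily simplified stand-in for the pipeline above: a dark-channel haze statistic as the image feature, fed to a support vector regressor. The feature choice, patch size, and training interface are assumptions, not the paper's design.

```python
# Dark-channel haze feature + SVR as a toy particulate-matter estimator.
import numpy as np
from scipy.ndimage import minimum_filter
from sklearn.svm import SVR

def dark_channel(img, patch=15):
    # img: HxWx3 float array in [0, 1]; channel-wise min, then local min.
    return minimum_filter(img.min(axis=2), size=patch)

def train_pm_estimator(images, pm_labels):
    feats = np.array([[dark_channel(im).mean(), dark_channel(im).std()]
                      for im in images])
    return SVR(kernel="rbf").fit(feats, pm_labels)
```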
d8605136c3d072e83d6552b4244fbcc4818ad433
A Subspace Approach to Timing Acquisition for Wavelet-Based Multirate Transmissions
We propose a subspace-based maximum-likelihood approach for acquiring the timing of wavelet-based multirate transmission systems. The S-curve analysis shows that our approach can correctly acquire the initial timing of a symbol from anywhere within a symbol-time interval at a cost of increasing jitter variance.
14,832,459
541174c0726e2b75468ca0b01694055459c36af3
An asymmetric watermarking method for copyright protection utilizing dual bases
We present an asymmetric watermarking method for copyright protection that uses different matrix operations to embed and extract a watermark. It allows for the public release of all information, except the secret key. We investigate the conditions for a high detection probability, a low false positive probability, and the possibility of unauthorized users successfully hacking into our system. The robustness of our method is demonstrated by the simulation of various attacks.
9,236,435
8797c6a8ce5e85d7070fe7bc581e19e53e14c882
Adaptive Signal Decomposition Based on Local Narrow Band Signals
We propose an operator-based method of adaptive signal decomposition, whereby a local narrow band signal is defined in the null space of a singular local linear operator. Based on this definition and algorithm, we propose two types of local narrow band signals and two singular operator estimation methods for adaptive signal decomposition. We show that our approach can solve a special case of Huang's empirical-mode decomposition algorithm. For signals that cannot be resolved by our method or the empirical-mode decomposition algorithm, we propose a hybrid approach. Conceptually, the approach applies the empirical-mode decomposition algorithm, followed by our algorithms. Our experiments show that the proposed hybrid approach can effectively handle a wide range of complex signals.
16,823,870
7f31ff0f1350378141b949c1478265324e45517a
Light field upsampling by joint bilateral filtering on epipolar plane images
Due to the trade-off between spatial and angular resolution, the effective spatial resolution of a light field image is usually less than one percent of the number of pixels on the photo sensor. In this paper, we propose a prototype algorithm to upsample a light field image. Because the boundary edges of 3D objects result in lines on epipolar plane images (EPIs), the main idea of our method is to preserve these line structures while upsampling, so that the enlarged image still has sharp boundary edges. The kernel of the proposed algorithm is an iterative joint-bilateral filtering process. Experiments show that the upsampled image derived by our method is still refocusable and has better visual quality than those derived by other methods. Finally, the main contribution of this method is that it decomposes a 4D light field L(u, v, s, t) upsampling problem into a series of 1-D, parameter-free upsampling subproblems that can be solved quickly in the u-s and v-t EPI domains.
16,010,853
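To illustrate the filtering kernel named in the abstract, here is a 1-D joint-bilateral pass such as one might run along an EPI row, with range weights taken from a guidance signal so that line structures are preserved. The window size and sigmas are illustrative, not the paper's parameters.

```python
# 1-D joint bilateral filter: spatial Gaussian times a range Gaussian
# computed on a guidance signal, so edges in the guide are preserved.
import numpy as np

def joint_bilateral_1d(signal, guide, radius=5, sigma_s=2.0, sigma_r=0.1):
    out = np.empty_like(signal, dtype=float)
    for i in range(signal.size):
        lo, hi = max(0, i - radius), min(signal.size, i + radius + 1)
        idx = np.arange(lo, hi)
        w = (np.exp(-(idx - i) ** 2 / (2 * sigma_s ** 2)) *
             np.exp(-(guide[idx] - guide[i]) ** 2 / (2 * sigma_r ** 2)))
        out[i] = (w * signal[idx]).sum() / w.sum()
    return out
```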