Fields per record: doc_id (string, 4-40 chars); title (string, 7-300 chars); abstract (string, 2-10k chars); corpus_id (uint64, range 171 to 251M).
232ec0bf43a22ed72a2f4f7485669f8b1c3e08cb
Sparse representation of a blur kernel for out-of-focus blind image restoration
Blind image restoration is a non-convex problem that involves restoring an image from an unknown blur kernel. The performance of this restoration depends on how much prior information about the image and the blur kernel is provided and on the algorithm used to perform the restoration task. Prior information on images is often employed to restore the sharpness of image edges. However, because image blurring processes are complex, there is no consensus on what prior information to use for the blur kernel. In this paper, we propose modelling a blur kernel as a sparse linear combination of basic 2-D patterns. Our approach has a competitive edge over existing blur kernel modelling methods because it allows the dictionary design to be customized, which makes it adaptable to a variety of applications. As a demonstration, we construct a dictionary formed by basic patterns derived from the Kronecker product of Gaussian sequences. We also compare our results with those derived by other state-of-the-art methods in terms of improvement in SNR (ISNR).
17,411,129
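The abstract above names two concrete ingredients: a dictionary of basic 2-D patterns built from Kronecker products of Gaussian sequences, and the ISNR metric used for comparison. The following is a minimal sketch of those two pieces only, with assumed parameter choices and illustrative function names; the sparse-coding and deblurring steps are not reproduced.

```python
import numpy as np

def gaussian_sequence(length, sigma):
    """1-D Gaussian sequence centred in a window of the given length, normalised to sum 1."""
    x = np.arange(length) - (length - 1) / 2.0
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

def kronecker_dictionary(kernel_size, sigmas):
    """Each dictionary atom is the Kronecker (outer) product of two 1-D Gaussian
    sequences, flattened into one column of the dictionary matrix."""
    atoms = [np.outer(gaussian_sequence(kernel_size, s1),
                      gaussian_sequence(kernel_size, s2)).ravel()
             for s1 in sigmas for s2 in sigmas]
    return np.stack(atoms, axis=1)          # shape: (kernel_size**2, len(sigmas)**2)

def isnr(original, degraded, restored):
    """Improvement in SNR (dB): how much closer the restoration is to the original
    image than the degraded observation was."""
    return 10.0 * np.log10(np.sum((degraded - original) ** 2) /
                           np.sum((restored - original) ** 2))
```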
76c361ed5ab4846580b23a41004355c006ff8518
Colored multi-neuron image processing for segmenting and tracing neural circuits
The recently developed Brainbow and Flybow techniques can image and visualize a large number of neurons simultaneously; however, scientists still lack adequate tools to process this kind of colored multi-neuron image volume. Because dozens of colorized neuron fibers spread densely in a very intricate structure, it is difficult to trace them with existing algorithms designed for single-neuron images. We propose a framework to formulate and solve this issue, and the experimental results show that our method can successfully extract independent neurons from Flybow images. Consequently, the proposed procedure contributes to neuroscience by increasing the efficiency of collecting neuron information from Flybow images.
12,769,626
573f2428128d8a6514009434da8adbae84188e6f
Null Space Pursuit: An Operator-based Approach to Adaptive Signal Separation
The operator-based signal separation approach uses an adaptive operator to separate a signal into additive subcomponents. The approach can be formulated as an optimization problem whose optimal solution can be derived analytically. However, the following issues must still be resolved: estimating the robustness of the operator's parameters and the Lagrangian multipliers, and determining how much of the information in the null space of the operator should be retained in the residual signal. To address these problems, we propose a novel optimization formula for operator-based signal separation and show that the parameters of the problem can be estimated adaptively. We demonstrate the effectiveness of the proposed method by processing several signals, including real-life signals.
12,801,395
92416930642c2fcbea20fd04022f1d0efee4ce18
Estimation of fractional Brownian motion embedded in a noisy environment using nonorthogonal wavelets
We show that nonorthogonal wavelets can characterize fractional Brownian motion (fBm) embedded in white noise. We demonstrate that discriminating the parameter of the fBm from that of the noise is equivalent to discriminating the composite singularity formed by superimposing a peak singularity on a Dirac singularity. We characterize the composite singularity by formalizing this problem as a nonlinear optimization problem, which yields our parameter estimation algorithm. For fractal signal estimation, Wiener filtering is explicitly formulated as a function of the signal and noise parameters and the wavelets. We show that the estimated signal is a 1/f process. Comparative studies, through numerical simulations, of our methods with those of Wornell and Oppenheim (1992) are presented.
15,028,613
cbf322a8d2d9fc27ea808be17c1582fb3b62c8b1
Optimal Bit-allocation for Wavelet-based Scalable Video Coding
We investigate the wavelet-based scalable video coding problem and present a solution that takes into account each user's preferred resolution. Based on these preferences, we formulate the bit allocation problem of wavelet-based scalable video coding. We propose three methods to solve the problem. The first is an efficient Lagrangian-based method that solves an upper bound of the problem optimally, and the second is a less efficient dynamic programming method that solves the problem optimally. Both methods require knowledge of the users' preferences. For the case where the users' preferences are unknown, we solve the problem by a min-max approach. Our objective is to find the bit allocation solution that maximizes the worst possible performance. We show that the worst performance occurs when all users subscribe to the same spatial, temporal, and quality resolutions. Thus, the min-max solution is exactly the same as the traditional bit allocation method for a non-scalable wavelet codec. We conduct several experiments on the 2D+t MCTF-EZBC wavelet codec with respect to various subscribers' preferences. The results demonstrate that knowing the users' preferences improves the coding performance of the scalable video codec significantly.
15,074,430
c8cfafe5a7501f2d83c1cc14a7505c89b7aff1be
An Examplar-Based Approach for Texture Compaction Synthesis and Retrieval
A texture representation should support the various functions of a texture. In this paper, we present a novel approach that incorporates texture features for retrieval in an examplar-based texture compaction and synthesis algorithm. The original texture is compacted and compressed in the encoder to obtain a thumbnail texture, which the decoder then synthesizes to obtain a perceptually high quality texture. We propose using a probabilistic framework based on the generalized EM algorithm to analyze the solutions of the approach. Our experimental results show that a high quality synthesized texture can be generated in the decoder from a compressed thumbnail texture. The number of bits in the compressed thumbnail is 400 times lower than that in the original texture and 50 times lower than that needed to compress the original texture using JPEG2000. We also show that, in terms of retrieval and synthesis, our compressed and compacted textures perform better than compressed cropped textures and compressed compacted textures derived by the patchwork algorithm.
8,931,528
5b4d189a4e9e43511e5a9117e61272e78ade7a79
Planar-shape prototype generation using a tree-based random greedy algorithm
A prototype is representative of a set of similar objects. This paper proposes an approach that formulates the problem of prototype generation as finding the mean of a given set of objects, where the prototype solution must satisfy certain constraints. These constraints describe the important perceptual features of the sample shapes that the prototype must retain. The contour prototype generated from a set of planar objects is used as an example of the approach, and corners are used as the perceptual features to be preserved in the prototype shape. However, finding a prototype solution for more than two contours is computationally intractable. A tree-based approach is therefore proposed in which an efficient greedy random algorithm is used to obtain a good approximation of the prototype, and the expected complexity of the algorithm is analyzed. The proposed prototype-generation process for hand-drawn patterns is described and discussed in this paper.
17,977,946
ce5af89914eb74730f4b9facd2b08759e300e3eb
Optimal Multiresolution Blending of Confocal Microscope Images
Typical mosaicing schemes assume that the to-be-combined images are equally informative; thus, the images are processed in a similar manner. However, new imaging techniques for confocal fluorescence images have revealed a problem when two asymmetrically informative biological images are stitched during microscope image mosaicing, a process widely used in biological studies to generate a higher resolution image by combining multiple images taken at different times and angles. To resolve this problem, we propose a multiresolution optimization approach that evaluates the blending coefficients based on the relative importance of the overlapping regions of the to-be-combined image pair. The blending coefficients are the optimal solution obtained by a quadratic programming algorithm with constraints enforced by the biological requirements. We demonstrate the efficacy of the proposed approach on several confocal microscope fluorescence images and compare the results with those derived by other methods.
7,884,839
f5d753e2fcc3b36a0166395c770175fa3770398f
Robust speech recognition features based on temporal trajectory filtering of frequency band spectrum
This paper presents the use of a variety of filters on the temporal trajectories of the frequency band spectrum to extract speech recognition features that are robust to the environment. Three kinds of filters that emphasize the statistically important parts of speech are proposed. First, a bank of RASTA-like band-pass filters fitted to the statistical peaks of the modulation-frequency spectrum of speech is used. Secondly, a three-channel octave-band filter bank with a smoothed rectangular window spline is applied. Thirdly, a data-driven filter is developed. Experimental results show that significant improvements in speech recognition under noisy environments can be achieved using the proposed feature extraction approach.
9,191,549
f52a8266c064a72d56849847d132fe2a7880dd60
Adaptive Integral Operators for Signal Separation
The operator-based signal separation approach uses an adaptive operator to separate a signal into a set of additive subcomponents. In this paper, we show that differential operators and their initial and boundary values can be exploited to derive corresponding integral operators. Although the differential operators and the integral operators have the same null space, the latter are more robust to noisy signals. Moreover, after expanding the kernels of Frequency Modulated (FM) signals via eigen-decomposition, the operator-based approach with the integral operator can be regarded as the matched filter approach that uses eigen-functions as the matched filters. We then incorporate the integral operator into the Null Space Pursuit (NSP) algorithm to estimate the kernel and extract the subcomponent of a signal. To demonstrate the robustness and efficacy of the proposed algorithm, we compare it with several state-of-the-art approaches in separating multiple-component synthesized signals and real-life signals.
16,143,889
1f483d118ca92465609d75d287664f714aff10a3
Operator based multicomponent AM-FM signal separation approach
The operator-based signal separation approach, which formulates the signal separation as an optimization problem, uses an adaptive operator to separate a signal into additive subcomponents. Furthermore, it is possible to design different operators to fit different signal models. In this paper, we propose a new kind of differential operator to separate multicomponent AM-FM signals. We then use the estimated operators to calculate each sub-component's envelope and instantaneous frequency. To demonstrate the efficacy of the proposed method, we compare the decomposition and AM-FM demodulation results of several signals, including real-life signals.
12,697,816
64101badea7e7723869a8c3d26be7080a1b71192
Adaptive early jump-out technique for fast motion estimation in video coding
An adaptive early jump-out technique for speeding up block-based motion estimation is proposed. By using the new technique, we can speed up the full-range search several times without significant loss of picture quality. The proposed technique can also be embedded into almost all existing fast motion estimation algorithms to speed up the computation further. Because it can be embedded into existing motion estimation algorithms, it can be applied to almost all standard video codecs, such as the MPEG coder, and improve their coding speed significantly. Our technique has been tested on the H.261 and MPEG-I codecs, and the coding speed improves significantly.
6,728,329
6315b1f2493c4a3dc0f62c9016b49d3dc9795520
Efficient post-compression error-resilient 3D-scalable video transmission for packet erasure channels
We propose an efficient error-resilient video transmission algorithm over packet erasure channels using optimal source and channel bit allocation. This algorithm uses FEC and rate-distortion optimization to find the optimal allocation of source and channel bits in each quality layer. The packet loss probability is periodically reported to the video server. This method can also be incorporated with any coding structure that generates a set of independent compressed bitstreams. The algorithm's efficacy is demonstrated by simulations in which the video compression is an error-resilient 3D-SPIHT algorithm and the channel protection is provided by Reed-Solomon (RS) codes.
16,729,517
17ed181a280994af09c0a9093ba805c4532062c3
Distortion estimation and bit allocation for MCTF based 3-D wavelet video coding
In this paper, we propose a novel way to derive subband weightings that conserve energy. Once the weight of each subband has been calculated from the motion-compensated temporal filtering, the effect of the quantization error of a subband on the reconstruction error of a group of pictures can be determined. We then weight each subband and use the weighted coefficients to derive the optimal bit-allocation solution for 3-D wavelet coding methods. We apply the proposed method to bit allocation on a 2D+t structure with a 5-3 temporal wavelet filter, and show that it achieves a 0.5-1.5 dB peak signal-to-noise ratio improvement over the existing subband weighting, for various coding parameters and sequences.
7,792,345
db7c92e92a787ea95c26054bc0c03680bb78027c
Advanced motion compensation techniques for blocking artifacts reduction in 3-D video coding systems
This paper describes a new 3-D framework for the construction of a hybrid motion compensation model to reduce blocking artifacts in a highly scalable video coding system. Previous works have focused primarily on using control grid interpolation (CGI) or overlapped block motion compensation (OBMC) to reduce blocking artifacts. However, both methods generate distinct side effects, such as PSNR degradation or poor de-blocking results, during the de-blocking process. The main objective of our proposed model is to achieve better visual quality and maintain satisfactory coding efficiency simultaneously by integrating the advantages of CGI and OBMC. The introduction of dual mode selection between CGI and OBMC provides a more flexible mechanism to adjust our objective function and thereby achieves higher objective and subjective coding gains. Experimental results show that our proposed model achieves improved coding efficiency in terms of both PSNR and visual quality.
14,415,255
2e659104960b910ca922d8f857f259211785fd6d
Clock synchronization for fractal modulation
We propose a naive algorithm for timing recovery in fractal modulation. We first investigate the bit error rate due to the clock error in fractal modulation. Then, we propose clock synchronization techniques for fractal modulation. Our acquisition algorithm uses only the self-similarity property of the fractal signal modulation. The performance of our algorithm is also demonstrated.
17,421,249
29a1d55024322350589e1e3f7e7f3fc38c2b4d32
Timing acquisition for wavelet-based multirate transmissions
The acquisition problem in wavelet-based modulation is very important. We discuss a problem with the ML-based method proposed by M. Luise et al. (see IEEE Trans. Commun., vol.48, p.1047-54, 2000) for timing acquisition, and then develop a novel acquisition algorithm. Our method uses the properties of the scaling function in the derivation of the acquisition function. We also show that the S-curve of our acquisition algorithm is smooth and has a unique zero during the signaling interval. Thus, we can acquire the correct symbol timing without ambiguity. The performance of our acquisition algorithm is evaluated by Monte Carlo simulation using the Meyer wavelet.
30,541,518
fd47e988c6dca39650353b3ad6419e58d175aaee
Biomedical image mosaicing: An optimized multiscale approach
In typical mosaicing or blending algorithms, it is usually assumed that the to-be-combined images are equally informative, and each component image is processed in a similar manner. However, because of the photobleaching effect, the fluorescence intensity of a confocal microscope image may degenerate; therefore, the overlapping regions become asymmetrically informative. We formulate the problem of mosaicing such images as a multiscale optimization problem. The optimal blending coefficients can thus be obtained by quadratic programming with constraints enforced by the biological requirements. We also demonstrate the efficacy of the approach on several confocal microscope fluorescence images as well as EM images, and compare the mosaicing results with those derived by other methods.
7,271,950
3e8ef77599a7d453ac3804365756f2d11e97df7c
An integral operator based adaptive signal separation approach
The operator-based signal separation approach uses an adaptive operator to separate a signal into additive subcomponents, and different types of operators can describe different properties of a signal. In this paper, we define a new kind of integral operator that can be derived from the Fredholm integral equation of the second kind. We then analyze the properties of the proposed integral operator and discuss its relation to the second condition of the Intrinsic Mode Function (IMF). To demonstrate the robustness and efficacy of the proposed operator, we incorporate it into the Null Space Pursuit algorithm to separate several multicomponent signals, including a real-life signal.
14,131,110
431724148cabf7d5ff9b9265d9a1f70e397655e0
Automatic Microarray Spot Segmentation Using a Snake-Fisher Model
Inspired by Paragios and Deriche's work, which unifies boundary-based and region-based image partition approaches, we integrate the snake model and the Fisher criterion to capture, respectively, the boundary information and region information of microarray images. We then use the proposed algorithm to segment the spots in the microarray images, and compare our results with those obtained by commercial software. Our algorithm is automatic because the parameters are adaptively estimated from the data without human intervention.
2,805,013
fb8604ff505d01faa57369f338f9486e014f90b9
Color image enhancement using retinex with robust envelope
In this paper, we propose a color image enhancement method that uses retinex with a robust envelope to improve the visual appearance of an image. The word “retinex” is a hybrid of “retina” and “cortex”, suggesting that human visual perception is involved in this color image enhancement. To avoid the gray-world violation, a color-shifting problem, an input RGB color image is transformed into an HSV color image, and only the V component is enhanced. Furthermore, to prevent halo artifacts, we construct a robust envelope with gradient-dependent weighting to limit disturbances around intensity gaps such as edges and corners. Our experimental results show that the proposed method yields better (almost halo-free) performance than traditional image enhancement methods.
667,362
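To illustrate the colour-handling step described above (transform to HSV and enhance only the V component), here is a minimal sketch with an assumed placeholder enhancement (a simple gamma stretch); the robust-envelope retinex itself is not reproduced, and the function name is illustrative.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def enhance_value_channel(rgb, gamma=0.6):
    """rgb: float array in [0, 1] with shape (H, W, 3). Only the V channel is changed,
    so hue and saturation (and hence colour balance) are preserved."""
    hsv = rgb_to_hsv(rgb)
    hsv[..., 2] = np.clip(hsv[..., 2] ** gamma, 0.0, 1.0)   # placeholder for the retinex step
    return hsv_to_rgb(hsv)
```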
76533d86f8125f851976daffe0b331f5e3c5cedc
Texture classification using wavelet scale relationships
It has been documented in the literature that texture can be well characterised by features obtained from its multi-scale representation. Typically, the textured image being analysed is decomposed into separate frequency and/or orientation bands, and features extracted separately from each such band. In this paper, we propose that features modelling the relationships between scale bands of such a representation provide a better characterisation of textured images than features extracted from individual bands alone. Using this conjecture, we develop a novel feature set for texture classification, and demonstrate its effectiveness using a set of images obtained from the Brodatz texture album.
9,825,155
3e3b87451e10a43ff5a6f3aa570b18791b47a16f
Logarithmic quantisation of wavelet coefficients for improved texture classification performance
The coefficients of the wavelet transform have been widely used for texture analysis tasks, including segmentation, classification and synthesis. Second order statistics of such values have been shown to give excellent performance in these applications, and are typically calculated using co-occurrence matrices, which require quantisation of the coefficients. In this paper, we propose a non-linear quantisation function which is experimentally shown to better characterise textured images, and use this to formulate a new set of texture features, the wavelet log co-occurrence signatures.
247,941
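As a rough illustration of the idea above, the sketch below maps wavelet-coefficient magnitudes onto a logarithmic scale before binning; the exact non-linear quantisation function proposed in the paper is not given here, so this is only an assumed stand-in. The quantised indices would then index a co-occurrence matrix.

```python
import numpy as np

def log_quantise(coeffs, n_levels=16):
    """Map coefficient magnitudes to n_levels bins on a log scale, so that the many
    small coefficients are resolved more finely than the few large ones."""
    mags = np.abs(np.asarray(coeffs, dtype=float))
    cmax = mags.max() if mags.size and mags.max() > 0 else 1.0
    q = np.floor(n_levels * np.log1p(mags) / np.log1p(cmax)).astype(int)
    return np.clip(q, 0, n_levels - 1)
```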
2c2205aa7f8df87b1ec9f65bae42acd404579189
Representing and identifying jointed objects using a multiresolution technique
A multiresolution technique for representing and identifying objects in images is presented. Images are preprocessed to extract object contours. Extracted contours are parameterised to form 1-D arrays. Object representations are then constructed as sets of contour waveforms viewed at different resolution levels. In the learning phase of the proposed algorithm, a database of object models is constructed from representations of objects obtained from images. The database includes jointed objects, defined as those consisting of more than one part. During the object identification phase, object parts are allowed to move in the plane perpendicular to the joint's axis. To make the algorithm fast and efficient, only a subset of the representation points is used for object identification. The performance of the proposed algorithm is investigated under shifts, rotations, and scale changes.
39,183,538
43007d124ba696c1c043263ab394bd1a83747138
On the robustness and security of digital image watermarking
In most digital image watermarking schemes, it has become common practice to address security in terms of robustness, which is basically a norm in cryptography. Such a consideration in the development and evaluation of a watermarking scheme may severely affect its performance and render the scheme ultimately unusable. This paper provides an explicit theoretical analysis of watermarking security and robustness, clarifying the exact status of the problem in the literature. With the necessary hypotheses and analyses from a technical perspective, we demonstrate the fundamental nature of the problem. Finally, some necessary recommendations are made for a complete assessment of watermarking security and robustness.
6,917,625
1538f51577eec8cd7616d387213a6d0ed723ce1e
Detecting Uncommon Trajectories
An effective video surveillance system relies on the detection of suspicious activities. In recent times, there has been an increasing focus on detecting anomalies in human behaviour using surveillance cameras, as they provide a clue to preventing breaches in security. Human behaviour can be termed suspicious when it is uncommon in occurrence and deviates from commonly understood behaviour within a particular context. This work aims to detect regions of interest in video sequences based on an understanding of uncommon behaviour. A commonality value is calculated to distinguish between common and uncommon occurrences. The proposed strategy is validated by classifying the walking paths of people in a shopping mall corridor; the CAVIAR database is used for this purpose. The results demonstrate the efficacy of the proposed approach in detecting deviant walking paths.
9,727,038
3afbc336577f9de94a4c6b2246d738bea8e98ba8
Detecting commonly occupied regions in video sequences
An effective video surveillance system relies on the detection of suspicious activities. In recent times, there has been an increasing focus on detecting anomalies in human behaviour using surveillance cameras, as they provide a clue to preventing breaches in security. Human behaviour can be termed suspicious when it is uncommon in occurrence and deviates from commonly understood behaviour within a particular context. This work aims to detect regions of interest in video sequences based on an understanding of uncommon behaviour. A commonality value is calculated to distinguish between common and uncommon occurrences. The proposed strategy is validated by classifying commonly occupied walking path regions in a shopping mall corridor; the CAVIAR database is used for this purpose. The results demonstrate the efficacy of the proposed approach in detecting deviant walking paths.
21,046,465
1e638b4aeda69f52af2fb2aea894d14247fbf89c
A security system based on human iris identification using wavelet transform
A security system based on the recognition of the iris of human eyes using the wavelet transform is presented. The zero crossings of the wavelet transform are used to extract the unique features obtained from the grey level profiles of the iris. The recognition process is performed in two stages. The first stage consists of building a one dimensional representation of the grey level profiles of the iris followed by obtaining the wavelet transform zero crossings of the resulting representation. The second stage is the matching procedure for iris recognition. The proposed approach uses only a few selected intermediate resolution levels for matching, thus making it computationally efficient as well as less sensitive to noise and quantisation errors. A normalisation process is implemented to compensate for size variations due to the possible changes in the camera to face distance. The technique has been tested on real images in both noise free and noisy conditions. The technique is being investigated for real time implementation, as a standalone system, for access control to high security areas.
30,476,324
91c4c1eca097a848f800b849ec3d0b9b882372bd
A Context-Based Approach for Detecting Suspicious Behaviours
A video surveillance system capable of detecting suspicious activities or behaviours is of paramount importance to law enforcement agencies. Such a system will not only reduce the workload of security personnel involved with monitoring CCTV video feeds but also improve the time required to respond to any incident. There are two well known models to detect suspicious behaviour: misuse detection models, which depend on definitions of suspicious behaviour, and anomaly detection models, which measure deviations from defined normal behaviour. However, it is nearly impossible to encapsulate the entire spectrum of either suspicious or normal behaviour. One way to overcome this problem is to develop a system which learns in real time and adapts itself to behaviour which can be considered common and normal, or uncommon and suspicious. We present an approach utilising contextual information. Two contextual features, namely the type of behaviour and the commonality level of each type, are extracted from long-term observation. A data stream model which treats the incoming data as a continuous stream of information is used to extract these features. We further propose a clustering algorithm which works in conjunction with the data stream model. Experiments and comparisons are conducted on the well known CAVIAR datasets to show the efficacy of utilising contextual information for detecting suspicious behaviour. The proposed approach is generic in nature and can be applied to any features; however, for the purpose of this study, we have employed pedestrian trajectories to represent the behaviour of people.
6,111,464
095a0c84dea051f285ce261de6db8b82c5241602
Vision-based pirouettes using radial obstacle profile
Mapping algorithms commonly use "radial sweeps" of the surrounding environment as input. Producing a sweep is a challenging task for a robot using only vision. With no odometers to measure turn angles, a vision-based robot must have another method to verify rotations. In this paper we propose using the radial obstacle profile (ROP), which gives the radial distance to the nearest obstacle in any direction in the robot's field of view. By matching the ROPs before and after a turn, the robot should be able to verify that the expected angle of rotation matches the actual angle. Combining successive ROPs then produces a radial sweep.
191,536
8dd6287881a0cdd22c7169f2584d6250c2c0317e
Cognitive styles, subject content and the design of computer based instruction
The authors present a study on designing computer based instruction (CBI) that considers individual students' preferred cognitive styles and the effects of the subject content on learning outcomes. The bimodal nature of cognitive styles was examined in order to assess the full ramifications of cognitive styles on learning. Students' cognitive styles were analysed using cognitive style analysis (CSA) software. On the basis of the CSA results, students used either matched or mismatched CBI material. Analysis of the test results suggests that certain test tasks may be more suitable to certain cognitive styles than others. The results also support the reported argument that subject types have an affinity to certain cognitive styles and acknowledge the need to consider the nature of the subject matter in designing personalised CBI. The consistently better performance by the matched group suggests potential for further investigations where the limitations cited here may be eliminated. This study was used in teaching some components of a digital communications subject in an electrical engineering course.
44,596,043
3395db4e7edd054c9989eca65f1341372343ba9e
Segmentation of bone marrow stromal cells in phase contrast microscopy images
The morphology of bone marrow stromal cells (BMSCs) and the nature of phase contrast images make segmentation of such images challenging as many standard segmentation approaches do not work. This presents an obstacle to the development of systems that could use pattern recognition (PR) techniques to assess culture quality, since successful segmentation is an important precursor to successful pattern recognition. A method is presented for image normalisation and segmentation of cell regions within sub-confluent cell cultures of human bone marrow stromal cells, including a novel method of dealing with the halo associated with phase contrast images. The proposed method was evaluated by measuring its effect on the accuracy of a subsequent PR stage that was trained to discriminate between two BMSC cultures of differing quality. The accuracy achieved averaged 93% across four commonly used PR algorithms, corresponding to an overall accuracy gain of 17% compared to non-normalised, unsegmented images.
12,182,787
41c04ba7716860a35e7c2bfd2e6921c0764b5e5f
Developing a Digital Image Watermarking Model
This paper presents a key based generic model for digital image watermarking. The model aims at addressing an identified gap in the literature by providing a basis for assessing different watermarking requirements in various digital image applications. We start with a formulation of a basic watermarking system, and define system inputs and outputs. We then proceed to incorporate the use of keys in the design of various system components. Using the model, we also define a few fundamental design and evaluation parameters. To demonstrate the significance of the proposed model, we provide an example of how it can be applied to formally define common attacks.
5,252,759
bd70c09ce7e98852e7734685ce186cf71dc6f7aa
A Multiple-Control Fuzzy Vault
We introduce multiple-control fuzzy vaults allowing generalized threshold, compartmented and multilevel access structures. The presented schemes enable many useful applications employing multiple users and/or multiple locking sets. Reviewing the original single-control fuzzy vault of Juels and Sudan, we identify several similarities and differences between their vault and secret sharing schemes which influence how best to obtain working generalizations. We design multiple-control fuzzy vaults and suggest applications using biometric credentials as locking and unlocking values. Furthermore, we assess the security of our generalizations against insider/outsider attacks and examine the access complexity for legitimate vault owners.
16,407,652
3c6846d92f70d563a272313d42c962adfab8fa8b
Object recognition using an affine invariant wavelet representation
A novel algorithm based on the dyadic wavelet transform for recognising a planar object undergoing a general affine transformation is presented. The proposed algorithm has two steps: constructing the representation and the matching process. Two different wavelets associated with a given scaling function are used in the construction. In the matching procedure, only extrema of the representation are used. This makes the process efficient and less sensitive to small variations in the representation. Experimental results show that the representation is robust and, combined with the matching algorithm, it efficiently classifies unknown objects.
57,333,590
8eedb6cab1966e02940e5b661c0e77c6e1fb6bd3
Mobility assessment using simulated Artificial Human Vision
Recent research on Artificial Human Vision (AHV, or visual prostheses) has focused on providing visually meaningful information to the blind through electrical stimulation of a visual system component. This paper reports on the use of a programmable PDA-based AHV simulator which can be used by normally sighted participants. Using three different display types, mobility performance on an indoor artificial mobility course was assessed using Percentage of Preferred Walking Speed (PPWS) and mobility errors. A looming obstacle alert display was not found to assist with mobility performance. Mobility performance increased as participants learned to use the simulation effectively. Posture, head movements and gait were affected by use of the simulation.
995,807
132cb58eb17743b4633005253ee203ca140c0bca
A Scheme for Enhancing Security Using Multiple Fingerprints and the Fuzzy Vault
Enhanced security can be achieved by combining biometrics and cryptographic concepts. Aiming for security enhancement, this paper presents a scheme for merging multiple fingerprints with a cryptographic concept, the fuzzy vault, whereby multiple fingerprints can lock and unlock a secret securely embedded within the multiple-control fuzzy vault. Given either threshold, compartmented or multilevel secret sharing access structures, different security aspects can be addressed and enhanced. The capability of merging multiple biometrics/fingerprints with secret sharing structures additionally implies a major security enhancement. Valuable application scenarios are outlined and security achievements highlighted.
18,854,109
322777e820830260e8fd05d22a253e25b2f59b21
Spherical Diffusion for Scale-Invariant Keypoint Detection in Wide-Angle Images
Two variants of the SIFT algorithm are presented which operate on calibrated central projection wide-angle images characterised as having extreme radial distortion. Both define the scale-space kernel, termed the spherical Gaussian, as the solution of the heat diffusion equation on the unit sphere. Scale-space images are obtained as the convolution of the image mapped to the sphere with the spherical Gaussian which is shift invariant to pure rotation and the radial distortion in the original image. The first method termed sSIFT implements convolution in the spherical Fourier domain, and the second termed pSIFT approximates this process more efficiently in the spatial domain using stereographic projection. Results using real fisheye and equiangular catadioptric image sequences show improvements in the overall matching performance (recall vs 1-precision) of these methods versus SIFT, which treats the image as planar perspective.
105,565
f6399a65e669fe2d914fe82bacc92642bb803050
A Context Space Model for Detecting Anomalous Behaviour in Video Surveillance
Automatic detection of anomalous human behaviour is one of the goals of research into smart surveillance systems. Automatic detection addresses several human factor issues underlying existing surveillance systems. To create such a detection system, contextual information needs to be considered, because context is required in order to understand human behaviour. Unfortunately, the use of contextual information is still limited in automatic anomalous human behaviour detection approaches. This paper proposes a context space model which has two benefits: (a) it provides guidelines for system designers to select information which can be used to describe context; (b) it enables a system to distinguish between different contexts. A comparative analysis is conducted between a context-based system which employs the proposed context space model and a system implemented based on one of the existing approaches. The comparison is applied to a scenario constructed using video clips from the CAVIAR dataset. The results show that the context-based system outperforms the other system, because the context space model allows the system to consider knowledge learned from the relevant context only.
7,214,856
87dd40cd2d75d7f9bf0179abb9e38c73cd2ec8df
A method for recognising household tools using the wavelet transform
A method to recognise household tools based on the wavelet transform is presented. Object contours are represented at different resolution levels based on the wavelet transform zero-crossing representation. A dissimilarity function is developed and used to recognise an object by comparing its representation with those in the database. The method, which is translation, rotation and zoom invariant, has been tested on real images under noise-free and noisy conditions.
122,399,315
cd72a72a0e2a40c04b369f8272b7740f8575e45a
An Update-Describe Approach for Human Action Recognition in Surveillance Video
In this paper, an approach for human action recognition is presented based on adaptive bag-of-words features. Bag-of-words techniques employ a codebook to describe a human action. For successful recognition, most current action recognition systems require the optimal codebook size to be determined, as well as all instances of human actions to be available for computing the features. These requirements are difficult to satisfy in real life situations. An update-describe method for addressing these problems is proposed. Initially, interest point patches are extracted from action clips. Then, in the update step, these patches are clustered using the CluStream algorithm. Each cluster centre corresponds to a visual word. A histogram of these visual words representing an action is constructed in the describe step. A chi-squared distance-based classifier is utilised for recognising actions. The proposed approach is evaluated on the benchmark KTH and Weizmann datasets.
153,121
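The recognition step mentioned above (a chi-squared distance-based classifier over visual-word histograms) is simple enough to sketch; the following assumes nearest-neighbour assignment and illustrative names, and omits the CluStream update step entirely.

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    """Symmetric chi-squared distance between two visual-word histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def classify_action(query_hist, train_hists, train_labels):
    """Label the query with the class of the nearest training histogram."""
    dists = [chi2_distance(query_hist, h) for h in train_hists]
    return train_labels[int(np.argmin(dists))]
```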
197fdffeae043c1dc84f0477e8c0968b0854e6b0
Scale invariant feature matching with wide angle images
Numerous scale-invariant feature matching algorithms using scale-space analysis have been proposed for use with perspective cameras, where scale-space is defined as convolution with a Gaussian. The contribution of this work is a method suitable for use with wide angle cameras. Given an input image, we map it to the unit sphere and obtain scale-space images by convolution with the solution of the spherical diffusion equation on the sphere which we implement in the spherical Fourier domain. Using such an approach, the scale-space response of a point in space is independent of its position on the image plane for a camera subject to pure rotation. Scale-invariant features are then found as local extrema in scale-space. Given this set of scale-invariant features, we then generate feature descriptors by considering a circular support region defined on the sphere whose size is selected relative to the feature scale. We compare our method to a naive implementation of SIFT where the image is treated as perspective, where our results show an improvement in matching performance.
2,412,773
d261da99dc58014ae4c9f95bd052a1dae88863c8
Insights Into Students' Conceptual Understanding Using Textual Analysis: A Case Study in Signal Processing
Concept inventory tests are one method to evaluate conceptual understanding and identify possible misconceptions. The multiple-choice question format, offering a choice between a correct selection and common misconceptions, can provide an assessment of students' conceptual understanding in various dimensions. Misconceptions of some engineering concepts exist due to a lack of mental frameworks, or schemas, for these types of concepts or conceptual areas. This study incorporated an open textual response component in a multiple-choice concept inventory test to capture written explanations of students' selections. The study's goal was to identify, through text analysis of student responses, the types and categorizations of concepts in these explanations that had not been uncovered by the distractor selections. The analysis of the textual explanations of a subset of the discrete-time signals and systems concept inventory questions revealed that students have difficulty conceptually explaining several dimensions of signal processing. This contributed to their inability to provide a clear explanation of the underlying concepts, such as mathematical concepts. The methods used in this study evaluate students' understanding of signals and systems concepts through their ability to express understanding in written text. This may present a bias for students with strong written communication skills. This study presents a framework for extracting and identifying the types of concepts students use to express their reasoning when answering conceptual questions.
13,092,342
25f5147e650a1f3c315573ce9c877dc6650f3fed
Personal identification using images of the human palm
A prototype system for human identification using images of the palm is introduced. An image capture platform is constructed such that variation of the camera-to-palm distance is minimised to eliminate the need to compensate for scaling transformation. Appropriate lighting conditions are considered in order to enhance the palm features during image capture. The Hough transform is used to detect the palm features as approximated straight lines. Translation and rotation invariance are achieved by using the edge of the palm as a reference for all feature measurements.
62,177,749
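As a hedged sketch of the feature-detection step above (palm features approximated as straight lines via the Hough transform), the snippet below uses OpenCV's edge detector and probabilistic Hough transform with assumed thresholds; it is not the authors' pipeline and omits the capture platform, normalisation and matching stages.

```python
import cv2
import numpy as np

def detect_palm_lines(gray):
    """gray: uint8 palm image. Returns detected line segments as (x1, y1, x2, y2) rows."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=5)
    return np.empty((0, 4), int) if lines is None else lines[:, 0, :]
```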
cba6a1563d815b4e73ec72c0a61a015026419779
Using textual analysis with concept inventories to identify root causes of misconceptions
Engineers must have a deep and accurate conceptual understanding of their field, and concept inventories (CIs) are one method of assessing conceptual understanding and providing formative feedback. Current CI tests use multiple choice questions (MCQs) to identify misconceptions and have undergone reliability and validity testing to assess conceptual understanding. However, they do not readily provide diagnostic information about students' reasoning and therefore do not effectively point to specific actions that can be taken to improve student learning. We piloted the textual component of our diagnostic CI on electrical engineering students using items from the signals and systems CI. We then analysed the textual responses using automated lexical analysis software to test the effectiveness of these types of software, and interviewed the students regarding their experience using the textual component. Results from the automated text analysis revealed that students held both incorrect and correct ideas for certain conceptual areas and provided indications of student misconceptions. User feedback also revealed that the inclusion of the textual component helps students assess and reflect on their own understanding.
21,502,610
d72fce1d014599dbd1cfae418c205a7aef649c80
Utilizing Least Significant Bit-Planes of RONI Pixels for Medical Image Watermarking
We propose a computationally efficient image border pixel based watermark embedding scheme for medical images. We consider the border pixels of a medical image as the RONI (region of non-interest), since those pixels are of little or no interest to doctors and medical professionals irrespective of the image modality. Although the RONI is used for embedding, our proposed scheme still keeps distortion at a minimum level in the embedding region by using the optimum number of least significant bit-planes for the border pixels. All of this not only ensures that a watermarked image is safe for diagnosis, but also helps minimize the legal and ethical concerns of altering all pixels of medical images in any manner (e.g., reversible or irreversible). The proposed scheme avoids the need for RONI segmentation, which incurs capacity and computational overheads. The performance of the proposed scheme has been compared with a relevant scheme in terms of embedding capacity, image perceptual quality (measured by SSIM and PSNR), and computational efficiency. Our experimental results show that the proposed scheme is computationally efficient, offers an image-content-independent embedding capacity, and maintains a good image quality of the RONI while keeping all other pixels in the image untouched.
12,506,017
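A minimal sketch of the embedding idea above, under simplifying assumptions: only a single least significant bit-plane of the border pixels is used (the paper selects an optimum number of bit-planes), and all names are illustrative.

```python
import numpy as np

def embed_border_lsb(image, bits, border=2):
    """image: 2-D uint8 array; bits: iterable of 0/1 watermark bits.
    Returns a copy with bits written into the LSB of border (RONI) pixels only."""
    out = image.copy()
    mask = np.zeros(out.shape, bool)
    mask[:border, :] = mask[-border:, :] = True
    mask[:, :border] = mask[:, -border:] = True
    coords = np.argwhere(mask)
    bits = list(bits)
    if len(bits) > len(coords):
        raise ValueError("watermark longer than border (RONI) capacity")
    for (r, c), b in zip(coords, bits):
        out[r, c] = (out[r, c] & 0xFE) | b      # interior pixels are left untouched
    return out
```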
bbbcb2c0136d6943d98b691d2ca4e8003bbb74dc
Static image simulation of electronic visual prostheses
The development of electronic visual prostheses (artificial human vision/bionic eye systems) is steadily progressing due to the combined efforts of several international research teams. In order to anticipate informative image processing strategies that could be used in these prostheses systems, we have undertaken psychophysical testing using low quality images to simulate visual representation associated with electronic visual prostheses. Our objective is to investigate how much information and what types of information are needed to recognise or perceive a scene, when most of the original scene data is lost. This paper describes results from testing of 174 normally sighted subjects who viewed a set of low quality (low spatial resolution and low grey-scale) static images. These experiments have identified informative image processing operations which can improve understanding of picture content.
62,203,531
9ca6efedc140e0e669101eb32a2ffcce77202e4a
Scale-Invariant Features on the Sphere
This paper considers an application of scale-invariant feature detection using scale-space analysis suitable for use with wide field of view cameras. Rather than obtain scale- space images via convolution with the Gaussian function on the image plane, we map the image to the sphere and obtain scale-space images as the solution to the heat (diffusion) equation on the sphere which is implemented in the frequency domain using spherical harmonics. The percentage correlation of scale-invariant features that may be matched between any two wide-angle images subject to change in camera pose is then compared using each of these methods. We also present a means by which the required sampling bandwidth may be determined and propose a suitable anti-aliasing filter which may be used when this bandwidth exceeds the maximum permissible due to computational requirements. The results show improved performance using scale-space images obtained as the solution of the diffusion equation on the sphere, with additional improvements observed using the anti-aliasing filter.
2,144,355
1ccdf47e3498b1eac815b50abc9336b608a66e4e
Texture for script identification
The problem of determining the script and language of a document image has a number of important applications in the field of document analysis, such as indexing and sorting of large collections of such images, or as a precursor to optical character recognition (OCR). In this paper, we investigate the use of texture as a tool for determining the script of a document image, based on the observation that text has a distinct visual texture. An experimental evaluation of a number of commonly used texture features is conducted on a newly created script database, providing a qualitative measure of which features are most appropriate for this task. Strategies for improving classification results in situations with limited training data and multiple font types are also proposed.
9,448,504
d5235b179ac85a2bc46d793008ebc9bdd73fbd3a
Map Building Using Cheap Digital Cameras
Cheap digital cameras are readily available. They can be mounted on robots and used to build maps of the surrounding environment. However, these cameras suffer from several drawbacks such as a narrow field of view, low resolution and limited range due to perspective. These limitations can cause traditional approaches to Simultaneous Localization and Mapping to fail due to insufficient information content in the visual sensor data. This paper discusses these issues and presents a solution for indoor environments.
11,256,517
4353a6b44246b37c156c0fad62160575581b1d82
Adaptive unsupervised learning of human actions
Automatic detection of suspicious activities in CCTV camera feeds is crucial to the success of video surveillance systems. Such a capability can help transform the dumb CCTV cameras into smart surveillance tools for fighting crime and terror. Learning and classification of basic human actions is a precursor to detecting suspicious activities. Most of the current approaches rely on a non-realistic assumption that a complete dataset of normal human actions is available. This paper presents a different approach to deal with the problem of understanding human actions in video when no prior information is available. This is achieved by working with an incomplete dataset of basic actions which are continuously updated. Initially, all video segments are represented by Bags-Of-Words (BOW) method using only Term Frequency-Inverse Document Frequency (TF-IDF) features. Then, a data-stream clustering algorithm is applied for updating the system's knowledge from the incoming video feeds. Finally, all the actions are classified into different sets. Experiments and comparisons are conducted on the well known Weizmann and KTH datasets to show the efficacy of the proposed approach.
58,319,613
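The representation step above (bag-of-words with TF-IDF weighting over visual words) can be sketched as follows; this is an assumed, generic TF-IDF formulation rather than the paper's exact one, and the data-stream clustering stage is omitted.

```python
import numpy as np

def tfidf(counts):
    """counts: (n_segments, n_words) array of visual-word counts per video segment.
    Returns TF-IDF weighted features of the same shape."""
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    df = np.count_nonzero(counts, axis=0)                  # segments containing each word
    idf = np.log((1 + counts.shape[0]) / (1 + df)) + 1.0   # smoothed inverse document frequency
    return tf * idf
```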
70dccbe6eb042a7761e5228ee97dc79b6b224d08
A human identification technique using images of the iris and wavelet transform
A new approach for recognizing the iris of the human eye is presented. Zero-crossings of the wavelet transform at various resolution levels are calculated over concentric circles on the iris, and the resulting one-dimensional (1-D) signals are compared with model features using different dissimilarity functions.
17,847,612
48d53d36dbf53fe1e350d9f97802606949a75706
Recognition of space curves based on the dyadic wavelet transform
An algorithm for recognising space curves is introduced. The algorithm includes two stages: construction of space curve representation and a matching procedure. The representation is based on a dyadic wavelet transform. A dissimilarity function is defined and used in the matching procedure. Experimental results show that for 3D objects that can be represented by space curves, this algorithm is robust and efficient in extracting and matching object information.
11,856,995
f3de7433207685f2f6014fb1a856c3ac68bc6ab8
Directed Exploration Using a Modified Distance Transform
Mobile robots operating in unknown environments need to build maps. To do so they must have an exploration algorithm to plan a path. This algorithm should guarantee that the whole of the environment, or at least some designated area, will be mapped. The path should also be optimal in some sense and not simply a "random walk" which is clearly inefficient. When multiple robots are involved, the algorithm also needs to take advantage of the fact that the robots can share the task. In this paper we discuss a modification to the well-known distance transform that satisfies these requirements.
14,157,893
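For readers unfamiliar with the (unmodified) distance transform the paper builds on, here is a minimal grid-based sketch under assumed conventions: distances are propagated breadth-first from a goal cell, and a robot can then plan by descending the distance values. The paper's modification for directed exploration is not reproduced.

```python
from collections import deque

def distance_transform(grid, goal):
    """grid: 2-D list of 0 (free) / 1 (obstacle); goal: (row, col).
    Returns a map of shortest 4-connected distances to the goal."""
    rows, cols = len(grid), len(grid[0])
    INF = float("inf")
    dist = [[INF] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and dist[nr][nc] == INF:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist
```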
94f6e1f859ea6fa8e1a13564f479b197faeb9424
An image processing approach for estimating the number of live prawn larvae in water
We address the problem of accurate estimation of prawn larvae numbers in small water-filled containers using image processing techniques. Images of the containers are captured from sampled video signals of a camera suspended over the containers. The images are preprocessed for noise removal and enhancement. The resulting images are then processed to estimate the number of prawn larvae with an acceptable accuracy. Our preliminary results show that estimates of prawn larvae numbers with an accuracy of 90-95% are achievable under controlled conditions. Some issues of system development and practical implementation are discussed.
25,723,241
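A rough sketch of the counting idea above, with assumed thresholds: denoise, binarise, label connected components, and count blobs above a minimum area. The paper's actual preprocessing pipeline and accuracy figures do not depend on this illustration.

```python
import numpy as np
from scipy import ndimage

def count_larvae(gray, threshold=120, min_area=15):
    """gray: 2-D uint8 image of a container. Returns an estimated number of larvae."""
    smoothed = ndimage.median_filter(gray, size=3)                  # noise removal
    binary = smoothed > threshold                                   # bright larvae on dark background
    labels, n = ndimage.label(binary)                               # connected components
    areas = ndimage.sum(binary, labels, index=np.arange(1, n + 1))  # pixels per component
    return int(np.sum(areas >= min_area))
```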
d3506edde715a3994634278588f196d03660341d
Attitude Estimation for a Fixed-Wing Aircraft Using Horizon Detection and Optical Flow
We develop a method for estimating the flight-critical parameters of pitch angle, roll angle and the three body rates using horizon detection and optical flow. We achieve this with an image processing front-end that detects candidate horizon lines using morphological image processing and the Hough transform. The optical flow of the image for each candidate line is calculated, and using these measurements, we are able to estimate the body rates of the aircraft. Using an Extended Kalman Filter (EKF), the candidate horizon lines are propagated and tracked through successive image frames, with statistically unlikely horizon candidates eliminated. Results qualitatively describing the performance of the image processing front-end on real datasets are presented, followed by an analysis of the improvement when utilising the motion model of the vehicle.
14,386,442
269c1ce45ede73ae5fa6721642045cc119e0aa8e
Investigation of Fish-Eye Lenses for Small-UAV Aerial Photography
Aerial photography obtained by unmanned aerial vehicles (UAVs) is a rising market for civil applications. Small UAVs are believed to close gaps in niche markets, such as acquiring airborne image data for remote sensing purposes. Small UAVs can fly at low altitudes, in dangerous environments, and over long periods of time. However, their small lightweight construction leads to new problems, such as higher agility and greater susceptibility to turbulence, which has a big impact on the quality of the data and their suitability for aerial photography. This paper investigates the use of fish-eye lenses to overcome field-of-view (FOV) issues for highly agile UAV platforms susceptible to turbulence. The fish-eye lens has the benefit of a large observation area (large FOV) and does not add additional weight to the aircraft, unlike traditional mechanical stabilizing systems. We present the implementation of a fish-eye lens for aerial photography and mapping purposes, with potential use in remote sensing applications. We describe a detailed investigation from the fish-eye lens distortion to the registering of the images. Results of the process are presented using low-quality sensors typically found on small UAVs. The system was flown on a midsize platform (a more stable Cessna aircraft) and also on ARCAA's small (<10 kg) UAV platform. The effectiveness of the approach is compared for the two sized platforms.
25,005,172
b87184793ad0c5570188534a2fbbf46e8a84ce2d
Scene specific imaging for bionic vision implants
Progress within the field of bionic vision (visual prosthesis) implants has reached the clinical trial stage, with several blind patients fitted with implanted vision systems. In this paper we suggest that the image processing required for these devices should be adjusted depending on the scene type. Characteristics of simple scenes are listed, along with the image processing techniques required to best present information from each scene type.
61,077,886
b4dbf56e276038e04778163174b5c39d01f5feb6
Recursive two-dimensional median filtering algorithms for fast image root extraction
The authors develop and evaluate two new two-dimensional fast recursive median filtering algorithms for image root extraction. The first recursive algorithm, called the fast binary root algorithm, converges to a root of a binary image in one pass including a number of local rescans. The second is referred to as the fast multilevel root algorithm. It converges to a root of a multilevel image in two passes, the second of which includes a number of local rescans. The number of ordering positions required by the authors' algorithms is shown to be significantly less than that required by the standard and regular recursive median filters.
121,950,806
54b30d29ed49dea8aef83df3f27f68370c0641c9
Visual perception of low quality images
There are several new applications where perception is required from low quality images. One such application is electronic visual prostheses, or "bionic eyes". These artificial vision systems involve electrical stimulation of nerve cells in the human visual system via implanted electrodes. In this paper we present results of our subjective tests based on simulating what might be seen by users of low quality vision systems. A total of 225 normally sighted subjects viewed a set of low quality (low spatial resolution and low grey-scale) static images. We wished to quantify intelligibility/recognition for low quality images. Results from this testing form part of an image quality model to assess the usefulness of low quality images.
60,659,862
ae443ee5403c3e1f9145529c44007aaf0ebd213b
Wavelet-based affine invariant representation: a tool for recognizing planar objects in 3D space
A technique is developed to construct a representation of planar objects undergoing a general affine transformation. The representation can be used to describe planar or nearly planar objects in a three-dimensional space, observed by a camera under arbitrary orientations. The technique is based upon object contours, parameterized by an affine invariant parameter and the dyadic wavelet transform. The role of the wavelet transform is the extraction of multiresolution affine invariant features from the affine invariant contour representation. A dissimilarity function is also developed and used to distinguish among different object representations. This function makes use of the extrema on the representations, thus making its computation very efficient. A study of the effect of using different wavelet functions and their order or vanishing moments is also carried out. Experimental results show that the performance of the proposed representation is better than that of other existing methods, particularly when objects are heavily corrupted with noise.
122,806,482
1b7148fae60e09b6d36f3def9ac19b5e0f8b0c61
Object identification using the dyadic wavelet transform and indexing techniques
A wavelet based representation of planar objects is introduced. Based on this representation, a matching algorithm using indexing techniques is also developed for identifying unknown objects. Rather than considering all points on the representation, only extrema are used in constructing the look-up table and for matching. Simulations demonstrate that the proposed algorithm is effective and accurate in classifying objects under similarity transformations and in a noisy environment.
26,435,998
681eca6dbc33e4a292793abb8ae68bfca158b46b
Vibration Compensation for Fisheye Lenses in UAV Applications
Low-cost aerial vision systems need to face the challenges of using low quality products to perform aerial photography. Such systems are widely used in remote-controlled aircraft and unmanned aerial vehicles (UAVs) to collect aerial imagery for image acquisition, terrain mapping or remote sensing. A one-pixel shift in a 0.8 megapixel image captured from a UAV operating at 1000 ft corresponds to about 2.5 m of measurement error on the ground. In our case, a vibrating fisheye lens moving relative to the camera added new uncertainties to the collected images and required compensation. This paper presents a vibration compensation approach using a modified Hough transform that utilizes the circular image provided by a fisheye lens. We define the fisheye circle boundary by using a Canny edge detector. Our vibration compensation was tested on our collected aerial images, with enhanced performance in more than 80% of cases.
5,982,058
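The boundary-detection step above (locating the circular fisheye image boundary) can be sketched with OpenCV's Hough circle transform, which applies Canny edge detection internally; parameters are assumed, and the subsequent frame-to-frame compensation of the circle centre is not shown.

```python
import cv2

def fisheye_boundary(gray):
    """gray: uint8 frame. Returns (cx, cy, radius) of the strongest circle, or None."""
    blurred = cv2.GaussianBlur(gray, (9, 9), 2)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=2,
                               minDist=gray.shape[0],        # expect a single dominant circle
                               param1=150, param2=60,
                               minRadius=gray.shape[0] // 4,
                               maxRadius=gray.shape[0] // 2)
    return None if circles is None else tuple(circles[0, 0])
```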
1e9ffe216061371256a34fbfac4f1721e0f89540
Recognition of 2D object contours using the wavelet transform zero-crossing representation
A new algorithm to recognize a two-dimensional object of arbitrary shape is presented. The object boundary is first represented by a one-dimensional signal. This signal is then used to build the wavelet transform zero-crossing representation of the object. The algorithm is invariant to translation, rotation and scaling. Experimental results show that, compared with the use of Fourier descriptors, our algorithm gives more stable and accurate results.
122,527,140
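To make the representation idea above concrete, here is a hedged sketch under assumptions: the contour is reduced to a 1-D centroid-distance signal, smoothed at dyadic scales with a Gaussian (standing in for the paper's wavelet), and the zero crossings of the second derivative are recorded per scale. It is illustrative only, not the authors' exact transform.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def zero_crossing_representation(contour, n_scales=4):
    """contour: (N, 2) array of ordered boundary points of a closed shape.
    Returns, for each dyadic scale, the indices where the smoothed signal's
    second derivative changes sign."""
    centroid = contour.mean(axis=0)
    signal = np.linalg.norm(contour - centroid, axis=1)       # 1-D contour signal
    representation = []
    for j in range(n_scales):
        d2 = gaussian_filter1d(signal, sigma=2 ** j, order=2, mode="wrap")
        sign = np.signbit(d2)
        representation.append(np.nonzero(sign[1:] != sign[:-1])[0])
    return representation
```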