doc_id | title | abstract | corpus_id |
---|---|---|---|
037417141e499f7443d498b73f060c32fba6120e | A novel NMOS transistor for high performance ESD protection devices in 0.18 μm CMOS technology utilizing salicide process | The electrostatic discharge (ESD) threshold of fully salicided grounded-gate NMOS transistors (ggNMOSTs) and partially salicided ggNMOSTs consisting of dummy-gate and N-well resistor was studied by transmission line pulse (TLP) I-V curves, and human body model (HBM) and machine model (MM) robustness. The state-of-the-art 0.18 μm cobalt salicide CMOS process is used, and the thickness of the gate dielectric material is 35 Å. Fully salicided ggNMOSTs have much lower values of second breakdown current (It2) than partially salicided ggNMOSTs, and with multi-finger structures, only partially salicided ggNMOSTs turn on uniformly. Using these partially salicided NMOSTs as protection devices, we acquired ESD immunity of >2 kV (HBM) and >200 V (MM). | 45,624,406 |
b2bf41bf5e5e44d746c1cac3edd058d7d346980e | Online Video Object Segmentation via Convolutional Trident Network | A semi-supervised online video object segmentation algorithm, which accepts user annotations about a target object at the first frame, is proposed in this work. We propagate the segmentation labels at the previous frame to the current frame using optical flow vectors. However, the propagation is error-prone. Therefore, we develop the convolutional trident network (CTN), which has three decoding branches: separative, definite foreground, and definite background decoders. Then, we perform Markov random field optimization based on outputs of the three decoders. We sequentially carry out these processes from the second to the last frames to extract a segment track of the target object. Experimental results demonstrate that the proposed algorithm significantly outperforms the state-of-the-art conventional algorithms on the DAVIS benchmark dataset. | 7,741,697 |
1bd3406452ff5f3cd93d77a934bc5459c123d8ff | Fabrication of Au-decorated 3D ZnO nanostructures as recyclable SERS substrates | Highly roughened Au-decorated 3D ZnO nanostructures were prepared using a combination of prism holographic lithography and atomic layer deposition techniques. The optimized SERS intensity from the prepared Au-coated 3D ZnO inverse structures was twenty times the intensity obtained from an Au-coated flat glass control substrate. The surfaces could be reused after the photocatalytic degradation and removal of adsorbates in the presence of ZnO. The Au-coated 3D ZnO structures described here offer an alternative to traditional single-use SERS substrates. | 47,563,754 |
c872c648e750a196e923afaff8ce557d3a8ff799 | Lossless compression of 3-D point data in QSplat representation | We propose a lossless compression algorithm for three-dimensional point data in graphics applications. In typical point representation, each point is treated as a sphere and its geometrical and normal data are stored in the hierarchical structure of bounding spheres. The proposed algorithm sorts child spheres according to their positions to achieve a higher coding gain for geometrical data. Also, the proposed algorithm compactly encodes normal data by exploiting high correlation between parent and child normals. Simulation results show that the proposed algorithm saves up to 60% of storage space. | 8,978,612 |
c780f338691c02b0da8be6e5ca70dab4959d0332 | Construction of regular 3D point clouds using octree partitioning and resampling | We propose a construction algorithm of regular 3D point clouds from irregular ones. An irregular point cloud is partitioned hierarchically using an octree, and the points in each octree node are projected onto a square face, called a base plane, of the cubic node. The geometry of the point cloud is then represented by the height fields on the base planes. The height fields are interpolated and then resampled uniformly on the base plane. Consequently, the original geometry is represented by the scalar height fields, each of which is defined at uniform grid points on a square region. Therefore, the resulting geometry can be easily processed by conventional 2D image processing techniques. | 206,963,574 |
2c47566b77aa6cc4a77fdba078022124290f88b7 | Progressive compression of 3D dynamic mesh sequences | An algorithm to compress 3D mesh sequences for dynamic objects is proposed in this work. Given an irregular mesh sequence, we construct a semi-regular mesh structure for the first frame and then map it to the subsequent frames based on the hierarchical motion estimation. The regular structure of the resulting mesh sequence enables us to employ the zero-tree coding scheme to compress the motion compensation residuals efficiently. Simulation results demonstrate that the proposed algorithm provides significantly better compression performance than the static coder that encodes each frame independently. | 15,196,492 |
fc84ac3ac553aaf1616bde080c2fa1eabe7b9d65 | Adaptive image and video retargeting technique based on Fourier analysis | An adaptive image and video retargeting algorithm based on Fourier analysis is proposed in this work. We first divide an input image into several strips using the gradient information so that each strip consists of textures of similar complexities. Then, we scale each strip adaptively according to its importance measure. More specifically, the distortions, generated by the scaling procedure, are formulated in the frequency domain using the Fourier transform. Then, the objective is to determine the sizes of scaled strips to minimize the sum of distortions, subject to the constraint that the sum of their sizes should equal the size of the target output image. We solve this constrained optimization problem using the Lagrangian multiplier technique. Moreover, we extend the approach to the retargeting of video sequences. Simulation results demonstrate that the proposed algorithm provides reliable retargeting performance efficiently. | 9,064,479 |
23a880ecb2aeee47891783e17970990b83b04ed2 | Fully spray-coated inverted organic solar cells | A spray deposition method was adapted to fabricate inverted organic solar cells (IOSCs) for low-cost production. The spray-coating process was used to deposit a highly transparent sol-gel-solution-derived ZnO layer, a P3HT:PCBM photoactive layer, and a PEDOT:PSS layer. The IOSCs fabricated by the fully spray-coated method show performance similar to that of the spin-coated method. In the spray-coating process, a power conversion efficiency (PCE) of 2.95% was obtained at an effective photocurrent-generating area of 0.38 cm² under AM 1.5 simulated illumination. Compared to the spin-coating process, the spray-coating method can be a promising substitute for the next generation of OSCs. | 44,024,497 |
0e29b9eac6e28c1e8f1672a490b7eaf4d1a6454a | Air-stable inverted organic solar cells using TiO2 layer deposited by atomic layer deposition | We investigated air-stable and highly efficient inverted organic solar cells (IOSCs) using TiO2 as an electron-selective layer prepared by atomic layer deposition (ALD). TiO2 layer thicknesses from 10 to 50 nm were adopted to confirm the thickness effect on device performance and stability. The IOSC with a 20-nm-thick TiO2 layer showed a power conversion efficiency (PCE) of 3.01% under AM 1.5 simulated illumination. Furthermore, the IOSCs showed better stability, retaining over 80% of the maximum PCE after 30 days. This suggests that IOSCs with a TiO2 layer deposited by ALD provide a promising route to the fabrication of air-stable organic solar cells. | 25,942,917 |
ec3e66e6be03e8521b52b06794ace274150a4859 | Region-based backlight compensation algorithm for images and videos | An algorithm for compensating the effects of backlight in images and videos is proposed in this work. We first determine the region of interest (ROI) to compensate, mainly using a saliency map, which is based on darkness, skin color, and color and texture prominence features. We then compute the compensating offset value for each pixel. Initial offset values are derived to improve the brightness and the contrast of the ROI, and also to provide temporally consistent output frames in the case of video compensation. Finally, we obtain the final offset values by minimizing an energy function, consisting of a data term and a smoothness term. Simulation results show that the proposed algorithm improves the picture quality of backlit images and videos efficiently. | 8,003,294 |
b81fd5c674c4712cd3ae044ca6a7f72fdf4373ab | Robust transmission of video sequence over noisy channel using parity-check motion vector | Motion compensation-discrete cosine transform (MC-DCT) coding is an efficient compression technique for digital video sequences. However, the compressed video signal is vulnerable to transmission errors over noisy channels. In this paper, we introduce a novel concept of parity-check motion vector (PMV) into the MC-DCT coder in order to improve its error robustness. By inserting the redundant PMVs systematically into the compressed bitstream, the proposed algorithm is capable of recovering very severe transmission errors, such as loss of an entire frame, in addition to detecting the errors effectively without requesting any information from external devices. The proposed algorithm is implemented based on the H.263 coder, and tested intensively in a realistic error-prone environment. It is shown that the proposed algorithm provides much better objective and subjective performance than the conventional H.263 coder in the error-prone environment. | 43,398,990 |
f5fefe9cfa40f689c5234959477fec656c29bc72 | Panoramic scene generation from multi-view images with close foreground objects | An algorithm to generate a panorama from multi-view images, which contain foreground objects with varying depths, is proposed in this work. The proposed algorithm constructs a foreground panorama and a background panorama separately, and then merges them into a complete panorama. First, the foreground panorama is obtained by finding the translational displacements of objects between source images. Second, the background panorama is initialized using warped source images and then optimized to preserve spatial consistency and satisfy visual constraints. Then, the background panorama is extended by inserting seams and merged with the foreground panorama. Experimental results demonstrate that the proposed algorithm provides visually satisfying panoramas with all meaningful foreground objects, but without severe artifacts in the backgrounds. | 27,468,724 |
4d2a62529c153edb65edee43b3be712184b5e6bd | VGEF: Contrast enhancement of dark images using value gap expansion force and sorted histogram equalization | We propose a novel contrast enhancement method for dark images using the value gap expansion force (VGEF) and the sorted histogram equalization. Based on the observation that the inter-pixel relationship is analogous to the electrostatic force, we define the pixel field spread around each pixel and the pixel mass at each pixel position. We compute the VGEF exerted on a pixel by multiplying the pixel field and the pixel mass. Also, we sort the pixels of the same value into 5 clusters according to their VGEF magnitudes to reduce contour artifacts in the enhanced image. Finally, we obtain output pixel values using the transformation function that equalizes the sorted histogram. Experimental results demonstrate that the proposed algorithm effectively enhances the contrast of dark images, and is applicable to improving saliency detection as well. | 8,179,937 |
b90794d9e9283ea0e6a622cfa8fa15d67814a4f3 | Optimized Brightness Compensation and Contrast Enhancement for Transmissive Liquid Crystal Displays | An optimized brightness-compensated contrast enhancement (BCCE) algorithm for transmissive liquid crystal displays (LCDs) is proposed in this paper. We first develop a global contrast enhancement scheme to compensate for the reduced brightness when the backlight of an LCD device is dimmed for power reduction. We also derive a distortion model to describe the information loss due to the brightness compensation. Then, we formulate an objective function that consists of the contrast enhancement term and the distortion term. By minimizing the objective function, we maximize the backlight-scaled image contrast, subject to the constraint on the distortion. Simulation results show that the proposed BCCE algorithm provides high-quality images, even when the backlight intensity is reduced by up to 50-70% to save power. | 12,528,977 |
28155531d2b847999ea9351660b1bff9da5c8d6c | Video Stabilization Based on Feature Trajectory Augmentation and Selection and Robust Mesh Grid Warping | We propose a video stabilization algorithm, which extracts a guaranteed number of reliable feature trajectories for robust mesh grid warping. We first estimate feature trajectories through a video sequence and transform the feature positions into rolling-free smoothed positions. When the number of the estimated trajectories is insufficient, we generate virtual trajectories by augmenting incomplete trajectories using a low-rank matrix completion scheme. Next, we detect feature points on a large moving object and exclude them so as to stabilize camera movements, rather than object movements. With the selected feature points, we set a mesh grid on each frame and warp each grid cell by moving the original feature positions to the smoothed ones. For robust warping, we formulate a cost function based on the reliability weights of each feature point and each grid cell. The cost function consists of a data term, a structure-preserving term, and a regularization term. By minimizing the cost function, we determine the robust mesh grid warping and achieve the stabilization. Experimental results demonstrate that the proposed algorithm reconstructs videos more stably than the conventional algorithms. | 14,660,363 |
18195644bd2805fb4d203a3ad99263a78db8f08f | Error Concealment of H.264/AVC Video Frames for Mobile Video Broadcasting | An efficient frame error concealment (EC) algorithm for H.264 video is proposed in this work. The proposed algorithm temporally conceals a lost frame by combining several pixels or several motion vectors using an overlapping window. The proposed algorithm requires a low computational complexity, but provides better performance than the conventional frame error concealment methods. Therefore, the proposed algorithm is suitable for mobile video broadcasting applications. | 8,066,629 |
cab305a7fc8767a2a243dcbd6b51352a47eb46ea | Enhanced motion compensation algorithm based on second-order prediction | Several techniques based on the multiple reference frame scheme have been proposed to improve the motion prediction gain. Though these techniques yield higher prediction gain than the single reference frame scheme, they incur tremendous computational complexity during the motion search procedure. In addition, blocking artifacts may be visible along the block boundaries, since each macroblock is predicted independently of its neighbors. To overcome these drawbacks, this paper proposes a novel motion compensation algorithm, based on the double reference frame (DRF), the double motion vector (DMV), and the searching position shifting (SPS) schemes. First, to reduce the motion vector bitrate and the computational complexity of the motion search procedure, we constrain the number of reference frames to 2, and use only two motion vectors per block. Second, to alleviate the blocking artifacts and to obtain better pel prediction, the searching position shifting scheme is introduced. Experimental results demonstrate that the proposed algorithm yields a 3-4 dB higher prediction gain than the single reference frame scheme. The subjective quality is also improved by alleviating the blocking artifacts. | 34,403,195 |
c7d739c908172446bf1865783146b27ccbd3eb72 | Predictive compression of geometry, color and normal data of 3-D mesh models | Predictive compression algorithms for geometry, color and normal data of three-dimensional (3-D) mesh models are proposed in this work. In order to eliminate redundancies in geometry data, we predict each vertex position by exploiting the position and angle information in neighboring triangles. To compress color data, we propose a mapping table scheme that compresses frequently recurring colors efficiently. For normal data, we propose an average predictor and a 6-4 subdivision quantizer to improve coding gain. Simulation results demonstrate that the proposed algorithm provides better performance than the MPEG-4 standard for 3-D mesh model coding (3-DMC). | 1,731,526 |
ca153487fc9e98d05b965896610757b9ae355273 | Large-Scale 3D Point Cloud Compression Using Adaptive Radial Distance Prediction in Hybrid Coordinate Domains | An adaptive range image coding algorithm for the geometry compression of large-scale 3D point clouds (LS3DPCs) is proposed in this work. A terrestrial laser scanner generates an LS3DPC by measuring the radial distances of objects in a real world scene, which can be mapped into a range image. In general, the range image exhibits different characteristics from an ordinary luminance or color image, and thus the conventional image coding techniques are not suitable for the range image coding. We propose a hybrid range image coding algorithm, which predicts the radial distance of each pixel using previously encoded neighbors adaptively in one of three coordinate domains: range image domain, height image domain, and 3D domain. We first partition an input range image into blocks of various sizes. For each block, we apply multiple prediction modes in the three domains and compute their rate-distortion costs. Then, we perform the prediction of all pixels using the optimal mode and encode the resulting prediction residuals. Experimental results show that the proposed algorithm provides significantly better compression performance on various range images than the conventional image or video coding techniques. | 16,582,470 |
49694be83375b6fd2bb36e6c070b2c80b1990598 | Improving motion picture quality of plasma display panels | An effective algorithm to reduce gray level disturbances in plasma display panels (PDPs) is proposed in this work. Gray level disturbances occur when PDPs display moving image sequences, since each gray level has a different light emission pattern. We design the subfield vector and the driving vectors to minimize the disturbances. First, we employ the lexicographically largest vector as the subfield vector, since it can flexibly control the shapes of light emission patterns. Then, we propose the 1st-order method to determine the driving vectors. Simulation results demonstrate that the proposed algorithm suppresses gray level disturbances effectively and provides faithful image quality. | 18,604,172 |
f214fcb5c98bbb0de6cc544245b176bc2b602e1f | FDQM: Fast Quality Metric for Depth Maps Without View Synthesis | We propose a fast quality metric for depth maps, called fast depth quality metric (FDQM), which efficiently evaluates the impacts of depth map errors on the qualities of synthesized intermediate views in multiview video plus depth applications. In other words, the proposed FDQM assesses view synthesis distortions in the depth map domain, without performing the actual view synthesis. First, we estimate the distortions at pixel positions, which are specified by reference disparities and distorted disparities, respectively. Then, we integrate those pixel-wise distortions into an FDQM score by employing a spatial pooling scheme, which considers occlusion effects and the characteristics of human visual attention. As a benchmark of depth map quality assessment, we perform a subjective evaluation test for intermediate views, which are synthesized from compressed depth maps at various bitrates. We compare the subjective results with objective metric scores. Experimental results demonstrate that the proposed FDQM yields scores highly correlated with the subjective ones. Moreover, FDQM requires at least 10 times fewer computations than conventional quality metrics, since it does not perform the actual view synthesis. | 30,278,249 |
2a4b56ac6e900cfb8359e42d987a5f36f717cc20 | Compression of 3-D triangle mesh sequences based on vertex-wise motion vector prediction | We propose an efficient geometry compression algorithm for three-dimensional (3-D) mesh sequences based on the two-stage vertex-wise motion vector (MV) prediction. In general, the MV of a vertex is highly correlated to those of the adjacent vertices. To exploit this high correlation, we define the neighborhood of a vertex, and predict the MV of the vertex from those of the neighborhood. The error vectors are related to the local shape changes of 3-D objects, and still have redundancy. To remove the redundancy, the error vectors are also predicted, at the second stage, spatially or temporally by using a rate-distortion optimization technique. It is shown that the proposed algorithm has simpler structure than the existing segment-based algorithm, and yields better compression performance. | 28,670,622 |
14e1a56c2ac136b8184a8ff92ef3817f4b40ac26 | Power-constrained contrast enhancement for OLED displays based on histogram equalization | A novel power-constrained contrast enhancement algorithm for organic light-emitting diode (OLED) displays is proposed in this work. We first develop the log-modified histogram equalization (LMHE) scheme, which reduces overstretching artifacts of the conventional histogram equalization technique. Then, we model the power consumption in OLED displays, and incorporate it into LMHE to achieve the optimal tradeoff between contrast enhancement and power saving. Simulation results demonstrate that the proposed algorithm can reduce the power consumption significantly, while preserving image qualities. | 14,041,195 |
5db6cb64697a36ffcfc6fc8ab7d5c8a94d5bcf35 | Video saliency detection based on spatiotemporal feature learning | A video saliency detection algorithm based on feature learning, called ROCT, is proposed in this work. To detect salient regions, we design multiple spatiotemporal features and combine those features using a support vector machine (SVM). We extract the spatial features of rarity, compactness, and center prior by analyzing the color distribution in each image frame. Also, we obtain the temporal features of motion intensity and motion contrast to identify visually important motions. We train an SVM classifier using the spatiotemporal features extracted from training video sequences. Finally, we compute the visual saliency of each patch in an input sequence using the trained classifier. Experimental results demonstrate that the proposed algorithm provides more accurate and reliable results of saliency detection than conventional algorithms. | 18,512,459 |
ee6335342e4ca9723ccfbb4aa528f67400a87882 | Flexible complexity control between encoder and decoder for video coding | We propose a novel video codec that can distribute computational complexity between the encoder and the decoder in a flexible manner. Since motion estimation is the most computationally intensive part in video coding, it is important to share the motion estimation task. In the proposed algorithm, the decoder estimates motion vectors by performing a partial three-step search. The estimated motion vectors are sent back to the encoder via a feedback channel. The encoder then refines the received motion vectors. In this work, four operation modes corresponding to the encoder-to-decoder complexity ratios are presented. Experimental results demonstrate that the proposed algorithm allocates the complexity between the encoder and the decoder effectively, while providing promising compression performance. | 16,277,652 |
ceff874704a949526d767b30b27848f9bb1833c9 | Fabrication of Au-Decorated 3D ZnO Nanostructures as Recyclable SERS Substrates | Highly roughened Au-decorated 3D ZnO nano-structures were prepared using a combination of prism holographic lithography and atomic layer deposition techniques. Prism holographic lithography is a simple and rapid method for fabricating ordered 3D nanostructures using the optical interference effects of multiple beams derived from a specially designed prism. Highly ordered reproducible surface-enhanced Raman scattering (SERS) substrates are needed for the reliable calibration of target analyte concentrations. A high density of Au nanoparticles separated by nanoscale gaps was generated on the Au-coated ZnO inverse structures. The nanogaps may function as strong hot spots for highly sensitive SERS-based chemical/biological sensors. The optimized SERS intensity from the prepared Au-coated 3D ZnO inverse structures was 20 times the intensity obtained from an Au-coated flat glass control substrate. The surfaces could be reused after the photocatalytic degradation and removal of adsorbates in the presence of ZnO. The Au-coated 3D ZnO structures described here offer an alternative to traditional single-use SERS substrates. | 34,066,679 |
1771500250f9fe0bb474d5f6bfac8439e5992b7b | Robust semi-regular mesh representation of 3D dynamic objects | In this paper, we propose a robust scheme to represent 3D dynamic objects, which are given as a sequence of irregular meshes. The sequence of irregular meshes is converted into a sequence of semi-regular meshes with time-invariant topology to facilitate the manipulation of 3D data. To this end, we sequentially perform the motion estimation of base meshes, the shape transformation of subdivision points, and the selective intra-remeshing. It is shown that a given sequence can be efficiently remeshed without parameterization data by exploiting temporal correlation in the sequence. Simulation results demonstrate that the proposed algorithm reproduces the original geometry faithfully with a sequence of semi-regular meshes. | 41,964,715 |
095b3561d4c8088c23c31ab9150b5c7328ecebae | Multiple random walkers and their application to image cosegmentation | A graph-based system to simulate the movements and interactions of multiple random walkers (MRW) is proposed in this work. In the MRW system, multiple agents traverse a single graph simultaneously. To achieve desired interactions among those agents, a restart rule can be designed, which determines the restart distribution of each agent according to the probability distributions of all agents. In particular, we develop the repulsive rule for data clustering. We illustrate that the MRW clustering can segment real images reliably. Furthermore, we propose a novel image cosegmentation algorithm based on the MRW clustering. Specifically, the proposed algorithm consists of two steps: inter-image concurrence computation and intra-image MRW clustering. Experimental results demonstrate that the proposed algorithm provides promising cosegmentation performance. | 14,914,367 |
637d36ba170efdacfee4f3dc5d39ba69e80346ed | Coding Order Decision of B Frames for Rate-Distortion Performance Improvement in Single-View Video and Multiview Video Coding | The coding gain that can be achieved by improving the coding order of B frames in the H.264/AVC standard is investigated in this work. We first represent the coding order of B frames and their reference frames with a binary tree. We then formulate a recursive equation to find out the binary tree that provides a suboptimal, but very efficient, coding order. The recursive equation is efficiently solved using a dynamic programming method. Furthermore, we extend the coding order improvement technique to the case of multiview video sequences, in which the quadtree representation is used instead of the binary tree representation. Simulation results demonstrate that the proposed algorithm provides significantly better R-D performance than conventional prediction structures. | 639,719 |
53c4faa9e1a0bfd722c0b5dbd4c26985c92f66f1 | On DCT coefficient distribution in video coding using quad-tree structured partition | Recent image/video coding standards adopt a full quad-tree structured block partitioning with a number of different transform sizes. As a result, the statistical behavior of the transformed coefficients becomes more difficult to estimate than in previous standards, where a single pdf might be enough to model the distribution. In this paper, we propose a novel probability density to model the distribution of the transformed coefficients obtained from different transform sizes. The proposed density model is mathematically derived, and its parameters are estimated from previously coded samples. A rate-distortion (R-D) model is also established from the proposed density model. Experimental results show that the proposed pdf efficiently approximates the R-D relation when different transform sizes are adopted for coding. | 16,390,992 |
585ce34f687f099bf88f636d963e43bc82912619 | Stitching of heterogeneous images using depth information | We propose a novel heterogeneous image stitching algorithm, which employs disparity information as well as color information. It is challenging to stitch heterogeneous images that have different background colors and diverse foreground objects. To overcome this difficulty, we set the criterion that objects should preserve their shapes in the stitched image. To satisfy this criterion, we derive an energy function using color and disparity gradients. As the gradients are highly correlated with object boundaries, we can find the optimal seam from the energy function, along which two images are pasted. Moreover, we develop a retargeting scheme to reduce the size of the stitched image further. Experimental results demonstrate that the proposed algorithm is a promising tool for stitching heterogeneous images. | 14,368,886 |
d0dc33850f3ed16eea1fcd933ed5fd80f87bed2e | Efficient distributed video coding using symmetric motion estimation and channel division | An efficient distributed video coding algorithm using symmetric motion estimation and channel division is proposed in this work. We employ the symmetric motion estimation to generate high quality side information for Wyner-Ziv frames. Also, in the channel division, we classify blocks in the side information into reliable ones and unreliable ones. Then, we transmit parity bits for unreliable blocks only, achieving a coding gain. Simulation results demonstrate that the proposed algorithm provides up to 4 dB better PSNR performance than the conventional distributed video coding algorithms. | 14,059,567 |
2562a82dc182104862ad1be749204f9d53a114bb | Virtual view synthesis using multi-view video sequences | A virtual view synthesis algorithm using multi-view video sequences, which inherits the advantages of both the forward warping and the inverse warping, is proposed in this work. First, we use the inverse warping to synthesize a virtual view without holes from the nearest two views. Second, we detect occluded regions in the synthesized view based on the uniqueness constraint. Then, we refine the occluded regions using the information in the farther views based on the forward warping technique. Simulation results demonstrate that the proposed algorithm provides significantly higher PSNR performance than the conventional inverse warping scheme. | 13,131,101 |
cfd08dc308862b091c010492a6d8ff9dfcfe8e74 | Depth-guided adaptive contrast enhancement using 2D histograms | A novel contrast enhancement (CE) algorithm using 2-dimensional (2D) histograms, which transforms pixel values adaptively based on the depth information, is proposed in this work. In general, foreground objects convey more important visual information than background regions. Hence we assign high CE priorities to foreground pixels using the depth values and generate a depth-guided 2D histogram. Then, we stretch the gray-level differences of adjacent foreground pixels more strongly than those of adjacent background pixels. Moreover, to enhance background regions as well, we design two transformation functions for the foreground and the background separately. By combining the two functions according to pixel depths, we obtain an adaptive space-variant transformation function, which is finally used to reconstruct the output image. Experimental results show that the proposed algorithm outperforms conventional CE algorithms by enhancing salient foreground objects efficiently and preserving background details faithfully. | 11,734,795 |
c5af6ecd188058eee9d0594ff90e4ed47e27c283 | Contrast enhancement based on layered difference representation | A novel contrast enhancement algorithm based on the layered difference representation is proposed in this work. We first represent gray-level differences at multiple layers in a tree-like structure. Then, based on the observation that gray-level differences, occurring more frequently in the input image, should be more emphasized in the output image, we solve a constrained optimization problem to derive the transformation function at each layer. Finally, we aggregate the transformation functions at all layers into the overall transformation function. Simulation results demonstrate that the proposed algorithm enhances images efficiently in terms of both objective quality and subjective quality. | 15,811,099 |
88788fa2a317ea2b53f3ef4cbfe4f6d49d8ea587 | Indium Oxide Thin-Film Transistors Fabricated by RF Sputtering at Room Temperature | Thin-film transistors (TFTs) were fabricated using an indium oxide (In₂O₃) thin film as the n-channel active layer by RF sputtering at room temperature. The TFTs showed a thickness-dependent performance in the range of 48-8 nm, which is ascribed to the total carrier number in the active layer. The optimum device performance, obtained for 8-nm-thick In₂O₃ TFTs, comprised a field-effect mobility of 15.3 cm²·V⁻¹·s⁻¹, a threshold voltage of 3.1 V, an ON-OFF current ratio of 2.2 × 10⁸, a subthreshold gate voltage swing of 0.25 V·decade⁻¹, and, most importantly, a normally OFF characteristic. These results suggest that sputter-deposited In₂O₃ is a promising candidate for high-performance TFTs for transparent and flexible electronics. | 24,315,528 |
0e9e0b4f4483ac8d02d136bc303f9de8b646c4ed | Low-power auto focus algorithm using modified DCT for the mobile phones | A new focus value calculation method, based on the middle-frequency component of the DCT and applicable to the auto-focus function of a mobile phone, is presented. Experimental results confirm the accuracy of the focus value calculation, the power efficiency, and the immunity to impulsive noise of the proposed method. | 32,565,719 |
e92aedd6739d5c053dbff52e8fd60e380884f959 | Temporal Superpixels Based on Proximity-Weighted Patch Matching | A temporal superpixel algorithm based on proximity-weighted patch matching (TS-PPM) is proposed in this work. We develop the proximity-weighted patch matching (PPM), which estimates the motion vector of a superpixel robustly, by considering the patch matching distances of neighboring superpixels as well as the target superpixel. In each frame, we initialize superpixels by transferring the superpixel labels of the previous frame using PPM motion vectors. Then, we update the superpixel labels of boundary pixels, based on a cost function, composed of color, spatial, contour, and temporal consistency terms. Finally, we execute superpixel splitting, merging, and relabeling to regularize superpixel sizes and reduce incorrect labels. Experiments show that the proposed algorithm outperforms the state-of-the-art conventional algorithms significantly. | 5,684,001 |
b6037848cbe9c322dcef583a1942d094cb7c0d9c | Hybrid representation and rendering of indoor environments using meshes and point clouds | We propose a novel hybrid representation scheme for indoor environments using meshes and point clouds. Points are suitable for rendering detailed objects, while meshes can represent planar shapes more compactly. Based on these properties, we use points or meshes adaptively according to local characteristics of an input scene. More specifically, we first determine building structures, which can be represented by meshes, in indoor environments. Then, we create meshes corresponding to the building structures, while representing the remaining parts with points. Experimental results demonstrate that the proposed hybrid scheme renders indoor environments more efficiently than the point-based representation or the mesh-based representation. | 11,397,024 |
733d112e7a286df5227e11b7debce4fa12a09f1f | Combining Local Regularity Estimation and Total Variation Optimization for Scale-Free Texture Segmentation | Texture segmentation constitutes a standard image processing task, crucial for many applications. The present contribution focuses on the particular subset of scale-free textures and its originality resides in the combination of three key ingredients: First, texture characterization relies on the concept of local regularity; Second, estimation of local regularity is based on new multiscale quantities referred to as wavelet leaders; Third, segmentation from local regularity faces a fundamental bias-variance tradeoff. By nature, local regularity estimation shows high variability that impairs the detection of changes, while a posteriori smoothing of regularity estimates precludes correctly locating changes. Instead, the present contribution proposes several variational problem formulations based on total variation and proximal resolutions that effectively circumvent this tradeoff. Estimation and segmentation performance for the proposed procedures are quantified and compared on synthetic as well as on real-world textures. | 9,952,009 |
63d90224d7cecc1caaeb85e4ec6a5ecb17830313 | On Wigner-based sparse time-frequency distributions | Signals made of the superimposition of a reduced number of AM-FM components can be characterized by a time-frequency signature which consists of weighted trajectories in the plane, thus ending up with an ideal representation of their energy distribution that is intrinsically sparse. Elaborating on first studies that pioneered a compressed sensing solution to the question of approaching such an ideally localized distribution by selecting samples in the ambiguity domain and imposing sparsity in the time-frequency domain, the present paper discusses new advances aimed at achieving better performance in the construction of “cross-term-free” Wigner-type distributions. Improved optimization schemes are first proposed, which both speed up the computation and prove more versatile in accommodating side constraints such as positivity. Special attention is then paid to the choice of the necessary measurements in the ambiguity plane (in fixed or adapted geometries), emphasizing the key role played by the Heisenberg minimum area, regardless of the signal complexity. | 7,393,051 |
47c8912b4f0627ba3d083889a976d092a63952f4 | Local regularity for texture segmentation: Combining wavelet leaders and proximal minimization | Texture segmentation constitutes a classical yet crucial task in image processing. In many applications of very different natures (biomedical, geophysics, ...), textures are naturally defined in terms of their local regularity fluctuations, which can be quantified as the variations of local Hölder exponents. Furthermore, such images are often naturally embedded in the class of piecewise constant local regularity functions. The present contribution aims at proposing and assessing a segmentation procedure for this class of images. Its originality is twofold: First, local regularity is estimated using wavelet leaders, a novel multiresolution quantity recently introduced for multifractal analysis but barely used in local regularity measurement (comparisons against wavelet-coefficient-based estimation are conducted); Second, the challenging minimal partition problem underlying segmentation is convexified and conducted within a customized proximal framework. The estimation of the number of regions and their target regularity is obtained from a total-variation estimate that enables the actual use of proximal minimization for texture segmentation. Performance is assessed and illustrated on synthetic textures. | 14,636,878 |
df4dd5514caf84446a861d2a5acfc39d9ff9269a | Non-Linear Wavelet Regression and Branch & Bound Optimization for the Full Identification of Bivariate Operator Fractional Brownian Motion | Self-similarity is widely considered the reference framework for modeling the scaling properties of real-world data. However, most theoretical studies and their practical use have remained univariate. Operator fractional Brownian motion (OfBm) was recently proposed as a multivariate model for self-similarity. Yet, it has remained seldom used in applications because of serious issues that appear in the joint estimation of its numerous parameters. While the univariate fractional Brownian motion requires the estimation of two parameters only, its mere bivariate extension already involves seven parameters that are very different in nature. The present contribution proposes a method for the full identification of bivariate OfBm (i.e., the joint estimation of all parameters) through an original formulation as a non-linear wavelet regression coupled with a custom-made Branch & Bound numerical scheme. The estimation performance (consistency and asymptotic normality) is mathematically established and numerically assessed by means of Monte Carlo experiments. The impact of the parameters defining OfBm on the estimation performance as well as the associated computational costs are also thoroughly investigated. | 15,723,385 |
51740e1acb0264b2e8d522098c7039efe4b8178a | Sparse Support Vector Machine for Intrapartum Fetal Heart Rate Classification | Fetal heart rate (FHR) monitoring is routinely used in clinical practice to help obstetricians assess fetal health status during delivery. However, early detection of fetal acidosis that allows relevant decisions for operative delivery remains a challenging task, receiving considerable attention. This contribution promotes sparse support vector machine classification that permits the selection of a small number of relevant features and achieves efficient fetal acidosis detection. A comprehensive set of features is used for FHR description, including enhanced and computerized clinical features, frequency domain, and scaling and multifractal features, all computed on a large (1288 subjects) and well-documented database. The individual performance obtained for each feature independently is discussed first. Then, it is shown that the automatic selection of a sparse subset of features achieves satisfactory classification performance (sensitivity 0.73 and specificity 0.75, outperforming clinical practice). The subset of selected features (average depth of decelerations MAD_dtrd, baseline level β₀, and variability H) receives simple interpretation in clinical practice. Intrapartum fetal acidosis detection is improved in several respects: A comprehensive set of features combining clinical, spectral, and scale-free dynamics is used; an original multivariate classification targeting both sparse feature selection and high performance is devised; state-of-the-art performance is obtained on a much larger database than that generally studied, with description of common pitfalls in supervised classification performance assessments. | 27,052,856 |
3bff1dc0609e1460aca92b8e03dd8d7c04230a09 | Multiclass SVM with graph path coding regularization for face classification | We consider the problem of learning graphs in a sparse multiclass support vector machine framework. For such a problem, a sparse graph penalty is useful to select the significant features and interpret the results. The classical ℓ1-norm learns a sparse solution without considering the structure between the features. In this paper, structural knowledge is encoded as a directed acyclic graph and a graph path penalty is incorporated into the multiclass SVM. The learned classifiers not only improve the performance, but also help in the interpretation of the learned features. The performance of the proposed method highly depends on an initialization graph. Two generic ways to initialize the graph between the features are considered: one is built from similarities, while the other uses the Graphical Lasso. Experiments on a face classification task on the Extended YaleB database verify that graph regularization with the multiclass SVM improves the performance and also leads to a sparser solution compared to the ℓ1-norm. | 1,261,598 |
c52f5191f7932a122c8129665e310bdf24152d06 | Non-linear regression for bivariate self-similarity identification — application to anomaly detection in Internet traffic based on a joint scaling analysis of packet and byte counts | Internet traffic monitoring is a crucial task for network security. Self-similarity, a key property for a relevant description of Internet traffic statistics, has already been massively and successfully involved in anomaly detection. Self-similarity analysis has, however, so far been applied either to byte or packet count time series independently, while both signals are jointly collected and technically deeply related. The present contribution elaborates on a recently proposed multivariate self-similar model, operator fractional Brownian motion (OfBm), to jointly analyze self-similarity in bytes and packets. A non-linear regression procedure, based on an original Branch & Bound resolution procedure, is devised for the full identification of bivariate OfBm. The estimation performance is assessed by means of Monte Carlo simulations. Further, an Internet traffic anomaly detection procedure is proposed that makes use of the vector of Hurst exponents underlying the OfBm-based Internet data modeling. Applied to a large set of high-quality, modern Internet data from the MAWI repository, proof-of-concept results in anomaly detection are detailed and discussed. | 13,110,804 |
85a1c17a9335d6335c48a106e57a5482c92ff9c4 | 2D Prony-Huang Transform: A New Tool for 2D Spectral Analysis | This paper provides an extension of the 1D Hilbert Huang transform for the analysis of images using recent optimization techniques. The proposed method consists of: 1) adaptively decomposing an image into oscillating parts called intrinsic mode functions (IMFs) using a mode decomposition procedure and 2) providing a local spectral analysis of the obtained IMFs in order to get the local amplitudes, frequencies, and orientations. For the decomposition step, we propose two robust 2D mode decompositions based on nonsmooth convex optimization: 1) a genuine 2D approach, which constrains the local extrema of the IMFs and 2) a pseudo-2D approach, which separately constrains the extrema of lines, columns, and diagonals. The spectral analysis step is an optimization strategy based on Prony annihilation property and applied on small square patches of the IMFs. The resulting 2D Prony-Huang transform is validated on simulated and real data. | 8,803,363 |
97f713c553ba6781d758227dedd5b55d9fba0358 | Relaxing Tight Frame Condition in Parallel Proximal Methods for Signal Restoration | A fruitful approach for solving signal deconvolution problems consists of resorting to a frame-based convex variational formulation. In this context, parallel proximal algorithms and related alternating direction methods of multipliers have become popular optimization techniques to approximate iteratively the desired solution. Until now, in most of these methods, either Lipschitz differentiability properties or tight frame representations were assumed. In this paper, it is shown that it is possible to relax these assumptions by considering a class of non-necessarily tight frame representations, thus offering the possibility of addressing a broader class of signal restoration problems. In particular, it is possible to use non-necessarily maximally decimated filter banks with perfect reconstruction, which are common tools in digital signal processing. The proposed approach allows us to solve both frame analysis and frame synthesis problems for various noise distributions. In our simulations, it is applied to the deconvolution of data corrupted with Poisson noise or Laplacian noise by using (non-tight) discrete dual-tree wavelet representations and filter bank structures. | 14,130,251 |
b31269331c1159171e92c1d80dee767a846c188c | 2D Hilbert-Huang Transform | This paper presents a 2D transposition of the Hilbert-Huang Transform (HHT), an empirical data analysis method designed for studying instantaneous amplitudes and phases of non-stationary data. The principle is to adaptively decompose an image into oscillating parts called Intrinsic Mode Functions (IMFs) using an Empirical Mode Decomposition method (EMD), and then to perform Hilbert spectral analysis on the IMFs in order to recover local amplitudes and phases. For the decomposition step, we propose a new 2D mode decomposition method based on non-smooth convex optimization, while for the instantaneous spectral analysis, we use a 2D transposition of Hilbert spectral analysis called monogenic analysis, based on the Riesz transform and allowing the extraction of instantaneous amplitudes, phases, and orientations. The resulting 2D-HHT is validated on simulated data. | 17,230,555 |
96ce1fc43aee8ffccf96f5a7de978a412ec45ac6 | Multifractal-based texture segmentation using variational procedure | The present contribution aims at segmenting a scale-free texture into different regions, characterized by an a priori (unknown) multifractal spectrum. The multifractal properties are quantified using multiscale quantities C_{1,j} and C_{2,j} that quantify the evolution along the analysis scales 2^j of the empirical mean and variance of a nonlinear transform of wavelet coefficients. The segmentation is performed jointly across all the scales j on the concatenation of both C_{1,j} and C_{2,j} by an efficient vectorial extension of a convex relaxation of the piecewise constant Potts segmentation problem. We provide comparisons with the scalar segmentation of the Hölder exponent as well as independent vectorial segmentations over C_1 and C_2. | 2,138,690 |
ada3ff61171a5e5d8c1865d9fc89a9e4196f874b | On-The-Fly Approximation of Multivariate Total Variation Minimization | In the context of change-point detection, addressed by Total Variation minimization strategies, an efficient on-the-fly algorithm has been designed leading to exact solutions for univariate data. In this contribution, an extension of such an on-the-fly strategy to multivariate data is investigated. The proposed algorithm relies on the local validation of the Karush-Kuhn-Tucker conditions on the dual problem. Showing that the non-local nature of the multivariate setting precludes obtaining an exact on-the-fly solution, we devise an on-the-fly algorithm delivering an approximate solution, whose quality is controlled by a practitioner-tunable parameter, acting as a trade-off between quality and computational cost. Performance assessment shows that high-quality solutions are obtained on-the-fly while benefiting from computational costs several orders of magnitude lower than those of standard iterative procedures. The proposed algorithm thus provides practitioners with an efficient multivariate change-point detection on-the-fly procedure. | 5,627,564 |
2d03a5661c8ada8987cdbebaaea82691560d320e | A Primal-Dual Algorithm for Link Dependent Origin Destination Matrix Estimation | Origin-destination matrix (ODM) estimation is a classical problem in transport engineering aiming to recover flows from every Origin to every Destination from measured traffic counts and a priori model information. Taking advantage of probe trajectories, whose capture is made possible by new measurement technologies, the present contribution extends the concept of ODM to that of link-dependent ODM (LODM). LODM also contains the flow distribution on links making specification of assignment models, e.g., by means of routing matrices, unnecessary. An original formulation of LODM estimation, from traffic counts and probe trajectories is presented as an optimization problem, where the functional to be minimized consists of five convex functions, each modeling a constraint or property of the transport problem: consistency with traffic counts, consistency with sampled probe trajectories, consistency with traffic conservation (Kirchhoff's law), similarity of flows having similar origins and destinations, and positivity of traffic flows. A proximal primal-dual algorithm is devised to minimize the designed functional, as the corresponding objective functions are not necessarily differentiable. A case study, on a simulated network and traffic, validates the feasibility of the procedure and details its benefits for the estimation of an LODM matching real-network constraints and observations. | 12,968,119 |
79359a8ebe7b0994b271d52b22676edc07adabaa | Bayesian-driven criterion to automatically select the regularization parameter in the ℓ1-Potts model | This contribution focuses, within the ℓ1-Potts model, on the automated estimation of the regularization parameter balancing the ℓ1 data fidelity term and the TVℓ0 penalization. Variational approaches based on total variation have gained considerable interest for solving piecewise constant denoising problems thanks to their deterministic setting and low computational cost. However, the quality of the achieved solution strongly depends on the tuning of the regularization parameter. While recent works have tailored various hierarchical Bayesian procedures to additionally estimate the regularization parameter for Gaussian noise, less attention has been granted to Laplacian noise, of interest in numerous applications. This contribution promotes a fast and parameter-free denoising procedure for piecewise constant signals corrupted by Laplacian noise, which includes automated selection of the regularization parameter. It relies on the minimization of a Bayesian-driven criterion whose similarities with the ℓ1-Potts model permit deriving a computationally efficient algorithm. | 10,205,818 |
dddf6b0a3e25ea627f5efb7b7218dc5bcda161d8 | Parallel algorithm and hybrid regularization for dynamic PET reconstruction | To improve the estimation at the voxel level in dynamic Positron Emission Tomography (PET) imaging, we propose to develop a convex optimization approach based on a recently proposed parallel proximal method (PPXA). This class of algorithms was successfully employed for 2D deconvolution in the presence of Poisson noise and it is extended here to (dynamic) space + time PET image reconstruction. Hybrid regularization defined as a sum of a total variation and a sparsity measure is considered in this paper. The total variation is applied to each temporal-frame and a wavelet regularization is considered for the space+time data. Total variation allows us to smooth the wavelet artifacts introduced when the wavelet regularization is used alone. The proposed algorithm was evaluated on simulated dynamic fluorodeoxyglucose (FDG) brain data and compared with a regularized Expectation Maximization (EM) reconstruction. From the reconstructed dynamic images, parametric maps of the cerebral metabolic rate of glucose (CMRglu) were computed. Our approach shows a better reconstruction at the voxel level. | 15,542,120 |
9bdced8d7c9b4bf582e64a7203463f4351818ada | Proximity Operator of a Sum of Functions; Application to Depth Map Estimation | Proximal splitting algorithms for convex optimization are largely used in signal and image processing. They make it possible to call the individual proximity operators of an arbitrary number of functions whose sum is to be minimized. But the larger this number, the slower the convergence. In this letter, we show how to compute the proximity operator of a sum of two functions, for a certain type of functions operating on objects having a graph structure. The gain provided by avoiding unnecessary splitting is illustrated by an application to depth map estimation. | 31,599,714 |
2cae72224b88ae33d5946109d40b3dfa9e85a538 | Inverse problem formulation for regularity estimation in images | The identification of texture changes is a challenging problem that can be addressed by considering local regularity fluctuations in an image. This work develops a procedure for local regularity estimation that combines a convex optimization strategy with wavelet leaders, specific wavelet coefficients recently introduced in the context of multifractal analysis. The proposed procedure is formulated as an inverse problem combining the joint estimation of the local regularity exponent and of the optimal weights underlying the regularity measurement. Numerical experiments using synthetic texture indicate that the performance of the proposed approach compares favorably against other wavelet-based local regularity estimation formulations. The method is also illustrated with an example involving real-world texture. | 13,278,563 |
a3c8dd0fa6bf0d3fd5e5573e94b03c3a7be5a33d | A 2-D spectral analysis method to estimate the modulation parameters in structured illumination microscopy | Structured illumination microscopy is a recent imaging technique that aims at going beyond the classical optical resolution limits by reconstructing a high-resolution image from several low-resolution images acquired through modulation of the transfer function of the microscope. A precise knowledge of the sinusoidal modulation parameters is necessary to enable the super-resolution effect expected after reconstruction. In this work, we investigate the retrieval of these parameters directly from the acquired data, using a novel 2D spectral estimation method. | 12,410,062 |
97e157f45eb773991d81d6bb9b9f1d1469323b3d | A wavelet-based quadratic extension method for image deconvolution in the presence of Poisson noise | Iterative optimization algorithms such as the forward-backward and Douglas-Rachford algorithms have recently gained much popularity since they provide efficient solutions to a wide class of non-smooth convex minimization problems arising in signal/image recovery. However, when images are degraded by a convolution operator and Poisson noise, particular attention must be paid to the associated minimization problem. To solve it, we propose a new optimization method which consists of two nested iterative steps. The effectiveness of the proposed method is demonstrated via numerical comparisons. | 939,305 |
2762ff261079782b8fab5dfa6669c42c4ec54a26 | Multivariate optimization for multifractal-based texture segmentation | This work aims to segment a texture into different regions, each characterized by a priori unknown multifractal properties. The multifractal properties are quantified using the multiscale function C_{1,j}, which quantifies the evolution along analysis scales 2^j of the empirical mean of the log of the wavelet leaders. The segmentation procedure is applied to local estimates of C_{1,j}. It involves a multivariate Mumford-Shah relaxation formulated as a convex optimization problem involving a structure tensor penalization, and an efficient algorithmic solution based on a primal-dual proximal algorithm. The performance is evaluated on synthetic textures. | 217,562,635 |
2e156d24536b0af1942e999382775d09d9eafac0 | Non-smooth convex optimization for an efficient reconstruction in structured illumination microscopy | This work aims at proposing a new reconstruction procedure for structured illumination microscopy. The proposed method is based on recent developments in non-smooth convex optimization that make it possible to use the Poisson negative log-likelihood as the data fidelity term, together with regularization terms that allow sharp features to be extracted. The performance of the proposed method is compared to the state of the art in SIM reconstruction techniques. | 15,485,961 |
7546ca5fe3d21ef655ae1e8bc1abf8b3b9ffb8b8 | An improved variational mode decomposition method for internal waves separation | This paper proposes to revisit the 2-D Variational Mode Decomposition (2-D-VMD) in order to separate the incident and reflected waves in experimental images of internal waves velocity fields. 2-D-VMD aims at splitting an image into a sequence of oscillating components which are centered around specific spatial frequencies. In this work we develop a proximal algorithm with local convergence guarantees, allowing more flexibility in order to deal with modes having different spectral properties and to add optional constraints modeling prior information. Our method is compared with the standard 2-D-VMD and with a Hilbert based strategy usually employed for processing internal waves images. | 10,458,415 |
87ed6642b249f8d804f13496b3efa06e8332b69e | Proximal method for geometry and texture image decomposition | We propose a variational method for decomposing an image into a geometry and a texture component. Our model involves the sum of two functions promoting separately properties of each component, and of a coupling function modeling the interaction between the components. None of these functions is required to be differentiable, which significantly broadens the range of decompositions achievable through variational approaches. The convergence of the proposed proximal algorithm is guaranteed under suitable assumptions. Numerical examples are provided that show an application of the algorithm to image decomposition and restoration in the presence of Poisson noise. | 519,498 |
d93471ab2f34cb1192b9f050cea62c562c2cb086 | Parallel Proximal Algorithm for Image Restoration Using Hybrid Regularization | Regularization approaches have demonstrated their effectiveness for solving ill-posed problems. However, in the context of variational restoration methods, a challenging question remains, namely how to find a good regularizer. While total variation introduces staircase effects, wavelet-domain regularization brings other artefacts, e.g., ringing. However, a tradeoff can be made by introducing a hybrid regularization including several terms not necessarily acting in the same domain (e.g., spatial and wavelet transform domains). While this approach was shown to provide good results for solving deconvolution problems in the presence of additive Gaussian noise, an important issue is to efficiently deal with this hybrid regularization for more general noise models. To solve this problem, we adopt a convex optimization framework where the criterion to be minimized is split into the sum of more than two terms. For spatial domain regularization, isotropic or anisotropic total variation definitions using various gradient filters are considered. An accelerated version of the Parallel Proximal Algorithm is proposed to perform the minimization. Some difficulties in the computation of the proximity operators involved in this algorithm are also addressed in this paper. Numerical experiments performed in the context of Poisson data recovery show the good behavior of the algorithm as well as promising results concerning the use of hybrid regularization techniques. | 8,576,389 |
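A minimal sketch of a generic PPXA-type loop for minimizing a sum of m terms from their proximity operators, assuming equal weights 1/m; the step size `gamma`, relaxation `lam`, and the toy quadratic usage are illustrative choices, not the paper's tuned accelerated variant:

```python
import numpy as np

def ppxa(prox_list, x0, gamma=1.0, lam=1.0, n_iter=200):
    """Generic PPXA-type iteration for minimizing sum_i f_i(x), given a
    callable prox(v, g) = prox_{g*f_i}(v) for each term. Equal weights
    w_i = 1/m are assumed, so each term uses step m*gamma. All prox
    evaluations within one iteration are independent (parallelizable)."""
    m = len(prox_list)
    y = [x0.astype(float) for _ in range(m)]   # one auxiliary variable per term
    x = np.mean(y, axis=0)
    for _ in range(n_iter):
        p = [prox(yi, m * gamma) for prox, yi in zip(prox_list, y)]
        p_bar = np.mean(p, axis=0)
        for i in range(m):
            y[i] = y[i] + lam * (2.0 * p_bar - x - p[i])
        x = x + lam * (p_bar - x)
    return x

# toy check: f_i(x) = 0.5*||x - z_i||^2 has prox_{g f}(v) = (v + g*z)/(1 + g);
# the sum is minimized at the mean of the z_i, here 1.0.
prox_to = lambda z: (lambda v, g: (v + g * z) / (1.0 + g))
print(ppxa([prox_to(np.array([0.0])), prox_to(np.array([2.0]))],
           x0=np.zeros(1)))   # approximately [1.0]
```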
afc9b8bf495f0ffb008a9ebd04b08afed8c819a4 | Temporal wavelet denoising of PET sinograms and images | The level of noise in PET dynamic studies makes it difficult to provide accurate and robust kinetic parameters from time activity curves, particularly at the voxel level. Several approaches have been followed to lower noise: denoising reconstructed images with spatial wavelets, adding a priori information during reconstruction about the signal without noise, including the temporal dimension during reconstruction. In this work, we propose to use a temporal wavelet denoising approach, based on the characteristics of PET time activity curves in sinograms (or reconstructed images). This approach has recently been proposed in image processing and relies on discriminating signal from noise by including relevant “a priori” information (on the statistical distribution of the wavelet coefficient for a whole sinogram or reconstructed image), as well as appropriate noise formation model in the time domain. This approach is tested in a 2D spatial + 1D time Monte Carlo simulation mimicking brain, and compared with a standard denoising approach : SUREShrink. Preliminary results indicate that better performances are obtained for sinogram denoising with the proposed approach compared with SUREShrink, and that the resulting sinograms can be reconstructed with a weighted least-squares (WLS) algorithm for all techniques. Denoising in the reconstructed images with this approach was also investigated. | 11,551,218 |
20d68f4fed19c925ccc8c21b9381a80a7d574276 | Random primal-dual proximal iterations for sparse multiclass SVM | Sparsity-inducing penalties are useful tools in variational methods for machine learning. In this paper, we propose two block-coordinate descent strategies for learning a sparse multiclass support vector machine. The first one works by selecting a subset of features to be updated at each iteration, while the second one performs the selection among the training samples. These algorithms can be efficiently implemented thanks to the flexibility offered by recent randomized primal-dual proximal methods. Experiments carried out for the supervised classification of handwritten digits demonstrate the benefit of considering the primal-dual approach in the context of block-coordinate descent. The efficiency of the proposed algorithms is assessed through a comparison of execution times and classification errors. | 1,353,776 |
76edea65872bfcff9b725328416ab3a1ead00341 | Bayesian Selection for the $\ell_2$-Potts Model Regularization Parameter: 1-D Piecewise Constant Signal Denoising | Piecewise constant denoising can be solved either by deterministic optimization approaches, based on the Potts model, or by stochastic Bayesian procedures. The former lead to low computational time but require the selection of a regularization parameter, whose value significantly impacts the achieved solution, and whose automated selection remains an involved and challenging problem. Conversely, fully Bayesian formalisms encapsulate the regularization parameter selection into hierarchical models, at the price of high computational costs. This contribution proposes an operational strategy that combines hierarchical Bayesian and Potts model formulations, with the double aim of automatically tuning the regularization parameter and maintaining computational efficiency. The proposed procedure relies on formally connecting a Bayesian framework to an $\ell_2$-Potts functional. Behaviors and performance for the proposed piecewise constant denoising and regularization parameter tuning techniques are studied qualitatively and assessed quantitatively, and shown to compare favorably against those of a fully Bayesian hierarchical procedure, both in accuracy and computational load. | 8,595,970 |
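The deterministic side of this combination, the 1-D ℓ2-Potts problem, can be solved exactly for a given λ by the classical dynamic program sketched below (a textbook O(N²) implementation; the paper's contribution is the Bayesian-driven choice of λ, which is not reproduced here):

```python
import numpy as np

def potts_l2_1d(y, lam):
    """Exact 1-D l2-Potts denoising by dynamic programming: minimize
    lam * (#jumps) + sum of squared errors over piecewise constant
    signals, using prefix sums for O(1) segment costs."""
    n = len(y)
    s1 = np.concatenate(([0.0], np.cumsum(y)))        # prefix sums of y
    s2 = np.concatenate(([0.0], np.cumsum(y ** 2)))   # prefix sums of y^2

    def seg_err(l, r):  # squared error of the best constant on y[l:r]
        return s2[r] - s2[l] - (s1[r] - s1[l]) ** 2 / (r - l)

    B = np.empty(n + 1)                # B[r] = optimal value on the prefix y[:r]
    B[0] = -lam                        # so the first segment is not charged a jump
    jump = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        costs = [B[l] + lam + seg_err(l, r) for l in range(r)]
        l_star = int(np.argmin(costs))
        B[r], jump[r] = costs[l_star], l_star
    # backtrack the optimal partition; fill each segment with its mean
    x, r = np.empty(n), n
    while r > 0:
        l = jump[r]
        x[l:r] = (s1[r] - s1[l]) / (r - l)
        r = l
    return x
```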
2fd4a0fcdad63604b3356ac374a92952f7c04438 | Two-step calibration method for multi-algorithm score-based face recognition systems by minimizing discrimination loss | We propose a new method for combining multi-algorithm score-based face recognition systems, which we call the two-step calibration method. Typically, algorithms for face recognition systems produce dependent scores. The two-step method is based on parametric copulas to handle this dependence. Its goal is to minimize discrimination loss. For synthetic and real databases (NIST-face and Face3D) we will show that our method is accurate and reliable, using the log-likelihood-ratio cost and the information-theoretical empirical cross-entropy (ECE). | 33,647,434 |
88152d7c10ad20890ee5b72fcfcaf556b15c8dfc | Verifying a User in a Personal Face Space | For user verification on a personal digital assistant (PDA), a fast and simple system is developed. In the enrollment phase, face detection and registration are done by a Viola-Jones based method, taking advantage of its accuracy and speed. The face feature vectors obtained this way are then used to build up a face space specific to the user by principal component analysis (PCA). Furthermore, the face variations caused by small registration shifts are also modeled, in order to better capture the variation in the face space and simplify the enrollment. Current experiments show that this system is fast, efficient, and accurate. | 12,547,419 |
96110efb104948efc3b1f7ca47021f47f801ffed | Biometric Systems under Morphing Attacks: Assessment of Morphing Techniques and Vulnerability Reporting | With the widespread deployment of biometric recognition systems, the interest in attacking these systems is increasing. One of the easiest ways to circumvent a biometric recognition system is via so-called presentation attacks, in which artefacts are presented to the sensor to either impersonate another subject or avoid being recognised. In the recent past, the vulnerabilities of biometric systems to so-called morphing attacks have been unveiled. In such attacks, biometric samples of multiple subjects are merged in the signal or feature domain, in order to allow a successful verification of all contributing subjects against the morphed identity. Being a recent area of research, there is to date no standardised manner to evaluate the vulnerability of biometric systems to these attacks. Hence, it is not yet possible to establish a common benchmark between different morph detection algorithms. In this paper, we tackle this issue by proposing new metrics for vulnerability reporting, which build upon our joint experience in researching this challenging attack scenario. In addition, recommendations on the assessment of morphing techniques and morphing detection metrics are given. | 3,462,103 |
110d474178b0bb5e2050537d89d08a76106ab736 | A landmark paper in face recognition | Good registration (alignment to a reference) is essential for accurate face recognition. The effects of the number of landmarks on the mean localization error and the recognition performance are studied. Two landmarking methods are explored and compared for that purpose: (1) the most likely-landmark locator (MLLL), based on maximizing the likelihood ratio, and (2) Viola-Jones detection. Both use the locations of facial features (eyes, nose, mouth, etc.) as landmarks. Further, a landmark-correction method (BILBO) based on projection into a subspace is introduced. The MLLL has been trained for locating 17 landmarks and the Viola-Jones method for 5. The mean localization errors and effects on the verification performance have been measured. It was found that on the eyes, the Viola-Jones detector is about 1% of the inter-ocular distance more accurate than the MLLL-BILBO combination. On the nose and mouth, the MLLL-BILBO combination is about 0.5% of the inter-ocular distance more accurate than the Viola-Jones detector. Using more landmarks will result in lower equal-error rates, even when the landmarking is not so accurate. If the same landmarks are used, the most accurate landmarking method gives the best verification performance. | 17,479,110 |
79ae4c52abaaf671ec7089a6d131fc83129a6cb3 | Face reconstruction from image sequences for forensic face comparison | The authors explore the possibilities of a dense model-free three-dimensional (3D) face reconstruction method, based on image sequences from a single camera, to improve the current state of forensic face comparison. They propose a new model-free 3D reconstruction method for faces, based on the Lambertian reflectance model to estimate the albedo and to refine the 3D shape of the face. This method avoids any form of bias towards face models and is therefore suitable in a forensic face comparison process. The proposed method can reconstruct frontal albedo images, from multiple non-frontal images. Also a dense 3D shape model of the face is reconstructed, which can be used to generate faces under pose. In the authors’ experiments, the proposed method is able to improve the face recognition scores in more than 90% of the cases. Using the likelihood ratio framework, they show for the same experiment that for data initially unsuitable for forensic use, the reconstructions become meaningful in a forensic context in more than 60% of the cases. | 15,831,815 |
f3441d1facd88310f3367ba55147658d949cf159 | Multi-algorithm fusion with template protection | The popularity of biometrics and its widespread use introduces privacy risks. To mitigate these risks, solutions such as the helper-data system, fuzzy vault, fuzzy extractors, and cancelable biometrics were introduced, also known as the field of template protection. In parallel to these developments, fusion of multiple sources of biometric information has been shown to improve the verification performance of a biometric system. In this work we analyze fusion of the protected templates from two 3D recognition algorithms (multi-algorithm fusion) at feature-, score-, and decision-level. We show that fusion can be applied at the known fusion levels with the template protection technique known as the Helper-Data System. We also illustrate the required changes of the Helper-Data System and its corresponding limitations. Furthermore, our experimental results, based on 3D face range images of the FRGC v2 dataset, show that fusion indeed improves the verification performance. | 18,993,984 |
bb4786397c09ba30b63b5d750a5a1f4b204abe8a | Making 2D face recognition more robust using AAMs for pose compensation | The problem of pose in 2D face recognition is widely acknowledged. Commercial systems are limited to near frontal face images and cannot deal with pose deviations larger than 15 degrees from the frontal view. This is a problem when using face recognition for surveillance applications in which people can move freely. We suggest a preprocessing step to warp faces from a non-frontal pose to a near frontal pose. We use view-based active appearance models to fit to a novel face image under a random pose. The model parameters are adjusted to correct for the pose and used to reconstruct the face under a novel pose. This preprocessing makes face recognition more robust with respect to variations in the pose. An improvement in the identification rate of 60% (from 15% to 75%) is obtained for faces under a pose of 45 degrees. | 23,811,571 |
a1913c5e6953dd39443812cab221ab749672e31a | Spectral minutiae: A fixed-length representation of a minutiae set | Minutiae, which are the endpoints and bifurcations of fingerprint ridges, allow a very discriminative classification of fingerprints. However, a minutiae set is an unordered set and the minutiae locations suffer from various deformations such as translation, rotation and scaling. In this paper, we introduce a novel method to represent a minutiae set as a fixed-length feature vector, which is invariant to translation, and in which rotation and scaling become translations, so that they can be easily compensated for. By applying the spectral minutiae representation, we can combine the fingerprint recognition system with a template protection scheme, which requires a fixed-length feature vector. This paper also presents two spectral minutiae matching algorithms and shows experimental results. | 10,154,870 |
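A rough sketch of the location-based variant of this idea: the magnitude of the 2-D Fourier transform of the minutiae point set, sampled on a polar-logarithmic grid so that taking the magnitude removes translation while rotation and scaling become shifts of a fixed-size feature. Grid bounds, resolutions, and the Gaussian factor below are illustrative placeholders, not the paper's parameters:

```python
import numpy as np

def spectral_minutiae_location(minutiae, n_radial=128, n_angular=256,
                               lam_low=0.1, lam_high=0.6, sigma=1.0):
    """Location-based spectral minutiae sketch. minutiae: (M, 2) array of
    (x, y) positions. Returns an (n_radial, n_angular) fixed-size,
    energy-normalized feature: |sum_j exp(-i <w, p_j>)| on a polar-log
    frequency grid, attenuated by a Gaussian low-pass in |w|."""
    xy = np.asarray(minutiae, dtype=float)
    lam = np.exp(np.linspace(np.log(lam_low), np.log(lam_high), n_radial))
    beta = np.linspace(0.0, np.pi, n_angular, endpoint=False)  # half-plane suffices
    wx = lam[:, None] * np.cos(beta)[None, :]
    wy = lam[:, None] * np.sin(beta)[None, :]
    phase = wx[..., None] * xy[:, 0] + wy[..., None] * xy[:, 1]
    spec = np.abs(np.exp(-1j * phase).sum(axis=-1))   # magnitude drops translation
    spec *= np.exp(-0.5 * sigma ** 2 * lam ** 2)[:, None]  # Gaussian attenuation
    return spec / np.linalg.norm(spec)
```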
ee4c659ad75c302b223a3815a65aa2e304cccc30 | Binary Biometrics: An Analytic Framework to Estimate the Bit Error Probability under Gaussian Assumption | In recent years the protection of biometric data has gained increased interest from the scientific community. Methods such as the helper data system, fuzzy extractors, fuzzy vault and cancellable biometrics have been proposed for protecting biometric data. Most of these methods use cryptographic primitives and require a binary representation of the real-valued biometric data. Hence, the similarity of biometric samples is measured in terms of the Hamming distance between the binary vectors obtained at the enrolment and verification phases. The number of errors depends on the expected error probability Pe of each bit between two biometric samples of the same subject. In this paper we introduce a framework for analytically estimating Pe under the assumption that the within- and between-class distributions can be modeled by Gaussian distributions. We present the analytic expression of Pe as a function of the number of samples used at the enrolment (Ne) and verification (Nv) phases. The analytic expressions are validated using the FRGC v2 and FVC2000 biometric databases. | 15,624,355 |
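A numerical sketch of the kind of quantity such a framework evaluates, under one plausible instantiation (a scalar Gaussian feature thresholded at zero): the probability that the enrolment and verification bit estimates disagree, as a function of Ne and Nv. All parameter names and the grid integration are ours, not the paper's closed form:

```python
import numpy as np
from scipy.stats import norm

def bit_error_probability(sigma_b=1.0, sigma_w=0.5, n_e=3, n_v=1, n_grid=8001):
    """Pe of one thresholded Gaussian feature. The subject mean is
    mu ~ N(0, sigma_b^2); enrolment / verification estimates are means of
    n_e / n_v within-class samples, i.e. Gaussian around mu with standard
    deviations sigma_w/sqrt(n_e) and sigma_w/sqrt(n_v). A bit error is the
    event that the two estimates fall on opposite sides of the threshold 0."""
    mu = np.linspace(-8 * sigma_b, 8 * sigma_b, n_grid)
    w = norm.pdf(mu, scale=sigma_b)
    pe = norm.sf(0.0, loc=mu, scale=sigma_w / np.sqrt(n_e))  # P(enrol bit = 1 | mu)
    pv = norm.sf(0.0, loc=mu, scale=sigma_w / np.sqrt(n_v))  # P(verif bit = 1 | mu)
    disagree = pe * (1.0 - pv) + (1.0 - pe) * pv
    return float(np.sum(w * disagree) * (mu[1] - mu[0]))     # Riemann sum over mu
```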
8931174693023abd772eafc7a669aa7856610f2d | Biometric evidence evaluation: an empirical assessment of the effect of different training data | For an automatic comparison of a pair of biometric specimens, a similarity metric called ‘score’ is computed by the employed biometric recognition system. In forensic evaluation, it is desirable to convert this score into a likelihood ratio. This process is referred to as calibration. A likelihood ratio is the probability of the score given the prosecution hypothesis (which states that the pair of biometric specimens originated from the suspect) is true, divided by the probability of the score given the defence hypothesis (which states that the pair of biometric specimens did not originate from the suspect) is true. In practice, a set of scores (called training scores) obtained from within-source and between-sources comparisons is needed to compute a likelihood ratio value for a score. In likelihood ratio computation, the within-source and between-sources conditions can be anchored to a specific suspect in a forensic case, or they can be generic within-source and between-sources comparisons independent of the suspect involved in the case. This results in two likelihood ratio values which differ in the nature of the training scores they use and therefore consider slightly different interpretations of the two hypotheses. The goal of this study is to quantify the differences in these two likelihood ratio values in the context of evidence evaluation from a face, a fingerprint and a speaker recognition system. For each biometric modality, a simple forensic case is simulated by randomly selecting a small subset of biometric specimens from a large database. In order to be able to carry out a comparison across the three biometric modalities, the same protocol is followed for training scores set generation. It is observed that there is a significant variation in the two likelihood ratio values. | 45,024,752 |
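A minimal calibration sketch for context, assuming a kernel density model for the training scores (the study itself compares suspect-anchored versus generic training sets rather than prescribing KDE):

```python
import numpy as np
from scipy.stats import gaussian_kde

def score_to_llr(score, genuine_scores, impostor_scores):
    """Convert a comparison score into a log10 likelihood ratio by
    modelling the within-source (genuine) and between-sources (impostor)
    training-score densities with Gaussian kernel density estimates.
    Which training scores are passed in (suspect-anchored or generic)
    changes the resulting LR -- the effect the study quantifies."""
    f_p = gaussian_kde(genuine_scores)    # density under the prosecution hypothesis
    f_d = gaussian_kde(impostor_scores)   # density under the defence hypothesis
    return np.log10(f_p(score) / f_d(score))
```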
87468101eb9c931d58c2a9d081d0abb253ef712e | Fixed FAR correction factor of score level fusion | In biometric score level fusion, the scores are often assumed to be independent to simplify the fusion algorithm. In some cases, the “average” performance under this independence assumption is surprisingly successful, even competing with a fusion that incorporates dependence. We present two main contributions in score level fusion: (i) proposing a new method of measuring the performance of a fusion strategy at fixed FAR via Jeffreys credible interval analysis and (ii) subsequently providing a method to improve the fusion strategy under the independence assumption by taking the dependence into account via parametric copulas, which we call fixed FAR fusion. Using synthetic data, we will show that one should take the dependence into account even for scores with a low dependence level. Finally, we test our method on some public databases (FVC2002, NIST-face, and Face3D), compare it to Gaussian mixture model and linear logistic methods, which are also designed to handle dependence, and note its significant improvement with respect to our evaluation method. | 7,894,889 |
9eceb1996f66b8dbfbf70b75507dafa4861a3bef | Exploring How User Routine Affects the Recognition Performance of a Lock Pattern | To protect an Android smartphone against attackers, a lock pattern can be used. Nevertheless, shoulder-surfing and smudge attacks can be used to get access despite this protection. To combat these attacks, biometric recognition can be added to the lock pattern, such that the lock-pattern application keeps track of the way users draw the pattern. This research explores how users change the way they draw lock patterns over time and its effect on the recognition performance of the pattern. A lock-pattern dataset has been collected and a classifier is proposed. In this research the best result was obtained using the x- and y-coordinates as the user's biometrics. Unfortunately, this paper shows that adding biometrics to a lock pattern is only an additional security measure that provides no guarantee for a secure lock pattern. It is just a small improvement over using a lock pattern without biometric identification. | 4,853,543 |
069856cf03aeafe64d2b95e5e3af3691a904b456 | On the computation of the Kullback-Leibler measure for spectral distances | Efficient algorithms for the exact and approximate computation of the symmetrical Kullback-Leibler measure for spectral distances are presented for linear predictive coding (LPC) spectra. An interpretation of this measure is given in terms of the poles of the spectra. The performances of the algorithms in terms of accuracy and computational complexity are assessed for the application of computing concatenation costs in unit-selection-based speech synthesis. With the same complexity and storage requirements, the exact method is superior in terms of accuracy. | 18,517,224 |
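A brute-force numerical reference for the quantity in question, assuming all-pole LPC power spectra normalized to discrete densities; the paper's exact pole-based algorithm is faster, and this FFT-grid version is only a sanity check under our own conventions:

```python
import numpy as np
from scipy.signal import freqz

def symmetric_kl_lpc(a1, a2, g1=1.0, g2=1.0, n_fft=512):
    """Symmetrical Kullback-Leibler measure between two LPC models,
    approximated on an FFT grid. Each spectrum is the all-pole power
    spectrum g^2 / |A(e^{jw})|^2, normalized to sum to one so it can be
    treated as a discrete density; the result is KL(p||q) + KL(q||p)."""
    def power_spectrum(a, g):
        _, h = freqz([g], a, worN=n_fft)   # frequency response of g / A(z)
        p = np.abs(h) ** 2
        return p / p.sum()
    p, q = power_spectrum(a1, g1), power_spectrum(a2, g2)
    return float(np.sum((p - q) * (np.log(p) - np.log(q))))

# e.g. symmetric_kl_lpc([1.0, -0.9], [1.0, -0.5]) for two AR(1) models
```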
fca323b9d65d3e755d49f8c404802c5352bb7cc5 | Subband coding of stereophonic digital audio signals | The exploitation of left-right correlation in a subband code for stereophonic audio signals is investigated. A transform of left and right signals into decorrelated intensity and error signals is presented. Although this can be seen as the optimal exploitation of redundancy, it yields only marginal gain in bit rate. If the reduced phase-sensitivity of the human observer can be exploited by encoding only the intensity signal, a substantial gain can be obtained. Preliminary results of a stereo codec are promising: at 192 kb/s good coding results have been obtained. | 62,542,269 |
8639774bfb4476756f5f1df0c8845ccebbc501c8 | How Random Is a Classifier Given Its Area under Curve? | When the performance of a classifier is empirically evaluated, the Area Under Curve (AUC) is commonly used as a one-dimensional performance measure. In general, the focus is on good performance (AUC towards 1). In this paper, we study the other side of the performance spectrum (AUC towards 0.50), as we are interested in the extent to which a classifier is random given its AUC. We present the exact probability distribution of the AUC of a truly random classifier, given a finite number of distinct genuine and impostor scores. It quantifies the "randomness" of the measured AUC. The distribution involves the restricted partition function, a well studied function in number theory. Although other work exists that considers confidence bounds on the AUC, the novelty is that we do not assume any underlying parametric or non-parametric model or specify an error rate. Also, in cases in which a limited number of scores is available, for example in forensic case work, the exact distribution can deviate from these models. For completeness, we also present an approximation using a normal distribution and confidence bounds on the AUC. | 3,459,859 |
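A small dynamic program that reproduces the exact distribution described here, using the standard identity AUC = U/(mn) with U the Mann-Whitney statistic, and counting equally likely score orderings via the Gaussian binomial coefficient (a sketch; variable names are ours):

```python
def auc_pmf(m, n):
    """Exact pmf of the AUC of a truly random classifier with m distinct
    genuine and n distinct impostor scores (no ties). The number of
    interleavings with a given U is the coefficient of q^U in the
    Gaussian binomial [m+n choose m]_q
        = prod_{k=1..m} (1 - q^{n+k}) / (1 - q^k),
    i.e. the restricted partition count, built here with exact integer
    polynomial arithmetic. Returns (auc_values, probabilities)."""
    max_u = m * n
    coef = [0] * (max_u + 1)
    coef[0] = 1
    for k in range(1, m + 1):
        new = coef[:]
        for u in range(max_u, n + k - 1, -1):   # multiply by (1 - q^{n+k})
            new[u] -= coef[u - (n + k)]
        for u in range(k, max_u + 1):           # divide by (1 - q^k)
            new[u] += new[u - k]
        coef = new
    total = sum(coef)                           # equals C(m+n, m)
    return ([u / (m * n) for u in range(max_u + 1)],
            [c / total for c in coef])

# e.g. auc, p = auc_pmf(5, 7); p sums to 1 and is symmetric around AUC = 0.5
```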
1d9587872ab1f6ff17ae31ac6f1920015fbe6d17 | ForenFace: a unique annotated forensic facial image dataset and toolset | Few facial image datasets are suitable for forensic research. In this study, the authors present ForenFace, a facial image and video dataset. It contains video sequences and extracted images of 97 subjects recorded with six different surveillance cameras of various types. Moreover, it also contains high-resolution images and 3D scans. The novelty of this dataset lies in two aspects: (i) a subset of 435 images (87 subjects, five images per subject) has been manually annotated, yielding a very rich forensically relevant annotation of almost 19,000 facial parts, and (ii) a toolset to create, view, and extract the annotation is made available. The authors present protocols and the result of a baseline experiment in which two commercial software packages and the annotated facial features contained in this dataset are compared. The dataset, the annotation, and the tools are available under a usage license. | 44,636,678 |
292c1b62ea827fe456311a4cd0377804cdafd586 | Spectral minutiae representations of fingerprints enhanced by quality data | Many fingerprint recognition systems are based on minutiae matching. However, the recognition accuracy of minutiae-based matching algorithms is highly dependent on the fingerprint minutiae quality. Therefore, in this paper, we introduce a quality integrated spectral minutiae algorithm, in which the minutiae quality information is incorporated to enhance the performance of the spectral minutiae fingerprint recognition system. In our algorithm, two types of quality data are used. The first one is the minutiae reliability, expressing the probability that a given point is indeed a minutia; the second one is the minutiae location accuracy, quantifying the error on the minutiae location. We integrate these two types of quality information into the spectral minutiae representation algorithm and achieve a decrease in the Equal Error Rate of over 20% in the experiment. | 15,126,260 |
35f2da67d426495b7a3307c148722e4de0070ae4 | Multi-Bits Biometric String Generation based on the Likelihood Ratio | Preserving the privacy of biometric information stored in biometric systems is becoming a key issue. An important element in privacy protecting biometric systems is the quantizer which transforms a normal biometric template into a binary string. In this paper, we present a user-specific quantization method based on a likelihood ratio approach (LQ). The bits generated from every feature are concatenated to form a fixed length binary string that can be hashed to protect its privacy. Experiments are carried out on both fingerprint data (FVC2000) and face data (FRGC). Results show that our proposed quantization method achieves a reasonably good performance in terms of FAR/FRR (when the FAR is 10^-4, the corresponding FRRs are 16.7% and 5.77% for FVC2000 and FRGC, respectively). | 1,903,297 |
57aad58ef9cd902e761d0ee2de8dd0ba35960b86 | A 3-layer coding scheme for biometry template protection based on spectral minutiae | Spectral Minutiae (SM) representation enables the combination of minutiae-based fingerprint recognition systems with template protection schemes based on fuzzy commitment, but it requires error-correcting codes that can handle high bit error rates (i.e. above 40%). In this paper, we propose a 3-layer coding scheme based on erasure codes for the SM-based biometric recognition system. Our approach is inspired by the fact that the Packet Error Rate (PER) is proportional to the Bit Error Rate (BER). Each packet is encoded by an Error Detection Code (EDC) and an Error Correction Code (ECC). A packet survives only if it successfully passes the ECC and EDC decoders. With the erasure code, the system can reconstruct the secret key using only the surviving packets. By applying SM to the FVC2000-DB2 fingerprint database, the unprotected system achieves an EER of around 6% while our proposed coding scheme reaches an EER of approximately 6.5% with a 1032-bit secret key. | 14,786,519 |
09fabc607f78ec173cbb795ebe913aa5765660cc | On the use of spectral minutiae in high-resolution palmprint recognition | The spectral minutiae representation has been proposed as a novel approach to minutiae-based fingerprint recognition, which can handle minutiae translation and rotation and improve matching speed. As high-resolution palmprint recognition is also mainly based on minutiae sets, we apply the spectral minutiae representation to palmprints and implement spectral minutiae based matching. We optimize key parameters for the method by an experimental study of the characteristics of spectral minutiae using both fingerprints and palmprints. However, experimental results show that the spectral minutiae representation performs much worse for palmprints than for fingerprints. EERs of 15.89% and 14.2% are achieved on the public high-resolution palmprint database THUPALMLAB using the location-based spectral minutiae representation (SML) and the complex spectral minutiae representation (SMC) respectively, compared with 5.1% and 3.05% on the FVC2002 DB2A fingerprint database. Based on statistical analysis, we find that the worse performance for palmprints is mainly due to larger non-linear distortion and a much larger number of minutiae. | 11,000,861 |
0b55b31765f101535eac0d50b9da377f82136d2f | Biometric binary string generation with detection rate optimized bit allocation | Extracting binary strings from real-valued templates has been a fundamental issue in many biometric template protection systems. In this paper, we present an optimal bit allocation method (OBA). By means of it, a binary string of a pre-defined length with maximized overall detection rate is generated. Experiments with the binary strings and a Hamming distance classifier on the FRGC and FERET databases show promising performance in terms of FAR and FRR. | 15,113,693 |
32182f85d278ffdf61406e98882434e85928fdcc | Interpolating autoregressive processes: a bound on the restoration error | An upper bound is obtained for the restoration error variance of a sample restoration method for autoregressive processes that was presented by A.J.E.M. Janssen et al. (ibid., vol.ASSP-34, p.317-30, Apr. 1986). The upper bound derived is lower if the autoregressive process has poles close to the unit circle of the complex plane. This situation corresponds to a peaky signal spectrum. The bound is valid for the case in which one sample is unknown in a realization of an autoregressive process of arbitrary finite order. | 39,154,458 |
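For context, the single-missing-sample restoration the bound refers to can be sketched as follows, assuming known AR coefficients and a sample index at least p away from the signal edges (standard textbook form of the Janssen et al. estimator; variable names are ours):

```python
import numpy as np

def restore_one_sample(x, t, a):
    """Restore one missing sample of an AR(p) realization by minimizing
    the summed squared prediction errors. With a = (1, a_1, ..., a_p)
    the AR polynomial and b_k = sum_i a_i a_{i+k} its autocorrelation,
    the minimizer is
        x[t] = -(1/b_0) * sum_{k=1..p} b_k * (x[t-k] + x[t+k]),
    and the restoration error variance is sigma_e^2 / b_0 -- the
    quantity the paper's bound concerns. Requires p <= t <= len(x)-1-p."""
    a = np.asarray(a, dtype=float)              # a[0] is assumed to be 1
    p = len(a) - 1
    b = np.array([a[:len(a) - k] @ a[k:] for k in range(p + 1)])
    estimate = -sum(b[k] * (x[t - k] + x[t + k]) for k in range(1, p + 1)) / b[0]
    out = np.asarray(x, dtype=float).copy()
    out[t] = estimate
    return out
```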
b97c7f82c1439fa1e4525e5860cb05a39cc412ea | Illumination Normalization Based on Simplified Local Binary Patterns for A Face Verification System | Illumination normalization is a very important step in face recognition. In this paper we propose a simple implementation of local binary patterns, which effectively reduces the variability caused by illumination changes. In combination with a likelihood ratio classifier, this illumination normalization method achieves very good recognition performance, with respect to both discrimination and generalization. A user verification system using this method has been successfully implemented on a mobile platform. | 7,980,688 |
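A sketch of the plain 8-neighbour LBP transform used as illumination normalization; the paper's simplified variant differs in detail, so treat this as the generic baseline:

```python
import numpy as np

def lbp_transform(img):
    """Replace each interior pixel by its 8-neighbour local binary
    pattern code: bit b is set when neighbour b is >= the centre pixel.
    Only the local intensity ordering matters, so the code is unchanged
    by monotonic illumination transforms."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                      # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code
```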
14b7edcea466ae1d673b91ca0640bc0ed10b0a8a | Subband coding of digital audio signals without loss of quality | A subband coding system for high quality digital audio signals is described. To achieve low bit rates at a high quality level, it exploits the simultaneous masking effect of the human ear. It is shown how this effect can be used in an adaptive bit-allocation scheme. The proposed approach has been applied in two coding systems: a complex system in which the signal is split into 26 subbands, each approximately one third of an octave wide, and a simpler 20-band system. Both systems have been designed for coding stereophonic 16-bit compact disk signals with a sampling frequency of 44.1 kHz. With the 26-band system high-quality results can be obtained at bit rates of 220 kb/s. With the 20-band system, similar results can be obtained at bit rates of 360 kb/s. | 13,930,881 |
1301ab616b681f234adea7357d445e8b2c156773 | Spectral Representations of Fingerprint Minutiae Subsets | The investigation of the privacy protection of biometric templates is gaining more and more attention. The spectral minutiae representation is a novel method to represent a minutiae set as a fixed-length feature vector, which is invariant to translation, and in which rotation and scaling become translations, so that they can be easily compensated for. These characteristics enable the combination of fingerprint recognition systems with template protection schemes that require a fixed-length feature vector as input. However, the limited overlap of a fingerprint pair can reduce the performance of the spectral minutiae representation algorithm. Therefore, in this paper, we introduce spectral representations of fingerprint minutiae subsets to cope with the limited overlap problem. In the experiment, we improve the recognition performance from 0.32% to 0.12% equal error rate after applying the spectral representations of minutiae subsets algorithm. | 13,167,459 |
86d8245bf0e173fe93f70f404003ce90ac0a6755 | Preventing the Decodability Attack Based Cross-Matching in a Fuzzy Commitment Scheme | Template protection techniques are used within biometric systems in order to safeguard the privacy of the system's subjects. This protection also includes unlinkability, i.e., preventing cross-matching between two or more reference templates from the same subject across different applications. In the literature, the template protection techniques based on fuzzy commitment, also known as the code-offset construction, have recently been investigated. Recent work presented the decodability-attack vulnerability, which facilitates cross-matching based on the protected templates, together with its theoretical analysis. First, we extend the theoretical analysis and include the comparison between the system and cross-matching performance. We validate the presented analysis using real biometric data from the MCYT fingerprint database. Second, we show that applying a random bit-permutation process secures the fuzzy commitment scheme from cross-matching based on the decodability attack. | 10,227,430 |
7f5a8d2bc48d786dba031e4a465959a0546be2bf | Local Absolute Binary Patterns as Image Preprocessing for Grip-Pattern Recognition in Smart Gun | In a biometric verification system of a smart gun, the rightful user is recognized based on his hand-pressure pattern. The main factor which affects the verification performance of this system is the variation between the probe image and the gallery image of a subject, in particular when the probe and the gallery images have been recorded with a few weeks in between. One of the major variations is in the pressure distribution of images. In this work, we propose a novel preprocessing technique, Local Absolute Binary Patterns, prior to grip-pattern classification. With respect to a certain pixel in an image, Local Absolute Binary Patterns processing quantifies how its neighboring pixels are fluctuating. It will be shown that this technique can both reduce the variation of pressure distribution, and extract information of the hand shape in the image. Therefore, a significant improvement of the verification result has been achieved. | 39,903,865 |
50491a6e27f69f492c15610954dcae7a6d18d26a | Extraction of vocal-tract system characteristics from speech signals | We propose methods to track natural variations in the characteristics of the vocal-tract system from speech signals. We are especially interested in the cases where these characteristics vary over time, as happens in dynamic sounds such as consonant-vowel transitions. We show that the selection of appropriate analysis segments is crucial in these methods, and we propose a selection based on estimated instants of significant excitation. These instants are obtained by a method based on the average group-delay property of minimum-phase signals. In voiced speech, they correspond to the instants of glottal closure. The vocal-tract system is characterized by its formant parameters, which are extracted from the analysis segments. Because the segments are always at the same relative position in each pitch period, in voiced speech the extracted formants are consistent across successive pitch periods. We demonstrate the results of the analysis for several difficult cases of speech signals. | 6,050,897 |
1521c815e67572f3a90e44b67675b599bacdb687 | Occlusion-robust 3D face recognition using restoration and local classifiers | Occlusions complicate the process of identifying individuals using their 3D facial scans. We propose a 3D face recognition system that automatically removes occlusion artifacts and identifies the facial image using regional classifiers. Automatic localization of occluded areas is handled by using a generic face model. Restoration of missing information after occlusion removal is performed by the application of an improved version of Gappy Principal Component Analysis (GPCA), which we call partial Gappy PCA (pGPCA). After the removal of noisy data introduced by realistic occlusions, occlusion-free faces are represented by local regions. Local classifiers operating on these local regions are then fused to achieve occlusion-robust identification performance. Our experimental results obtained on realistically occluded facial images from the Bosphorus 3D face database illustrate that our occlusion compensation scheme drastically improves the recognition accuracy from 78.05% to 94.20%. | 18,551,427 |
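The restoration step can be sketched as plain gappy PCA, i.e., a least-squares fit of the PCA coefficients on the observed pixels only (the paper's pGPCA refines this; the mask/basis names below are ours):

```python
import numpy as np

def gappy_pca_restore(y, mask, U, mu):
    """Gappy PCA restoration sketch. y: face vector with occluded
    entries, mask: True where y is observed, U: (d, k) PCA basis,
    mu: mean face. The coefficients are fitted using only the observed
    rows of the basis; occluded entries are then filled in from the
    reconstruction while observed entries are kept as measured."""
    A = U[mask, :]                                       # basis on observed rows
    c, *_ = np.linalg.lstsq(A, y[mask] - mu[mask], rcond=None)
    x = mu + U @ c                                       # full reconstruction
    restored = np.asarray(y, dtype=float).copy()
    restored[~mask] = x[~mask]                           # fill occluded part only
    return restored
```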
5832c81059c3391b70e9b636ed7e1b56b6d747e7 | Complex spectral minutiae representation for fingerprint recognition | The spectral minutiae representation is designed for combining fingerprint recognition with template protection. This puts several constraints on the fingerprint recognition system: first, no relative alignment of two fingerprints is allowed due to the encrypted storage; second, a fixed-length feature vector is required as input of template protection schemes. The spectral minutiae representation represents a minutiae set as a fixed-length feature vector, which is invariant to translation, rotation and scaling. These characteristics enable the combination of fingerprint recognition systems with template protection schemes and allow for fast minutiae-based matching as well. In this paper, we introduce the complex spectral minutiae representation (SMC): a spectral representation of a minutiae set, like the location-based and orientation-based spectral minutiae representations (SML and SMO), but one that encodes minutiae orientations differently. SMC improves the recognition accuracy, expressed in terms of the Equal Error Rate, by a factor of about 2–4 compared with SML and SMO. In addition, the paper presents two feature reduction algorithms: the Column-PCA and the Line-DFT feature reductions, which achieve a template size reduction of around 90% and result in a 10–15 times higher matching speed (with 125,000 comparisons per second). | 7,063,051 |
ce7a385b791686f318313e94a0b573c456c1297f | Quantifying privacy and security of biometric fuzzy commitment | Fuzzy commitment is an efficient template protection algorithm that can improve security and safeguard the privacy of biometrics. Existing theoretical security analysis has proved that although privacy leakage is unavoidable, perfect security from an information-theoretical point of view is possible when the bits extracted from biometric features are uniformly and independently distributed. Unfortunately, this strict condition is difficult to fulfill in practice. In many applications, the dependency of binary features is ignored and security is thus suspected to be highly overestimated. This paper gives a comprehensive analysis of the security and privacy of fuzzy commitment in terms of empirical evaluation. The criteria representing requirements in practical applications are investigated and measured quantitatively in an existing protection system for 3D face recognition. The evaluation results show that a very significant reduction of security and enlargement of privacy leakage occur due to the dependency of biometric features. This work shows that in practice, one has to explicitly measure the security and privacy instead of trusting results under non-realistic assumptions. | 8,302,846 |
48a195c2343430838a9201efb533337ecb232a85 | Spectral Minutiae Representations for Fingerprint Recognition | The spectral minutiae representation introduced in this paper is a novel method to represent a minutiae set as a fixed-length feature vector, which is invariant to translation, and in which rotation becomes translation that can be easily compensated for. These characteristics enable the combination of fingerprint recognition systems with a template protection scheme that requires a fixed-length feature vector as input. In this paper, we will first introduce the spectral minutiae representation scheme. Then we will present several biometric fusion approaches to improve the biometric system performance by combining multiple sources of biometric information. The algorithms are evaluated on the FVC2000-DB2 database and showed promising results. | 14,656,291 |
bcff965bbfd48a5247a95cfb714b94b25a767f74 | Grid-Based Likelihood Ratio Classifiers for the Comparison of Facial Marks | Facial marks have been studied before, either as a complement to face recognition systems or for their suitability as a single biometric modality. In this paper, we use a subset of the FRGCv2 data set (12307 images and 568 subjects) to study the properties of facial marks, their spatial patterns, and classifiers acting upon these patterns. We observe differences between age and ethnic groups in the number of facial marks. Also, facial marks tend to be clustered. We present six forensically relevant aspects with respect to the design and evaluation of classifiers. These aspects help to systematically study factors that influence performance characteristics (discriminating power and calibration loss) of these classifiers. Calibration loss is of particular forensic importance; it essentially measures how well the classifier output can be used as strength of evidence in a court of law. We use various facial mark grids to which the facial mark spatial patterns are assigned. We find that a classifier that utilizes the facial mark grid of a specific subject outperforms all other classifiers. We also observe that the calibration loss of such subject-based classifier indicates that small grid cell sizes should be avoided. | 20,880,658 |
f09dbaa8e1492345c51796a5473a33ba79382cff | Decision Level Fusion of Fingerprint Minutiae Based Pseudonymous Identifiers | In a biometric template protected authentication system, a pseudonymous identifier is the part of a protected biometric template that can be compared directly against other pseudonymous identifiers. Each compared pair of pseudonymous identifiers results in a verification decision testing whether both attributes are derived from the same individual. Compared to an unprotected system, most existing biometric template protection methods cause, to a certain extent, a degradation in biometric performance. Therefore, fusion is a promising method to enhance the biometric performance of template-protected systems. Compared to feature level fusion and score level fusion, decision level fusion exhibits not only the least fusion complexity, but also the maximum interoperability across different biometric features, systems based on scores, and even individual algorithms. However, performance improvement via decision level fusion is not obvious. It is influenced by both the dependency and the performance gap among the conducted tests for fusion. We investigate in this paper several scenarios (multi-sample, multi-instance, multi-sensor, and multi-algorithm) when fusion is performed on binary decisions obtained from verification of fingerprint minutiae based pseudonymous identifiers. We demonstrate the influence on biometric performance of decision level fusion in different fusion scenarios on a multi-sensor fingerprint database. | 28,023,914 |