Dataset columns (name: type, value or length range):

Query Text: string, length 10 to 59.9k
Ranking 1: string, length 10 to 4.53k
Ranking 2: string, length 10 to 50.9k
Ranking 3: string, length 10 to 6.78k
Ranking 4: string, length 10 to 59.9k
Ranking 5: string, length 10 to 6.78k
Ranking 6: string, length 10 to 59.9k
Ranking 7: string, length 10 to 59.9k
Ranking 8: string, length 10 to 6.78k
Ranking 9: string, length 10 to 59.9k
Ranking 10: string, length 10 to 50.9k
Ranking 11: string, length 13 to 6.78k
Ranking 12: string, length 14 to 50.9k
Ranking 13: string, length 24 to 2.74k
score_0: float64, 1 to 1.25
score_1: float64, 0 to 0.25
score_2: float64, 0 to 0.25
score_3: float64, 0 to 0.24
score_4: float64, 0 to 0.24
score_5: float64, 0 to 0.24
score_6: float64, 0 to 0.21
score_7: float64, 0 to 0.07
score_8: float64, 0 to 0.03
score_9: float64, 0 to 0.01
score_10: float64, 0 to 0
score_11: float64, 0 to 0
score_12: float64, 0 to 0
score_13: float64, 0 to 0
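Each row of the table below pairs one query abstract (Query Text) with thirteen candidate abstracts (Ranking 1 through Ranking 13) and fourteen relevance scores (score_0 through score_13; the schema lists one more score column than Ranking columns). A minimal sketch of how such rows could be loaded and regrouped for a re-ranking experiment follows; the file name, CSV format, and pandas-based approach are assumptions for illustration, not part of the dataset description.

```python
# Minimal sketch: load rows with the columns listed above and regroup each row
# into (query, candidates, scores). File name and format are assumptions.
import pandas as pd

df = pd.read_csv("ranking_rows.csv")  # hypothetical export of the table below

def row_to_example(row):
    """Regroup one flat row into a (query, candidates, scores) triple."""
    query = row["Query Text"]
    candidates = [row[f"Ranking {i}"] for i in range(1, 14)]  # Ranking 1 .. Ranking 13
    scores = [row[f"score_{i}"] for i in range(14)]           # score_0 .. score_13
    return query, candidates, scores

examples = [row_to_example(r) for _, r in df.iterrows()]
print(len(examples), "rows;", len(examples[0][1]), "candidates per row")
```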
Query Text: An Efficient Reversible Data Hiding Scheme Based on Reference Pixel and Block Selection. In this paper, we propose an efficient data hiding scheme based on reference pixel and block selection to further improve the embedding performance of histogram shifting. Specifically, we first divide the original image into non-overlapping blocks of an adjustable size. Then for each block, we assign the median of pixels as the reference pixel and the number of pixels equal to the reference value as the smooth level. In this way, difference histograms for each smooth level can be constructed. We embed the secret data using histogram shifting from the highest level histogram to lower level ones instead of sequential embedding. By this means, our proposed reversible data hiding scheme can adaptively embed data in the smooth blocks and thus improve the marked image quality with a comparable embedding capacity. The experimental results also demonstrate its superiority over some state-of-the-art reversible data hiding works.
Lossless Data Embedding Using Generalized Statistical Quantity Histogram Histogram-based lossless data embedding (LDE) has been recognized as an effective and efficient way for copyright protection of multimedia. Recently, a LDE method using the statistical quantity histogram has achieved good performance, which utilizes the similarity of the arithmetic average of difference histogram (AADH) to reduce the diversity of images and ensure the stable performance of LDE. However, this method is strongly dependent on some assumptions, which limits its applications in practice. In addition, the capacities of the images with the flat AADH, e.g., texture images, are a little bit low. For this purpose, we develop a novel framework for LDE by incorporating the merits from the generalized statistical quantity histogram (GSQH) and the histogram-based embedding. Algorithmically, we design the GSQH driven LDE framework carefully so that it: (1) utilizes the similarity and sparsity of GSQH to construct an efficient embedding carrier, leading to a general and stable framework; (2) is widely adaptable for different kinds of images, due to the usage of the divide-and-conquer strategy; (3) is scalable for different capacity requirements and avoids the capacity problems caused by the flat histogram distribution; (4) is conditionally robust against JPEG compression under a suitable scale factor; and (5) is secure for copyright protection because of the safe storage and transmission of side information. Thorough experiments over three kinds of images demonstrate the effectiveness of the proposed framework.
Reversible image watermarking on prediction errors by efficient histogram modification A reversible data hiding algorithm is proposed, in which the efficiency of modifying a pair of histogram bins is considered. Multiple pairs of histogram bins can be further selected for data embedding in sequence, while pre-process of pixel values is performed to prevent the possible overflow and underflow. Embedding with the prediction errors is investigated with a new prediction scheme. In each of the four prediction modes, a large amount of prediction errors can be produced from the host image. Moreover, all combinations of the four modes to generate a number of histogram pairs are enumerated to obtain the best performance. Blind extraction and recovery are enabled by embedding a pre-computed location map and other overhead information into the watermarked image. Promising experimental results are obtained on a variety of test images. Compared with the existing algorithms, the image content is better preserved in high payload data hiding.
Efficient Generalized Integer Transform for Reversible Watermarking In this letter, an efficient integer transform based reversible watermarking is proposed. We first show that Tian's difference expansion (DE) technique can be reformulated as an integer transform. Then, a generalized integer transform and a payload-dependent location map are constructed to extend the DE technique to the pixel blocks of arbitrary length. Meanwhile, the distortion can be controlled by preferentially selecting embeddable blocks that introduce less distortion. Finally, the superiority of the proposed method is experimentally verified by comparing with other existing schemes.
Improved Embedding for Prediction-Based Reversible Watermarking This paper aims at reducing the embedding distortion of prediction error expansion reversible watermarking. Instead of embedding the entire expanded difference into the current pixel, the difference is split between the current pixel and its prediction context. The modification of the context generates an increase of the following prediction errors. Global optimization is obtained by tuning the amount of data embedded into context pixels. Prediction error expansion reversible watermarking schemes based on median edge detector (MED), gradient-adjusted predictor (GAP), and a simplified GAP version, SGAP, are investigated. Improvements are obtained for all the predictors. Notably good results are obtained for SGAP-based schemes. The improved SGAP appears to outperform GAP-based reversible watermarking.
Reversible data embedding using a difference expansion Reversible data embedding has drawn lots of interest recently. Being reversible, the original digital content can be completely restored. We present a novel reversible data-embedding method for digital images. We explore the redundancy in digital images to achieve very high embedding capacity, and keep the distortion low.
Software Technology in the 1990's: Using a New Paradigm
The concept of a supercompiler A supercompiler is a program transformer of a certain type. It traces the possible generalized histories of computation by the original program, and compiles an equivalent program, reducing in the process the redundancy that could be present in the original program. The nature of the redundancy that can be eliminated by supercompilation may be various, e.g., some variables might have predefined values (as in partial evaluation), or the structure of control transfer could be made more efficient (as in lazy evaluation), or it could simply be the fact that the same variable is used more than once. The general principles of supercompilation are described and compared with the usual approach to program transformation as a stepwise application of a number of equivalence rules. It is argued that the language Refal serves the needs of supercompilation best. Refal is formally defined and compared with Prolog and other languages. Examples are given of the operation of a Refal supercompiler implemented at CCNY on an IBM/370.
A Privacy-Preserving and Copy-Deterrence Content-Based Image Retrieval Scheme in Cloud Computing. With the increasing importance of images in people’s daily life, content-based image retrieval (CBIR) has been widely studied. Compared with text documents, images consume much more storage space. Hence, its maintenance is considered to be a typical example for cloud storage outsourcing. For privacy-preserving purposes, sensitive images, such as medical and personal images, need to be encrypted before outsourcing, which makes the CBIR technologies in plaintext domain to be unusable. In this paper, we propose a scheme that supports CBIR over encrypted images without leaking the sensitive information to the cloud server. First, feature vectors are extracted to represent the corresponding images. After that, the pre-filter tables are constructed by locality-sensitive hashing to increase search efficiency. Moreover, the feature vectors are protected by the secure kNN algorithm, and image pixels are encrypted by a standard stream cipher. In addition, considering the case that the authorized query users may illegally copy and distribute the retrieved images to someone unauthorized, we propose a watermark-based protocol to deter such illegal distributions. In our watermark-based protocol, a unique watermark is directly embedded into the encrypted images by the cloud server before images are sent to the query user. Hence, when image copy is found, the unlawful query user who distributed the image can be traced by the watermark extraction. The security analysis and the experiments show the security and efficiency of the proposed scheme.
Calculating path algorithms A calculational derivation is given of two abstract path algorithms. The first is an all-pairs algorithm, two well-known instances of which are Warshall's (reachability) algorithm and Floyd's shortest-path algorithm; instances of the second are Dijkstra's shortest-path algorithm and breadth-first/depth-first search of a directed graph. The basis for the derivations is the algebra of regular languages.
A Naïve Time Analysis and its Theory of Cost Equivalence Techniques for reasoning about extensional properties of functional programs are well understood but methods for analysing the underlying intensional or operational properties have been much neglected. This paper begins with the development of a simple but useful calculus for time analysis of non-strict functional programs with lazy lists. One limitation of this basic calculus is that the ordinary equational reasoning on functional programs is not valid. In order to buy back some of these equational properties we develop a non-standard operational equivalence relation called cost equivalence, by considering the number of computation steps as an 'observable' component of the evaluation process. We define this relation by analogy with Park's definition of bisimulation in ccs. This formulation allows us to show that cost equivalence is a contextual congruence (and thus is substitutive with respect to the basic calculus) and provides useful proof techniques for establishing cost-equivalence laws. It is shown that basic evaluation time can be derived by demonstrating a certain form of cost equivalence, and we give an axiomatization of cost equivalence which is complete with respect to this application. This shows that cost equivalence subsumes the basic calculus. Finally we show how a new operational interpretation of evaluation demands can be used to provide a smooth interface between this time analysis and more compositional approaches, retaining the advantages of both.
Analytical placement: A linear or a quadratic objective function? This paper addresses the problem of cell placement which is considered crucial for layout quality. Based on the combined analytical and partitioning strategy successfully applied in the GORDIAN placement tool, we discuss the consequences of using linear or quadratic objective functions. By joining the linear objective with an efficient quadratic programming approach, and by applying a refined iterative partitioning scheme, we obtain placements of excellent quality. The effect of a quadratic and a linear objective function on the chip area after final routing is demonstrated for benchmark circuits and other circuits with up to 21000 cells.
Anti-Forensics with Steganographic Data Embedding in Digital Images E-forensics investigates and extracts confidential information from electronic products; in other words, anti-forensics indicates that evidence in those products is imperceptible and undetected. This work presents an anti-forensic steganography method that can embed and extract messages from images. The highlight of exploiting modification direction (HoEMD) and adaptive EMD (AdEMD) methods use the modulo operation and take into account the sensitive nature of the human visual system. The HoEMD approach exploits the pixel directions. A pixel with a larger change implies more pixel directions and, ultimately, a larger embedding capacity. The pixel differencing in the proposed AdEMD method is used to evaluate whether a pixel located in the edge area can tolerate a larger change than a pixel located in a smooth area. To successfully extract a message, the differencing value is maintained on the same level before and after data is concealed; a delicate adjusting phase is used as well. In contrast with the PVD and LSB replacement method of Wu et al., the LSB substitution method, and Lee and Chen's data hiding scheme based on a modulus function, the proposed steganography system has a larger embedding capacity and a higher image quality. The effectiveness of the proposed steganography schemes against a previous blind steganalyzer is demonstrated using the statistical Chi-square attack.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of embedded cover image to ensure security of secret data. The multilayer embedding of pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting and the peak point keeps unchanged. In the extraction end, we no longer need the key information about the peak point, we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of embedded cover image become almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
score_0 to score_13: 1.2, 0.002899, 0.00274, 0.002532, 0.002041, 0.000199, 0, 0, 0, 0, 0, 0, 0, 0
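The query in this row, like several of its candidates, builds on peak/zero-point histogram shifting for reversible data hiding. The sketch below is a minimal, generic illustration of that embedding step on a grayscale image, assuming a single peak/zero pair and no auxiliary bookkeeping; it is not the exact scheme of the query paper or of any candidate listed above.

```python
# Minimal sketch of classic peak/zero-point histogram-shifting embedding
# (generic illustration only, not any specific paper's algorithm).
import numpy as np

def hs_embed(image, bits):
    """Embed a list of 0/1 bits into a grayscale uint8 image by histogram shifting."""
    img = image.astype(np.int32)
    hist = np.bincount(image.ravel(), minlength=256)
    peak = int(hist.argmax())   # most frequent gray level: carries the payload
    zero = int(hist.argmin())   # least frequent (ideally empty) level: absorbs the shift
    if hist[zero] != 0 or len(bits) > hist[peak]:
        raise ValueError("no empty histogram bin, or payload too large for this sketch")
    step = 1 if zero > peak else -1
    # Shift every level strictly between peak and zero by one step toward zero,
    # which vacates the bin right next to the peak.
    between = (img * step > peak * step) & (img * step < zero * step)
    img[between] += step
    # Embed: a peak-valued pixel stays put for bit 0 and moves one step for bit 1.
    # (A real scheme would also transmit peak, zero, and the payload length.)
    k = 0
    rows, cols = np.nonzero(image == peak)
    for i, j in zip(rows, cols):
        if k >= len(bits):
            break
        if bits[k] == 1:
            img[i, j] += step
        k += 1
    return img.astype(np.uint8), peak, zero
```

Extraction would simply reverse this: pixels at the peak value decode as 0, pixels at the vacated neighboring value decode as 1, and the shifted range is moved back to restore the original image.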
Query Text: Coevolutionary genetic watermarking for owner identification. In this paper, we propose a coevolutionary genetic watermarking scheme based on wavelet packet transform. Wavelet packet transform can be viewed as a generalization of the discrete wavelet transform, and a best wavelet basis in the sense of some cost metric can be found within a large library of permissible bases. Coevolutionary genetic algorithm is employed to select an appropriate basis from permissible bases of wavelet packet transform, and it determines the subbands for watermark embedding. Experimental results demonstrate that the proposed method can increase the capability to resist specific image processing methods while keeping the quality of the watermarked image acceptable. Moreover, the architecture of cooperative coevolutionary genetic algorithm is particularly suitable in a distributed computing environment. This characteristic would make genetic watermarking schemes more applicable in real-world applications.
Prediction-Based Reversible Data Hiding with Content Characteristics Reversible data hiding is one of the popular topics in watermarking researches, and it belongs to the branch of digital rights management (DRM) applications. Similar to conventional watermarking techniques, for reversible data hiding, at the encoder, secret information can be embedded into original images, and the marked image can be obtained. Different from conventional watermarking, due to the term 'reversible', it implies that at the decoder, both the original image and embedded secret should be perfectly separated from marked image. It requires keeping the reversibility of proposed algorithm, while looking for good image quality and large amount of secret for embedding. With the prediction-based algorithm, output image can be predicted, and differences between original and predicted images can be altered to make reversible data hiding possible. We also utilize inherent characteristics of original images for reaching better performances. Simulation results reveal that with our algorithm, comparable or better performances can be observed due to the characteristics from different images.
Fidelity Enhancement of Reversible Data Hiding for Images with Prediction-Based Concepts Reversible data hiding has attracted more and more attention in recent years. With reversible data hiding, it requires embedding secret data into original image with devised algorithm at the encoder, and marked image can then be delivered to the decoder. At the decoder, both the secret data and original image should be perfectly separated from marked image to keep the reversibility. There are several practical ways to make reversible data hiding possible, and one of the latest methods belongs to the prediction-based method. By carefully manipulating differences between predicted and original images, reversible data hiding can be achieved. We propose an enhanced method for manipulating the difference histogram, and we observe the better performances than existing scheme in literature. Possible ways for enhancing embedding capacity are also pointed out for the extension of our method in the future.
Reversible Data Hiding Scheme Based on Image Inpainting Reversible/lossless image data hiding schemes provide the capability to embed secret information into a cover image where the original carrier can be totally restored after extracting the secret information. This work presents a high performance reversible image data hiding scheme especially in stego image quality control using image inpainting, an efficient image processing skill. Embeddable pixels chosen from a cover image are initialized to a fixed value as preprocessing for inpainting. Subsequently, these initialized pixels are repaired using inpainting technique based on partial differential equations (PDE). These inpainted pixels can be used to carry secret bits and generate a stego image. Experimental results show that the proposed scheme produces low distortion stego images and it also provides satisfactory hiding capacity.
Metadata-based image watermarking for copyright protection In this paper, we propose a practical application for copyright protection of images with watermarking. The EXIF metadata of images and error-control codes are integrated into our algorithm and corresponding applications. Application for robust watermarking is one of the major branches in digital rights management (DRM) systems and digital forensics. Here, we focus on the copyright protection for images taken by ordinary cameras. By use of robust watermarking, it generally alters selected coefficients of the contents to accomplish the embedding process. Should the received image be in question, the watermark embedded beforehand can be extracted to indicate the copyright owner of such an image. We consider not only the image contents itself, but we also employ the EXIF metadata, which serves as the role of watermark, to be integrated into our scheme to make copyright protection possible. Moreover, for enhancing the performance for copyright protection, channel coding is employed and better protection capability can be expected. Taking the manufacturer, camera model, date and time stamp, and other important information in the EXIF metadata into account, conventional watermarking techniques can be applied to ordinary pictures taken by ourselves, and watermarked images with good quality can be produced. Even when the marked image has been intentionally modified, the original EXIF with selected information can mostly be recovered from the channel decoding process. Simulation results present the effectiveness of such an implementation.
Embedding capacity raising in reversible data hiding based on prediction of difference expansion Most of the proposed methods of reversible data hiding based on difference expansion require location maps to recover cover images. Although the location map can be compressed by a lossless compression algorithm, this lowers embedding capacity and increases computational cost during the procedures of embedding and extracting. The study presents an adaptive reversible data scheme based on the prediction of difference expansion. Since each cover pixel generally resembles its surrounding pixels, most of the difference values between the cover pixels and their corresponding predictive pixels are small; therefore, the proposed scheme gains from embedding capacity by taking full advantage of the large quantities of smaller difference values where secret data can be embedded. The proposed scheme offers several advantages, namely, (1) the location map is no more required, (2) the embedding capacity can be adjusted depending on the practical applications, and (3) the high embedding capacity with minimal visual distortion can be achieved. Moreover, the experimental results demonstrate that the proposed scheme yields high embedding capacity by comparing the related schemes that are proposed recently.
Information hiding in edges: A high capacity information hiding technique using hybrid edge detection. Multimedia security is becoming more and more important as the data being exchanged on the Internet is increasing exponentially. Though cryptography is one of the methods used to secure data during transit, the camouflaged appearance of the scrambled data alerts the adversary about some critical information being shared. In such a scenario, steganography has been used as an alternate solution to secure the secret information. In this paper, a color image steganographic algorithm based on hybrid edge detection is proposed. The color image is partitioned into constituent Red (R), Green (G) and Blue (B) planes. Hybrid edge detection is used for finding the edge and non-edge pixels of the Green and Blue planes of the cover image. The Green and Blue planes are used for hiding the data while the Red plane holds the pixel status (whether edge or non-edge) of these planes. The RC4 encryption algorithm is used to encrypt the secret message before embedding it in the cover image to enhance the security of the secret data. A fragile watermark/logo (whose size is less than 1% of the total secret data) has been embedded, besides the secret data, in the cover image to facilitate content authentication and early tamper detection. At the receiver, the logo is extracted first. If it is the same as the one embedded at the transmitter, indicating that the secret data has not been altered during transit, the secret data is extracted. Otherwise (if the extracted logo is not the same as the one used at the input), the receiver does not waste critical time to extract compromised data but sends an automatic retransmission request. Experimental investigations reveal that the proposed scheme is capable of providing high-quality stego-images for a fairly high payload. A comparison of the proposed technique with some state-of-the-art schemes substantiates the above arguments.
Contrast-Adaptive Removable Visible Watermarking (CARVW) mechanism. Engineers have proposed many watermark mechanisms for protecting the content of digital media from unauthorized use. The visible watermark scheme indicates the copyright of digital media posted over the Internet by embedding an inconspicuous but recognizable pattern into media. However, the embedding process often results in serious distortion of the protected image. Since the strength of the watermark in conventional methods mainly depends on the feature of protected media, this may lead to unsatisfactory transparency of watermarked images. This paper proposes a removable solution for visible watermark mechanism. By adopting the subsampling technique, the method proposes a contrast-adaptive strategy to solve this problem. This method can also guarantee the essentials of general visible watermark schemes. Experimental results show that the proposed method outperforms related works in terms of preserving the quality of the restored image.
"Break our steganographic system": the ins and outs of organizing BOSS This paper summarizes the first international challenge on steganalysis called BOSS (an acronym for Break Our Steganographic System). We explain the motivations behind the organization of the contest, its rules together with reasons for them, and the steganographic algorithm developed for the contest. Since the image databases created for the contest significantly influenced the development of the contest, they are described in a great detail. Paper also presents detailed analysis of results submitted to the challenge. One of the main difficulty the participants had to deal with was the discrepancy between training and testing source of images - the so-called cover-source mismatch, which forced the participants to design steganalyzers robust w.r.t. a specific source of images. We also point to other practical issues related to designing steganographic systems and give several suggestions for future contests in steganalysis.
Recursive histogram modification: establishing equivalency between reversible data hiding and lossless data compression. State-of-the-art schemes for reversible data hiding (RDH) usually consist of two steps: first construct a host sequence with a sharp histogram via prediction errors, and then embed messages by modifying the histogram with methods, such as difference expansion and histogram shift. In this paper, we focus on the second stage, and propose a histogram modification method for RDH, which embeds the message by recursively utilizing the decompression and compression processes of an entropy coder. We prove that, for independent identically distributed (i.i.d.) gray-scale host signals, the proposed method asymptotically approaches the rate-distortion bound of RDH as long as perfect compression can be realized, i.e., the entropy coder can approach entropy. Therefore, this method establishes the equivalency between reversible data hiding and lossless data compression. Experiments show that this coding method can be used to improve the performance of previous RDH schemes and the improvements are more significant for larger images.
Efficient reversible data hiding in encrypted H.264/AVC videos Due to the security and privacy-preserving requirements for cloud data management, it is sometimes desired that video content is accessible in an encrypted form. Reversible data hiding in the encrypted domain is an emerging technology, as it can perform data hiding in encrypted videos without decryption, which preserves the confidentiality of the content. Furthermore, the original cover can be losslessly restored after decryption and data extraction. An efficient reversible data hiding scheme for encrypted H.264/AVC videos is proposed. During H.264/AVC encoding, the intraprediction mode, motion vector difference, and the sign bits of the residue coefficients are encrypted using a standard stream cipher. Then, the data-hider who does not know the original video content, may reversibly embed secret data into the encrypted H.264/AVC video by using a modified version of the histogram shifting technique. A scale factor is utilized for selecting the embedding zone, which is scalable for different capacity requirements. With an encrypted video containing hidden data, data extraction can be carried out either in the encrypted or decrypted domain. In addition, real reversibility is realized so that data extraction and video recovery are free of any error. Experimental results demonstrate the feasibility and efficiency of the proposed scheme. (C) 2014 SPIE and IS&T
A calculus of refinements for program derivations A calculus of program refinements is described, to be used as a tool for the step-by-step derivation of correct programs. A derivation step is considered correct if the new program preserves the total correctness of the old program. This requirement is expressed as a relation of (correct) refinement between nondeterministic program statements. The properties of this relation are studied in detail. The usual sequential statement constructors are shown to be monotone with respect to this relation and it is shown how refinement between statements can be reduced to a proof of total correctness of the refining statement. A special emphasis is put on the correctness of replacement steps, where some component of a program is replaced by another component. A method by which assertions can be added to statements to justify replacements in specific contexts is developed. The paper extends the weakest precondition technique of Dijkstra to proving correctness of larger program derivation steps, thus providing a unified framework for the axiomatic, the stepwise refinement and the transformational approach to program construction and verification.
Distributed Mobile Sink Routing for Wireless Sensor Networks: A Survey. The concentration of data traffic towards the sink in a wireless sensor network causes the nearby nodes to deplete their batteries quicker than other nodes, which leaves the sink stranded and disrupts the sensor data reporting. To mitigate this problem the usage of mobile sinks is proposed. Mobile sinks implicitly provide load-balancing and help achieving uniform energy-consumption across the network. However, the mechanisms to support the sink mobility (e.g., advertising the location of the mobile sink to the network) introduce an overhead in terms of energy consumption and packet delays. With these properties mobile sink routing constitutes an interesting research field with unique requirements. In this paper, we present a survey of the existing distributed mobile sink routing protocols. In order to provide an insight to the rationale and the concerns of a mobile sink routing protocol, design requirements and challenges associated with the problem of mobile sink routing are determined and explained. A definitive and detailed categorization is made and the protocols' advantages and drawbacks are determined with respect to their target applications.
Reversible data hiding in encrypted image using new embedding pattern and multiple judgments This work proposes a data embedding method in encrypted images. The proposed method introduces a new embedding pattern which uses the pixel spatial correlation in a block and border pixels on the block sides to achieve a low error rate effectively. At the extraction end, the neighboring blocks are also adopted to predict the current block to calculate the smoothness more accurately. Experimental results show our proposed method performs better than Zhang's method and Hong's method. When the block size is 8 × 8, the error rate for the cover image Lena in Zhang's method is 1.25% and the error rate of Hong's method is 0.34%, while the error rate in our proposed embedding method is reduced to 0.2%.
score_0 to score_13: 1.041948, 0.041171, 0.04, 0.013724, 0.006679, 0.001908, 0.000286, 0.000086, 0.000041, 0.000015, 0.000003, 0, 0, 0
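Several candidates in this row describe prediction-based reversible data hiding, where a pixel is predicted from its causal neighbors and the prediction error is expanded to carry a bit. Below is a minimal sketch of that core step using the median edge detector (MED) predictor and plain difference expansion of the error; it is a generic illustration under simplifying assumptions (raster order, no overflow handling, no location map), not the method of any specific paper above.

```python
# Minimal sketch of prediction-error expansion with the MED predictor
# (generic illustration; no overflow/underflow handling or location map).
import numpy as np

def med_predict(img, i, j):
    """Median edge detector prediction from the left, top, and top-left neighbors."""
    a, b, c = int(img[i, j - 1]), int(img[i - 1, j]), int(img[i - 1, j - 1])
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def pee_embed(image, bits):
    """Embed bits in raster order by expanding each prediction error e into 2*e + bit.

    Decoding would visit pixels in reverse raster order, recompute the same MED
    prediction from the still-marked causal neighbors, and invert 2*e + bit.
    """
    img = image.astype(np.int32)
    k = 0
    for i in range(1, img.shape[0]):
        for j in range(1, img.shape[1]):
            if k == len(bits):
                return img, k
            pred = med_predict(img, i, j)
            err = img[i, j] - pred
            img[i, j] = pred + 2 * err + bits[k]  # expanded error carries one bit
            k += 1
    return img, k
```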
Query Text: Multiterminal Global Routing: A Deterministic Approximation Scheme. We consider the problem of routing multiterminal nets in a two-dimensional gate-array. Given a gate-array and a set of nets to be routed, we wish to find a routing that uses as little channel space as possible. We present a deterministic approximation algorithm that uses close to the minimum possible channel space. We cast the routing problem as a new form of zero-one multicommodity flow, an integer-programming problem. We solve this integer program approximately by first solving its linear-program relaxation and then rounding any fractions that appear in the solution to the linear program. The running time of the rounding algorithm is exponential in the number of terminals in a net but polynomial in the number of nets and the size of the array. The algorithm is thus best suited to cases where the number of terminals on each net is small.
A global router based on a multicommodity flow model The paper describes a new global router applicable for any object with a defined channel structure. This router can be used for the routing of chips and boards with rectilinear or nonrectilinear channel topology in a multilayer environment. This makes it more universal than other known global routers. The router is based on a multicommodity flow model in the graph form with hierarchical cost function. This model is proved to be NP-complete. An algorithm applied to this problem is based on minimax ideas. It moves from the solution optimal w/r to an initial cost function in the direction of the constraints by maximizing on each iteration the decrease in the number of channels at the highest level of overflow and the number of cells with overflown via count. If a solution exists for each iteration, then algorithm will converge in polynomially bounded number of steps to the solution of the multicommodity flow problem. If, for some iteration, a solution does not exist, then an escape procedure is applied and the process continues. Experimental results indicate that performance characteristics of this global router are not inferior to global routers applicable only to gate arrays.
A new polynomial-time algorithm for linear programming We present a new polynomial-time algorithm for linear programming. The running time of this algorithm is O(n^3.5 L^2), as compared to O(n^6 L^2) for the ellipsoid algorithm. We prove that given a polytope P and a strictly interior point a ∈ P, there is a projective transformation of the space that maps P, a to P', a' having the following property. The ratio of the radius of the smallest sphere with center a' containing P' to the radius of the largest sphere with center a' contained in P' is O(n). The algorithm consists of repeated application of such projective transformations each followed by optimization over an inscribed sphere to create a sequence of points which converges to the optimal solution in polynomial time.
Randomized rounding: a technique for provably good algorithms and algorithmic proofs We study the relation between a class of 0–1 integer linear programs and their rational relaxations. We give a randomized algorithm for transforming an optimal solution of a relaxed problem into a provably good solution for the 0–1 problem. Our technique can be extended to provide bounds on the disparity between the rational and 0–1 optima for a given problem instance.
An Algorithm for Path Connections and Its Applications
ISPD placement contest updates and ISPD 2007 global routing contest In 2005 and 2006, ISPD successfully hosted two placement contests and released a total of 16 benchmark circuits. These benchmarks are all derived from real industrial circuits and present modern physical design challenges such as scalability, variety of floorplans, movable macro handling, and congestion mitigation. Since their release, the ISPD placement benchmarks have been extensively used by the physical design community. Indeed, we have observed significant progress in placement and floorplanning in the last few years. Much of this success can be credited to the fact that the placement community finally has large, well-defined benchmark circuits available that allow for fair comparisons among different algorithms. In this presentation, we report the most recent results on ISPD placement benchmarks and review how much progress each placement tool has achieved. Continuing the tradition of spirited competition, ISPD 2007 presents a new contest in the global routing area. Similar to previous placement contests, a set of global routing benchmarks are released. These benchmarks are derived from the ISPD placement benchmark solutions; the level of complexity of these benchmarks is comparable to what real industry routing tools encounter. The global routing problem is formulated as a tile-based grid structure superimposed on the chip area; both 2D (single metal layer) and 3D (multiple metal layers) global routing instances will be released. The global routing solutions are evaluated on metrics such as total overflows, maximum overflow of a tile, routed wire length, and the number of vias. CPU time is not included this year to encourage high quality solutions. With placement and global routing benchmarks available, researchers in the fields of placement, floorplanning and global routing should have ample opportunities to attack realistic physical design challenges and contribute their solutions. The placement and global routing contests have attracted strong entries from research groups around the world. In recognition of the importance of the problems, IEEE CEDA and SRC have donated prizes for the winners. Each year of the contest has brought unexpected twists and turns; we anticipate that this and future years will be no different.
NTHU-Route 2.0: A fast and stable global router We present in this paper a fast and stable global router called NTHU-Route 2.0 that improves the solution quality and runtime of a state-of-the-art router, NTHU-Route, by the following enhancements: (1) a new history based cost function, (2) new ordering methods for congested region identification and rip-up and reroute, and (3) two implementation techniques. The experimental results show that NTHU-Route 2.0 solves all ISPD98 benchmarks with very good quality. Moreover, it routes 7 of 8 ISPD07 benchmarks without any overflow. In particular, for one of the ISPD07 benchmarks that was previously thought to be a difficult case, NTHU-Route 2.0 can completely eliminate its total overflow. NTHU-Route 2.0 also successfully solves 12 of 16 ISPD08 benchmarks without causing any overflow.
BoxRouter 2.0: architecture and implementation of a hybrid and robust global router In this paper, we present BoxRouter 2.0, a hybrid and robust global router with discussion on its architecture and implementation. As high performance VLSI design becomes more interconnect-dominant, efficient congestion elimination in global routing is in greater demand. Hence, we propose BoxRouter 2.0 which has strong ability to improve routability and minimize the number of vias with blockages, while minimizing wirelength. BoxRouter 2.0 is improved over [1], but can perform multi-layer routing with 2D global routing and layer assignment. Our 2D global routing is equipped with two ideas: robust negotiation-based A* search for routing stability, and topology-aware wire ripup for flexibility. After 2D global routing, 2D-to-3D mapping is done by the layer assignment which is powered by progressive via/blockage-aware integer linear programming. Experimental results show that BoxRouter 2.0 has better routability with comparable wirelength than other routers on ISPD07 benchmark, and it can complete (no overflow) ISPD98 benchmark for the first time in the literature with the shortest wirelength.
FastPlace: efficient analytical placement using cell shifting, iterative local refinement and a hybrid net model In this paper, we present FastPlace -- a fast, iterative, flat placement algorithm for large-scale standard cell designs. FastPlace is based on the quadratic placement approach. The quadratic approach formulates the wirelength minimization problem as a convex quadratic program, which can be solved efficiently by some analytical techniques. However it suffers from some drawbacks. First, the resulting placement has a lot of overlap among cells. Second, the resulting total wirelength may be long as the quadratic wirelength objective is only an indirect measure of the linear wirelength. Third, existing net models tend to create a lot of non-zero entries in the connectivity matrix, which slows down the quadratic program solver. To handle the above problems we propose: (1) An efficient Cell Shifting technique to remove cell overlap from the quadratic program solution and produce a global placement with even cell distribution. (2) An Iterative Local Refinement technique, to reduce the wirelength according to the half-perimeter measure. (3) A Hybrid Net Model which is a combination of the traditional clique and star models. This net model greatly reduces the number of non-zero entries in the connectivity matrix and results in a significant speedup of the solver. Experimental results show that FastPlace is on average 13.0 and 97.4 times faster than Capo and Dragon respectively. Correspondingly, the average wirelength is just 1.0% and 1.6% higher.
Supply Voltage Degradation Aware Analytical Placement Increasingly significant power/ground supply voltage degradation in nanometer VLSI designs leads to system performance degradation and even malfunction. Existing techniques focus on design and optimization of power/ground supply networks. In this paper, we propose supply voltage degradation aware placement, e.g., to reduce maximum supply voltage degradation by relocation of supply current sources. We represent supply voltage degradation at a P/G node as a function of supply currents and effective impedances (i.e., effective resistances in DC analysis) in a P/G network, and integrate supply voltage degradation in an analytical placement objective. For scalability and efficiency improvement, we apply random-walk, graph contraction and interpolation techniques to obtain effective resistances. Our experimental results show an average 20.9% improvement of worst-case voltage degradation and 11.7% improvement of average voltage degradation with only 4.3% wirelength increase.
The concept of a supercompiler A supercompiler is a program transformer of a certain type. It traces the possible generalized histories of computation by the original program, and compiles an equivalent program, reducing in the process the redundancy that could be present in the original program. The nature of the redundancy that can be eliminated by supercompilation may be various, e.g., some variables might have predefined values (as in partial evaluation), or the structure of control transfer could be made more efficient (as in lazy evaluation), or it could simply be the fact that the same variable is used more than once. The general principles of supercompilation are described and compared with the usual approach to program transformation as a stepwise application of a number of equivalence rules. It is argued that the language Refal serves the needs of supercompilation best. Refal is formally defined and compared with Prolog and other languages. Examples are given of the operation of a Refal supercompiler implemented at CCNY on an IBM/370.
Reversible data hiding A novel reversible data hiding algorithm, which can recover the original image without any distortion from the marked image after the hidden data have been extracted, is presented in this paper. This algorithm utilizes the zero or the minimum points of the histogram of an image and slightly modifies the pixel grayscale values to embed data into the image. It can embed more data than many of the ex...
A parallel algorithm for the monadic unification problem The monadic unification problem is introduced. AnO(log2n) parallel algorithm to solve this problem is given and shown to be correct.
Robust watermarking scheme and tamper detection based on threshold versus intensity Watermarking is necessary to prove the copyright, ownership, authenticity and other related security aspects of electronic data. Semi-blind watermarking requires the watermark image during the extraction process, while informed watermarking generates the watermark data from the original image itself. Tamper detection is useful for discovering the tampered zone in an image. In this paper, we propose a semi-blind and informed watermarking approach. We build the watermark from the original image using Weber's Law. Our approach aims to provide high robustness and imperceptibility together with accurate tamper detection. We divide the original image into blocks and choose the main pixel of each block for watermark insertion, where the embedding/extraction operates in the spatial domain. Tamper detection is tested by tampering with the watermarked image and then locating the tampered area from the extracted, attacked watermark. Robustness is also demonstrated against different kinds of geometric and non-geometric attacks. The experimental results confirm the imperceptibility and robustness of our watermarking approach and its accurate detection of tampered zones.
score_0 to score_13: 1.058492, 0.024949, 0.020784, 0.014933, 0.008152, 0.002645, 0.00092, 0.000351, 0.000082, 0.00001, 0, 0, 0, 0
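The query in this row routes nets by relaxing a 0-1 multicommodity-flow program and rounding the fractional solution, and one of its candidates is the general randomized-rounding technique. The sketch below shows the basic rounding step on a toy instance: each net picks one of its candidate routes with probability equal to that route's fractional LP value, after which channel congestion can be measured. The data structures and toy input are assumptions for illustration, not either paper's actual formulation.

```python
# Minimal sketch of randomized rounding for a route-selection LP relaxation:
# each net picks exactly one candidate route, with probability equal to the
# route's fractional value in the relaxed solution. Toy data, generic code.
import random
from collections import Counter

def round_routes(fractional, rng=random.Random(0)):
    """fractional: {net: [(route_edges, x_value), ...]} with x values summing to 1 per net."""
    chosen = {}
    for net, options in fractional.items():
        routes, weights = zip(*options)
        chosen[net] = rng.choices(routes, weights=weights, k=1)[0]
    return chosen

def channel_congestion(chosen):
    """Count how many selected routes use each edge (channel)."""
    load = Counter()
    for route in chosen.values():
        load.update(route)
    return load

# Toy instance: two nets, each with two fractional candidate routes.
frac = {
    "net1": [(("e1", "e2"), 0.7), (("e3",), 0.3)],
    "net2": [(("e2", "e4"), 0.4), (("e3", "e4"), 0.6)],
}
picked = round_routes(frac)
print(picked, dict(channel_congestion(picked)))
```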
Query Text: A Dynamic Compilation Framework for Controlling Microprocessor Energy and Performance. Dynamic voltage and frequency scaling (DVFS) is an effective technique for controlling microprocessor energy and performance. Existing DVFS techniques are primarily based on hardware, OS time interrupts, or static-compiler techniques. However, substantially greater gains can be realized when control opportunities are also explored in a dynamic compilation environment. There are several advantages to deploying DVFS and managing energy/performance tradeoffs through the use of a dynamic compiler. Most importantly, dynamic compiler driven DVFS is fine-grained, code-aware, and adaptive to the current microarchitecture environment. This paper presents a design framework of the run-time DVFS optimizer in a general dynamic compilation system. A prototype of the DVFS optimizer is implemented and integrated into an industrial-strength dynamic compilation system. The obtained optimization system is deployed in a real hardware platform that directly measures CPU voltage and current for accurate power and energy readings. Experimental results, based on physical measurements for over 40 SPEC or Olden benchmarks, show that significant energy savings are achieved with little performance degradation. SPEC2K FP benchmarks benefit with energy savings of up to 70% (with 0.5% performance loss). In addition, SPEC2K INT show up to 44% energy savings (with 5% performance loss), SPEC95 FP save up to 64% (with 4.9% performance loss), and Olden save up to 61% (with 4.5% performance loss). On average, the technique leads to an energy delay product (EDP) improvement that is 3X-5X better than static voltage scaling, and is more than 2X (22% vs. 9%) better than the reported DVFS results of prior static compiler work. While the proposed technique is an effective method for microprocessor voltage and frequency control, the design framework and methodology described in this paper have broader potential to address other energy and power issues such as di/dt and thermal control.
Live, Runtime Phase Monitoring and Prediction on Real Systems with Application to Dynamic Power Management Computer architecture has experienced a major paradigm shift from focusing only on raw performance to considering power-performance efficiency as the defining factor of the emerging systems. Along with this shift has come increased interest in workload characterization. This interest fuels two closely related areas of research. First, various studies explore the properties of workload variations and develop methods to identify and track different execution behavior, commonly referred to as "phase analysis". Second, a large complementary set of research studies dynamic, on-the-fly system management techniques that can adaptively respond to these differences in application behavior. Both of these lines of work have produced very interesting and widely useful results. Thus far, however, there exists only a weak link between these conceptually related areas, especially for real-system studies. Our work aims to strengthen this link by demonstrating a real-system implementation of a runtime phase predictor that works cooperatively with on-the-fly dynamic management. We describe a fully-functional deployed system that performs accurate phase predictions on running applications. The key insight of our approach is to draw from prior branch predictor designs to create a phase history table that guides predictions. To demonstrate the value of our approach, we implement a prototype system that uses it to guide dynamic voltage and frequency scaling. Our runtime phase prediction methodology achieves above 90% prediction accuracies for many of the experimented benchmarks. For highly variable applications, our approach can reduce mispredictions by more than 6X over commonly-used statistical approaches. Dynamic frequency and voltage scaling, when guided by our runtime phase predictor, achieves energy-delay product improvements as high as 34% for benchmarks with non-negligible variability, on average 7% better than previous methods and 18% better than a baseline unmanaged system.
An Analysis of Efficient Multi-Core Global Power Management Policies: Maximizing Performance for a Given Power Budget Chip-level power and thermal implications will continue to rule as one of the primary design constraints and performance limiters. The gap between average and peak power actually widens with increased levels of core integration. As such, if per-core control of power levels (modes) is possible, a global power manager should be able to dynamically set the modes suitably. This would be done in tune with the workload characteristics, in order to always maintain a chip-level power that is below the specified budget. Furthermore, this should be possible without significant degradation of chip-level throughput performance. We analyze and validate this concept in detail in this paper. We assume a per-core DVFS (dynamic voltage and frequency scaling) knob to be available to such a conceptual global power manager. We evaluate several different policies for global multi-core power management. In this analysis, we consider various different objectives such as prioritization and optimized throughput. Overall, our results show that in the context of a workload comprised of SPEC benchmark threads, our best architected policies can come within 1% of the performance of an ideal oracle, while meeting a given chip-level power budget. Furthermore, we show that these global dynamic management policies perform significantly better than static management, even if static scheduling is given oracular knowledge.
Is there a need for fuzzy logic? ''Is there a need for fuzzy logic?'' is an issue which is associated with a long history of spirited discussions and debate. There are many misconceptions about fuzzy logic. Fuzzy logic is not fuzzy. Basically, fuzzy logic is a precise logic of imprecision and approximate reasoning. More specifically, fuzzy logic may be viewed as an attempt at formalization/mechanization of two remarkable human capabilities. First, the capability to converse, reason and make rational decisions in an environment of imprecision, uncertainty, incompleteness of information, conflicting information, partiality of truth and partiality of possibility - in short, in an environment of imperfect information. And second, the capability to perform a wide variety of physical and mental tasks without any measurements and any computations [L.A. Zadeh, From computing with numbers to computing with words - from manipulation of measurements to manipulation of perceptions, IEEE Transactions on Circuits and Systems 45 (1999) 105-119; L.A. Zadeh, A new direction in AI - toward a computational theory of perceptions, AI Magazine 22 (1) (2001) 73-84]. In fact, one of the principal contributions of fuzzy logic - a contribution which is widely unrecognized - is its high power of precisiation. Fuzzy logic is much more than a logical system. It has many facets. The principal facets are: logical, fuzzy-set-theoretic, epistemic and relational. Most of the practical applications of fuzzy logic are associated with its relational facet. In this paper, fuzzy logic is viewed in a nonstandard perspective. In this perspective, the cornerstones of fuzzy logic - and its principal distinguishing features - are: graduation, granulation, precisiation and the concept of a generalized constraint. A concept which has a position of centrality in the nontraditional view of fuzzy logic is that of precisiation. Informally, precisiation is an operation which transforms an object, p, into an object, p^*, which in some specified sense is defined more precisely than p. The object of precisiation and the result of precisiation are referred to as precisiend and precisiand, respectively. In fuzzy logic, a differentiation is made between two meanings of precision - precision of value, v-precision, and precision of meaning, m-precision. Furthermore, in the case of m-precisiation a differentiation is made between mh-precisiation, which is human-oriented (nonmathematical), and mm-precisiation, which is machine-oriented (mathematical). A dictionary definition is a form of mh-precisiation, with the definiens and definiendum playing the roles of precisiend and precisiand, respectively. Cointension is a qualitative measure of the proximity of meanings of the precisiend and precisiand. A precisiand is cointensive if its meaning is close to the meaning of the precisiend. A concept which plays a key role in the nontraditional view of fuzzy logic is that of a generalized constraint. If X is a variable then a generalized constraint on X, GC(X), is expressed as X isr R, where R is the constraining relation and r is an indexical variable which defines the modality of the constraint, that is, its semantics. The primary constraints are: possibilistic, (r=blank), probabilistic (r=p) and veristic (r=v). The standard constraints are: bivalent possibilistic, probabilistic and bivalent veristic. In large measure, science is based on standard constraints. Generalized constraints may be combined, qualified, projected, propagated and counterpropagated. 
The set of all generalized constraints, together with the rules which govern generation of generalized constraints, is referred to as the generalized constraint language, GCL. The standard constraint language, SCL, is a subset of GCL. In fuzzy logic, propositions, predicates and other semantic entities are precisiated through translation into GCL. Equivalently, a semantic entity, p, may be precisiated by representing its meaning as a generalized constraint. By construction, fuzzy logic has a much higher level of generality than bivalent logic. It is the generality of fuzzy logic that underlies much of what fuzzy logic has to offer. Among the important contributions of fuzzy logic are the following: 1.FL-generalization. Any bivalent-logic-based theory, T, may be FL-generalized, and hence upgraded, through addition to T of concepts and techniques drawn from fuzzy logic. Examples: fuzzy control, fuzzy linear programming, fuzzy probability theory and fuzzy topology. 2.Linguistic variables and fuzzy if-then rules. The formalism of linguistic variables and fuzzy if-then rules is, in effect, a powerful modeling language which is widely used in applications of fuzzy logic. Basically, the formalism serves as a means of summarization and information compression through the use of granulation. 3.Cointensive precisiation. Fuzzy logic has a high power of cointensive precisiation. This power is needed for a formulation of cointensive definitions of scientific concepts and cointensive formalization of human-centric fields such as economics, linguistics, law, conflict resolution, psychology and medicine. 4.NL-Computation (computing with words). Fuzzy logic serves as a basis for NL-Computation, that is, computation with information described in natural language. NL-Computation is of direct relevance to mechanization of natural language understanding and computation with imprecise probabilities. More generally, NL-Computation is needed for dealing with second-order uncertainty, that is, uncertainty about uncertainty, or uncertainty^2 for short. In summary, progression from bivalent logic to fuzzy logic is a significant positive step in the evolution of science. In large measure, the real-world is a fuzzy world. To deal with fuzzy reality what is needed is fuzzy logic. In coming years, fuzzy logic is likely to grow in visibility, importance and acceptance.
Throughput optimal task allocation under thermal constraints for multi-core processors It is known that temperature gradients and thermal hotspots affect the reliability of microprocessors. Temperature is also an important constraint when maximizing the performance of processors. Although DVFS and DFS can be used to extract higher performance from temperature and power constrained single core processors, the full potential of multi-core performance cannot be exploited without the use of thread migration or task-to-core allocation schemes. In this paper, we formulate the problem of throughput-optimal task allocation on thermally constrained multi-core processors, and present a novel solution that includes optimal speed throttling. We show that the algorithms are implementable in real time and can be implemented in operating system's dynamic scheduling policy. The method presented here can result in a significant improvement in throughput over existing methods (5X over a naive scheme).
Cluster-level feedback power control for performance optimization Power control is becoming a key challenge for effectively operating a modern data center. In addition to reducing operation costs, precisely controlling power consumption is an essential way to avoid system failures caused by power capacity overload or overheating due to increasing high-density. Control-theoretic techniques have recently shown a lot of promise on power management thanks to their better control performance and theoretical guarantees on control accuracy and system stability. However, existing work over-simplifies the problem by controlling a single server independently from others. As a result, at the cluster level where multiple servers are correlated by common workloads and share common power supplies, power cannot be shared to improve application performance. In this paper, we propose a cluster-level power controller that shifts power among servers based on their performance needs, while controlling the total power of the cluster to be lower than a constraint. Our controller features a rigorous design based on an optimal multi-input-multi-output control theory. Empirical results demonstrate that our controller outperforms two state-of-the-art controllers, by having better application performance and more accurate power control.
Generative programming: methods, tools, and applications
Continuation-Based Program Transformation Strategies
Toward mechanical mathematics Results are reported here of a rather successful attempt of proving all theorems, totalling near 400, of Principia Mathematica which are strictly in the realm of logic, viz., the restricted predicate calculus with equality. A number of other problems of the same type are discussed. It is suggested that the time is ripe for a new branch of applied logic which may be called “inferential” analysis, which treats proofs as numerical analysis does calculations. This discipline seems capable, in the not too remote future, of leading to machine proofs of difficult new theorems. An easier preparatory task is to use machines to formalize proofs of known theorems. This line of work may also lead to mechanical checks of new mathematical results comparable to the debugging of a program.
A Model for Deliberation, Action, And Introspection This thesis investigates the problem of controlling or directing the reasoning and actions of a computer program. The basic approach explored is to view reasoning as a species of action, so that a program might apply its reasoning powers to the task of deciding what inferences to make as well as to deciding what other actions to take. A design for the architecture of reasoning programs is proposed. This architecture involves self-consciousness, intentional actions, deliberate adaptations, and a form of decision-making based on dialectical argumentation. A program based on this architecture inspects itself, describes aspects of itself to itself, and uses this self-reference and these self-descriptions in making decisions and taking actions. The program's mental life includes awareness of its own concepts, beliefs, desires, intentions, inferences, actions, and skills. All of these are represented by self-descriptions in a single sort of language, so that the program has access to all of these aspects of itself, and can reason about them in the same terms.
Fast and efficient lossless image compression We present a new method for lossless image compression that gives compression comparable to JPEG lossless mode with about five times the speed. Our method, called FELICS, is based on a novel use of two neighboring pixels for both prediction and error modeling. For coding we use single bits, adjusted binary codes, and Golomb or Rice codes. For the latter we present and analyze a provably good method for estimating the single coding parameter.
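Illustrative sketch only (not the authors' exact coder): FELICS forms, for each pixel, the interval spanned by its two nearest coded neighbours and signals whether the pixel falls inside, below, or above that interval, using adjusted binary codes inside and Golomb/Rice codes for the out-of-range tails. The function names below are placeholders, and parameter estimation, the adjusted binary code, and bit packing are omitted.

def felics_interval(left, above):
    # FELICS-style context: the current pixel is coded relative to the
    # interval spanned by its two nearest neighbours (left and above).
    return min(left, above), max(left, above)

def rice_encode(value, k):
    # Golomb-Rice code with parameter k: unary-coded quotient terminated
    # by a 0 bit, followed by the k-bit binary remainder.
    q, r = value >> k, value & ((1 << k) - 1)
    bits = "1" * q + "0"
    if k:
        bits += format(r, "0{}b".format(k))
    return bits

# Example: rice_encode(9, 2) yields "110" + "01" == "11001".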
Postplacement rewiring by exhaustive search for functional symmetries We propose two new algorithms for rewiring: a postplacement optimization that reconnects pins of a given netlist without changing the logic function and gate locations. In the first algorithm, we extract small subcircuits consisting of several gates from the design and reconnect pins according to the symmetries of the subcircuits. To enhance the power of symmetry detection, we also propose a graph-based symmetry detector that can identify permutational and phase-shift symmetries on multiple input and output wires, as well as hybrid symmetries, creating abundant opportunities for rewiring. Our second algorithm, called long-range rewiring, is based on reconnecting equivalent pins and can augment the first approach for further optimization. We apply our techniques for wirelength optimization and observe that they provide wirelength reduction comparable to that achieved by detailed placement.
Adjustable prediction-based reversible data hiding Reversible data hiding has received considerable interest recently in the field of information hiding. However, most difference expansion (DE-based) schemes suffer from the problem that the image cannot be restored to its original form without a location map. Reversible data hiding schemes that require a location map cannot effectively raise the hiding capacity; in addition, they also incur additional computational cost during embedding and extracting. This study presents a reversible data hiding scheme that is free of a location map, and thus the computational cost associated with embedding and extracting is lower than that of most DE-based schemes. Furthermore, our scheme solves the problem of the low embedding rate of Tseng and Hsieh's scheme because our scheme takes advantage of the bulk of relatively small prediction errors to raise embedding capacity. Experimental results demonstrate that the proposed scheme achieves a larger payload and better image quality than some existing schemes by means of an adjustable control factor. The performance of the proposed scheme is more stable across different images.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of the embedded cover image to ensure security of the secret data. The multilayer embedding of a pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting while the peak point itself keeps unchanged. At the extraction end, we no longer need the key information about the peak point; we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of the embedded cover image become almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
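A minimal sketch of plain histogram-shifting embedding with a single peak/zero pair (a simplification for illustration, not the localization scheme above): pixels strictly between the peak and the empty zero bin are shifted one step towards the zero bin, and each peak-valued pixel then carries one payload bit. It assumes peak < zero, that the zero bin is truly empty, and that bits is an iterable of 0/1 integers; overflow handling, side information, and multilayer embedding are omitted.

import numpy as np

def hs_embed(img, bits, peak, zero):
    # Shift the histogram range (peak, zero) one step towards the empty
    # zero bin, then embed one bit per pixel equal to the peak value:
    # bit 0 leaves the pixel at `peak`, bit 1 moves it to `peak + 1`.
    out = img.astype(np.int16)
    out[(out > peak) & (out < zero)] += 1
    payload = iter(bits)
    flat = out.ravel()                     # view: edits write through to `out`
    for i in range(flat.size):
        if flat[i] == peak:
            try:
                flat[i] = peak + next(payload)
            except StopIteration:
                break
    return out.astype(np.uint8)

# Extraction mirrors this: values peak / peak+1 yield bits 0 / 1 and are reset
# to peak, while values in [peak+2, zero] are shifted back down by one.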
1.116
0.11
0.11
0.11
0.11
0.051333
0
0
0
0
0
0
0
0
Dynamic programming revisited: improving knapsack algorithms. The contribution of this paper is twofold: At first an improved dynamic programming algorithm for the bounded knapsack problem is given. It decreases the running time for an instance with n items and capacity c from to , which is the same pseudopolynomial complexity as usually given for the 0-1 knapsack problem. In the second part a general approach based on dynamic programming is presented to reduce the storage requirements for combinatorial optimization problems where it is computationally more expensive to compute the explicit solution structure than the optimal solution value. Among other applications of this scheme it is shown that the 0-1 knapsack problem as well as the bounded knapsack problem can be solved in time and space.
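The paper's improved bounded-knapsack algorithm is not reproduced here; as a point of reference for the pseudopolynomial complexity the abstract alludes to, the following is the textbook O(n*c)-time, O(c)-space dynamic program for the 0-1 knapsack problem (a sketch, not the paper's method).

def knapsack_01(values, weights, capacity):
    # Classic DP over capacities; iterating the capacity downwards ensures
    # each item is used at most once (0-1 semantics).
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# knapsack_01([60, 100, 120], [10, 20, 30], 50) == 220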
On search, decision, and the efficiency of polynomial-time algorithms Recent advances in well-partial-order theory, especially the seminal contributions of Robertson and Seymour, have troubling consequences for those who would equate tractability with polynomial-time decidability. Specifically: (1) many problems are now known to be decidable in low-degree polynomial time, but only by decision algorithms with overwhelmingly astronomical constants of proportionality; (2) the existence of such a polynomial-time decision algorithm alone does not ensure that a corresponding search problem can be solved efficiently; and (3) even if both a decision problem and a corresponding search problem can be shown to be polynomial-time computable, there is no guarantee that correct algorithms can be found or even recognized within any bounded amount of time. In this paper, we present a number of techniques for dealing with these remarkable features of algorithms based on well-partially-ordered sets. Our main results include a general strategy with which such algorithms can be made constructive. With the aid of this method, we demonstrate that low-degree polynomial-time algorithms are now known for almost all of the catalogued applications of RS posets. We also prove that, despite the nonconstructive nature of the well-partial-order theory on which this line of research is based, no RS poset application can settle the P versus NP question non-constructively by any established method of argument.
Parameterized Complexity and Approximation Algorithms Approximation algorithms and parameterized complexity are usually considered to be two separate ways of dealing with hard algorithmic problems. In this paper, our aim is to investigate how these two fields can be combined to achieve better algorithms than what any of the two theories could offer. We discuss the different ways parameterized complexity can be extended to approximation algorithms, survey results of this type and propose directions for future research.
Proved-patterns-based development for structured programs
Approximate local search in combinatorial optimization Local search algorithms for combinatorial optimization problems are in general of pseudopolynomial running time and polynomial-time algorithms are often not known for finding locally optimal solutions for NP-hard optimization problems. We introduce the concept of ε-local optimality and show that an ε-local optimum can be identified in time polynomial in the problem size and 1/ε whenever the corresponding neighborhood can be searched in polynomial time, for ε > 0. If the neighborhood can be searched in polynomial time for a δ-local optimum, a variation of our main algorithm produces a (δ + ε)-local optimum in time polynomial in the problem size and 1/ε. As a consequence, a combinatorial optimization problem has a fully polynomial-time approximation scheme if and only if the problem of determining a better solution---the so-called augmentation problem---has a fully polynomial-time approximation scheme.
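A generic sketch of the idea behind ε-local optimality, for minimisation (an illustration under assumed interfaces, not the paper's algorithm): only moves that improve the objective by a relative factor of more than (1 + ε) are accepted, so for positive integer costs the number of accepted moves is bounded by log base (1 + ε) of the initial cost, which is polynomial in the input size and 1/ε.

def epsilon_local_search(x0, neighbors, cost, eps):
    # Accept a neighbouring solution only if it is "significantly" better,
    # i.e. it improves the current cost by more than a (1 + eps) factor.
    # The returned solution is an eps-local optimum: no neighbour beats it
    # by more than that factor.
    x, improved = x0, True
    while improved:
        improved = False
        for y in neighbors(x):
            if cost(y) * (1.0 + eps) < cost(x):
                x, improved = y, True
                break
    return x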
Event Based Sequential Program Development: Application to Constructing a Pointer Program In this article, I present an "event approach" used to formally develop sequential programs. It is based on the formalism of Action Systems [6] (and Guarded Commands [7]), which is interesting because it involves a large number of pointer manipulations.
Algorithm theories and design tactics Algorithm theories represent the structure common to a class of algorithms, such as divide-and-conquer or backtrack. An algorithm theory for a class A provides the basis for design tactics — specialized methods for designing A-algorithms from formal problem specifications. We illustrate this approach with recent work on the theory of global search algorithms and briefly mention several others. Several design tactics have been implemented in the KIDS/CYPRESS system and have been used to semiautomatically derive many algorithms.
McLaren's masterpiece Abstract not available
ALICE a multi-processor reduction machine for the parallel evaluation of applicative languages The functional or applicative languages have long been regarded as suitable vehicles for overcoming many of the problems involved in the production and maintenance of correct and reliable software. However, their inherent inefficiencies when run on conventional von Neumann style machines have prevented their widespread acceptance. With the declining cost of hardware and the increasing feasibility of multi-processor architectures this position is changing, for, in contrast to conventional programs where it is difficult to detect those parts that may be executed concurrently, applicative programs are ideally suited to parallel evaluation. In this paper we present a scheme for the parallel evaluation of a wide variety of applicative languages and provide an overview of the architecture of a machine on which it may be implemented. First we describe the scheme, which may be characterized as performing graph reduction, at the abstract level and discuss mechanisms that allow several modes of parallel evaluation to be achieved efficiently. We also show how a variety of languages are supported. We then suggest an implementation of the scheme that has the property of being highly modular; larger and faster machines being built by joining together smaller ones. Performance estimates illustrate that a small machine (of the size that we envisage would form the basic building block of large systems) would provide an efficient desk-top personal applicative computer, while the larger versions promise very high levels of performance indeed. The machine is designed to be ultimately constructed from a small number of types of VLSI component. Finally we compare our approach with the other proposed schemes for the parallel evaluation of applicative languages and discuss planned future developments.
Quantitative steganalysis of digital images: estimating the secret message length The objective of steganalysis is to detect messages hidden in cover objects, such as digital images. In practice, the steganalyst is frequently interested in more than whether or not a secret message is present. The ultimate goal is to extract and decipher the secret message. However, in the absence of the knowledge of the stego technique and the stego and cipher keys, this task may be extremely time consuming or completely infeasible. Therefore, any additional information, such as the message length or its approximate placement in image features, could prove very valuable to the analyst. In this paper, we present general principles for developing steganalytic methods that can accurately estimate the number of changes to the cover image imposed during embedding. Using those principles, we show how to estimate the secret message length for the most common embedding archetypes, including the F5 and OutGuess algorithms for JPEG, EzStego algorithm with random straddling for palette images, and the classical LSB embedding with random straddling for uncompressed image formats. The paper concludes with an outline of ideas for future research such as estimating the steganographic capacity of embedding algorithms.
Qualified answers and their application to transformation Without Abstract
Digital image watermarking using balanced multiwavelets In this paper, a robust watermarking algorithm using balanced multiwavelet transform is proposed. The latter transform achieves simultaneous orthogonality and symmetry without requiring any input prefiltering. Therefore, considerable reduction in computational complexity is possible, making this transform a good candidate for real-time watermarking implementations such as audio broadcast monitoring and DVD video watermarking. The embedding scheme is image adaptive using a modified version of a well-established perceptual model. Therefore, the strength of the embedded watermark is controlled according to the local properties of the host image. This has been achieved by the proposed perceptual model, which is only dependent on the image activity and is not dependent on the multifilter sets used, unlike those developed for scalar wavelets. This adaptivity is a key factor for achieving the imperceptibility requirement often encountered in watermarking applications. In addition, the watermark embedding scheme is based on the principles of spread-spectrum communications to achieve higher watermark robustness. The optimal bounds for the embedding capacity are derived using a statistical model for balanced multiwavelet coefficients of the host image. The statistical model is based on a generalized Gaussian distribution. Limits of data hiding capacity clearly show that balanced multiwavelets provide higher watermarking rates. This increase could also be exploited as a side channel for embedding watermark synchronization recovery data. Finally, the analytical expressions are contrasted with experimental results where the robustness of the proposed watermarking system is evaluated against standard watermarking attacks.
Profiling an Incremental Data Flow Analysis Algorithm Incremental data flow analysis algorithms have been designed to deal efficiently with change in evolving software systems. These algorithms document the current state of a software system by incorporating change effects into previously derived information describing the definition and use of data in the system. Unfortunately, the performance of these algorithms cannot, in general, be characterized by analytic predictions of their expected behavior. It is possible, however, to observe their performance empirically and predict their average behavior. The authors report on experiments on the empirical profiling of a general-purpose, incremental data flow analysis algorithm. The algorithm, dominator based and coded in C, was applied to statistically significant numbers of feasible, random software systems of moderate size. The experimental results, with quantifiable confidence limits, substantiate the claim that incremental analyses are viable and grow more valuable as a software system grows in size.
New high-performance reversible data hiding method for VQ indices based on improved locally adaptive coding scheme. •We propose a new data hiding method using improved locally adaptive coding (ILAS).•ILAS uses the two-dimensional structure of image and correlation of adjacent indices.•Our proposed ILAS can significantly improve the performance of LAS.•Experiments show our method has a better performance compared with other methods.
1.21136
0.21136
0.21136
0.21136
0.10568
0.070453
0.004697
0.000419
0
0
0
0
0
0
Computing with Rewrite Systems
A fold/unfold transformation framework for rewrite theories extended to CCT Many transformation systems for program optimization, program synthesis, and program specialization are based on fold/unfold transformations. In this paper, we present a fold/unfold-based transformation framework for rewriting logic theories which is based on narrowing. To the best of our knowledge, this is the first fold/unfold transformation framework which allows one to deal with functions, rules, equations, sorts, and algebraic laws (such as commutativity and associativity). We provide correctness results for the transformation system w.r.t. the semantics of ground reducts. Moreover, we show how our transformation technique can be naturally applied to implement a Code Carrying Theory (CCT) system. CCT is an approach for securing delivery of code from a producer to a consumer where only a certificate (usually in the form of assertions and proofs) is transmitted from the producer to the consumer who can check its validity and then extract executable code from it. Within our framework, the certificate consists of a sequence of transformation steps which can be applied to a given consumer specification in order to automatically synthesize safe code in agreement with the original requirements. We also provide an implementation of the program transformation framework in the high-performance, rewriting logic language Maude which, by means of an experimental evaluation of the system, highlights the potentiality of our approach.
Programming with Invariants The use of a restricted class of invariants as part of a language supports both the accurate synthesis of high-level programs and their translation into efficient implementations.
Flagship: a parallel architecture for declarative programming The Flagship project aims to produce a computing technology based on the declarative style of programming. A major component of that technology is the design for a parallel machine which can efficiently exploit the implicit parallelism in declarative programs. This paper describes the computational models which expose this implicit parallelism, and outlines an architecture designed to exploit it. The operational issues, such as dynamic load balancing, which arise in such a system are discussed, and the mechanisms being used to evaluate the architecture are described.
Transformational programming: applications to algorithms and systems Ten years ago Cheatham and Wegbreit [4] proposed a transformational program development methodology based on notions of top-down stepwise program refinement first expressed by Dijkstra [10] and Wirth [45]. A schema describing the process of this methodology is given in fig. 1. To develop a program by transformation, we first specify the program in as high a level of abstraction and as great a degree of clarity as our programming language admits. This high level problem statement program P is proved correct semimechanically according to some standard approach (see Floyd and Hoare [15, 21]). Next, using an interactive system equipped with a library of encoded transformations, each of which maps a correct program into another equivalent program, we select and apply transformations one at a time to successive versions of the program until we obtain a concrete, low level, efficient implementation version P'. The goals of transformational programming are to reduce programming labor, improve program reliability, and upgrade program performance. In order for labor to be reduced, the effort required to obtain P, prove it correct, and derive P' by transformation should be less than the effort required to code P from scratch, and also to debug it. Program reliability will be improved if P can be certified correct, and if each transformation preserves program meaning. Finally, program performance will be upgraded if transformations are directed towards increased efficiency. Experimental transformational systems that emphasize one or more aspects of the methodology outlined above have been implemented by Cheatham [5], Darlington [3], Loveman [27], Standish [41], Feather [14], Huet and Lang [11], and others. However, all of these systems fall short of the goals, because of a number of reasons that include: (1) inability to mechanize the checking of transformation applicability conditions; (2) reliance on large, unmanageable collections of low level transformations, and long arduous derivation sequences; (3) dependency on transformations whose potential for improving program performance is unpredictable; (4) use of source languages insufficiently high level to accommodate perspicuous initial program specifications and powerful algorithmic transformations. Yet, convincing evidence that this new methodology will succeed has come from recent advances in verification, program transformations, syntax directed editing systems, and high level languages. These advances, discussed below, represent partial solutions to the problems stated above, and could eventually be integrated into a single system. (1) The transformational approach to verification was pioneered by Gerhart [19] and strengthened by the results of Schwartz [39], Scherlis [36], Broy et al. [2], Koenig and Paige [26, 31], Blaustein [1], and others. Due mainly to improved technology for the mechanization of proofs of enabling conditions that justify application of transformations, this approach is now at a point where it can be effectively used in a system. Such mechanization depends strongly on program analysis and, in particular, on reanalyses after a program is modified. Attribute grammars [24] have been shown to be especially useful in facilitating program analysis [23]. Moreover, Reps [34] has discovered an algorithm that reevaluates attributes in optimal time after a program undergoes syntax directed editing changes (as are allowed on the Cornell Synthesizer [43]). He has implemented his algorithm recently, and has reported initial success. (2) There are encouraging indications that a transformational system can be made to depend mainly on a small but powerful collection of transformations applied in top-down fashion to programs specified at various levels of abstraction from logic down to assembler. We envision such a system as a fairly conventional semiautomatic compiler in which classes of transformations are selected semimechanically in a predetermined order, and are justified by predicates supplied mechanically but proved semimanually. Of particular importance is nondeterminism removal, which as formulated by Sharir [40] could lead to a technique for turning naive, nondeterministic programs into deterministic programs with emergent strategies. Such programs could then be transformed automatically by finite differencing [13, 16, 17, 18, 29, 30, 31] and jamming [28, 31, 20] (which we have implemented) into programs whose data access paths are fully determined. The SETL optimizer could improve these programs further by automatically choosing efficient data structure representations and aggregations. (3) Of fundamental importance to the transformations just mentioned is the fact that they can be associated with speedup predictions. Fong and Ullman [16] were the first to characterize an important class of algorithmic differencing transformations in terms of accurate asymptotic speedup predictions, e.g., they gave conditions under which repeated calculation of a set former {x in s | k(x)} could be computed in O(#s) + cost(k) steps. By considering stronger conditions and special cases for the boolean valued subpart k, Paige [31] later gave sharper speedup predictions (e.g., either O(1) steps for each encounter of the set former or a cumulative cost of O(#s) steps for every encounter) associated with another differencing method. Both Morgenstern [28] and Paige [31] prove constant factor improvements due to their jamming transformations (implemented by Morgenstern for the improvement of file processing, and by Paige for the optimization of programs). Constant factor speedup has also been observed for data structure selection by the method of basings, but a supporting analytic study has not been presented [8, 37]. (4) Essential to the whole transformational process is a wide spectrum programming language (or set of languages) that can express a program at every stage of development from the initial abstract specification down to its concrete implementation realization. Since transformations applied to programs written at the highest levels of abstraction are likely to make the most fundamental algorithmic changes, it is important to stress abstract features in our language. In addition to supporting transformations, the highest level language dictions should support lucid initial specifications, verification, and even program analysis. Of special importance is SETL [38, 9], because its abstract set theoretic dictions can model data structures and algorithms easily, because its philosophy of avoiding hidden asymptotic costs facilitates program analysis, because its semantics conforms to finite set theory and can accommodate a set theoretic program logic, and because it is wide spectrum. As is evidenced by the work of Schwartz, Fong, Paige, and Sharir, SETL is also a rich medium for transformation.
A Survey of Proposed Architectures for the Execution of Functional Languages Functional and imperative programming languages are characterized and compared with regard to programming style and efficiency. Machine design issues, are characterized by interconnection topology, evaluation strategy, program and data representation, process management, and dynamic optimization techniques; short descriptions of a number of "functional" machines are given in terms of these issues. Multiprocessor issues and systems are particularly emphasized. Outstanding problems in the area are reviewed and an overall evaluation of proposed machines is given.
HOPE: An experimental applicative language An applicative language called HOPE is described and discussed. The underlying goal of the design and implementation effort was to produce a very simple programming language which encourages the construction of clear and manipulable programs. HOPE does not include an assignment statement; this is felt to be an important simplification. The user may freely define his own data types, without the need to devise a complicated encoding in terms of low-level types. The language is very strongly typed, and as implemented it incorporates a typechecker which handles polymorphic types and overloaded operators. Functions are defined by a set of recursion equations; the left-hand side of each equation includes a pattern used to determine which equation to use for a given argument. The availability of arbitrary higher-order types allows functions to be defined which 'package' recursion. Lazily-evaluated lists are provided, allowing the use of infinite lists which could be used to provide interactive input/output and concurrency. HOPE also includes a simple modularisation facility which may be used to protect the implementation of an abstract data type.
Program Reusability through Program Transformation How can a program written in pure applicative LISP be reused in a Fortran environment? One answer is by automatically transforming it from LISP into Fortran. In this paper we discuss a practical application of this technique-one that yields an efficient Fortran program. We view this process as an example of abstract programming, in which the LISP program constitutes an abstract specification for the Fortran version. The idea of strategy-a strategy for getting from LISP to Fortran-is basic to designing and applying the transformations. One strategic insight is that the task is easier if the LISP program is converted to ``recursive'' Fortran, and then the recursive Fortran program is converted to nonrecursive standard Fortran. Another strategic insight is that much of the task can be accomplished by converting the program from one canonical form to another. Developing a strategy also involves making various implementation decisions. One advantage of program transformation methodology is that it exposes such decisions for examination and review. Another is that it enables optimizations to be detected and implemented easily. Once a strategy has been discovered, it can be implemented by means of rewrite-rule transformations using the TAMPR program transformation system. The transformational approach to program reuse based on this strategy has a measure of elegance. It is also practical-the resulting Fortran program is 25 percent faster than its compiled LISP counterpart, even without extensive optimization.
Advanced techniques for logic program specialisation Availability on the WWW: http://www.cs.kuleuven.ac.be/∼michael.
Strengths and weaknesses of program verification systems For over a decade, major research efforts have been directed at developing and applying Program Verification Systems. Particular examples are the Gypsy Verification Environment (at The University of Texas at Austin and Computational Logic, Inc.), and Affirm-85 (at General Electric, Schenectady, New York).
On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit We develop improved algorithms to construct good low-density parity-check codes that approach the Shannon limit very closely. For rate 1/2, the best code found has a threshold within 0.0045 dB of the Shannon limit of the binary-input additive white Gaussian noise channel. Simulation results with a somewhat simpler code show that we can achieve within 0.04 dB of the Shannon limit at a bit error rate of 10/sup -6/ using a block length of 10/sup 7/.
Second Generation Benchmarking and Application Oriented Evaluation Digital image watermarking techniques for copyright protection have become increasingly robust. The best algorithms perform well against the now standard benchmark tests included in the Stirmark package. However the stirmark tests are limited since in general they do not properly model the watermarking process and consequently are limited in their potential to removing the best watermarks. Here we propose a second generation benchmark for image watermarking which includes attacks which take into account powerful prior information about the watermark and the watermarking algorithms. We follow the model of the Stirmark benchmark and propose several new categories of tests including: denoising (ML and MAP), wavelet compression, watermark copy attack, active desynchronization, denoising, geometrical attacks, and denoising followed by perceptual remodulation. In addition, we take the important step of presenting results as a function of application. This is an important contribution since it is unlikely that one technology will be suitable for all applications.
Semantic patches considered helpful Modern software development is characterized by the use of libraries and interfaces. This software architecture carries down even to the operating system level. Linux, for example, is organized as a small kernel, complemented with libraries providing generic functionalities for use in implementing network access, file management, access to physical devices, etc. Much of the Linux source code then consists of service-specific files that use these libraries. These libraries are also used by the many OS-level services that are maintained outside of the Linux source tree.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of the embedded cover image to ensure security of the secret data. The multilayer embedding of a pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting while the peak point itself keeps unchanged. At the extraction end, we no longer need the key information about the peak point; we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of the embedded cover image become almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
1.042621
0.024242
0.021375
0.018182
0.013831
0.006061
0.000533
0.000098
0.000038
0.000005
0
0
0
0
Block cipher based separable reversible data hiding in encrypted images While most reversible data hiding in encrypted images (RDH-EI) are based on stream cipher, this paper aims to present an alternative method feasible for block-enciphered images. Before uploading data to a remote server, the content owner encrypts the original image with a block cipher algorithm using an encryption key. Then, the server embeds additional bits into the encrypted image with an embedding key to generate the marked encrypted image. On the recipient side, the additional bits can be extracted if the receiver has the embedding key. In case the receiver has only the encryption key, the marked encrypted image can be directly deciphered to a plaintext image with good quality. When both the embedding and encryption keys are available for the receiver, he can recover the original image without any errors. Compared with the existing block cipher based RDH-EI method, drawbacks of the encryption and the recovery are avoided, and good embedding payloads are achieved.
A new approach to commutative watermarking-encryption We propose a new approach to commutative watermarking-encryption (CWE). A permutation cipher is used to encrypt the multimedia data, which leaves the global statistics of the multimedia data intact. Therefore, any non-localized watermarking scheme that depends only on global statistics of the multimedia data can be combined with the permutation cipher to form a CWE scheme. We demonstrate this approach by giving a concrete implementation, which manipulates the global histogram to achieve watermark embedding/detection.
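A toy illustration (not the paper's concrete scheme) of why a permutation cipher commutes with histogram-based watermarking: shuffling pixel positions with a keyed permutation leaves the global histogram untouched, so any embedder or detector that reads only the histogram behaves identically before and after encryption. Function names are placeholders.

import numpy as np

def permute_encrypt(img, key):
    # Keyed position permutation: every pixel value survives, only its
    # location changes, so the global histogram is invariant.
    rng = np.random.default_rng(key)
    perm = rng.permutation(img.size)
    return img.ravel()[perm].reshape(img.shape), perm

def permute_decrypt(cipher, perm):
    flat = np.empty(cipher.size, dtype=cipher.dtype)
    flat[perm] = cipher.ravel()
    return flat.reshape(cipher.shape)

# Histogram invariance check:
# np.array_equal(np.bincount(img.ravel(), minlength=256),
#                np.bincount(cipher.ravel(), minlength=256))  -> True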
Reversible data hiding of full color JPEG2000 compressed bit-stream preserving bit-depth information We present a lossless data hiding method for JPEG2000 compressed data based on the reversible information hiding for binary images we have proposed. In JPEG2000 compression, full color images with RGB three colors are transformed to YCrCb color space, and then, for each color component, wavelet transform, quantization and entropy coding are performed independently. The wavelet coefficients of each color component are quantized, therefore, a least significant bit plane (LSB) would be extracted. The proposed method embeds additional information to be hidden into the quantized wavelet coefficients of the Y color component in a reversible way. To realize this, we embed not only secret data and a JBIG2 bit-stream of a part of the LSB plane but also the bit-depth of the quantized coefficients on some code-blocks. Experimental results demonstrate the feasibility of an application of the proposed method to image alteration detection for JPEG2000 compressed data.
Optimum reversible data hiding and permutation coding This paper is mainly devoted to investigating the connection between binary reversible data hiding and permutation coding. We start by undertaking an approximate combinatorial analysis of the embedding capacity of reversible watermarking in the binary Hamming case, which asymptotically shows that optimum reversible watermarking must involve not only “writing on dirty paper”, as in any blind data hiding scenario, but also writing on the dirtiest parts of the paper. The asymptotic analysis leads to the information-theoretical result given by Kalker and Willems more than a decade ago. Furthermore, the novel viewpoint of the problem suggests a near-optimum reversible watermarking algorithm for the low embedding distortion regime based on permutation coding. A practical implementation of permutation coding, previously proposed in the context of maximum-rate perfect steganography of memoryless hosts, can be used to implement the algorithm. The paper concludes with a discussion on the evaluation of the general rate-distortion bound for reversible data hiding.
Complete Separable Reversible Data Hiding in Encrypted Image. Reversible data hiding in encrypted image (RDHEI) is an emerging technology since it has good potential for practical applications such as encrypted image authentication, content owner identification and privacy protection. But there is one key problem of many existing published works, that the embedded data only can be extracted either before or after image decryption. In this paper, a complete separable reversible data hiding scheme in encrypted images is proposed. Additional data can be embedded into a cipher image which is encrypted by RC4 and can be extracted error-free both from the cipher domain and the plaintext domain. Moreover, the proposed method is simpler to calculate, while offering better performance. The results demonstrate that larger payload, better image quality, and error-free data extraction as well as image recovery are achieved.
Stereo Image Coding With Histogram Pair Based Reversible Data Hiding This paper presents a stereo image coding method using reversible data hiding technique so that the right frame can be recovered losslessly and the left frame can be reconstructed with high visual quality. Utilizing the similarity between two frames in a stereo image pair the required size of storage and transmission bandwidth for the stereo image pair can be reduced to 50 %. A residual error matrix with a dynamic range of [-255, 255] is obtained by applying a frame-wise disparity algorithm which first shifts the left frame horizontally by a certain amount and then computes its difference to the right frame. Next, thus the generated residual error image with gray levels [0, 255] is obtained losslessly by a proposed labeling scheme. JPEG2000 lossy compression is then applied to the residual error image. The histogram-pair based reversible data hiding scheme is then utilized to embed the JPEG2000 lossy compressed data into the right frame. Compared with the prior art, which uses a block-based disparity estimation algorithm and a location map based reversible data hiding, the proposed method has demonstrated that the stereo image can be reconstructed with higher visual quality and with faster processing speed. Specifically, the experiments have demonstrated that both the PSNR and visual quality of the reconstructed stereo image pair are higher than those achieved by the prior arts.
Reversible Jpeg-Based Hiding Method With High Hiding-Ratio The stego-images generated by many existing hiding techniques are not economic in size, and hence need compression. Unfortunately, compression usually destroys the secret content hidden inside. To solve this dilemma, some hiding methods based on compression code (rather than the image itself) are reported. This paper proposes a high-capacity and high-hiding-ratio "reversible" steganography method based on JPEG-compression code. In the proposed method, the JPEG compression code of an image is used as the cover media. An 8 x 8 hiding-capacity table is firstly evaluated, which is then utilized to modify the quantization table attached to the given JPEG code. The two quantization tables (modified and original) together can map the DCT coefficients of each block to some larger DCT coefficients, with secret data hidden inside these larger DCT coefficients. In the decoding process, after lossless extraction of the hidden secret data, the proposed method can also recover the original JPEG-compression code. Experimental results show that our method outperforms other JPEG-based hiding methods (reversible or not) regarding both hiding-ratio and stego-image's quality.
Pattern-Based Data Hiding for Binary Image Authentication by Connectivity-Preserving In this paper, a novel blind data hiding method for binary images authentication aims at preserving the connectivity of pixels in a local neighborhood is proposed. The "flippability" of a pixel is determined by imposing three transition criteria in a 3 times 3 moving window centered at the pixel. The "embeddability" of a block is invariant in the watermark embedding process, hence the watermark can be extracted without referring to the original image. The "uneven embeddability" of the host image is handled by embedding the watermark only in those "embeddable" blocks. The locations are chosen in such a way that the visual quality of the watermarked image is guaranteed. Different types of blocks are studied and their abilities to increase the capacity are compared. The problem of how to locate the "embeddable" pixels in a block for different block schemes is addressed which facilitates the incorporation of the cryptographic signature as the hard authenticator watermark to ensure integrity and authenticity of the image. Discussions on the security considerations, visual quality against capacity, counter measures against steganalysis and analysis of the computational load are provided. Comparisons with prior methods show superiority of the proposed scheme
Reversible Image Data Hiding with Contrast Enhancement In this letter, a novel reversible data hiding (RDH) algorithm is proposed for digital images. Instead of trying to keep the PSNR value high, the proposed algorithm enhances the contrast of a host image to improve its visual quality. The highest two bins in the histogram are selected for data embedding so that histogram equalization can be performed by repeating the process. The side information is embedded along with the message bits into the host image so that the original image is completely recoverable. The proposed algorithm was implemented on two sets of images to demonstrate its efficiency. To our best knowledge, it is the first algorithm that achieves image contrast enhancement by RDH. Furthermore, the evaluation results show that the visual quality can be preserved after a considerable amount of message bits have been embedded into the contrast-enhanced images, even better than three specific MATLAB functions used for image contrast enhancement.
Reversible data hiding based on multilevel histogram modification and pixel value grouping. Pixel value grouping helps to improve efficiency in exploiting block redundancy.Only the maximum and the minimum of a block can be chosen as to-be-shifted.Not only reference pixel but also grouping results preserve after embedding. This paper proposes a multilevel histogram modification based reversible data hiding scheme using a new difference generation strategy called pixel value grouping (PVG). It aims to produce shaper difference histogram by exploiting the high correlation among pixels within block. After sorting, pixel values are grouped according to their distribution. For each set of similar pixel values, real or virtual reference pixel will be determined to compute differences in the scope of pixel values group and next secret message is embedded through expansion embedding. By PVG, we success to greatly reduce the number of to-be-shifted pixels while producing sufficient EC and hence less distortion can be introduced for embedding the same payload. Moreover, the same grouping can be achieved at the decoder and the real or virtual reference pixel can be determined without any prior knowledge, which guarantees the reversibility. Experimental results demonstrate that our scheme outperforms previous related state-of-the-art schemes.
Low-density parity-check codes A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number j of 1's and each row contains a small fixed number k of 1's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed j. When used with maximum likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed j. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For j ≥ 3 and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound.
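For concreteness, a tiny property check of the sparsity condition described above and of the parity-check (syndrome) condition; this is only an illustration, not a code construction or the probabilistic decoder discussed in the abstract.

import numpy as np

def is_regular_ldpc(H, j, k):
    # (j, k)-regular low-density parity-check matrix: every column holds
    # exactly j ones and every row exactly k ones (both small and fixed).
    H = np.asarray(H)
    return bool((H.sum(axis=0) == j).all() and (H.sum(axis=1) == k).all())

def satisfies_parity_checks(H, x):
    # x is a codeword exactly when all parity checks hold, i.e. the
    # syndrome H x (mod 2) is the all-zero vector.
    return not (np.asarray(H) @ np.asarray(x) % 2).any()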
Multibit versus multilevel embedding in high capacity difference expansion reversible watermarking The classical difference expansion (DE) reversible watermarking expands two times a difference in order to embed one bit of data. The upper limit of such DE schemes in a single embedding level is usually bounded by 1 bpp. In order to obtain more than 1 bpp, several embedding stages are chained. The multibit DE schemes expand n times the difference in order to embed up to log2 n bpp. This paper proposes a multibit extension of the difference expansion reversible watermarking and compares the results with the multilevel embedding case. The multibit reversible watermarking outperforms the multilevel one both in capacity and quality. Experimental results on standard test images are provided.
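For reference, a sketch of the classical one-bit difference expansion on a pixel pair, which the multibit scheme generalizes by expanding the difference n times (h' = n*h + d) so that one difference carries a base-n digit, i.e. up to log2 n bits. Overflow/underflow handling and location information are omitted, and the function names are placeholders.

def de_embed_pair(x, y, bit):
    # Keep the pair average l, expand the difference h = x - y to 2*h + bit.
    l, h = (x + y) // 2, x - y
    h2 = 2 * h + bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract_pair(xw, yw):
    # Recover the bit from the parity of the expanded difference and
    # restore the original pixel pair.
    l, h2 = (xw + yw) // 2, xw - yw
    bit, h = h2 % 2, h2 // 2
    return l + (h + 1) // 2, l - h // 2, bit

# de_extract_pair(*de_embed_pair(100, 97, 1)) == (100, 97, 1)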
A computational approach to George Boole's discovery of mathematical logic This paper reports a computational model of Boole's discovery of Logic as a part of Mathematics. George Boole (1815-1864) found that the symbols of Logic behaved as algebraic symbols, and he then rebuilt the whole contemporary theory of Logic by the use of methods such as the solution of algebraic equations. Study of the different historical factors that influenced this achievement has served as background for our two main contributions: a computational representation of Boole's Logic before...
Fiat: Deductive Synthesis of Abstract Data Types in a Proof Assistant We present Fiat, a library for the Coq proof assistant supporting refinement of declarative specifications into efficient functional programs with a high degree of automation. Each refinement process leaves a proof trail, checkable by the normal Coq kernel, justifying its soundness. We focus on the synthesis of abstract data types that package methods with private data. We demonstrate the utility of our framework by applying it to the synthesis of query structures -- abstract data types with SQL-like query and insert operations. Fiat includes a library for writing specifications of query structures in SQL-inspired notation, expressing operations over relations (tables) in terms of mathematical sets. This library includes a suite of tactics for automating the refinement of specifications into efficient, correct-by-construction OCaml code. Using these tactics, a programmer can generate such an implementation completely automatically by only specifying the equivalent of SQL indexes, data structures capturing useful views of the abstract data. Throughout we speculate on the new programming modularity possibilities enabled by an automated refinement system with proved-correct rules.
1.023907
0.025553
0.022513
0.022513
0.012053
0.007504
0.004506
0.001987
0.000822
0.00013
0.000017
0
0
0
LOCO-I: a low complexity, context-based, lossless image compression algorithm LOCO-I (low complexity lossless compression for images) is a novel lossless compression algorithm for continuous-tone images which combines the simplicity of Huffman coding with the compression potential of context models, thus “enjoying the best of both worlds.” The algorithm is based on a simple fixed context model, which approaches the capability of the more complex universal context modeling techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with a collection of (context-conditioned) Huffman codes, which is realized with an adaptive, symbol-wise, Golomb-Rice code. LOCO-I attains, in one pass, and without recourse to the higher complexity arithmetic coders, compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding. In fact, LOCO-I is being considered by the ISO committee as a replacement for the current lossless standard in low-complexity applications
Fuzzy logic-based matching pursuits for lossless predictive coding of still images This paper presents an application of fuzzy-logic techniques to the reversible compression of grayscale images. With reference to a spatial differential pulse code modulation (DPCM) scheme, prediction may be accomplished in a space-varying fashion either as adaptive, i.e., with predictors recalculated at each pixel, or as classified, in which image blocks or pixels are labeled in a number of classes, for which fitting predictors are calculated. Here, an original tradeoff is proposed; a space-varying linear-regression prediction is obtained through fuzzy-logic techniques as a problem of matching pursuit, in which a predictor different for every pixel is obtained as an expansion in series of a finite number of prototype nonorthogonal predictors, that are calculated in a fuzzy fashion as well. To enhance entropy coding, the spatial prediction is followed by context-based statistical modeling of prediction errors. A thorough comparison with the most advanced methods in the literature, as well as an investigation of performance trends and computing times to work parameters, highlight the advantages of the proposed fuzzy approach to data compression.
Speeding Up Lossless Image Compression: Experimental Results On A Parallel Machine Arithmetic encoders enable the best compressors both for bi-level images (JBIG) and for grey scale and color images (CALIC), but they are often ruled out because too complex. The compression gap between simpler techniques and state of the art compressors can be significant. Storer extended dictionary text compression to bi-level images to avoid arithmetic encoders (BLOCK MATCHING), achieving 70 percent of the compression of JBIG1 on the CCITT bi-level image test set. We were able to partition an image into up to a hundred areas and to apply the BLOCK MATCHING heuristic independently to each area with no loss of compression effectiveness. On the other hand, we presented in [5] a simple lossless compression heuristic for gray scale and color images (PALIC), which provides a highly parallelizable compressor and decompressor. In fact, it can be applied independently to each block of 8x8 pixels, achieving 80 percent of the compression obtained with LOCO-I (JPEG-LS), the current lossless standard in low-complexity applications. We experimented the BLOCK MATCHING and PALIC heuristics with up to 32 processors of a 256 Intel Xeon 3.06GHz processors machine in Italy (avogadro.cilea.it) on a test set of large topographic bi-level images and color images in RGB format. We obtained the expected speed-up of the compression and decompression times, achieving parallel running times about twenty-five times faster than the sequential ones.
Fast Low Bit-Rate 3D Searchless Fractal Video Encoding Video encoding techniques play an important role in data reduction. Fractal compression has received considerable attention in the past decades. While early methods presented prohibitively large encoding times, recent searchless fractal encoders reduced this problem. A fast 3D purely fractal video encoder based on a flexible adaptive spatial subdivision data structure is proposed in this work. The method completely avoids any kind of search for a matching domain block and is capable of performing fast compression and decompression with high visual fidelity. Experimental results show that the developed approach outperforms the state-of-the-art x264 video encoder at very low bit rates in high motion video sequences in both structural dissimilarity measure and encoding time.
Image splicing detection using 2-D phase congruency and statistical moments of characteristic function A new approach to efficient blind image splicing detection is proposed in this paper. Image splicing is the process of making a composite picture by cutting and joining two or more photographs. The spliced image may introduce a number of sharp transitions such as lines, edges and corners. Phase congruency has been known as a sensitive measure of these sharp transitions and hence has been proposed as features for splicing detection. In addition to the phase information, the magnitude information is also used for splicing detection. Specifically, statistical moments of characteristic functions of wavelet subbands have been examined to catch the difference between the authentic images and spliced images. Consequently, the proposed scheme extracts image features from moments of wavelet characteristic functions and 2-D phase congruency for image splicing detection. The experiments have demonstrated that the proposed approach can achieve a higher detection rate as compared with the state-of-the-art.
Lossless Compression for Space Imagery in a Dynamically Reconfigurable Architecture This paper presents a novel dynamically reconfigurable hardware architecture for lossless compression and its optimization for space imagery. The proposed system makes use of reconfiguration to support optimal modeling strategies adaptively for data with different dimensions. The advantage of the proposed system is the efficient combination of different compression functions. For image data, we propose a new multi-mode image model which can detect the local features of the image and use different modes to encode regions with different features. Experimental results show that our system improves compression ratios of space image while maintaining low complexity and high throughput.
Entropy based merging of context models for efficient arithmetic coding The contextual coding of data requires in general a step which reduces the vast variety of possible contexts down to a feasible number. This paper presents a new method for non-uniform quantisation of contexts, which adaptively merges adjacent intervals as long as the increase of the contextual entropy is negligible. This method is incorporated in a framework for lossless image compression. In combination with an automatic determination of model sizes for histogram-tail truncation, the proposed approach leads to a significant gain in compression performance for a wide range of different natural images.
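As a hedged illustration of the merging criterion described above (not the paper's exact quantisation or threshold), the Python sketch below measures the extra code length, in bits, incurred when two adjacent context histograms are coded with a single merged model; merging would be accepted only while this increase stays below an assumed threshold. The context counts and the threshold are illustrative assumptions.

```python
import math

def entropy_bits(counts):
    """Empirical entropy (bits/symbol) of a histogram of symbol counts."""
    n = sum(counts)
    if n == 0:
        return 0.0
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

def merge_cost_bits(h1, h2):
    """Extra code length (bits) from coding both contexts with one merged model."""
    merged = [a + b for a, b in zip(h1, h2)]
    n1, n2 = sum(h1), sum(h2)
    return (n1 + n2) * entropy_bits(merged) - n1 * entropy_bits(h1) - n2 * entropy_bits(h2)

# Illustrative context histograms over the same symbol alphabet.
ctx_a = [40, 30, 20, 10]
ctx_b = [38, 32, 19, 11]     # similar statistics: cheap to merge
ctx_c = [5, 10, 30, 55]      # different statistics: expensive to merge
THRESHOLD = 8.0              # assumed "negligible increase" threshold in bits

print(merge_cost_bits(ctx_a, ctx_b), merge_cost_bits(ctx_a, ctx_c))
print("merge a and b?", merge_cost_bits(ctx_a, ctx_b) < THRESHOLD)
```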
Two low cost algorithms for improved diagonal edge detection in JPEG-LS JPEG-LS is the latest lossless and near-lossless image compression standard introduced by the Joint Photographic Experts Group (JPEG) in 1999. In this standard, simple localized edge detection techniques are used to determine the predictive value of each pixel. These edge detection techniques only detect horizontal and vertical edges, and the corresponding predictors have only been optimized for the accurate prediction of pixels in the locality of horizontal and/or vertical edges. As a result, JPEG-LS produces large prediction errors in the locality of diagonal edges. We present two low cost algorithms for the detection and prediction of diagonal edge pixels in JPEG-LS. Experimental results show that the proposed schemes reduce the predictive mean squared error by up to 2-3 percent compared to the standard.
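For reference, the horizontal/vertical edge detection the abstract refers to is the median edge detector (MED) used by JPEG-LS/LOCO-I; a minimal sketch is given below (the proposed diagonal-edge extensions are not reproduced here).

```python
def med_predict(a, b, c):
    """JPEG-LS / LOCO-I median edge detector.
    a = left neighbor, b = above neighbor, c = upper-left neighbor."""
    if c >= max(a, b):
        return min(a, b)      # edge detected: predict along it
    if c <= min(a, b):
        return max(a, b)      # edge detected: predict along it
    return a + b - c          # smooth region: planar prediction

# Vertical edge to the left: pixel above is bright, left/upper-left are dark.
print(med_predict(50, 200, 48))    # -> 200, follows the vertical edge
# Smooth gradient: falls back to the planar predictor a + b - c.
print(med_predict(100, 104, 102))  # -> 102
```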
Quality Degradative Reversible Data Embedding Using Pixel Replacement Conventionally, reversible data embedding methods aim at maintaining high output image quality while sacrificing carrier capacity. Recently, as a new trend, some researchers exploited reversible data embedding techniques to severely degrade image quality. In this paper, a novel high carrier capacity data embedding technique is proposed to achieve quality degradation. An efficient pixel value estimation method called checkerboard based prediction is proposed and exploited to realize data embedding while achieving scrambling effect. Here, locations of the predicted pixels are vacated to embed information while degrading the image quality. Basic performance of the proposed method is verified through experiments using various standard test images. In the best case scenario, carrier capacity of 7.31 bpp is achieved while the image is severely degraded.
Reversible watermarking based on invariability and adjustment on pixel pairs A novel reversible data hiding scheme based on invariability of the sum of pixel pairs and pairwise difference adjustment (PDA) is presented in this letter. For each pixel pair, if a certain value is added to one pixel while the same value is subtracted from the other, then the sum of these two pixels will remain unchanged. How to properly select this value is the key issue for the balance between reversibility and distortion. In this letter, half the difference of a pixel pair plus 1-bit watermark has been elaborately selected to satisfy this purpose. In addition, PDA is proposed to significantly reduce the capacity consumed by overhead information. A series of experiments is conducted to verify the effectiveness and advantages of the proposed approach. © 2008 IEEE.
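The following toy sketch illustrates the sum-invariance idea stated above: the same value is added to one pixel and subtracted from the other, and that value is literally "half the difference plus the watermark bit". It is only a simplified reading of the abstract, not the authors' complete scheme; the pairwise difference adjustment (PDA) and overflow handling are omitted.

```python
def embed_pair(x, y, bit):
    """Add v to x and subtract v from y, so x + y is unchanged."""
    h = x - y
    v = h // 2 + bit            # half the difference plus the watermark bit
    return x + v, y - v

def extract_pair(x2, y2):
    """Recover the bit and the original pair from the marked pair."""
    h2 = x2 - y2
    if h2 % 2 == 0:             # original difference h was even
        t = h2 // 2             # t = h + bit, with h even
        bit = t % 2
    else:                       # original difference h was odd
        t = (h2 + 1) // 2       # t = h + bit, with h odd
        bit = 1 - (t % 2)
    h = t - bit
    s = x2 + y2                 # the invariant sum
    return (s + h) // 2, (s - h) // 2, bit

for (x, y) in [(10, 7), (10, 6), (6, 10), (128, 128)]:
    for b in (0, 1):
        x2, y2 = embed_pair(x, y, b)
        assert x2 + y2 == x + y                  # the sum is invariant
        assert extract_pair(x2, y2) == (x, y, b) # fully reversible
print("ok")
```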
Depth no-synthesis-error model for view synthesis in 3-D video. Currently, 3-D video targets the application of disparity-adjustable stereoscopic video, where view synthesis based on depth-image-based rendering (DIBR) is employed to generate virtual views. Distortions in depth information may introduce geometry changes or occlusion variations in the synthesized views. In practice, depth information is stored in 8-bit grayscale format, whereas the disparity range for a visually comfortable stereo pair is usually much less than 256 levels. Thus, several depth levels may correspond to the same integer (or sub-pixel) disparity value in the DIBR-based view synthesis, such that some depth distortions may not result in geometry changes in the synthesized view. From this observation, we develop a depth no-synthesis-error (D-NOSE) model to examine the allowable depth distortions in rendering a virtual view without introducing any geometry changes. We further show that the depth distortions prescribed by the proposed D-NOSE profile also do not compromise the occlusion order in view synthesis. Therefore, a virtual view can be synthesized losslessly if depth distortions follow the D-NOSE specified thresholds. Our simulations validate the proposed D-NOSE model in lossless view synthesis and demonstrate the gain with the model in depth coding.
Improvement of learning algorithms for RBF neural networks in a helicopter sound identification system This paper presents a set of optimizations in learning algorithms commonly used for training radial basis function (RBF) neural networks. These optimizations are applied to an RBF neural network used in identifying helicopter types by processing their rotor sounds. The first method uses an optimum learning rate in each iteration of the training process. This method increases the speed of the learning process and also achieves absolute stability in the network response. Another modification is applied to the quick propagation (QP) method as a generalization that attains more learning speed. Finally, we introduce the general optimum steepest descent (GOSD) method, which contains both improvements in learning RBF networks. All modified methods are employed in training a system that recognizes helicopters' rotor sounds using an RBF neural network. Comparing the results of these learning methods with the previous ones yields interesting outcomes.
Classifier design with incomplete knowledge Pattern classification is a well studied problem in which the identity of an unknown pattern is determined to be one of M classes spanning the pattern space. A new criterion function, the Inck (Incomplete knowledge) criterion function, is proposed which approximates the error probability when the M classes do not span the entire pattern space. This criterion is based on the probabilistic measures obtained from a modified version of Dubuisson and Masson's statistical decision rule with reject. The error probability, or conversely the probability of correct classification, can be determined without having complete knowledge about the class distributions. The attractiveness of this criterion is that it is highly correlated with the statistical decision rule (with reject) which can be used by the classifier. Thus, the criterion provides a good indication of the classifier performance that can be expected with the reduced feature space.
Golomb-rice coding optimized via LPC for frequency domain audio coder We have devised a method to optimize Golomb-Rice coding of frequency spectra, aiming at its use in a frequency-domain audio coder, using spectral envelopes extracted by linear predictive coding (LPC) from amplitude spectra instead of conventional power spectra, according to theoretical investigations. This optimization improves the efficiency of Golomb-Rice coding by allocating a Rice parameter to each frequency bin based on the value of the envelope, resulting in enhanced objective and subjective quality of the state-of-the-art wideband coder at 16 kbit/s. Therefore, the method introduced here is expected to be useful for coding audio signals under the low-bit-rate and low-delay conditions required in mobile communications.
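For background, a minimal Rice coder (Golomb coding with a power-of-two parameter) is sketched below. The paper's actual contribution, deriving the per-bin Rice parameter k from an LPC spectral envelope, is not reproduced; the per-bin parameters here are arbitrary placeholders.

```python
def rice_encode(n, k):
    """Encode a non-negative integer n with Rice parameter k:
    unary quotient (n >> k), a terminating 0, then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    rem = format(r, f"0{k}b") if k else ""
    return "1" * q + "0" + rem

def rice_decode(bits, k):
    q = bits.index("0")                           # length of the unary part
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

# Per-bin Rice parameters (in the coder these would follow the spectral envelope).
coeffs = [3, 12, 0, 27]
ks     = [1, 3, 0, 4]
codes  = [rice_encode(c, k) for c, k in zip(coeffs, ks)]
assert [rice_decode(s, k) for s, k in zip(codes, ks)] == coeffs
print(codes)                                       # ['101', '10100', '0', '101011']
```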
1.001163
0.002438
0.002058
0.001852
0.001543
0.001389
0.001088
0.000926
0.000096
0.000014
0.000003
0.000001
0
0
Data embedding in digital images using critical functions. In this paper, “uniform embedding” (independent of image contents and pixel correlations while embedding) and “adaptive embedding” (dependent on image contents and pixel correlations while embedding) in image steganography are investigated. A compact steganographic embedding function is proposed to ensure correctness and efficiency, and a pixel correlation function is utilized to discriminate the image smoothness. Two feasible image steganographic frameworks using these critical functions are presented, and some well-known image steganographic methods can be derived from the proposed frameworks. The effectiveness of the proposed frameworks is experimentally validated by constructing and testing some special data hiding methods in the case of four neighboring pixels as a processing unit. Experimental results show that the proposed methods can achieve better visual performance and statistical undetectability compared with the prior works. Another promising merit of our work is the potential to provide steganographers general-purpose strategies to acquire new image steganographic methods.
A sufficient condition based on the Cauchy-Schwarz inequality for efficient template matching The paper proposes a technique aimed at reducing the number of calculations required to carry out an exhaustive template matching process based on the Normalized Cross Correlation (NCC). The technique deploys an effective sufficient condition, relying on the recently introduced concept of bounded partial correlation, that allows rapid elimination of the points that cannot provide a better cross-correlation score with respect to the current best candidate. In this paper we devise a novel sufficient condition based on the Cauchy-Schwarz inequality and compare the experimental results with those attained using the standard NCC-based template matching algorithm and the already known sufficient condition based on the Jensen inequality.
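To illustrate the kind of pruning described above, here is a hedged sketch of exhaustive NCC template matching in which only the first rows of each candidate window are correlated exactly and the remaining contribution is upper-bounded with the Cauchy-Schwarz inequality; a candidate is skipped when even this bound cannot beat the current best score. This is a generic illustration of the idea, not the authors' exact bounded-partial-correlation test.

```python
import numpy as np

def ncc_with_cs_pruning(image, tmpl, split=None):
    th, tw = tmpl.shape
    split = split or th // 2                    # rows correlated exactly
    t_norm = np.linalg.norm(tmpl)
    t_tail = np.linalg.norm(tmpl[split:])       # norm of the template "tail"
    best_score, best_pos = -1.0, None
    H, W = image.shape
    for i in range(H - th + 1):
        for j in range(W - tw + 1):
            win = image[i:i + th, j:j + tw]
            w_norm = np.linalg.norm(win)
            if w_norm == 0 or t_norm == 0:
                continue
            partial = float(np.sum(win[:split] * tmpl[:split]))
            # Cauchy-Schwarz: the tail cross term is at most ||win tail|| * ||tmpl tail||
            bound = (partial + np.linalg.norm(win[split:]) * t_tail) / (w_norm * t_norm)
            if bound <= best_score:
                continue                        # cannot beat the current best: skip
            score = float(np.sum(win * tmpl)) / (w_norm * t_norm)
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos, best_score

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)
tmpl = img[20:36, 30:46].copy()
print(ncc_with_cs_pruning(img, tmpl))           # expect ((20, 30), ~1.0)
```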
Image Encryption by Novel Cryptosystem Using Matrix Transformation. The Hill cipher is a famous symmetric cryptosystem that has several advantages in data encryption. However, the Hill cipher algorithm cannot encrypt images that contain large areas of a single color. Thus, it does not hide all features of the image, which reveals patterns in the plaintext. Moreover, it can be easily broken with a known plaintext attack, revealing weak security. In this paper, a novel cryptosystem is used to encrypt images that overcomes these disadvantages. The novel cryptosystem uses a randomly generated self-invertible matrix as the encryption key for each block, and this method also eliminates the computational complexity involved in finding the inverse of the matrix during decryption. The proposed variant yields higher security and significantly superior encryption quality compared to the original one.
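As a concrete illustration of the self-invertible idea (a key matrix A with A·A ≡ I mod 256, so the same matrix both encrypts and decrypts), the sketch below uses one fixed 2×2 self-invertible matrix; the paper instead draws a fresh random self-invertible matrix for each block, which is not reproduced here.

```python
import numpy as np

# A is self-invertible mod 256: A @ A = I (mod 256), so decryption reuses A.
A = np.array([[1, 2],
              [128, 255]], dtype=np.int64)
assert np.array_equal((A @ A) % 256, np.eye(2, dtype=np.int64))

def hill_blocks(pixels, key):
    """Apply the Hill transform y = K x (mod 256) to consecutive 2-pixel blocks."""
    data = np.asarray(pixels, dtype=np.int64).reshape(-1, 2).T
    return ((key @ data) % 256).T.reshape(-1)

plain = [10, 200, 33, 7, 128, 64]
cipher = hill_blocks(plain, A)
recovered = hill_blocks(cipher, A)     # same key, because A is its own inverse
assert list(recovered) == plain
print(list(cipher))
```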
A Modification of the TPVD Algorithm for Data Embedding Pixel-Value-Differencing (PVD) methods for data hiding have the advantage of a high payload. However, these algorithms have the problem of overflow/underflow pixels, so a location map for those pixels, which are usually ignored when embedding the message, is necessary. In this paper, we modify the Tri-way Pixel-Value Differencing method in a way that removes the need for the location map and fixes this problem. Our proposal replaces the table of ranges used to estimate the amount of information to be embedded by a function based on the floor and ceiling functions. As for the problem of overflow/underflow pixels, we tackle it by means of a linear transformation. The linear transformation is based on the floor function, so information is lost; therefore, a location map to compensate for this data loss is necessary to recover the embedded message. The inclusion of the map in the algorithm is also discussed. The technique uses two steganographic methods, namely, the tri-way method to store the message and a reversible steganographic method to store the map needed to invert the linear function in order to recover the encoded message.
Robust covert communication using high capacity watermarking. Generally, in watermarking techniques the size of the watermark is very small when compared to the host image. In other words, a little amount of watermark is embedded in a huge quantity of image pixels as the notice of legitimate ownership. Contrary to that idea, this work attempts to improve watermarking capacity by efficiently embedding a huge amount of watermark data in a small quantity of image pixels. The core idea behind the proposed approach is to select watermarkable pixels from the host image based on the census transform and Hamming distance, followed by embedding via the proposed spectral decompositions, i.e., Hankel, Circulant and Toeplitz spectral decomposition. Finally, a reliable watermark extraction scheme is developed which is free from the false-positive detection problem of singular values. The experimental evaluation demonstrates that the proposed scheme can withstand a variety of extreme attacks and is highly suitable for covert communications.
Exploiting self-adaptive permutation-diffusion and DNA random encoding for secure and efficient image encryption. •Self-adaptive permutation–diffusion and DNA random encoding are developed.•DNA random encoding firstly disarranges the bit distribution of the plaintext.•The quantization processes are disturbed by the intrinsic features of the plaintext.•The introduced disturbances can be automatically retrieved in the decryption end.•The reusability of the random variables dramatically promote the efficiency.
Reversible data hiding in encrypted images using pseudorandom sequence modulation This work proposes a novel reversible data hiding scheme for encrypted images based on a pseudorandom sequence modulation mechanism. In the first phase, a content owner encrypts the original image for content protection. Then, a data-hider replaces a small proportion of the data in the LSB planes of the encrypted image with the additional data and modifies the rest of the data in the LSB planes according to the pseudorandom sequences modulated by the replaced and embedded data. With the encrypted image containing additional data, an additional-data user knowing the data-hiding key can extract the embedded additional data. And a content user with the encryption key may decrypt the encrypted image containing additional data to obtain the principal original content. If someone receives the decrypted image and has the data-hiding key, he can also successfully extract the additional data and perfectly recover the original image by exploiting the spatial correlation in natural images.
Reversible Data Hiding Using Controlled Contrast Enhancement and Integer Wavelet Transform The conventional reversible data hiding (RDH) algorithms pursue high Peak-Signal-to-Noise-Ratio (PSNR) at a certain amount of embedded bits. Recently, Wu et al. deemed that the improvement of image visual quality is more important than keeping high PSNR. Based on this viewpoint, they presented a novel RDH scheme utilizing contrast enhancement in place of the PSNR. However, when a large number of bits are embedded, the image contrast is over-enhanced, which introduces obvious distortion for human visual perception. Motivated by this issue, a new RDH scheme is proposed using controlled contrast enhancement (CCE) and the Haar integer wavelet transform (IWT). The proposed scheme has large embedding capacity while maintaining satisfactory visual perception. Experimental results have demonstrated the effectiveness of the proposed scheme.
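For background, the Haar integer wavelet transform (the reversible S-transform lifting step) that serves as the embedding domain is sketched below in one dimension; the controlled contrast enhancement itself is not reproduced.

```python
def haar_iwt_forward(x):
    """One-level integer Haar (S-transform): lowpass and detail, both integer-valued."""
    low  = [(x[2*i] + x[2*i+1]) // 2 for i in range(len(x) // 2)]
    high = [x[2*i] - x[2*i+1]        for i in range(len(x) // 2)]
    return low, high

def haar_iwt_inverse(low, high):
    x = []
    for l, h in zip(low, high):
        a = l + (h + 1) // 2       # even-indexed sample
        x += [a, a - h]            # odd-indexed sample
    return x

samples = [12, 15, 200, 198, 0, 255, 7, 7]
low, high = haar_iwt_forward(samples)
assert haar_iwt_inverse(low, high) == samples   # perfectly reversible in integers
print(low, high)
```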
Reversible data hiding using multi-pass pixel-value-ordering and pairwise prediction-error expansion. Pixel value ordering (PVO) prediction can achieve remarkable accuracy and thus provide a rather sharp histogram. In addition, the efficiency of histogram manipulation has also attracted much attention in recent works. In this paper, a new reversible data hiding scheme based on multi-pass PVO and pairwise PEE is proposed. After dividing the host image into non-overlapped blocks, the largest/smallest two pixels within each block are predicted to form a prediction-error pair and finally a 2D prediction-error histogram. Here, the third largest/smallest pixel no longer always serves as the predicted value. Once any one error in a pair is shifted, we propose to adaptively re-calculate the other one. For a smooth block, location information is considered and then more expandable errors are obtained. For a normal block, the shifted pixel is involved in prediction and shifting two errors in a pair without carrying any bit can be avoided. Such multi-pass prediction leads to the so-called high and low power versions of PVO-based pairwise PEE. Experimental results verify that their combination can achieve a very efficient capacity-distortion trade-off and thus outperform previous PVO-based schemes.
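To make the PVO building block concrete, below is a hedged sketch of the classic single-pass, maximum-side PVO step (in the spirit of the original PVO scheme this work refines): a prediction error of 1 carries one bit, larger errors are shifted by one. The minimum side, the pairwise 2D histogram and overflow handling are omitted.

```python
def pvo_embed_max(block, bit):
    """Basic PVO on the block maximum: error 1 carries a bit, errors >= 2 are shifted."""
    order = sorted(range(len(block)), key=lambda i: block[i])
    i_max, i_2nd = order[-1], order[-2]
    e = block[i_max] - block[i_2nd]
    out, used = list(block), False
    if e == 1:
        out[i_max] += bit
        used = True
    elif e >= 2:
        out[i_max] += 1                      # shifted, carries no data
    return out, used

def pvo_extract_max(block):
    order = sorted(range(len(block)), key=lambda i: block[i])
    i_max, i_2nd = order[-1], order[-2]
    e = block[i_max] - block[i_2nd]
    out, bit = list(block), None
    if e == 1:
        bit = 0
    elif e == 2:
        bit, out[i_max] = 1, out[i_max] - 1
    elif e >= 3:
        out[i_max] -= 1                      # undo the shift
    return out, bit

block = [57, 58, 60, 59]                      # smooth block, prediction error e = 1
marked, used = pvo_embed_max(block, 1)
restored, bit = pvo_extract_max(marked)
assert used and bit == 1 and restored == block
```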
Reversible data hiding based on reducing invalid shifting of pixels in histogram shifting. •A reversible data hiding scheme for effectively reducing distortion is proposed.•Reduce the number of invalid shifting pixels in histogram shifting.•The proposed method has a higher embedding capacity.
An improved lossless data hiding scheme based on image VQ-index residual value coding Copyright protection and information security have become serious problems due to the ever growing amount of digital data over the Internet. Reversible data hiding is a special type of data hiding technique that guarantees not only the secret data but also the cover media can be reconstructed without any distortion. Traditional schemes are based on spatial, discrete cosine transformation (DCT) and discrete wavelet transformation (DWT) domains. Recently, some vector quantization (VQ) based reversible data hiding schemes have been proposed. This paper proposes an improved reversible data hiding scheme based on VQ-index residual value coding. Experimental results show that our scheme outperforms two recently proposed schemes, namely side-match vector quantization (SMVQ)-based data hiding and modified fast correlation vector quantization (MFCVQ)-based data hiding.
Closest-point problems A number of seemingly unrelated problems involving the proximity of N points in the plane are studied, such as finding a Euclidean minimum spanning tree, the smallest circle enclosing the set, k nearest and farthest neighbors, the two closest points, and a proper straight-line triangulation. For most of the problems considered a lower bound of O(N log N) is shown. For all of them the best currently-known upper bound is O(N²) or worse. The purpose of this paper is to introduce a single geometric structure, called the Voronoi diagram, which can be constructed rapidly and contains all of the relevant proximity information in only linear space. The Voronoi diagram is used to obtain O(N log N) algorithms for all of the problems.
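As an executable illustration of the point above, namely that one proximity structure answers several queries, the sketch below builds the Delaunay triangulation (the dual of the Voronoi diagram) with SciPy and extracts the Euclidean minimum spanning tree from its edge set, using the classical fact that the EMST is always a subgraph of the Delaunay triangulation.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(1)
pts = rng.random((200, 2))

tri = Delaunay(pts)
# Collect the Delaunay edges (each triangle contributes three).
edges = set()
for a, b, c in tri.simplices:
    edges |= {tuple(sorted((a, b))), tuple(sorted((b, c))), tuple(sorted((a, c)))}

rows, cols, weights = [], [], []
for i, j in edges:
    rows.append(i)
    cols.append(j)
    weights.append(np.linalg.norm(pts[i] - pts[j]))

graph = csr_matrix((weights, (rows, cols)), shape=(len(pts), len(pts)))
emst = minimum_spanning_tree(graph)            # EMST lives inside the Delaunay edges
print(emst.nnz, "edges, total length", emst.sum())
```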
NTUplace3: An Analytical Placer for Large-Scale Mixed-Size Designs With Preplaced Blocks and Density Constraints In addition to wirelength, modern placers need to consider various constraints such as preplaced blocks and density. We propose a high-quality analytical placement algorithm considering wirelength, preplaced blocks, and density based on the log-sum-exp wirelength model proposed by Naylor and the multilevel framework. To handle preplaced blocks, we use a two-stage smoothing technique, i.e., Gaussian smoothing followed by level smoothing, to facilitate block spreading during global placement (GP). The density is controlled by white-space reallocation using partitioning and cut-line shifting during GP and cell sliding during detailed placement. We further use the conjugate gradient method with dynamic step-size control to speed up the GP and macro shifting to find better macro positions. Experimental results show that our placer obtains very high-quality results.
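For reference, the log-sum-exp wirelength model mentioned above is the standard smooth surrogate for half-perimeter wirelength; a minimal one-dimensional sketch with an assumed smoothing parameter gamma follows.

```python
import numpy as np

def hpwl_1d(x):
    """Exact (nonsmooth) half-perimeter wirelength of one net in one dimension."""
    return np.max(x) - np.min(x)

def lse_wirelength_1d(x, gamma=1.0):
    """Log-sum-exp smooth approximation: gamma*(log sum e^{x/g} + log sum e^{-x/g}).
    It upper-bounds max(x) - min(x) and approaches it as gamma -> 0."""
    x = np.asarray(x, dtype=float)
    return gamma * (np.log(np.sum(np.exp(x / gamma))) +
                    np.log(np.sum(np.exp(-x / gamma))))

pins = [3.0, 10.0, 4.5, 8.0]
for g in (4.0, 1.0, 0.25):
    print(g, lse_wirelength_1d(pins, g), "exact:", hpwl_1d(pins))
```

In a placer the same expression is summed over all nets, in both the x and y directions, and the differentiability is what allows gradient-based global placement.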
Robust reversible data hiding in encrypted image: Using fractal coding approach This paper presents a robust version of the reversible data hiding method for encrypted images previously proposed by Zhang [8] and Wien [9]. In the original method, the image is partitioned into non-overlapping blocks, each divided into two neighbor sets. In these sets, one bit is embedded by flipping three LSBs of predefined pixels. The data extraction can be done by examining the block smoothness of neighboring blocks. However, this method is very sensitive to noise and attacks, because noise and malicious activity usually alter adjacent blocks in the same manner, so neighboring blocks no longer have the same smoothness. In this paper we propose a new method to select non-neighbor sets to decrease the effects of noise and therefore reduce the bit error rate in the encrypted image. These non-neighbor sets are selected by finding the most similar pairs, as performed in the encoding phase of fractal image compression. Experimental results show that the proposed method outperforms previous approaches in the case of Gaussian noise contamination and results in a lower bit error rate.
1.031381
0.034286
0.034286
0.034286
0.028571
0.017143
0.00611
0.002796
0.00072
0.000119
0.000009
0
0
0
Quantization index modulation: a class of provably good methods for digital watermarking and information embedding We consider the problem of embedding one signal (e.g., a digital watermark), within another “host” signal to form a third, “composite” signal. The embedding is designed to achieve efficient tradeoffs among the three conflicting goals of maximizing the information-embedding rate, minimizing the distortion between the host signal and composite signal, and maximizing the robustness of the embedding. We introduce new classes of embedding methods, termed quantization index modulation (QIM) and distortion-compensated QIM (DC-QIM), and develop convenient realizations in the form of what we refer to as dither modulation. Using deterministic models to evaluate digital watermarking methods, we show that QIM is “provably good” against arbitrary bounded and fully informed attacks, which arise in several copyright applications, and in particular it achieves provably better rate distortion-robustness tradeoffs than currently popular spread-spectrum and low-bit(s) modulation methods. Furthermore, we show that for some important classes of probabilistic models, DC-QIM is optimal (capacity-achieving) and regular QIM is near-optimal. These include both additive white Gaussian noise (AWGN) channels, which may be good models for hybrid transmission applications such as digital audio broadcasting, and mean-square-error-constrained attack channels that model private-key watermarking applications
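A minimal sketch of binary dither modulation, the QIM realization referred to above, is given below; the quantization step and dither values are illustrative choices, not parameters from the paper.

```python
import numpy as np

DELTA = 8.0
DITHER = {0: -DELTA / 4, 1: DELTA / 4}          # one shifted quantizer per message bit

def qim_embed(x, bit):
    """Quantize the host sample with the quantizer associated with the bit."""
    d = DITHER[bit]
    return DELTA * np.round((x - d) / DELTA) + d

def qim_detect(y):
    """Pick the bit whose quantizer has a reconstruction point closest to y."""
    return min((0, 1), key=lambda b: abs(y - qim_embed(y, b)))

host = np.array([13.2, -4.7, 100.1, 57.3])
bits = [1, 0, 0, 1]
marked = np.array([qim_embed(x, b) for x, b in zip(host, bits)])
# With this spacing, any noise of magnitude below DELTA/4 cannot flip a bit.
noisy = marked + np.random.default_rng(0).uniform(-1.9, 1.9, size=len(bits))
assert [qim_detect(y) for y in noisy] == bits
print("max embedding distortion:", np.max(np.abs(marked - host)))   # <= DELTA/2
```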
The Gaussian watermarking game Watermarking models a copyright protection mechanism where an original source sequence or "covertext" is modified before distribution to the public in order to embed some extra information. The embedding should be transparent (i.e., the modified data sequence or "stegotext" should be similar to the covertext) and robust (i.e., the extra information should be recoverable even if the stegotext is modified further, possibly by a malicious "attacker"). We compute the coding capacity of the watermarking game for a Gaussian covertext and squared-error distortions. Both the public version of the game (covertext known to neither attacker nor decoder) and the private version of the game (covertext unknown to attacker but known to decoder) are treated. While the capacity of the former cannot, of course, exceed the capacity of the latter, we show that the two are, in fact, identical. These capacities depend critically on whether the distortion constraints are required to be met in expectation or with probability one. In the former case, the coding capacity is zero, whereas in the latter it coincides with the value of related zero-sum dynamic mutual information games of complete and perfect information. We also compute the capacity when the attacker is restricted to additive attacks. This capacity turns out to be strictly larger than the watermarking capacity, thus demonstrating that additive attacks are suboptimal. In fact, under the additive attack restriction, capacity turns out to coincide with the capacity of Costa's (1983) model for "writing on dirty paper," thus demonstrating that in Costa's model, the independent and identically distributed (i.i.d.) Gaussian "noise" is the most malevolent power-limited "noise". Additionally, Costa's observation that in the presence of i.i.d. Gaussian "noise," an i.i.d. Gaussian "dirt" process that is noncausally known to the transmitter (but not receiver) does not reduce capacity, is extended to general ergodic "dirt" and to stationary (but not necessarily white) Gaussian "noise"
On the embedding limits of the discrete cosine transform This paper investigates the embedding capacity limits of high-capacity data hiding in color images based on a locally Adaptive-Region Discrete Cosine Transform (AR-DCT) frequency domain data hiding scheme, and explores the relationship between hiding capacity and image quality. It also compares the embedding capacities of various steganography schemes which have been recently published in the literature. Experimental results confirm that our proposed scheme successfully enhances hiding capacity while maintaining acceptable image quality and concludes that the capacity for our DCT hiding scheme can achieve extremely high bit rates of 20 bits-per-pixel, which is much higher than other DCT-based approaches, as well as other spatial and frequency domain schemes.
Color-secure digital image compression. The widespread acquisition and use of ultra-high resolution color images obtained from high-resolution imaging sensors introduces open problems of optimal storage and transmission while securing important color information as well as preserving fine details in these high quality images. This paper describes a steganography-based paradigm for high-quality compression of fine-detailed color megapixel images highly applicable to forensic imaging applications. Our scheme combines space-domain and frequency-domain image processing operations: in the space domain, color-brightness separation is exploited, and in the frequency domain, the discrete cosine transform energy compaction properties of the transformed luminance image are exploited. Experimental results as well as empirical observations show that our technique is very competitive with the highest quality JPEG image compression standard in the overall fidelity of the decompressed image while achieving high compression ratios. However, the main purpose of this new compression scheme is not to compete with the JPEG standard in terms of visual quality measures, but to provide a means for securing vital color information in the original image from potential tampering while allowing high compression ratios without loss of important fine details.
Content-based image watermarking via public-key cryptosystems Digital watermarking is a technique to insert an information-carrying digital signature into a digital media so that the signature can be extracted for a variety of purposes including ownership authentication and content verification. We examine the weaknesses against common watermarking attacks of blockwise independent and content-based watermarking algorithms for image integrity verification, and implement a new and more secure invisible fragile public-key watermarking algorithm for color or grayscale images that increases the message digest size from the proposed 64 to 128 bits using the same small-size blocks and maintaining high-quality watermarked images and accurate localization of image changes. Our watermarking technique is capable of detecting any changes made to the image since the time it was stamped; any changes to the pixel values and also to the dimensions of the image will be automatically detected and localized. Our scheme consists of a watermark-insertion process that uses a private key to embed a watermark image into a cover image, and a watermark-extraction process that uses a public key to extract the watermark from the watermarked image. The embedded watermark can only be extracted by someone who has possession of a proper verification key.
An information-theoretic model for steganography An information-theoretic model for steganography with a passive adversary is proposed. The adversary's task of distinguishing between an innocent cover message C and a modified message S containing hidden information is interpreted as a hypothesis testing problem. The security of a steganographic system is quantified in terms of the relative entropy (or discrimination) between the distributions of C and S, which yields bounds on the detection capability of any adversary. It is shown that secure steganographic schemes exist in this model provided the covertext distribution satisfies certain conditions. A universal stegosystem is presented in this model that needs no knowledge of the covertext distribution, except that it is generated from independently repeated experiments.
On Joint Coding for Watermarking and Encryption In continuation of earlier works, where the problem of joint information embedding and lossless compression (of the composite signal) was studied in the absence [9] and in the presence [10] of attacks, here we consider the additional ingredient of protecting the secrecy of the watermark against an unauthorized party, which has no access to a secret key shared by the legitimate parties. In other words, we study the problem of joint coding for three objectives: information embedding, compression, and encryption. Our main result is a coding theorem that provides a single-letter characterization of the best achievable tradeoffs among the following parameters: the distortion between the composite signal and the covertext, the distortion in reconstructing the watermark by the legitimate receiver, the compressibility of the composite signal (with and without the key), and the equivocation of the watermark, as well as its reconstructed version, given the composite signal. In the attack-free case, if the key is independent of the covertext, this coding theorem gives rise to a threefold separation principle that tells that asymptotically, for long block codes, no optimality is lost by first applying a rate-distortion code to the watermark source, then encrypting the compressed codeword, and finally, embedding it into the covertext using the embedding scheme of [9]. In the more general case, however, this separation principle is no longer valid, as the key plays an additional role of side information used by the embedding unit. Index Terms: Information hiding, watermarking, encryption, data compression, separation.
When Seeing Isn't Believing Current multimedia authentication technologies and their applications.
Intelligent reversible watermarking and authentication: Hiding depth map information for 3D cameras In this paper, we propose an application for 3D cameras by reversibly hiding the depth map in the corresponding 2D images. The proposed technique is promising for cameras capable of simultaneously capturing the 2D image and the resultant depth map of an object. 3D cameras equipped with self-embedding capability can serve two additional purposes: protection of the captured image and secure transmission of its depth map. The reversible watermarking, in addition to other features, guarantees the lossless recovery of an original image and separation of its depth map when required. For this purpose, a reversible watermarking scheme, based on a genetic algorithm (GA), has been proposed which computes a suitable threshold for each block of coefficients in the wavelet domain. Thus, a tradeoff is made between watermark imperceptibility and capacity using GA. The threshold map is embedded in addition to the actual payload, and thus the proposed approach does not require histogram pre-processing in order to avoid overflow/underflow. The same threshold map has been used for authentication purposes by correlating it with the low-frequency coefficients of the 2D transformed image. Further, to exploit the inherent redundancy in the depth map, which is the actual payload in this case, lossless compression has been employed before its embedding. Similarly, besides secret key-based permutation, a cryptographic layer is overlaid on the watermarking layer for security purposes. Experiments conducted on images and depth maps, obtained using a 3D camera and an optical microscopic system, validate the proposed concept.
A secure algorithm for biometric-based digital image watermarking in DCT domain This paper addresses the problem of ownership of a digital watermark by inserting a biometric-based watermark in the digital host signal. The biometric-based digital watermark is made secure with an encryption technique based on the Arnold cat map before embedding. A biometric-based watermark is a potential solution to watermark ownership, as it can be physically or logically owned to prove ownership. This joint encryption and watermarking scheme has the potential of addressing the ownership of digital signals and keeping the biometric data secure. The Arnold transformation is used to encrypt the biometric data and the discrete cosine transformation is used for embedding the watermark in the image. The watermarking method is chosen so that the data payload demand is met while keeping the requirements of perceptual transparency and robustness. Experimental results of perceptual transparency indicate that the method maintains a good quality of transparency with an SNR of more than 25 dB for the images. The proposed watermarking method is robust to signal processing attacks, and the watermarks were identified using minutia feature point matching under these attacks. The overall design requirements of transparency, robustness and data payload are achieved optimally in this paper.
New Blind Steganalysis and its Implications The contribution of this paper is two-fold. First, we describe an improved version of a blind steganalysis method previously proposed by Holotyak et al.(1) and compare it to current state-of-the-art blind steganalyzers. The features for the blind classifier are calculated in the wavelet domain as higher-order absolute moments of the noise residual. This method clearly shows the benefit of calculating the features from the noise residual because it increases the features' sensitivity to embedding, which leads to improved detection results. Second, using this detection engine, we attempt to answer some fundamental questions, such as "how much can we improve the reliability of steganalysis given certain a priori side-information about the image source?" Moreover, we experimentally compare the security of three steganographic schemes for images stored in a raster format - (1) pseudo-random ±1 embedding using ternary matrix embedding, (2) spatially adaptive ternary ±1 embedding, and (3) perturbed quantization while converting a 16-bit per channel image to an 8-bit gray scale image.
Double expanding robust image watermarking based on Spread Spectrum technique and BCH coding. This paper presents a new algorithm for blind image watermarking which has high robustness against common image processing attacks such as noise addition (Gaussian noise, Salt & Pepper noise, Speckle noise, etc.), JPEG and JPEG2000 compression, histogram equalization, average and Gaussian filters, scaling and cropping. Given that a watermark of about 70 bits is enough for copyright protection, in this paper a small watermark (64 bits) is double expanded into a larger meaningful bit sequence by applying BCH error correction coding and the Spread Spectrum technique, in order to reduce errors in the extraction phase. The approximation subband of a two-level DWT transform is divided into non-overlapping blocks, and the high-frequency coefficients of the DCT transform of each block are used for embedding the watermark. The embedding technique used in this paper is Spread Spectrum. The correlation between some coefficients of each embedded block and two predefined groups of random bits is used for watermark extraction, so this method is blind and does not need the original image or additional information in the extraction phase. Another idea used in this paper is calculating different gain factors for each block of the approximation subband according to the texture of each block. Consequently, this method allocates smaller gain factors to smooth blocks and larger gain factors to textured and rough blocks, so the embedding is more robust and imperceptible.
Use of production systems for modeling asynchronous, concurrent processes Because of the event-driven nature of asynchronous, concurrent processes, production systems (PS's) are an attractive modeling tool. The system of interest can be modeled with a large number of independent states, with independent actions, and the knowledge base can be conveniently encoded declaratively. However, asynchronous, concurrent processes normally have strict requirements for inter-process communication and coordination; this requires a substantial degree of inter-rule communication in the PS. The result of this is that a complex control structure is embedded in the short term memory (STM); this is generally considered unattractive for a number of reasons. This paper proposes a separate, explicit control structure for modeling asynchronous, concurrent processes with PS's. Specifically, the use of a Petri net is addressed. A system of asynchronous, concurrent processes can be modeled using PS's to model the individual processes or events and using a Petri net to model the relationships between the processes. Furthermore, a hierarchy of such networks is proposed; an allowable production rule action is the instantiation of another network. This is supported with a structured, hierarchical STM.
Fast mask assignment using positive semidefinite relaxation in LELECUT triple patterning lithography One of the most promising techniques for the 14 nm logic node and beyond is triple patterning lithography (TPL). Recently, LELECUT-type TPL technology, where the third mask is used to cut the patterns, has been discussed to alleviate the native conflict and overlay problems in LELELE-type TPL. In this paper, we formulate the LELECUT mask assignment problem, which maximizes compliance with the lithography, and apply a positive semidefinite relaxation. In our proposed method, the positive semidefinite relaxation is defined by extracting cut candidates from the layout, and a mask assignment is obtained from an optimum solution of the relaxation by a randomized rounding technique.
1.003682
0.004512
0.003446
0.003119
0.002614
0.002238
0.001384
0.000746
0.000317
0.000084
0.000027
0.000005
0
0
ISPD 2015 Benchmarks with Fence Regions and Routing Blockages for Detailed-Routing-Driven Placement The ISPD 2015 placement-contest benchmarks include all the detailed pin, cell, and wire geometry constraints from the 2014 release, plus (a) added fence regions and placement blockages, (b) altered netlists including fixed macro blocks, (c) reduced standard cell area utilization via larger floorplan outlines, and (d) specified upper limits on local cell-area density. Compared to the 2014 release, these new constraints add realism and increase the difficulty of producing detail-routable wirelength-driven placements.
A roadmap and vision for physical design This invited paper offers a "roadmap and vision" for physical design. The main messages are as follows. (1) The high-level roadmap for physical design is static and well-known. (2) Basic problems remain untouched by fundamental research. (3) Academia should not overemphasize backfilling and formulation over innovation and optimization. (4) The physical design field must become more mature and efficient in how it prioritizes research directions and uses its human resources. (5) The scope of physical design must expand (up to package and system, down to manufacturing interfaces, out to novel implementation technologies, etc.), even as renewed focus is placed on basic optimization technology.
Nonsmooth Optimization Method for VLSI Global Placement The common objective of very large-scale integration (VLSI) placement problem is to minimize the total wirelength, which is calculated by the total half-perimeter wirelength (HPWL). Since the HPWL is not differentiable, various differentiable wirelength approximation functions have been proposed in analytical placement methods. In this paper, we reformulate the HPWL as an -norm model of the wirelength function, which is exact but nonsmooth. Based on the -norm wirelength model and exact calculation of overlapping areas between cells and bins, a nonsmooth optimization model is proposed for the VLSI global placement problem, and a subgradient method is proposed for solving the nonsmooth optimization problem. Moreover, local convergence of the subgradient method is proved under some suitable conditions. In addition, two enhanced techniques, i.e., an adaptive parameter to control the step size and a cautious strategy for increasing the penalty parameter, are also used in the nonsmooth optimization method. In order to make the placement method scalable, a multilevel framework is adopted. In the clustering stage, the best choice clustering algorithm is modified according to the -norm wirelength model to cluster the cells, and the nonsmooth optimization method is recursively used in the declustering stage. Comparisons of experimental results on the International Symposium on Physical Design (ISPD) 2005 and 2006 benchmarks show that the global placement method is promising.
ImageNet: A Large-Scale Hierarchical Image Database The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called "ImageNet", a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.
Tachyon: Reliable, Memory Speed Storage for Cluster Computing Frameworks Tachyon is a distributed file system enabling reliable data sharing at memory speed across cluster computing frameworks. While caching today improves read workloads, writes are either network or disk bound, as replication is used for fault-tolerance. Tachyon eliminates this bottleneck by pushing lineage, a well-known technique, into the storage layer. The key challenge in making a long-running lineage-based storage system is timely data recovery in case of failures. Tachyon addresses this issue by introducing a checkpointing algorithm that guarantees bounded recovery cost and resource allocation strategies for recomputation under commonly used resource schedulers. Our evaluation shows that Tachyon outperforms in-memory HDFS by 110x for writes. It also improves the end-to-end latency of a realistic workflow by 4x. Tachyon is open source and is deployed at multiple companies.
Local search algorithms for timing-driven placement under arbitrary delay models We present local search algorithms for timing-driven placement optimization. They find local slack optima for cells under arbitrary delay models and can be applied late in the design flow. The key ingredients are an implicit path straightening and a clustering of neighboring cells. Cell clusters are moved jointly to speed up the algorithm and escape suboptimal solutions in which single-cell algorithms are trapped, particularly in the presence of layer assignments. Given a cell cluster, we initially perform a line search for maximum slack on the straight line segment connecting the most critical upstream and downstream cells of the cluster. Thereby, the Euclidean path length is minimized. An iterative application will implicitly straighten the path. Later, slacks are improved further by applying ascent steps in the estimated supergradient direction. The benefit of our algorithms is demonstrated experimentally within an industrial microprocessor design flow, and on recent ICCAD benchmark circuits.
A Benchmark Suite to Jointly Consider Logic Synthesis and Physical Design This paper presents an improved benchmark suite to jointly consider logic synthesis and physical design. Usually, benchmark circuits were provided by the physical design and the logic synthesis communities separately, according to their specific needs. The files provided for each benchmark set were restricted to the views necessary for the community. Additional specifications of design intents are necessary to express optimization goals that can be shared by logic synthesis and physical design communities, as circuits alone do not carry sufficient information to establish a benchmark with a clear optimization goal. In this paper, we describe benchmarks as a set composed of circuits, design intents (constraints), floorplan, target library and technology. Disregarding pieces of information provided for the benchmarks can change the associated criticality and affect the combined or isolated outcome of logic synthesis and physical design. The proposition of this benchmark suite brings attention to the problem of considering adequately the complete context of design intent throughout the flow.
Recursive bisection based mixed block placement Many current designs contain a large number of standard cells intermixed with larger macro blocks. The range of size in these "mixed block" designs complicates the placement process considerably; traditional methods produce results that are far from satisfactory.In this paper we extend the traditional recursive bisection standard cell placement tool Feng Shui to directly consider mixed block designs. On a set of recent benchmarks, the new version obtains placements with wire lengths substantially lower than other current tools. Compared to Feng Shui 2.4, the placements of a Capo-based approach have 29% higher wire lengths, while the placements of mPG are 26% higher. Run times of our tool are also lower, and the general approach is scalable.
FastRoute: A Step to Integrate Global Routing into Placement Because of the increasing dominance of interconnect issues in advanced IC technology, placement has become a critical step in the IC design flow. To get accurate interconnect information during the placement process, it is desirable to incorporate global routing into it. However, previous global routers are computationally expensive. It is impractical to perform global routing repeatedly during placement. In this paper, we present an extremely fast and high-quality global router called FastRoute. In traditional global routing approaches, congestion is not considered during Steiner tree construction, so they have to rely on the time-consuming maze routing technique to eliminate routing congestion. Different from traditional approaches, we propose a congestion-driven Steiner tree topology generation technique and an edge shifting technique to determine good Steiner tree topologies and Steiner node positions. Based on the congestion-driven Steiner trees, we only need to apply maze routing to a small percentage of the two-pin nets once to obtain high quality global routing solutions. We also propose a new cost function based on the logistic function to direct the maze routing. Experimental results show that FastRoute generates less congested solutions with runtimes 132 times and 64 times faster than the state-of-the-art academic global routers Labyrinth (R. Kastner et al, 2000) and the Chi Dispersion router (R. T. Hadsell and P. H. Madden, 2003), respectively. It is even faster than the highly-efficient congestion estimator FaDGloR (J. Westra and P. Groeneveld, 2005). The promising results make it possible to incorporate global routing directly into the placement process without much runtime penalty. This could dramatically improve placement solution quality. We believe this work will fundamentally change the way the EDA community looks at and makes use of global routing in the whole design flow.
BoxRouter: a new global router based on box expansion and progressive ILP In this paper, we propose a new global router, BoxRouter, powered by the concept of box expansion and progressive integer linear programming (ILP). BoxRouter first uses a simple PreRouting strategy which can predict and capture the most congested regions with high fidelity compared to the final routing. Based on progressive box expansion initiated from the most congested region, BoxRouting is performed with progressive ILP and adaptive maze routing. It is followed by an effective PostRouting step which reroutes without rip-up to obtain a smooth tradeoff between wirelength and routability. Our experimental results show that BoxRouter significantly outperforms the state-of-the-art published global routers, e.g., 79% better routability than [1] (with similar wirelength and 2x speedup), and 4.2% less wirelength and 16x speedup than [2] (with similar routability). Given the fundamental importance of routing, such dramatic improvement shall spark renewed interest in routing, which plays a key role in nanometer design and manufacturing closure.
UCID - An Uncompressed Colour Image Database Standardised image databases or rather the lack of them are one of the main weaknesses in the field of content based image retrieval (CBIR). Authors often use their own images or do not specify the source of their datasets. Naturally this makes comparison of results somewhat difficult. While a first approach towards a common colour image set has been taken by the MPEG 7 committee(1) their database does not cater for all strands of research in the CBIR community. In particular as the MPEG-7 images only exist in compressed form it does not allow for an objective evaluation of image retrieval algorithms that operate in the compressed domain or to judge the influence image compression has on the performance of CBIR algorithms. In this paper we introduce a new dataset, UCID (pronounced "use it") - an Uncompressed Colour Image Dataset which tries to bridge this gap. The UCID dataset currently consists of 1338 uncompressed images together with a ground truth of a series of query images with corresponding models that an ideal CBIR algorithm would retrieve. While its initial intention was to provide a dataset for the evaluation of compressed domain algorithms, the UCID database also represents a good benchmark set for the evaluation of any kind of CBIR method as well as an image set that can be used to evaluate image compression and colour quantisation algorithms.
Reversible data hiding based on wavelet spread spectrum This paper presents a reversible data hiding method based on wavelet spread spectrum and histogram modification. Using the spread spectrum scheme, we embed data in the coefficients of integer wavelet transform in high frequency subbands. The pseudo bits are also embedded so that the decoder does not need to know which coefficients have been selected for data embedding, thus enhancing data hiding efficiency. Histogram modification is used to prevent the underflow and overflow. Experimental results on some frequently used images show that our method has achieved superior performance in terms of high data embedding capacity and high visual quality of marked images, compared with the existing reversible data hiding schemes.
Program transformation and rewriting We present a basis for program transformation using term rewriting tools. A specification is expressed hierarchically by successive enrichments as a signature and a set of equations. A term can be computed by rewriting. Transformations come from applying a partial unfailing completion procedure to the original set of equations augmented by inductive theorems and a definition of a new function symbol following diverse heuristics. Moreover, the system must provide tools to prove inductive properties; to verify that enrichment produces neither junk nor confusion; and to check for ground confluence and termination. These properties are related to the correctness of the transformation.
Overhead for independent net approach for Global Routing Global Routing is one of the major Electronic Design Automation steps and it is classified as an NP-hard problem. We verified that 61% of the nets in ISPD 2008's benchmarks are shorter than 128 length units. We propose a method to cluster these nets using an independent net approach to perform global routing in massively parallel systems. We demonstrate that separating nets in clusters according to its length and processing it in parallel can reduce by 67 the processing time for collision detection, if compared with a sequential non-clustered analysis.
1.018674
0.021667
0.018504
0.016667
0.016667
0.009413
0.005556
0.001909
0.00038
0.000036
0
0
0
0
A reversible watermark scheme for 2D vector map based on reversible contrast mapping. Reversible watermarking is suitable for hosts with high precision requirements. However, the existing reversible watermark methods are mainly for raster images, and the reversible watermark schemes for vector graphics still have defects such as low capacity and significant distortion. To address these situations, a reversible watermark scheme for two-dimensional vector maps based on reversible contrast mapping is proposed. First, the coordinates of the vertices are chosen according to the data precision requirements; then, the data of the two-dimensional vector maps are preprocessed to reduce distortion. After that, the encrypted watermark is embedded into the relative coordinates by using the reversible contrast mapping transform. Experimental results and analysis show that the proposed scheme can achieve higher payload, better reversibility, and better invisibility than the existing schemes based on difference expansion, discrete cosine transform, and so on. It has great potential in the application of map data authentication and secure communication. Copyright (c) 2012 John Wiley & Sons, Ltd.
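For background, the reversible contrast mapping (RCM) transform commonly attributed to Coltuc and Chassery maps an integer pair (x, y) to (2x − y, 2y − x) and is exactly invertible over the integers; the sketch below shows only this round trip, under the assumption that this is the transform meant above, and omits the paper's watermark embedding, preprocessing and overflow control.

```python
def rcm_forward(x, y):
    """Reversible contrast mapping of an integer pair."""
    return 2 * x - y, 2 * y - x

def rcm_inverse(u, v):
    """Exact integer inverse, since 2u + v = 3x and u + 2v = 3y."""
    return (2 * u + v) // 3, (u + 2 * v) // 3

for (x, y) in [(10, 3), (0, 255), (100, 100), (7, 200)]:
    u, v = rcm_forward(x, y)
    assert rcm_inverse(u, v) == (x, y)
print("RCM round-trips exactly on integer pairs")
```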
Three-dimensional histogram shifting for reversible data hiding. Histogram shifting is an important method of reversible data hiding. However, every pixel, difference, or prediction-error is changed individually to hide a data bit in traditional histogram shifting, which constrains the capacity-distortion embedding performance. An efficient three-dimensional histogram shifting is proposed for reversible data hiding in this paper. H.264 videos are taken as covers to illustrate this method. In a 4 × 4 quantized discrete cosine transform luminance block that is not referenced by other blocks, three alternating current coefficients are selected randomly as an embeddable group. According to the different values of the selected coefficient groups, they can be divided into different sets. Data can be hidden according to these sets. In traditional histogram shifting, only one information bit can be hidden with at most one modification of one coefficient, whereas two data bits can be hidden at the same cost by using the proposed scheme. The superiority of the presented technique is verified through experiments.
Separable Reversible Data Hiding in Encrypted Signals with Public Key Cryptography. We propose separable reversible data hiding in an encrypted signal with public key cryptography. In our separable framework, the image owner encrypts the original image by using a public key. On receipt of the encrypted signal, the data-hider embeds data in it by using a data-hiding key. The image decryption and data extraction are independent and separable at the receiver side. Even though the receiver, who has only the data-hiding key, does not learn about the decrypted content, he can extract data from the received marked encrypted signal. However, the receiver who has only the private key cannot extract the embedded data, but he can directly decrypt the received marked encrypted signal to obtain the original image without any error. Compared with other schemes using a cipher stream to encrypt the image, the proposed scheme is more appropriate for cloud services without degrading the security level.
Reversible watermarking based on multi-dimensional prediction-error expansion. The reversible watermarking algorithm has been extensively studied in the field of information security in recent years, and prediction-error expansion (PEE) is a widely used technology for reversible watermarking embedding. Ou et al. (IEEE Trans Image Process 22(12):5010–5021, 2013) proposed a two-dimensional histogram shifting method to extend the conventional method, which yields better performance. This paper further expands the two-dimensional histogram shifting method, proposes a reversible watermarking algorithm based on multi-dimensional PEE and reduces the distortion by abandoning the embedding mappings with high distortion. The experimental result shows that the multi-dimensional PEE method can achieve relatively higher Peak Signal to Noise Ratio (PSNR) at the same embedding rate. In addition, it also proves that, the multi-dimensional PEE algorithm only needs to consider the simplest case of the embedding distortion threshold, and it is more suitable for images with simple and smooth texture.
A high capacity reversible data hiding method for 2D vector maps based on virtual coordinates In this paper, we propose a high capacity reversible data hiding method for 2D vector maps based on virtual coordinates. In the scheme, we calculate two virtual coordinates for each eligible coordinate, and embed c (c ≥ 1) secret bits by modifying the state value of the interval which is created by the two corresponding virtual coordinates. Since nearly every coordinate can carry c bits, the data capacity in bits achieves nearly 2c times the number of vertices in the vector map. In addition, to resist content preserving operations, the vertex traversing order is defined. Experimental results and analysis show that the proposed method provides good reversibility, invisibility and computational complexity and is robust against the feature rearrangement and vertex reversing attacks for polylines and polygons.
Reversible watermarking method based on asymmetric-histogram shifting of prediction errors. • A multi-prediction scheme is designed to calculate multiple prediction values for the current pixel. • An asymmetric selection function is defined to choose the suitable value from these prediction values. • An asymmetric error histogram is constructed by generating all the selected values. • We propose a complementary embedding strategy by combining two asymmetric error histograms.
A reversible data hiding method for H.264 with Shamir's (t, n)-threshold secret sharing. This paper proposes a new robust reversible data hiding scheme for H.264. The embedded data is first distributed into n sub-secrets with matrix equation by using Shamir’s (t, n)-threshold secret sharing to improve the robustness of the embedded data. Then we choose the block with prediction mode and embed the sub-secrets into the coefficients of the 4×4 discrete cosine transform (DCT) block of the selected frames which meet our conditions to avert the distortion drift. The experimental results show that this new robust reversible data hiding algorithm can get more robustness, effectively avert intra-frame distortion drift and get good visual quality.
Hierarchy-based reversible data hiding In this paper, we propose a new method for reversible data hiding by employing the hierarchical relationships of original images. There are many parameters for assessing the performances of reversible data hiding algorithms, including the output image quality, the hiding capacity, and the overhead for decoding. Considering the ease of implementation and the little overhead needed for decoding, we employ modification of difference values between pixels by using a histogram-based scheme with extensions to a pyramidal structure, utilizing inherent characteristics of original images. By doing so, global and local characteristics of original images can be utilized to hide more data with acceptable output image quality. With our method, better performances can be obtained with enhanced image quality, more embedding capacity, and a comparable amount of side information for decoding. More importantly, the reversibility of our method is guaranteed, meaning that the original image and the hidden message can both be perfectly recovered at the decoder. Simulation results demonstrate that the proposed method outperforms conventional algorithms.
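Because this method (like several others in this list) builds on histogram shifting of difference values, a minimal one-dimensional sketch of that primitive is given below. It operates on a flat list of integer differences, picks a single peak bin, and assumes the payload length equals the peak-bin count; the pyramidal, multi-level structure of the paper is not modeled here.

```python
from collections import Counter

def hs_embed(diffs, bits):
    """Embed bits into a list of integer differences by histogram shifting.
    The most frequent (peak) value carries the data; values above the peak
    are shifted by +1 to open an empty bin at peak + 1.
    Assumes len(bits) equals the number of peak-valued differences."""
    peak = Counter(diffs).most_common(1)[0][0]
    out, it = [], iter(bits)
    for d in diffs:
        if d > peak:
            out.append(d + 1)                            # shift to make room
        elif d == peak:
            b = next(it, None)
            out.append(d if b in (None, 0) else d + 1)   # embed one bit
        else:
            out.append(d)
    return out, peak

def hs_extract(marked, peak):
    """Recover the bits and the original differences."""
    bits, diffs = [], []
    for d in marked:
        if d == peak:
            bits.append(0); diffs.append(peak)
        elif d == peak + 1:
            bits.append(1); diffs.append(peak)
        elif d > peak + 1:
            diffs.append(d - 1)
        else:
            diffs.append(d)
    return bits, diffs

if __name__ == "__main__":
    diffs = [0, 1, 0, 2, 0, -1, 3, 0]
    marked, peak = hs_embed(diffs, [1, 0, 1, 1])
    bits, restored = hs_extract(marked, peak)
    assert restored == diffs and bits == [1, 0, 1, 1]
```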
Reversible Data Hiding: Advances in the Past Two Decades. In the past two decades, reversible data hiding (RDH), also referred to as lossless or invertible data hiding, has gradually become a very active research area in the field of data hiding. This has been verified by more and more papers on increasingly wide-spread subjects in the field of RDH research that have been published these days. In this paper, the various RDH algorithms and researches have been classified into the following six categories: 1) RDH into image spatial domain; 2) RDH into image compressed domain (e.g., JPEG); 3) RDH suitable for image semi-fragile authentication; 4) RDH with image contrast enhancement; 5) RDH into encrypted images, which is expected to have wide application in the cloud computation; and 6) RDH into video and into audio. For each of these six categories, the history of technical developments, the current state of the arts, and the possible future researches are presented and discussed. It is expected that the RDH technology and its applications in the real world will continue to move ahead.
A reversible data hiding method for encrypted images For several years, the protection of multimedia data has become very important. The protection of this multimedia data can be done with encryption or data hiding algorithms. To decrease the transmission time, data compression is necessary. In recent years, a new problem has been to combine compression, encryption and data hiding in a single step. So far, few solutions have been proposed to combine image encryption and compression, for example. Nowadays, a new challenge consists of embedding data in encrypted images. Since the entropy of an encrypted image is maximal, the embedding step, considered as noise, is not possible using standard data hiding algorithms. A new idea is to apply reversible data hiding algorithms on encrypted images, with the aim of removing the embedded data before the image decryption. Recent reversible data hiding methods have been proposed with high capacity, but these methods are not applicable to encrypted images. In this paper we propose an analysis of the local standard deviation of the marked encrypted images in order to remove the embedded data during the decryption step. We have applied our method to various images, and we show and analyze the obtained results.
Embedding image watermarks in dc components Both watermark structure and embedding strategy affect robustness of image watermarks. Where should watermarks be embedded in the discrete cosine transform (DCT) domain in order for the invisible image watermarks to be robust? Though many papers in the literature agree that watermarks should be embedded in perceptually significant components, dc components are explicitly excluded from watermark embedding. In this letter, a new embedding strategy for watermarking is proposed based on a quantitative analysis on the magnitudes of DCT components of host images. We argue that more robustness can be achieved if watermarks are embedded in dc components since dc components have much larger perceptual capacity than any ac components. Based on this idea, an adaptive watermarking algorithm is presented. We incorporate the feature of texture masking and luminance masking of the human visual system into watermarking. Experimental results demonstrate that the invisible watermarks embedded with the proposed watermark algorithm are very robust
A bounds inference method for vector-based memoization The dynamic-sized tabulation method can be used to eliminate redundant calls for certain classes of recursive programs. An innovative aspect of the method is the use of lambda abstractions that may subsequently be converted to bounded vectors, in order to share redundant calls via vector lookup. To facilitate this conversion to vector form, we propose a new inference method to conservatively determine the bounds for arithmetic parameters of recursive functions. Suitable techniques for inferring the safe bounds of these parameters are introduced, together with supporting transformations. The resulting method can obtain efficient vector-based programs without the need for run-time bounds checking.
Denotational semantics of a para-functional programming language A para-functional programming language is a functional language that has been extended with special annotations that provide an extra degree of control over parallel evaluation. Of most interest are annotations that allow one to express the dynamic mapping of a program onto a known multiprocessor topology. Since it is quite desirable to provide a precise semantics for any programming language, in this paper a denotational semantics is given for a simple para-functional programming language with mapping annotations. A precise meaning is given not only to the normal functional behavior of the program (i.e., the answer), but also to the operational notion of where (i.e., on what processor) expressions are evaluated. The latter semantics is accomplished through an abstract entity called an execution tree.
Overhead for independent net approach for Global Routing Global Routing is one of the major Electronic Design Automation steps and it is classified as an NP-hard problem. We verified that 61% of the nets in ISPD 2008's benchmarks are shorter than 128 length units. We propose a method to cluster these nets using an independent net approach to perform global routing in massively parallel systems. We demonstrate that separating nets into clusters according to their length and processing them in parallel can reduce by 67 the processing time for collision detection, compared with a sequential non-clustered analysis.
score_0 … score_13: 1.103778, 0.1, 0.1, 0.1, 0.026889, 0.004107, 0.001, 0.000315, 0.000132, 0.000029, 0, 0, 0, 0
A reversible data hiding method for encrypted images For several years, the protection of multimedia data has become very important. The protection of this multimedia data can be done with encryption or data hiding algorithms. To decrease the transmission time, data compression is necessary. In recent years, a new problem has been to combine compression, encryption and data hiding in a single step. So far, few solutions have been proposed to combine image encryption and compression, for example. Nowadays, a new challenge consists of embedding data in encrypted images. Since the entropy of an encrypted image is maximal, the embedding step, considered as noise, is not possible using standard data hiding algorithms. A new idea is to apply reversible data hiding algorithms on encrypted images, with the aim of removing the embedded data before the image decryption. Recent reversible data hiding methods have been proposed with high capacity, but these methods are not applicable to encrypted images. In this paper we propose an analysis of the local standard deviation of the marked encrypted images in order to remove the embedded data during the decryption step. We have applied our method to various images, and we show and analyze the obtained results.
A new approach to commutative watermarking-encryption We propose a new approach to commutative watermarking-encryption (CWE). A permutation cipher is used to encrypt the multimedia data, which leaves the global statistics of the multimedia data intact. Therefore, any non-localized watermarking scheme that depends only on global statistics of the multimedia data can be combined with the permutation cipher to form a CWE scheme. We demonstrate this approach by giving a concrete implementation, which manipulates the global histogram to achieve watermark embedding/detection.
Joint reversible data hiding and image encryption The image encryption process is joined with reversible data hiding in this paper, where the data to be hidden are modulated by different secret keys selected for encryption. To extract the hidden data from the cipher-text, the different tentative decrypted results are tested against typical random distribution in both spatial and frequency domain and the goodness-of-fit degrees are compared to extract one hidden bit. The encryption based data hiding process is inherently reversible. Experiments demonstrate the proposed scheme's effectiveness on natural and textural images, both in gray-level and binary forms.
Reversible And Robust Audio Watermarking Based On Spread Spectrum And Amplitude Expansion Recently, a technique that uses quantization index modulation and amplitude expansion has been proposed for reversible and robust audio watermarking. However, when applied to modified stego audio, the technique is not reversible. Here, a novel technique that is both reversible and robust is proposed for hiding data in audio. It is perfectly reversible for unmodified stego signals and is semi-reversible for perceptually coded stego signals. A robust payload is embedded by direct-sequence spread-spectrum modulation, with the sequence determined from the amplitude expansion in time and frequency of integer modified discrete cosine transform (MDCT) coefficients. Simultaneously, a reversible payload is embedded into the apertures in the amplitude histogram that result from amplitude expansion of the integer MDCT coefficients. The robustness and size of the reversible payload were evaluated by simulation for 20 music pieces. The reversible-payload capacity was approximately 4.8 kilobits per second. Encoding the stego signals into MP3, tandem MP3, and MPEG4AAC, and applying an additive single delay to the stego signals, revealed a maximum bit error rate of less than 6.5% with a robust payload of 7.2-bits per second. Measurement of the objective quality of the stego audio and audio recovered from the modified stego signal was done by using software based on the perceptual evaluation of audio quality algorithm. The results showed that the mean objective difference grade (ODG) was better than 'perceptible, but not annoying' for the stego audio. Semi-recovery from the perceptually coded stego signals was realized in terms of small differences in ODG between the recovered and coded signals.
Lossless embedding of depth hints in JPEG compressed color images. The conventional JPEG compressed domain lossless data hiding scheme has three attractive properties: file size is maintained, decoded image is unchanged, and embedded bitstream is compatible with the JPEG standard. This study introduces a new application of the compressed domain lossless data hiding scheme. Specifically, we present an algorithm that embeds depth hints to a compressed bitstream of a color image, which enables the end-user to extract depth hints and reconstruct the depth image. On the limited watermark capacity, depth hints are obtained using super-pixel color image segmentation and depth value clustering. The experimental results demonstrate the effectiveness of the proposed algorithm.
Pvo-Based Reversible Data Hiding For Encrypted Images This paper proposes a simple yet effective reversible data hiding (RDH) method for encrypted images. It guarantees exact data extraction and original image recovery from the marked encrypted image. Moreover, it is separable in the sense that the data extraction and the original image recovery can be independently processed. Specifically, we first encrypt the cover image by permutation in both block-and pixel-wise manners using a chaotic mapping. Then, we apply the pixel-value-ordering (PVO) embedding on each permuted block, to reversibly embed data into the encrypted image. Since the pixel value order is unchanged in each block after PVO embedding, the embedded data can be exactly extracted using the inverse PVO whether the marked image is decrypted or not. Experimental results show that the proposed method can provide a better decrypted marked image fidelity compared with some state-of-the-art works.
Beyond format-compliant encryption for JPEG image. In this work, a format-compliant encryption method with the data embedding feature for JPEG compressed image is proposed. First, DC coefficients are encoded based on the regions induced by the textural information carried by AC coefficients. Second, AC coefficients are scanned in eight different orders and the order that results in the smallest bitstream size is selected. Next, AC coefficients from each block are extracted in the form of Run/Size and Value, and manipulated to significantly increase the scope of permutation. Then the virtual queue decomposition is proposed to embed external information. All the processes are completely reversible where the embedded information can be extracted and the original content can be perfectly reconstructed from its processed counterpart. The performance of the proposed method is verified through experiments using various standard test images and the UCID dataset. The proposed method is also compared against the conventional format-compliant encryption methods, where its superiority in terms of robustness against sketch attacks, suppression of bitstream size increment, and data embedding are highlighted. In the best case scenario, the proposed method is able to generate an encrypted image of the same size as the original image (e.g., 512×512) with more than 5800 bits of additionally embedded information while achieving a compression gain of 1%.
High capacity reversible data hiding and content protection for radiographic images The watermarking of digital images has an important role in the protection of digital content with respect to many aspects. In this paper we present a reversible watermarking algorithm for hiding information into medical images having luminance histograms with particular characteristics. Some radiographic images have the property that not all the gray levels are present; this leads to sequences of 0 values (0-runs) in the corresponding histograms. It is possible to use these 0-runs to encode information by modifying pixels having gray levels contiguous to these runs; by encoding also the run information it is possible to restore the original image after extracting the stored data. In this work we present a novel reversible watermarking technique capable of exploiting all the 0-runs in the image histogram to achieve high capacity. We show that an optimization problem arises for those cases in which two or more non-zero frequency gray levels are contiguous to 0-runs. Part of the watermark information may be devoted to a digital signature of the original image, whose authenticity may also be verified by a user. Highlights: Design of a reversible watermarking method for X-ray images. Fragile, histogram-based watermarking technique. High capacity (up to more than 1 bpp) with low distortion. Experiments on a large dataset including 100 X-ray images.
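The capacity of this method comes from runs of empty gray levels (0-runs) in the luminance histogram, so a natural first step is to locate them. The sketch below does just that on a 256-bin histogram; the (start, length) representation is an assumption for illustration and is not taken from the paper.

```python
def zero_runs(histogram):
    """Return (start, length) pairs for every maximal run of empty bins
    in a 256-bin luminance histogram."""
    runs, start = [], None
    for level, count in enumerate(histogram):
        if count == 0 and start is None:
            start = level                       # a run begins here
        elif count != 0 and start is not None:
            runs.append((start, level - start)) # a run just ended
            start = None
    if start is not None:                       # run reaching the last bin
        runs.append((start, len(histogram) - start))
    return runs

if __name__ == "__main__":
    hist = [5, 0, 0, 3, 7, 0, 1] + [0] * 249    # toy 256-bin histogram
    print(zero_runs(hist))                      # [(1, 2), (5, 1), (7, 249)]
```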
A Reversible Data Hiding Scheme Based on Code Division Multiplexing. In this paper, a novel code division multiplexing (CDM) algorithm-based reversible data hiding (RDH) scheme is presented. The covert data are denoted by different orthogonal spreading sequences and embedded into the cover image. The original image can be completely recovered after the data have been extracted exactly. The Walsh Hadamard matrix is employed to generate orthogonal spreading sequences, by which the data can be overlappingly embedded without interfering each other, and multilevel data embedding can be utilized to enlarge the embedding capacity. Furthermore, most elements of different spreading sequences are mutually cancelled when they are overlappingly embedded, which maintains the image in good quality even with a high embedding payload. A location-map free method is presented in this paper to save more space for data embedding, and the overflow/underflow problem is solved by shrinking the distribution of the image histogram on both the ends. This would further improve the embedding performance. Experimental results have demonstrated that the CDM-based RDH scheme can achieve the best performance at the moderate-to-high embedding capacity compared with other state-of-the-art schemes.
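The embedding carrier in this scheme is a set of orthogonal spreading sequences drawn from a Walsh-Hadamard matrix. The snippet below only builds such a matrix by the Sylvester construction and checks that its rows are mutually orthogonal; the paper's overlapped embedding and multilevel rules are not attempted here.

```python
import numpy as np

def walsh_hadamard(order):
    """Sylvester construction of a Walsh-Hadamard matrix of size 2**order."""
    h = np.array([[1]])
    for _ in range(order):
        h = np.block([[h, h], [h, -h]])
    return h

if __name__ == "__main__":
    H = walsh_hadamard(3)                         # 8 x 8, entries +/-1
    gram = H @ H.T
    # Rows are mutually orthogonal: the Gram matrix is 8 * identity.
    assert np.array_equal(gram, 8 * np.eye(8, dtype=int))
    print(H[1])                                   # one spreading sequence
```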
Reversible data embedding for high quality images using interpolation and reference pixel distribution mechanism This paper proposes a reversible data hiding method based on image interpolation and the detection of smooth and complex regions in the cover images. A binary image that represents the locations of reference pixels is constructed according the local image activity. In complex regions, more reference pixels are chosen and, thus, fewer pixels are used for embedding, which reduces the image degradation. On the other hand, in smooth regions, less reference pixels are chosen, which increases the embedding capacity without introducing significant distortion. Pixels are interpolated according to the constructed binary image, and the interpolation errors are then used to embed data through histogram shifting. The pixel values in the cover image are modified one grayscale unit at most to ensure that a high quality stego image can be produced. The experimental results show that the proposed method provides better image quality and embedding capacity compared with prior works.
Distortion-free secret image sharing method with two meaningful shadows. In this study, the authors propose a novel (2, 2) secret image sharing scheme in which a control parameter ω is used to change the payload easily. Since the modification of the original cover pixel values can be limited within a small range according to the value of ω, the shadow images can achieve excellent visual quality. In the extracting process, the secret image and the cover image can be rec...
Proceedings of the 7th workshop on Multimedia & Security, MM&Sec 2005, New York, NY, USA, August 1-2, 2005, 2006
Abacus: fast legalization of standard cell circuits with minimal movement Standard cell circuits consist of millions of standard cells, which have to be aligned overlap-free to the rows of the chip. Placement of these circuits is done in consecutive steps. First, a global placement is obtained by roughly spreading the cells on the chip, while considering all relevant objectives like wirelength, and routability. After that, the global placement is legalized, i.e., the cell overlap is removed, and the cells are aligned to the rows. To preserve the result of global placement, cells should be moved as little as possible during legalization. This paper presents "Abacus", which is a fast approach to legalize standard cell circuits with minimal movement. The approach is based on sorting the cells according to their position first, and legalizing the cells one at a time then. Legalizing one cell is done by moving the cell from row to row until the optimal place with the lowest movement is found. Whenever a cell is moved to a row, the cells already aligned to the row are placed by dynamic programming to minimize their total movement. Therefore, our approach Abacus moves already legalized cells during legalization. In contrast to this, Tetris [1], which uses a similar legalization technique, does not move already legalized cells. Consequently, the average movement is about 30% lower in Abacus than in Tetris. On the other hand, the CPU time of the whole placement process is increased by only 7% with our legalization approach. Applying Abacus to routability-driven placement results in 1% improvement in routed wirelength.
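For readers unfamiliar with legalization, the sketch below implements the much simpler Tetris-style greedy that the abstract contrasts Abacus against: cells are sorted by x and each is packed into the row position of least displacement, without ever moving already-legalized cells. Abacus's per-row dynamic programming is not reproduced, and the row geometry and cell sizes are invented for the example.

```python
import math

def legalize(cells, row_ys, row_width, site=1.0):
    """Tetris-like greedy legalization sketch (not Abacus itself).
    cells: list of dicts {'name', 'x', 'y', 'w'} from global placement.
    Cells are packed left-to-right into rows, snapped to placement sites."""
    next_free = {y: 0.0 for y in row_ys}                 # next free x per row
    placed = {}
    for c in sorted(cells, key=lambda c: c['x']):        # left-to-right order
        best = None
        for y in row_ys:
            # earliest legal, site-aligned x in this row
            x = math.ceil(max(c['x'], next_free[y]) / site) * site
            if x + c['w'] > row_width:                   # row is full
                continue
            cost = abs(x - c['x']) + abs(y - c['y'])     # displacement
            if best is None or cost < best[0]:
                best = (cost, x, y)
        cost, x, y = best                                # assumes some row fits
        placed[c['name']] = (x, y)
        next_free[y] = x + c['w']
    return placed

if __name__ == "__main__":
    cells = [{'name': 'a', 'x': 1.2, 'y': 0.4, 'w': 2},
             {'name': 'b', 'x': 1.5, 'y': 0.6, 'w': 2},
             {'name': 'c', 'x': 3.9, 'y': 2.1, 'w': 1}]
    print(legalize(cells, row_ys=[0.0, 2.0], row_width=10.0))
```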
Secure Client Side Watermarking with Limited Key Size.
score_0 … score_13: 1.006658, 0.006727, 0.006085, 0.005923, 0.005714, 0.003341, 0.002144, 0.001558, 0.000738, 0.000198, 0.000038, 0.000004, 0, 0
TOSSIM: accurate and scalable simulation of entire TinyOS applications Accurate and scalable simulation has historically been a key enabling factor for systems research. We present TOSSIM, a simulator for TinyOS wireless sensor networks. By exploiting the sensor network domain and TinyOS's design, TOSSIM can capture network behavior at a high fidelity while scaling to thousands of nodes. By using a probabilistic bit error model for the network, TOSSIM remains simple and efficient, but expressive enough to capture a wide range of network interactions. Using TOSSIM, we have discovered several bugs in TinyOS, ranging from network bit-level MAC interactions to queue overflows in an ad-hoc routing protocol. Through these and other evaluations, we show that detailed, scalable sensor network simulation is possible.
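TOSSIM's network model is described above as a probabilistic bit-error model; in isolation that amounts to a binary symmetric channel, which the short sketch below implements. This illustrates only the channel model, not the simulator itself.

```python
import random

def flip_bits(payload: bytes, ber: float) -> bytes:
    """Pass a packet through a memoryless binary symmetric channel:
    each bit is flipped independently with probability `ber`."""
    out = bytearray(payload)
    for i in range(len(out)):
        for bit in range(8):
            if random.random() < ber:
                out[i] ^= 1 << bit
    return bytes(out)

if __name__ == "__main__":
    random.seed(1)
    pkt = b"hello, sensor"
    noisy = flip_bits(pkt, ber=0.02)
    errors = sum(bin(a ^ b).count("1") for a, b in zip(pkt, noisy))
    print(f"{errors} bit errors out of {len(pkt) * 8}")
```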
Idle sense: an optimal access method for high throughput and fairness in rate diverse wireless LANs We consider wireless LANs such as IEEE 802.11 operating in the unlicensed radio spectrum. While their nominal bit rates have increased considerably, the MAC layer remains practically unchanged despite much research effort spent on improving its performance. We observe that most proposals for tuning the access method focus on a single aspect and disregard others. Our objective is to define an access method optimized for throughput and fairness, able to dynamically adapt to physical channel conditions, to operate near optimum for a wide range of error rates, and to provide equal time shares when hosts use different bit rates. We propose a novel access method derived from 802.11 DCF [2] (Distributed Coordination Function) in which all hosts use similar values of the contention window CW to benefit from good short-term access fairness. We call our method Idle Sense, because each host observes the mean number of idle slots between transmission attempts to dynamically control its contention window. Unlike other proposals, Idle Sense enables each host to estimate its frame error rate, which can be used for switching to the right bit rate. We present simulations showing how the method leads to high throughput, low collision overhead, and low delay. The method also features fast reactivity and time-fair channel allocation.
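The control idea in Idle Sense (observe the mean number of idle slots between transmission attempts and steer the contention window toward a target) can be sketched as a simple feedback loop. The target value and the adjustment constants below are placeholders rather than the values derived in the paper, so this only illustrates the loop's direction of adaptation, not the published parameterization.

```python
def update_cw(cw, observed_idle, target_idle=5.0,
              add=6.0, mult=1.0 / 1.2, cw_min=16, cw_max=1024):
    """One adaptation step: too few idle slots means too much contention,
    so grow CW; too many idle slots means wasted airtime, so shrink CW.
    All constants here are illustrative, not the paper's."""
    if observed_idle < target_idle:
        cw = cw + add
    else:
        cw = cw * mult
    return min(max(cw, cw_min), cw_max)

if __name__ == "__main__":
    cw = 32.0
    for n_idle in [2.1, 2.8, 3.5, 6.7, 8.0, 4.9]:  # measured between attempts
        cw = update_cw(cw, n_idle)
        print(f"idle={n_idle:4.1f}  cw={cw:6.1f}")
```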
CORD: Energy-Efficient Reliable Bulk Data Dissemination in Sensor Networks This paper presents CORD, a reliable bulk data dissemination protocol for propagating a large data object to all the nodes in a large scale sensor network. Unlike well- known reliable data dissemination protocols such as Deluge whose primary design criterion is to reduce the latency of object propagation, CORD's primary goal is to minimize energy consumption. To achieve its goals CORD employs a two phase approach in which the object is delivered to a subset of nodes in the network that form a connected dominating set in the first phase, and to the remaining nodes in the second phase. Further, CORD installs a coordinated sleep schedule on the nodes in the network whereby nodes that are not involved in receiving or transmitting data can turn off their radios to reduce their energy consumption. We evaluated the performance of CORD experimentally on both an indoor and outdoor sensor network testbed and via extensive simulations. Our results show that in comparison to Deluge (the de facto network reprogramming protocol for TinyOS) CORD significantly reduces the energy consumption for reliable data dissemination while achieving a comparable latency.
Correlated flooding in low-duty-cycle wireless sensor networks Flooding in low-duty-cycle wireless sensor networks is very costly due to asynchronous schedules of sensor nodes. To adapt existing flooding-tree-based designs for low-duty-cycle networks, we shall schedule nodes of common parents to wake up simultaneously. Traditionally, energy optimality in a designated flooding-tree is achieved by selecting parents with the highest link quality. In this work, we demonstrate that surprisingly more energy can be saved by considering link correlation. Specifically, this work first experimentally verifies the existence of link correlation and mathematically proves that the energy consumption of broadcasting can be reduced by letting nodes with higher correlation receive packets simultaneously. A novel flooding scheme, named Correlated Flooding, is then designed so that nodes with high correlation are assigned to a common sender and their receptions of a broadcasting packet are only acknowledged by a single ACK. This unique feature effectively ameliorates the ACK implosion problem, saving energy on both data packets and ACKs. We evaluate Correlated Flooding with extensive simulations and a testbed implementation with 20 MICAz nodes. We show that Correlated Flooding saves more than 66% energy on ACKs and 15%-50% energy on data packets for most network settings, while having similar performance on flooding delay and reliability.
Cross-Layer Packet Size Optimization for Wireless Terrestrial, Underwater, and Underground Sensor Networks In this paper, a cross-layer solution for packet size optimization in wireless sensor networks (WSN) is introduced such that the effects of multi-hop routing, the broadcast nature of the physical wireless channel, and the effects of error control techniques are captured. A key result of this paper is that contrary to the conventional wireless networks, in wireless sensor networks, longer packets reduce the collision probability. Consequently, an optimization solution is formalized by using three different objective functions, i.e., packet throughput, energy consumption, and resource utilization. Furthermore, the effects of end-to-end latency and reliability constraints are investigated that may be required by a particular application. As a result, a generic, cross-layer optimization framework is developed to determine the optimal packet size in WSN. This framework is further extended to determine the optimal packet size in underwater and underground sensor networks. From this framework, the optimal packet sizes under various network parameters are determined.
Link Quality Aware Code Dissemination in Wireless Sensor Networks Wireless reprogramming is a crucial technique for software deployment in wireless sensor networks (WSNs). Code dissemination is a basic building block to enable wireless reprogramming. We present ECD, an Efficient Code Dissemination protocol leveraging 1-hop link quality information based on the TinyOS platform. Compared to prior works, ECD has three salient features. First, it supports dynamically configurable packet sizes. By increasing the packet size for high PHY rate radios, it significantly improves the transmission efficiency. Second, it employs an accurate sender selection algorithm to mitigate transmission collisions and transmissions over poor links. Third, it employs a simple impact-based backoff timer design to shorten the time spent in coordinating multiple eligible senders so that the largest impact sender is most likely to transmit. We implement ECD based on TinyOS and evaluate its performance extensively via testbed experiments and simulations. Results show that ECD outperforms state-of-the-art protocols, Deluge and MNP, in terms of completion time and data traffic (e.g., about 20 percent less traffic and 20-30 percent shorter completion time compared to Deluge).
Does Wireless Sensor Network Scale? A Measurement Study on GreenOrbs Sensor networks are deemed suitable for large-scale deployments in the wild for a variety of applications. In spite of the remarkable efforts the community put to build the sensor systems, an essential question still remains unclear at the system level, motivating us to explore the answer from a point of real-world deployment view. Does the wireless sensor network really scale? We present findings from a large-scale operating sensor network system, GreenOrbs, with up to 330 nodes deployed in the forest. We instrument such an operating network throughout the protocol stack and present observations across layers in the network. Based on our findings from the system measurement, we propose and make initial efforts to validate three conjectures that give potential guidelines for future designs of large-scale sensor networks. 1) A small portion of nodes bottlenecks the entire network, and most of the existing network indicators may not accurately capture them. 2) The network dynamics mainly come from the inherent concurrency of network operations instead of environment changes. 3) The environment, although the dynamics are not as significant as we assumed, has an unpredictable impact on the sensor network. We suggest that an event-based routing structure can be trained and thus better adapted to the wild environment when building a large-scale sensor network.
On Adaptive Strategies for an Extended Family of Golomb-Type Codes Off-centered, two-sided geometric distributions of the integers are often encountered in lossless image compression applications, as probabilistic models for prediction residuals. Based on a recent characterization of the family of optimal prefix codes for these distributions, which is an extension of the Golomb (1966) codes, we investigate adaptive strategies for their symbol-by-symbol prefix coding, as opposed to arithmetic coding. Our strategies allow for adaptive coding of prediction residuals at very low complexity. They provide a theoretical framework for the heuristic approximations frequently used when modifying the Golomb code, originally designed for one-sided geometric distributions of non-negative integers, so as to apply to the encoding of any integer.
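A concrete low-complexity member of this code family is the Rice special case (m = 2^k) applied after the usual overlap-and-interleave mapping of signed residuals to non-negative integers. The sketch below shows encoding and decoding for that case only; the adaptive parameter-selection strategies discussed in the paper are not implemented.

```python
def zigzag(e: int) -> int:
    """Map a signed residual to a non-negative integer: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * e if e >= 0 else -2 * e - 1

def unzigzag(n: int) -> int:
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

def rice_encode(n: int, k: int) -> str:
    """Golomb code with parameter m = 2**k (k >= 1): unary quotient, then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits: str, k: int) -> int:
    q = bits.index("0")                       # length of the unary prefix
    r = int(bits[q + 1:q + 1 + k], 2)
    return (q << k) | r

if __name__ == "__main__":
    k = 2
    for e in [0, -1, 3, -4, 7]:
        code = rice_encode(zigzag(e), k)
        assert unzigzag(rice_decode(code, k)) == e
        print(f"{e:3d} -> {code}")
```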
Stochastic sensor activation for distributed state estimation over a sensor network. We consider distributed state estimation over a resource-limited wireless sensor network. A stochastic sensor activation scheme is introduced to reduce the sensor energy consumption in communications, under which each sensor is activated with a certain probability. When the sensor is activated, it observes the target state and exchanges its estimate of the target state with its neighbors; otherwise, it only receives the estimates from its neighbors. An optimal estimator is designed for each sensor by minimizing its mean-squared estimation error. An upper and a lower bound of the limiting estimation error covariance are obtained. A method of selecting the consensus gain and a lower bound of the activating probability is also provided.
Cross-layer optimization for energy-efficient wireless communications: a survey Since battery technology has not progressed as rapidly as semiconductor technology, power efficiency has become increasingly important in wireless networking, in addition to the traditional quality and performance measures, such as bandwidth, throughput, and fairness. Energy-efficient design requires a cross layer approach as power consumption is affected by all aspects of system design, ranging from silicon to applications. This article presents a comprehensive overview of recent advances in cross-layer design for energy-efficient wireless communications. We particularly focus on a system-based approaches toward energy optimal transmission and resource management across time, frequency, and spatial domains. Details related to energy-efficient hardware implementations are also covered. Copyright © 2008 John Wiley & Sons, Ltd.
Analysis and design of secure watermark-based authentication systems This paper focuses on a coding approach for effective analysis and design of secure watermark-based multimedia authentication systems. We provide a design framework for semi-fragile watermark-based authentication such that both objectives of robustness and fragility are effectively controlled and achieved. Robustness and fragility are characterized as two types of authentication errors. The authentication embedding and verification structures of the semi-fragile schemes are derived and implemented using lattice codes to minimize these errors. Based on the specific security requirements of authentication, cryptographic techniques are incorporated to design a secure authentication code structure. Using nested lattice codes, a new approach, called MSB-LSB decomposition, is proposed which we show to be more secure than previous methods. Tradeoffs between authentication distortion and implementation efficiency of the secure authentication code are also investigated. Simulations of semi-fragile authentication methods on real images demonstrate the effectiveness of the MSB-LSB approach in simultaneously achieving security, robustness, and fragility objectives.
Design tools for 3-D integrated circuits We present a set of design tools for 3-D Integration. Using these tools - a 3-D standard-cell placement tool, global routing tool, and layout editor - we have targeted existing standard-cell circuit netlists for fabrication using wafer bonding. We have analyzed the performance of several circuits using these tools and find that 3-D integration provides significant benefits. For example, relative to single-die placement, we observe on average 28% to 51% reduction in total wire length.
A low complexity motion compensated frame interpolation method In low bit-rate video communication, temporal subsampling is usually used due to limited available bandwidth. Motion compensated frame interpolation (MCFI) techniques are often employed in the decoder to restore the original frame rate and enhance the temporal quality. In this paper, we propose a low-complexity and high efficiency MCFI method. It first examines the motion vectors embedded in the bit-stream, then carries out overlapped block bi-directional motion estimation on those blocks whose embedded motion vectors are regarded as not accurate enough. Finally, it utilizes motion vector post-processing and overlapped block motion compensation to generate interpolated frames and further reduce blocking artifacts. Experimental results show that the proposed algorithm outperforms other methods in both PSNR and visual performance, while its complexity is also lower than other methods.
A reversible image authentication scheme based on compressive sensing In order to satisfy the requirement of reversible authentication as well as tamper localization and recovery, a reversible image authentication scheme based on compressive sensing (CS) is proposed. Double watermarks are employed, including a short one (perception Hash) for image integrity authentication and a long one for tamper localization and recovery. First, we embed the short watermark into the image in a reversible way. The embedding method is based on histogram modification of discrete Haar wavelet coefficients which is proposed in this paper. Then the long watermark, which is generated by CS sampling on the transformation coefficients of the non-overlapping image blocks, is registered to intellectual property rights (IRP) database for saving in a zero-watermarking way. At the authentication side, the receiver recovers the image after extracting the short watermark, and compares the Hash values generated from the recovered image with the short watermark for authentication. If the authentication is successful, the image can be completely restored to the original state in a reversible way; if the authentication fails, by utilizing the long watermark in the IRP database and CS reconstruction, it can achieve tamper localization and recovery. Meanwhile, experimental results show that the watermarked image has good imperceptibility. The proposed scheme is with good potential to be adopted for reversible image authentication.
score_0 … score_13: 1.020215, 0.022785, 0.018956, 0.016495, 0.014605, 0.008021, 0.003024, 0.00084, 0.000179, 0.000075, 0, 0, 0, 0
Length-Matching Routing for High-Speed Printed Circuit Boards As the clock frequencies used in industrial applications increase, the timing requirements imposed on routing problems become tighter. So, it becomes important to route the nets within tight minimum and maximum length bounds. Although the problem of routing nets to satisfy maximum length constraints is a well-studied problem, there exists no sophisticated algorithm in the literature that ensures that minimum length constraints are also satisfied. In this paper, we propose a novel algorithm that effectively incorporates the min-max length constraints into the routing problem. Our approach is to use a Lagrangian relaxation framework to allocate extra routing resources around nets simultaneously during routing them. We also propose a graph model that ensures that all the allocated routing resources can be used effectively for extending lengths. Our routing algorithm automatically prioritizes resource allocation for shorter nets, and length minimization for longer nets so that all nets can satisfy their min-max length constraints. Our experiments demonstrate that this algorithm is effective even in the cases where length constraints are tight, and the layout is dense.
BSG-Route: A length-matching router for general topology Length-matching routing is a very important issue for PCB routing. Previous length-matching routers [1]--[3] all have assumptions on the routing topology whereas practical designs may be free of any topological constraint. In this paper, we propose a router that deals with general topology. Unlike previous routers, our router does not impose any restriction on the routing topology. Moreover, our router is gridless. Its performance does not depend on the routing grid size of the input while routers in [1]--[3] do. This is a big advantage because modern PCB routing configurations usually imply huge routing grids. The novelty of this work is that we view the length-matching routing problem as an area assignment problem and use a placement structure, Bounded-Sliceline Grid (BSG) [4], to help solving the problem. Experimental results show that our router can handle practical designs that previous routers can't handle. For designs that they could handle, our router runs much faster. For example, in one of our data, we obtain the result in 88 seconds while the router in [3] takes more than one day.
Hamilton Paths In Grid Graphs
System-level routing of mixed-signal ASICs in WREN This paper presents new techniques for global and detailed routing of the macrocell-style analog core of a mixed-signal ASIC. We combine a comparatively simple geometric model of the problem with an aggressive simulated annealing formulation that selects paths while accommodating numerous signal-integrity constraints. Experimental results demonstrate that it is critical to attack such constraints both globally (system-level) and locally (channel-level) to meet designer-specified performance targets.
Algorithmic study of single-layer bus routing for high-speed boards As the clock frequencies used in industrial applications increase, the timing requirements on routing problems become tighter, and current routing tools cannot successfully handle these constraints any more. In this paper, the authors focus on the high-performance single-layer bus routing problem, where the objective is to match the lengths of all nets belonging to each bus. An effective approach to solve this problem is to allocate extra routing resources around short nets during routing, and use those resources for length extension afterwards. First, a provably optimal algorithm for routing nets with minimum-area maximum-length constraints is proposed. Then, this algorithm is extended to the case where minimum constraints are given as exact length bounds, and it is also proven that this algorithm is near-optimal. Both algorithms proposed are shown to be scalable for large circuits, since the respective time complexities are O(A) and O(AlogA), where A is the area of the intermediate region between chips.
2006 International Conference on Computer-Aided Design (ICCAD'06), November 5-9, 2006, San Jose, CA, USA
BoxRouter 2.0: architecture and implementation of a hybrid and robust global router In this paper, we present BoxRouter 2.0, a hybrid and robust global router with discussion on its architecture and implementation. As high performance VLSI design becomes more interconnect-dominant, efficient congestion elimination in global routing is in greater demand. Hence, we propose BoxRouter 2.0 which has strong ability to improve routability and minimize the number of vias with blockages, while minimizing wirelength. BoxRouter 2.0 is improved over [1], but can perform multi-layer routing with 2D global routing and layer assignment. Our 2D global routing is equipped with two ideas: robust negotiation-based A* search for routing stability, and topology-aware wire ripup for flexibility. After 2D global routing, 2D-to-3D mapping is done by the layer assignment which is powered by progressive via/blockage-aware integer linear programming. Experimental results show that BoxRouter 2.0 has better routability with comparable wirelength than other routers on ISPD07 benchmark, and it can complete (no overflow) ISPD98 benchmark for the first time in the literature with the shortest wirelength.
FLUTE: fast lookup table based wirelength estimation technique Wirelength estimation is an important tool to guide the design optimization process in early design stages. In this paper, we present a wirelength estimation technique called FLUTE. Our technique is based on a pre-computed lookup table to make wirelength estimation very fast and very accurate for low-degree nets. We show experimentally that for FLUTE, RMST, and HPWL, the average errors in wirelength are 0.72%, 4.23%, and -8.71%, respectively, and the normalized runtimes are 1, 1.24, and 0.16, respectively.
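Of the three estimators compared above, HPWL and the rectilinear MST are simple enough to restate in a few lines. The sketch below computes both for a single net so the gap between them can be seen; FLUTE's table-driven Steiner estimate itself is not reproduced.

```python
def hpwl(pins):
    """Half-perimeter wirelength: bounding-box width + height."""
    xs, ys = [p[0] for p in pins], [p[1] for p in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def rmst(pins):
    """Rectilinear minimum spanning tree length (Prim's algorithm with
    Manhattan distances), an upper bound on the Steiner tree length."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    in_tree = {pins[0]}
    total = 0
    while len(in_tree) < len(pins):
        d, p = min((dist(a, b), b)
                   for a in in_tree for b in pins if b not in in_tree)
        total += d
        in_tree.add(p)
    return total

if __name__ == "__main__":
    net = [(2, 3), (7, 1), (4, 9)]
    print("HPWL:", hpwl(net), "RMST:", rmst(net))   # HPWL: 13  RMST: 15
```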
Efficient simulated evolution based rerouting and congestion-relaxed layer assignment on 3-D global routing The increasing complexity of interconnection designs has enhanced the importance of research into global routing when seeking high-routability (low overflow) results or rapid search paths that report wire-length estimations to a placer. This work presents two routing techniques, namely adaptive pseudorandom net-ordering routing and evolution-based rip-up and reroute using a two-stage cost function in a high-performance congestion-driven 2-D global router. We also propose two efficient via-minimization methods, namely congestion relaxation by layer shifting and rip-up and re-assignment, for a dynamic programming-based layer assignment. Experimental results demonstrate that our router achieves performance similar to the first two winning routers in ISPD 2008 Routing Contest in terms of both routability and wire length at a 1.42X and 25.84X faster routing speed. Besides, our layer assignment yields 3.5% to 5.6% fewer vias, 2.2% to 3.3% shorter wirelength and 13% to 27% less runtime than COLA.
Iterative Partitioning with Varying Node Weights The balanced partitioning problem divides the nodes of a [hyper]graph into groups of approximately equal weight (i.e., satisfying balance constraints) while minimizing the number of[hyper]edges that are cut (i.e., adjacent to nodes in different groups). Classic iterative algorithms use the pass paradigm [24] in performing single-node moves [16, 13] to improve the initial solution. To satisfy particular balance constraints, it is usual to require that intermediate solutions satisfy the constraints. Hence, many possible moves are rejected. Hypergraph partitioning heuristics have been traditionally proposed for and evaluated on hypergraphs with unit node weights only. Nevertheless, many real-world applications entail varying node weights, e.g., VLSI circuit partitioning where node weight typically represents cell area. Even when multilevel partitioning [3] is performed on unit-node-weight hypergraphs, intermediate clustered hypergraphs have varying node weights. Nothing prevents the use of conventional move-based heuristics when node weights vary, but their performance deteriorates, as shown by our analysis of partitioning results in [1]. We describe two effects that cause this deterioration and propose simple modifications of well-known algorithms to address them. Our baseline implementations achieve dramatic improvements over previously reported results (by factors of up to 25); explicitly addressing the described harmful effects provides further improvement. Overall results are superior to those of the PROP-REXest algorithm reported in [14], which addresses similar problems.
A framework for evaluating the data-hiding capacity of image sources An information-theoretic model for image watermarking and data hiding is presented in this paper. Previous theoretical results are used to characterize the fundamental capacity limits of image watermarking and data-hiding systems. Capacity is determined by the statistical model used for the host image, by the distortion constraints on the data hider and the attacker, and by the information available to the data hider, to the attacker, and to the decoder. We consider autoregressive, block-DCT, and wavelet statistical models for images and compute data-hiding capacity for compressed and uncompressed host-image sources. Closed-form expressions are obtained under sparse-model approximations. Models for geometric attacks and distortion measures that are invariant to such attacks are considered.
Lossless Data Hiding Using Integer Wavelet Transform and Threshold Embedding Technique This paper presents a new lossless data hiding method for digital images using integer wavelet transform and threshold embedding technique. Data are embedded into the least significant bit-plane (LSB) of high frequency CDF (2, 2) integer wavelet coefficients whose magnitudes are smaller than a certain predefined threshold. Histogram modification is applied as a preprocessing to prevent overflow/underflow. Experimental results show that this scheme outperforms the prior arts in terms of a larger payload (at the same PSNR) or a higher PSNR (at the same payload)
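As a companion to this abstract, here is a simplified, self-contained variant of threshold embedding on a plain list of integer coefficients: small coefficients are expanded to carry one bit each, and large ones are shifted out of the way so the two ranges cannot collide, which is what keeps it reversible. The integer CDF(2,2) wavelet transform and the histogram-modification preprocessing of the paper are deliberately left out, so this is an illustration of the idea, not the published scheme.

```python
def embed(coeffs, bits, T):
    """Coefficients with -T <= c < T are expanded to 2c + b (one bit each);
    larger coefficients are shifted away by T so the ranges never overlap."""
    out, it = [], iter(bits)
    for c in coeffs:
        if -T <= c < T:
            out.append(2 * c + next(it, 0))
        elif c >= T:
            out.append(c + T)
        else:
            out.append(c - T)
    return out

def extract(marked, T):
    """Recover the original coefficients and the embedded bits."""
    bits, coeffs = [], []
    for c in marked:
        if -2 * T <= c < 2 * T:
            bits.append(c % 2)          # b, since (2c + b) % 2 == b
            coeffs.append(c // 2)       # floor division inverts 2c + b
        elif c >= 2 * T:
            coeffs.append(c - T)
        else:
            coeffs.append(c + T)
    return coeffs, bits

if __name__ == "__main__":
    coeffs = [0, -3, 5, 1, -9, 2]
    marked = embed(coeffs, [1, 0, 1, 1], T=4)
    restored, bits = extract(marked, T=4)
    assert restored == coeffs and bits == [1, 0, 1, 1]
```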
Partitioning using second-order information and stochastic gain functions A probability-based partitioning algorithm, PROP, was introduced in [8] that achieved large improvements over traditional "deterministic" iterative-improvement techniques like Fiduccia-Mattheyses (FM) and Krishnamurthy's look-ahead (LA) algorithm. While PROP's gain function has a greater futuristic component than FM or LA, it incorporates spatially local information-only information on the removal...
Dual-image-based reversible data hiding method using center folding strategy In recent years, dual-image techniques have been widely used in reversible data hiding. Most data hiding methods focus on finding the best predictive value to reduce the degree of stego image distortion. After finding a predicted value, most methods convert the secret data directly into secret symbols, using the decimal or other numeral systems, and then have these added to (or subtracted from) the predictive value (or pixel) to complete the embedding. However, this study found that the secret data are a very important factor affecting image quality. If the value of the secret symbols is too large, it may cause greater differences between the stego-pixels and original pixels. Hence, this study employs the center folding strategy to reduce the value of the secret symbols. The reduced symbols are then embedded in two stego images through an averaging method, to maintain image quality. In addition, underflow/overflow pixels are replaced with the original pixels to reduce unnecessary image distortion. The experimental results show that the proposed method has fairly good performance compared to the other dual-image techniques. For the same embedding capacity, its image quality is also better, on average, than that of other methods by at least 2 dB. Furthermore, the proposed method determines the embedding capacity with the k value. The larger the k value, the higher the embedding capacity. The experimental results show that for the same image quality, the embedding capacity of the proposed method is better, on average, than that of other methods by at least 0.5 bits per pixel. The proposed method is applicable not only to experimental images, but also works well for general images in terms of embedding capacity, image quality and execution time. This research presented a dual-image-based reversible hiding technique. The center folding strategy is applied to achieve low distortion. The reduced symbols are embedded in two stego images through an averaging method. The performance of the proposed method is better than that of other methods. The proposed method can adapt to different image characteristics.
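The two key steps named in this abstract, folding the k-bit symbol around its center and splitting the folded value across two stego pixels by averaging, can be sketched as follows. This is one plausible reading of the abstract rather than the paper's exact rules; pixel-range clamping and the underflow/overflow replacement are omitted.

```python
def embed(pixel, bits):
    """Embed k secret bits into a pair of stego pixels derived from one cover pixel."""
    k = len(bits)
    d = int("".join(map(str, bits)), 2)      # symbol in [0, 2**k)
    folded = d - 2 ** (k - 1)                # center folding: small magnitude
    p1 = pixel + (folded + 1) // 2           # ceil(folded / 2) for ints
    p2 = pixel - folded // 2                 # floor(folded / 2)
    return p1, p2

def extract(p1, p2, k):
    """Recover the cover pixel and the k embedded bits from the stego pair."""
    folded = p1 - p2                         # ceil + floor gives folded back
    d = folded + 2 ** (k - 1)
    pixel = (p1 + p2) // 2                   # averaging restores the cover pixel
    bits = [(d >> i) & 1 for i in reversed(range(k))]
    return pixel, bits

if __name__ == "__main__":
    for payload in ([0, 0, 0], [1, 1, 1], [1, 0, 1]):
        p1, p2 = embed(100, payload)
        assert extract(p1, p2, 3) == (100, payload)
        print(payload, "->", (p1, p2))
```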
score_0 … score_13: 1.052358, 0.037805, 0.035151, 0.035151, 0.021239, 0.001876, 0.000783, 0.000078, 0.000023, 0.000002, 0, 0, 0, 0
An introduction to OpenAccess: an open source data model and API for IC design The OpenAccess database provides a comprehensive open standard data model and robust implementation for IC design flows. This paper describes how it improves interoperability among applications in an EDA flow. It details how OA benefits developers of both EDA tools and flows. Finally, it outlines how OA is being used in the industry, at semiconductor design companies, EDA tool vendors, and universities.
Imagenet: A Large-Scale Hierarchical Image Database The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called "ImageNet", a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.
Toward Quality EDA Tools and Tool Flows Through High-Performance Computing As the scale and complexity of VLSI circuits increase, Electronic Design Automation (EDA) tools become much more sophisticated and are held to increasing standards of quality. New-generation EDA tools must work correctly on a wider range of inputs, have more internal states, take more effort to develop, and offer fertile ground for programming mistakes. Ensuring quality of a commercial tool in realistic design flows requires rigorous simulation, non-trivial computational resources, accurate reporting of results and insightful analysis. However, time-to-market pressures encourage EDA engineers and chip designers to look elsewhere. Thus, the recent availability of cheap Linux clusters and grids shifts the bottleneck from hardware to logistical tasks, i.e., the speedy collection, reporting and analysis of empirical results. To be practically feasible, such tasks must be automated; they leverage high-performance computing to improve EDA tools. In this work we outline a possible infrastructure solution, called bX, explore relevant use models and describe our computational experience. In a specific application, we use bX to automatically build Pareto curves required for accurate performance analysis of randomized algorithms.
Tachyon: Reliable, Memory Speed Storage for Cluster Computing Frameworks Tachyon is a distributed file system enabling reliable data sharing at memory speed across cluster computing frameworks. While caching today improves read workloads, writes are either network or disk bound, as replication is used for fault-tolerance. Tachyon eliminates this bottleneck by pushing lineage, a well-known technique, into the storage layer. The key challenge in making a long-running lineage-based storage system is timely data recovery in case of failures. Tachyon addresses this issue by introducing a checkpointing algorithm that guarantees bounded recovery cost and resource allocation strategies for recomputation under commonly used resource schedulers. Our evaluation shows that Tachyon outperforms in-memory HDFS by 110x for writes. It also improves the end-to-end latency of a realistic workflow by 4x. Tachyon is open source and is deployed at multiple companies.
A high-performance, portable implementation of the MPI message passing interface standard MPI (Message Passing Interface) is a specification for a standard library for message passing that was defined by the MPI Forum, a broadly based group of parallel computer vendors, library writers, and applications specialists. Multiple implementations of MPI have been developed. In this paper, we describe MPICH, unique among existing implementations in its design goal of combining portability with high performance. We document its portability and performance and describe the architecture by which these features are simultaneously achieved. We also discuss the set of tools that accompany the free distribution of MPICH, which constitute the beginnings of a portable parallel programming environment. A project of this scope inevitably imparts lessons about parallel computing, the specification being followed, the current hardware and software environment for parallel computing, and project management; we describe those we have learned. Finally, we discuss future developments for MPICH, including those necessary to accommodate extensions to the MPI Standard now being contemplated by the MPI Forum.
The ISPD global routing benchmark suite This paper describes the ISPD global routing benchmark suite and related contests. A total of 16 global routing benchmarks are produced from the ISPD placement contest benchmark suite using a variety of publicly available academic placement tools. The representative characteristics of the ISPD global routing benchmark suite include multiple metal layers with a layer assignment requirement, wire and via width/space modeling, and macro porosity modeling. The benchmarks have routable nets ranging from 200 thousand to 1.6 million. While primarily intended for global routing, they can certainly be extended for detailed routing or routing congestion estimation. In conjunction with the previous ISPD placement contest benchmark suite, the new global routing benchmarks will present realistic and challenging physical design problems of modern complex IC designs.
Generative programming: methods, tools, and applications
Continuation-Based Program Transformation Strategies
HOPE: An experimental applicative language An applicative language called HOPE is described and discussed. The underlying goal of the design and implementation effort was to produce a very simple programming language which encourages the construction of clear and manipulable programs. HOPE does not include an assignment statement; this is felt to be an important simplification. The user may freely define his own data types, without the need to devise a complicated encoding in terms of low-level types. The language is very strongly typed, and as implemented it incorporates a typechecker which handles polymorphic types and overloaded operators. Functions are defined by a set of recursion equations; the left-hand side of each equation includes a pattern used to determine which equation to use for a given argument. The availability of arbitrary higher-order types allows functions to be defined which 'package' recursion. Lazily-evaluated lists are provided, allowing the use of infinite lists which could be used to provide interactive input/output and concurrency. HOPE also includes a simple modularisation facility which may be used to protect the implementation of an abstract data type.
Verifying Correctness of Logic Programs We are convinced that logic programming needs specifications as much as traditional programming. For this reason, it is necessary to study also for logic programs how correctness with respect to a given specification can be asserted. Starting from Hogger's considerations on this topic, we supply a sufficient condition for completeness. Modularity and pre/post specifications of program modules are then discussed. We propose a sufficient condition for partial correctness with respect to a pre/post specification. Some small examples illustrate our technique.
Incremental Data-Flow Analysis An incremental update algorithm modifies the solution of a problem that has been changed, rather than re-solving the entire problem. ACINCF and ACINCB are incremental update algorithms for forward and backward data-flow analysis, respectively, based on our equations model of Allen-Cocke interval analysis. In addition, we have studied their performance on a “nontoy” structured programming language L. Given a set of localized program changes in a program written in L, we identify a priori the nodes in its flow graph whose corresponding data-flow equations may be affected by the changes. We characterize these possibly affected nodes by their corresponding program structures and their relation to the original change sites, and do so without actually performing the incremental updates. Our results can be refined to characterize the reduced equations possibly affected if structured loop exit mechanisms are used, either singly or together, thereby relating richness of programming language usage to the ease of incremental updating.
A data-hiding technique with authentication, integration, and confidentiality for electronic patient records. A data-hiding technique called the "bipolar multiple-number base" was developed to provide capabilities of authentication, integration, and confidentiality for an electronic patient record (EPR) transmitted among hospitals through the Internet. The proposed technique is capable of hiding those EPR related data such as diagnostic reports, electrocardiogram, and digital signatures from doctors or a hospital into a mark image. The mark image could be the mark of a hospital used to identify the origin of an EPR. Those digital signatures from doctors and a hospital could be applied for the EPR authentication. Thus, different types of medical data can be integrated into the same mark image. The confidentiality is ultimately achieved by decrypting the EPR related data and digital signatures with an exact copy of the original mark image. The experimental results validate the integrity and the invisibility of the hidden EPR related data. This newly developed technique allows all of the hidden data to be separated and restored perfectly by authorized users.
An improved lossless data hiding scheme based on image VQ-index residual value coding Copyright protection and information security have become serious problems due to the ever growing amount of digital data over the Internet. Reversible data hiding is a special type of data hiding technique that guarantees not only the secret data but also the cover media can be reconstructed without any distortion. Traditional schemes are based on spatial, discrete cosine transformation (DCT) and discrete wavelet transformation (DWT) domains. Recently, some vector quantization (VQ) based reversible data hiding schemes have been proposed. This paper proposes an improved reversible data hiding scheme based on VQ-index residual value coding. Experimental results show that our scheme outperforms two recently proposed schemes, namely side-match vector quantization (SMVQ)-based data hiding and modified fast correlation vector quantization (MFCVQ)-based data hiding.
New high-performance reversible data hiding method for VQ indices based on improved locally adaptive coding scheme. • We propose a new data hiding method using improved locally adaptive coding (ILAS). • ILAS uses the two-dimensional structure of the image and the correlation of adjacent indices. • Our proposed ILAS can significantly improve the performance of LAS. • Experiments show our method has a better performance compared with other methods.
1.2
0.2
0.2
0.2
0.1
0.006452
0
0
0
0
0
0
0
0
FLUTE: fast lookup table based wirelength estimation technique Wirelength estimation is an important tool to guide the design optimization process in early design stages. In this paper, we present a wirelength estimation technique called FLUTE. Our technique is based on a pre-computed lookup table to make wirelength estimation very fast and very accurate for low degree nets. We show experimentally that for FLUTE, RMST, and HPWL, the average errors in wirelength are 0.72%, 4.23%, and -8.71%, respectively, and the normalized runtimes are 1, 1.24, and 0.16, respectively.
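For a point of reference on the numbers above, here is a minimal sketch of the half-perimeter wirelength (HPWL) bound that FLUTE is compared against. The function name and pin format are illustrative assumptions; FLUTE itself instead returns near-optimal rectilinear Steiner tree lengths from its precomputed lookup table for low-degree nets.

```python
def hpwl(pins):
    """Half-perimeter wirelength of one net, given pins as (x, y) tuples.

    This is the cheap lower-bound estimate quoted above (about -8.71% average
    error against optimal Steiner trees); FLUTE looks up near-optimal
    rectilinear Steiner tree lengths instead.
    """
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

# Example: a 3-pin net spanning a 4 x 5 bounding box
print(hpwl([(0, 0), (4, 1), (2, 5)]))  # -> 9
```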
The future of interconnection technology Continuing advances in interconnection technology are seen as essential to continued improvements in integrated circuit performance. The recent introduction of copper metallization, dual-damascene processing, and fully articulated hierarchical wiring structures, along with the imminent introduction of low-dielectric-constant insulating materials, indicates an accelerating pace of innovation. Nevertheless, some authors have argued that such innovations will sustain chip-level performance improvements for only another generation or two. In light of this pessimism, current trends and probable paths in the future evolution of interconnection technology are reviewed. A simple model is developed and used to estimate future wiring requirements and to examine the value of further innovations in materials and architecture. As long as current trends continue, with memory arrays filling an increasing fraction of the total area of high-performance microprocessor chips, wiring need not be a performance limiter for at least another decade. Alternative approaches, such as optical interconnections on chip, have little to offer while the incremental elaboration of the traditional wiring systems is still rapidly advancing.
Probabilistic Congestion Prediction with Partial Blockages Fast and accurate routing congestion estimation is essential for optimizations such as floorplanning, placement, buffering, and physical synthesis that need to avoid routing congestion. Using a probabilistic technique instead of a global router has the advantage of speed and easy updating. Previously proposed probabilistic models (Lou, et al., 2001) (Westra et al., 2004) do not account for wiring that may already be fixed in the design, e.g., due to macro blocks or power rails. These "partial wiring blockages" certainly influence the global router, so they should also influence a probabilistic routing prediction algorithm. This work proposes a probabilistic congestion prediction metric that extends the work of (Westra et al., 2004) to model partial wiring blockages. The authors also show a new fast algorithm to efficiently generate the congestion map and demonstrate the effectiveness of our methods on real routing problems
A morphing approach to address placement stability Traditionally, research in global placement has focused on relatively few simple metrics, such as pure wirelength or routability estimates. However, in the real world today, designs are driven by not-so-simple issues such as timing and crosstalk. The future holds even more difficulties as physical models for devices and interconnects become increasingly complex and unpredictable. Adoption of an iterative methodology, where one incrementally fixes design errors, is a basic approach to tackling these problems. However, developers of placement algorithms have long neglected the need for a tool which can be easily adopted into an incremental design flow. We propose a novel placement approach called grid morphing, which is specifically tailored for an incremental approach to placement. In particular, our technique focuses on the stability of the placement, which is critical for minimization of perturbation of the final placement under changes applied to the input netlist. We compare the stability of our approach to existing placement tools, and show through experiments that our approach still delivers good results under traditional placement metrics.
Temperature-aware routing in 3D ICs Three-dimensional integrated circuits (3D ICs) provide an attractive solution for improving circuit performance. Such solutions must be embedded in an electrothermally-conscious design methodology, since 3D ICs generate a significant amount of heat per unit volume. In this paper, we propose a temperature-aware 3D global routing algorithm with insertion of "thermal vias" and "thermal wires" to lower the effective thermal resistance of the material, thereby reducing chip temperature. Since thermal vias and thermal wires take up lateral routing space, our algorithm utilizes sensitivity analysis to judiciously allocate their usage, and iteratively resolve contention between routing and thermal vias and thermal wires. Experimental results show that our routing algorithm can effectively reduce the peak temperature and alleviate routing congestion.
Mixed-size placement via line search We describe a remarkably simple yet very effective line search technique for cell placement. Our method "corrects" errors in force scaling by sampling different force weights in each iteration of placement and selecting the best candidate placements based on an objective function. Our technique is not only very fast, but it does away with the need for the ad hoc scaling that has plagued prior force-directed methods. We describe the implementation of our method within a multilevel flow and show that it can achieve good wire lengths with competitive run-times compared to other academic tools. Specifically, we produce placements with 12% and 15% better HPWL than FengShui 5.0 and Capo 9.1, respectively, on the ICCAD04 mixed-size benchmarks, while presenting run-times that are 37% faster than Capo 9.1.
Wirelength estimation based on rent exponents of partitioning and placement Wirelength estimation is one of the most important Rent's rule applications. Traditional Rent exponent extraction is based on recursive bipartitioning. However, the obtained exponent may not be appropriate for wirelength estimation. In this paper, we propose the concepts of partitioning-based Rent exponent and placement-based Rent exponent. The relationship between them is analyzed and empirically verified. Experiments on large circuits show that for wirelength estimation, the Rent exponent extracted from placement is more reasonable than that from partitioning.
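To make the extraction step concrete, the following is a minimal sketch of fitting a Rent exponent once per-region statistics are available, assuming the usual log-log least-squares fit of Rent's rule T = t * B^p. The function name, input format, and the use of numpy.polyfit are illustrative assumptions, not the paper's exact procedure for partitioning- versus placement-based exponents.

```python
import numpy as np

def fit_rent_exponent(block_sizes, terminal_counts):
    """Fit Rent's rule T = t * B^p in log-log space.

    block_sizes: average number of cells per region (obtained either from
    recursive partitioning or from placement-derived regions);
    terminal_counts: average number of external terminals per region.
    Returns (p, t), where p is the Rent exponent (the slope of the fit).
    """
    log_b = np.log(np.asarray(block_sizes, dtype=float))
    log_t = np.log(np.asarray(terminal_counts, dtype=float))
    p, log_k = np.polyfit(log_b, log_t, 1)
    return p, float(np.exp(log_k))

# Toy data roughly following T = 3 * B^0.6
sizes = [4, 16, 64, 256, 1024]
terms = [3 * b ** 0.6 for b in sizes]
print(fit_rent_exponent(sizes, terms))  # approximately (0.6, 3.0)
```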
Multilevel fixed-point-addition-based VLSI placement A placement problem can be formulated as a quadratic program with nonlinear constraints. Those constraints make the problem hard. Omitting the constraints and solving the unconstrained problem results in a placement with substantial cell overlaps. To remove the overlaps, we introduce fixed points into the unconstrained quadratic-programming formulation. Acting as pseudocells at fixed locations, they can be used to pull cells away from the dense regions to reduce overlapping. We present an in-depth study of the placement technique based on fixed-point addition and prove that fixed points are generalizations of constant additional forces used previously to eliminate cell overlaps. Experimental results on public-domain benchmarks show that the fixed-point-addition-based placer produces better results than the placer based on constant additional forces. We present an efficient multilevel placer based upon the fixed-point technique and demonstrate that it produces competitive results compared to the existing state-of-the-art placers.
Min-cut floorplacement Large macro blocks, predesigned datapaths, embedded memories, and analog blocks are increasingly used in application-specific integrated circuit (ASIC) designs. However, robust algorithms for large-scale placement of such designs have only recently been considered in the literature. Large macros can be handled by traditional floorplanning, but are harder to account for in min-cut and analytical placement. On the other hand, traditional floorplanning techniques do not scale to large numbers of objects, especially in terms of solution quality. The authors propose to integrate min-cut placement with fixed-outline floorplanning to solve the more general placement problem, which includes cell placement, floorplanning, mixed-size placement, and achieving routability. At every step of min-cut placement, either partitioning or wirelength-driven fixed-outline floorplanning is invoked. If the latter fails, the authors undo an earlier partitioning decision, merge adjacent placement regions, and refloorplan the larger region to find a legal placement for the macros. Empirically, this framework improves the scalability and quality of results for traditional wirelength-driven floorplanning. It has been validated on recent designs with embedded memories and accounts for routability. Additionally, the authors propose that free-shape rectilinear floorplanning can be used with rough module-area estimates before logic synthesis
MMP: a novel placement algorithm for combined macro block and standard cell layout design In this paper, an efficient mixed-mode placement algorithm called MMP is presented for the high performance mixed block and standard cell designs. Our approach combines the well-known quadratic placement with bottom-up clustering, as well as the slicing partitioning strategy. This approach can account for macro blocks and standard cells simultaneously. Our method is both very efficient and effective, while it can be run very fast, too. We have tested our algorithm on a set of sample circuits from industry and consistently obtained excellent results.
Software Technology in the 1990's: Using a New Paradigm
Program refinement by transformation Program maintenance is simplified when the program to be modified can be viewed as an abstract algorithm to which clearly documented implementation decisions have been applied to produce an efficient realization. The Harvard Program Development System (PDS) [8] is a programming support environment that encourages users to take this view of programs. A user of the PDS creates transformations that incorporate implementation choices, and the system uses these transformations to refine concrete programs from their abstract counterparts. In addition to simplifying maintenance, this method supports the use of notational extensions and the development of program families. We describe the transformation facilities available to the user of the PDS, and we discuss aspects of the implementation of these facilities.
Dynamic programming revisited: improving knapsack algorithms The contribution of this paper is twofold: At first an improved dynamic programming algorithm for the bounded knapsack problem is given. It decreases the running time for an instance with n items and capacity c to O(nc), which is the same pseudopolynomial complexity as usually given for the 0-1 knapsack problem. In the second part a general approach based on dynamic programming is presented to reduce the storage requirements for combinatorial optimization problems where it is computationally more expensive to compute the explicit solution structure than the optimal solution value. Among other applications of this scheme it is shown that the 0-1 knapsack problem as well as the bounded knapsack problem can be solved in O(nc) time and O(n + c) space.
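For context, here is the textbook O(nc)-time, O(c)-space dynamic program for the 0-1 knapsack, i.e., the baseline complexity the abstract refers to. This is a generic sketch, not the paper's improved bounded-knapsack algorithm nor its space-reduction scheme for recovering the solution structure.

```python
def knapsack_01(values, weights, capacity):
    """Maximum value of a 0-1 knapsack with integer weights and capacity.

    Classic O(n*c) time / O(c) space DP over capacities; the reverse scan
    over c ensures each item is used at most once.
    """
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

print(knapsack_01([60, 100, 120], [10, 20, 30], 50))  # -> 220
```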
GIS Spatial Data Updating Algorithm Based on Digital Watermarking Technology
1.009761
0.010683
0.008386
0.006597
0.004668
0.003302
0.002332
0.001222
0.000425
0.000132
0
0
0
0
Active shape models—their training and application Model-based vision is firmly established as a robust approach to recognizing and locating known rigid objects in the presence of noise, clutter, and occlusion. It is more problematic to apply model-based methods to images of objects whose appearance can vary, though a number of approaches based on the use of flexible templates have been proposed. The problem with existing methods is that they sacrifice model specificity in order to accommodate variability, thereby compromising robustness during image interpretation. We argue that a model should only be able to deform in ways characteristic of the class of objects it represents. We describe a method for building models by learning patterns of variability from a training set of correctly annotated images. These models can be used for image search in an iterative refinement algorithm analogous to that employed by Active Contour Models (Snakes). The key difference is that our Active Shape Models can only deform to fit the data in ways consistent with the training set. We show several practical examples where we have built such models and used them to locate partially occluded objects in noisy, cluttered images.
A frequency domain approach to registration of aliased images with application to super-resolution Super-resolution algorithms reconstruct a high-resolution image from a set of low-resolution images of a scene. Precise alignment of the input images is an essential part of such algorithms. If the low-resolution images are undersampled and have aliasing artifacts, the performance of standard registration algorithms decreases. We propose a frequency domain technique to precisely register a set of aliased images, based on their low-frequency, aliasing-free part. A high-resolution image is then reconstructed using cubic interpolation. Our algorithm is compared to other algorithms in simulations and practical experiments using real aliased images. Both show very good visual results and prove the attractivity of our approach in the case of aliased input images. A possible application is to digital cameras where a set of rapidly acquired images can be used to recover a higher-resolution final image.
Robust fusion of irregularly sampled data using adaptive normalized convolution We present a novel algorithm for image fusion from irregularly sampled data. The method is based on the framework of normalized convolution (NC), in which the local signal is approximated through a projection onto a subspace. The use of polynomial basis functions in this paper makes NC equivalent to a local Taylor series expansion. Unlike the traditional framework, however, the window function of adaptive NC is adapted to local linear structures. This leads to more samples of the same modality being gathered for the analysis, which in turn improves signal-to-noise ratio and reduces diffusion across discontinuities. A robust signal certainty is also adapted to the sample intensities to minimize the influence of outliers. Excellent fusion capability of adaptive NC is demonstrated through an application of super-resolution image reconstruction.
A comparative analysis of image fusion methods There are many image fusion methods that can be used to produce high-resolution multispectral images from a high-resolution panchromatic image and low-resolution multispectral images. Starting from the physical principle of image formation, this paper presents a comprehensive framework, the general image fusion (GIF) method, which makes it possible to categorize, compare, and evaluate the existing...
Multisensor Remote Sensing Image Fusion Using Stationary Wavelet Transform: Effects Of Basis And Decomposition Level Stationary wavelet transform is an efficient algorithm for remote sensing image fusion. In this paper, we investigate the effects of orthogonal/ biorthogonal filters and decomposition depth on using stationary wavelet analysis for fusion. Spectral discrepancy and spatial distortion are used as quality measures. Empirical results lead to some recommendations on the wavelet filter parameters for use in remote sensing image fusion applications.
An Efficient Block Encryption Cipher Based on Chaotic Maps for Secure Multimedia Applications This paper presents an efficient chaotic-based block encryption cipher (CBBEC), which is a symmetric encryption algorithm designed with the potential of increasing security and improving performance. It is a hybrid mixture of chaos theory and cryptography. Its salient features include the use of eight working registers providing the capability to handle 256-bit plaintext/ciphertext block sizes, employing a chaotic logistic map in key scheduling to generate the session key used in encryption/decryption, the essential heavy use of data-dependent rotations and the inclusion of integer multiplication as an additional primitive operation. The use of multiplication with eight working registers greatly increases the diffusion achieved per round, allowing for greater security, fewer rounds and increased throughput. Comparative performance evaluation of the proposed chaotic-based block encryption cipher CBBEC with Rijndael, RC6 and RC5 is addressed. Experimental results show that the proposed CBBEC is a fast block cipher, showing its great potential in real-time encryption applications.
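To illustrate the chaotic key-scheduling idea only, here is a toy logistic-map keystream sketch. The parameter choices, byte quantization, and function name are assumptions for illustration; this is not CBBEC's actual key schedule and is not cryptographically vetted.

```python
def logistic_keystream(x0, r=3.99, nbytes=16, burn_in=200):
    """Iterate the chaotic logistic map x_{n+1} = r * x_n * (1 - x_n)
    and quantize iterates to bytes.

    x0 in (0, 1) plays the role of a secret seed; burn_in discards the
    transient. The quantization and parameters here are illustrative only
    and NOT a secure key schedule.
    """
    x = x0
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    out = bytearray()
    for _ in range(nbytes):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

print(logistic_keystream(0.3141592653).hex())
```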
Segmentation of the left ventricle of the heart in 3-D+t MRI data using an optimized nonrigid temporal model. Modern medical imaging modalities provide large amounts of information in both the spatial and temporal domains and the incorporation of this information in a coherent algorithmic framework is a significant challenge. In this paper, we present a novel and intuitive approach to combine 3-D spatial and temporal (3-D + time) magnetic resonance imaging (MRI) data in an integrated segmentation algorith...
Improving resolution by image registration Image resolution can be improved when the relative displacements in image sequences are known accurately, and some knowledge of the imaging process is available. The proposed approach is similar to back-projection used in tomography. Examples of improved image resolution are given for gray-level and color images, when the unknown image displacements are computed from the image sequence.
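The following is a minimal sketch of iterative back-projection in the spirit of the tomography-like approach described above, under strong simplifying assumptions: integer-pixel displacements and a plain box-averaging imaging model. The paper handles sub-pixel displacements and a proper blur model; the function name and parameters are illustrative.

```python
import numpy as np

def iterative_back_projection(lr_imgs, shifts, scale, iters=20, step=0.5):
    """Reconstruct a high-resolution image from shifted low-resolution frames.

    lr_imgs: list of (h, w) arrays; shifts: integer (dy, dx) offsets of each
    frame measured in high-resolution pixels; scale: integer upsampling factor.
    The imaging model here is box averaging with whole-pixel shifts, which is
    far simpler than the sub-pixel model used in the paper.
    """
    h, w = lr_imgs[0].shape
    hr = np.kron(lr_imgs[0].astype(float), np.ones((scale, scale)))  # initial guess
    for _ in range(iters):
        correction = np.zeros_like(hr)
        for lr, (dy, dx) in zip(lr_imgs, shifts):
            shifted = np.roll(hr, (-dy, -dx), axis=(0, 1))
            simulated = shifted.reshape(h, scale, w, scale).mean(axis=(1, 3))
            err = lr - simulated                                  # residual in LR space
            up = np.kron(err, np.ones((scale, scale))) / (scale * scale)
            correction += np.roll(up, (dy, dx), axis=(0, 1))      # back-project
        hr += step * correction / len(lr_imgs)
    return hr
```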
Tailored reversible watermarking schemes for authentication of electronic clinical atlas. It is accepted that digital watermarking is quite relevant in medical imaging. However, due to the special nature of clinical practice, it is often required that watermarking not introduce irreversible distortions to medical images. The electronic clinical atlas has such a need of "lossless" watermarking. We present two tailored reversible watermarking schemes for the clinical atlas by exploiting its inherent characteristics. We have implemented the schemes and our experimental results look very promising.
Low-density parity-check codes A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number j of 1's and each row contains a small fixed number k of 1's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed j. When used with maximum likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed j. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For j ≥ 3 and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound.
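To make the column/row structure concrete, here is a toy regular parity-check matrix together with a syndrome check. The specific matrix (j = 2 ones per column, k = 4 per row) is an illustrative assumption; real LDPC codes use much larger, sparser matrices, and the decoding schemes analyzed in the paper are not shown.

```python
import numpy as np

# Toy regular parity-check matrix: every column has j = 2 ones,
# every row has k = 4 ones (real codes are far larger and sparser).
H = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1, 1, 1],
              [1, 0, 1, 0, 1, 0, 1, 0],
              [0, 1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

def is_codeword(x):
    """A word x is in the code iff every parity check (row of H) sums to even."""
    return not np.any((H @ np.asarray(x, dtype=np.uint8)) % 2)

print(is_codeword([0] * 8))                   # True: the all-zero word
print(is_codeword([1, 1, 1, 1, 0, 0, 0, 0]))  # True: satisfies every row
print(is_codeword([1, 0, 0, 0, 0, 0, 0, 0]))  # False: first check fails
```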
Automatic evaluation of summaries using N-gram co-occurrence statistics Following the recent adoption by the machine translation community of automatic evaluation using the BLEU/NIST scoring process, we conduct an in-depth study of a similar idea for evaluating summaries. The results show that automatic evaluation using unigram co-occurrences between summary pairs correlates surprisingly well with human evaluations, based on various statistical metrics; while direct application of the BLEU evaluation procedure does not always give good results.
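As a quick illustration of the unigram co-occurrence statistic, here is a minimal recall-style sketch over whitespace tokens. Real ROUGE-style evaluation additionally handles stemming, stopwords, multiple references, and higher-order n-grams; the function name and tokenization are assumptions.

```python
from collections import Counter

def unigram_cooccurrence_recall(candidate, reference):
    """Clipped unigram overlap between a candidate summary and one reference,
    divided by the reference length (a recall-oriented co-occurrence score)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(count, cand[tok]) for tok, count in ref.items())
    return overlap / max(1, sum(ref.values()))

print(unigram_cooccurrence_recall(
    "the cat sat on the mat",
    "the cat lay on the mat"))  # -> 5/6 ~= 0.833
```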
Efficient Reductants Calculi using Partial Evaluation Techniques with Thresholding Reductants are a useful theoretical tool introduced for proving correctness properties in the context of generalized annotated logic programming. This concept was adapted to the more recent and flexible framework of multi-adjoint logic programming for solving a problem of incompleteness that arises when working with some lattices. In order to be complete, multi-adjoint logic programs must be extended with their set of reductants. In general, the notion of reductant may introduce an important efficiency drawback. In this work we provide a more refined version of this concept that we call PE-reductant, by using (threshold) partial evaluation techniques. Our proposal is intended to be semantically equivalent to the classical notion of reductant, and improves previous approaches at least in the following two efficiency criteria. Firstly, using the new definition of reductant, we can obtain computed answers for a given goal with a lesser computational effort than by using its precedent ones. Secondly, the proper construction of a reductant by means of partial evaluation methods, is drastically improved after introducing thresholding techniques which dynamically reduce the size of the underlying unfolding trees.
EEFF: a cross-layer designed energy efficient fast forwarding protocol for wireless sensor networks Most of the proposed communication protocols for wireless sensor networks focus on the energy efficiency. However, it often brings poor latency performance. In this paper, we develop a novel cross-layer designed asynchronous protocol EEFF (Energy Efficient and Fast Forwarding) for wireless sensor networks regarding low latency and energy efficiency. EEFF improves low power listening approach and introduces dynamical routing selection to reduce the latency. We evaluate the performance of the EEFF protocol in a testbed of 16 MICAZ motes and perform larger scale experiments with NS-2. These experiments demonstrate that EEFF significantly improves the latency and energy performance compared with state-of-art asynchronous protocols, and it's especially suitable for large scale dense wireless sensor networks.
Reversible data hiding based on local histogram shifting with multilayer embedding Highlights: We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of the embedded cover image to ensure the security of the secret data. The multilayer embedding of a pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting, while the peak point itself remains unchanged. At the extraction end, we no longer need the key information about the peak point; we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of the embedded cover image become almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
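For intuition about the embedding primitive, here is a minimal sketch of classic single-peak/zero-bin histogram shifting. It is not the method above, which instead embeds into the two neighbors of the peak, keeps the peak bin unchanged, and adds localization plus multilayer embedding; the zero-bin assumption and names are illustrative.

```python
import numpy as np

def hs_embed(img, bits):
    """Embed a bit list into an 8-bit grayscale image by histogram shifting.

    Assumes a completely empty histogram bin exists (otherwise a location map
    would be required); returns the marked image plus the (peak, zero) pair
    that the extractor needs.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())
    zero = int(hist.argmin())
    if hist[zero] != 0:
        raise ValueError("no empty bin: a location map would be required")
    out = img.astype(np.int32)
    step = 1 if zero > peak else -1
    between = (out * step > peak * step) & (out * step < zero * step)
    out[between] += step                  # empty the bin right next to the peak
    flat = out.ravel()
    pos = np.flatnonzero(flat == peak)    # carrier pixels sit at the peak value
    if len(bits) > len(pos):
        raise ValueError("payload exceeds capacity")
    flat[pos[:len(bits)]] += step * np.asarray(bits, dtype=np.int32)
    return flat.reshape(img.shape).astype(np.uint8), peak, zero
```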
1.035232
0.04
0.04
0.04
0.04
0.04
0.033839
0.021198
0.00001
0
0
0
0
0
Abacus: fast legalization of standard cell circuits with minimal movement Standard cell circuits consist of millions of standard cells, which have to be aligned overlap-free to the rows of the chip. Placement of these circuits is done in consecutive steps. First, a global placement is obtained by roughly spreading the cells on the chip, while considering all relevant objectives like wirelength, and routability. After that, the global placement is legalized, i.e., the cell overlap is removed, and the cells are aligned to the rows. To preserve the result of global placement, cells should be moved as little as possible during legalization. This paper presents "Abacus", which is a fast approach to legalize standard cell circuits with minimal movement. The approach is based on sorting the cells according to their position first, and legalizing the cells one at a time then. Legalizing one cell is done by moving the cell from row to row until the optimal place with the lowest movement is found. Whenever a cell is moved to a row, the cells already aligned to the row are placed by dynamic programming to minimize their total movement. Therefore, our approach Abacus moves already legalized cells during legalization. In contrast to this, Tetris [1], which uses a similar legalization technique, does not move already legalized cells. Consequently, the average movement is about 30% lower in Abacus than in Tetris. On the other hand, the CPU time of the whole placement process is increased by only 7% with our legalization approach. Applying Abacus to routability-driven placement results in 1% improvement in routed wirelength.
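A condensed sketch of the per-row placement step that gives this style of legalization its minimal-movement property follows, written with the common cluster-merging formulation (the abstract describes it as dynamic programming). The outer loop that tries multiple candidate rows per cell, obstacle handling, and the exact cost weighting are omitted, and the data layout is an assumption for illustration.

```python
def place_row(cells, x_min, x_max):
    """Legalize one row: cells = [(desired_x, width, weight), ...] ordered
    left to right by desired_x. Returns a legal x for every cell, minimizing
    total weighted quadratic movement within the row."""
    clusters = []  # each cluster: x (start), e (weight), q (accumulator), w (width), members

    def position(c):
        c["x"] = min(max(c["q"] / c["e"], x_min), x_max - c["w"])

    for gx, w, e in cells:
        c = {"e": e, "q": e * gx, "w": w, "members": [(gx, w)]}
        clusters.append(c)
        position(c)
        # merge backwards while the newest cluster overlaps its predecessor
        while len(clusters) > 1 and clusters[-2]["x"] + clusters[-2]["w"] > clusters[-1]["x"]:
            c = clusters.pop()
            p = clusters[-1]
            p["q"] += c["q"] - c["e"] * p["w"]
            p["e"] += c["e"]
            p["w"] += c["w"]
            p["members"] += c["members"]
            position(p)

    legal_x = []
    for c in clusters:
        x = c["x"]
        for _, w in c["members"]:
            legal_x.append(x)
            x += w
    return legal_x

# Three unit-weight cells of width 2 that all want to sit near x = 5
print(place_row([(4.0, 2, 1.0), (5.0, 2, 1.0), (6.0, 2, 1.0)], 0.0, 20.0))  # -> [3.0, 5.0, 7.0]
```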
PRICE: power reduction by placement and clock-network co-synthesis for pulsed-latch designs Pulsed latches have emerged as a popular technique to reduce the power consumption and delay for clock networks. However, the current physical synthesis flow for pulsed latches still performs circuit placement and clock-network synthesis separately, which limits achievable power reduction. This paper presents the first work in the literature to perform placement and clock-network co-synthesis for pulsed-latch designs. With the interplay between placement and clock-network synthesis, the clock-network power and timing can be optimized simultaneously. Novel progressive network forces are introduced to globally guide the placer for iterative improvements, while the clock-network synthesizer makes use of updated latch locations to optimize power and timing locally. Experimental results show that our framework can substantially minimize power consumption and improve timing slacks, compared to existing synthesis flows.
Fast legalization for standard cell placement with simultaneous wirelength and displacement minimization Legalization is one of the most critical steps in modern placement designs. Since several objectives like wirelength, routability, or temperature are already optimized in global placement stage, the objective of legalization is not only to align the cells overlap-free to the rows, but also to preserve the solution of global placement, i.e., the displacement of cells needs to be minimized. However, minimizing displacement only is not enough for current timing-driven SoC designs. Blind displacement minimization may increase the half-perimeter wirelength (HPWL) of nets significantly that degrades the chip performance. In this paper, we propose a fast legalization algorithm for standard cell placement with simultaneous wirelength and displacement minimization. The main contributions of our work are: (1) a fast row selection technique by using k-medoid clustering approach; (2) an exact linear wirelength model to minimize both wirelength and total displacement; (3) a constant time approach to determine the median in trial placement stage. Compared with the state-of-the-art legalization algorithms, experimental results show that our legalizer acquires much better achievement in terms of HPWL, total and maximum displacements, and running time on legalized NTUplace3 global placement results on both ISPD 2005 and 2006 placement contest benchmarks.
Activity and register placement aware gated clock network design Clock gating is one of the most effective techniques to reduce clock network power dissipation. Although it has already been studied considerably, most of the previous works are restricted to either logic level or clock routing stage. Due to the restriction, clock gating often meets the trouble of wirelength overhead and frequent control signal switching, both of which degrade its effectiveness. Furthermore, previous design flows which insert gate logics after placement introduce a lot of overlaps, especially when there are lots of gate logics inserted. In this work, we propose a new design flow for low power gated clock network construction, in order to minimize the clock wirelength and the activity of control signals, and to eliminate the overlaps incurred by the gate logics. Our method begins with a coarse placement followed by soft register clustering. Then, we perform clock tree topology construction and zero skew clock routing to further reduce the power and the clock skew. Last, the gated clock network is fed back to the placer for incremental placement. Experimental results on ISCAS89 benchmarks demonstrate that our method outperforms previous algorithm of activity aware register placement in clock wirelength and clock power reduction with signal nets wirelength and signal nets power increase within 5% and 3%, respectively
BonnPlace Legalization: Minimizing Movement by Iterative Augmentation We describe BONNPLACELEGAL, an algorithm for VLSI placement legalization. Based on a minimum-cost flow algorithm that iteratively augments flows along paths, our approach ensures that only augmentations are considered that can be realized exactly by cell movements. Hence, this method avoids realization problems that are inherent to previous flow-based legalization algorithms. As a result, it combines the global perspective of minimum-cost flow approaches with the efficiency of local search algorithms. The tool is mainly designed to minimize total and maximum cell movement, but it is flexible enough to optimize other objective functions provided that the effect of single cell movements on them can be estimated efficiently. We compare our approach to legalization tools from industry and academia by experiments on dense recent real-world designs and public benchmarks. The results show that we are much faster and produce significantly better results in terms of average (linear and quadratic) and maximum movement than any other tool. The experiments also demonstrate that by minimizing squared movement we also produce a smaller increase in net length than the other tools.
Logical and physical restructuring of fan-in trees A symmetric-function fan-in tree (SFFT) is a fanout-free cone of logic that computes a symmetric function, so that all of the leaf nets in its support set are commutative. Such trees are frequently found in designs, especially when the design originated as two-level logic. These trees are usually created during logic synthesis, when there is no knowledge of the locations of the tree root or of the source gates of the leaf nets. Because of this, large SFFTs present a challenge to placement algorithms. The result is that the tree placements are generally far from optimal, leading to wiring congestion, excess buffering, and timing problems. Restructuring such trees can produce a more placeable and wire-efficient design. In this paper, we propose algorithms to identify and to restructure SFFTs during physical design. The key feature of an SFFT is that it can be implemented with various structures of a uniform set of gates with commutative inputs, i.e. AND, OR, or XOR. Drawing on the flexibility of SFFT logic structures, the proposed tree restructuring algorithm uses existing placement information to rebuild the SFFTs with reduced tree wire lengths. The experimental results demonstrate the efficiency and effectiveness of the algorithms.
Closing the Gap between Global and Detailed Placement: Techniques for Improving Routability Improving routability during both global and detailed routing stage has become a critical problem in modern VLSI design. In this work, we propose a placement framework that offers a complete coverage solution in considering both global and detailed routing congestion. A placement migration strategy is proposed, which improves detailed routing congestion while preserving the placement integrity that is optimized for global routability. Using the benchmarks released from ISPD2014 Contest, practical design rules in advanced node design are considered in our placement framework. Evaluation on routability of our placement framework is conducted using commercial router provided by the 2014 ISPD Contest organizers. Experimental results show that the proposed methodologies can effectively improve placement solutions for both global and detailed router.
Ripple: A Robust and Effective Routability-Driven Placer The significant mismatch between the objective of wirelength and routing congestion makes the routability issue even more important in placement. In this paper, we describe a routability-driven placer called Ripple. Each step, including global placement, legalization, and detailed placement, is made to trade-off between routability and wirelength. We propose a robust and effective flow by using cell inflation to relieve routing congestion. Cell inflation has traditionally been used to deal with congestion and we will discuss how this technique can be used easily and robustly in the global placement. Besides, unlike many previous works that focus on different types of swapping strategies, we analyze and propose some simple and effective approaches when considering routability in the legalization and detailed placement steps. Experimental results show that Ripple is particularly effective in improving routability. When compared to the top results in the ISPD 2011 Contest and SimPLR, Ripple can obtain the smallest overflow and half-perimeter wirelength on average, while the congestion hot spots are also distributed sparsely in Ripple.
Routability-Driven Placement and White Space Allocation We present a two-stage congestion-driven placement flow. First, during each refinement stage of our multilevel global placement framework, we replace cells based on the wirelength weighted by congestion level to reduce the routing demands of congested regions. Second, after the global placement stage, we allocate appropriate amounts of white space into different regions of the chip according to a congestion map by shifting cut lines in a top-down fashion and apply a detailed placer to legalize the placement and further reduce the half-perimeter wirelength while preserving the distribution of white space. Experimental results show that our placement flow can achieve the best routability with the shortest routed wirelength among publicly available placement tools on IBM v2 benchmarks. Our placer obtains 100% successful routings on 16 IBM v2 benchmarks with shorter routed wirelengths by 3.1% to 24.5% compared to other placement tools. Moreover, our white space allocation approach can significantly improve the routability of placements generated by other placement tools
Calibration of rent's rule models for three-dimensional integrated circuits In this paper, we determine the accuracy of Rahman's interconnect prediction model for three-dimensional (3-D) integrated circuits. Utilizing this model, we calculate the wiring requirement for a set of benchmark standard-cell circuits. We then obtain placed and routed wirelength figures for these circuits using 3-D standard-cell placement and global-routing tools we have developed. We find that the Rahman model predicts wirelengths accurately (to within 20% of placement and of routing, on average), and suggest some areas for minor improvement to the model.
Interleaved S+P pyramidal decomposition with refined prediction model Scalability and other functionalities such as Region of Interest encoding become essential properties of an efficient image coding scheme. Within the framework of lossless compression techniques, S+P and CALIC represent the state-of-the-art. The proposed Interleaved S+P algorithm outperforms these methods while providing the desired properties. Based on the LAR (Locally Adaptive Resolution) method, an original pyramidal decomposition combined with a DPCM scheme is elaborated. This solution uses the S-transform in such a manner that a refined prediction context is available for each estimation step. The image coding is done in two main steps, so that the first one supplies a LAR low-resolution image of good visual quality, and the second one allows a lossless reconstruction. The method exploits an implicit context modelling, intrinsic property of our content-based quad-tree like representation.
Integer DCT-based reversible watermarking for images using companding technique We present a high capacity reversible watermarking scheme using companding technique over integer DCT coefficients of image blocks. This scheme takes advantage of integer DCT coefficients' Laplacian-shape-like distribution, which permits low distortion between the watermarked image and the original one caused by the bit-shift operations of the companding technique in the embedding process. In our scheme, we choose AC coefficients in the integer DCT domain for the bit-shift operation, and therefore the capacity and the quality of the watermarked image can be adjusted by selecting different numbers of coefficients of different frequencies. To prevent overflows and underflows in the spatial domain caused by modification of the DCT coefficients, we design a block discrimination structure to find suitable blocks that can be used for embedding without overflow or underflow problems. We can also use this block discrimination structure to embed an overhead of location information of all blocks suitable for embedding. With this scheme, watermark bits can be embedded in the saved LSBs of coefficient blocks, and retrieved correctly during extraction, while the original image can be restored perfectly.
Homomorphic Encryption from Learning with Errors: Conceptually-Simpler, Asymptotically-Faster, Attribute-Based. We describe a comparatively simple fully homomorphic encryption (FHE) scheme based on the learning with errors (LWE) problem. In previous LWE-based FHE schemes, multiplication is a complicated and expensive step involving "relinearization". In this work, we propose a new technique for building FHE schemes that we call the approximate eigenvector method. In our scheme, for the most part, homomorphic addition and multiplication are just matrix addition and multiplication. This makes our scheme both asymptotically faster and (we believe) easier to understand. In previous schemes, the homomorphic evaluator needs to obtain the user's "evaluation key", which consists of a chain of encrypted secret keys. Our scheme has no evaluation key. The evaluator can do homomorphic operations without knowing the user's public key at all, except for some basic parameters. This fact helps us construct the first identity-based FHE scheme. Using similar techniques, we show how to compile a recent attribute-based encryption scheme for circuits by Gorbunov et al. into an attribute-based FHE scheme that permits data encrypted under the same index to be processed homomorphically.
GIS Spatial Data Updating Algorithm Based on Digital Watermarking Technology
1.008811
0.013798
0.012074
0.010486
0.008268
0.006981
0.004605
0.002748
0.001126
0.000159
0
0
0
0
The first 50 years of electronic watermarking Electronic watermarking can be traced back as far as 1954. The last 10 years has seen considerable interest in digital watermarking, due, in large part, to concerns about illegal piracy of copyrighted content. In this paper, we consider the following questions: is the interest warranted? What are the commercial applications of the technology? What scientific progress has been made in the last 10 years? What are the most exciting areas for research? And where might the next 10 years take us? In our opinion, the interest in watermarking is appropriate. However, we expect that copyright applications will be overshadowed by applications such as broadcast monitoring, authentication, and tracking content distributed within corporations. We further see a variety of applications emerging that add value to media, such as annotation and linking content to the Web. These latter applications may turn out to be the most compelling. Considerable progress has been made toward enabling these applications--perceptual modelling, security threats and countermeasures, and the development of a bag of tricks for efficient implementations. Further progress is needed in methods for handling geometric and temporal distortions. We expect other exciting developments to arise from research in informed watermarking.
Security evaluation of informed watermarking schemes In this paper, security evaluation of an important watermarking class based on quantization is given. Theoretical developments and practical simulations are used to measure the security level of watermarking techniques. We give the contribution of each observation available to the attacker on the total gathered information about the watermarking secrecy. By exploiting watermarking technique weaknesses, we find that using Quantization Index Modulation (QIM) with a continuous secret key is equivalent to using a secret key with two states. The latter is easier to estimate than a continuous key. Then, we propose a secure version of a classical trellis coded quantization watermarking. The security is guaranteed thanks to the trellis path generated from the discrete key and the message. We show that the spread transform can represent a second or alternative security level for watermarking systems. It allows us to increase the watermarking security level and to keep the embedded message hard to read for unauthorised users.
Content-based image watermarking via public-key cryptosystems Digital watermarking is a technique to insert an information-carrying digital signature into a digital media so that the signature can be extracted for variety of purposes including ownership authentication and content verification. We examine the weaknesses against common watermarking attacks of blockwise independent and content-based watermarking algorithms for image integrity verification, and implement a new and more secure invisible fragile public-key watermarking algorithm for color or grayscale images that increases the message digest size from the proposed 64 to 128 bits using the same small-size blocks and maintaining high-quality watermarked images and accurate localization of image changes. Our watermarking technique is capable to detect any changes made to the image since the time it was stamped, any changes to the pixel values and also to the dimensions of the image will be automatically detected and localized. Our scheme consists of a watermark-insertion process that uses a private key to embed a watermark image into a cover image, and a watermark-extraction process that uses a public key to extract the watermark from the watermarked image. The embedded watermark can only be extracted by someone who has possession of a proper verification key.
Security of ownership watermarking of digital images based on singular value decomposition Recently watermarking algorithms of digital images based on singular value decomposition (SVD) have been proposed. Most SVD-based watermarking techniques use singular values as the embedding watermark information. These SVD-based techniques are advantageous for watermarking images since slight changes in the singular values do not significantly affect the image quality. However, it has been reported that these SVD-based watermarking algorithms, as implemented, suffer from a very high probability of false positive detections of watermarks. The false positive detection vulnerability of two recent SVD-based watermarking schemes is exposed. Two solutions are proposed to mitigate this vulnerability without changing the design principle of these watermarking algorithms.
Capacity of Steganographic Channels This work investigates a central problem in steganography, that is: How much data can safely be hidden without being detected? To answer this question, a formal definition of steganographic capacity is presented. Once this has been defined, a general formula for the capacity is developed. The formula is applicable to a very broad spectrum of channels due to the use of an information-spectrum approach. This approach allows for the analysis of arbitrary steganalyzers as well as nonstationary, nonergodic encoder and attack channels. After the general formula is presented, various simplifications are applied to gain insight into example hiding and detection methodologies. Finally, the context and applications of the work are summarized in a general discussion.
A capacity estimation technique for JPEG-to-JPEG image watermarking In JPEG-to-JPEG image watermarking (J2J), the input is a JPEG image file. After watermark embedding, the image is JPEG-compressed such that the output file is also a JPEG file. We use the human visual system (HVS) model to estimate the J2J data hiding capacity of JPEG images, or the maximum number of bits that can be embedded in JPEG-compressed images. A.B. Watson's HVS model (Proc. SPIE Human Vision, Visual Process., and Digital Display IV, p.202-16, 1993) is modified to estimate the just noticeable difference (JND) for DCT coefficients. The number of modifications to DCT coefficients is limited by JND in order to guarantee the invisibility of the watermark. Our capacity estimation method does not assume any specific watermarking method and thus would apply to any watermarking method in the J2J framework.
Digital Watermarking for ROI Medical Images by Using Compressed Signature Image In medical images, the ROI (Region Of Interest) is an area which contains important information and must be stored without any distortion. We propose a digital watermarking method which avoids distortion of the image data in the ROI by embedding signature information into areas other than the ROI. A signature image compressed by a progressive coding algorithm is used as the signature information. The most significant information of the signature is embedded in the area nearest to the ROI. The proposed method can detect the signature image with moderate quality from a clipped image including the ROI. Furthermore, by dividing the contour of the ROI into several regions and embedding the signature information in the regions respectively, the signature image with moderate quality can be acquired from a clipped image including only part of the ROI.
Reliable detection of LSB steganography in color and grayscale images A large number of commercial steganographic programs use the Least Significant Bit embedding (LSB) as the method of choice for message hiding in 24-bit, 8-bit color images, and grayscale images. It is commonly believed that changes to the LSBs of colors cannot be detected due to noise that is always present in digital images. In this paper, we describe a new very accurate and reliable method that can detect LSB embedding in randomly scattered pixels in both 24-bit color images and 8-bit grayscale or color images. It is based on our previous work on lossless data embedding [1]. By inspecting the differences in the number of regular and singular groups for the LSB and the "shifted LSB plane", we can reliably detect messages as short as 0.03bpp.
Context-based, adaptive, lossless image coding We propose a context-based, adaptive, lossless image codec (CALIC). The codec obtains higher lossless compression of continuous-tone images than other lossless image coding techniques in the literature. This high coding efficiency is accomplished with relatively low time and space complexities. The CALIC puts heavy emphasis on image data modeling. A unique feature of the CALIC is the use of a large number of modeling contexts (states) to condition a nonlinear predictor and adapt the predictor to varying source statistics. The nonlinear predictor can correct itself via an error feedback mechanism by learning from its mistakes under a given context in the past. In this learning process, the CALIC estimates only the expectation of prediction errors conditioned on a large number of different contexts rather than estimating a large number of conditional error probabilities. The former estimation technique can afford a large number of modeling contexts without suffering from the context dilution problem of insufficient counting statistics as in the latter approach, nor from excessive memory use. The low time and space complexities are also attributed to efficient techniques for forming and quantizing modeling contexts
Methods for image authentication: a survey Image authentication techniques have recently gained great attention due to its importance for a large number of multimedia applications. Digital images are increasingly transmitted over non-secure channels such as the Internet. Therefore, military, medical and quality control images must be protected against attempts to manipulate them; such manipulations could tamper the decisions based on these images. To protect the authenticity of multimedia images, several approaches have been proposed. These approaches include conventional cryptography, fragile and semi-fragile watermarking and digital signatures that are based on the image content. The aim of this paper is to present a survey and a comparison of emerging techniques for image authentication. Methods are classified according to the service they provide, that is strict or selective authentication, tamper detection, localization and reconstruction capabilities and robustness against different desired image processing operations. Furthermore, we introduce the concept of image content and discuss the most important requirements for an effective image authentication system design. Different algorithms are described and we focus on their comparison according to the properties cited above.
Reversible watermarking based on invariability and adjustment on pixel pairs A novel reversible data hiding scheme based on invariability of the sum of pixel pairs and pairwise difference adjustment (PDA) is presented in this letter. For each pixel pair, if a certain value is added to one pixel while the same value is subtracted from the other, then the sum of these two pixels will remain unchanged. How to properly select this value is the key issue for the balance between reversibility and distortion. In this letter, half the difference of a pixel pair plus 1-bit watermark has been elaborately selected to satisfy this purpose. In addition, PDA is proposed to significantly reduce the capacity consumed by overhead information. A series of experiments is conducted to verify the effectiveness and advantages of the proposed approach.
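A tiny sketch of the invariance property described above: adding a value to one pixel of a pair while subtracting it from the other leaves the pair sum unchanged. Taking the value as half the pair difference plus the watermark bit follows the abstract; the letter's full reversibility and PDA rules are not reproduced here, so treat the embedding function as an assumption.

```python
# Minimal demonstration of the sum-invariance idea (not the full scheme).
def embed_pair(x, y, bit):
    v = abs(x - y) // 2 + bit          # assumed form of the embedded value
    return x + v, y - v                # add to one pixel, subtract from the other

x, y = 100, 96
for b in (0, 1):
    x2, y2 = embed_pair(x, y, b)
    assert x2 + y2 == x + y            # the pair sum is invariant
    print(b, (x2, y2), x2 + y2)
```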
Cross-layer QoS-aware communication for ultra wide band wireless multimedia sensor networks Wireless Multimedia Sensor Networks (WMSNs) are distributed systems of wirelessly networked devices that allow retrieving video and audio streams, still images, and scalar sensor data. WMSNs will be a crucial component of mission-critical networks to protect the operation of strategic national infrastructure, provide support to counteract emergencies and threats, and enhance infrastructure for tactical military operations. To enable these applications, WMSNs require the sensor network paradigm to be re-thought in view of the need for mechanisms to deliver multimedia content with a pre-defined level of quality of service (QoS). In this paper, a new cross-layer communication architecture based on the time-hopping impulse radio ultra wide band technology is described, whose objective is to reliably and flexibly deliver QoS to heterogeneous applications in WMSNs, by leveraging and controlling interactions among different layers of the protocol stack according to applications requirements. Simulations show that the proposed system achieves the performance objectives of WMSNs without sacrificing on the modularity of the overall design.
Efficient diversification of web search results In this paper we analyze the efficiency of various search results diversification methods. While the efficacy of diversification approaches has been deeply investigated in the past, response time and scalability issues have been rarely addressed. A unified framework for studying the performance and feasibility of result diversification solutions is thus proposed. First we define a new methodology for detecting when, and how, query results need to be diversified. To this purpose, we rely on the concept of "query refinement" to estimate the probability that a query is ambiguous. Then, relying on this novel ambiguity detection method, we deploy and compare, on a standard test set, three different diversification methods: IASelect, xQuAD, and OptSelect. While the first two are recent state-of-the-art proposals, the latter is an original algorithm introduced in this paper. We evaluate both the efficiency and the effectiveness of our approach against its competitors by using the standard TREC Web diversification track testbed. Results show that OptSelect is able to run two orders of magnitude faster than the two other state-of-the-art approaches and to obtain comparable figures in diversification effectiveness.
Robust reversible data hiding in encrypted image: Using fractal coding approach This paper presents a robust version of the reversible data hiding method in encrypted images previously proposed by Zhang [8] and Wien [9]. In the original method, the image is partitioned into non-overlapping blocks, each divided into two neighbor sets. In these sets, one bit is embedded by flipping three LSBs of predefined pixels. The data extraction can be done by examining the block smoothness of neighboring blocks. However, this method is very sensitive to noise and attacks, because noise and malicious activity usually alter adjacent blocks in the same manner, so that neighboring blocks no longer have the same smoothness. In this paper we propose a new method to select non-neighbor sets to decrease the effects of noise and therefore reduce the bit error rate in the encrypted image. These non-neighbor sets are selected by finding the most similar pairs, as performed in the encoding phase of fractal image compression. Experimental results show that the proposed method outperforms previous approaches in the case of Gaussian noise contamination and results in a lower bit error rate.
1.028069
0.022864
0.022864
0.022864
0.011441
0.005716
0.002278
0.000706
0.000171
0.00005
0.000017
0
0
0
A Block-Based Inter-Band Lossless Hyperspectral Image Compressor We propose a hyperspectral image compressor called BH which considers its input image as being partitioned into square blocks, each lying entirely within a particular band, and compresses one such block at a time by using the following steps: first predict the block from the corresponding block in the previous band, then select a predesigned code based on the prediction errors, and finally encode the predictor coefficient and errors. Apart from giving good compression rates and being fast, BH can provide random access to spatial locations in the image. We hypothesize that BH works well because it accommodates the rapidly changing image brightness that often occurs in hyperspectral images. We also propose an intraband compressor called LM which is worse than BH, but whose performance helps explain BH's performance.
Lossless Hyperspectral-Image Compression Using Context-Based Conditional Average In this paper, a new algorithm for lossless compression of hyperspectral images is proposed. The spectral redundancy in hyperspectral images is exploited using a context-match method driven by the correlation between adjacent bands. This method is suitable for hyperspectral images in the band-sequential format. Moreover, this method compares favorably with recently proposed lossless compression algorithms in terms of compression, with a lower complexity.
An Efficient Reordering Prediction-Based Lossless Compression Algorithm for Hyperspectral Images In this letter, we propose an efficient lossless compression algorithm for hyperspectral images; it is based on an adaptive spectral band reordering algorithm and an adaptive backward previous closest neighbor (PCN) prediction with error feedback. The adaptive spectral band reordering algorithm has some strong points. It can adaptively determine the range of spectral bands needed to be reordered, ...
Arithmetic coding for data compression The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
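A minimal illustration of the interval-narrowing idea behind arithmetic coding, using a static model and exact fractions to avoid precision issues. Practical coders use finite-precision renormalisation and adaptive models, so the alphabet, probabilities and helper names below are didactic assumptions only.

```python
# Didactic arithmetic coder with a static model and exact arithmetic.
from fractions import Fraction

# static model: symbol -> probability; cumulative lows derived below
probs = {'a': Fraction(6, 10), 'b': Fraction(3, 10), '!': Fraction(1, 10)}
cum, c = {}, Fraction(0)
for s, p in probs.items():
    cum[s] = c
    c += p

def encode(msg):
    low, width = Fraction(0), Fraction(1)
    for s in msg:
        low += width * cum[s]          # narrow the interval to the symbol's slice
        width *= probs[s]
    return low + width / 2             # any number strictly inside [low, low + width)

def decode(code, n):
    out = []
    for _ in range(n):
        for s in probs:
            if cum[s] <= code < cum[s] + probs[s]:
                out.append(s)
                code = (code - cum[s]) / probs[s]   # rescale and repeat
                break
    return ''.join(out)

msg = 'aaba!'
code = encode(msg)
assert decode(code, len(msg)) == msg
print(code)
```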
Recent Developments in Context-Based Predictive Techniques for Lossless Image Compression In this paper we describe some recent developments that have taken place in context-based predictive coding, in response to the JPEG/JBIG committee's recent call for proposals for a new international standard on lossless compression of continuous-tone images. We describe the different prediction techniques that were proposed and give a performance comparison. We describe the notion of context-base...
SICLIC: A Simple Inter-Color Lossless Image Coder Many applications require high quality color images. In order to alleviate storage space and transmission time, while preserving high quality, these images are losslessly compressed. Most of the image compression algorithms treat the color image, usually in RGB format, as a set of independent gray scale images. SICLIC is a novel inter-color coding algorithm based on a LOCO-like algorithm. It combines the simplicity of Golomb-Rice coding with the potential of context models, in both intra-color and inter-color encoding. It also supports intra-color and inter-color alphabet extension, in order to reduce the redundancy of code. SICLIC attains compression ratios superior to those obtained with most of the state-of-the-art compression algorithms and achieves compression ratios very close to those of Inter-Band CALIC, with much lower complexity. With arithmetic coding, SICLIC attains better compression than Inter-Band CALIC.
Shortcut deforestation in calculational form In functional programming, intermediate data structures are often used to "glue" together small programs. Deforestation is a program transformation to remove these intermediate data structures automatically. We present a simple algorithm for deforestation based on two fusion rules for hylomorphism, an expressive recursion pattern. A generic notation for hylomorphisms is introduced, where natural transformations are explicitly factored out, and it is used to represent programs. Our method...
Tight bounds for oblivious routing in the hypercube We prove that in any N-node communication network with maximum degree d, any deterministic oblivious algorithm for routing an arbitrary permutation requires Ω(√N/d) parallel communication steps in the worst case. This is an improvement upon the Ω(√N/d^{3/2}) bound obtained by Borodin and Hopcroft. For the N-node hypercube, in particular, we show a matching upper bound by exhibiting a deterministic oblivious algorithm that routes any permutation in Θ(√N/log N) steps. The best previously known upper bound was Θ(√N). Our algorithm may be practical for small N (up to about 2^14 nodes).
Joint Subchannel Assignment and Power Allocation for OFDMA Femtocell Networks In this paper, we propose a joint subchannel and power allocation algorithm for the downlink of an orthogonal frequency-division multiple access (OFDMA) mixed femtocell/macrocell network deployment. Specifically, the total throughput of all femtocell user equipments (FUEs) is maximized while the network capacity of an existing macrocell is always protected. Towards this end, we employ an iterative approach in which OFDM subchannels and transmit powers of base stations (BS) are alternatively assigned and optimized at every step. For a fixed power allocation, we prove that the optimal policy in each cell is to give each subchannel to the user with the highest signal-to-interference-plus-noise ratio (SINR) on that subchannel. For a given subchannel assignment, we adopt the successive convex approximation (SCA) approach and transform the highly nonconvex power allocation problem into a sequence of convex subproblems. In the arithmetic-geometric mean (AGM) approximation, we apply geometric programming to find optimal solutions after condensing a posynomial into a monomial. On the other hand, logarithmic and difference-of-two-concave-functions (D.C.) approximations lead us to solving a series of convex relaxation programs. With the three proposed SCA-based power optimization solutions, we show that the overall joint subchannel and power allocation algorithm converges to some local maximum of the original design problem. While a central processing unit is required to implement the AGM approximation-based solution, each BS locally computes the optimal subchannel and power allocation for its own servicing cell in the logarithmic and D.C. approximation-based solutions. Numerical examples confirm the merits of the proposed algorithm.
Improving Evolutionary Testing By Flag Removal This paper argues that Evolutionary testing can be improved by transforming programs with flags into flag free programs. The approach is evaluated by comparing results from...
Generating all polynomial invariants in simple loops This paper presents a method for automatically generating all polynomial invariants in simple loops. It is first shown that the set of polynomials serving as loop invariants has the algebraic structure of an ideal. Based on this connection, a fixpoint procedure using operations on ideals and Grobner basis constructions is proposed for finding all polynomial invariants. Most importantly, it is proved that the procedure terminates in at most m+1 iterations, where m is the number of program variables. The proof relies on showing that the irreducible components of the varieties associated with the ideals generated by the procedure either remain the same or increase their dimension at every iteration of the fixpoint procedure. This yields a correct and complete algorithm for inferring conjunctions of polynomial equalities as invariants. The method has been implemented in Maple using the Groebner package. The implementation has been used to automatically discover non-trivial invariants for several examples to illustrate the power of the technique.
Postplacement rewiring by exhaustive search for functional symmetries We propose two new algorithms for rewiring: a postplacement optimization that reconnects pins of a given netlist without changing the logic function and gate locations. In the first algorithm, we extract small subcircuits consisting of several gates from the design and reconnect pins according to the symmetries of the subcircuits. To enhance the power of symmetry detection, we also propose a graph-based symmetry detector that can identify permutational and phase-shift symmetries on multiple input and output wires, as well as hybrid symmetries, creating abundant opportunities for rewiring. Our second algorithm, called long-range rewiring, is based on reconnecting equivalent pins and can augment the first approach for further optimization. We apply our techniques for wirelength optimization and observe that they provide wirelength reduction comparable to that achieved by detailed placement.
Adjustable prediction-based reversible data hiding Reversible data hiding has received considerable interest recently in the field of information hiding. However, most difference expansion (DE)-based schemes suffer from the problem that the image cannot be restored to its original form without a location map. A reversible data hiding scheme with a location map cannot effectively raise hiding capacity; in addition, it also incurs computational cost during embedding and extracting. This study presents a reversible data hiding scheme that is free of the location map, and thus the computational cost associated with embedding and extracting is lower than that of most DE-based schemes. Furthermore, our scheme solves the problem of the low embedding rate of Tseng and Hsieh's scheme because our scheme takes advantage of the bulk of relatively small prediction errors to raise embedding capacity. Experimental results demonstrate that the proposed scheme achieves a larger payload and better image quality than some existing schemes by means of an adjustable control factor. The performance of the proposed scheme is also more stable across different images.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of embedded cover image to ensure security of secret data. The multilayer embedding of pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting and the peak point keeps unchanged. In the extraction end, we no longer need the key information about the peak point, we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of embedded cover image become almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
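As background for the localized variant summarised above, the following is a minimal sketch of classic single-peak histogram-shifting embedding and extraction. It is not the paper's method (which additionally exploits the two neighbours of the peak point, localization and multilayer embedding); the zero-bin assumption and helper names are illustrative.

```python
# Classic histogram-shifting RDH sketch (single peak/zero pair).
import numpy as np

def hs_embed(pixels, bits):
    hist = np.bincount(pixels, minlength=256)
    peak = int(hist.argmax())
    # assume an empty bin exists to the right of the peak (otherwise a
    # location map or two-sided shifting would be needed)
    zero = int(hist[peak + 1:].argmin()) + peak + 1
    out = pixels.astype(np.int32)
    out[(out > peak) & (out < zero)] += 1          # open a gap at peak + 1
    k = 0
    for i in range(out.size):
        if out[i] == peak and k < len(bits):
            out[i] += bits[k]                      # 0 -> stay, 1 -> move to peak + 1
            k += 1
    return out.astype(np.uint8), peak, zero

def hs_extract(marked, peak, zero):
    bits, restored = [], marked.astype(np.int32)
    for i in range(restored.size):
        if restored[i] == peak:
            bits.append(0)
        elif restored[i] == peak + 1:
            bits.append(1)
            restored[i] = peak
    restored[(restored > peak + 1) & (restored <= zero)] -= 1   # undo the shift
    return bits, restored.astype(np.uint8)

pixels = np.random.randint(0, 200, 10000).astype(np.uint8)
bits = [1, 0, 1, 1, 0, 0, 1]
marked, peak, zero = hs_embed(pixels, bits)
rec_bits, restored = hs_extract(marked, peak, zero)
assert rec_bits[:len(bits)] == bits and np.array_equal(restored, pixels)
```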
1.149984
0.149984
0.149984
0.004549
0.001499
0.000149
0
0
0
0
0
0
0
0
Reversible data hiding in dual stego-images using frequency-based encoding strategy. Reversible data hiding methods can embed secret data into multimedia to prevent attacks from hackers. Reversible data hiding methods based on multiple stego-images have been proposed extensively in recent years. This type of method can embed large amounts of secret bits into several stego-images. However, data embedding may cause serious distortion of stego-images. To solve this problem, Lu et al. proposed a center-folding strategy to reduce secret digits to narrow down the distance between the original pixel and the stego pixel in 2015. Although the center-folding strategy works well, it can still be improved. In their proposed scheme, the maximum secret digit seriously damages the image quality of the stego-image. If the maximum secret digit occurs often, then the visual quality of the stego-image decreases quickly. In this paper, we propose a frequency-based encoding method to solve this problem. The secret digit that occurs most frequently is encoded as the minimum absolute digit, thereby decreasing the frequency and level of modification. Experimental results showed that the proposed method can achieve a greater peak signal-to-noise ratio value than previous methods, thereby confirming that the proposed method is effective.
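A small sketch of the frequency-based remapping idea summarised above: the most frequent secret digits are assigned the folded codes of smallest magnitude, so large pixel modifications become rare. The table construction and example digits below are assumptions for illustration, not the paper's exact encoding (the table must of course be shared with the receiver).

```python
# Frequency-based remapping sketch: frequent digits get small-magnitude codes.
from collections import Counter

def build_code_table(digits):
    ranked = [d for d, _ in Counter(digits).most_common()]   # by decreasing frequency
    codes, m = [0], 1                                        # magnitudes 0, -1, 1, -2, 2, ...
    while len(codes) < len(ranked):
        codes += [-m, m]
        m += 1
    return dict(zip(ranked, codes[:len(ranked)]))

digits = [3, 3, 3, 0, 1, 3, 2, 3, 1]        # example stream of k-bit secret digits
table = build_code_table(digits)
encoded = [table[d] for d in digits]
print(table, encoded)                        # most frequent digit (3) maps to code 0
```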
Dual image based reversible data hiding scheme using (7,4) Hamming code. In this paper, we propose a new dual-image based reversible data hiding scheme through (7,4) Hamming code (RDHHC) using a shared secret key. A block of seven pixels is collected from the cover image and copied into two arrays; the redundant Least Significant Bits (LSBs) are then adjusted using odd parity such that any error introduced at the sender end can be detected and recovered at the receiver end. Before data embedding, we first complement the bit at the shared secret position. After that, the secret message bit is embedded by introducing an error (a tamper) at any suitable position except the secret position, and that error is detected as well as corrected at the receiver end using the Hamming error correcting code. One shared secret position κ and one shared secret key ξ help to perform data embedding, data extraction and recovery of the original image. The secret data and original cover image are successfully recovered at the receiver end from the dual stego-images. Finally, we compare our scheme with other state-of-the-art methods and obtain reasonably better performance in terms of PSNR.
Reversible data hiding in dual Stegano-image using an improved center folding strategy. In recent years, dual stego-image reversible data embedding methods have developed rapidly, e.g., exploiting modification direction, magic matrix, least significant bit matching, and the center folding strategy. This kind of method can effectively embed secret data into two stego-images, maintain excellent image quality and enhance security. In 2015, Lu et al. proposed a center folding strategy that can effectively encode messages as smaller digits. The encoding procedure reduces the modification level of pixels, thereby maintaining good image quality. However, their strategy does not use the relationship between adjacent digits to reduce the number of the largest digits. Inspired by joint neighboring coding, we propose a dynamic encoding strategy to improve the center folding strategy. The encoding strategy can reduce the secret digits and decrease the occurrence frequency of the maximum digits, thereby substantially reducing the modification level of pixels. This advantage means that the proposed method can achieve a higher PSNR value than previous methods under the same embedding rate.
Pairwise IPVO-based reversible data hiding Recently, Peng et al. proposed a reversible data hiding method based on improved pixel-value-ordering (PVO) and prediction-error expansion. In this paper, a novel method is proposed by extending Peng et al.'s work. In our method, the three largest (or smallest) pixels in a block are utilized to generate two differences, and a new pixel modification strategy is proposed so that the PVO remains unchanged after data embedding. Taking the three largest pixels as an example, we utilize the third largest pixel to predict the second largest one, and meanwhile use the second largest one to predict the maximum. In this way, two differences are obtained. They are modified jointly so as to be embedded with log2(3) bits instead of the 2 bits in traditional RDH methods. The advantage of doing so is to exclude situations where the PVO is changed. Moreover, two embedding layers are utilized together to further decrease the embedding distortion. Extensive experiments verify that the proposed method outperforms Peng et al.'s and some other state-of-the-art works.
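The capacity figure above can be made concrete with a small sketch: if each pixel pair may be left in only one of three admissible joint states, it carries log2(3) ≈ 1.585 bits, so the secret bitstream can be re-encoded as base-3 digits and one ternary digit embedded per pair. The packing below only illustrates the arithmetic; it is not the paper's embedding rule, and a real scheme would also record the message length so leading zero bits are not lost.

```python
# Capacity illustration: bits regrouped as ternary digits, one digit per pair.
from math import log2

def bits_to_ternary(bits):
    # leading-zero handling is omitted for brevity (see note above)
    n = int(''.join(map(str, bits)), 2)
    digits = []
    while n:
        n, r = divmod(n, 3)
        digits.append(r)
    return digits[::-1] or [0]

bits = [1, 0, 1, 1, 0, 1, 1, 1]
ternary = bits_to_ternary(bits)
print(ternary, f"{log2(3):.3f} bits carried per pair")
```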
Dual-image-based reversible data hiding method using center folding strategy In recent years, dual-image techniques have been widely used in reversible data hiding. Most data hiding methods focus on finding the best predictive value to reduce the degree of stego image distortion. After finding a predicted value, most methods convert the secret data directly into secret symbols, using the decimal or other numeral systems, and then have these added to (or subtracted from) the predictive value (or pixel) to complete the embedding. However, this study found that the secret data are a very important factor affecting image quality. If the value of the secret symbols is too large, it may cause greater differences between the stego-pixels and original pixels. Hence, this study employs the center folding strategy to reduce the value of the secret symbols. The reduced symbols are then embedded in two stego images through an averaging method, to maintain image quality. In addition, underflow/overflow pixels are replaced with the original pixels to reduce unnecessary image distortion. The experimental results show that the proposed method has fairly good performance compared to the other dual-image techniques. For the same embedding capacity, its image quality is also better, on average, than that of other methods by at least 2 dB. Furthermore, the proposed method determines the embedding capacity with the k value. The larger the k value, the higher the embedding capacity. The experimental results show that for the same image quality, the embedding capacity of the proposed method is better, on average, than that of other methods by at least 0.5 bits per pixel. The proposed method is applicable not only to experimental images, but also works well for general images in terms of embedding capacity, image quality and execution time. This research presents a dual-image-based reversible hiding technique. The center folding strategy is applied to achieve low distortion. The reduced symbols are embedded in two stego-images through an averaging method. The performance of the proposed method is better than that of other methods. The proposed method can adapt to different image characteristics.
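The center folding and averaging steps described above can be sketched as follows. This is an assumed, simplified form (overflow/underflow replacement and the paper's exact notation are omitted), but it shows why both the original pixel and the secret digit are recoverable from the two stego pixels.

```python
# Center folding sketch: a k-bit digit is folded around zero and split across
# two stego pixels placed on either side of the original pixel.
import math

def embed(p, d, k):
    folded = d - 2 ** (k - 1)              # fold [0, 2^k) to [-2^(k-1), 2^(k-1))
    p1 = p + math.ceil(folded / 2)
    p2 = p - math.floor(folded / 2)
    return p1, p2

def extract(p1, p2, k):
    folded = p1 - p2                       # the difference returns the folded digit
    d = folded + 2 ** (k - 1)
    p = (p1 + p2) // 2                     # the average returns the original pixel
    return p, d

k, p, d = 3, 120, 6                        # 3 secret bits -> digit 6, folded to +2
p1, p2 = embed(p, d, k)
assert extract(p1, p2, k) == (p, d)
print(p1, p2)
```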
Block cipher based separable reversible data hiding in encrypted images While most reversible data hiding in encrypted images (RDH-EI) are based on stream cipher, this paper aims to present an alternative method feasible for block-enciphered images. Before uploading data to a remote server, the content owner encrypts the original image with a block cipher algorithm using an encryption key. Then, the server embeds additional bits into the encrypted image with an embedding key to generate the marked encrypted image. On the recipient side, the additional bits can be extracted if the receiver has the embedding key. In case the receiver has only the encryption key, the marked encrypted image can be directly deciphered to a plaintext image with good quality. When both the embedding and encryption keys are available for the receiver, he can recover the original image without any errors. Compared with the existing block cipher based RDH-EI method, drawbacks of the encryption and the recovery are avoided, and good embedding payloads are achieved.
Reversible watermarking method based on asymmetric-histogram shifting of prediction errors. •A multi-prediction scheme is designed to calculate multiple prediction values for the current pixel.•An asymmetric selection function is defined to choose the suitable value from these prediction values.•An asymmetric error histogram is constructed by generating all the selected values.•We propose a complementary embedding strategy by combining two asymmetric error histograms.
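As background for the prediction-error histograms used above, the sketch below builds an error histogram with the standard median edge detector (MED) predictor. The paper's own multi-prediction scheme, asymmetric selection function and complementary embedding are not reproduced, and the synthetic image here is only a stand-in.

```python
# Prediction-error histogram with the MED predictor (background sketch only).
import numpy as np

def med_predict(img):
    pred = np.zeros_like(img, dtype=int)
    for i in range(1, img.shape[0]):
        for j in range(1, img.shape[1]):
            a = int(img[i, j - 1])      # left neighbour
            b = int(img[i - 1, j])      # upper neighbour
            c = int(img[i - 1, j - 1])  # upper-left neighbour
            if c >= max(a, b):
                pred[i, j] = min(a, b)
            elif c <= min(a, b):
                pred[i, j] = max(a, b)
            else:
                pred[i, j] = a + b - c
    return pred

img = np.random.randint(0, 256, (64, 64))
errors = (img - med_predict(img))[1:, 1:].ravel()
hist, _ = np.histogram(errors, bins=np.arange(-255, 257))
print(int(hist.argmax()) - 255)   # peak of the error histogram (near 0 for natural images)
```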
Weaknesses in the Key Scheduling Algorithm of RC4 In this paper we present several weaknesses in the key scheduling algorithm of RC4, and describe their cryptanalytic significance. We identify a large number of weak keys, in which knowledge of a small number of key bits suffices to determine many state and output bits with non-negligible probability. We use these weak keys to construct new distinguishers for RC4, and to mount related key attacks with practical complexities. Finally, we show that RC4 is completely insecure in a common mode of operation which is used in the widely deployed Wired Equivalent Privacy protocol (WEP, which is part of the 802.11 standard), in which a fixed secret key is concatenated with known IV modifiers in order to encrypt different messages. Our new passive ciphertext-only attack on this mode can recover an arbitrarily long key in a negligible amount of time which grows only linearly with its size, both for 24 and 128 bit IV modifiers.
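For reference, a sketch of the RC4 key scheduling algorithm (KSA) and output generator (PRGA) discussed above. In WEP, the per-packet key fed to the KSA is the public IV concatenated with the fixed secret key, which is exactly the mode of operation the paper attacks; the attack itself is not implemented here, and the IV and key bytes are illustrative.

```python
# RC4 KSA and PRGA reference sketch (standard algorithm, not the attack).
def ksa(key):
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    return s

def prga(s, n):
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(s[(s[i] + s[j]) % 256])
    return out

iv = [0x03, 0xFF, 0x07]                    # public per-packet IV (illustrative)
secret = [0x4B, 0x45, 0x59, 0x21, 0x2E]    # fixed secret key (illustrative)
keystream = prga(ksa(iv + secret), 16)     # WEP-style key = IV || secret
print(keystream)
```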
Reversible watermark using the difference expansion of a generalized integer transform. A reversible watermarking algorithm with very high data-hiding capacity has been developed for color images. The algorithm allows the watermarking process to be reversed, which restores the exact original image. The algorithm hides several bits in the difference expansion of vectors of adjacent pixels. The required general reversible integer transform and the necessary conditions to avoid underflow and overflow are derived for any vector of arbitrary length. Also, the potential payload size that can be embedded into a host image is discussed, and a feedback system for controlling this size is developed. In addition, to maximize the amount of data that can be hidden into an image, the embedding algorithm can be applied recursively across the color components. Simulation results using spatial triplets, spatial quads, cross-color triplets, and cross-color quads are presented and compared with the existing reversible watermarking algorithms. These results indicate that the spatial, quad-based algorithm allows for hiding the largest payload at the highest signal-to-noise ratio.
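In the simplest pairwise case, the generalized integer transform above reduces to the well-known difference-expansion step sketched below; the paper's contribution is the generalization to vectors of arbitrary length, the cross-colour recursion and the explicit overflow/underflow conditions, all of which are omitted here.

```python
# Pairwise difference-expansion sketch (basic case of the generalized transform).
def de_embed(x, y, bit):
    l = (x + y) // 2          # integer average; preserved by the transform
    h = x - y                 # difference
    h2 = 2 * h + bit          # expand the difference and append one payload bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 & 1
    h = h2 >> 1               # undo the expansion (floor division)
    return l + (h + 1) // 2, l - h // 2, bit

x, y, bit = 130, 127, 1
x2, y2 = de_embed(x, y, bit)
assert de_extract(x2, y2) == (x, y, bit)
print((x, y), '->', (x2, y2))
```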
Lossless data embedding--new paradigm in digital watermarking One common drawback of virtually all current data embedding methods is the fact that the original image is inevitably distorted due to data embedding itself. This distortion typically cannot be removed completely due to quantization, bit-replacement, or truncation at the grayscales 0 and 255. Although the distortion is often quite small and perceptual models are used to minimize its visibility, the distortion may not be acceptable for medical imagery (for legal reasons) or for military images inspected under nonstandard viewing conditions (after enhancement or extreme zoom). In this paper, we introduce a new paradigm for data embedding in images (lossless data embedding) that has the property that the distortion due to embedding can be completely removed from the watermarked image after the embedded data has been extracted. We present lossless embedding methods for the uncompressed formats (BMP, TIFF) and for the JPEG format. We also show how the concept of lossless data embedding can be used as a powerful tool to achieve a variety of nontrivial tasks, including lossless authentication using fragile watermarks, steganalysis of LSB embedding, and distortion-free robust watermarking.
Lossless recovery of a VQ index table with embedded secret data This paper proposes a reversible data embedding scheme based on a VQ image compression technique which emphasizes that the original VQ compressed codes can be recovered after data extraction. In this proposed scheme, a VQ codebook, which had already been clustered into three groups, was adopted to achieve secret concealment and data recovery. In order to embed more secret data, the concepts of frequency clustering and trio extension were also introduced. Experimental results showed that the proposed scheme with the lossless recovery facility could work well.
Topic-focused multi-document summarization using an approximate oracle score We consider the problem of producing a multi-document summary given a collection of documents. Since most successful methods of multi-document summarization are still largely extractive, in this paper, we explore just how well an extractive method can perform. We introduce an "oracle" score, based on the probability distribution of unigrams in human summaries. We then demonstrate that with the oracle score, we can generate extracts which score, on average, better than the human summaries, when evaluated with ROUGE. In addition, we introduce an approximation to the oracle score which produces a system with the best known performance for the 2005 Document Understanding Conference (DUC) evaluation.
Algorithm specialization in generic programming: challenges of constrained generics in C++ Generic programming has recently emerged as a paradigm for developing highly reusable software libraries, most notably in C++. We have designed and implemented a constrained generics extension for C++ to support modular type checking of generic algorithms and to address other issues associated with unconstrained generics. To be as broadly applicable as possible, generic algorithms are defined with minimal requirements on their inputs. At the same time, to achieve a high degree of efficiency, generic algorithms may have multiple implementations that exploit features of specific classes of inputs. This process of algorithm specialization relies on non-local type information and conflicts directly with the local nature of modular type checking. In this paper, we review the design and implementation of our extensions for generic programming in C++, describe the issues of algorithm specialization and modular type checking in detail, and discuss the important design tradeoffs in trying to accomplish both. We present the particular design that we chose for our implementation, with the goal of hitting the sweet spot in this interesting design space.
Interpolation for synthesis on unbounded domains Synthesis procedures compile relational specifications into functions. In addition to bounded domains, synthesis procedures are applicable to domains such as mathematical integers, where the domain and range of relations and synthesized code is unbounded. Previous work presented synthesis procedures that generate self-contained code and do not require components as inputs. The advantage of this approach is that it requires only specifications as user input. On the other hand, in some cases it can be desirable to require that the synthesized system reuses existing components. This paper describes a technique to automatically synthesize systems from components. It is also applicable to repair scenarios where the desired sub-component of the system should be replaced to satisfy the overall specification. The technique is sound, and it is complete for constraints for which an interpolation procedure exists, which includes e.g. propositional logic, bitvectors, linear integer arithmetic, recursive structures, finite sets, and extensions of the theory of arrays.
1.035191
0.04
0.033333
0.02
0.010002
0.004028
0.001455
0.000556
0.000106
0.000022
0.000002
0
0
0
A reversible data hiding scheme for VQ indices using histogram shifting of prediction errors Concomitant with the rapid advancements in information technology, the issue of secure data transmission through the Internet has become increasingly important. Hiding data in images is an important technique in digital media; it facilitates confidential data transfer to receivers and renders the data virtually undetectable by third parties. In this paper, a novel reversible data hiding scheme based on vector quantization (VQ) is proposed. First, codebook sorting is employed to enhance the correlation of neighbor indices and then a prediction error technique is used to generate a high peak histogram. The secret data are then embedded via histogram shifting of prediction errors. Our proposed scheme utilizes only one codebook, unlike other similar methods, and the stego carrier (index table after embedding) can be decompressed without secret data extraction to avoid detection by third parties. Experimental results indicate that the proposed scheme has better visual image quality and greater embedding capacity than recently proposed schemes of a similar nature.
A Location Map Free Reversible Data Hiding Scheme for VQ-Compressed Images Based on Search Order Coding Most VQ based reversible data hiding schemes in the literature generate non-legitimate codes as output. Therefore such schemes are not suitable for some applications. In this paper a reversible data hiding scheme for VQ-compressed images based on search order coding (SOC) is proposed. The output of the proposed scheme is legitimate SOC code. Moreover, the proposed scheme does not need any side information to achieve reversibility and can also flexibly adjust hiding capacity and stego-image quality. Experimental results show the feasibility and superiority of the proposed scheme when compared with the current scheme.
Data hiding of high compression ratio in VQ indices with neighboring correlations With the progressive development of information and communication technology, the security and integrity of data transmission through public networks have become important issues. Thus, information hiding for images is one of the main research issues for maintaining data security and integrity. This paper proposes a data hiding method on the indices of VQ compression. For an index table of VQ compression, neighboring indices are very likely to have similar values. Exploiting the neighboring correlation among VQ indices, the proposed data hiding can enhance the embedding capacity and the bit rate of encoding. In addition, our method can use the output bit stream to reconstruct the cover image. Experimental results show the proposed scheme performs better on average compared with Lee et al.'s method [9] from 2013.
Information Hiding Based on Block Match Coding for Vector Quantization-Compressed Images In this paper, a novel highly efficient lossless data hiding scheme is proposed to achieve the goal of hiding secret data into vector quantization (VQ)-compressed images that can be losslessly reconstructed after the secret data is extracted in the decoder. For VQ-compressed images, the index-modifying and the side-match VQ (SMVQ) techniques can be applied to encode indices and hide secret data. In general, data hiding accompanied by the SMVQ technique can yield a higher embedding capacity and a lower bit rate, yet more time consuming. In contrast, data hiding accompanied by the index-modifying technique can yield a lower embedding capacity and a higher bit rate, yet less time consuming. To possess the advantages of the two techniques while removing the shortcomings, the neighboring processed indices are employed to speed up the processes of generating state codebooks required for encoding and hiding. To evaluate the effectiveness of this approach, various test images are employed in the experiments. As documented in the experimental results, it is shown that the performance of the proposed scheme is superior to former schemes in terms of compression ratio, embedding rate, processing efficiency, and embedding capacity.
A reversible compression code hiding using SOC and SMVQ indices Reversible data hiding recovers the original images without distortion after the secret data are extracted, so it has received considerable attention among researchers in recent years. This work presents a novel reversible data hiding scheme based on the search-order coding (SOC) algorithm and side match vector quantization (SMVQ). The compression performance is improved by encoding the index with either SOC or SMVQ during the data embedding phase. The index of an image block also is encoded by using a few bits, thus improving the compression rate. During the extracting phase, the receiver uses simple steps to extract the secret data and reconstruct the original SMVQ compression code. Our experimental results indicated that, using a size 64 state codebook, the average compression rate with our scheme was 0.41 bpp, which is much better than the compression rates of Chang et al.'s, Yang and Lin's, and Yang et al.'s schemes, i.e., 0.50, 0.54, and 0.56 bpp, respectively. Furthermore, in addition to reducing the time required for the embedding and extracting phases, the proposed scheme yields a higher embedding rate than the schemes of Yang and Lin and Yang et al.
Lossless recovery of a VQ index table with embedded secret data This paper proposes a reversible data embedding scheme based on a VQ image compression technique which emphasizes that the original VQ compressed codes can be recovered after data extraction. In this proposed scheme, a VQ codebook, which had already been clustered into three groups, was adopted to achieve secret concealment and data recovery. In order to embed more secret data, the concepts of frequency clustering and trio extension were also introduced. Experimental results showed that the proposed scheme with the lossless recovery facility could work well.
Reversible Data-Hiding Scheme for 2-D Vector Maps Based on Difference Expansion Reversible watermarking is suitable for hiding data in 2-D vector maps because the distortions induced by data embedding can be removed after extracting the hidden bits. In this paper, two reversible data-hiding schemes are explored based on the idea of difference expansion. The first scheme takes the coordinates of vertices as the cover data and hides data by modifying the differences between the...
A reversible data hiding scheme based on dual steganographic images This paper proposes a novel data hiding scheme for grayscale images with high embedding capacity and low image distortion, able to provide the desired security and full reversibility. The embedding capacity of proposed scheme is about 0.75 bpp providing high visual steganographic image quality. The experimental PSNR values are very high (up to 52.3 dB); therefore, the steganographic image is not only perceptually indistinguishable from the carrier but can also resist the increasingly sophisticated steganalytic techniques. Moreover, without two steganographic images obtained simultaneously, it is impossible for attackers to retrieve the complete secret message.
A Joint Encryption and Reversible Data Hiding Scheme Based on Integer-DWT and Arnold Map Permutation. A joint encryption and reversible data hiding (joint encryption-RDH) scheme is proposed in this paper. The cover image is transformed to the frequency domain with integer discrete wavelet transform (integer DWT) for the encryption and data hiding. Additional data is hidden into the permuted middle (LH, HL) and high (HH) frequency subbands of integer DWT coefficients with a histogram modification based method. A combination of permutations both in the frequency domain and in the spatial domain is imposed for the encryption. In the receiving end, the encrypted image with hidden data can be decrypted to the image with hidden data, which is similar to the original image without hidden data, by only using the encryption key; if someone has both the data hiding key and the encryption key, he can both extract the hidden data and reversibly recover the original image. Experimental results demonstrate that, compared with existing joint encryption-RDH schemes, the proposed scheme has gained larger embedding capacity, and the distribution of the encrypted image with data hidden has a random like behavior. It can also achieve the lossless restoration of the cover image.
Enhanced Image Trans-coding Using Reversible Data Hiding The primary application of watermarking and data hiding is authentication, or proving the ownership of digital media. In this paper, a new trans-coding system based on the technique of watermarking and data hiding is proposed. Side information is extracted before image transcoding operations such as resizing. We focus on the problem of resizing in "thin edge" regions. A "thin edge" structure normally cannot be preserved after the resizing process and becomes discrete. As the "thin edge" region is difficult to analyze in real time with a high degree of accuracy, data hiding can be used. The information of the "thin edge" region is generated and embedded into the multimedia content at the encoder. Experimental results show that there is a great improvement in visual quality when the side information is used.
Reversible watermarking for images Reversible watermarking is a technique for embedding data in a digital host signal in such a manner that the original host signal can be restored in a bit-exact manner in the restoration process. In this paper, we present a general framework for reversible watermarking in multi-media signals. A mapping function, which is in general neither injective nor surjective, is used to map the input signal to a perceptually equivalent output signal. The resulting unused sample values of the output signal are used to encode additional (watermark) information and restoration data. At the 2003 SPIE conference, examples of this technique applied to digital audio were presented. In this paper we concentrate on color and gray-scale images. A particular challenge in this context is not only the optimization of rate-distortion, but also the measure of perceptual quality (i.e. the distortion). In the literature, distortion is often expressed in terms of PSNR, making comparison among different techniques relatively straightforward. We show that our general framework for reversible watermarking applies to digital images and that results can be presented in terms of PSNR rate-distortions. However, the framework allows for more subtle signal manipulations that are not easily expressed in terms of PSNR distortion. These changes involve manipulations of contrast and/or saturation.
Dimension-based analysis of hypotheticals from supreme court oral argument In this paper we examine a sequence of hypotheticals taken from a Supreme Court oral argument. We use the idea of a “dimension,” developed previously in our case-based reasoning system HYPO, to analyze the hypotheticals and to speculate on how the Justices might have arrived at them. The case we consider is taken from the area of Fourth Amendment law concerning warrantless search and seizure.
An asymptotic analysis of some expert fusion methods We study the asymptotic behavior of three classifier combination methods for two-class classification: average, median, and majority vote. Assuming that the estimates of the posterior probability given by individual classifiers constitute a sample from a distribution, we show that as the number of individual classifiers becomes large, median and majority will produce the same result but average may yield a completely different decision if the distribution is not symmetric.
A novel legitimacy preserving data hiding scheme based on LAS compressed code of VQ index tables Data hiding is a technique for secret and secure data storing and transmission that embeds data into a media such as an image, audio, video and so on, with minimal quality degradation of the media. Some developed data hiding schemes are reversible. Reversibility property allows the media to be recovered completely after extraction of the embedded data. Vector Quantization (VQ)-based image data hiding is one of the most popular study areas in the literature. However, most VQ-based reversible data hiding schemes generate non-legitimate codes as output. In other words output codes generated by such schemes could not be decoded by the conventional VQ or VQ based decoders and may arouse the attention of interceptors. On the other hand, the existing VQ based reversible data hiding schemes that generate legitimate VQ codes as output, suffer from low capacity and poor quality of stego-image. In this paper a novel reversible data hiding scheme for VQ-compressed images based on locally adaptive data compression scheme (LAS) is proposed. Unlike other schemes, the proposed scheme doesn't change the VQ indices; data is embedded by choosing one of the possible ways to encode each index. As a result, in comparison with the schemes that embed data by index replacement, in the proposed scheme no extra distortion is made by data embedding and the outputted codes are compatible with the conventional LAS decoder. These properties help to hide the existence of secret data and make the scheme suitable for steganography. Moreover, a framework to combine the proposed scheme with some other schemes to improve their capacity and embedding side information is proposed. Since LAS is a general data compression scheme, the proposed scheme could be used to embed data into any data formats. All existing LAS based data hiding schemes produce non-legitimate codes as their outputs and the proposed scheme is the first and only one that produces legitimate codes as output. Experimental results show that the proposed scheme outperforms the existing LAS based schemes and some other VQ based data hiding schemes. On average, the proposed scheme embeds 2.14 bits per index with almost the same bit-rate as the bit-rate of the VQ index table.
1.221
0.1105
0.1105
0.1105
0.020095
0.008844
0.00078
0.00022
0.000114
0.000055
0.000009
0
0
0
Data refinement by miracles Data refinement is the transformation in a computer program of one data type to another. Usually, we call the original data type ‘abstract’ and the final data type ‘concrete’. The concrete data type is said to represent the abstract. In spite of recent advances, there remain obvious data refinements that are difficult to prove. We give such a refinement and present a new technique that avoids the difficulty. Our innovation is the use of program fragments that do not satisfy Dijkstra's Law of the excluded miracle. These of course can never be implemented, so they must be eliminated before the final program is reached. But, in the intermediate stages of development, they simplify the calculations.
Laws of data refinement A specification language typically contains sophisticated data types that are expensive or even impossible to implement. Their replacement with simpler or more efficiently implementable types during the programming process is called data refinement. We give a new formal definition of data refinement and use it to derive some basic laws. The derived laws are constructive in that, used in conjunction with the known laws of procedural refinement, they allow us to calculate a new specification from a given one in which variables are to be replaced by other variables of a different type.
Composing Reusable Software Components Through Views.
Specifying Concurrent Program Modules A method for specifying program modules in a concurrent program is described. It is based upon temporal logic, but uses new kinds of temporal assertions to make the specifications simpler and easier to understand. The semantics of the specifications is described informally, and a sequence of examples are given culminating in a specification of three modules comprising the alternating-bit communication protocol. A formal semantics is given in the appendix.
Reusing and Interconnecting Software Components
Data abstraction and information hiding This article describes an approach for verifying programs in the presence of data abstraction and information hiding, which are key features of modern programming languages with objects and modules. This article draws on our experience building and using an automatic program checker, and focuses on the property of modular soundness: that is, the property that the separate verifications of the individual modules of a program suffice to ensure the correctness of the composite program. We found this desirable property surprisingly difficult to achieve. A key feature of our methodology for modular soundness is a new specification construct: the abstraction dependency, which reveals which concrete variables appear in the representation of a given abstract variable, without revealing the abstraction function itself. This article discusses in detail two varieties of abstraction dependencies: static and dynamic. The article also presents a new technical definition of modular soundness as a monotonicity property of verifiability with respect to scope and uses this technical definition to formally prove the modular soundness of a programming discipline for static dependencies.
Concepts as Syntactic Sugar The coming standard for C++ will contain language extensions. It can be expected that there will be some years between the release of the new standard and the time when most compilers will be compliant, as happened with the 1998 standard. Concepts are an extension proposed for the new standard. We show how we can translate ConceptC++ code into pure C++03 using the C++ template mechanism. Such a translation tool could be used, for example, to port software written using a ConceptC++ compiler to architectures having only older compilers. Or a library written using concepts could be used by a project written in pure C++. The goal of this transformation is not to provide all capabilities of ConceptC++, but, within restrictions we can afford, to be as simple as possible.
A unified approach for developing efficient algorithmic programs A unified approach called partition-and-recur for developing efficient and correct algorithmic programs is presented. An algorithm (represented by recurrence and initiation) is separated from the program, and special attention is paid to algorithm manipulation rather than program calculus. An algorithm is exactly a set of mathematical formulae, which makes formal derivation and proof easier. After an efficient and correct algorithm is obtained, a trivial transformation is used to derive the final program. The approach covers several known algorithm design techniques, e.g. dynamic programming, greedy, divide-and-conquer and enumeration, etc. The techniques of partition and recurrence are not new. Partition is a general approach for dealing with complicated objects and is typically used in the divide-and-conquer approach. Recurrence is used in algorithm analysis, in developing loop invariants and in the dynamic programming approach. The main contribution is combining these two techniques, which are used in typical algorithm development, into a unified and systematic approach for developing general, efficient algorithmic programs, and presenting a new representation of algorithms that makes it easier to understand and demonstrate the correctness and ingenuity of algorithmic programs.
Automating the Design of Algorithms This paper has two roughly independent parts. The first is devoted to the automation of program construction. The Kestrel Interactive Development System (KIDS) provides knowledge-based support for the derivation of correct and efficient programs from specifications. We trace the use of KIDS in deriving a scheduling algorithm. The derivation illustrates algorithm design, deductive inference, simplification, finite differencing, partial evaluation, data type refinement, and other techniques. All of the KIDS operations are automatic except the algorithm design tactics which require some interaction at present. Dozens of programs have been derived using the KIDS environment.
Transformation of logic programs: Foundations and techniques We present an overview of some techniques which have been proposed for the transformation of logic programs. We consider the so-called “rules + strategies” approach, and we address the following two issues: the correctness of some basic transformation rules w.r.t. a given semantics and the use of strategies for guiding the application of the rules and improving efficiency. We will also show through some examples the use and the power of the transformational approach, and we will briefly illustrate its relationship to other methodologies for program development.
Developing correct and efficient logic programs by transformation In this paper, from formal specifications one may obtain executable, efficient programs by using techniques for transforming logic programs. This is, indeed, one of the reasons that makes logic programming very attractive for program construction. During this final step from specifications to programs, in order to improve efficiency one may want to use program transformation for avoiding multiple visits of data structures, or replacing complex forms of recursion by tail recursion, or reducing...
Efficient Reductants Calculi using Partial Evaluation Techniques with Thresholding Reductants are a useful theoretical tool introduced for proving correctness properties in the context of generalized annotated logic programming. This concept was adapted to the more recent and flexible framework of multi-adjoint logic programming for solving a problem of incompleteness that arises when working with some lattices. In order to be complete, multi-adjoint logic programs must be extended with their set of reductants. In general, the notion of reductant may introduce an important efficiency drawback. In this work we provide a more refined version of this concept that we call PE-reductant, by using (threshold) partial evaluation techniques. Our proposal is intended to be semantically equivalent to the classical notion of reductant, and improves previous approaches at least in the following two efficiency criteria. Firstly, using the new definition of reductant, we can obtain computed answers for a given goal with a lesser computational effort than by using its precedent ones. Secondly, the proper construction of a reductant by means of partial evaluation methods, is drastically improved after introducing thresholding techniques which dynamically reduce the size of the underlying unfolding trees.
Orthogonal Data Embedding for Binary Images in Morphological Transform Domain- A High-Capacity Approach This paper proposes a data-hiding technique for binary images in morphological transform domain for authentication purpose. To achieve blind watermark extraction, it is difficult to use the detail coefficients directly as a location map to determine the data-hiding locations. Hence, we view flipping an edge pixel in binary images as shifting the edge location one pixel horizontally and vertically. Based on this observation, we propose an interlaced morphological binary wavelet transform to track the shifted edges, which thus facilitates blind watermark extraction and incorporation of cryptographic signature. Unlike existing block-based approach, in which the block size is constrained by 3×3 pixels or larger, we process an image in 2×2 pixel blocks. This allows flexibility in tracking the edges and also achieves low computational complexity. The two processing cases that flipping the candidates of one does not affect the flippability conditions of another are employed for orthogonal embedding, which renders more suitable candidates can be identified such that a larger capacity can be achieved. A novel effective Backward-Forward Minimization method is proposed, which considers both backwardly those neighboring processed embeddable candidates and forwardly those unprocessed flippable candidates that may be affected by flipping the current pixel. In this way, the total visual distortion can be minimized. Experimental results demonstrate the validity of our arguments.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of embedded cover image to ensure security of secret data. The multilayer embedding of pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting and the peak point keeps unchanged. In the extraction end, we no longer need the key information about the peak point, we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of embedded cover image become almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
1.049991
0.017843
0.017575
0.009428
0.006115
0.000508
0.000186
0.000053
0.000007
0
0
0
0
0
Correlation-aware cross-layer design for network management of wireless sensor networks The observations of the nodes of a wireless sensor network are spatiotemporally correlated. Sensor nodes can exploit this correlation to enhance network efficiency. However, an energy-efficient collaboration is required for better network management. To save energy, sensor nodes schedule their Active and Sleep states. Nodes extract information from the medium access control layer and use that information, along with the correlation of observations, as a means of energy-efficient collaboration and proper scheduling of their Active and Sleep states. Furthermore, sensor nodes use a non-deterministic reinforcement-learning-based approach for reducing energy consumption and end-to-end delay by regulating the duration of their Sleep states. Extensive simulations have shown that the proposed cross-layer approach outperforms existing benchmark schemes in terms of end-to-end delay, data accuracy and energy efficiency.
The asymptotic convergence-rate of Q-learning In this paper we show that for discounted MDPs with discount factor $\gamma>1/2$ the asymptotic rate of convergence of Q-learning is $O(1/t^{R(1-\gamma)})$ if $R(1-\gamma)<1/2$ and $O(\sqrt{\log\log t/t})$ otherwise, provided that the state-action pairs are sampled from a fixed probability distribution. Here $R=p_{\min}/p_{\max}$ is the ratio of the minimum and maximum state-action occupation frequencies. The results extend to convergent on-line learning provided that $p_{\min}>0$, where $p_{\min}$ and $p_{\max}$ now become the minimum and maximum state-action occupation frequencies corresponding to the stationary distribution.
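Since the rate statement is the core claim of this abstract, it may be easier to read as a single display; the following merely restates the quantities defined above (the norm in which the error is measured is left implicit here, as it is in the abstract):

\[
\bigl\| Q_t - Q^* \bigr\| =
\begin{cases}
O\!\left(t^{-R(1-\gamma)}\right), & \text{if } R(1-\gamma) < \tfrac{1}{2},\\[4pt]
O\!\left(\sqrt{\log\log t \,/\, t}\right), & \text{otherwise},
\end{cases}
\qquad R = \frac{p_{\min}}{p_{\max}},\quad \gamma > \tfrac{1}{2}.
\]

For instance, with $\gamma=0.9$ and uniform sampling ($R=1$) the exponent is $R(1-\gamma)=0.1<1/2$, so the first case applies and the error decays as $O(t^{-0.1})$.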
A survey on multimedia-based cross-layer optimization in visual sensor networks. Visual sensor networks (VSNs) comprised of battery-operated electronic devices endowed with low-resolution cameras have expanded the applicability of a series of monitoring applications. Those types of sensors are interconnected by ad hoc error-prone wireless links, imposing stringent restrictions on available bandwidth, end-to-end delay and packet error rates. In such context, multimedia coding is required for data compression and error-resilience, also ensuring energy preservation over the path(s) toward the sink and improving the end-to-end perceptual quality of the received media. Cross-layer optimization may enhance the expected efficiency of VSNs applications, disrupting the conventional information flow of the protocol layers. When the inner characteristics of the multimedia coding techniques are exploited by cross-layer protocols and architectures, higher efficiency may be obtained in visual sensor networks. This paper surveys recent research on multimedia-based cross-layer optimization, presenting the proposed strategies and mechanisms for transmission rate adjustment, congestion control, multipath selection, energy preservation and error recovery. We note that many multimedia-based cross-layer optimization solutions have been proposed in recent years, each one bringing a wealth of contributions to visual sensor networks.
EEFF: a cross-layer designed energy efficient fast forwarding protocol for wireless sensor networks Most of the proposed communication protocols for wireless sensor networks focus on energy efficiency. However, this often brings poor latency performance. In this paper, we develop a novel cross-layer designed asynchronous protocol EEFF (Energy Efficient and Fast Forwarding) for wireless sensor networks that targets both low latency and energy efficiency. EEFF improves the low power listening approach and introduces dynamic routing selection to reduce the latency. We evaluate the performance of the EEFF protocol in a testbed of 16 MICAZ motes and perform larger scale experiments with NS-2. These experiments demonstrate that EEFF significantly improves the latency and energy performance compared with state-of-the-art asynchronous protocols, and it is especially suitable for large scale dense wireless sensor networks.
An Energy Efficient Cross-Layer Design for Healthcare Monitoring Wireless Sensor Networks In wireless sensor networks (WSN), nodes have limited energy resources, so power conservation in each individual node is of significant importance. In this paper we derive a cross-layer design method for healthcare monitoring WSNs that judiciously combines adaptive modulation and coding (AMC) at the physical layer with two sleep modes, chosen according to the monitoring type, at the medium access control (MAC) layer, in order to maximize spectral efficiency and minimize energy consumption under prescribed error performance constraints. To achieve maximum spectral efficiency over the entire SNR (signal-to-noise ratio) range, we propose a combined transmission mode that switches between convolutionally coded and non-convolutionally coded modes; based on this, we analyze the energy consumption of a system with multiple mobile nodes and a sink node and verify that adopting AMC at the physical layer is more energy efficient than a single modulation scheme in healthcare monitoring WSNs.
Battery aware beacon enabled IEEE 802.15.4: An adaptive and cross-layer approach In Wireless Sensor Networks (WSNs), energy conservation is one of the main concerns challenging the cutting-edge standards and protocols. Most existing studies focus on the design of WSN energy efficient algorithms and standards. The standard IEEE 802.15.4 has emerged for WSNs in which the legacy operations are based on the principle that the power-operated battery is ideal and linear. However, the diffusion principle in batteries shows the nonlinear process when it releases a charge. Hence, we can prolong the network lifetime by designing optimized algorithms that reflect the battery characteristics. Within this context, this paper proposes a cross-layer algorithm to improve the performance of beacon enabled IEEE 802.15.4 network by allowing a Personal Area Network Coordinator (PANc) to tune its MAC behavior adaptively according to both the current remaining battery capacity and the network status. The performance of the new algorithm has been examined and compared against that of the legacy IEEE 802.15.4 MAC algorithm through extensive simulation experiments. The results show that the new technique reduces significantly the energy consumption and the average end-to-end delay.
Distributed visual-target-surveillance system in wireless sensor networks. A wireless sensor network (WSN) is a powerful unattended distributed measurement system, which is widely used in target surveillance because of its outstanding performance in distributed sensing and signal processing. This paper introduces a multiview visual-target-surveillance system in WSN, which can autonomously implement target classification and tracking with collaborative online learning and localization. The proposed system is a hybrid system of single-node and multinode fusion. It is constructed on a peer-to-peer (P2P)-based computing paradigm and consists of some simple but feasible methods for target detection and feature extraction. Importantly, a support-vector-machine-based semisupervised learning method is used to achieve online classifier learning with only unlabeled samples. To reduce the energy consumption and increase the accuracy, a novel progressive data-fusion paradigm is proposed for online learning and localization, where a feasible routing method is adopted to implement information transmission with the tradeoff between performance and cost. Experiment results verify that the proposed surveillance system is an effective, energy-efficient, and robust system for real-world application. Furthermore, the P2P-based progressive data-fusion paradigm can improve the energy efficiency and robustness of target surveillance.
Cross-layer QoS-aware communication for ultra wide band wireless multimedia sensor networks Wireless Multimedia Sensor Networks (WMSNs) are distributed systems of wirelessly networked devices that allow retrieving video and audio streams, still images, and scalar sensor data. WMSNs will be a crucial component of mission-critical networks to protect the operation of strategic national infrastructure, provide support to counteract emergencies and threats, and enhance infrastructure for tactical military operations. To enable these applications, WMSNs require the sensor network paradigm to be re-thought in view of the need for mechanisms to deliver multimedia content with a pre-defined level of quality of service (QoS). In this paper, a new cross-layer communication architecture based on the time-hopping impulse radio ultra wide band technology is described, whose objective is to reliably and flexibly deliver QoS to heterogeneous applications in WMSNs, by leveraging and controlling interactions among different layers of the protocol stack according to applications requirements. Simulations show that the proposed system achieves the performance objectives of WMSNs without sacrificing on the modularity of the overall design.
Measurement Analysis of the Live E! Sensor Network: Spatial-Temporal Correlations and Data Aggregation TCP throughput is one of the major communication quality metrics. In order to grasp this metric by active measurement while imposing a lower load, various estimation techniques have been proposed. These techniques are categorized as either equation-based ...
Load-Balanced Clustering Of Wireless Sensor Networks Wireless sensor networks have the potential to monitor environments for both military and civil applications. Due to inhospitable conditions, these sensors are not always deployed uniformly in the area of interest. Since sensors are generally constrained in on-board energy supply, efficient management of the network is crucial to extend the life of the sensors. Sensors' energy cannot support long-haul communication to reach a remote command site, and thus many levels of hops or a gateway are required to forward the data on behalf of the sensors. In this paper we propose an algorithm to organize these sensors into well-defined clusters with less-energy-constrained gateway nodes acting as cluster-heads, and to balance the load among these gateways. Simulation results show how our approach can balance the load and improve the lifetime of the system.
Improved detection of LSB steganography in grayscale images We consider methods for answering reliably the question of whether an image contains hidden data; the focus is on grayscale bitmap images and simple LSB steganography. Using a distributed computation network and a library of over 30,000 images we have been carefully evaluating the reliability of various steganalysis methods. The results suggest a number of improvements to the standard techniques, with particular benefits gained by not attempting to estimate the hidden message length. Extensive experimentation shows that the improved methods allow reliable detection of LSB steganography with between 2 and 6 times smaller embedded messages.
Visualization of 3D ultrasound data It is suggested that ultrasound data acquisition will play an increasing role in the future of medical imaging. Unlike magnetic resonance imaging (MRI) and computerized tomography (CT), ultrasound offers interactive visualization of underlying anatomy in real time. Additionally, ultrasound equipment costs far less and does not use ionizing radiation or require specialized facilities. The different methods for multidimensional medical imaging and scientific visualization are reviewed. Several volume visualization algorithms are discussed. They are multiplexer slicing, surface fitting, volume rendering, data classification, and viewing and shading. Three-dimensional ultrasound data display methods are also discussed.
Comparative Evaluation of Layout Density in 3T, 4T, and MT FinFET Standard Cells In this paper, issues related to the physical design and layout density of FinFET standard cells are discussed. Analysis significantly extends previous analyses, which considered the simplistic case of a single FinFET device or extremely simple circuits. Results show that analysis of a single device cannot predict the layout density of FinFET cells, due to the additional spacing constraints imposed by the standard cell structure. Results on the layout density of FinFET standard cell circuits are derived by building and analyzing various cell libraries in 32-nm technology, based on three-terminal (3T) and four-terminal (4T) devices, as well as on the recently proposed cells with mixed 3T-4T devices (MT). The results obtained for spacer- and lithography-defined FinFETs are observed from the technology scaling perspective by also considering 45- and 65-nm libraries. The effect of the fin and cell height on the layout density is studied. Results show that 3T and MT FinFET standard cells can have the same layout density as bulk cells (or better) for low (moderate) fin heights. Instead, 4T standard cells have an unacceptably worse layout density. Hence, MT standard cells turn out to be the only viable option to apply back biasing in FinFET standard cell circuits.
A reversible image authentication scheme based on compressive sensing In order to satisfy the requirement of reversible authentication as well as tamper localization and recovery, a reversible image authentication scheme based on compressive sensing (CS) is proposed. Double watermarks are employed, including a short one (a perceptual hash) for image integrity authentication and a long one for tamper localization and recovery. First, we embed the short watermark into the image in a reversible way. The embedding method is based on histogram modification of discrete Haar wavelet coefficients, which is proposed in this paper. Then the long watermark, which is generated by CS sampling of the transform coefficients of the non-overlapping image blocks, is registered to an intellectual property rights (IPR) database for safekeeping in a zero-watermarking way. At the authentication side, the receiver recovers the image after extracting the short watermark and compares the hash values generated from the recovered image with the short watermark for authentication. If the authentication is successful, the image can be completely restored to its original state in a reversible way; if the authentication fails, tamper localization and recovery can be achieved by utilizing the long watermark in the IPR database and CS reconstruction. Meanwhile, experimental results show that the watermarked image has good imperceptibility. The proposed scheme thus has good potential to be adopted for reversible image authentication.
1.204543
0.204543
0.204543
0.204543
0.204543
0.204543
0.102321
0.051198
0.001481
0.00042
0
0
0
0
Reliable embedding for robust reversible watermarking. It has been demonstrated that De Vleeschouwer's histogram rotation (HR) scheme suffers from serious "salt-and-pepper" noise in the watermarked images. Recently, some improved methods have been proposed to handle this problem; however, they come at the cost of capacity. For this purpose, we first develop a reliable histogram rotation method based on block selection (RHR-BS) to avoid the flipping of pixels and to improve the invisibility and reversibility of HR. In view of the slight reduction of capacity in RHR-BS, a reliable HR method based on pixel adjustment (RHR-PA) is then proposed, which modulates the range of grayscale values of pixels to remove the "salt-and-pepper" noise in HR and to avoid the decrease of capacity in RHR-BS. Extensive experiments are conducted to show the effectiveness of the proposed RHR-BS and RHR-PA.
Local Feature Based Geometric-Resistant Image Information Hiding Watermarking aims to hide particular information into some carrier but does not change the visual cognition of the carrier itself. Local features are good candidates to address the watermark synchronization error caused by geometric distortions and have attracted great attention for content-based image watermarking. This paper presents a novel feature point-based image watermarking scheme against geometric distortions. Scale invariant feature transform (SIFT) is first adopted to extract feature points and to generate a disk for each feature point that is invariant to translation and scaling. For each disk, orientation alignment is then performed to achieve rotation invariance. Finally, watermark is embedded in middle-frequency discrete Fourier transform (DFT) coefficients of each disk to improve the robustness against common image processing operations. Extensive experimental results and comparisons with some representative image watermarking methods confirm the excellent performance of the proposed method in robustness against various geometric distortions as well as common image processing operations.
Reversible Watermarking Based on Statistical Quantity Histogram The histogram shifting based reversible watermarking techniques have attracted increasing interest due to their low computational complexity, high visual quality and considerable capacity. However, those methods suffer from unstable performance because they fail to consider the diversity of grayscale histograms for various images. For this purpose, we develop a novel histogram shifting based method by introducing a block statistical quantity (BSQ). The similarity of BSQ distributions for different images reduces the diversity of grayscale histograms and guarantees the stable performance of the proposed method. We also adopt different embedding schemes to prevent the issues of overflow and underflow. Moreover, by selecting the block size, the capacity of the proposed watermarking scheme becomes adjustable. The experimental results of performance comparisons with other existing methods are provided to demonstrate the superiority of the proposed method.
Local histogram based geometric invariant image watermarking Compared with other existing methods, the feature point-based image watermarking schemes can resist to global geometric attacks and local geometric attacks, especially cropping and random bending attacks (RBAs), by binding watermark synchronization with salient image characteristics. However, the watermark detection rate remains low in the current feature point-based watermarking schemes. The main reason is that both of feature point extraction and watermark embedding are more or less related to the pixel position, which is seriously distorted by the interpolation error and the shift problem during geometric attacks. In view of these facts, this paper proposes a geometrically robust image watermarking scheme based on local histogram. Our scheme mainly consists of three components: (1) feature points extraction and local circular regions (LCRs) construction are conducted by using Harris-Laplace detector; (2) a graph-theoretical clustering-based feature selection mechanism is used to choose a set of non-overlapped LCRs, then geometrically invariant LCRs are completely formed through dominant orientation normalization; and (3) the histogram and mean statistically independent of the pixel position are calculated over the selected LCRs and utilized to embed watermarks. Experimental results demonstrate that the proposed scheme can provide sufficient robustness against geometric attacks as well as common image processing operations.
A reversible acoustic steganography for integrity verification Advanced signal-processing technology has provided alternative countermeasures against malicious attacks on and tampering with digital multimedia, which are serious issues. We propose a reversible acoustic steganography scheme to verify the integrity of acoustic data of probative importance and to protect it from illegal use. A hash function is used as a feature value to be embedded into the original acoustic target data as a checksum of the data's originality. We transform the original target signal with an Integer Discrete Cosine Transform (intDCT), which has low computational complexity. Embedding space in the DCT domain is reserved for feature values and extra payload data, enabled by amplitude expansion in the high-frequency spectrum of the cover data. Countermeasures against overflow/underflow are taken with adaptive gain optimization. Experimental evaluation has shown that the distortion caused by embedding is kept below a perceptible level. The lossless hiding algorithm ensures that this scheme is reversible.
Robust Lossless Image Data Hiding Designed for Semi-Fragile Image Authentication Recently, among various data hiding techniques, a new subset, lossless data hiding, has received increasing interest. Most of the existing lossless data hiding algorithms are, however, fragile in the sense that the hidden data cannot be extracted out correctly after compression or other incidental alteration has been applied to the stego-image. The only existing semi-fragile (referred to as robust in this paper) lossless data hiding technique, which is robust against high-quality JPEG compression, is based on modulo-256 addition to achieve losslessness. In this paper, we first point out that this technique has suffered from the annoying salt-and-pepper noise caused by using modulo-256 addition to prevent overflow/underflow. We then propose a novel robust lossless data hiding technique, which does not generate salt-and-pepper noise. By identifying a robust statistical quantity based on the patchwork theory and employing it to embed data, differentiating the bit-embedding process based on the pixel group's distribution characteristics, and using error correction codes and permutation scheme, this technique has achieved both losslessness and robustness. It has been successfully applied to many images, thus demonstrating its generality. The experimental results show that the high visual quality of stego-images, the data embedding capacity, and the robustness of the proposed lossless data hiding scheme against compression are acceptable for many applications, including semi-fragile image authentication. Specifically, it has been successfully applied to authenticate losslessly compressed JPEG2000 images, followed by possible transcoding. It is expected that this new robust lossless data hiding algorithm can be readily applied in the medical field, law enforcement, remote sensing and other areas, where the recovery of original images is desired.
Robust Reversible Watermarking via Clustering and Enhanced Pixel-Wise Masking Robust reversible watermarking (RRW) methods are popular in multimedia for protecting copyright, while preserving intactness of host images and providing robustness against unintentional attacks. However, conventional RRW methods are not readily applicable in practice. That is mainly because: 1) they fail to offer satisfactory reversibility on large-scale image datasets; 2) they have limited robustness in extracting watermarks from the watermarked images destroyed by different unintentional attacks; and 3) some of them suffer from extremely poor invisibility for watermarked images. Therefore, it is necessary to have a framework to address these three problems, and further improve its performance. This paper presents a novel pragmatic framework, wavelet-domain statistical quantity histogram shifting and clustering (WSQH-SC). Compared with conventional methods, WSQH-SC ingeniously constructs new watermark embedding and extraction procedures by histogram shifting and clustering, which are important for improving robustness and reducing run-time complexity. Additionally, WSQH-SC includes the property-inspired pixel adjustment to effectively handle overflow and underflow of pixels. This results in satisfactory reversibility and invisibility. Furthermore, to increase its practical applicability, WSQH-SC designs an enhanced pixel-wise masking to balance robustness and invisibility. We perform extensive experiments over natural, medical, and synthetic aperture radar images to show the effectiveness of WSQH-SC by comparing with the histogram rotation-based and histogram distribution constrained methods.
Line-Based Cubism-Like Image—A New Type of Art Image and its Application to Lossless Data Hiding A new method of combining art image generation and data hiding to enhance the camouflage effect for various information-hiding applications is proposed. First, a new type of computer art, called the line-based Cubism-like image, which keeps a characteristic of Cubism art—abstraction by prominent lines and regions from multiple viewpoints—is proposed. In the creation process with an input source image, prominent line segments in the image are detected and rearranged to form an abstract region-type art image of the Cubism flavor. Data hiding with minimal distortion is carried out skillfully during the process of recoloring the regions in the generated art image, by shifting the pixels' colors by the minimum amount of ±1 while keeping the average colors of the regions unchanged. Based on a rounding-off property in integer-valued color computation, the proposed data hiding technique is proved by theorems to be reversible, and is thus useful for lossless recovery of the cover art image from the stego-image. Four security enhancement measures are also adopted to prevent hackers from extracting the embedded data correctly. Experimental results show the feasibility of the proposed method.
High-capacity reversible data hiding based on multi-histogram modification. Reversible data hiding is a technique that embeds a message into a host image with acceptable visual distortion and then recovers the image without any data loss while extracting the embedded message. The previous schemes mainly suffer from an unresolved problem that the imperceptibility of a marked image decreases severely as the embedding capacity increases. Extending the histogram modification technique, this study proposes a novel scheme that utilizes multiple histograms to increase embedding capacity while keeping marked-image quality. Unlike most histogram modification schemes, the multi-histogram scheme does not suffer from overflow and underflow during histogram shift. This scheme can yield the embedding capacity of 1 bit per pixel (bpp) at the PSNR of 48.13 dB for a 512 × 512 grayscale image. To reduce the overhead during message embedding, the work further proposes an iterative multi-histogram scheme. Comprehensive experimental results show that both the schemes can achieve high embedding capacity and image quality.
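The 48.13 dB figure quoted above is exactly what one gets when every pixel of an 8-bit image is changed by at most one grey level, which is the usual situation when histogram bins are shifted by one. The arithmetic check below is only an illustration of why that number appears, not a claim about the scheme's detailed distortion analysis:

\[
\mathrm{PSNR} = 10\log_{10}\frac{255^2}{\mathrm{MSE}}, \qquad
\mathrm{MSE}\le 1 \;\Rightarrow\; \mathrm{PSNR} \ge 10\log_{10}\bigl(255^2\bigr) \approx 48.13\ \mathrm{dB}.
\]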
Adaptive DE-based reversible steganographic technique using bilinear interpolation and simplified location map In this paper, an adaptive DE-based reversible steganographic scheme with bilinear interpolation and a simplified location map is proposed. The traditional reversible difference expansion (DE) scheme suffers from two problems: the number of embeddable locations is insufficient, and the capability to control the embedding payload in single-layer embedding is weak. For the first problem, the kernel of bilinear interpolation is applied to effectively increase the number of embeddable locations while maintaining good stego-image quality. For the second problem, the proposed simplified location map is combined with an existing adaptive embedding rule, so that the secret data can be adaptively embedded and the amount of additional information can be reduced. The experimental results revealed that the proposed scheme presented better visual quality of the stego-image and carried a larger embedding payload than some other revised DE schemes, such as Alattar's and Lee's schemes.
High-fidelity reversible data hiding scheme based on pixel-value-ordering and prediction-error expansion This paper presents a high-fidelity reversible data hiding scheme for digital images based on a new prediction strategy called pixel-value-ordering (PVO) and the well-known prediction-error expansion (PEE) technique. Specifically, a host image is first divided into non-overlapped equal-sized blocks. Then the maximum and minimum values of each block are predicted by other pixels of the block according to their pixel value orders. With such a PVO-based predictor, data embedding is implemented via PEE. The incorporation of PVO into PEE has an advantage in reducing the number of shifted pixels, and thus it can alleviate the degradation in image quality. Consequently, the proposed method can embed adequate data into a host image with rather limited distortion. The PSNR of a marked image versus its original one is guaranteed to be above 51.14dB. In addition, a solution is provided to further improve the embedding performance by priorly using the flat blocks to embed data while leaving the rough ones unchanged. We demonstrate the proposed method outperforms some advanced prior arts with abundant experimental results.
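The PVO predictor described above lends itself to a very small sketch. The following Python fragment is an illustration under simplifying assumptions (one flat block, the maximum side only, no overflow handling or tie bookkeeping, and an illustrative function name); it shows how the pixel-value ordering yields a prediction error that is then expanded in the usual PEE way:

    import numpy as np

    def pvo_embed_block(block, bit):
        # Illustrative PVO + PEE step on one block: the largest pixel is predicted
        # by the second largest; an error of 1 carries one bit, larger errors are
        # shifted by one so the mapping stays invertible.
        flat = block.astype(np.int32).ravel()
        order = np.argsort(flat, kind="stable")   # pixel-value ordering
        i_max, i_2nd = order[-1], order[-2]
        err = flat[i_max] - flat[i_2nd]           # prediction error (>= 0)
        if err == 1:                              # expandable: embed one bit
            flat[i_max] += bit
        elif err > 1:                             # non-expandable: shift only
            flat[i_max] += 1
        # err == 0 is typically left unchanged in PVO-style schemes
        return flat.reshape(block.shape)

The minimum of the block is treated symmetrically (predicted by the second smallest and decremented), and rough blocks can simply be skipped, which is the flat-block-first refinement the abstract mentions.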
Difference-expansion based reversible and visible image watermarking scheme. A reversible image watermarking scheme recovers the original cover image after extracting the embedded watermarks. A visible image watermarking scheme embeds watermarks to create a visible watermark effect on the cover image. A general reversible image watermarking scheme embeds invisible watermarks. This paper presents a reversible and visible image watermarking scheme that uses a conventional difference-expansion method. The cover image is first segmented into non-overlapping k×k blocks. Each block is then processed by two watermarking schemes, a difference-expansion-based invisible watermarking scheme and a visible watermarking scheme, to embed one watermark bit. Exceeding numbers, i.e., values larger than 255 or smaller than 0 generated by the difference-expansion method, must be recorded for lossless recovery. Experimental results show that the proposed scheme embeds visible watermarks with few recorded exceeding numbers. Moreover, even without recording any exceeding numbers, the scheme still yields a high similarity between the extracted and original watermark images and good quality of the recovered cover image.
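The "conventional difference-expansion method" referred to above is usually understood as Tian's pairwise transform; a minimal sketch of that baseline (helper names chosen here, and none of the paper's visible-watermark logic) may help make the "exceeding numbers" concrete:

    def de_embed_pair(x, y, bit):
        # Tian-style difference expansion on one pixel pair: the difference is
        # doubled and the payload bit placed in its new LSB, while the integer
        # average of the pair is preserved so the step can be inverted exactly.
        l = (x + y) // 2          # integer average, kept fixed
        h = x - y                 # difference
        h2 = 2 * h + bit          # expanded difference carrying one bit
        x2 = l + (h2 + 1) // 2    # may leave [0, 255]: such "exceeding numbers"
        y2 = l - h2 // 2          # are what the scheme above has to record
        return x2, y2

    def de_extract_pair(x2, y2):
        # Inverse of de_embed_pair: recover the bit and the original pair.
        l = (x2 + y2) // 2
        h2 = x2 - y2
        bit = h2 & 1
        h = h2 // 2               # floor division matches the embedding convention
        return l + (h + 1) // 2, l - h // 2, bit

For example, de_embed_pair(100, 98, 1) gives (102, 97), and de_extract_pair(102, 97) returns (100, 98, 1), restoring the original pair and the embedded bit.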
A computational approach to George Boole's discovery of mathematical logic This paper reports a computational model of Boole's discovery of Logic as a part of Mathematics. George Boole (1815-1864) found that the symbols of Logic behaved as algebraic symbols, and he then rebuilt the whole contemporary theory of Logic by the use of methods such as the solution of algebraic equations. Study of the different historical factors that influenced this achievement has served as background for our two main contributions: a computational representation of Boole's Logic before...
Designing verifiable expert systems With the long-range goal of developing a practical methodology for complete verification and validation of expert systems, the authors have investigated methods for designing knowledge bases so that they can be easily verified. These methods are based on extensions of traditional software verification technology and on emerging artificial intelligence programming technology. After discussing the background of this work, the authors describe the methods and their benefits to verification and discuss ongoing and future research activities
1.026538
0.027836
0.020168
0.015949
0.006833
0.004665
0.001353
0.000359
0.000135
0.000072
0.000032
0.000002
0
0
POLAR 2.0: An Effective Routability-Driven Placer A wirelength-driven placer without considering routability would lead to unroutable results. To mitigate routing congestion, there are two basic approaches: (1) minimizing the routing demand; (2) distributing the routing demand properly. In this paper, we propose a new placer POLAR 2.0 emphasizing both approaches. To minimize the routing demand, POLAR 2.0 attaches very high importance to maintaining a good wirelength-driven placement in the global placement stage. To distribute the routing demand, cells in congested regions are spread out by a novel routability-driven rough legalization in a global manner and by a history based cell inflation technique in a local manner. The experimental results based on ICCAD 2012 contest benchmark suite show that POLAR 2.0 outperforms all published academic routability-driven placers both in runtime and quality.
Nonsmooth Optimization Method for VLSI Global Placement The common objective of very large-scale integration (VLSI) placement problem is to minimize the total wirelength, which is calculated by the total half-perimeter wirelength (HPWL). Since the HPWL is not differentiable, various differentiable wirelength approximation functions have been proposed in analytical placement methods. In this paper, we reformulate the HPWL as an -norm model of the wirelength function, which is exact but nonsmooth. Based on the -norm wirelength model and exact calculation of overlapping areas between cells and bins, a nonsmooth optimization model is proposed for the VLSI global placement problem, and a subgradient method is proposed for solving the nonsmooth optimization problem. Moreover, local convergence of the subgradient method is proved under some suitable conditions. In addition, two enhanced techniques, i.e., an adaptive parameter to control the step size and a cautious strategy for increasing the penalty parameter, are also used in the nonsmooth optimization method. In order to make the placement method scalable, a multilevel framework is adopted. In the clustering stage, the best choice clustering algorithm is modified according to the -norm wirelength model to cluster the cells, and the nonsmooth optimization method is recursively used in the declustering stage. Comparisons of experimental results on the International Symposium on Physical Design (ISPD) 2005 and 2006 benchmarks show that the global placement method is promising.
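For readers outside physical design, the half-perimeter wirelength that this abstract (and the neighboring placement abstracts) optimizes is the standard objective below; this is the textbook definition rather than anything specific to the paper:

\[
\mathrm{HPWL}(x,y) \;=\; \sum_{e \in E} \Bigl( \max_{i \in e} x_i - \min_{i \in e} x_i \;+\; \max_{i \in e} y_i - \min_{i \in e} y_i \Bigr),
\]

where $E$ is the set of nets and $(x_i, y_i)$ are pin coordinates. The max and min terms make the objective exact but nondifferentiable wherever two pins tie for an extreme position, which is precisely the nonsmoothness that the subgradient method described in the abstract is designed to handle.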
PRICE: power reduction by placement and clock-network co-synthesis for pulsed-latch designs Pulsed latches have emerged as a popular technique to reduce the power consumption and delay for clock networks. However, the current physical synthesis flow for pulsed latches still performs circuit placement and clock-network synthesis separately, which limits achievable power reduction. This paper presents the first work in the literature to perform placement and clock-network co-synthesis for pulsed-latch designs. With the interplay between placement and clock-network synthesis, the clock-network power and timing can be optimized simultaneously. Novel progressive network forces are introduced to globally guide the placer for iterative improvements, while the clock-network synthesizer makes use of updated latch locations to optimize power and timing locally. Experimental results show that our framework can substantially minimize power consumption and improve timing slacks, compared to existing synthesis flows.
Fast legalization for standard cell placement with simultaneous wirelength and displacement minimization Legalization is one of the most critical steps in modern placement designs. Since several objectives like wirelength, routability, or temperature are already optimized in the global placement stage, the objective of legalization is not only to align the cells overlap-free to the rows, but also to preserve the solution of global placement, i.e., the displacement of cells needs to be minimized. However, minimizing displacement alone is not enough for current timing-driven SoC designs. Blind displacement minimization may increase the half-perimeter wirelength (HPWL) of nets significantly, which degrades chip performance. In this paper, we propose a fast legalization algorithm for standard cell placement with simultaneous wirelength and displacement minimization. The main contributions of our work are: (1) a fast row selection technique using a k-medoid clustering approach; (2) an exact linear wirelength model to minimize both wirelength and total displacement; (3) a constant time approach to determine the median in the trial placement stage. Compared with state-of-the-art legalization algorithms, experimental results show that our legalizer achieves much better results in terms of HPWL, total and maximum displacements, and running time on legalized NTUplace3 global placement results on both the ISPD 2005 and 2006 placement contest benchmarks.
Closing the Gap between Global and Detailed Placement: Techniques for Improving Routability Improving routability during both global and detailed routing stage has become a critical problem in modern VLSI design. In this work, we propose a placement framework that offers a complete coverage solution in considering both global and detailed routing congestion. A placement migration strategy is proposed, which improves detailed routing congestion while preserving the placement integrity that is optimized for global routability. Using the benchmarks released from ISPD2014 Contest, practical design rules in advanced node design are considered in our placement framework. Evaluation on routability of our placement framework is conducted using commercial router provided by the 2014 ISPD Contest organizers. Experimental results show that the proposed methodologies can effectively improve placement solutions for both global and detailed router.
Obstacle-aware clock-tree shaping during placement Traditional IC design flows optimize clock networks before signal-net routing and are limited by the quality of register placement. Existing publications also reflect this bias and focus mostly on clock routing. The few known techniques for register placement exhibit significant limitations and do not account for recent progress in large-scale placement and obstacle-aware clock-network synthesis. In this work, we integrate clock network synthesis within global placement by optimizing register locations. We propose the following techniques: (1) obstacle-aware virtual clock-tree synthesis; (2) arboreal clock-net contraction force with virtual-node insertion, which can handle multiple clock domains and gated clocks; (3) an obstacle-avoidance force. Our work is validated on large-size benchmarks with numerous macro blocks. Experimental results show that our software implementation, called Lopper, prunes clock-tree branches to reduce their length by 30.0%~36.6% and average total dynamic power consumption by 6.8%~11.6% versus conventional approaches.
Taming the complexity of coordinated place and route IC performance, power dissipation, size, and signal integrity are now dominated by interconnects. However, with ever-shrinking standard cells, blind minimization of interconnect during placement causes routing failures. Hence, we develop Coordinated Place-and-Route (CoPR) with (i) a Lightweight Incremental Routing Estimation (LIRE) frequently invoked during placement, (ii) placement techniques that address three types of routing congestion, and (iii) an interface to congestion estimation that supports new types of incrementality. LIRE comprehends routing obstacles and nonuniform routing capacities, and relies on a cache-friendly, fully incremental routing algorithm. Our implementation extends and improves our winning entry at the ICCAD 2012 Contest.
mFAR: fixed-points-addition-based VLSI placement algorithm A placement problem can be formulated as a quadratic program with non-linear constraints. Those constraints make the problem hard. Omitting the constraints and solving the unconstraint problem results in placement with substantial cell overlaps. To remove the overlaps, we introduce fixed points into the non-constrained quadratic-programming formulation. Acting as pseudo cells at fixed locations, they can be used to pull cells away from the dense regions to reduce overlapping. In this paper, we present a large-scale placement algorithm based on fixed-point addition.
Analytical placement: A linear or a quadratic objective function? This paper addresses the problem of cell placement which is considered crucial for layout quality. Based on the combined analytical and partitioning strategy successfully applied in the GORDIAN placement tool, we discuss the consequences of using linear or quadratic objective functions. By joining the linear objective with an efficient quadratic programming approach, and by applying a refined iterative partitioning scheme, we obtain placements of excellent quality. The effect of a quadratic and a linear objective function on the chip area after final routing is demonstrated for benchmark circuits and other circuits with up to 21000 cells.
A High-Quality Mixed-Size Analytical Placer Considering Preplaced Blocks and Density Constraints In addition to wirelength, modern placers need to consider various constraints such as preplaced blocks and density. We propose a high-quality analytical placement algorithm considering wirelength, preplaced blocks, and density based on the log-sum-exp wirelength model proposed by Naylor et al. (2001) and the multilevel framework. To handle preplaced blocks, we use a two-stage smoothing technique, Gaussian smoothing followed by level smoothing, to facilitate block spreading during global placement. The density is controlled by white-space re-allocation using partitioning and cut-line shifting during global placement and cell sliding during detailed placement. We further use the conjugate gradient method with dynamic step-size control to speed up the global placement and macro shifting to find better macro positions. Experimental results show that our placer obtains the best published results
Fragile watermarking scheme using a hierarchical mechanism This paper proposes a novel fragile watermarking scheme with a hierarchical mechanism, in which pixel-derived and block-derived watermark data are carried by the least significant bits of all pixels. On the receiver side, after identifying the blocks containing tampered content, the watermark data hidden in the rest blocks are exploited to exactly locate the tampered pixels. Moreover, using exhaustive attempts, the proposed scheme is capable of recovering the original watermarked version without any error.
Lossless Data Hiding: Fundamentals, Algorithms And Applications Recently, among various data hiding techniques, a new subset called lossless data hiding has drawn tremendous interest. By lossless data hiding, it is meant that the marked media can be reversed to the original cover media without any distortion after the hidden data are retrieved. After a careful study of all lossless data hiding algorithms published up to today, we classify the existing algorithms into three categories: 1) Those developed for fragile authentication; 2) Those developed aiming at large embedding capacity; 3) Those developed for semi-fragile authentication. The mechanisms, merits, drawbacks and applications of these algorithms are analyzed, and some future research issues are addressed in this paper.
Accelerating tropical cyclone analysis using LambdaRAM, a distributed data cache over wide-area ultra-fast networks Data-intensive scientific applications require rapid access to local and geographically distributed data, however, there are significant I/O latency bottlenecks associated with storage systems and wide-area networking. LambdaRAM is a high-performance, multi-dimensional, distributed cache, that takes advantage of memory from multiple clusters interconnected by ultra-high-speed networking, to provide applications with rapid access to both local and remote data. It mitigates latency bottlenecks by employing proactive latency-mitigation heuristics based on an application's access patterns. We present results using LambdaRAM to rapidly stride through remote multi-dimensional NASA Modeling, Analysis and Prediction (MAP) 2006 project datasets, based on time and geographical coordinates, to compute wind shear for hurricane and tropical cyclone analysis. Our current experiments have demonstrated up to a 20-fold speedup in the computation of wind shear with LambdaRAM.
CIEL*a*b*-Based Near-Lossless Compression Of Prepress Images Lossless image compression algorithms used in the prepress workflow suffer from the disadvantage that only moderate compression ratios can be achieved. Most lossy compression schemes achieve much higher compression ratios but there is no easy way to limit the image difference they introduce. Near-lossless image compression schemes are based on lossless techniques, but they give an opportunity to put constraints on the unavoidable pixel loss. The constraints are usually expressed in terms of differences within the individual CMYK separations and this error criterion does not match the human visual system. In this paper, we present a near-lossless image compression scheme which aims at limiting the pixel difference such as observed by the human visual system. It uses the subjectively equidistant CIEL*a*b*-space to express allowable color differences. Since the CMYK to CIEL*a*b* transform maps a 4D space onto a 3D space, singularities would occur resulting in a loss of the gray component replacement information; therefore an additional dimension is added. The error quantization is based on an estimated linearization of the CIEL*a*b* transform and on the singular value decomposition of the resulting Jacobian matrix. Experimental results on some representative CMYK test images show that the visual image quality is improved and that higher compression ratios can be achieved before the visual difference is detected by a human observer.
1.026424
0.026755
0.025649
0.017767
0.013482
0.01043
0.007989
0.004219
0.001113
0.000054
0
0
0
0
Four-chamber heart modeling and automatic segmentation for 3-D cardiac CT volumes using marginal space learning and steerable features. We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, an...
Robust left ventricle segmentation from ultrasound data using deep neural networks and efficient search methods The automatic segmentation of the left ventricle of the heart in ultrasound images has been a core research topic in medical image analysis. Most of the solutions are based on low-level segmentation methods, which use a prior model of the appearance of the left ventricle, but imaging conditions violating the assumptions present in the prior can damage their performance. Recently, pattern recognition methods have become more robust to imaging conditions by automatically building an appearance model from training images, but they present a few challenges, such as: the need of a large set of training images, robustness to imaging conditions not present in the training data, and complex search process. In this paper we handle the second problem using the recently proposed deep neural network and the third problem with efficient searching algorithms. Quantitative comparisons show that the accuracy of our approach is higher than state-of-the-art methods. The results also show that efficient search strategies reduce ten times the run-time complexity.
A frequency domain approach to registration of aliased images with application to super-resolution Super-resolution algorithms reconstruct a high-resolution image from a set of low-resolution images of a scene. Precise alignment of the input images is an essential part of such algorithms. If the low-resolution images are undersampled and have aliasing artifacts, the performance of standard registration algorithms decreases. We propose a frequency domain technique to precisely register a set of aliased images, based on their low-frequency, aliasing-free part. A high-resolution image is then reconstructed using cubic interpolation. Our algorithm is compared to other algorithms in simulations and practical experiments using real aliased images. Both show very good visual results and prove the attractivity of our approach in the case of aliased input images. A possible application is to digital cameras where a set of rapidly acquired images can be used to recover a higher-resolution final image.
Introduction of Sensor Spectral Response Into Image Fusion Methods. Application to Wavelet-Based Methods Usual image fusion methods inject features from a high spatial resolution panchromatic sensor into every low spatial resolution multispectral band trying to preserve spectral signatures and improve spatial resolution to that of the panchromatic sensor. The objective is to obtain the image that would be observed by a sensor with the same spectral response (i.e., spectral sensitivity and quantum eff...
Remote sensing image fusion using the curvelet transform This paper presents an image fusion method suitable for pan-sharpening of multispectral (MS) bands, based on nonseparable multiresolution analysis (MRA). The low-resolution MS bands are resampled to the fine scale of the panchromatic (Pan) image and sharpened by injecting highpass directional details extracted from the high-resolution Pan image by means of the curvelet transform (CT). CT is a nonseparable MRA, whose basis functions are directional edges with progressively increasing resolution. The advantage of CT with respect to conventional separable MRA, either decimated or not, is twofold. Firstly, directional detail coefficients matching image edges may be preliminarily soft-thresholded to achieve a noise reduction that is better than that obtained in the separable wavelet domain. Secondly, modeling of the relationships between high-resolution detail coefficients of the MS bands and of the Pan image is more fitting, being accomplished in the directional multiresolution domain. Experiments are carried out on very-high-resolution MS+Pan images acquired by the QuickBird and Ikonos satellite systems. Fusion simulations on spatially degraded data, whose original MS bands are available for reference, show that the proposed curvelet-based fusion method performs slightly better than the state-of-the art. Fusion tests at the full scale reveal that an accurate and reliable Pan-sharpening, little affected by local inaccuracies even in the presence of complex and detailed urban landscapes, is achieved by the proposed method.
A New Method for Encrypting Images with Few Details Using Rijndael and RC6 Block Ciphers in the Electronic Code Book Mode In this paper, we propose an efficient method for encrypting images with few details using Rijndael and RC6 block ciphers in the Electronic Code Book (ECB) mode. Images with few details are images with large areas of similar gray levels such as medical images, infrared images, and logo images. This leads to encryption leakage if the Rijndael or RC6 block ciphers are used. The proposed method solves this problem by using a preprocessing step to eliminate the repeated patterns before encryption. A comparison is held between encryption of images with few details with preprocessing and encryption without preprocessing. Experimental results verify that the proposed preprocessing method gives the encryption algorithms the ability to encrypt images with few details in an efficient manner in the ECB mode.
Extension of phase correlation to subpixel registration. In this paper, we have derived analytic expressions for the phase correlation of downsampled images. We have shown that for downsampled images the signal power in the phase correlation is not concentrated in a single peak, but rather in several coherent peaks mostly adjacent to each other. These coherent peaks correspond to the polyphase transform of a filtered unit impulse centered at the point of registration. The analytic results provide a closed-form solution to subpixel translation estimation, and are used for detailed error analysis. Excellent results have been obtained for subpixel translation estimation of images of different nature and across different spectral bands.
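Since the abstract builds on the classical phase-correlation peak, a compact NumPy version of that classical step is a useful reference point. This is the standard integer-pixel formulation only; the polyphase/subpixel analysis that is the paper's actual contribution is not reproduced here, and the function name is chosen for this sketch.

    import numpy as np

    def phase_correlation_shift(a, b):
        # Integer-pixel phase correlation between two equally sized images:
        # normalize the cross power spectrum to keep phase only, then locate the
        # resulting impulse. Returns the shift that aligns `b` with `a` (sign
        # conventions differ across implementations).
        A = np.fft.fft2(a)
        B = np.fft.fft2(b)
        cross_power = A * np.conj(B)
        cross_power /= np.abs(cross_power) + 1e-12   # phase-only correlation
        corr = np.fft.ifft2(cross_power).real        # impulse near the true shift
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Map peak indices to signed shifts (peaks past the midpoint wrap around).
        shifts = [int(p) - s if p > s // 2 else int(p)
                  for p, s in zip(peak, corr.shape)]
        return tuple(shifts)

In the aliased, downsampled setting the abstract analyzes, this single sharp peak spreads into several coherent peaks, which is exactly why the paper restricts the estimation to the aliasing-free low-frequency part of the spectrum.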
Super-resolution image reconstruction: a technical overview A new approach toward increasing spatial resolution is required to overcome the limitations of the sensors and optics manufacturing technology. One promising approach is to use signal processing techniques to obtain an high-resolution (HR) image (or sequence) from observed multiple low-resolution (LR) images. Such a resolution enhancement approach has been one of the most active research areas, and it is called super resolution (SR) (or HR) image reconstruction or simply resolution enhancement. In this article, we use the term "SR image reconstruction" to refer to a signal processing approach toward resolution enhancement because the term "super" in "super resolution" represents very well the characteristics of the technique overcoming the inherent resolution limitation of LR imaging systems. The major advantage of the signal processing approach is that it may cost less and the existing LR imaging systems can be still utilized. The SR image reconstruction is proved to be useful in many practical cases where multiple frames of the same scene can be obtained, including medical imaging, satellite imaging, and video applications. The goal of this article is to introduce the concept of SR algorithms to readers who are unfamiliar with this area and to provide a review for experts. To this purpose, we present the technical review of various existing SR methodologies which are often employed. Before presenting the review of existing SR algorithms, we first model the LR image acquisition process.
Tailored reversible watermarking schemes for authentication of electronic clinical atlas. It is accepted that digital watermarking is quite relevant in medical imaging. However, due to the special nature of clinical practice, it is often required that watermarking not introduce irreversible distortions to medical images. The electronic clinical atlas has such a need of "lossless" watermarking. We present two tailored reversible watermarking schemes for the clinical atlas by exploiting its inherent characteristics. We have implemented the schemes and our experimental results look very promising.
Design of capacity-approaching irregular low-density parity-check codes We design low-density parity-check (LDPC) codes that perform at rates extremely close to the Shannon capacity. The codes are built from highly irregular bipartite graphs with carefully chosen degree patterns on both sides. Our theoretical analysis of the codes is based on the work of Richardson and Urbanke (see ibid., vol.47, no.2, p.599-618, 2000). Assuming that the underlying communication channel is symmetric, we prove that the probability densities at the message nodes of the graph possess a certain symmetry. Using this symmetry property we then show that, under the assumption of no cycles, the message densities always converge as the number of iterations tends to infinity. Furthermore, we prove a stability condition which implies an upper bound on the fraction of errors that a belief-propagation decoder can correct when applied to a code induced from a bipartite graph with a given degree distribution. Our codes are found by optimizing the degree structure of the underlying graphs. We develop several strategies to perform this optimization. We also present some simulation results for the codes found which show that the performance of the codes is very close to the asymptotic theoretical bounds
Software development cost estimation approaches - A survey This paper summarizes several classes of software cost estimation models and techniques: parametric models, expertise-based techniques, learning-oriented techniques, dynamics-based models, regression-based models, and composite-Bayesian techniques for integrating expertise-based and regression-based models. Experience to date indicates that neural-net and dynamics-based techniques are less mature than the other classes of techniques, but that all classes of techniques are challenged by the rapid pace of change in software technology. The primary conclusion is that no single technique is best for all situations, and that a careful comparison of the results of several approaches is most likely to produce realistic estimates.
Efficient Reductants Calculi using Partial Evaluation Techniques with Thresholding Reductants are a useful theoretical tool introduced for proving correctness properties in the context of generalized annotated logic programming. This concept was adapted to the more recent and flexible framework of multi-adjoint logic programming for solving a problem of incompleteness that arises when working with some lattices. In order to be complete, multi-adjoint logic programs must be extended with their set of reductants. In general, the notion of reductant may introduce an important efficiency drawback. In this work we provide a more refined version of this concept, which we call PE-reductant, by using (threshold) partial evaluation techniques. Our proposal is intended to be semantically equivalent to the classical notion of reductant, and it improves previous approaches on at least the following two efficiency criteria. Firstly, using the new definition of reductant, computed answers for a given goal can be obtained with less computational effort than with its predecessors. Secondly, the construction of a reductant by means of partial evaluation methods is drastically improved by introducing thresholding techniques, which dynamically reduce the size of the underlying unfolding trees.
EEFF: a cross-layer designed energy efficient fast forwarding protocol for wireless sensor networks Most of the proposed communication protocols for wireless sensor networks focus on energy efficiency. However, this often brings poor latency performance. In this paper, we develop a novel cross-layer designed asynchronous protocol, EEFF (Energy Efficient and Fast Forwarding), for wireless sensor networks targeting low latency and energy efficiency. EEFF improves the low power listening approach and introduces dynamic routing selection to reduce latency. We evaluate the performance of the EEFF protocol in a testbed of 16 MICAZ motes and perform larger scale experiments with NS-2. These experiments demonstrate that EEFF significantly improves latency and energy performance compared with state-of-the-art asynchronous protocols, and it is especially suitable for large scale dense wireless sensor networks.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of the embedded cover image to ensure the security of the secret data. The multilayer embedding of a pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point and then uses the two neighboring points of the peak point to embed the secret data through histogram shifting, while the peak point itself remains unchanged. At the extraction end, the key information about the peak point is no longer needed: the peak point can be found directly from the histogram to extract the secret data. We also exploit localization to make the histogram of the embedded cover image almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
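To make the mechanism concrete, here is a minimal Python sketch of this style of two-sided histogram shifting around an unchanged peak value p. It is our simplified reading, not the paper's exact algorithm: overflow/underflow handling, localization, and multilayer embedding are omitted, p is assumed to lie away from the grey-level bounds, and the extractor is assumed to know the message length.

```python
# Simplified two-sided histogram shifting around an unchanged peak value p.
# Assumptions (ours): p is at least 2 away from the grey-level bounds, the
# message fits into the p-1 / p+1 bins, and the receiver knows len(bits).
def embed(pixels, bits, p):
    out, i = [], 0
    for v in pixels:
        if v == p - 1 and i < len(bits):          # left neighbour of the peak
            out.append(p - 2 if bits[i] else p - 1); i += 1
        elif v == p + 1 and i < len(bits):        # right neighbour of the peak
            out.append(p + 2 if bits[i] else p + 1); i += 1
        elif v <= p - 2:                          # shift left to free bin p-2
            out.append(v - 1)
        elif v >= p + 2:                          # shift right to free bin p+2
            out.append(v + 1)
        else:                                     # v == p: the peak is untouched
            out.append(v)
    return out

def extract(marked, n_bits, p):
    bits, restored = [], []
    for v in marked:
        if v in (p - 2, p - 1) and len(bits) < n_bits:
            bits.append(1 if v == p - 2 else 0); restored.append(p - 1)
        elif v in (p + 1, p + 2) and len(bits) < n_bits:
            bits.append(1 if v == p + 2 else 0); restored.append(p + 1)
        elif v <= p - 3:                          # undo the left shift
            restored.append(v + 1)
        elif v >= p + 3:                          # undo the right shift
            restored.append(v - 1)
        else:
            restored.append(v)
    return bits, restored

pixels = [118, 119, 120, 120, 121, 121, 122]      # peak value p = 120
bits = [1, 0, 1]
marked = embed(pixels, bits, 120)
out_bits, restored = extract(marked, len(bits), 120)
assert out_bits == bits and restored == pixels
```

Note that in the marked sequence the bin at p is untouched, which is what lets the extractor relocate the peak from the histogram alone, as the abstract claims.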
scores (score_0 to score_13): 1.065159, 0.069476, 0.066667, 0.066667, 0.066667, 0.066667, 0.050519, 0.028562, 0.000009, 0, 0, 0, 0, 0
Automatic Optimization of Multi-paradigm Declarative Programs This paper investigates the optimization by fold/unfold of functional-logic programs with operational semantics based on needed narrowing. Transformation sequences are automatically guided by tupling, a powerful strategy that avoids multiple accesses to data structures and redundant sub-computations. We systematically decompose in detail the internal structure of tupling in three low-level transformation phases (definition introduction, unfolding and abstraction with folding) that constitute the core of our automatic tupling algorithm. The resulting strategy is (strongly) correct and complete, efficient, elegant and realistic. In addition (and most important), our technique preserves the natural structure of multi-paradigm declarative programs, which contrasts with prior pure functional approaches that produce corrupt integrated programs with (forbidden) overlapping rules.
Improved Tupling for Optimizing Multi-paradigm Declarative Programs This paper investigates the optimization by fold/unfold of declarative programs that integrate the best features from both functional and logic programming. Transformation sequences are guided by a mixed strategy which, in three low-level transformation phases, successfully combines two well-known heuristics, composition and tupling, thus avoiding the construction of intermediate data structures and redundant sub-computations. In particular, whereas composition is able to produce a single function definition for some nested (composed) functions, the tupling method merges non-nested function calls into a new function definition called a eureka. We solve the nontrivial problem of discovering the set of calls to be tupled in an incremental way, i.e., chaining different eureka definitions where only non-nested calls sharing common variables are taken into account. Moreover, by appropriately combining both strategies, together with a simplification pre-process based on a kind of normalization, we automatically optimize a wide range of programs (with nested and/or non-nested function calls) at a very low cost.
Transformation Rules and Strategies for Functional-Logic Programs This paper abstracts the contents of a PhD dissertation entitled 'Transformation Rules and Strategies for Functional-Logic Programs' which has been recently defended. These techniques are based on fold/unfold transformations and they can be used to optimize integrated (functional-logic) programs for a wide class of applications. Experimental results show that typical examples in the field of Artificial Intelligence are successfully enhanced by our transformation system SYNTH. The thesis presents the first approach to these methods for declarative languages that integrate the best features from functional and logic programming.
Effective Optimization of Multiple Traversals in Lazy Languages Tupling transformation strategy can be applied to eliminate redundant calls in a program and also to eliminate multiple traversals of data structures. While the former application can produce super-linear speedup in the transformed program, the effectiveness of the latter has yet to be appreciated. In this paper, we investigate the pragmatic issues behind elimination of multiple data traversal in the context of lazy languages, and propose a framework of tupling tactic called strictness-guided...
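As a concrete, eager-language illustration of the tupling idea running through the entries above, the sketch below is ours and only shows the classic effect the transformation aims for: two traversals of the same list fused into one by returning a tuple. It is not the narrowing- or strictness-based machinery of the papers themselves.

```python
# Toy illustration of tupling: 'average' needs both the sum and the length
# of the list; the tupled helper computes them in a single traversal.
def average_two_traversals(xs):
    return sum(xs) / len(xs)              # conceptually walks xs twice

def sum_and_length(xs):                   # the "eureka" tuple function
    s, n = 0, 0
    for x in xs:
        s, n = s + x, n + 1
    return s, n

def average_tupled(xs):
    s, n = sum_and_length(xs)             # one traversal, one tuple
    return s / n

data = [3, 1, 4, 1, 5, 9]
assert average_two_traversals(data) == average_tupled(data)
```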
Safe Folding/Unfolding with Conditional Narrowing Functional logic languages with a complete operational semantics are based on narrowing, a generalization of term rewriting where unification replaces matching. In this paper, we study the semantic properties of a general transformation technique called unfolding in the context of functional logic languages. Unfolding a program is defined as the application of narrowing steps to the calls in the program rules in some appropriate form. We show that, unlike the case of pure logic or pure functional programs, where unfolding is correct w.r.t. practically all available semantics, unrestricted unfolding using narrowing does not preserve program meaning, even when we consider the weakest notion of semantics the program can be given. We single out the conditions which guarantee that an equivalent program w.r.t. the semantics of computed answers is produced. Then, we study the combination of this technique with a folding transformation rule in the case of innermost conditional narrowing, and prove that the resulting transformation still preserves the computed answer semantics of the initial program, under the usual conditions for the completeness of innermost conditional narrowing. We also discuss a relationship between unfold/fold transformations and partial evaluation of functional logic programs.
On fuzzy unfolding: A multi-adjoint approach In the context of (fuzzy) logic programs, 'unfolding' means to transform a program rule by replacing an atom call of the body (of that rule) by its definition. Unfolding is a semantics-preserving program transformation technique that is able to improve programs, generating more efficient code, since it anticipates computation steps. Unfolding is the basis for developing sophisticated and powerful programming tools, such as fold/unfold transformation systems or partial evaluators. In this paper we address the problem of extending the classical definition of the unfolding rule (for pure logic programs) to the setting of multi-adjoint logic programming, where a fuzzy computed answer is a pair computed by a fuzzy generalization of the modus ponens inference rule. Our main contributions can be summarized as follows: (i) we proved the independence of the computation rule for multi-adjoint admissible computations; (ii) we defined a fuzzy unfolding rule and demonstrated its strong correctness properties, that is, original and unfolded programs compute the same fuzzy computed answers for a given goal; and (iii) we proved that unfolding transformations increase the efficiency of the residual programs, by reducing the length of fuzzy admissible derivations when solving goals.
Improving Control in Functional Logic Program Specialization We have recently defined a framework for Narrowing-driven Partial Evaluation (NPE) of functional logic programs. This method is as powerful as partial deduction of logic programs and positive supercompilation of functional programs. Although it is possible to treat complex terms containing primitive functions (e.g. conjunctions or equations) in the NPE framework, its basic control mechanisms do not allow for effective polygenetic specialization of these complex expressions. We introduce a sophisticated unfolding rule endowed with a dynamic narrowing strategy which permits flexible scheduling of the elements (in conjunctions) which are reduced during specialization. We also present a novel abstraction operator which extends some partitioning techniques defined in the framework of conjunctive partial deduction. We provide experimental results obtained from an implementation using the INDY system which demonstrate that the control refinements produce better specializations.
Inductive synthesis of equational programs An equational approach to the synthesis of functional and logic programs is taken. Typically, a target program contains equations that are only true in the standard model of the given domain rules. To synthesize such programs, induction is necessary. We propose heuristics for generalizing from a sequence of deductive consequences. These are combined with rewrite-based methods of inductive proof to derive provably correct programs.
Transforming constraint logic programs We study à la Tamaki-Sato transformations of constraint logic programs. We give an operational and fixpoint semantics of our constraint logic programs, show that the operational semantics is sound and complete with respect to the fixpoint semantics; we then extend the Tamaki-Sato transformation system into a fold-unfold transformation system which can take care of constraints and we give a direct proof of its correctness which is simpler than the Tamaki-Sato proof.
Synthesizing algorithms with performance constraints This paper describes MEDUSA, an experimental algorithm synthesizer. MEDUSA is characterized by its top-down approach, its use of cost constraints, and its restricted number of synthesis methods. Given this model, we discuss heuristics used to keep this process from becoming an unbounded search through the solution space. The results indicate that the performance criteria can be used effectively to help avoid combinatorial explosion. The system has synthesized a number of algorithms in its test domain (geometric intersection problems) without operator intervention.
An Enhanced Multilevel Algorithm for Circuit Placement This paper presents several important enhancements to the recently published multilevel placement package mPL. The improvements include (i) unconstrained quadratic relaxation on small, noncontiguous subproblems at every level of the hierarchy; (ii) improved interpolation (declustering) based on techniques from algebraic multigrid (AMG), and (iii) iterated V-cycles with additional geometric information for aggregation in subsequent V-cycles. The enhanced version of mPL, named mPL2, improves the total wirelength result by about 12% compared to the original version. The attractive scalability properties of the mPL run time have been largely retained, and the overall run time remains very competitive. Compared to gordian-l-domino on uniform-cell-size IBM/ISPD98 benchmarks, a speed-up of well over 8x on large circuits (≥ 100,000 cells or nets) is obtained along with an average improvement in total wirelength of about 2%. Compared to Dragon [32] on the same benchmarks, a speed-up of about 5x is obtained at the cost of about 4% increased wirelength. On the recently published PEKO synthetic benchmarks, mPL2 generates surprisingly high-quality placements, roughly 60% closer to the optimal than those produced by Capo 8.5 and Dragon, in run time about twice as long as Capo's and about 1/10th of Dragon's.
H.264/AVC video for wireless transmission H.264/AVC will be an essential component in emerging wireless video applications thanks to its excellent compression efficiency and network-friendly design. However, a video coding standard itself is only one component within the application and transmission environment. Its effectiveness strongly depends on the selection of appropriate modes and parameters at the encoder, at the decoder, as well as in the network. In this paper we introduce the features of the H.264/AVC coding standard that make it suitable for wireless video applications, including features for error resilience, bit rate adaptation, integration into packet networks, interoperability, and buffering considerations. Modern wireless networks provide many different means to adapt quality of service, such as forward error correction methods on different layers and end-to-end or link layer retransmission protocols. The applicability of all these encoding and network features depends on application constraints, such as the maximum tolerable delay, the possibility of online encoding, and the availability of feedback and cross-layer information. We discuss the use of different coding and transport related features for different applications, namely video telephony, video conferencing, video streaming, download-and-play, and video broadcasting. Guidelines for the selection of appropriate video coding tools, video encoder and decoder settings, as well as transport and network parameters are provided and justified. References to relevant research publications and standardization contributions are given.
Literate Program Derivation: A Case Study Literate programming is a notion that has been introduced by Don Knuth as a means for improving the art of programming by amalgamating explanatory text with the program proper. In this paper, we attempt to combine the principal ideas of literate programming with the concepts of formal program development, based on algebraic specifications and transformations. This experiment is performed using the sublinear string-searching algorithm of Boyer and Moore (1977) as an illustrating example.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of the embedded cover image to ensure the security of the secret data. The multilayer embedding of a pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point and then uses the two neighboring points of the peak point to embed the secret data through histogram shifting, while the peak point itself remains unchanged. At the extraction end, the key information about the peak point is no longer needed: the peak point can be found directly from the histogram to extract the secret data. We also exploit localization to make the histogram of the embedded cover image almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
scores (score_0 to score_13): 1.063296, 0.048842, 0.048842, 0.023969, 0.015967, 0.006396, 0.000409, 0.000137, 0.00003, 0, 0, 0, 0, 0
Degradation and encryption for outsourced PNG images in cloud storage. In a cloud storage environment, users frequently store images and retrieve them with personal devices, including mobile phones. However, the content of outsourced images will be leaked to the cloud server, which is not trusted by users. To protect the privacy of sensitive images, this paper proposes a format-compliant degradation and encryption method for Portable Network Graphics (PNG). More specifically, for PNG degradation, an improved prefix method and noise generation methods are developed. Furthermore, for PNG encryption, a modified generalised-Feistel method is developed. Finally, our security analysis demonstrates that the proposed scheme is secure. Our experimental results also show that the scheme is efficient and practical.
Efficient binary conversion for Paillier encrypted values We consider the framework of secure n-party computation based on threshold homomorphic cryptosystems as put forth by Cramer, Damgård, and Nielsen at Eurocrypt 2001. When used with Paillier's cryptosystem, this framework allows for efficient secure evaluation of any arithmetic circuit defined over ℤ_N, where N is the RSA modulus of the underlying Paillier cryptosystem. In this paper, we extend the scope of the framework by considering the problem of converting a given Paillier encryption of a value x ∈ ℤ_N into Paillier encryptions of the bits of x. We present solutions for the general case in which x can be any integer in {0,1,...,N-1}, and for the restricted case in which x < N/(n2^κ) for a security parameter κ. In the latter case, we show how to extract the ℓ least significant bits of x (in encrypted form) in time proportional to ℓ, typically saving a factor of log₂N/ℓ compared to the general case. Thus, intermediate computations that rely in an essential way on the binary representations of their input values can be handled without enforcing that the entire computation is done bitwise. Typical examples involve the relational operators such as
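The bit-conversion protocol itself is beyond a short sketch, but the additive homomorphism of Paillier that the whole framework relies on is easy to demonstrate. The following toy Python example (our illustration, with deliberately tiny primes and g = N + 1) shows that multiplying ciphertexts adds plaintexts modulo N; it is not the threshold or conversion machinery of the paper.

```python
# Toy Paillier demo of the additive homomorphism: Enc(m1) * Enc(m2) mod N^2
# decrypts to m1 + m2 mod N. Tiny primes are used only for illustration.
import math, random

def keygen(p, q):
    n = p * q
    lam = math.lcm(p - 1, q - 1)          # Carmichael function of n
    mu = pow(lam, -1, n)                  # valid because we take g = n + 1
    return (n, n + 1), (n, lam, mu)

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    n, lam, mu = sk
    u = pow(c, lam, n * n)
    return ((u - 1) // n) * mu % n        # L(u) = (u - 1) / n

pk, sk = keygen(1789, 2003)               # real deployments need ~2048-bit N
n = pk[0]
c1, c2 = encrypt(pk, 1234), encrypt(pk, 5678)
assert decrypt(sk, (c1 * c2) % (n * n)) == (1234 + 5678) % n
```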
An E-learning System Based on Secure Data Storage Services in Cloud Computing Abundant affordable computers, web resources and educational content make on-demand educational usage possible on cloud infrastructure. Therefore, there is a need to redesign the educational system to meet these needs better. The appearance of cloud-based services supports the creation of latest-generation e-learning systems that store multimedia data within the cloud; this draws attention from academia and research communities, which may be able to use high-quality resources. Even though the merits of cloud services are attractive, the physical possession of users' data is under security risk with respect to data correctness. This poses many new security challenges which have not been well explored. This paper focuses mainly on distributed data storage security for e-learning systems, which has always been an important aspect of quality of service. To ensure the correctness of users' data within the cloud, an adaptable and effective auditing mechanism addresses these challenges and distributes erasure-coded data for the e-learning web application. This extensive analysis shows that the auditing mechanism achieves quick data error correction and localization of servers under malicious data modification attacks.
A coordinated mechanism for multimode user equipment accessing wireless sensor network User Equipment (UE), as a basic device in cellular networks, may contain different types of sensor modules for specified usage; such devices are called multimode UE. With its natural mobility characteristics, this new type of UE is similar to an enhanced type of mobile sensor, with more energy and more flexibility. Since a Wireless Sensor Network (WSN) can be deployed easily, as no major infrastructure and human control are needed, the integration of cellular networks and wireless sensor networks becomes attractive by combining the sensing part and the connection part in a flexible way, which expands both scalability and service applications for the networks. In this paper, we propose a coordinated accessing mechanism for UE interacting with a WSN. Through the analysis of the accessing problem, we can optimise the contention period for accessing in order to enhance the heterogeneous network performance. The simulation results show the proposed accessing mechanism outperforms the conventional method in an integrated network.
Exploring the Feasibility of Fully Homomorphic Encryption In 2010, Gentry and Halevi presented the first FHE implementation. FHE allows the evaluation of arbitrary functions directly on encrypted data on untrusted servers. However, even for the small setting with 2048 dimensions, the authors reported a performance of 1.8 s for a single bit encryption and 32 s for recryption on a high-end server. Much of the latency is due to computationally intensive multimillion-bit modular multiplications. In this paper, we introduce two optimizations coupled with a novel precomputation technique. In the first optimization called partial FFT, we adopt Strassen's FFT-based multiplication algorithm along with Barrett reduction to speed up modular multiplications. For the encrypt primitive, we employ a window-based evaluation technique along with a modest degree of precomputation. In the full FFT optimization, we delay modular reductions and change the window algorithm, which allows us to carry out the bulk of computations in the frequency domain. We manage to eliminate all FFT conversions except the final inverse transformation, drastically reducing the computation latency for all FHE primitives. We implemented the GH FHE scheme on two GPUs to further speed up the operations. Our experimental results with the small parameter setting show speedups of 174, 7.6, and 13.5 times for encryption, decryption and recryption, respectively, when compared to the Gentry-Halevi implementation. The speedup is enhanced in the medium setting. However, in the large setting, memory becomes the bottleneck and the speedup is somewhat diminished.
Fully homomorphic encryption with relatively small key and ciphertext sizes We present a fully homomorphic encryption scheme which has both relatively small key and ciphertext size. Our construction follows that of Gentry by producing a fully homomorphic scheme from a “somewhat” homomorphic scheme. For the somewhat homomorphic scheme the public and private keys consist of two large integers (one of which is shared by both the public and private key) and the ciphertext consists of one large integer. As such, our scheme has smaller message expansion and key size than Gentry’s original scheme. In addition, our proposal allows efficient fully homomorphic encryption over any field of characteristic two.
Reversibility improved data hiding in encrypted images A novel reversible data hiding technique in encrypted images is presented in this paper. Instead of embedding data in encrypted images directly, some pixels are estimated before encryption so that additional data can be embedded in the estimating errors. A benchmark encryption algorithm (e.g. AES) is applied to the rest of the pixels of the image and a special encryption scheme is designed to encrypt the estimating errors. Without the encryption key, one cannot get access to the original image. However, provided with the data hiding key only, he can embed in or extract from the encrypted image additional data without knowledge about the original image. Moreover, the data extraction and image recovery are free of errors for all images. Experiments demonstrate the feasibility and efficiency of the proposed method, especially in terms of embedding rate versus Peak Signal-to-Noise Ratio (PSNR).
Data-driven discovery of physical laws BACON.3 is a production system that discovers empirical laws. Although it does not attempt to model the human discovery process in detail, it incorporates some general heuristics that can lead to discovery in a number of domains. The main heuristics detect constancies and trends in data, and lead to the formulation of hypotheses and the definition of theoretical terms. Rather than making a hard distinction between data and hypotheses, the program represents information at varying levels of description. The lowest levels correspond to direct observations, while the highest correspond to hypotheses that explain everything so far observed. To take advantage of this representation, BACON.3 has the ability to carry out and relate multiple experiments, collapse hypotheses with identical conditions, ignore differences to let similar concepts be treated as equal, and to discover and ignore irrelevant variables. BACON.3 has shown its generality by rediscovering versions of the ideal gas law, Kepler's third law of planetary motion, Coulomb's law, Ohm's law, and Galileo's laws for the pendulum and constant acceleration.
Evolution strategies – A comprehensive introduction This article gives a comprehensive introduction to one of the main branches of evolutionary computation – the evolution strategies (ES), the history of which dates back to the 1960s in Germany. Starting from a survey of this history, the philosophical background is explained in order to make understandable why ES are realized in the way they are. Basic ES algorithms and design principles for variation and selection operators as well as theoretical issues are presented, and future branches of ES research are discussed.
An introduction to partial evaluation Partial evaluation provides a unifying paradigm for a broad spectrum of work in program optimization, compiling, interpretation and the generation of automatic program generators [Bjørner et al. 1987; Ershov 1992; and Jones et al. 1993]. It is a program optimization technique, perhaps better called program specialization, closely related to but different from Jørring and Scherlis' staging transformations [1986]. It emphasizes, in comparison with Burstall and Darlington [1977] and Jørring and Scherlis [1986] and other program transformation work, full automation and the generation of program generators as well as transforming single programs. Much partial evaluation work to date has concerned automatic compiler generation from an interpretive definition of a programming language, but it also has important applications to scientific computing, logic programming, metaprogramming, and expert systems; some pointers are given later.
Issues in global routing In this paper, we present several shortcomings of current global routers and propose enhancements to remedy the problems. We propose that global routers incorporate aspects of both placement and detail routing in order to improve the quality of the global routing. In addition, we propose the use of a constrained die methodology to complement fixed and variable die design flows. This paper outlines the algorithms present in the commercial global routers available for InternetCAD.com, Inc.
Implementation and extensibility of an analytic placer Automated cell placement is a critical problem in very large scale integration (VLSI) physical design. New analytical placement methods that simultaneously spread cells and optimize wirelength have recently received much attention from both academia and industry. A novel and simple objective function for spreading cells over the placement area is described in the patent of Naylor et al. (U.S. Pat. 6301693). When combined with a wirelength objective function, this allows efficient simultaneous cell spreading and wirelength optimization using nonlinear optimization techniques. In this paper, we implement an analytic placer (APlace) according to these ideas (which have other precedents in the open literature), and conduct in-depth analysis of characteristics and extensibility of the placer. Our contributions are as follows. 1) We extend the objective functions described in (Naylor et al., U.S. Patent 6301693) with congestion information and implement a top-down hierarchical (multilevel) placer (APlace) based on them. For IBM-ISPD04 circuits, the half-perimeter wirelength of APlace outperforms that of FastPlace, Dragon, and Capo, respectively, by 7.8%, 6.5%, and 7.0% on average. For eight IBM-PLACE v2 circuits, after the placements are detail-routed using Cadence WRoute, the average improvement in final wirelength is 12.0%, 8.1%, and 14.1% over QPlace, Dragon, and Capo, respectively. 2) We extend the placer to address mixed-size placement and achieve an average of 4% wirelength reduction on ten ISPD'02 mixed-size benchmarks compared to results of the leading-edge solver, FengShui. 3) We extend the placer to perform timing-driven placement. Compared with timing-driven industry tools, evaluated by commercial detailed routing and static timing analysis, we achieve an average of 8.4% reduction in cycle time and 7.5% reduction in wirelength for a set of six industry testcases. 4) We also extend the placer to perform input/output-core coplacement and constraint handling for mixed-signal designs. Our paper aims to show, and empirically demonstrates, that the APlace framework is a general and extensible platform for "spatial embedding" tasks across many aspects of system physical implementation.
Eureka definitions for free! or disagreement points for fold/unfold transformations The fold/unfold framework of Burstall and Darlington is a very powerful framework for transforming function definitions in the form of recursion equation schemes. This may be used to transform a function so as to improve the efficiency of its implementation. However, for this to work the user must supply so-called Eureka definitions and it may require some ingenuity to construct these. This paper shows that a class of these Eureka definitions can be derived in a rather systematic way.
A survey on search results diversification techniques The quantity of information placed on the web is greater than ever before and is increasing rapidly day by day. Searching through this huge amount of data and finding the most relevant and useful result set involves searching, ranking, and presenting the results. Most users examine only the top few results and neglect the rest. In order to increase users' satisfaction, the presented result set should not only be relevant to the search topic, but should also present a variety of perspectives, that is, the results should be different from one another. The effectiveness of web search and the satisfaction of users can be enhanced by providing various results of a search query in a certain order of relevance and concern. The technique used to avoid presenting similar, though relevant, results to the user is known as diversification of search results. This article presents a survey of the approaches used for search result diversification. To this end, this article not only provides a technical survey of existing diversification techniques, but also presents a taxonomy of diversification algorithms with respect to the types of search queries.
scores (score_0 to score_13): 1.2, 0.2, 0.2, 0.2, 0.2, 0.05, 0.002469, 0, 0, 0, 0, 0, 0, 0
Cross-layer rate control for end-to-end proportional fairness in wireless networks with random access In this paper, we address the rate control problem in a multi-hop random access wireless network, with the objective of achieving proportional fairness amongst the end-to-end sessions. The problem is considered in the framework of nonlinear optimization. Compared to its counterpart in a wired network where link capacities are assumed to be fixed, rate control in a multi-hop random access network is much more complex and requires joint optimization at both the transport layer and the link layer. This is due to the fact that the attainable throughput on each link in the network is `elastic' and is typically a non-convex and non-separable function of the transmission attempt rates. Two cross-layer algorithms, a dual based algorithm and a primal based algorithm, are proposed in this paper to solve the rate control problem in a multi-hop random access network. Both algorithms can be implemented in a distributed manner, and work at the link layer to adjust link attempt probabilities and at the transport layer to adjust session rates. We prove rigorously that the two proposed algorithms converge to the globally optimal solutions. Simulation results are provided to support our conclusions.
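For orientation, the proportional-fair rate control problem described above fits the standard network utility maximization template below. The notation is ours, not the paper's: x_s is the rate of session s, L(s) the set of links it traverses, and c_l(p) the attainable throughput of link l as a function of the link-layer attempt probabilities p, which is exactly the elastic, non-convex coupling that makes the problem cross-layer.

```latex
% Proportional-fair rate control as network utility maximization (generic
% form; the link "capacities" c_l(p) depend on the random-access attempt
% probabilities, which is what makes the problem cross-layer).
\max_{x \ge 0,\; p}\;\; \sum_{s} \log x_s
\qquad \text{s.t.} \qquad
\sum_{s:\, l \in L(s)} x_s \;\le\; c_l(p) \quad \text{for every link } l .
```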
The asymptotic convergence-rate of Q-learning In this paper we show that for discounted MDPs with discount factor $\gamma>1/2$ the asymptotic rate of convergence of Q-learning is $O(1/t^{R(1-\gamma)})$ if $R(1-\gamma)<1/2$ and $O(\sqrt{\log\log t/t})$ otherwise, provided that the state-action pairs are sampled from a fixed probability distribution. Here $R=p_{min}/p_{max}$ is the ratio of the minimum and maximum state-action occupation frequencies. The results extend to convergent on-line learning provided that $p_{min}>0$, where $p_{min}$ and $p_{max}$ now become the minimum and maximum state-action occupation frequencies corresponding to the stationary distribution.
A survey on multimedia-based cross-layer optimization in visual sensor networks. Visual sensor networks (VSNs) comprised of battery-operated electronic devices endowed with low-resolution cameras have expanded the applicability of a series of monitoring applications. Those types of sensors are interconnected by ad hoc error-prone wireless links, imposing stringent restrictions on available bandwidth, end-to-end delay and packet error rates. In such context, multimedia coding is required for data compression and error-resilience, also ensuring energy preservation over the path(s) toward the sink and improving the end-to-end perceptual quality of the received media. Cross-layer optimization may enhance the expected efficiency of VSNs applications, disrupting the conventional information flow of the protocol layers. When the inner characteristics of the multimedia coding techniques are exploited by cross-layer protocols and architectures, higher efficiency may be obtained in visual sensor networks. This paper surveys recent research on multimedia-based cross-layer optimization, presenting the proposed strategies and mechanisms for transmission rate adjustment, congestion control, multipath selection, energy preservation and error recovery. We note that many multimedia-based cross-layer optimization solutions have been proposed in recent years, each one bringing a wealth of contributions to visual sensor networks.
EEFF: a cross-layer designed energy efficient fast forwarding protocol for wireless sensor networks Most of the proposed communication protocols for wireless sensor networks focus on energy efficiency. However, this often brings poor latency performance. In this paper, we develop a novel cross-layer designed asynchronous protocol, EEFF (Energy Efficient and Fast Forwarding), for wireless sensor networks targeting low latency and energy efficiency. EEFF improves the low power listening approach and introduces dynamic routing selection to reduce latency. We evaluate the performance of the EEFF protocol in a testbed of 16 MICAZ motes and perform larger scale experiments with NS-2. These experiments demonstrate that EEFF significantly improves latency and energy performance compared with state-of-the-art asynchronous protocols, and it is especially suitable for large scale dense wireless sensor networks.
An Energy Efficient Cross-Layer Design for Healthcare Monitoring Wireless Sensor Networks In wireless sensor networks (WSN), the nodes have limited energy resources, so power conservation in each individual node is of significant importance. In this paper we derive a cross-layer design method that judiciously combines adaptive modulation and coding (AMC) at the physical layer with two sleep modes, chosen according to the monitoring type, at the medium access control (MAC) layer in healthcare monitoring WSN, to maximize spectral efficiency and minimize energy consumption under prescribed error performance constraints. To achieve maximum spectral efficiency over the entire SNR (signal-to-noise ratio) range, we propose a combined transmission mode that switches between convolutionally coded and uncoded modes, based on which we analyze the energy consumption of a system with multiple mobile nodes and a sink node and verify that it is more energy efficient to adopt AMC than a single modulation scheme at the physical layer in healthcare monitoring WSN.
A Survey on Cross-Layer Quality-of-Service Approaches in WSNs for Delay and Reliability-Aware Applications. Using wireless sensor networks (WSNs) in delay and reliability critical environments is highly desired due to their unique advantages such as low cost, ease of deployment, and redundancy. However, successful deployment of resource-limited WSNs in such applications requires strict quality-of-service (QoS) provisioning techniques to meet the desired latency and reliability targets of those applications.
QoS in Wireless Multimedia Sensor Networks: A Layered and Cross-Layered Approach The emergence of wireless multimedia sensor networks (WMSN) has given birth to several civilian as well as defense applications. Some of the interesting applications employing low cost sensor nodes to manipulate rich multimedia content include traffic monitoring, border surveillance, smart homes, environment and habitat monitoring. Unlike the traditional sensor networks which are aimed at maximizing network lifetime by decreasing energy utilization, the main objective of WMSNs is optimized delivery of multimedia content along with energy efficiency. Multimedia communication in WMSNs has stringent delay and high bandwidth requirements as compared to scalar data transfer in WSNs. Fulfilling these constraints in resource and energy constrained WMSNs is a huge challenge. In WMSNs, each layer of the protocol stack is responsible and fully involved in providing QoS guarantees. There is a need for new schemes at each layer of the protocol stack, from advanced coding techniques that reduce encoder complexity and achieve maximum compression to dynamic routing and MAC protocols that provide service differentiation and reduce end-to-end latency. In wireless sensor networks, where all layers have dependency on each other, QoS guarantees are possible through the cross layer interaction of different layers. This paper gives an overview of the different existing layered schemes in WMSNs, followed by a discussion on the significance and efficiency gains that can be achieved from cross layer interactions in WMSNs along with the review of the existing cross layer approaches. Finally, we identify the open research issues which have not been adequately addressed so far.
Distributed Cross-layer Protocol Design for Magnetic Induction Communication in Wireless Underground Sensor Networks Wireless underground sensor networks (WUSNs) enable many applications such as underground pipeline monitoring, power grid maintenance, mine disaster prevention, and oil upstream monitoring among many others. While classical electromagnetic waves do not work well in WUSNs, the magnetic induction (MI) propagation technique provides constant channel conditions via small-size antenna coils in underground environments. In this paper, instead of adopting the current layered-protocol approach, a distributed cross-layer protocol design is proposed for MI-based WUSNs. First, a detailed overview is given for different communication functionalities from physical to network layers as well as the QoS requirements of applications. Utilizing the interactions of different layer functionalities, a distributed environment-aware protocol, called DEAP, is then developed to satisfy statistical QoS guarantees and achieve both optimal energy savings and throughput gain concurrently. Simulations confirm that the proposed cross-layer protocol achieves significant energy savings, high throughput efficiency and dependable MI communication for WUSNs.
Formal Modelling and Validation of a Novel Energy Efficient Cross-Layer MAC Protocol in Wireless Multi Hop Sensor Networks Using Time Petri Nets
Data Gathering Protocols for Wireless Sensor Networks with Mobile Sinks Wireless sensor networks with mobile sinks (mWSN) have attracted a lot of attention recently. In an mWSN, each mobile sink can move freely and unpredictably. In this paper, we design two efficient data gathering protocols for mWSNs. The first protocol (called AVRP) adopts Voronoi scoping plus dynamic anchor selection to handle the sink mobility issue. In the second protocol (called TRAIL), the trail of mobile sink is used for guiding packet forwarding as sinks move in the network. In TRAIL, to forward a data packet, integration of trail-based forwarding and random walk is used. Specifically, when no fresh trail of any sink is known, random walk is used; once a sensor on a fresh sink trail is reached, data packet will be forwarded along the trail. TRAIL is simple to implement and has small protocol overhead. Simulation results show the designed protocols have high performance and further AVRP is suitable for mWSNs with heavy traffic while TRAIL is suitable for mWSNs with light traffic.
Analysis and design of secure watermark-based authentication systems This paper focuses on a coding approach for effective analysis and design of secure watermark-based multimedia authentication systems. We provide a design framework for semi-fragile watermark-based authentication such that both objectives of robustness and fragility are effectively controlled and achieved. Robustness and fragility are characterized as two types of authentication errors. The authentication embedding and verification structures of the semi-fragile schemes are derived and implemented using lattice codes to minimize these errors. Based on the specific security requirements of authentication, cryptographic techniques are incorporated to design a secure authentication code structure. Using nested lattice codes, a new approach, called MSB-LSB decomposition, is proposed which we show to be more secure than previous methods. Tradeoffs between authentication distortion and implementation efficiency of the secure authentication code are also investigated. Simulations of semi-fragile authentication methods on real images demonstrate the effectiveness of the MSB-LSB approach in simultaneously achieving security, robustness, and fragility objectives.
Design tools for 3-D integrated circuits We present a set of design tools for 3-D Integration. Using these tools - a 3-D standard-cell placement tool, global routing tool, and layout editor - we have targeted existing standard-cell circuit netlists for fabrication using wafer bonding. We have analyzed the performance of several circuits using these tools and find that 3-D integration provides significant benefits. For example, relative to single-die placement, we observe on average 28% to 51% reduction in total wire length.
A low complexity motion compensated frame interpolation method In low bit-rate video communication, temporal subsampling is usually used due to limited available bandwidth. Motion compensated frame interpolation (MCFI) techniques are often employed in the decoder to restore the original frame rate and enhance the temporal quality. In this paper, we propose a low-complexity and high efficiency MCFI method. It first examines the motion vectors embedded in the bit-stream, then carries out overlapped block bi-directional motion estimation on those blocks whose embedded motion vectors are regarded as not accurate enough. Finally, it utilizes motion vector post-processing and overlapped block motion compensation to generate interpolated frames and further reduce blocking artifacts. Experimental results show that the proposed algorithm outperforms other methods in both PSNR and visual performance, while its complexity is also lower than other methods.
A reversible image authentication scheme based on compressive sensing In order to satisfy the requirement of reversible authentication as well as tamper localization and recovery, a reversible image authentication scheme based on compressive sensing (CS) is proposed. Double watermarks are employed, including a short one (a perceptual hash) for image integrity authentication and a long one for tamper localization and recovery. First, we embed the short watermark into the image in a reversible way. The embedding method is based on histogram modification of discrete Haar wavelet coefficients, which is proposed in this paper. Then the long watermark, which is generated by CS sampling on the transformation coefficients of the non-overlapping image blocks, is registered to an intellectual property rights (IPR) database for saving in a zero-watermarking way. At the authentication side, the receiver recovers the image after extracting the short watermark, and compares the hash values generated from the recovered image with the short watermark for authentication. If the authentication is successful, the image can be completely restored to the original state in a reversible way; if the authentication fails, tamper localization and recovery can be achieved by utilizing the long watermark in the IPR database and CS reconstruction. Meanwhile, experimental results show that the watermarked image has good imperceptibility. The proposed scheme has good potential to be adopted for reversible image authentication.
scores (score_0 to score_13): 1.102692, 0.102276, 0.102276, 0.102276, 0.102276, 0.102276, 0.052108, 0.034101, 0.000741, 0.000253, 0, 0, 0, 0
MRF-based multispectral image fusion using an adaptive approach based on edge-guided interpolation In the interpretation of remote sensing images, it is possible that images supplied by different sensors are not understandable or do not yield vital information. For better visual perception, it is essential to perform a series of pre-processing and elementary corrections and then a series of main processing steps for more precise analysis of the image. There are several approaches for processing, depending on the type of remote sensing image. The approach discussed in this article, i.e., image fusion, uses the natural colors of an optical image to add color to a gray-scale satellite image, which gives us the ability to better observe the HR image of the OLI sensor of Landsat. This process has previously been performed with emphasis on the details of the fusion technique, but here we revisit the interpolation step, which has not received suitable attention in the past. In fact, the most widely used remote sensing image processing software tools, such as ENVI and ERDAS, offer only classical interpolation techniques (such as BL and CC). Therefore, ENVI-based and ERDAS-based research in the image fusion area, and even other fusion research, often does not use newer and better interpolation methods and concentrates only on fusion details to achieve better quality; we therefore focus on the impact of interpolation on fusion quality in a specific application, i.e., Landsat multi-spectral images. The important feature of this approach is the use of a statistical, adaptive, edge-guided, MRF-based interpolation method to improve color quality in MRF-based images while maintaining high resolution in practice. Numerical simulations show that selecting a suitable interpolation technique for MRF-based images yields better quality than classical interpolations.
A New Robust Reversible Blind Watermarking In Wavelet-Domain For Color Images Reversible watermarking methods are used for copyright protection and are able to recover the host image without distortion. A robust reversible watermarking technique should resist intentional and unintentional image processing attacks. Robust reversible watermarking techniques should have three features, namely imperceptibility, reversibility and robustness. In this paper, we propose a new robust reversible blind watermarking scheme for color images based on the histogram of wavelet coefficients constructed from the cover image. In the proposed approach, the red component of a host color image is decomposed into wavelet coefficients. Motivated by the excellent spatio-frequency localization properties of wavelets, the technique operates in the wavelet domain. The pixels are adjusted before watermark embedding such that both overflow and underflow of pixels during embedding are avoided and the image is recovered without distortion. The watermark is embedded based on histogram construction and the local sensitivity of the Human Visual System (HVS) in the wavelet domain. For watermark extraction without the host image, a k-means clustering algorithm is used. The experimental results show that the proposed technique has good performance in terms of reversibility and robustness with high quality of the watermarked image. The PSNR value of the recovered image is around 48 dB, which proves that the quality of the recovered image is not degraded.
Reversible watermarking based on generalized histogram shifting This paper proposes a generalization of the well-known histogram shifting method for reversible watermarking. The binary watermark source is converted into a non-binary source that produces symbols from an alphabet of n (n > 2) symbols. A technique similar to that of the generalized least significant bit (GLSB) is used to transform the watermark; therefore, an arithmetic coder is used in the source coding stage. In order to embed data, the histogram is shifted between border points according to the number of symbols that are being created by the generalized watermark source, and pixels that take the value of a border point are modified as in the original histogram shifting method, with the only difference that the modification is no longer limited to {0,1} but extends to {0,1,..., n-1}. It is shown analytically and verified through experimental results that this generalization can increase the flexibility of the histogram shifting technique and permit the embedder to better adapt itself to the embedding problem.
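In rough terms (our summary, not a formula quoted from the paper), the n-ary generalization trades capacity for distortion as follows: each pixel at the peak (border) value carries one of n symbols, while the pixels lying between the border points must be shifted further to make room for the n-1 new values.

```latex
% Back-of-the-envelope capacity/distortion trade-off for n-ary histogram
% shifting (our summary of the idea described above).
\text{bits per peak-valued pixel} \;=\; \log_2 n ,
\qquad
\text{shift applied to the other (shifted) pixels} \;=\; n-1 .
```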
Color demosaicking via directional linear minimum mean square-error estimation Digital cameras sample scenes using a color filter array of mosaic pattern (e.g., the Bayer pattern). The demosaicking of the color samples is critical to the image quality. This paper presents a new color demosaicking technique of optimal directional filtering of the green-red and green-blue difference signals. Under the assumption that the primary difference signals (PDS) between the green and red/blue channels are low pass, the missing green samples are adaptively estimated in both horizontal and vertical directions by the linear minimum mean square-error estimation (LMMSE) technique. These directional estimates are then optimally fused to further improve the green estimates. Finally, guided by the demosaicked full-resolution green channel, the other two color channels are reconstructed from the LMMSE filtered and fused PDS. The experimental results show that the presented color demosaicking technique outperforms the existing methods both in PSNR measure and visual perception.
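The fusion step described above is, in simplified form, the familiar inverse-variance weighting of two noisy estimates. Assuming independent, zero-mean estimation errors with variances σ_h² and σ_v² for the horizontal and vertical directional estimates ĝ_h and ĝ_v, the minimum mean square-error linear combination is as follows (our simplified statement; the paper's LMMSE derivation carries more structure):

```latex
% Inverse-variance fusion of two directional estimates (simplified form).
\hat g \;=\; \frac{\sigma_v^{2}\,\hat g_h \;+\; \sigma_h^{2}\,\hat g_v}
                  {\sigma_h^{2} + \sigma_v^{2}} ,
\qquad
\operatorname{Var}(\hat g) \;=\; \frac{\sigma_h^{2}\sigma_v^{2}}
                                      {\sigma_h^{2}+\sigma_v^{2}} .
```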
A new distortion-free data embedding scheme for high-dynamic range images Distortion-free data embedding enables a cover image to be recovered from its stego-image without any distortion after the secret data have been extracted. This distortion-free property allows the appropriate recovery of highly sensitive images for which even the smallest modification of the cover images cannot be allowed. In this paper, a new distortion-free data embedding scheme is proposed for high-dynamic range (HDR) images. The proposed scheme uses all homogeneous representations of each pixel in an HDR image efficiently and effectively for data embedding to enhance the embedding capacity of the HDR cover image. First, in the embedding phase, all homogeneous representations are used, and each homogeneous representation is used to represent one pattern of secret bits. Then, to conceal the secret bits, the current homogeneous representation of the current processing pixel is modified by the corresponding homogeneous representation of the hidden secret bits. Experimental results confirmed that the proposed scheme has greater embedding capacity than three other existing schemes. In addition, the experimental results indicated that our scheme matched the visual quality of the stego-image by producing a tone-mapped cover image and its stego-image that were exactly the same. In other words, our scheme also provided the desired, distortion-free property.
Improved joint reversible data hiding in encrypted images. We propose a novel joint RDH-EI scheme based on two rounds of embedding and recovery. A limitation of recovery failure in previous works is avoided. Higher embedding payloads and fewer recovery errors are achieved. This paper proposes an improved method of reversible data hiding in encrypted images (RDH-EI). Three parties constitute the proposed system: the image owner, the remote server and the recipient. To preserve privacy, the image owner encrypts the original image using a stream cipher algorithm and uploads the ciphertext to a remote server. On the server side, a data-hider is allowed to embed an additional message into the encrypted image using a swapping/shifting based algorithm. After downloading the marked encrypted image from the server and performing the decryption, a recipient can extract the hidden messages and losslessly recover the original image. Experimental results show that the proposed method achieves a larger payload than the related works. Meanwhile, a limitation in the related works, that few bits can be embedded into encrypted medical images, is also eliminated in the proposed method.
Local-Prediction-Based Difference Expansion Reversible Watermarking This paper investigates the use of local prediction in difference expansion reversible watermarking. For each pixel, a least square predictor is computed on a square block centered on the pixel and the corresponding prediction error is expanded. The same predictor is recovered at detection without any additional information. The proposed local prediction is general and it applies regardless of the predictor order or the prediction context. For the particular cases of least square predictors with the same context as the median edge detector, gradient-adjusted predictor or the simple rhombus neighborhood, the local prediction-based reversible watermarking clearly outperforms the state-of-the-art schemes based on the classical counterparts. Experimental results are provided.
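To show the prediction-error-expansion core in executable form, here is a small Python sketch. It is our simplification: it uses a fixed rhombus-average predictor instead of the paper's per-block least-squares predictor, and it ignores overflow/underflow handling and the location map.

```python
# Prediction-error expansion on a single pixel: predict, expand the error
# e to 2e + bit, and recover both the pixel and the bit at the detector.
def embed_pixel(x, left, right, up, down, bit):
    pred = (left + right + up + down) // 4   # rhombus-average predictor (simplified)
    e = x - pred
    return pred + 2 * e + bit                # expanded error carries one bit

def recover_pixel(x_marked, left, right, up, down):
    pred = (left + right + up + down) // 4
    e_marked = x_marked - pred
    bit = e_marked % 2                       # Python's % keeps this in {0, 1}
    e = e_marked // 2                        # floor division undoes 2e + bit
    return pred + e, bit

x, bit = 120, 1
xm = embed_pixel(x, 118, 121, 119, 122, bit)
assert recover_pixel(xm, 118, 121, 119, 122) == (x, bit)
```

The detector can rebuild the same predictor because the neighbouring pixels it uses are restored before the current pixel is processed, which is also why the paper's per-block least-squares predictor needs no side information.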
Block cipher based separable reversible data hiding in encrypted images While most reversible data hiding in encrypted images (RDH-EI) are based on stream cipher, this paper aims to present an alternative method feasible for block-enciphered images. Before uploading data to a remote server, the content owner encrypts the original image with a block cipher algorithm using an encryption key. Then, the server embeds additional bits into the encrypted image with an embedding key to generate the marked encrypted image. On the recipient side, the additional bits can be extracted if the receiver has the embedding key. In case the receiver has only the encryption key, the marked encrypted image can be directly deciphered to a plaintext image with good quality. When both the embedding and encryption keys are available for the receiver, he can recover the original image without any errors. Compared with the existing block cipher based RDH-EI method, drawbacks of the encryption and the recovery are avoided, and good embedding payloads are achieved.
On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit We develop improved algorithms to construct good low-density parity-check codes that approach the Shannon limit very closely. For rate 1/2, the best code found has a threshold within 0.0045 dB of the Shannon limit of the binary-input additive white Gaussian noise channel. Simulation results with a somewhat simpler code show that we can achieve within 0.04 dB of the Shannon limit at a bit error rate of 10/sup -6/ using a block length of 10/sup 7/.
Reversible data-hiding for progressive image transmission This research presents a multi-resolution reversible data-hiding algorithm to enable multi-scale marked images that are transmitted progressively to be exactly recovered at the receiver side once hidden data has been extracted. Based on the spatially hierarchical multi-layer structures of progressive-image transmission, the proposed algorithm first decimates the incoming image pixels into a pre-specified number of hierarchical layers of pixels. Then, it modifies pixel values in each hierarchical layer by shifting the interpolated-difference-values histogram between two neighboring layers of pixels to embed secret information into the corresponding hierarchical layer images. The proposed algorithm offers a reversible data-hiding ability for applications that use progressive image transmission to render progressive-image authentication, information-tagging, covert communications, etc. With progressive-reversible data-hiding, users of progressive image transmission can receive each original progressive image and complete hidden messages related to the received progressive image. This allows users to make real-time definite decisions according to an application's requirements. In contrast to other reversible data-hiding schemes, the algorithm proposed in this study features reversible data-hiding in progressive-image transmission based on a hierarchical decimation and interpolation technique. The interpolating process is used to reduce the difference values between the target pixel values in one progressive layer and their interpolated ones. This increases the hiding capacity of interpolation-differences histogram shifting. The experimental results demonstrate that the proposed method provides a greater embedding capacity and maintains marked images at a higher quality. Moreover, the proposed method has a low computational complexity as it requires only simple arithmetic computations.
Toward mechanical mathematics Results are reported here of a rather successful attempt of proving all theorems, totalling near 400, of Principia Mathematica which are strictly in the realm of logic, viz., the restricted predicate calculus with equality. A number of other problems of the same type are discussed. It is suggested that the time is ripe for a new branch of applied logic which may be called “inferential” analysis, which treats proofs as numerical analysis does calculations. This discipline seems capable, in the not too remote future, of leading to machine proofs of difficult new theorems. An easier preparatory task is to use machines to formalize proofs of known theorems. This line of work may also lead to mechanical checks of new mathematical results comparable to the debugging of a program.
Searching by elimination We present a way of program derivation that is applicable to a wide class of searching problems. Compared to more conventional approaches, this method yields very elegant programs. For a general problem specification, we derive a rather general program scheme. For the specific problem at hand, the general scheme is refined — depending on the mathematical properties of the objects involved — to a particular program. This is illustrated by some examples, varying from elementary problems to more advanced problems.
An optimal algorithm for sink-finding
Large capacity and high quality reversible data hiding method based on enhanced side match vector quantization In recent years, many reversible data hiding techniques have been proposed to solve information security problems. In this paper, we propose a novel reversible data hiding method based on enhanced side-match vector quantization (ESMVQ). We embed as many secret bits as possible into images compressed by ESMVQ, which uses a very small state codebook of size 4 to achieve a large hiding capacity while ensuring the compressed image quality by introducing a complementary state codebook. The experimental results demonstrate that the proposed method has a large embedding capacity and a good image quality. The experimental results also show that our proposed method outperforms the methods of Chang et al. (IEEE Trans on Circ and Syst for Video Techno 16(10):1301–1308), Huang et al. (The Imaging Sci J 61(2):195–203), Lee et al. (IEEE Trans on Inf Forensic and Secur 5(4):638–648) and Wang et al. (Inf Sci 246:69–82).
1.24
0.24
0.06
0.026667
0.013333
0.006667
0.00339
0.001557
0.000784
0.000208
0
0
0
0
A High Quality Reversible Data Hiding Method Using Interpolation Technique We propose a high quality and imperceptible reversible data hiding method in this paper. We divide the cover image into 5 by 5 blocks with non-overlapping boundaries, shrink them, and regenerate 5 by 5 blocks by an interpolation technique. If the absolute value of the difference between the interpolated value and the original gray level at the center of one of the four 3 by 3 overlapping blocks of each 5 by 5 block is less than a threshold, we may embed one bit of the secret data at that position. We apply a PRNG to shuffle the hidden data before embedding. To extract, we simply follow the embedding steps to locate the embedded positions, then shuffle the extracted bits back into order to recover the hidden data. The experimental results demonstrate that our method is imperceptible, keeps high image quality, and can recover the original cover image after extracting all hidden secret data.
Using Bias Optimization for Reversible Data Hiding Using Image Interpolation In this paper, we propose a reversible data hiding method in the spatial domain for compressed grayscale images. The proposed method embeds secret bits into a compressed thumbnail of the original image by using a novel interpolation method and the Neighbour Mean Interpolation (NMI) technique as scaling up to the original image occurs. Experimental results presented in this paper show that the proposed method has significantly improved embedding capacities over the approach proposed by Jung and Yoo.
Improving Security in Digital Images through Watermarking Using Enhanced Histogram Modification. Image transmission plays an important role in recent engineering and scientific research. Images have to be secured while they are transmitted, and many approaches are available for secure transmission. This method focuses on the use of invisible watermarking for encryption purposes: an attacker is not able to detect the difference when watermarking is used. Two algorithms are used for creating the watermarked object. The performance evaluation is done by applying various attacks to the watermarked object, using Matlab 7.11.
Multi-Cloud Data Management using Shamir's Secret Sharing and Quantum Byzantine Agreement Schemes Cloud computing is a phenomenal distributed computing paradigm that provides flexible, low-cost on-demand data management to businesses. However, this so-called outsourcing of computing resources causes business data security and privacy concerns. Although various methods have been proposed to deal with these concerns, none of these relates to multi-clouds. This paper presents a practical data management model in a public and private multi-cloud environment. The proposed model BFT-MCDB incorporates Shamir's Secret Sharing approach and Quantum Byzantine Agreement protocol to improve trustworthiness and security of business data storage, without compromising performance. The performance evaluation is carried out using a cloud computing simulator called CloudSim. The experimental results show significantly better performance in terms of data storage and data retrieval compared to other common cloud cryptographic based models. The performance evaluation based on CloudSim experiments demonstrates the feasibility of the proposed multi-cloud data management model.
Industrial Wireless Sensor Networks 2016 The industrial wireless sensor network (IWSN) is the next frontier in the Industrial Internet of Things (IIoT), which is able to help industrial organizations to gain competitive advantages in industrial manufacturing markets by increasing productivity, reducing the costs, developing new products and services, and deploying new business models. The IWSN can bridge the gap between the existing industrial systems and cyber networks to offer both new challenges and opportunities for manufacturers.
Secure and imperceptible digital image steganographic algorithm based on diamond encoding in DWT domain. This paper presents a new efficient embedding algorithm in the wavelet domain of digital images based on the diamond encoding (DE) scheme. Current discrete wavelet transform (DWT) steganography adds an unacceptable distortion to the images and is considered ineffective in terms of security. Applying the DE scheme to the current DWT steganographic methods solves the problems of these methods, reduces the distortion added to the images, and thus improves the embedding efficiency. The proposed algorithm first converts the secret image into a sequence of base-5 digits. After that, the cover image is transformed into the DWT domain and segmented into 2 × 1 coefficient pairs. The DE scheme is then used to change at most one coefficient of each coefficient pair to embed the base-5 digits. Experimental results show that the proposed algorithm is more efficient in embedding compared to other methods in terms of embedding payload and image quality. Moreover, the proposed algorithm is attacked by well-known steganalysis software. Results show that the proposed algorithm is secure against the powerful universal steganalyzer “ensemble classifier” and the histogram attack. The results also reveal that the proposed algorithm is robust against different image processing attacks such as compression, added noise, and cropping attacks.
A high-performance reversible data-hiding scheme for LZW codes Hiding a message in compression codes can reduce transmission costs and simultaneously make the transmission more secure. In this paper, we propose a high-performance, data-hiding Lempel-Ziv-Welch (HPDH-LZW) scheme, which reversibly embeds data in LZW compression codes by modifying the value of the compression codes, where the value of the LZW code either remains unchanged or is changed to the original value of the LZW code plus the LZW dictionary size according to the data to be embedded. Compared to other information-hiding schemes based on LZW compression codes, the proposed scheme achieves better hiding capacity by increasing the number of symbols available to hide secrets and also achieves faster hiding and extracting speeds due to the lower computation requirements. Our experimental results with the proposed scheme have confirmed both its high embedding capacity and its high speed when hiding and extracting data.
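A minimal sketch of the code-stream mapping summarized above, where each LZW output code either stays unchanged or has the dictionary size added to it to carry one secret bit; Python is used for illustration only, and the fixed dict_size parameter, the function names, and the omission of a real LZW codec are assumptions rather than the authors' implementation.

# Hide one bit per LZW code by optionally adding the dictionary size.
# Reversible as long as every original code value is smaller than dict_size.
def embed_bits(lzw_codes, bits, dict_size):
    assert all(c < dict_size for c in lzw_codes)
    marked = list(lzw_codes)
    for i, b in enumerate(bits[:len(marked)]):
        if b:
            marked[i] += dict_size        # bit 1 -> shift the code up by the dictionary size
    return marked                         # bit 0 -> code left unchanged

def extract_bits(marked_codes, dict_size):
    bits, codes = [], []
    for c in marked_codes:
        if c >= dict_size:
            bits.append(1)
            codes.append(c - dict_size)   # restore the original code
        else:
            bits.append(0)
            codes.append(c)
    return bits, codes

# Toy example with a 4096-entry dictionary.
codes = [65, 66, 256, 67]
marked = embed_bits(codes, [1, 0, 1, 1], 4096)
assert extract_bits(marked, 4096) == ([1, 0, 1, 1], codes)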
Reversible image watermarking by histogram modification for integer DCT coefficients We present a reversible watermarking scheme which achieves perfect restoration of both the embedded watermark and the original image during extraction. The proposed scheme embeds data by modifying those integer DCT coefficients with peak amplitudes in each coefficient histogram. The integer DCT performed over the original image is a lossless 8×8 block transform with high energy concentrating ability, which guarantees reversibility and high capacity/distortion ratio for the proposed watermarking scheme. In addition, this scheme provides a wide quality (PSNR) range from around 40 dB to 60 dB for the watermarked image, and an inherent fine adjustment capability for the quality (PSNR). Some experimental results are presented to demonstrate the high performance of our scheme in terms of capacity and the quality of the watermarked image.
Reversible data hiding in encrypted images using pseudorandom sequence modulation This work proposes a novel reversible data hiding scheme for encrypted images based on a pseudorandom sequence modulation mechanism. In the first phase, a content owner encrypts the original image for content protection. Then, a data-hider replaces a small proportion of data in LSB planes of encrypted image with the additional data and modifies the rest data in LSB planes according to the pseudorandom sequences modulated by the replaced and embedded data. With the encrypted image containing additional data, an additional-data user knowing the data-hiding key can extract the embedded additional data. And a content user with the encryption key may decrypt the encrypted image containing additional data to obtain the principal original content. If someone receives the decrypted image and has the data-hiding key, he can also successfully extract the additional data and perfectly recover the original image by exploiting the spatial correlation in natural image.
Shortcut deforestation in calculational form In functional programming, intermediate data structures are often used to "glue" together small programs. Deforestation is a program transformation to remove these intermediate data structures automatically. We present a simple algorithm for deforestation based on two fusion rules for hylomorphism, an expressive recursion pattern. A generic notation for hylomorphisms is introduced, where natural transformations are explicitly factored out, and it is used to represent programs. Our method...
An experimental program transformation and synthesis system This paper concentrates on the practical aspects of a program transformation system being developed. It describes the present performance of the system and outlines the techniques and heuristics used.
Optimal partitioners and end-case placers for standard-cell layout We study alternatives to classic Fiduccia-Mattheyses (FM)-based partitioning algorithms in the context of end-case processing for top-down standard-cell placement. While the divide step in the top-down divide and conquer is usually performed heuristically, we observe that optimal solutions can be found for many sufficiently small partitioning instances. Our main motivation is that small partitioning instances frequently contain multiple cells that are larger than the prescribed partitioning tolerance, and that cannot be moved iteratively while preserving the legality of a solution. To sample the suboptimality of FM-based partitioning algorithms, we focus on optimal partitioning and placement algorithms based on either enumeration or branch-and-bound that are invoked for instances below prescribed size thresholds, e.g., <10 cells for placement and <30 cells for partitioning. Such partitioners transparently handle tight balance constraints and uneven cell sizes while typically achieving 40% smaller cuts than best of several FM starts for instances between ten and 50 movable nodes and average degree 2-3. Our branch-and-bound codes incorporate various efficiency improvements, using results for hypergraphs (1993) and a graph-specific algorithm (1996). We achieve considerable speed-ups over single FM starts on such instances on average. Enumeration-based partitioners relying on Gray codes, while easier to implement and taking less time for elementary operations, can only compete with branch-and-bound on very small instances, where optimal placers achieve reasonable performance as well. In the context of a top-down global placer, the right combination of optimal partitioners and placers can achieve up to an average of 10% wirelength reduction and 50% CPU time savings for a set of industry testcases. Our results show that run-time versus quality tradeoffs may be different for small problem instances than for common large benchmarks, resulting in different comparisons of optimization algorithms. We therefore suggest that alternative algorithms be considered and, as an example, present detailed comparisons with the flow-based balanced partitioner heuristic
Target Association Rules: A New Behavioral Patterns for Point of Coverage Wireless Sensor Networks Recently, Knowledge Discovery Process has proven to be a promising tool for extracting the behavioral patterns of sensor nodes, from wireless sensor networks. In this paper, we propose a new kind of behavioral pattern, named Target-based Association Rules (TARs). TARs aim to discover the correlation among a set of targets monitored by a wireless sensor network at a border area. The major application of the Target-based Rules is to predict the location (target) of a missed reported event. Different data preparation mechanisms for accumulating the data needed for extracting TARs have been proposed. We refer to these mechanisms as Al-Node, Schedule-Buffer, and Fused-Schedule-Buffer. Several experiment studies have been conducted to evaluate the performance of the three proposed data preparation mechanisms. Results show that the Fused-Schedule-Buffer scheme outperforms the selected schemes in terms of energy consumption.
Hybrid local prediction error-based difference expansion reversible watermarking for medical images A hybrid local prediction-error based difference expansion reversible watermarking algorithm for hiding data into medical images is presented. The performance of the hybrid local algorithm with median, median edge, gradient adjusted and local prediction algorithms is compared. The hybrid local prediction algorithm has the highest frequency of zero prediction error. The PSNR and embedding capacity are improved in the hybrid local algorithm. Embedding secret information into a cover media and extracting the information and the image without any distortion is known as reversible watermarking (RW). This paper analyzes the performance of hybrid local prediction error-based RW using difference expansion (DE). The cover medical image is split into non-overlapping blocks. The border pixels in each block are predicted using median edge detection (MED) prediction. The other pixels in the block are predicted using least square prediction, and the prediction error is expanded. The secret data are embedded into the cover medical image corresponding to the prediction error using the DE method. The predictor is also embedded into the cover medical image to recover the data at detection without any additional information. The simulation results show that this method achieves better watermarked image quality and high embedding capacity when compared to other classical prediction methods: Median, MED, Rhombus and Gradient Adjusted Prediction.
1.22
0.22
0.22
0.22
0.22
0.073333
0.005
0.000385
0.000068
0
0
0
0
0
A reversible data transform algorithm using integer transform for privacy-preserving data mining. RDT aims to protect sensitive information from being revealed by data mining methods. A watermark can be embedded into the original data by RDT. Compared with the existing algorithms, RDT has better knowledge reservation. In addition, experimental results also show that RDT has a higher watermark payload. In the cloud computing environment, since data owners worry about private information in their data being disclosed without permission, they try to retain the knowledge within the data, while applying privacy-preserving techniques to the data. In the past, a data perturbation approach was commonly used to modify the original data content, but it also results in data distortion, and hence leads to significant loss of knowledge within the data. To solve this problem, this study introduced the concept of reversible integer transformation in the image processing domain and developed a Reversible Data Transform (RDT) algorithm that can disrupt and restore data. In the RDT algorithm, using an adjustable weighting mechanism, the degree of data perturbation was adjusted to increase the flexibility of privacy-preserving. In addition, it allows the data to be embedded with a watermark, in order to identify whether the perturbed data has been tampered with. Experimental results show that, compared with the existing algorithms, RDT has better knowledge reservation and is better in terms of effectively reducing information loss and privacy disclosure risk. In addition, it has a high watermark payload.
Reversible privacy preserving data mining: a combination of difference expansion and privacy preserving Privacy Preserving Data Mining (PPDM) can prevent private data from disclosure in data mining. However, current PPDM methods damage the values of the original data, so that knowledge obtained from the mined data cannot be verified against the original data. In this paper, we combine the concept and technique of reversible data hiding to propose a reversible privacy preserving data mining scheme in order to solve the irrecoverable problem of PPDM. In the proposed privacy difference expansion (PDE) method, the original data is perturbed and embedded with a fragile watermark to accomplish privacy preserving and data integrity of the mined data and to also allow recovery of the original data. Experimental tests on classification accuracy, probabilistic information loss, and privacy disclosure risk are used to evaluate the efficiency of PDE for privacy preserving and knowledge verification.
A recent survey on image watermarking techniques and its application in e-governance. This survey presents a brief discussion of different aspects of digital image watermarking. Included in the present discussion are these general concepts: major characteristics of digital watermark, novel and recent applications of watermarking, different kinds of watermarking techniques and common watermark embedding and extraction process. In addition, recent state-of-art watermarking techniques, potential issues and available solutions are discussed in brief. Further, the performance summary of the various state-of-art watermarking techniques is presented in tabular format. This survey contribution will be useful for the researchers to implement efficient watermarking techniques for secure e-governance applications.
Reversible data hiding for JPEG images employing all quantized non-zero AC coefficients. • A very simple reversible data hiding scheme for JPEG images is proposed. • The proposed scheme provides a significantly higher embedding capacity. • The proposed scheme achieves better preservation of the marked JPEG file size.
Spread-spectrum watermarking of audio signals Watermarking has become a technology of choice for a broad range of multimedia copyright protection applications. Watermarks have also been used to embed format-independent metadata in audio/video signals in a way that is robust to common editing. In this paper, we present several novel mechanisms for effective encoding and detection of direct-sequence spread-spectrum watermarks in audio signals. The developed techniques aim at (i) improving detection convergence and robustness, (ii) improving watermark imperceptiveness, (iii) preventing desynchronization attacks, (iv) alleviating estimation/removal attacks, and finally, (v) establishing covert communication over a public audio channel. We explore the security implications of the developed mechanisms and review watermark robustness on a benchmark suite that includes a combination of audio processing primitives including: time- and frequency-scaling with wow-and-flutter, additive and multiplicative noise, resampling, requantization, noise reduction, and filtering.
A Privacy-Preserving and Copy-Deterrence Content-Based Image Retrieval Scheme in Cloud Computing. With the increasing importance of images in people’s daily life, content-based image retrieval (CBIR) has been widely studied. Compared with text documents, images consume much more storage space. Hence, its maintenance is considered to be a typical example for cloud storage outsourcing. For privacy-preserving purposes, sensitive images, such as medical and personal images, need to be encrypted before outsourcing, which makes the CBIR technologies in plaintext domain to be unusable. In this paper, we propose a scheme that supports CBIR over encrypted images without leaking the sensitive information to the cloud server. First, feature vectors are extracted to represent the corresponding images. After that, the pre-filter tables are constructed by locality-sensitive hashing to increase search efficiency. Moreover, the feature vectors are protected by the secure kNN algorithm, and image pixels are encrypted by a standard stream cipher. In addition, considering the case that the authorized query users may illegally copy and distribute the retrieved images to someone unauthorized, we propose a watermark-based protocol to deter such illegal distributions. In our watermark-based protocol, a unique watermark is directly embedded into the encrypted images by the cloud server before images are sent to the query user. Hence, when image copy is found, the unlawful query user who distributed the image can be traced by the watermark extraction. The security analysis and the experiments show the security and efficiency of the proposed scheme.
Adaptive reversible data hiding scheme based on integer transform In this paper, we present a new reversible data hiding algorithm based on integer transform and adaptive embedding. According to the image block type determined by the pre-estimated distortion, the parameter in the integer transform is adaptively selected in different blocks. This allows embedding more data bits into smooth blocks while avoiding the large distortion generated by noisy ones, and thus enables very high capacity with good image quality. For instance, the proposed method can embed as much as 2.17 bits per pixel into the Lena image with a reasonable PSNR of 20.71 dB. Experimental results demonstrate that the proposed method outperforms some state-of-the-art algorithms, especially for the high capacity case.
Distortion-Free Data Embedding for Images One common drawback of virtually all current data embedding methods is the fact that the original image is inevitably distorted by some small amount of noise due to data embedding itself. This distortion typically cannot be removed completely due to quantization, bit-replacement, or truncation at the grayscales 0 and 255. Although the distortion is often quite small, it may not be acceptable for medical imagery (for legal reasons) or for military images inspected under unusual viewing conditions (after filtering or extreme zoom). In this paper, we introduce a general approach for high-capacity data embedding that is distortion-free (or lossless) in the sense that after the embedded information is extracted from the stego-image, we can revert to the exact copy of the original image before the embedding occurred. The new method can be used as a powerful tool to achieve a variety of non-trivial tasks, including distortion-free robust watermarking, distortion-free authentication using fragile watermarks, and steganalysis. The proposed concepts are also extended to lossy image formats, such as the JPG.
Lightweight Run-Time Code Generation Run-time code generation is an alternative and complement to compile-time program analysis and optimization. Static analyses are inherently imprecise because most interesting aspects of run-time behavior are uncomputable. By deferring aspects of compilation to run time, more precise information about program behavior can be exploited, leading to greater opportunities for code improvement. The cost of performing optimization at run time is of paramount importance, since it must be repaid by...
Topic-focused multi-document summarization using an approximate oracle score We consider the problem of producing a multi-document summary given a collection of documents. Since most successful methods of multi-document summarization are still largely extractive, in this paper, we explore just how well an extractive method can perform. We introduce an "oracle" score, based on the probability distribution of unigrams in human summaries. We then demonstrate that with the oracle score, we can generate extracts which score, on average, better than the human summaries, when evaluated with ROUGE. In addition, we introduce an approximation to the oracle score which produces a system with the best known performance for the 2005 Document Understanding Conference (DUC) evaluation.
Qualified answers and their application to transformation
Efficient spatial image watermarking via new perceptual masking and blind detection schemes The aim of this paper is to improve the performance of spatial domain watermarking. To this end, a new perceptual mask and a new detection scheme are proposed. The proposed spatial perceptual mask is based on the cover image prediction error sequence and matches very well with the properties of the human visual system. It exhibits superior performance compared to existing spatial masking schemes. Moreover, it allows for a significantly increased strength of the watermark while, at the same time, the watermark visibility is decreased. The new blind detection scheme comprises an efficient prewhitening process and a correlation-based detector. The prewhitening process is based on the least-squares prediction error filter and substantially improves the detector's performance. The correlation-based detector that was selected is shown to be the most suitable for the problem at hand. The improved performance of the proposed detection scheme has been justified theoretically for the case of linear filtering plus noise attack and through extensive simulations. The theoretical analysis is independent of the proposed mask and the derived expressions can be used for any watermarking technique based on spatial masking. It is shown though that in most cases the detector performs better if the proposed mask is employed.
Geometrically invariant image watermarking using Polar Harmonic Transforms This paper presents an invariant image watermarking scheme by introducing the Polar Harmonic Transform (PHT), which is a recently developed orthogonal moment method. Similar to Zernike moment (ZM) and pseudo-Zernike moment (PZM) approaches, PHT is defined on a circular domain. The magnitudes of PHTs are invariant to image rotation and scaling. Furthermore, the PHTs are free of numerical instability, so they are more suitable for watermarking. In this paper, the invariant properties of PHTs are investigated. During embedding, a subset of the accurate PHTs are modified according to the binary watermark sequence. Then a compensation image is formatted by reconstructing the modified PHT vector. The final watermarked image is obtained by adding the compensation image to the original image. In the decoder, the watermark can be retrieved from the magnitudes of the PHTs directly. Experimental results illustrate that the proposed scheme out-performs ZM/PZM based schemes in terms of embedding capacity and watermark robustness and is also robust to both geometric and signal processing based attacks.
Balance transmission mechanism in underwater acoustic sensor networks With the rapid development of underwater acoustic modem technology, underwater acoustic sensor networks (UWASNs) have more applications in long-term monitoring of the deployment area. In the underwater environment, the sensors are costly with limited energy. And acoustic communication medium poses new challenges, including high path loss, low bandwidth, and high energy consumption. Therefore, designing transmission mechanism to decrease energy consumption and to optimize the lifetime of UWASN becomes a significant task. This paper proposes a balance transmission mechanism, and divides the data transmission process into two phases. In the routing set-up phase, an efficient routing algorithm based on the optimum transmission distance is present to optimize the energy consumption of the UWASN. And then, a data balance transmission algorithm is introduced in the stable data transmission phase. The algorithm determines one-hop or multihop data transmission of the node to underwater sink according to the current energy level of adjacent nodes. Furthermore, detailed theoretical analysis evaluates the optimum energy levels in the UWASNs with different scales. The simulation results prove the efficiency of the BTM.
1.071111
0.048889
0.04
0.033333
0.013333
0.005714
0.000717
0
0
0
0
0
0
0
Joint Channel Selection and Power Control in Infrastructureless Wireless Networks: A Multi-Player Multi-Armed Bandit Framework This paper deals with the problem of efficient resource allocation in dynamic infrastructureless wireless networks. In a reactive interference-limited scenario, at each transmission trial, every transmitter selects a frequency channel from some common pool, together with a power-level. As a result, for all transmitters, not only the fading gain, but also the number and the power of interfering transmissions vary over time. Due to the absence of a central controller and time-varying network characteristics, it is highly inefficient for transmitters to acquire the global channel and network knowledge. Therefore, given no information, each transmitter selfishly gropes to maximize its average reward, which is a function of the channel quality as well as the joint selection profile of all transmitters. This scenario is modeled as an adversarial multi-player multi-armed bandit game, where players attempt to minimize their so-called regret, while, at the network side, the desired outcome is to achieve equilibrium in some sense. Based on this model and in order to solve the resource allocation problem, in this paper we develop two joint power-level and channel selection strategies. We prove that the gap between the average rewards achieved by our approaches and that based on the best fixed strategy converges to zero asymptotically. Moreover, the empirical joint frequencies of the game converge to the set of correlated equilibria, which is characterized for two special cases of the designed game.
Marine communities based congestion control in underwater wireless sensor networks Congestion control in Underwater Wireless Sensor Networks (UWSNs) is an important but challenging issue. TCP performs very poorly in UWSNs due to its inability to distinguish between packet losses due to congestion and those due to channel error. The existing congestion control protocols developed for terrestrial sensor networks cannot be applied to UWSNs because the characteristics of the underwater channel such as high bit error rates, high attenuation and propagation delays, multipath and Doppler distortion have not been considered. In this paper, a biologically-inspired congestion control protocol has been proposed for UWSNs based on the ability of marine communities to terminate with phytoplankton blooms and move the system back to equilibrium between species. The proposed distributed algorithm distinguishes between packet losses due to congestion and those due to high link error rates. It eliminates flow starvation and provides flow fairness. The channel effects of underwater propagation on packet losses are captured, the shadow zones are detected and the throughput of the flows from different nodes at the receiver is restored even with channel fading. Consequently, the system goes back to a stable state, where the different event flows coexist. The theoretical analysis and numerical evaluations show the performance of the proposed congestion control protocol in UWSNs.
Online learning algorithms for stochastic water-filling Water-filling is the term for the classic solution to the problem of allocating constrained power to a set of parallel channels to maximize the total data-rate. It is used widely in practice, for example, for power allocation to sub-carriers in multi-user OFDM systems such as WiMax. The classic water-filling algorithm is deterministic and requires perfect knowledge of the channel gain to noise ratios. In this paper we consider how to do power allocation over stochastically time-varying (i.i.d.) channels with unknown gain to noise ratio distributions. We adopt an online learning framework based on stochastic multi-armed bandits. We consider two variations of the problem, one in which the goal is to find a power allocation to maximize Σ_i E[log(1 + SNR_i)], and another in which the goal is to find a power allocation to maximize Σ_i log(1 + E[SNR_i]). For the first problem, we propose a cognitive water-filling algorithm that we call CWF1. We show that CWF1 obtains a regret (defined as the cumulative gap over time between the sum-rate obtained by a distribution-aware genie and this policy) that grows polynomially in the number of channels and logarithmically in time, implying that it asymptotically achieves the optimal time-averaged rate that can be obtained when the gain distributions are known. For the second problem, we present an algorithm called CWF2, which is, to our knowledge, the first algorithm in the literature on stochastic multi-armed bandits to exploit non-linear dependencies between the arms. We prove that the number of times CWF2 picks the incorrect power allocation is bounded by a function that is polynomial in the number of channels and logarithmic in time, implying that its frequency of incorrect allocation tends to zero.
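A minimal sketch of the classic deterministic water-filling allocation referred to above (not the learning algorithms CWF1 or CWF2): it assumes the gain-to-noise ratios are known exactly, and the bisection search on the water level, the function name, and the toy numbers are illustrative choices.

# Classic water-filling: maximize sum_i log(1 + g_i * p_i) subject to sum_i p_i = P, p_i >= 0.
# The optimum is p_i = max(0, mu - 1/g_i); the water level mu is found by bisection.
def water_filling(gains, total_power, iters=100):
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(iters):
        mu = (lo + hi) / 2.0
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu                        # water level too high, power budget exceeded
        else:
            lo = mu                        # water level can still be raised
    return [max(0.0, lo - 1.0 / g) for g in gains]

powers = water_filling([2.0, 1.0, 0.25], total_power=3.0)
# Stronger channels (larger g) receive at least as much power, weak ones may get none,
# and the allocations sum to (approximately) the total budget.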
Learning for Robust Routing Based on Stochastic Game in Cognitive Radio Networks. This paper studies the problem of spectrum-aware routing in a multi-hop, multi-channel cognitive radio network when malicious nodes in the secondary network attempt to block the path with mixed attacks. Based on the location and time-variant path delay information, we model the path discovery process as a non-cooperative stochastic game. By exploiting the structure of the underlying Markov Decisio...
Relay Selection in Underwater Acoustic Cooperative Networks: A Contextual Bandit Approach. Cooperative relay transmission is an attractive architecture for underwater acoustic networks. However, designing relay selection policies in the harsh underwater environment is difficult. In this letter, we model relay selection as a contextual bandit problem-an important extension of multi-armed bandit. Through this way, we can achieve relay selection based on a bit of contextual communication environment information about relay nodes instead of instantaneous or statistical channel state information. Our proposed relay selection technique enables highly stable performance of the cooperative system in a complex and changeable underwater environment, and the process of relay selection can be simplified and easily facilitate efficient cooperative transmission. Simulation results illustrate the effectiveness and the robustness of this relay selection technique.
Tier-Aware Resource Allocation in OFDMA Macrocell-Small Cell Networks We present a joint sub-channel and power allocation framework for downlink transmission in an orthogonal frequency-division multiple access (OFDMA)-based cellular network composed of a macrocell overlaid by small cells. In this framework, the resource allocation (RA) problems for both the macrocell and small cells are formulated as optimization problems. For the macrocell, we formulate an RA problem that is aware of the existence of the small cell tier. In this problem, the macrocell performs RA to satisfy the data rate requirements of macro user equipments (MUEs) while maximizing the tolerable interference from the small cell tier on its allocated sub-channels. Although the RA problem for the macrocell is shown to be a mixed integer nonlinear problem (MINLP), we prove that the macrocell can solve another alternate optimization problem that will yield the optimal solution with reduced complexity. For the small cells, following the same idea of tier-awareness, we formulate an optimization problem that accounts for both RA and admission control (AC) and aims at maximizing the number of admitted users while simultaneously minimizing the consumed bandwidth. Similar to the macrocell optimization problem, the small cell problem is shown to be an MINLP. We obtain a sub-optimal solution to the MINLP problem relying on convex relaxation. In addition, we employ the dual decomposition technique to have a distributed solution for the small cell tier. Numerical results confirm the performance gains of our proposed RA formulation for the macrocell over the traditional resource allocation based on minimizing the transmission power. Besides, it is shown that the formulation based on convex relaxation yields a similar behavior to the MINLP formulation. Also, the distributed solution converges to the same solution obtained by solving the corresponding convex optimization problem in a centralized fashion.
Distributed Estimation for Moving Target Based on State-Consensus Strategy This technical note studies the distributed estimation problem for a continuous-time moving target under switching interconnection topologies. A recursive distributed estimation algorithm is proposed by using state-consensus strategy, where a common gain is assigned to adjust the innovative and state-consensus information for each sensor in the network. Under mild conditions on observability and connectivity, the stability of the distributed estimation algorithm is analyzed. An upper bound and lower bound for the total mean square estimation error (TMSEE) are obtained by virtue of the common Lyapunov method and Kalman-Bucy filtering theory, respectively. Then a numerical simulation is given to verify the effectiveness of the proposed algorithm.
A Comprehensive Survey on Hierarchical-Based Routing Protocols for Mobile Wireless Sensor Networks: Review, Taxonomy, and Future Directions. Introducing mobility to Wireless Sensor Networks (WSNs) puts new challenges particularly in designing of routing protocols. Mobility can be applied to the sensor nodes and/or the sink node in the network. Many routing protocols have been developed to support the mobility of WSNs. These protocols are divided depending on the routing structure into hierarchical-based, flat-based, and location-based routing protocols. However, the hierarchical-based routing protocols outperform the other routing types in saving energy, scalability, and extending lifetime of Mobile WSNs (MWSNs). Selecting an appropriate hierarchical routing protocol for specific applications is an important and difficult task. Therefore, this paper focuses on reviewing some of the recently hierarchical-based routing protocols that are developed in the last five years for MWSNs. This survey divides the hierarchical-based routing protocols into two broad groups, namely, classical-based and optimized-based routing protocols. Also, we present a detailed classification of the reviewed protocols according to the routing approach, control manner, mobile element, mobility pattern, network architecture, clustering attributes, protocol operation, path establishment, communication paradigm, energy model, protocol objectives, and applications. Moreover, a comparison between the reviewed protocols is investigated in this survey depending on delay, network size, energy-efficiency, and scalability while mentioning the advantages and drawbacks of each protocol. Finally, we summarize and conclude the paper with future directions.
Distributed fault-tolerant topology control in wireless multi-hop networks In wireless multi-hop and ad-hoc networks, minimizing power consumption and at the same time maintaining desired properties of the network topology is of prime importance. In this work, we present a distributed algorithm for assigning minimum possible power to all the nodes in a static wireless network such that the resultant network topology is k-connected. In this algorithm, a node collects the location and maximum power information from all nodes in its vicinity, and then adjusts the power of these nodes in such a way that it can reach all of them through k optimal vertex-disjoint paths. The algorithm ensures k-connectivity in the final topology provided the topology induced when all nodes transmit with their maximum power is k-connected. We extend our topology control algorithm from static networks to networks having mobile nodes. We present proof of correctness for our algorithm for both static and mobile scenarios, and through extensive simulation we present its behavior.
Cross-Layer Network Lifetime Maximization in Interference-Limited WSNs
A language for task orchestration and its semantic properties Orc is a new language for task orchestration, a form of concurrent programming with applications in workflow, business process management, and web service orchestration. Orc provides constructs to orchestrate the concurrent invocation of services – while managing time-outs, priorities, and failure of services or communication. In this paper, we show a trace-based semantic model for Orc, which induces a congruence on Orc programs and facilitates reasoning about them. Despite the simplicity of the language and its semantic model, Orc is able to express a variety of useful orchestration tasks.
Wireplanning in logic synthesis In this paper, we propose a new logic synthesis methodology to deal with the increasing importance of the interconnect delay in deep-submicron technologies. We first show that conventional logic synthesis techniques can produce circuits which will have long paths even if placed optimally. Then, we characterize the conditions under which this can happen and propose logic synthesis techniques which produce circuits which are "better" for placement. Our proposed approach still separates logic synthesis from physical design.
Evaluating the Performance of Reverse Encryption Algorithm (REA) on the Databases Database encryption is a well established technology for protecting sensitive data. Unfortunately, the integration of existing encryption techniques with database systems causes undesirable performance degradation. Encryption is a crucial technique in the security mechanisms of databases. In this paper we propose a new encryption algorithm, which we call Reverse Encryption Algorithm (REA). Our new encryption algorithm REA is simple and yet leads to a cipher that achieves the security requirements and is fast enough for most applications. The REA algorithm limits the added time cost of encryption and decryption so as not to degrade the performance of a database system. We also evaluate the performance of the proposed encryption algorithm REA and compare it with the most common encryption algorithms. The performance of the encryption schemes is measured in terms of encryption/decryption time. Experimental results show that our new algorithm outperforms the other algorithms in encryption and decryption time.
Protecting patient confidential information based on ECG reversible data hiding Nowadays telecardiology is widely popular due to the fact that an increasing number of people are suffering from cardiac disease in the world. Therefore a huge amount of ECG signals as well as patient confidential information will be transmitted via the Internet. Ibaida's wavelet-based data hiding technique aims to protect patient confidential data utilizing the ECG signal as a host media, but it cannot completely reconstruct the original ECG signal. Any alteration of the ECG may lead to an inaccurate diagnosis conclusion drawn by the doctor, which cannot be accepted by patients. In this paper, our fundamental requirement is that both the patient information and the ECG signal must be perfectly restored at the extraction side. Firstly, a method is proposed to embed patient confidential data into the ECG signal while keeping its high visual quality. Then we use a unified embedding-scrambling method to guarantee the security of patient privacy as well as the ECG signal itself; in this case the structure of the watermarked ECG signal is severely deconstructed. The experimental results demonstrate that both of our proposed methods are reversible. Moreover, the latter scheme can achieve a high information payload.
1.22
0.22
0.22
0.22
0.22
0.11
0.01
0.006667
0.003333
0.001111
0
0
0
0
A new reliable optimized image watermarking scheme based on the integer wavelet transform and singular value decomposition for copyright protection. Although image watermarking schemes based on singular value decomposition (SVD) demonstrate high robustness and imperceptibility, they are exposed to the false positive problem (FPP). This drawback mostly occurs when embedding steps depend on singular values while singular vectors are used as secret keys. In this study, a new reliable SVD-based image watermarking scheme that uses integer wavelet transform (IWT) is proposed to overcome FPP and fulfil all watermarking requirements. Unlike in other schemes, the S and V matrices of the watermark are used as secret keys, whereas the S singular vector of the watermark is embedded into the singular values of the host image. The additional secret key is obtained from the watermarked image during the embedding process to increase security and avoid FPP completely. To improve the robustness, as well as achieve balance between robustness and imperceptibility, multi-objective ant colony optimization (MOACO) is utilized to find the optimal scaling factors, namely, multiple zooming factors. Results of the robustness, imperceptibility, and reliability tests demonstrate that the proposed IWT-SVD-MOACO scheme outperforms several previous schemes and avoids FPP completely.
Affine invariant image watermarking using intensity probability density-based Harris Laplace detector Feature point based image watermarking against geometric distortions has attracted great attention in recent years. However, for the state-of-the-art intensity based feature points detectors, the feature points often gather at textured portions of the image or on the edges where the change of intensity is significant, so that many feature points capture the same portion of the image, which makes the watermark be vulnerable to local geometric distortions. In this paper, we propose an affine invariant image watermarking scheme with good visual quality and reasonable resistance toward local geometric distortions, which utilizes the intensity probability density-based Harris-Laplace detector. Firstly, the uniform and robust feature points are extracted by utilizing modified Harris-Laplace detector, in which the intensity probability density gradient is used instead of intensity gradient. Then, the affine invariant local ellipse regions (LERs) are constructed adaptively according to the variation of local intensity probability density. Finally, the digital watermark is embedded into the affine invariant LERs in nonsubsampled contourlet transform (NSCT) domain by modulating the lowpass NSCT coefficients. By binding the watermark with the affine invariant LERs, the watermark detection can be done without synchronization error. Experimental results show that the proposed image watermarking is not only invisible and robust against common image processing operations such as sharpening, noise adding, and JPEG compression, but also robust against the global affine transforms and local geometric distortions.
Improvement on LEACH protocol of a wide-area wireless sensor network. This paper proposes to improve the energy efficiency of nodes, which drops rapidly during transmission in LEACH (Low Energy Adaptive Clustering Hierarchy), by using a dual-hop layered method in the sensor field. By introducing a partial dual-hop method in the data transmission, a single-hop method is used for short-range transmission and a multi-hop method between the cluster heads is used for remote transmission. In the proposed DL-LEACH, the energy consumption of the cluster head for remote transmission is reduced and the energy efficiency of the sensor nodes is increased by shortening the transmission distance and simplifying the transmission routine for short-range transmission. Compared with the general LEACH, DL-LEACH is adapted to a wider sensor field.
Reversible data hiding by adaptive modification of prediction errors Histogram Shifting (HS) is one of the most popular reversible data hiding techniques that has received tremendous attention from the research community in recent years. While histogram shifting offers many advantages, it suffers from relatively low payload, which restricts its applications significantly. In this work, a new reversible data hiding technique based on the modification of the histogram of prediction errors is proposed. The proposed method employs an adaptive strategy to vacate multiple bins as the embedding venues in order to increase the effective payload. The histogram bins are shifted dynamically based on their magnitudes. To maintain high quality for the output image, the distance of shifting is minimized for smaller prediction errors. On the other hand, the distance of shifting is allowed to be larger for larger prediction errors, which are of lower occurrences, to create more space for embedding. The proposed data hiding method is able to reversibly hide a larger number of bits into the host image while achieving comparable output image quality when compared to the conventional histogram shifting based methods. The experimental results suggest that, on average, the proposed method is able to embed 0.247 bpp into various standard test images, while still maintaining the visual quality at a satisfactory level of ~48.9 dB.
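A minimal sketch of plain single-peak histogram shifting applied to a sequence of prediction errors, shown only as the baseline that the adaptive multi-bin method above extends; the peak selection, the zero-padding convention for unused peak positions, and the function names are illustrative assumptions and do not reproduce the adaptive shifting strategy.

from collections import Counter

# Embed at the most frequent error value p; errors above p are shifted up by one to free the bin p+1.
def hs_embed(errors, bits):
    p = Counter(errors).most_common(1)[0][0]   # peak of the prediction-error histogram
    out, k = [], 0
    for e in errors:
        if e > p:
            out.append(e + 1)                  # shift to make room next to the peak
        elif e == p and k < len(bits):
            out.append(p + bits[k])            # p -> bit 0, p+1 -> bit 1
            k += 1
        else:
            out.append(e)                      # peak occurrences past the payload stay at p
    return out, p

def hs_extract(marked, p):
    bits, errors = [], []
    for e in marked:
        if e == p:
            bits.append(0); errors.append(p)
        elif e == p + 1:
            bits.append(1); errors.append(p)
        elif e > p + 1:
            errors.append(e - 1)               # undo the shift
        else:
            errors.append(e)
    return bits, errors                        # trailing zero bits, if any, are padding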
Digital watermarking Digital watermarking is a key ingredient to copyright protection. It provides a solution to illegal copying of digital material and has many other useful applications such as broadcast monitoring and the recording of electronic transactions. Now, for the first time, there is a book that focuses exclusively on this exciting technology. Digital Watermarking covers the crucial research findings in the field: it explains the principles underlying digital watermarking technologies, describes the requirements that have given rise to them, and discusses the diverse ends to which these technologies are being applied. As a result, additional groundwork is laid for future developments in this field, helping the reader understand and anticipate new approaches and applications.
Near lossless reversible data hiding based on adaptive prediction In this paper we present a new near lossless reversible watermarking algorithm using adaptive prediction for embedding. The prediction is based on directional first-order differences of pixel intensities within a suitably selected neighborhood. The proposed scheme results to be computationally efficient and allows achieving high embedding capacity while preserving a high image quality. Extensive experimental results demonstrate the effectiveness of the proposed approach.
Improved rhombus interpolation for reversible watermarking by difference expansion The paper proposes an interpolation error expansion reversible watermarking algorithm. The main novelty of the paper is a modified rhombus interpolation scheme. The four horizontal and vertical neighbors are considered and, depending on their values, the interpolated pixel is computed as the average of the horizontal pixels, of the vertical pixels or of the entire set of four pixels. Experimental results are provided. The proposed scheme outperforms the results obtained by using the average on the four horizontal and vertical neighbors and the ones obtained by using well known predictors as MED or GAP.
Recursive histogram modification: establishing equivalency between reversible data hiding and lossless data compression. State-of-the-art schemes for reversible data hiding (RDH) usually consist of two steps: first construct a host sequence with a sharp histogram via prediction errors, and then embed messages by modifying the histogram with methods, such as difference expansion and histogram shift. In this paper, we focus on the second stage, and propose a histogram modification method for RDH, which embeds the message by recursively utilizing the decompression and compression processes of an entropy coder. We prove that, for independent identically distributed (i.i.d.) gray-scale host signals, the proposed method asymptotically approaches the rate-distortion bound of RDH as long as perfect compression can be realized, i.e., the entropy coder can approach entropy. Therefore, this method establishes the equivalency between reversible data hiding and lossless data compression. Experiments show that this coding method can be used to improve the performance of previous RDH schemes and the improvements are more significant for larger images.
A Novel Difference Expansion Transform for Reversible Data Embedding Reversible data embedding theory has marked a new epoch for data hiding and information security. Being reversible, the original data and the embedded data should be completely restored. Difference expansion transform is a remarkable breakthrough in reversible data-hiding schemes. The difference expansion method achieves high embedding capacity and keeps distortion low. This paper shows that the difference expansion method with the simplified location map and new expandability can achieve more embedding capacity while keeping the distortion at the same level as the original expansion method. Performance of the proposed scheme in this paper is shown to be better than the original difference expansion scheme by Tian and its improved version by Kamstra and Heijmans. This improvement can be possible by exploiting the quasi-Laplace distribution of the difference values.
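A minimal sketch of the basic pairwise difference expansion transform that the scheme above builds on; the simplified location map and the new expandability test described in the abstract are not shown, the overflow/underflow check is left to the caller, and the Python names are illustrative.

# Expand the difference h = x - y of a pixel pair to h' = 2h + bit,
# keeping the integer average l unchanged, then rebuild the pair from (l, h').
def de_embed(x, y, bit):
    l = (x + y) // 2
    h = x - y
    h2 = 2 * h + bit
    x2 = l + (h2 + 1) // 2
    y2 = l - h2 // 2
    return x2, y2                 # caller must verify 0 <= x2, y2 <= 255 (expandability)

def de_extract(x2, y2):
    l = (x2 + y2) // 2            # the average survives the expansion
    h2 = x2 - y2
    bit = h2 & 1                  # LSB of the expanded difference (floor semantics handle negatives)
    h = h2 >> 1
    x = l + (h + 1) // 2
    y = l - h // 2
    return x, y, bit

x2, y2 = de_embed(206, 201, 1)
assert de_extract(x2, y2) == (206, 201, 1)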
A steganographic method for digital images using side match In order to provide larger embedding capacity and to minimize the distortion of the stego-image, a novel steganographic method using side information is presented in this paper. The method exploits the correlation between neighboring pixels to estimate the degree of smoothness or contrast of pixels. If the pixel is located in edge area, then it may tolerate larger changes than those in smooth areas. The two-sided, three-sided, and four-sided side match methods are employed in our scheme. The experimental results show that our method provides a large embedding capacity without making noticeable distortion. Besides, the embedded data can be extracted from the stego-image without referencing the original image.
A construction of a cipher from a single pseudorandom permutation We suggest a scheme for a block cipher which uses only one randomly chosen permutation, F. The key, consisting of two blocks, K1 and K2, is used in the following way. The message block is XORed with K1 before applying F, and the outcome is XORed with K2, to produce the cryptogram block. We show that the resulting cipher is secure (when the permutation is random or pseudorandom). This removes the need to store, or generate, a multitude of permutations.
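A minimal sketch of the construction described above, C = F(M XOR K1) XOR K2, with a toy 8-bit public permutation standing in for the randomly chosen permutation F; the block width, the seeded shuffle, and the names are illustrative assumptions rather than a realistic instantiation.

import random

# Fixed public permutation F over 8-bit blocks, plus its inverse for decryption.
rng = random.Random(0)
F = list(range(256))
rng.shuffle(F)
F_inv = [0] * 256
for i, v in enumerate(F):
    F_inv[v] = i

def encrypt_block(m, k1, k2):
    return F[m ^ k1] ^ k2         # whiten with K1, apply F, whiten with K2

def decrypt_block(c, k1, k2):
    return F_inv[c ^ k2] ^ k1     # invert the three steps in reverse order

m, k1, k2 = 0x5A, 0x3C, 0xA7
assert decrypt_block(encrypt_block(m, k1, k2), k1, k2) == m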
H.264/AVC video for wireless transmission H.264/AVC will be an essential component in emerging wireless video applications thanks to its excellent compression efficiency and network-friendly design. However, a video coding standard itself is only one component within the application and transmission environment. Its effectiveness strongly depends on the selection of appropriate modes and parameters at the encoder, at the decoder, as well as in the network. In this paper we introduce the features of the H.264/AVC coding standard that make it suitable for wireless video applications, including features for error resilience, bit rate adaptation, integration into packet networks, interoperability, and buffering considerations. Modern wireless networks provide many different means to adapt quality of service, such as forward error correction methods on different layers and end-to-end or link layer retransmission protocols. The applicability of all these encoding and network features depends on application constraints, such as the maximum tolerable delay, the possibility of online encoding, and the availability of feedback and cross-layer information. We discuss the use of different coding and transport related features for different applications, namely video telephony, video conferencing, video streaming, download-and-play, and video broadcasting. Guidelines for the selection of appropriate video coding tools, video encoder and decoder settings, as well as transport and network parameters are provided and justified. References to relevant research publications and standardization contributions are given.
Multi-terminal nets do change conventional wire length distribution models Conventional models for estimating wire lengths in computer chips use Rent's rule to estimate the number of terminals between sets of gates. The number of interconnections then follows by taking into account that most nets are point-to-point connections. In this paper, we introduce a model for multi-terminal nets and we show that such nets have a fundamentally different influence on the wire length estimations than point-to-point nets. The multi-terminal net model is then used to estimate the wire length distribution in two cases: (i) the distribution of source-sink pairs for applications of delay estimation and (ii) the distribution of Steiner tree lengths for applications related to routing resource estimation. The effects of including multi-terminal nets in the estimations are highlighted. Experiments show that the new estimated wire length distributions are close to the measured ones.
Integer transform based reversible watermarking incorporating block selection. • We use the invariant mean value of a block to evaluate the local complexity. • The block size can be set to a small value by the invariability of the mean value. • The reduced size location map is created by the invariability of the mean value. • We can modify flexibly each pixel in a block using DE or HS.
1.24
0.12
0.08
0.005714
0.003218
0.001905
0.000317
0.000068
0.000001
0
0
0
0
0
Semantics and axiomatics of a simple recursive language. In this paper, we provide a simple recursive programming language with a semantics and a formal proof system, along the lines of [5], [17] and [23]. We show that the semantics used is the “best” possible if one admits the validity of Algol's copy rule, and that the proof system is complete with respect to the semantics. The definitions and methods used are meant to provide a basis for a broader theory of program schemas, which models parallel as well as sequential programs.
Program Development By Transformations Applied To Relational Database Queries Queries to relational databases may be formulated in a class of non-procedural query languages based on Predicate Calculus. Using a wide spectrum language, procedures are introduced which express these Predicate Calculus formulations. Program transformation schemes are then utilized to develop correct, quantifier-free programs.
An Implementation Methodology for Semantic Data Base Models
Algebraic optimization of object-oriented query languages Advanced database applications demand new data modeling constructs beyond those available in relational databases. These include both new data structures, e.g. arrays and quadtrees, and an integration with the object-oriented programming paradigm. Declarative object-oriented query languages transfer the burden of choosing an efficient execution plan to the database query optimizer. The lack of a generally accepted object-oriented data model and the trend towards extensible systems demand an extensible framework for object-oriented query optimization. We suggest such an algebraic optimization framework which is based on the computational metaphor of processing bulk data. Bulk data are defined using an abstract definition of the notion of data collection which includes familiar and novel types of bulk data. In particular, we integrate neatly object-oriented notions such as object identity and user-defined methods. To obtain generally applicable results, we use an FP-like language in which programs are constructed from primitive functions using a fixed set of functional forms. The latter abstract common patterns of processing data collections. The resulting algebra of programs generalizes for any data collection the known laws for transforming relational queries. We go beyond FP by allowing data structures containing functions, thus supporting the optimization of programs required in database programming environments.
Correctness-preserving program transformations This paper extends the predicate calculus formalization of the partial correctness properties of programs (Ki, Go) to include the preservation of correctness under program transformations. The general notion of "program transformations which preserve properties" is fundamental to the theory of programming and programming languages. In the context of proofs of program correctness, transformations which preserve correctness can be used to improve less efficient, but easier to prove, programs. The basic argument in the use of correctness-preserving program transformations (hereafter CPTs) is: Assume that G is a program (with attached assertions) which has been proved correct with respect to some input-output relation A_in-A_out. Now suppose that S is some part of G, e.g. an expression, assertion, statement, etc., which is to be replaced by some other such part S' to produce the program G'. The goal is to prove that G' is also correct with respect to A_in-A_out and therefore the replacement preserves overall program correctness. Moreover, if the replacement has only a local effect, e.g. the body of a loop, then the proof of correctness-preservation should be restricted to that part of the program affected by the replacement. Section 2 reviews the current paradigm for proving program correctness. An example in section 3 illustrates CPTs in a sequence of improvements on a correct and simple, but inefficient, initial program. In section 4, the formalization of partial correctness properties of programs is recast as a semantic language definition using Knuth's semantic method (Kn1). This formalization is then used in section 5 to describe the mechanics of performing CPTs. In section 6, several questions about the formalization of sections 4 and 5 are discussed and a generalization is proposed. Finally, section 7 returns to a concrete example and suggests that the most effective use of CPTs is by identification of schematic forms. Related work is mentioned in section 8.
Convergence of program transformers in the metric space of trees In recent years increasing consensus has emerged that program transformers, e.g. partial evaluation and unfold/fold transformations, should terminate; a compiler should stop even if it performs fancy optimizations! A number of techniques to ensure termination of program transformers have been invented, but their correctness proofs are sometimes long and involved. We present a framework for proving termination of program transformers, cast in the metric space of trees. We first introduce the notion of an abstract program transformer; a number of well-known program transformers can be viewed as instances of this notion. We then formalize what it means that an abstract program transformer terminates and give a general sufficient condition for an abstract program transformer to terminate. We also consider some specific techniques for satisfying the condition. As applications we show that termination of some well-known program transformers either follows directly from the specific techniques or is easy to establish using the general condition. Our framework facilitates simple termination proofs for program transformers. Also, since our framework is independent of the language being transformed, a single correctness proof can be given in our framework for program transformers that use essentially the same technique in the context of different languages. Moreover, it is easy to extend termination proofs for program transformers to accommodate changes to these transformers. Finally, the framework may prove useful for designing new termination techniques for program transformers.
A Folding Rule for Eliminating Existential Variables from Constraint Logic Programs The existential variables of a clause in a constraint logic program are the variables which occur in the body of the clause and not in its head. The elimination of these variables is a transformation technique which is often used for improving program efficiency and verifying program properties. We consider a folding transformation rule which ensures the elimination of existential variables and we propose an algorithm for applying this rule in the case where the constraints are linear inequations over rational or real numbers. The algorithm combines techniques for matching terms modulo equational theories and techniques for solving systems of linear inequations. Through some examples we show that an implementation of our folding algorithm has a good performance in practice.
On the transformation of logic programs with instantiation based computation rules We give a formal description of a new transformation technique for declarative Horn clause programs. Our method allows the compilation of control information. We introduce the notion of an instantiation-based computation rule and present an algorithm to transform a given pair (P, q), consisting of a Horn clause program P and a query pattern q for P, into a new program P_neu, which, for the given query pattern, will behave in the same way, under the standard computation rule of Prolog, as the original program P behaves under a specified instantiation-based computation rule. We prove the soundness and completeness of the method and discuss its relation to existing techniques for the specification of control rules and the transformation of logic programs.
Improvement of Recursive Programs from a Logic Programming Point of View
Relating Accumulative and Non-accumulative Functional Programs We study the problem of transforming functional programs which intensively use append functions (like inefficient list reversal) into programs which use accumulating parameters instead (like efficient list reversal). We give an (automatic) transformation algorithm for our problem and identify a class of functional programs, namely restricted 2-modular tree transducers, to which it can be applied. Moreover, since we get macro tree transducers as transformation result and since we also give the inverse transformation algorithm, we have a new characterization for the class of functions induced by macro tree transducers.
A unified approach for developing efficient algorithmic programs A unified approach called partition-and-recur for developing efficient and correct algorithmic programs is presented. An algorithm (represented by recurrence and initiation) is separated from program, and special attention is paid to algorithm manipulation rather than program calculus. An algorithm is exactly a set of mathematical formulae. It is easier for formal derivation and proof. After getting efficient and correct algorithm, a trivial transformation is used to get a final program. The approach covers several known algorithm design techniques, e.g. dynamic programming, greedy, divide-and-conquer and enumeration, etc. The techniques of partition and recurrence are not new. Partition is a general approach for dealing with complicated objects and is typically used in divide-and-conquer approach. Recurrence is used in algorithm analysis, in developing loop invariants and dynamic programming approach. The main contribution is combining two techniques used in typical algorithm development into a unified and systematic approach to develop general efficient algorithmic programs and presenting a new representation of algorithm that is easier for understanding and demonstrating the correctness and ingenuity of algorithmic programs.
The Nonsubsampled Contourlet Transform: Theory, Design, and Applications In this paper, we develop the nonsubsampled contourlet transform (NSCT) and study its applications. The construction proposed in this paper is based on a nonsubsampled pyramid structure and nonsubsampled directional filter banks. The result is a flexible multiscale, multidirection, and shift-invariant image decomposition that can be efficiently implemented via the a trous algorithm. At the core of the proposed scheme is the nonseparable two-channel nonsubsampled filter bank (NSFB). We exploit the less stringent design condition of the NSFB to design filters that lead to a NSCT with better frequency selectivity and regularity when compared to the contourlet transform. We propose a design framework based on the mapping approach, that allows for a fast implementation based on a lifting or ladder structure, and only uses one-dimensional filtering in some cases. In addition, our design ensures that the corresponding frame elements are regular, symmetric, and the frame is close to a tight one. We assess the performance of the NSCT in image denoising and enhancement applications. In both applications the NSCT compares favorably to other existing methods in the literature
Tutorial on VLSI Partitioning The tutorial introduces partitioning with applications to VLSI circuit designs. The problem formulations include two-way, multiway, and multi-level partitioning, partitioning with replication, and performance-driven partitioning. We depict the models of multiple pin nets for the partitioning processes. To derive the optimum solutions, we describe the branch and bound method and the dynamic programming method for a special case of circuits. We also explain several heuristics including the group migration algorithms, network flow approaches, programming methods, Lagrange multiplier methods, and clustering methods. We conclude the tutorial with research directions.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of embedded cover image to ensure security of secret data. The multilayer embedding of pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting and the peak point keeps unchanged. In the extraction end, we no longer need the key information about the peak point, we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of embedded cover image become almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
1.200071
0.200071
0.133407
0.133402
0.016057
0.000123
0.000082
0.000073
0.000061
0.000041
0
0
0
0
Multibit versus multilevel embedding in high capacity difference expansion reversible watermarking The classical difference expansion (DE) reversible watermarking expands a difference by a factor of two in order to embed one bit of data. The upper limit of such DE schemes in a single embedding level is usually bounded by 1 bpp. In order to obtain more than 1 bpp, several embedding stages are chained. Multibit DE schemes expand the difference n times in order to embed up to log2 n bpp. This paper proposes a multibit extension of the difference expansion reversible watermarking and compares the results with the multilevel embedding case. The multibit reversible watermarking outperforms the multilevel one both in capacity and quality. Experimental results on standard test images are provided.
Reversible Information Hiding Considering Hierarchical Access Control This paper proposes a reversible information hiding method that supports hierarchical control of access to embedded data. The proposed method first hides the most important information in an image in a reversible manner, with the information enciphered. Into a stego image in which information has already been hidden, this method embeds different information that is enciphered with a different key. In the proposed method, only one key corresponding to the most important data is managed, and the other keys are generated from the managed key through a one-way hash function. The privileged user receives the managed key and extracts and deciphers all hidden data, whereas the unprivileged user receives a key only for the least important data. The proposed method also has tamper detection ability. Simulation results show the effectiveness of the proposed method.
The Distortion Control Method of Reversible Contrast Mapping Hiding Scheme The reversible information hiding technique is to embed secret information in multi-media. Through extraction algorithm, the concealed information can be extracted and the original host media can be totally recovered. Coltuc et al. developed a simple and efficient reversible hiding scheme based on reversible contrast mapping (RCM), which does not need location maps to recover the host image. However, their scheme can not fully control the position where information is embedded, so the quality of the stego image is poor. Therefore, this paper proposed an enhanced hiding scheme based on Kamstra et al.’s distortion control technique to improve Coltuc et al.’s scheme. The experimental results show that the image quality of the proposed scheme is better than that of Coltuc et al.’s scheme for limited amount of embedded data.
Reversibility of image with balanced fidelity and capacity upon pixels differencing expansion Reversible data hiding has attracted considerable attention in recent years. Being reversible, the decoder can extract hidden data and recover the original image completely, and the difference expansion (DE) scheme can lead to a lossless pixel after secret data extraction. Furthermore, despite achieving pixel reversibility based on the concept of expanded differencing, the difference expansion scheme can cause enormous image distortion because of the size of the difference. The proposed scheme in this paper describes a novel prediction for achieving predictive error based reversible data hiding by considering the relation between a pixel and its neighboring pixel and using the predictor to identify the projected difference in pixel value. Experimental results show that the proposed scheme is capable of providing great embedding capacity without causing noticeable distortion by selecting the minimal predictor based on pixel expansion. In multilevel cases, this proposed method performs better than other existing methods. Moreover, the proposed scheme is able to pass the Chi-square test, a test used to find whether an image utilizes LSB for data hiding.
Histogram shape-based robust image watermarking method Developing a watermarking method that is robust to cropping attacks and random bending attacks (RBAs) is a challenging task in image watermarking. In this paper, we propose a histogram-based image watermarking method to tackle both cropping attacks and RBAs. In this method, the gray levels are first divided into groups. Second, the groups for watermark embedding are selected according to the number of pixels in them, which makes this method fully based on the histogram shape of the original image and adaptive to different images. Then the watermark bits are embedded by modifying the histogram of the selected groups. Since the histogram shape is insensitive to cropping and independent of pixel positions, the proposed method is robust to cropping attacks and RBAs. Besides, it also has high robustness against other common attacks. Experimental results demonstrate the effectiveness of the proposed method.
Reversible watermarking method based on adaptive thresholding and companding technique Embedding and extraction of secret information as well as the restoration of the original un-watermarked image are highly desirable in sensitive applications such as military, medical, and law enforcement imaging. This paper presents a novel reversible watermarking approach for digital images using integer-to-integer wavelet transform, companding technique, and adaptive thresholding, enabling it to embed and recover the secret information as well as restore the image to its pristine state. The proposed method takes advantage of block-based watermarking and iterative optimization of threshold for companding which avoids histogram pre-and postprocessing. Consequently, it reduces the associated overhead usually required in most of the reversible watermarking techniques. As a result, it generates less distortion between the watermarked and the original image. Experimental results on regular as well as medical images show that the proposed method outperforms the existing reversible watermarking approaches reported in the literature.
Authentication of images for 3D cameras: Reversibly embedding information using intelligent approaches In this work, a reversible watermarking approach for authentication of 3D cameras based on computational intelligence is presented. Two intelligent techniques based on differential evolution (DE) and hybrid DE are employed to optimize the tradeoff between watermark imperceptibility and capacity. The proposed approach is suitable for images of 3D cameras. These cameras generally work on the concept of time-of-flight and not only produce the 2D image but also generate the corresponding depth map. In this approach, the depth map is considered as secret information and is hidden in the integer wavelet transform of the corresponding 2D image. The proposed technique is prospective for authenticating 3D camera images and allows the secure transmission of its depth map. It has the advantage of the lossless recovery of original 2D image as and when needed. The watermarking of the 2D images is based on integer wavelet transform and threshold optimization. The threshold map thus obtained using the intelligent optimization approaches is not only used for watermark embedding, but is also utilized for authentication purpose by correlating it with the corresponding 2D transformed image. Experiments conducted on images and depth maps obtained using 3D camera validate the proposed concept.
Machine learning based adaptive watermark decoding in view of anticipated attack We present an innovative scheme of blindly extracting message bits when a watermarked image is distorted. In this scheme, we have exploited the capabilities of machine learning (ML) approaches for nonlinearly classifying the embedded bits. The proposed technique adaptively modifies the decoding strategy in view of the anticipated attack. The extraction of bits is considered as a binary classification problem. Conventionally, a hard decoder is used with the assumption that the underlying distribution of the discrete cosine transform coefficients do not change appreciably. However, in case of attacks related to real world applications of watermarking, such as JPEG compression in case of shared medical image warehouses, these coefficients are heavily altered. The sufficient statistics corresponding to the maximum likelihood based decoding process, which are considered as features in the proposed scheme, overlap at the receiving end, and a simple hard decoder fails to classify them properly. In contrast, our proposed ML decoding model has attained highest accuracy on the test data. Experimental results show that through its training phase, our proposed decoding scheme is able to cope with the alterations in features introduced by a new attack. Consequently, it achieves promising improvement in terms of bit correct ratio in comparison to the existing decoding scheme.
Reversible steganography using extended image interpolation technique. • By analysis of statistical properties of adjacent pixels as well as image interpolation techniques, a novel interpolation method is put forward. • Under the conditions of ensuring the image quality, the proposed image interpolation method maximizes the difference values between neighboring pixels and improves the capacity. • The proposed scheme still has the advantage of lower computing complexity and better results between interpolating pixels. • The experimental results show that the performance of the proposed method is superior to several state-of-the-art methods.
Statistical analysis of several reversible data hiding algorithms In this paper, the fact that current reversible data hiding algorithms are detectable is confirmed by an experimental analysis method. By analyzing the distributions of horizontal pixel difference of natural images and watermarked images using several models, we find that the horizontal difference histogram of a natural image is significantly altered after a secret message has been embedded. Furthermore, the difference between the horizontal and the vertical difference histogram of a natural image is much less than that of the watermarked image. Then the presence of a hidden message can be detected according to the distance between the horizontal and the vertical difference histogram. Experimental results demonstrate that our approach is more effective and efficient than the already published schemes.
Reversible Steganography for BTC-compressed Images Reversible steganography has become a popular hiding problem in recent years. A reversible steganographic method can reconstruct an original image without loss from the stego-image after extracting the embedded data. Unlike traditional reversible methods in which data is hidden in uncompressed images, we propose a reversible scheme for BTC (block truncation coding)-compressed images. The secret data embedded in the compressed image are more difficult to detect than in the uncompressed image. To achieve reversibility, the properties of side matching and BTC-compressed code are applied. The experimental results show that the proposed method is feasible for BTC-compressed images and can embed one more bit in each BTC-encoded block.
Reversible data hiding in encrypted images using cross division and additive homomorphism There are three problems in the existing reversible data hiding (RDH) algorithms in encrypted images: one is that some algorithms are not processed in the encrypted domain; another is that the reversibility, which implies exact data extraction and perfect image recovery, cannot be ensured in some cases; the last is that data expansion occurs when a probabilistic public-key cryptosystem is used for image encryption in some homomorphic schemes. In this paper, a complete RDH scheme in encrypted images is proposed. By using the idea of cross division and additive homomorphism, we solve all of these problems. Experimental results verify the superiority of the proposed method, which has good potential for practical applications of multimedia privacy protection. Highlights: The host image is divided into crosses. Homomorphic encryption is used with no data expansion. Data hiding is processed directly in the encrypted domain. Real reversibility is realized. The difference histogram is shifted several rounds.
Tragic loss or good riddance? The impending demise of traditional scholarly journals …s of the AMS might be appropriate, so that "proofs" that the Earth is flat, or that special relativity is a Zionist conspiracy, would be kept out. Discussions of whether Bacon wrote Shakespeare's plays might be accepted (since there are interesting statistical approaches to this question). There would also be digital signatures and digital timestamping, to provide authentication. The precise rules for how the system would function would have to be decided by experimentation. For example, one...
Unified lossless and near-lossless color image coding based on adaptive quantization This paper proposes a unified coding algorithm for lossless and near-lossless compression of still color images. The algorithm can control the Peak Signal-to-Noise Ratio (PSNR) of the reconstructed image. Furthermore, the distortion on the RGB plane is suppressed to within the level of ±p, where p is a certain small non-negative integer. In order to control the PSNR of the reconstructed image, an adaptive quantizer is designed on the basis of human visual criteria. Experimental results confirm the effectiveness of the proposed algorithm
1.030417
0.034587
0.028712
0.028712
0.028571
0.014356
0.004806
0.002407
0.000484
0.000032
0.000009
0
0
0
Unfold/fold transformations and loop optimization of logic programs Programs typically spend much of their execution time in loops. This makes the generation of efficient code for loops essential for good performance. Loop optimization of logic programming languages is complicated by the fact that such languages lack the iterative constructs of traditional languages, and instead use recursion to express loops. In this paper, we examine the application of unfold/fold transformations to three kinds of loop optimization for logic programming languages: recursion removal, loop fusion and code motion out of loops. We describe simple unfold/fold transformation sequences for these optimizations that can be automated relatively easily. In the process, we show that the properties of unification and logical variables can sometimes be used to generalize, from traditional languages, the conditions under which these optimizations may be carried out. Our experience suggests that such source-level transformations may be used as an effective tool for the optimization of logic programs.
Spes: A System for Logic Program Transformation Spes is an interactive system for transforming logic programs. It is intended as a formal tool for transforming Horn clauses specifications into correct and more efficient programs. The main transformations used in the Spes system are Unfolding and Folding.
Communicating agents for applicative concurrent programming A good program methodology should allow easy proofs of program correctness and should also incorporate methods for improving program efficiency. We try to achieve both aims by proposing, in the framework of the applicative programming style, a language by which one can specify a system of computing agents, which communicate by sending and receiving messages.
Explanation based program transformation Fold-unfold is a well known program transformation technique. Its major drawback is that folding requires an Eureka step to invent new procedures. In the context of logic programming, we present a technique where the folding is driven by an example. The transformation is aimed at programs suffering from inefficiencies due to the repetition of identical subcomputations. The execution of an example is analysed to locate repeated subcomputations. Then the structure of the example is used to control a fold-unfold-transformation of the program. The transformation can be automated. The method can be regarded as an extension of explanation based learning.
A technique for doing lazy evaluations in logic We develop a natural technique for defining functions in logic, i.e. PROLOG, which directly yields lazy evaluation. Its use does not require any change to the PROLOG interpreter. Function definitions run as PROLOG programs and so run very efficiently. It is possible to combine lazy evaluation with nondeterminism and simulate coroutining. It is also possible to handle infinite data structures and implement networks of communicating processes. We analyze this technique and develop a precise definition of lazy evaluation for lists. For further efficiency we show how to preprocess programs and ensure, using logical variables, that values of expressions once generated are remembered for future access. Finally, we show how to translate programs in a simple functional language into programs using this technique.
Basic transformation operations which preserve computed answer substitutions of logic programs Some transformation operations for logic programs, basic for partial deduction, program specialization, and transformation, and for program synthesis from specifications, are studied with respect to the minimal S-model semantics defined in [31, 15–17]. Such a semantics is, in our opinion, more interesting than the usual least Herbrand model one since it captures the program's behavior with respect to computed answers. The S-semantics is also the strongest semantics which is maintained by unrestricted unfolding [31]. For such operations, we single out general applicability conditions, and prove that they guarantee that the minimal S-model semantics of a program is not modified by the transformation. Some sufficient conditions, which are very common in practice and easy to verify, since they are mostly syntactical, are also supplied with simple exemplifications.
Unfold/fold Transformations Preserving Termination Properties The unfold/fold framework constitutes the spine of many program transformation strategies. However, by unrestricted use of folding the target program may terminate less often than the source program. Several authors have investigated the problem of setting up conditions of syntactic nature, i.e. not based on some well-founded ordering of the arguments, which guarantee preservation of termination properties. These conditions are typically formulated in a way which makes it hard to grasp the basic intuition why they work, and in a way which makes it hard to give elegant proofs of correctness. The aim of this paper will be to give a more unified treatment by setting up a model which enables us to reason about termination preservation in a cleaner and more algebraic fashion. The model resembles a logic language and is parametrized with respect to evaluation order, but it should not be too difficult to transfer the ideas to other languages.
Unfolding-definition-folding, in this order, for avoiding unnecessary variables in logic programs We present an approach to the automatic improvement of performances of logic programs by using the unfold/fold transformation technique. A cause of program inefficiency is often the presence of variables which are unnecessary, in the sense that they force computations of redundant values or multiple visits of data structures. We propose a strategy which automatically transforms initial program versions into new and more efficient versions by avoiding unnecessary variables. Our strategy is an extension of the one which was introduced in an earlier paper by Proietti-Pettorossi (1990). It is based on the syntactical characterization of the unnecessary variables and it uses a composite transformation rule made out of unfolding-definition-folding steps, in this order. The strategy consists in the repeated application of that composite rule to each clause with unnecessary variables. It avoids the search for eureka definitions which is often required by other techniques proposed in the literature. We define a class of programs for which our transformation strategy is successful and we propose a variant of that strategy which uses the so-called generalization rule. This variant is always terminating, but, in general, not all unnecessary variables are eliminated. We finally present an enhancement of the proposed transformation techniques which exploits the functionality of some predicates.
A Naïve Time Analysis and its Theory of Cost Equivalence Techniques for reasoning about extensional properties of functional programs are well understood but methods for analysing the underlying intensional or operational properties have been much neglected. This paper begins with the development of a simple but useful calculus for time analysis of non-strict functional programs with lazy lists. One limitation of this basic calculus is that the ordinary equational reasoning on functional programs is not valid. In order to buy back some of these equational properties we develop a non-standard operational equivalence relation called cost equivalence, by considering the number of computation steps as an 'observable' component of the evaluation process. We define this relation by analogy with Park's definition of bisimulation in ccs. This formulation allows us to show that cost equivalence is a contextual congruence (and thus is substitutive with respect to the basic calculus) and provides useful proof techniques for establishing cost-equivalence laws. It is shown that basic evaluation time can be derived by demonstrating a certain form of cost equivalence, and we give an axiomatization of cost equivalence which is complete with respect to this application. This shows that cost equivalence subsumes the basic calculus. Finally we show how a new operational interpretation of evaluation demands can be used to provide a smooth interface between this time analysis and more compositional approaches, retaining the advantages of both.
Planware - Domain-Specific Synthesis of High-Performance Schedulers Planware is a domain-specific generator of high-performance scheduling software, currently being developed at Kestrel Institute. Architecturally, Planware is an extension of the Specware system with domain-independent and domain-dependent parts. The domain-independent part includes a general algorithm design facility (including mechanisms to synthesize global-search and constraint propagation algorithms), as well as support for theorem-proving and witness finding. The domain-dependent part includes scheduling domain knowledge and architecture representations, and other domain-specific refinement knowledge that relates the scheduling domain to general algorithm design and data type refinement. Using Planware, the user interactively specifies a problem and then the system automatically generates a formal specification and refines it.
Compression of individual sequences via variable-rate coding Compressibility of individual sequences by the class of generalized finite-state information-lossless encoders is investigated. These encoders can operate in a variable-rate mode as well as a fixed-rate one, and they allow for any finite-state scheme of variable-length-to-variable-length coding. For every individual infinite sequence x a quantity ρ(x) is defined, called the compressibility of x, which is shown to be the asymptotically attainable lower bound on the compression ratio that can be achieved for x by any finite-state encoder. This is demonstrated by means of a constructive coding theorem and its converse that, apart from their asymptotic significance, also provide useful performance criteria for finite and practical data-compression tasks. The proposed concept of compressibility is also shown to play a role analogous to that of entropy in classical information theory where one deals with probabilistic ensembles of sequences rather than with individual sequences. While the definition of ρ(x) allows a different machine for each different sequence to be compressed, the constructive coding theorem leads to a universal algorithm that is asymptotically optimal for all sequences.
Efficient Steganographic Embedding by Exploiting Modification Direction A novel method of steganographic embedding in digital images is described, in which each secret digit in a (2n+1)-ary notational system is carried by n cover pixels and, at most, only one pixel is increased or decreased by 1. In other words, the (2n+1) different ways of modification to the cover pixels correspond to (2n+1) possible values of a secret digit. Because the directions of modification are fully exploited, the proposed method provides high embedding efficiency that is better than previous techniques.
An SVM-based robust digital image watermarking against desynchronization attacks In the image watermarking area, the robustness against desynchronization attacks, such as rotation, translation, scaling, row or column removal, cropping, and local random bend, is still one of the most challenging issues. This paper presents a support vector machine (SVM)-based digital image-watermarking scheme, which is robust against a variety of common image-processing attacks and desynchronization attacks. To protect the copyright of a digital image, a signature (a watermark), which is represented by a binary image, is embedded in the digital image. The watermark embedding and watermark extraction issues can be treated as a classification problem involving binary classes. Firstly, a set of training patterns is constructed by employing two image features, which are the sum and variance of some adjacent pixels. This set of training patterns is gathered from a pair of images, an original image and its corresponding watermarked image in the spatial domain. Secondly, a quasi-optimal hyperplane (a binary classifier) can be realized by an SVM, and the SVM can be trained by utilizing the set of training patterns. Finally, the trained SVM is applied to classify a set of testing patterns. Following the results produced by the classifier (the trained SVM), the digital watermark can be embedded and retrieved. Experimental results show that the proposed scheme is invisible and robust against common signal processing such as median filtering, sharpening, noise adding, and JPEG compression, etc., and robust against desynchronization attacks such as rotation, translation, scaling, row or column removal, cropping, and local random bend, etc.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of embedded cover image to ensure security of secret data. The multilayer embedding of pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting and the peak point keeps unchanged. In the extraction end, we no longer need the key information about the peak point, we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of embedded cover image become almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
1.014692
0.021406
0.017923
0.011351
0.010935
0.006922
0.004851
0.001305
0.000144
0.000033
0
0
0
0
Nonstationary Blind Image Restoration using Variational Methods The Variational Bayesian approach has recently been proposed to tackle the blind image restoration (BIR) problem. We consider extending the procedures to include realistic boundary modelling and non-stationary image restoration. Correctly modelling the boundaries is essential for achieving accurate blind restorations of photographic images, whilst nonstationary models allow for better adaptation to local image features, and therefore improvements in quality. Index Terms— Blind Deconvolution, Boundary modelling, Nonstationary Image models, Variational Bayesian methods
Fast Low Bit-Rate 3D Searchless Fractal Video Encoding Video encoding techniques play an important role in data reduction. Fractal compression has received considerable attention in the past decades. While early methods presented prohibitively large encoding times, recent searchless fractal encoders reduced this problem. A fast 3D purely fractal video encoder based on a flexible adaptive spatial subdivision data structure is proposed in this work. The method completely avoids any kind of search for a matching domain block and is capable of performing fast compression and decompression with high visual fidelity. Experimental results show that the developed approach outperforms the state-of-the-art x264 video encoder at very low bit rates in high motion video sequences in both structural dissimilarity measure and encoding time.
Digital Watermarking, 8th International Workshop, IWDW 2009, Guildford, UK, August 24-26, 2009. Proceedings
A Technique for High-Performance Data Compression
Applications of Universal Context Modeling to Lossless Compression of Gray-Scale Images Inspired by theoretical results on universal modeling, a general framework for sequential modeling of gray-scale images is proposed and applied to lossless compression. The model is based on stochastic complexity considerations and is implemented with a tree structure. It is efficiently estimated by a modification of the universal algorithm Context. Several variants of the algorithm are described. The sequential, lossless compression schemes obtained when the context modeler is used with an arithmetic coder, are tested with a representative set of gray-scale images. The compression ratios are compared with those obtained with state-of-the-art algorithms available in the literature, with the results of the comparison consistently favoring the proposed approach.
A Sequential Quantization Strategy for Data Embedding and Integrity Verification Quantization-based embedding has been used for integrity verification in semi-fragile watermarking. However, some of the illegal modifications cannot be detected in the normal quantization-based methods, especially when the host values (i.e., the values chosen in a host signal for data embedding) are independent from each other. In this paper, a sequential quantization strategy (SQS) is proposed to make the modulation of a host value dependent on a certain number of the previous ones. Therefore, a balance between security improvement and tamper localization can be achieved for integrity verification. Furthermore, the proposed SQS is incorporated with a reversible data hiding mechanism. A new watermarking algorithm is then generated for mesh authentication. The experimental results show that the chance to detect illegal modifications is increased by adopting the SQS while the property of reversibility is achieved.
Distortion-Free Data Embedding for Images One common drawback of virtually all current data embedding methods is the fact that the original image is inevitably distorted by some small amount of noise due to data embedding itself. This distortion typically cannot be removed completely due to quantization, bit-replacement, or truncation at the grayscales 0 and 255. Although the distortion is often quite small, it may not be acceptable for medical imagery (for legal reasons) or for military images inspected under unusual viewing conditions (after filtering or extreme zoom). In this paper, we introduce a general approach for high-capacity data embedding that is distortion-free (or lossless) in the sense that after the embedded information is extracted from the stego-image, we can revert to the exact copy of the original image before the embedding occurred. The new method can be used as a powerful tool to achieve a variety of non-trivial tasks, including distortion-free robust watermarking, distortion-free authentication using fragile watermarks, and steganalysis. The proposed concepts are also extended to lossy image formats, such as the JPG.
Probability Models for Clutter in Natural Images We propose a framework for modeling clutter in natural images. Assuming that: 1) images are made up of 2D (projected) views of 3D (real) objects and 2) certain simplifying conditions hold, we derive an analytical density for natural images. This expression is shown to match well with the observed densities (histograms). In addition to deriving multidimensional densities, several extensions are also proposed.
An image fragile watermark scheme based on chaotic image pattern and pixel-pairs Fragile watermarking techniques for digital content have been studied in the past few years. Fragile watermarks are used to determine if a piece of watermarked digital content has been tampered, and distinguish tampered areas from non-tampered areas without referring to the original digital content. In this paper, a general framework for fragile watermark is proposed, and then a novel fragile watermarking scheme for image authentication is presented. The embedding process of fragile watermark starts from computing the difference image between the host image and its chaotic pattern, followed by mapping the difference image into a binary image. The binary image is then inserted into the least significant bit (LSB) bitplane of the host image. In addition, chaotic map is used to generate the chaotic pattern image, which can be used as secret key to improve the security of watermark algorithm. Due to employing permutation transform and chaotic image pattern, the corresponding position relation is broken between pixels in the watermarked image and the watermark. Simulation results and performance analysis show that the presented method is fast, secure and capable of detecting and localizing modification.
Improving various reversible data hiding schemes via optimal codes for binary covers. In reversible data hiding (RDH), the original cover can be losslessly restored after the embedded information is extracted. Kalker and Willems established a rate-distortion model for RDH, in which they proved out the rate-distortion bound and proposed a recursive code construction. In our previous paper, we improved the recursive construction to approach the rate-distortion bound. In this paper, we generalize the method in our previous paper using a decompression algorithm as the coding scheme for embedding data and prove that the generalized codes can reach the rate-distortion bound as long as the compression algorithm reaches entropy. By the proposed binary codes, we improve three RDH schemes that use binary feature sequence as covers, i.e., an RS scheme for spatial images, one scheme for JPEG images, and a pattern substitution scheme for binary images. The experimental results show that the novel codes can significantly reduce the embedding distortion. Furthermore, by modifying the histogram shift (HS) manner, we also apply this coding method to one scheme that uses HS, showing that the proposed codes can be also exploited to improve integer-operation-based schemes.
An edge-guided image interpolation algorithm via directional filtering and data fusion. Preserving edge structures is a challenge to image interpolation algorithms that reconstruct a high-resolution image from a low-resolution counterpart. We propose a new edge-guided nonlinear interpolation technique through directional filtering and data fusion. For a pixel to be interpolated, two observation sets are defined in two orthogonal directions, and each set produces an estimate of the pixel value. These directional estimates, modeled as different noisy measurements of the missing pixel are fused by the linear minimum mean square-error estimation (LMMSE) technique into a more robust estimate, using the statistics of the two observation sets. We also present a simplified version of the LMMSE-based interpolation algorithm to reduce computational cost without sacrificing much the interpolation performance. Experiments show that the new interpolation techniques can preserve edge sharpness and reduce ringing artifacts.
Counterfeiting attacks on block-wise dependent fragile watermarking schemes In this paper, we present three counterfeiting attacks on the block-wise dependent fragile watermarking schemes. We consider vulnerabilities such as the exploitation of a weak correlation among block-wise dependent watermarks to modify valid watermarked images, where they could still be verified as authentic, though they are actually not. Experimental results successfully demonstrate the practicability and consequences of the proposed attacks for some relevant schemes. The development of the proposed attack models can be used as a means to systematically examine the security levels of similar watermarking schemes.
Weights and topology: a study of the effects of graph construction on 3D image segmentation. Graph-based algorithms have become increasingly popular for medical image segmentation. The fundamental process for each of these algorithms is to use the image content to generate a set of weights for the graph and then set conditions for an optimal partition of the graph with respect to these weights. To date, the heuristics used for generating the weighted graphs from image intensities have largely been ignored, while the primary focus of attention has been on the details of providing the partitioning conditions. In this paper we empirically study the effects of graph connectivity and weighting function on the quality of the segmentation results. To control for algorithm-specific effects, we employ both the Graph Cuts and Random Walker algorithms in our experiments.
Completeness of Unfolding for Rewriting Logic Theories Many transformation systems for program optimization, program synthesis, and program specialization are based on fold/unfold transformations. In this paper, we investigate the semantic properties of a narrowing-based unfolding transformation that is useful to transform rewriting logic theories. We also present a transformation methodology that is able to determine whether an unfolding transformation step would cause incompleteness and avoid this problem by completing the transformed rewrite theory with suitable extra rules. More precisely, our methodology identifies the sources of incompleteness and derives a set of rules that are added to the transformed rewrite theory in order to preserve the semantics of the original theory.
1.045824
0.057143
0.031805
0.007775
0.004685
0.002722
0.001051
0.000179
0.000062
0.000029
0.000012
0
0
0
A Generalized Image Interpolation-based Reversible Data Hiding Scheme with High Embedding Capacity and Image Quality. Jung and Yoo proposed the first image interpolation-based reversible data hiding algorithm. Although their algorithm achieved superior interpolation results, the embedding capacity was insufficient. Lee and Huang proposed an improved algorithm to enhance the embedding capacity and the interpolation results. However, these algorithms are limited in magnifying the original image to an arbitrary resolution, and pixels in the boundary region of the magnified image are poorly handled. Furthermore, the capacity and the image quality can be improved further. This study modifies the pixel mapping scheme and adopts a bilinear interpolation to solve boundary artifacts. The modified reference pixel determination and an optimal pixel adjustment process can effectively enhance the embedding capacity and the image quality. The experimental results show our proposed algorithm achieves a higher embedding capacity under acceptable visual distortions, and can be applied to a magnified image at any resolution. Our proposed technique is feasible in reversible data hiding.
Accelerating compute intensive medical imaging segmentation algorithms using hybrid CPU-GPU implementations. Medical image processing is one of the most famous image processing fields in this era. This fame comes because of the big revolution in information technology that is used to diagnose many illnesses and saves patients' lives. There are many image processing techniques used in this field, such as image reconstruction, image segmentation and many more. Image segmentation is a mandatory step in many image processing based diagnosis procedures. Many segmentation algorithms use a clustering approach. In this paper, we focus on Fuzzy C-Means based segmentation algorithms because of the segmentation accuracy they provide. In many cases, these algorithms need long execution times. In this paper, we reduce the execution time of these algorithms using Graphics Processing Unit (GPU) capabilities. We achieve performance enhancement by up to 8.9x without compromising the segmentation accuracy.
Secure and imperceptible digital image steganographic algorithm based on diamond encoding in DWT domain. This paper presents a new, efficient embedding algorithm in the wavelet domain of digital images based on the diamond encoding (DE) scheme. Current discrete wavelet transform (DWT) steganography adds unacceptable distortion to the images and is considered ineffective in terms of security. Applying the DE scheme to current DWT steganographic methods solves these problems, reduces the distortion added to the images, and thus improves the embedding efficiency. The proposed algorithm first converts the secret image into a sequence of base-5 digits. After that, the cover image is transformed into the DWT domain and segmented into 2 × 1 coefficient pairs. The DE scheme is then used to change at most one coefficient of each coefficient pair to embed the base-5 digits. Experimental results show that the proposed algorithm is more efficient than other methods in terms of embedding payload and image quality. Moreover, the proposed algorithm is attacked by well-known steganalysis software. The results show that the proposed algorithm is secure against the powerful universal steganalyzer “ensemble classifier” and the histogram attack. The results also reveal that the proposed algorithm is robust against different image processing attacks such as compression, added noise, and cropping attacks.
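For reference, here is a sketch of k = 1 diamond-encoding-style embedding of one base-5 digit into a value pair. The characteristic function f(a, b) = (a + 2b) mod 5 is one common choice and is an assumption here; the paper applies the idea to DWT coefficient pairs rather than the plain integers shown.

```python
def embed_base5_digit(a, b, digit):
    """Embed one base-5 digit into the pair (a, b): at most one of the two
    values changes, and only by +/-1 (the diamond neighborhood of radius 1)."""
    assert 0 <= digit < 5
    for da, db in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
        if (a + da + 2 * (b + db)) % 5 == digit:
            return a + da, b + db
    raise AssertionError("unreachable: the 5 neighbors cover all base-5 digits")

def extract_base5_digit(a, b):
    """Recover the embedded digit from a (possibly modified) pair."""
    return (a + 2 * b) % 5
```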
Steganographic method based on interpolation and LSB substitution of digital images Steganography is the method of hiding secret data in other data, such as video or an image. A reversible data hiding method can extract the cover image from a stego-image without distortion after extracting the hidden data. In this paper, a semi-reversible data hiding method that utilizes interpolation and the least-significant-bit (LSB) substitution technique is proposed. First, interpolation methods are used to scale the cover image up and down before hiding secret data, for higher capacity and quality. Second, the LSB substitution method is used to embed the secret data. Experimental results show that the proposed method can embed a large amount of secret data while keeping very high visual quality, where the PSNR is guaranteed to be 37.54 dB (k = 3) and 43.94 dB (k = 2).
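A bare-bones sketch of the k-bit LSB substitution step is shown below; the interpolation-based scaling that the method performs first is omitted, and the helper names and bit ordering are illustrative assumptions.

```python
def lsb_embed(pixels, bits, k=2):
    """Replace the k least significant bits of each 8-bit pixel with the next
    k secret bits (MSB-first within each chunk)."""
    out, i = [], 0
    for p in pixels:
        if i + k <= len(bits):
            chunk = int("".join(str(b) for b in bits[i:i + k]), 2)
            p = (p & ~((1 << k) - 1)) | chunk   # clear, then set, the k LSBs
            i += k
        out.append(p)
    return out

def lsb_extract(pixels, n_bits, k=2):
    """Read back the k LSBs of each pixel until n_bits bits are recovered."""
    bits = []
    for p in pixels:
        bits.extend(int(c) for c in format(p & ((1 << k) - 1), f"0{k}b"))
    return bits[:n_bits]
```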
An image interpolation based reversible data hiding scheme using pixel value adjusting feature. In this paper, we propose an image interpolation based reversible data hiding scheme that uses a pixel value adjusting feature. The scheme consists of two phases, namely image interpolation and data hiding. To interpolate the original image, we propose a new image interpolation method based on the existing neighbor mean interpolation (NMI) method. Our interpolation method takes into account all the neighboring pixels, like the NMI method; however, it assigns them different weights according to their proximity. Thus, it provides a better quality interpolated image. In the data hiding phase, secret data is embedded into the interpolated pixels in two passes. In the first pass, it embeds the secret data into the odd-valued pixels, and then, in the second pass, the even-valued pixels are used to embed the secret data. To ensure the reversibility of the proposed scheme, a location map is constructed for every pass. Basically, the proposed scheme only increases or decreases the pixel values during the data hiding phase, which improves its performance in terms of computational complexity. Experimentally, our scheme is superior to existing schemes in terms of data hiding capacity, image quality and computational complexity.
Dual-image-based reversible data hiding method using center folding strategy In recent years, dual-image techniques have been widely used in reversible data hiding. Most data hiding methods focus on finding the best predictive value to reduce the degree of stego image distortion. After finding a predicted value, most methods convert the secret data directly into secret symbols, using the decimal or other numeral systems, and then add these to (or subtract them from) the predictive value (or pixel) to complete the embedding. However, this study found that the secret data are a very important factor affecting image quality. If the value of the secret symbols is too large, it may cause greater differences between the stego-pixels and the original pixels. Hence, this study employs the center folding strategy to reduce the value of the secret symbols. The reduced symbols are then embedded in two stego images through an averaging method, to maintain image quality. In addition, underflow/overflow pixels are replaced with the original pixels to reduce unnecessary image distortion. The experimental results show that the proposed method performs fairly well compared to other dual-image techniques. For the same embedding capacity, its image quality is also better, on average, than that of other methods by at least 2 dB. Furthermore, the proposed method determines the embedding capacity with the k value: the larger the k value, the higher the embedding capacity. The experimental results show that for the same image quality, the embedding capacity of the proposed method is better, on average, than that of other methods by at least 0.5 bits per pixel. The proposed method is applicable not only to experimental images, but also works well for general images in terms of embedding capacity, image quality and execution time. Highlights: this research presents a dual-image-based reversible hiding technique; the center folding strategy is applied to achieve low distortion; the reduced symbols are embedded in two stego-images through an averaging method; the performance of the proposed method is better than that of other methods; and the proposed method can adapt to different image characteristics.
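The center folding step described above can be summarized in a few lines; the sketch below assumes one k-bit symbol per pixel and two stego copies, and it leaves out the underflow/overflow replacement the paper performs. Because the folded symbols are centered on zero, the per-pixel change is roughly halved compared with embedding the raw symbol, which is the source of the quality gain the abstract reports.

```python
import math

def fold_embed(p, d, k):
    """Fold the k-bit symbol d around zero, then split it across two stego
    pixels so that their average stays close to the original pixel p."""
    df = d - 2 ** (k - 1)          # folded symbol in [-2^(k-1), 2^(k-1) - 1]
    p1 = p + math.ceil(df / 2)     # stego pixel for image 1
    p2 = p - math.floor(df / 2)    # stego pixel for image 2
    return p1, p2

def fold_extract(p1, p2, k):
    """Recover the original pixel and the k-bit symbol from the two copies."""
    df = p1 - p2
    return (p1 + p2) // 2, df + 2 ** (k - 1)
```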
A novel high capacity reversible data hiding scheme based on pixel intensity segmentation. Several data hiding techniques have been proposed that can hide secret data in an image. However, after extracting the secret data, some of the image information gets lost. It is therefore important to have a data hiding scheme that can extract the secret data and also restore the original image without any loss of information. Such schemes are called reversible data hiding schemes, and they are commonly used in sensitive military, legal, and medical applications. The existing reversible data hiding schemes either provide good hiding capacity but inferior stego-image quality, or good stego-image quality but poor hiding capacity, because the stego-image quality and the hiding capacity are diametrically related parameters. In this paper, we propose a novel high capacity reversible data hiding scheme which has high data hiding capacity while maintaining good stego-image quality. In this scheme, we first cryptographically encode the secret data using a private key, so that even if an attacker is able to extract the embedded secret data, he cannot get the original secret message unless he has the private key. Our scheme hides the secret data in two phases. In the first phase, it uniformly divides the pixel intensity levels, i.e., 0-255, into odd-sized segments. Then the image is scanned in zigzag order to identify the concealable pixels, which have the same value as the middle element of a segment. Some of the secret data is hidden in these identified pixels. In the second phase, the intensity levels are again divided, in reverse order, i.e., 255 to 0, into odd-sized segments of uniform length as in the first phase. The resultant image is again scanned in zigzag order and concealable pixels are identified. The remaining secret data is embedded into these concealable pixels. To ensure the reversibility of our algorithm, a location map is maintained for each phase. The location maps are compressed using the JBIG1 scheme and are transmitted through a secure channel along with other auxiliary information that contains the private key and the segment size. Experimentally, our scheme achieves very high capacity without deteriorating the image quality. This is because we select the exact middle pixels of the segments to hide the secret data, which ensures that the pixel values do not get altered unevenly while embedding the secret data. It further helps in increasing the hiding capacity. Moreover, the scheme is very simple, as it does not require much computation for embedding the secret data.
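The abstract specifies the segmentation and the test for concealable pixels but not the exact per-pixel embedding rule, so the sketch below covers only the part that is described: building odd-sized intensity segments and locating pixels equal to a segment's middle value. Function names and the handling of a trailing partial segment are assumptions.

```python
def segment_middles(seg_size):
    """Middle intensity values of consecutive odd-sized segments of 0..255;
    a trailing partial segment, if any, is ignored in this sketch."""
    assert seg_size % 2 == 1
    return {lo + seg_size // 2
            for lo in range(0, 256 - seg_size + 1, seg_size)}

def concealable_pixels(image_rows, seg_size):
    """Yield (row, col) positions whose value equals the middle element of
    its intensity segment; the paper scans these in zigzag order and embeds
    part of the secret data into them."""
    middles = segment_middles(seg_size)
    for r, row in enumerate(image_rows):
        for c, v in enumerate(row):
            if v in middles:
                yield r, c
```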
Reversible Data Embedment for Encrypted Cartoon Images Using Unbalanced Bit Flipping.
Wavelet-based reversible watermarking for authentication In the digital information age, digital content (audio, image, and video) can be easily copied, manipulated, and distributed. Copyright protection and content authentication of digital content has become an urgent problem to content owners and distributors. Digital watermarking has provided a valuable solution to this problem. Based on its application scenario, most digital watermarking methods can be divided into two categories: robust watermarking and fragile watermarking. As a special subset of fragile watermark, reversible watermark (which is also called lossless watermark, invertible watermark, erasable watermark) enables the recovery of the original, unwatermarked content after the watermarked content has been detected to be authentic. Such reversibility to get back unwatermarked content is highly desired in sensitive imagery, such as military data and medical data. In this paper we present a reversible watermarking method based on an integer wavelet transform. We look into the binary representation of each wavelet coefficient and embed an extra bit to "expandable" wavelet coefficient. The location map of all "expanded" coefficients will be coded by JBIG2 compression and these coefficient values will be losslessly compressed by arithmetic coding. Besides these two compressed bit streams, an SHA-256 hash of the original image will also be embedded for authentication purpose.
Reversible Audio Data Hiding Using Linear Prediction and Error Expansion Reversible data hiding is a technique where hidden data is embedded in host data, such that the consistency of the host is perfectly preserved and the host data is restored during the extraction process of the hidden data. Herein, improvements to the conventional technique for reversible data hiding of audio waveforms are proposed. As no location map is required to prevent amplitude overflow, the proposed method achieves a storage capacity of nearly 1 bit per sample of payload data. The proposed method is superior in terms of having a smaller loss of payload data and little objective quality degradation on average. Concealment of payload within the host data is also better than when using the conventional method.
Side match and overlap match vector quantizers for images. A class of vector quantizers with memory that are known as finite state vector quantizers (FSVQs) in the image coding framework is investigated. Two FSVQ designs, namely side match vector quantizers (SMVQs) and overlap match vector quantizers (OMVQs), are introduced. These designs take advantage of the 2-D spatial contiguity of pixel vectors as well as the high spatial correlation of pixels in typical gray-level images. SMVQ and OMVQ try to minimize the granular noise that causes visible pixel block boundaries in ordinary VQ. For 512 by 512 gray-level images, SMVQ and OMVQ can achieve communication quality reproduction at an average of 1/2 b/pixel per image frame, and acceptable quality reproduction. Because block boundaries are less visible, the perceived improvement in quality over ordinary VQ is even greater. Owing to the structure of SMVQ and OMVQ, simple variable length noiseless codes can achieve as much as 60% bit rate reduction over fixed-length noiseless codes.
Weaknesses in the Key Scheduling Algorithm of RC4 In this paper we present several weaknesses in the key scheduling algorithm of RC4, and describe their cryptanalytic significance. We identify a large number of weak keys, in which knowledge of a small number of key bits suffices to determine many state and output bits with non-negligible probability. We use these weak keys to construct new distinguishers for RC4, and to mount related key attacks with practical complexities. Finally, we show that RC4 is completely insecure in a common mode of operation which is used in the widely deployed Wired Equivalent Privacy protocol (WEP, which is part of the 802.11 standard), in which a fixed secret key is concatenated with known IV modifiers in order to encrypt different messages. Our new passive ciphertext-only attack on this mode can recover an arbitrarily long key in a negligible amount of time which grows only linearly with its size, both for 24 and 128 bit IV modifiers.
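The weaknesses discussed above live in RC4's key scheduling algorithm (KSA), reproduced below as a short reference sketch; in the WEP setting the key passed in is the public IV concatenated with the fixed secret key, which is what makes the related-key structure exploitable.

```python
def rc4_ksa(key):
    """RC4 key scheduling: permute S = [0..255] under control of the key
    bytes (the key is reused cyclically).  The attacks exploit how the first
    few swaps depend only on the first few key bytes."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

# WEP-style usage (illustrative): the per-packet key is IV || secret, e.g.
# state = rc4_ksa(list(iv_bytes) + list(secret_key_bytes))
```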
An Effective Optimization of Reference Pixel Arrangement in the Lossless Coding for HDTV image This paper presents a lossless image compression method for one frame of high-definition television (HDTV). We apply classified adaptive prediction, and the prediction error is then encoded by arithmetic entropy coding. The image is divided into small blocks, which are classified into classes, each of which corresponds to one minimum mean square error (MMSE) linear predictor. In this paper, we consider the influence of the context reference pixels and the number of linear predictors on the compression ratio and encoding time. After a fast optimization of the block classification with a small number of context reference pixels, we optimize the number and the arrangement of reference pixels based on a variable selection technique that uses the F-statistic (the test statistic of the partial regression coefficient) in multiple linear regression analysis. We thus propose an effective optimization method for the reference pixel arrangement in lossless coding of HDTV still images. Experimental results confirm that our method achieves higher compression ratios and faster processing than the conventional one. It should be noted that the decoding speed of the proposed method is very fast.
A reversible high capacity data hiding scheme using pixel value adjusting feature In this paper, we propose a new reversible data hiding scheme that uses a pixel value adjusting feature. It has two phases. In the first phase, it scans the image diagonally from left to right and hides some of the secret data in the odd-valued pixels. In the second phase, it also scans the image diagonally, but in right-to-left order, and hides the secret data in the even-valued pixels. In the second phase, some of the pixels used for hiding in the first phase are again used to hide the secret data, but this time their values are changed in the positive direction; in other words, if a pixel value was decremented in the first phase, it is incremented this time, so that more secret data can be embedded and the quality of the stego-image is also maintained. Although our scheme has some overhead in hiding the secret data, it is able to provide good quality with high capacity. The scheme is very simple because it does not perform much computation at hiding time; rather, it simply increases or decreases the pixel value. Thus, it has very low computational complexity. The experimental results show that our proposed scheme is superior to many existing schemes.
1.113667
0.042556
0.04
0.018019
0.013333
0.007667
0.002222
0.000802
0.000201
0.000044
0.000006
0
0
0
A framework for control in production systems A formal model for representing control in production systems is defined. The formalism allows control to be directly specified independently of the conflict resolution scheme, and thus allows the issues of control and nondeterminism to be treated separately. Unlike previous approaches, it allows control to be examined within a uniform and consistent framework. It is shown that the formalism provides a basis for implementing control constructs which, unlike existing schemes, retain all the properties desired of a knowledge based system --- modularity, flexibility, extensibility and explanatory capacity. Most importantly, it is shown that these properties are not a function of the lack of control constraints, but of the type of information allowed to establish these constraints. Within the formalism it is also possible to provide a meaningful notion of the power of control constructs. This enables the types of control required in production systems to be examined and the capacity of various schemes to meet these requirements to be determined. Schemes for improving system efficiency and resolving nondeterminism are examined, and devices for representing such meta-level knowledge are described. In particular, the objectification of control information is shown to provide a better paradigm for problem solving and for talking about problem solving. It is also shown that the notion of control provides a basis for a theory of transformation of production systems, and that this provides a uniform and consistent approach to problems involving subgoal protection.
In defense of logic
Annotated production systems: a model for skill acquisition Annotated Production Systems provide a procedural model for skill acquisition by augmenting a production model of the skill with formal commentary describing plans, bugs, and interrelationships among various productions. This commentary supports efficient interpretation, self-debugging and self-improvement. The theory of annotated productions is developed by analyzing the skill of attitude instrument flying. An annotated production interpreter has been written that executes skill models which control a flight simulator. Preliminary evidence indicates that annotated productions effectively model certain bugs and certain learning behaviors characteristic of students.
Adaptive Production Systems Adaptive production systems are defined and used to illustrate adaptive techniques in production system construction. A learning paradigm is described within the framework of adaptive production systems, and is applied to a simple rote learning task, a nonsense syllable association and discrimination task, and a serial pattern acquisition task. It is shown that with the appropriate production building mechanism, all three tasks can be solved using similar adaptive production system learning techniques.
Splitting and reduction heuristics in automatic theorem proving A theorem proving program has been written in LISP which attempts to speed up automatic theorem proving by the use of heuristics. Some of these heuristics are of a general nature, applicable to the proof of any theorem in mathematics, while others are designed for set theory. Their effect is to break the theorem into parts which are easier to prove. The proof of these parts is then attempted by resolution. Resolution, when used, is relegated to the job it does best, proving relatively easy assertions.
The promotion and accumulation strategies in transformational programming
Correctness-preserving program transformations This paper extends the predicate calculus formalization of the partial correctness properties of programs (Ki, Go) to include the preservation of correctness under program transformations. The general notion of "program transformations which preserve properties" is fundamental to the theory of programming and programming languages. In the context of proofs of program correctness, transformations which preserve correctness can be used to improve less efficient, but easier to prove, programs. The basic argument in the use of correctness-preserving program transformations (hereafter CPTs) is:Assume that G is a program (with attached assertions) which has been proved correct with respect to some input-output relation Ain-Aout. Now suppose that S is some part of G, e.g. an expression, assertion, statement, etc., which is to be replaced by some other such part S' to produce the program G'. The goal is to prove that G' is also correct with respect to Ain-Aout and therefore the replacement preserves overall program correctness. Moreover, if the replacement has only a local effect, e.g. the body of a loop, then the proof of correctness-preservation should be restricted to that part of the program affected by the replacement.Section 2 reviews the current paradigm for proving program correctness. An example in section 3 illustrates CPTs in a sequence of improvements on a correct and simple, but inefficient, initial program. In section 4, the formalization of partial correctness properties of programs is recast as a semantic language definition using Knuth's semantic method (Kn1). This formalization is then used in section 5 to describe the mechanics of performing CPTs. In section 6, several questions about the formalization of sections 4 and 5 are discussed and a generalization is proposed. Finally, section 7 returns to a concrete example and suggests that the most effective use of CPTs is by identification of schematic forms. Related work is mentioned in section 8.
Shortest-path routing in arbitrary networks We introduce an on-line protocol which routes any set of N packets along shortest paths with congestion C and dilation D through an arbitrary network in O(C + D + log N) steps, with high probability. This time bound is optimal up to the additive log N, and it has previously only been reached for bounded-degree leveled networks.
Impact of radio irregularity on wireless sensor networks In this paper, we investigate the impact of radio irregularity on the communication performance in wireless sensor networks. Radio irregularity is a common phenomenon which arises from multiple factors, such as variance in RF sending power and different path losses depending on the direction of propagation. From our experiments, we discover that the variance in received signal strength is largely random; however, it exhibits a continuous change with incremental changes in direction. With empirical data obtained from the MICA2 platform, we establish a radio model for simulation, called the Radio Irregularity Model (RIM). This model is the first to bridge the discrepancy between spherical radio models used by simulators and the physical reality of radio signals. With this model, we are able to analyze the impact of radio irregularity on some of the well-known MAC and routing protocols. Our results show that radio irregularity has a significant impact on routing protocols, but a relatively small impact on MAC protocols. Finally, we propose six solutions to deal with radio irregularity. We evaluate two of them in detail. The results obtained from both the simulation and a running testbed demonstrate that our solutions greatly improve communication performance in the presence of radio irregularity.
Combining evidence for automatic web session identification Contextual information provides an important basis for identifying and understanding users' information needs. Our previous work in traditional information retrieval systems has shown how using contextual information could improve retrieval performance. With the vast quantity and variety of information available on the Web, and the short query lengths within Web searches, it becomes even more crucial that appropriate contextual information is extracted to facilitate personalized services. However, finding users' contextual information is not straightforward, especially in the Web search environment, where less is known about the individual users. In this paper, we present an approach that has significant potential for studying Web users' search contexts. The approach automatically groups a user's consecutive search activities on the same search topic into one session. It uses Dempster-Shafer theory to combine evidence extracted from two sources, each of which is based on statistical data from Web search logs. The evaluation we have performed demonstrates that our approach achieves a significant improvement over previous methods of session identification.
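The combination step relied on above is Dempster's rule; a generic sketch over frozenset focal elements follows. The actual evidence sources (statistics mined from Web search logs) and the session-boundary decision logic are not modeled here, and the data layout is an assumption.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic mass assignments (dict: frozenset -> mass) with
    Dempster's rule, renormalizing away the conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: the evidence cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Hypothetical example: two sources weighing "same session" vs "new session".
m_time = {frozenset({"same"}): 0.7, frozenset({"same", "new"}): 0.3}
m_topic = {frozenset({"new"}): 0.4, frozenset({"same", "new"}): 0.6}
print(dempster_combine(m_time, m_topic))
```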
Hierarchical production system Production systems have been used for representing domain-specific knowledge in a number of successful AI projects. In this short paper, we present a general concept of a hierarchical production system (HPS). The HPS is an extended version of the current production system, in which two hierarchical structures of rules are employed. In one hierarchical structure, heuristic rules for conflict resolution are embedded, and in the other, domain-specific rules are represented. The introduction of the hierarchical rule structures makes the HPS a useful framework for constructing knowledge-based systems.
Iterative water-filling for Gaussian vector multiple access channels We develop an efficient iterative water-filling algorithm to find an optimal transmit spectrum for maximum sum capacity in a Gaussian multiple access channel with vector inputs and a vector output. The iterative algorithm converges from any starting point and reaches within (K-1)/2 nats per output dimension from the K-user sum capacity after just one iteration
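A compact sketch of the iterative water-filling loop is given below, simplified to scalar (diagonal) subchannels rather than the general vector channels of the paper; the bisection water-filler, the array layout, and the fixed iteration count are assumptions made for brevity.

```python
import numpy as np

def waterfill(gains, power, tol=1e-9):
    """Single-user water-filling over parallel subchannels with positive
    effective gains: p_n = max(0, mu - 1/g_n), with sum(p_n) = power,
    found by bisection on the water level mu."""
    lo, hi = 0.0, power + np.max(1.0 / gains)
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / gains).sum() > power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / gains)

def iterative_waterfilling(H, powers, noise=1.0, iters=20):
    """Each user in turn water-fills against the interference of all other
    users.  H[k][n] is user k's power gain on subchannel n (a diagonal
    simplification of the paper's vector channels)."""
    K, N = H.shape
    P = np.zeros((K, N))
    for _ in range(iters):
        for k in range(K):
            interference = noise + (H * P).sum(axis=0) - H[k] * P[k]
            P[k] = waterfill(H[k] / interference, powers[k])
    return P
```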
Anti-Forensics with Steganographic Data Embedding in Digital Images E-forensics investigates and extracts confidential information from electronic products; anti-forensics, in turn, aims to make the evidence in those products imperceptible and undetectable. This work presents an anti-forensic steganography method that can embed and extract messages from images. The highlight of exploiting modification direction (HoEMD) and adaptive EMD (AdEMD) methods use the modulus operation and take the sensitive nature of the human visual system into account. The HoEMD approach exploits the pixel directions: a pixel with a larger change implies more pixel directions and, ultimately, a larger embedding capacity. The pixel differencing in the proposed AdEMD method is used to evaluate whether a pixel located in an edge area can tolerate a larger change than a pixel located in a smooth area. To successfully extract a message, the differencing value is maintained at the same level before and after data is concealed; a delicate adjusting phase is used as well. In contrast with the PVD and LSB replacement method of Wu et al., the LSB substitution method, and Lee and Chen's data hiding scheme based on a modulus function, the proposed steganography system has a larger embedding capacity and a higher image quality. The effectiveness of the proposed steganography schemes over a previous blind steganalyzer is demonstrated using the statistical attack of Chi-square analysis.
Reversible data hiding based on local histogram shifting with multilayer embedding Highlights: we use the two neighboring points of the peak point to embed the secret data; we improve the histogram of the embedded cover image to ensure the security of the secret data; multilayer embedding with a pyramid structure is adopted to increase the capacity; our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point, then uses the two neighboring points of the peak point to embed the secret data based on histogram shifting, while the peak point remains unchanged. At the extraction end, we no longer need key information about the peak point; we can directly find the peak point in the histogram to extract the secret data. We also exploit the localization to make the histogram of the embedded cover image almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
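A minimal single-layer sketch of the embedding rule described above follows: only the two neighbors of the peak bin carry bits, and the peak bin itself is never touched, so the decoder can re-locate it from the stego histogram. Boundary (0/255) handling, the localization step, and the multilayer pyramid embedding are omitted, and the payload length is assumed to be signaled separately.

```python
from collections import Counter

def embed_around_peak(pixels, bits):
    """Histogram-shifting sketch using the two neighbors of the peak bin."""
    peak = Counter(pixels).most_common(1)[0][0]
    out, it = [], iter(bits)
    for p in pixels:
        if p == peak:
            out.append(p)                  # the peak bin is never modified
        elif p == peak - 1:
            out.append(p - next(it, 0))    # bit 1 -> peak-2, bit 0 -> stay
        elif p == peak + 1:
            out.append(p + next(it, 0))    # bit 1 -> peak+2, bit 0 -> stay
        elif p < peak - 1:
            out.append(p - 1)              # shift left to make room
        else:
            out.append(p + 1)              # shift right to make room
    return out
```

Decoding mirrors this: stego values peak-1/peak+1 carry a 0 bit, peak-2/peak+2 carry a 1 bit, and everything farther out is shifted back by one to restore the original pixel.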
1.249984
0.083328
0.083328
0.062496
0.000532
0
0
0
0
0
0
0
0
0
Reversible data hiding of full color JPEG2000 compressed bit-stream preserving bit-depth information We present a lossless data hiding method for JPEG2000 compressed data based on the reversible information hiding for binary images that we have previously proposed. In JPEG2000 compression, full color RGB images are transformed to the YCrCb color space, and then, for each color component, wavelet transform, quantization and entropy coding are performed independently. The wavelet coefficients of each color component are quantized; therefore, a least significant bit (LSB) plane can be extracted. The proposed method embeds additional information to be hidden into the quantized wavelet coefficients of the Y color component in a reversible way. To realize this, we embed not only the secret data and a JBIG2 bit-stream of a part of the LSB plane but also the bit-depth of the quantized coefficients on some code-blocks. Experimental results demonstrate the feasibility of applying the proposed method to image alteration detection for JPEG2000 compressed data.
Reversible And Robust Audio Watermarking Based On Spread Spectrum And Amplitude Expansion Recently, a technique that uses quantization index modulation and amplitude expansion has been proposed for reversible and robust audio watermarking. However, when applied to modified stego audio, the technique is not reversible. Here, a novel technique that is both reversible and robust is proposed for hiding data in audio. It is perfectly reversible for unmodified stego signals and is semi-reversible for perceptually coded stego signals. A robust payload is embedded by direct-sequence spread-spectrum modulation, with the sequence determined from the amplitude expansion in time and frequency of integer modified discrete cosine transform (MDCT) coefficients. Simultaneously, a reversible payload is embedded into the apertures in the amplitude histogram that result from amplitude expansion of the integer MDCT coefficients. The robustness and size of the reversible payload were evaluated by simulation for 20 music pieces. The reversible-payload capacity was approximately 4.8 kilobits per second. Encoding the stego signals into MP3, tandem MP3, and MPEG4AAC, and applying an additive single delay to the stego signals, revealed a maximum bit error rate of less than 6.5% with a robust payload of 7.2-bits per second. Measurement of the objective quality of the stego audio and audio recovered from the modified stego signal was done by using software based on the perceptual evaluation of audio quality algorithm. The results showed that the mean objective difference grade (ODG) was better than 'perceptible, but not annoying' for the stego audio. Semi-recovery from the perceptually coded stego signals was realized in terms of small differences in ODG between the recovered and coded signals.
Optimum reversible data hiding and permutation coding This paper is mainly devoted to investigating the connection between binary reversible data hiding and permutation coding. We start by undertaking an approximate combinatorial analysis of the embedding capacity of reversible watermarking in the binary Hamming case, which asymptotically shows that optimum reversible watermarking must involve not only “writing on dirty paper”, as in any blind data hiding scenario, but also writing on the dirtiest parts of the paper. The asymptotic analysis leads to the information-theoretical result given by Kalker and Willems more than a decade ago. Furthermore, the novel viewpoint of the problem suggests a near-optimum reversible watermarking algorithm for the low embedding distortion regime based on permutation coding. A practical implementation of permutation coding, previously proposed in the context of maximum-rate perfect steganography of memoryless hosts, can be used to implement the algorithm. The paper concludes with a discussion on the evaluation of the general rate-distortion bound for reversible data hiding.
Complete Separable Reversible Data Hiding in Encrypted Image. Reversible data hiding in encrypted images (RDHEI) is an emerging technology, since it has good potential for practical applications such as encrypted image authentication, content owner identification and privacy protection. However, a key problem of many existing published works is that the embedded data can only be extracted either before or after image decryption. In this paper, a complete separable reversible data hiding scheme in encrypted images is proposed. Additional data can be embedded into a cipher image which is encrypted by RC4 and can be extracted error-free both from the cipher domain and from the plaintext domain. Moreover, the proposed method is simpler to compute, while offering better performance. The results demonstrate that larger payload, better image quality, and error-free data extraction as well as image recovery are achieved.
Stereo Image Coding With Histogram Pair Based Reversible Data Hiding This paper presents a stereo image coding method using a reversible data hiding technique, so that the right frame can be recovered losslessly and the left frame can be reconstructed with high visual quality. By utilizing the similarity between the two frames in a stereo image pair, the required storage size and transmission bandwidth for the stereo image pair can be reduced to 50%. A residual error matrix with a dynamic range of [-255, 255] is obtained by applying a frame-wise disparity algorithm, which first shifts the left frame horizontally by a certain amount and then computes its difference from the right frame. Next, the generated residual error image with gray levels [0, 255] is obtained losslessly by a proposed labeling scheme. JPEG2000 lossy compression is then applied to the residual error image. The histogram-pair based reversible data hiding scheme is then utilized to embed the JPEG2000 lossy compressed data into the right frame. Compared with the prior art, which uses a block-based disparity estimation algorithm and location-map based reversible data hiding, the proposed method reconstructs the stereo image with higher visual quality and faster processing speed. Specifically, the experiments have demonstrated that both the PSNR and the visual quality of the reconstructed stereo image pair are higher than those achieved by the prior arts.
Reversible Jpeg-Based Hiding Method With High Hiding-Ratio The stego-images generated by many existing hiding techniques are not economic in size, and hence need compression. Unfortunately, compression usually destroys the secret content hidden inside. To solve this dilemma, some hiding methods based on compression code (rather than the image itself) are reported. This paper proposes a high-capacity and high-hiding-ratio "reversible" steganography method based on JPEG-compression code. In the proposed method, the JPEG compression code of an image is used as the cover media. An 8 x 8 hiding-capacity table is firstly evaluated, which is then utilized to modify the quantization table attached to the given JPEG code. The two quantization tables (modified and original) together can map the DCT coefficients of each block to some larger DCT coefficients, with secret data hidden inside these larger DCT coefficients. In the decoding process, after lossless extraction of the hidden secret data, the proposed method can also recover the original JPEG-compression code. Experimental results show that our method outperforms other JPEG-based hiding methods (reversible or not) regarding both hiding-ratio and stego-image's quality.
An improved VLC-based lossless data hiding scheme for JPEG images In this paper, a lossless data hiding scheme which directly embeds data into the bitstream of JPEG images is presented. In real cases, the JPEG code space is only partly occupied: not all variable length codes (VLCs) in the Huffman table are used during the JPEG image compression process. Thus, these unused VLCs can be made use of, and data hiding can be performed by mapping one or more unused VLCs to one used VLC. By analyzing the statistics of both used and unused VLCs, the proposed scheme can take full advantage of the unused VLCs by mapping Huffman codes according to a specific mapping strategy and reach a higher capacity. The output stego image keeps exactly the same content as the original one and preserves the same file size; if the file size is allowed to be enlarged, our scheme can achieve a significant improvement in embedding capacity.
Reversible data hiding for JPEG images based on histogram pairs This paper proposes a lossless data hiding technique for JPEG images based on histogram pairs. It embeds data into the JPEG quantized 8x8 block DCT coefficients and can achieve good performance in terms of PSNR versus payload through manipulating histogram pairs with optimum threshold and optimum region of the JPEG DCT coefficients. It can obtain higher payload than the prior arts. In addition, the increase of JPEG file size after data embedding remains unnoticeable. These have been verified by our extensive experiments.
Circular interpretation of bijective transformations in lossless watermarking for media asset management The need for reversible or lossless watermarking methods has been highlighted in the literature to associate subliminal management information with losslessly processed media and to enable their authentication. The paper first analyzes the specificity and the application scope of lossless watermarking methods. It explains why early attempts to achieve reversibility are not satisfactory. They are restricted to well-chosen images, strictly lossless context and/or suffer from annoying visual artifacts. Circular interpretation of bijective transformations is proposed to implement a method that fulfills all quality and functionality requirements of lossless watermarking. Results of several bench tests demonstrate the validity of the approach.
Bi-stretch reversible data hiding algorithm for absolute moment block truncation coding compressed images Steganography is one of the most important approaches for secure transmission, concealing secret data in a host image imperceptibly. Achieving a good tradeoff between hiding capacity and image quality still requires further research. In this paper, to obtain satisfactory results, a Bi-Stretch Hiding (BSH) algorithm for absolute moment block truncation coding (AMBTC)-compressed images is proposed. In the scheme, the AMBTC-compressed image is first divided into non-overlapping blocks; after that, four feasible cases are employed to embed the secret data, which takes advantage of the characteristics of the coefficients of the AMBTC-compressed image and introduces only slight distortion of the AMBTC-compressed image. The experimental results demonstrate that the proposed BSH scheme outperforms the other state-of-the-art compression-domain data hiding methods.
Reversible image steganographic scheme via predictive coding. The reversible image steganographic scheme in this study provides the ability to embed secret data into a host image and then recover the host image without losing any information when the secret data is extracted. In this paper, a reversible image steganographic scheme based on predictive coding is proposed by embedding secret data into compression codes during the lossless image compression. The proposed scheme effectively provides a lossless hiding mechanism in the compression domain. During the predictive coding stage, the proposed scheme embeds secret data into error values by referring to a hiding-tree. In an entropy decoding stage, the secret data can be extracted by referring to the hiding-tree, and the host image can be recovered during the predictive decoding stage. The experimental results show that the average hiding capacity of the proposed scheme is 0.992 bits per pixel (bpp), and the host image can be reconstructed without losing any information when the secret data is extracted.
Reversible Data Hiding for 3D Meshes in the PVQ-Compressed Domain In this paper, we present a new data hiding method for 3D triangle meshes. While most of the existing data hiding schemes introduce some small amount of nonreversible distortion to the cover mesh, the new method is reversible and enables the cover mesh data to be completely restored when the payload is removed from the stego mesh. A noticeable difference between our method and others is that we embed data in the predictive vector quantization (PVQ) compressed domain by modifying the prediction mechanism during the compression process. Experimental results demonstrate the effectiveness of the proposed scheme.
Data-driven approaches to empirical discovery In this paper we track the development of research in empirical discovery. We focus on four machine discovery systems that share a number of features: the use of data-driven heuristics to constrain the search for numeric laws; a reliance on theoretical terms; and the recursive application of a few general discovery methods. We examine each system in light of the innovations it introduced over its predecessors, providing some insight into the conceptual progress that has occurred in machine discovery. Finally, we reexamine this research from the perspectives of the history and philosophy of science.
A new interpolation error expansion based reversible watermarking algorithm considering the human visual system Reversible watermarking has emerged over the past few years as a promising solution for copyright protection, especially for applications like remote sensing, medical imaging and military applications which require lossless recovery of the host media. In this paper, we aim to extend the additive interpolation error expansion technique in [16]. We consider the human visual system (HVS) to improve the embedding rate while maintaining the image visual quality. To this end, the just noticeable difference (JND) is used to embed more watermark bits. The experimental results show that the proposed algorithm can improve the embedding rate while preserving the image visual quality.
1.200736
0.200736
0.200736
0.100453
0.066912
0.04018
0.016784
0.007496
0.000914
0.000127
0.000047
0.000002
0
0
Stability and scalability in global routing As the complexity of physical implementation continues to grow with technology scaling, routability has emerged as a major concern and implementation flow bottleneck. Infeasibility of routing forces a loop back to placement, netlist optimization, or even RTL design and floorplanning. Thus, to maintain convergence and a manageable number of iterations in the physical implementation flow, it is necessary to accurately predict design routability as quickly as possible. Routability estimation during placement typically exploits rough but fast global routers. Fast global routers are integrated with placers and are supposed to provide accurate congestion estimation for each iterative placement optimization, with short turn-around time. Such integrated global routers (as well as congestion estimators without global routers) should give (1) fast, and (2) stably accurate decisions as to whether a given placement is indeed routable. In this paper, we evaluate four academic global routers [14] [1] [9] [4] in terms of stability and scalability. We perturb global routing problem instances in controlled ways, and analyze the sensitivity of routing outcomes and metrics. We observe scaling suboptimality and substantial noise in most of our experiments; this suggests a future need for new global router criteria and metrics.
An Analytical Placement Framework for 3-D ICs and Its Extension on Thermal Awareness In this paper, we present a high-quality analytical 3-D placement framework. We propose using a Huber-based local smoothing technique to work with a Helmholtz-based global smoothing technique to handle the nonoverlapping constraints. The experimental results show that this analytical approach is effective for achieving tradeoffs between the wirelength and the through-silicon-via (TSV) number. Compared to the state-of-the-art 3-D placer ntuplace3d, our placer achieves more than 20% wirelength reduction, on average, with a similar number of TSVs. Furthermore, we extend this analytical 3-D placement framework with thermal awareness. While 2-D thermal-aware placement simply follows uniform power distribution to minimize temperature, we show that the same criterion does not work for 3-D ICs. Instead, we are able to prove that when the TSV area in each bin is proportional to the lumped power consumption of that bin and the bins in all tiers directly above it, the peak temperature is minimized. Based on this criterion, we implement thermal awareness in our analytical 3-D placement framework. Compared with a TSV oblivious method, which only results in an 8% peak temperature reduction, our method reduces the peak temperature by 34%, on average, with slightly less wirelength overhead. These results suggest that considering the thermal effects of TSVs is necessary and effective during the placement stage.
Congestion prediction in early stages of physical design Routability optimization has become a major concern in physical design of VLSI circuits. Due to the recent advances in VLSI technology, interconnect has become a dominant factor of the overall performance of a circuit. In order to optimize interconnect cost, we need a good congestion estimation method to predict routability in the early designing stages. Many congestion models have been proposed but there's still a lot of room for improvement. Besides, routers will perform rip-up and reroute operations to prevent overflow, but most models do not consider this case. The outcome is that the existing models will usually underestimate the routability. In this paper, we have a comprehensive study on our proposed congestion models. Results show that the estimation results of our approaches are always more accurate than the previous congestion models.
LMgr: A low-Memory global router with dynamic topology update and bending-aware optimum path search Global routing remains a fundamental physical design problem. We observe that large circuits cause high memory cost, and that modern routers cannot optimize the routing path of each two-pin subnet. In this paper, (1) we develop a dynamic topology update technique to improve routing quality, (2) we improve the memory efficiency with negligible performance overhead, (3) we prove the non-optimality of the traditional maze routing algorithm, (4) we develop a novel routing algorithm and prove that it is optimal, and (5) we design a new global router, LMgr, which integrates all the above techniques. The experimental results on the ISPD 2008 benchmark suite show that LMgr outperforms NTHU2.0, NTUgr, FastRoute3.0 and FGR1.1 in solution quality on 13 out of 16 benchmarks and in peak memory cost on 15 out of 16 benchmarks; the average memory reduction over all the benchmarks is up to 77%.
Progress and challenges in VLSI placement research Given the significance of placement in IC physical design, extensive research studies performed over the last 50 years addressed numerous aspects of global and detailed placement. The objectives and the constraints dominant in placement have been revised many times over, and continue to evolve. Additionally, the increasing scale of placement instances affects the algorithms of choice for high-performance tools. We survey the history of placement research, the progress achieved up to now, and outstanding challenges.
TSV-aware analytical placement for 3D IC designs Through-silicon vias (TSVs) are required for transmitting signals among different dies for the three-dimensional integrated circuit (3D IC) technology. The significant silicon areas occupied by TSVs bring critical challenges for 3D IC placement. Unlike most published 3D placement works that only minimize the number of TSVs during placement due to the limitations in their techniques, this paper proposes a new 3D cell placement algorithm which can additionally consider the sizes of TSVs and the physical positions for TSV insertion during placement. The algorithm consists of three stages: (1) 3D analytical global placement with density optimization and whitespace reservation for TSVs, (2) TSV insertion and TSV-aware legalization, and (3) layer-by-layer detailed placement. In particular, the global placement is based on a novel weighted-average wirelength model, giving the first model in the literature that can outperform the well-known log-sum-exp wirelength model theoretically and empirically. Further, 3D routing can easily be accomplished by traditional 2D routers since the physical positions of TSVs are determined during placement. Compared with state-of-the-art 3D cell placement works, our algorithm can achieve the best routed wirelength, TSV counts, and total silicon area, in shortest running time.
An efficient and effective detailed placement algorithm In the past few years, there has been a lot of research in the area of global placement. In comparison, not much attention has been paid to the detailed placement problem. Existing detailed placers either fail to improve upon the excellent solution quality enabled by good global placers or are very slow. To handle the above problems, we focus on the detailed placement problem. We present an efficient and effective detailed placement algorithm to handle the wirelength minimization problem. The main contributions of our work are: (1) an efficient Global Swap technique to identify a pair of cells that can be swapped to reduce wirelength; (2) a flow that combines the Global Swap technique with other heuristics to produce very good wirelength; (3) an efficient single-segment clustering technique to optimally shift cells within a segment to minimize wirelength. On legalized mPL5 global placements on the IBM standard-cell benchmarks, our detailed placer can achieve 19.0%, 13.2% and 0.5% more wirelength reduction compared to Fengshui5.0, rowironing and Domino respectively. Correspondingly we are 3.6×, 2.8× and 15× faster. On the ISPD05 benchmarks (Gi-Joon Nam et al., 2005), we achieve 8.1% and 9.1% more wirelength reduction compared to Fengshui5.0 and rowironing respectively. Correspondingly we are 3.1× and 2.3× faster.
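For orientation, the objective these detailed-placement moves optimize is half-perimeter wirelength (HPWL); below is a toy HPWL evaluator plus a naive improvement pass that swaps cell positions when it helps. It is only an illustrative stand-in: the paper's Global Swap picks candidates from each cell's optimal region and is far more efficient than the all-pairs loop shown here.

```python
def hpwl(nets, pos):
    """Half-perimeter wirelength: `nets` maps net -> list of cell names,
    `pos` maps cell name -> (x, y)."""
    total = 0.0
    for cells in nets.values():
        xs = [pos[c][0] for c in cells]
        ys = [pos[c][1] for c in cells]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def greedy_swap_pass(nets, pos):
    """One greedy pass over all cell pairs, accepting a position swap only
    when it reduces HPWL (a toy stand-in for the Global Swap heuristic)."""
    cells = list(pos)
    for i in range(len(cells)):
        for j in range(i + 1, len(cells)):
            a, b = cells[i], cells[j]
            before = hpwl(nets, pos)
            pos[a], pos[b] = pos[b], pos[a]
            if hpwl(nets, pos) >= before:
                pos[a], pos[b] = pos[b], pos[a]   # revert if no improvement
    return pos
```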
Timing-driven placement by grid-warping Grid-warping is a recent placement strategy based on a novel physical analogy: rather than move the gates to optimize their location, it elastically deforms a model of the 2-D chip surface on which the gates have been coarsely placed via a standard quadratic solve. In this paper, we introduce a timing-driven grid-warping formulation that incorporates slack-sensitivity-based net weighting. Given inevitable concerns about wirelength and runtime degradation in any timing-driven scheme, we also incorporate a more efficient net model and an integrated local improvement ("rewarping") step. An implementation of these ideas, Warp2, can improve worst-case negative slack by 37% on average, with very modest increases in wirelength and runtime.
Legalization algorithm for multiple-row height standard cell design. Typical standard cell placement algorithms assume that all cells are of the same height such that cells can be aligned along the placement rows. However, modern standard cell designs are getting more complicated and multiple-row height cell becomes more common. With multiple-row height cells, placement of cells are not independent among different rows. It turns out that most of the commonly used detailed placement and legalization techniques cannot be extended easily to handle the problem. We propose a novel algorithm in handling legalization of placement involving multiple-row height cells. The algorithm can efficiently legalize a local region of cells with various heights, which is especially useful for local cell movement, cell sizing, and buffer insertion. Experiments on the application of the technique in detailed placement show that our approach can effectively and efficiently legalize global placement results and obtain significant improvement in the objective function.
Efficient simulated evolution based rerouting and congestion-relaxed layer assignment on 3-D global routing The increasing complexity of interconnection designs has enhanced the importance of research into global routing when seeking high-routability (low overflow) results or rapid search paths that report wire-length estimations to a placer. This work presents two routing techniques, namely adaptive pseudorandom net-ordering routing and evolution-based rip-up and reroute using a two-stage cost function in a high-performance congestion-driven 2-D global router. We also propose two efficient via-minimization methods, namely congestion relaxation by layer shifting and rip-up and re-assignment, for a dynamic programming-based layer assignment. Experimental results demonstrate that our router achieves performance similar to the first two winning routers in ISPD 2008 Routing Contest in terms of both routability and wire length at a 1.42X and 25.84X faster routing speed. Besides, our layer assignment yields 3.5% to 5.6% fewer vias, 2.2% to 3.3% shorter wirelength and 13% to 27% less runtime than COLA.
Highly dynamic Destination-Sequenced Distance-Vector routing (DSDV) for mobile computers An ad-hoc network is the cooperative engagement of a collection of Mobile Hosts without the required intervention of any centralized Access Point. In this paper we present an innovative design for the operation of such ad-hoc networks. The basic idea of the design is to operate each Mobile Host as a specialized router, which periodically advertises its view of the interconnection topology with other Mobile Hosts within the network. This amounts to a new sort of routing protocol. We have investigated modifications to the basic Bellman-Ford routing mechanisms, as specified by RIP [5], to make it suitable for a dynamic and self-starting network mechanism as is required by users wishing to utilize ad hoc networks. Our modifications address some of the previous objections to the use of Bellman-Ford, related to the poor looping properties of such algorithms in the face of broken links and the resulting time dependent nature of the interconnection topology describing the links between the Mobile Hosts. Finally, we describe the ways in which the basic network-layer routing can be modified to provide MAC-layer support for ad-hoc networks.
Searching by elimination We present a way of program derivation that is applicable to a wide class of searching problems. Compared to more conventional approaches, this method yields very elegant programs. For a general problem specification, we derive a rather general program scheme. For the specific problem at hand, the general scheme is refined — depending on the mathematical properties of the objects involved — to a particular program. This is illustrated by some examples, varying from elementary problems to more advanced problems.
How to prove equivalence of term rewriting systems without induction A simple method is proposed for testing the equivalence, in a restricted domain, of two given term rewriting systems. By using the Church-Rosser property and the reachability of term rewriting systems, the method allows us to prove equivalence of these systems without the explicit use of induction; this proof usually requires some kind of induction. The method proposed is a general extension of the inductionless induction methods developed by Musser, Goguen, Huet and Hullot, and allows us to extend inductionless induction concepts not only to term rewriting systems with the termination property, but also to various reduction systems: term rewriting systems without the termination property, string rewriting systems, graph rewriting systems, combinatory reduction systems, and resolution systems. This method is applied to test equivalence of term rewriting systems, to prove inductive theorems, and to derive a new term rewriting system from a given system by using equivalence transformation rules.
An image steganographic algorithm on smart mechanism of embedding secret data in images In reversible compressed-image data hiding, the exact cover image is restored, at an efficient rate, after extracting the secret data from the stego-image. The proposed improvement of the traditional histogram shifting method prevents overflow and underflow to a greater extent. The primary objective of this paper is to develop high data hiding capability in a histogram-based reversible data hiding algorithm while introducing relatively low distortion after embedding the secret message.
1.068527
0.066667
0.041342
0.034555
0.018764
0.005665
0.00269
0.000434
0.000175
0.000045
0
0
0
0
Standard Cell Library Design and Optimization Methodology for ASAP7 PDK. Standard cell libraries are the foundation for the entire back-end design and optimization flow in modern application-specific integrated circuit designs. At 7nm technology node and beyond, standard cell library design and optimization is becoming increasingly difficult due to extremely complex design constraints, as described in the ASAP7 process design kit (PDK). Notable complexities include discrete transistor sizing due to FinFETs, complicated design rules from lithography and restrictive layout space from modern standard cell architectures. The design methodology presented in this paper enables efficient and high-quality standard cell library design and optimization with the ASAP7 PDK. The key techniques include exhaustive transistor sizing for cell timing optimization, transistor placement with generalized Euler paths and back-end design prototyping for library-level explorations.
Lazy man's logic synthesis Deriving a circuit for a Boolean function or improving an available circuit are typical tasks solved by logic synthesis. Numerous algorithms in this area have been proposed and implemented over the last 50 years. This paper presents a "lazy” approach to logic synthesis based on the following observations: (a) optimal or near-optimal circuits for many practical functions are already derived by the tools, making it unnecessary to implement new algorithms or even run the old ones repeatedly; (b) larger circuits are composed of smaller ones, which are often isomorphic up to a permutation/negation of inputs/outputs. Experiments confirm these observations. Moreover, a case-study shows that logic level minimization using lazy man's synthesis improves delay after LUT mapping into 4- and 6-input LUTs, compared to earlier work on high-effort delay optimization.
Continuous Integration and Its Tools Continuous integration has been around for a while now, but the habits it suggests are far from common practice. Automated builds, a thorough test suite, and committing to the mainline branch every day sound simple at first, but they require a responsible team to implement and constant care. What starts with improved tooling can be a catalyst for long-lasting change in your company's shipping culture. Continuous integration is more than a set of practices, it's a mindset that has one thing in mind: increasing customer value. The Web extra at http://youtu.be/tDl_cHfrJZo is an audio podcast of the Tools of the Trade column on the same theme.
ICCAD-2017 CAD contest in multi-deck standard cell legalization and benchmarks. An increasing number of multi-deck cells occupying multiple rows (e.g. multi-bit registers) are used in advanced node technologies to achieve low power and high performance. The multi-deck standard cell legalization not only should remove all overlaps between cells but also should satisfy delicate and complicated design rules with preserving the quality of the given placement by applying the minimal perturbation. In addition, the process must be fast and robust to handle the sheer number of cells in the state-of-the-art designs. For this purpose, we have defined an evaluation metric based on maximum, average cell movements, and Half Perimeter Wire Length (HPWL) as well as runtime of the legalization algorithm. In addition, we have introduced a set of benchmarks that include multi-deck cells with a range of heights (1 -- 4 row heights).
Horizontal benchmark extension for improved assessment of physical CAD research The rapid growth in complexity and diversity of IC designs, design flows and methodologies has resulted in a benchmark-centric culture for evaluation of performance and scalability in physical-design algorithm research. Landmark papers in the literature present vertical benchmarks that can be used across multiple design flow stages; artificial benchmarks with characteristics that mimic those of real designs; artificial benchmarks with known optimal solutions; as well as benchmark suites created by major companies from internal designs and/or open-source RTL. However, to our knowledge, there has been no work on horizontal benchmark creation, i.e., the creation of benchmarks that enable maximal, comprehensive assessments across commercial and academic tools at one or more specific design stages. Typically, the creation of horizontal benchmarks is limited by mismatches in data models, netlist formats, technology files, library granularity, etc. across different tools, technologies, and benchmark suites. In this paper, we describe methodology and robust infrastructure for "horizontal benchmark extension" that permits maximal leverage of benchmark suites and technologies in "apples-to-apples" assessment of both industry and academic optimizers. We demonstrate horizontal benchmark extensions, and the assessments that are thus enabled, in two well-studied domains: place-and-route (four combinations of academic placers/routers, and two commercial P&R tools) and gate sizing (two academic sizers, and three commercial tools). We also point out several issues and precepts for horizontal benchmark enablement.
Dr. CU: Detailed Routing by Sparse Grid Graph and Minimum-Area-Captured Path Search Different from global routing, detailed routing takes care of many detailed design rules and is performed on a significantly larger routing grid graph. In advanced technology nodes, it becomes the most complicated and time-consuming stage in the very large-scale integration physical design flow. We propose Dr. CU, an efficient and effective detailed router, to tackle the challenges. To handle a 3-D detailed routing grid graph of enormous size, a set of two-level sparse data structures is designed for runtime and memory efficiency. For handling the minimum-area constraint, an optimal correct-by-construction path search algorithm is proposed. Besides, an efficient bulk synchronous parallel scheme is adopted to further reduce the runtime usage. Compared with the other state-of-the-art academic detailed routers, Dr. CU reduces the number of design rule violations by one or two orders of magnitude. At the same time, it uses shorter wire length, fewer vias, and significantly less runtime. The source code of Dr. CU is available at https://github.com/cuhk-eda/dr-cu.
Gridless pin access in detailed routing In the physical design of VLSI circuits, routing is one of the most important tasks. Usually done towards the end of the design process, especially the detailed routing phase has to obey virtually all design rules. As the feature size becomes ever smaller, shifts towards gridless design paradigms are necessary and a formerly trivial task, namely pin access, now becomes difficult. This work presents and discusses gridless pin access. In particular, we show how to create a feasible and efficient gridless routing approach that can be fit into existing gridded routing flows, creating a practical overall routing solution. As a key ingredient, our approach explicitly addresses design-rule-conforming (shortest) paths among geometric obstacles, also taking violations within the paths themselves into consideration. Furthermore, redundancy-exploiting structures called circuit classes are introduced, and based on these, further improvements are described. We evaluated the approach on current gridless designs and present respective results: the routing performance is improved greatly both with respect to runtime and quality of the results.
ISPD 2015 Benchmarks with Fence Regions and Routing Blockages for Detailed-Routing-Driven Placement The ISPD 2015 placement-contest benchmarks include all the detailed pin, cell, and wire geometry constraints from the 2014 release, plus (a) added fence regions and placement blockages, (b) altered netlists including fixed macro blocks, (c) reduced standard cell area utilization via larger floorplan outlines, and (d) specified upper limits on local cell-area density. Compared to the 2014 release, these new constraints add realism and increase the difficulty of producing detail-routable wirelength-driven placements.
Recursive Function Smoothing of Half-Perimeter Wirelength for Analytical Placement Inspired by recent success of analytical placers that use a log-sum-exponential (LSE) function to smooth half-perimeter wirelength (HPWL), we consider in this paper two alternative smoothing methods for HPWL by recursive extension of two-variable max functions. A limited-memory quasi-Newton solver is applied to solve the objective function combining both the smoothing function of HPWL and the penalty function that arises from cell density constraints. Experimental results show that our flow using these two smoothing functions and the solver produces placements with comparable HPWL compared to LSE smoothing-based methods. Our placement flow also produces placements with comparable routability and routed wirelength but with shorter runtime.
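The abstract above contrasts LSE smoothing with a recursive extension of two-variable max functions, but does not spell out the functions used. The sketch below is therefore only a structural stand-in: it pairs exact HPWL with the usual LSE surrogate and with a recursive fold of a two-variable soft max (here again LSE-based); the paper's actual two-variable smoothers and the smoothing parameter `gamma` are assumptions of this illustration.

```python
# Sketch contrasting exact HPWL of one net with smooth surrogates of its max terms.
# lse_max is the common log-sum-exponential smoothing; recursive_smooth_max folds a
# two-variable smooth max over the pins, mirroring the recursive construction the
# paper studies (its actual two-variable functions differ from the LSE used here).
import math

def hpwl(xs, ys):
    """Exact half-perimeter wirelength from the net's pin coordinates."""
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def lse_max(vals, gamma=1.0):
    return gamma * math.log(sum(math.exp(v / gamma) for v in vals))

def smooth_max2(a, b, gamma=1.0):
    return gamma * math.log(math.exp(a / gamma) + math.exp(b / gamma))

def recursive_smooth_max(vals, gamma=1.0):
    acc = vals[0]
    for v in vals[1:]:
        acc = smooth_max2(acc, v, gamma)   # extend the two-variable smoother pin by pin
    return acc

xs, ys = [1.0, 4.0, 2.5], [0.0, 3.0, 1.0]
print(hpwl(xs, ys))                                     # 6.0
print(lse_max(xs, 0.1), recursive_smooth_max(xs, 0.1))  # both close to max(xs) = 4.0
```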
Redundant-via enhanced maze routing for yield improvement Redundant via insertion is a good solution to reduce the yield loss by via failure. However, the existing methods are all post-layout optimizations that insert redundant via after detailed routing. In this paper, we propose the first routing algorithm that considers feasibility of redundant via insertion in the detailed routing stage. Our routing problem is formulated as maze routing with redundant via constraints. The problem is transformed to a multiple constraint shortest path problem, and solved by Lagrangian relaxation technique. Experimental results show that our algorithm can find routing layout with much higher rate of redundant via than conventional maze routing.
Combining evidence for automatic web session identification Contextual information provides an important basis for identifying and understanding users' information needs. Our previous work in traditional information retrieval systems has shown how using contextual information could improve retrieval performance. With the vast quantity and variety of information available on the Web, and the short query lengths within Web searches, it becomes even more crucial that appropriate contextual information is extracted to facilitate personalized services. However, finding users' contextual information is not straightforward, especially in the Web search environment where less is known about the individual users. In this paper, we will present an approach that has significant potential for studying Web users' search contexts. The approach automatically groups a user's consecutive search activities on the same search topic into one session. It uses Dempster-Shafer theory to combine evidence extracted from two sources, each of which is based on the statistical data from Web search logs. The evaluation we have performed demonstrates that our approach has achieved a significant improvement over previous methods of session identification.
A functional representation of data structures with a hole
Task partitioning for multi-core network processors Network processors (NPs) typically contain multiple concurrent processing cores. State-of-the-art programming techniques for NPs are invariably low-level, requiring programmers to partition code into concurrent tasks early in the design process. This results in programs that are hard to maintain and hard to port to alternative architectures. This paper presents a new approach in which a high-level program is separated from its partitioning into concurrent tasks. Designers write their programs in a high-level, domain-specific, architecturally-neutral language, but also provide a separate Architecture Mapping Script (AMS). An AMS specifies semantics-preserving transformations that are applied to the program to re-arrange it into a set of tasks appropriate for execution on a particular target architecture. We (i) describe three such transformations: pipeline introduction, pipeline elimination and queue multiplexing; and (ii) specify when each can be safely applied. As a case study we describe an IP packet-forwarder and present an AMS script that partitions it into a form capable of running at 3Gb/s on an Intel IXP2400 Network Processor.
Bi-directional Pixel-Value Differencing Approach for Steganography
1.218489
0.218489
0.218489
0.110311
0.027767
0.016
0.002467
0.000304
0.000004
0
0
0
0
0
Generating Standardized Clinical Documents for Medical Information Exchanges The author mentions that in order to facilitate the exchange of medical information, Taiwan's Department of Health supported a project to standardize electronic medical records. Part of this project involved developing a system that automatically generates samples for the 108 different types of standard documents. The paper discusses the efforts to create a comprehensive specification system to help hospital staffs create EMRs that conform to the HL7 CDA. The guidelines comprise online documents and formats, categorized according to the various paper forms used in different hospitals. For ease of use, the Standardization Clinical Document Generation System was also developed. The system generates sample documents automatically, thereby maintaining consistency and reducing human errors.
Integration of the HL7 Standard in a Multiagent System to Support Personalized Access to e-Health Services In this paper, we present a multiagent system to support patients in search of healthcare services in an e-health scenario. The proposed system is HL7-aware in that it represents both patient and service information according to the directives of HL7, the information management standard adopted in medical context. Our system builds a profile for each patient and uses it to detect Healthcare Service Providers delivering e-health services potentially capable of satisfying his needs. In order to handle this search it can exploit three different algorithms: the first, called PPB, uses only information stored in the patient profile; the second, called DS-PPB, considers both information stored in the patient profile and similarities among the e-health services delivered by the involved providers; the third, called AB, relies on A*, a popular search algorithm in Artificial Intelligence. Our system builds also a social network of patients; once a patient submits a query and retrieves a set of services relevant to him, our system applies a spreading activation technique on this social network to find other patients who may benefit from these services.
Metadata-based image watermarking for copyright protection In this paper, we propose a practical application for copyright protection of images with watermarking. The EXIF metadata of images and error-control codes are integrated into our algorithm and corresponding applications. Application for robust watermarking is one of the major branches in digital rights management (DRM) systems and digital forensics. Here, we focus on the copyright protection for images taken by ordinary cameras. By use of robust watermarking, it generally alters selected coefficients of the contents to accomplish the embedding process. Should the received image be in question, the watermark embedded beforehand can be extracted to indicate the copyright owner of such an image. We consider not only the image contents itself, but we also employ the EXIF metadata, which serves as the role of watermark, to be integrated into our scheme to make copyright protection possible. Moreover, for enhancing the performance for copyright protection, channel coding is employed and better protection capability can be expected. Taking the manufacturer, camera model, date and time stamp, and other important information in the EXIF metadata into account, conventional watermarking techniques can be applied to ordinary pictures taken by ourselves, and watermarked images with good quality can be produced. Even when the marked image has been intentionally modified, the original EXIF with selected information can mostly be recovered from the channel decoding process. Simulation results present the effectiveness of such an implementation.
Adjacent Quad Modification Algorithm for Reversible Data Hiding Reversible data hiding has been a newly developed branch in data hiding research. By definition, when the message is hidden into the original image by some means, the image containing the hidden message is transmitted to the decoder. After decoding, the original image and the hidden message should both be recovered perfectly. In this paper, we make use of the easily implemented scheme of modifying the histogram of the original image, and perform data hiding with the proposed scheme by considering the local statistics in the original image. Simulation results demonstrate the better performance, and hence the effectiveness, of the proposed algorithm.
Telephony Speech Enhancement by Data Hiding The current public switched telephone network (PSTN) is only able to deliver analog signals in a relatively narrow frequency band, about 200-3500 Hz. Such a limited bandwidth causes the typical sound of the narrowband telephone speech. In order to improve intelligibility and perceived quality of telephone speech, we propose using data hiding to extend the PSTN channel bandwidth. Based on the perceptual masking principle, the inaudible spectrum components within the telephone bandwidth can be removed without degrading the speech quality, providing a hidden channel to transmit extra information. The audible components outside the PSTN bandwidth, which are spread out by using orthogonal pseudo-noise codes, are embedded into this hidden channel and then transmitted through the PSTN channel. While this hidden signal is not audible to the human ear, it can be extracted at the receiver end. It results in a final speech signal with a wider bandwidth than the normal PSTN channel. Using both theoretical and simulation analysis, it is shown that the proposed approach is robust to quantization errors and channel noises. Although we cannot physically extend the transmission bandwidth of PSTN, the telephony speech quality can be significantly improved by using the proposed data hiding technique
Adaptive lossless image coding using least squares optimization with edge-look-ahead In predictive image coding, the least squares (LS)-based adaptive predictor is noted as an efficient method to improve prediction result around edges. However pixel-by-pixel optimization of the predictor coefficients leads to a high coding complexity. To reduce computational complexity, we activate the LS optimization process only when the coding pixel is around an edge or when the prediction error is large. We propose a simple yet effective edge detector using only causal pixels. The system can look ahead to determine if the coding pixel is around an edge and initiate the LS adaptation to prevent the occurrence of a large prediction error. Our experiments show that the proposed approach can achieve a noticeable reduction in complexity with only a minor degradation in the prediction results.
Lossless data hiding based on histogram modification of difference images In this paper, we propose a new lossless data hiding method where distortion due to data embedding can be completely removed from the watermarked image after the watermark has been extracted. In the proposed method, we utilize characteristics of the difference image and modify pixel values slightly to embed the data. We show that the lower bound of the PSNR (peak-signal-to-noise-ratio) values for typical images are about 51.14 dB. Moreover, the proposed method is quite simple and fast. Experimental results demonstrate that the proposed scheme can embed a large amount of data while keeping high visual quality of test images.
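For readers unfamiliar with the mechanism, here is a bare-bones, one-dimensional sketch of histogram shifting applied to a difference sequence. It captures only the peak/zero-bin idea; the paper's scan order, its exact difference construction and its overflow handling are omitted, and the bin selection and function names are illustrative assumptions.

```python
# Minimal 1-D sketch of histogram-shifting embedding on a difference sequence:
# values strictly between the peak bin and the next empty (zero) bin are shifted by
# one to free the bin beside the peak, and each occurrence of the peak value either
# stays put (bit 0) or moves into the freed bin (bit 1). Unused peak positions
# simply carry 0 bits; real schemes track the payload length separately.
from collections import Counter

def embed(diffs, bits):
    hist = Counter(diffs)
    peak = max(hist, key=hist.get)
    zero = next(v for v in range(peak + 1, peak + 256) if hist.get(v, 0) == 0)
    out, it = [], iter(bits)
    for d in diffs:
        if peak < d < zero:
            out.append(d + 1)                 # shift to make room next to the peak
        elif d == peak:
            out.append(d + next(it, 0))       # embed one payload bit at the peak
        else:
            out.append(d)
    return out, peak, zero

def extract(marked, peak, zero):
    bits, recovered = [], []
    for d in marked:
        if d == peak:
            bits.append(0); recovered.append(d)
        elif d == peak + 1:
            bits.append(1); recovered.append(peak)
        elif peak + 1 < d <= zero:
            recovered.append(d - 1)           # undo the shift
        else:
            recovered.append(d)
    return bits, recovered

marked, p, z = embed([0, 1, 0, 2, 0, 1, 3], [1, 0, 1])
print(extract(marked, p, z))   # recovers the bits and the original differences
```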
Near lossless reversible data hiding based on adaptive prediction In this paper we present a new near lossless reversible watermarking algorithm using adaptive prediction for embedding. The prediction is based on directional first-order differences of pixel intensities within a suitably selected neighborhood. The proposed scheme results to be computationally efficient and allows achieving high embedding capacity while preserving a high image quality. Extensive experimental results demonstrate the effectiveness of the proposed approach.
Wavelet-based reversible watermarking for authentication In the digital information age, digital content (audio, image, and video) can be easily copied, manipulated, and distributed. Copyright protection and content authentication of digital content has become an urgent problem to content owners and distributors. Digital watermarking has provided a valuable solution to this problem. Based on its application scenario, most digital watermarking methods can be divided into two categories: robust watermarking and fragile watermarking. As a special subset of fragile watermark, reversible watermark (which is also called lossless watermark, invertible watermark, erasable watermark) enables the recovery of the original, unwatermarked content after the watermarked content has been detected to be authentic. Such reversibility to get back unwatermarked content is highly desired in sensitive imagery, such as military data and medical data. In this paper we present a reversible watermarking method based on an integer wavelet transform. We look into the binary representation of each wavelet coefficient and embed an extra bit to "expandable" wavelet coefficient. The location map of all "expanded" coefficients will be coded by JBIG2 compression and these coefficient values will be losslessly compressed by arithmetic coding. Besides these two compressed bit streams, an SHA-256 hash of the original image will also be embedded for authentication purpose.
Low-density parity-check codes A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number j of 1's and each row contains a small fixed number k of 1's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed j. When used with maximum likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed j. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For j > 3 and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound.
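To make the (j, k)-regular structure concrete, the sketch below builds a toy parity-check matrix with constant column and row weights and runs a generic hard-decision bit-flipping decoder on a binary symmetric channel error. This is a textbook-style decoder in the spirit of the "simple but nonoptimum" scheme mentioned above, not a reproduction of Gallager's decoder, and the tiny hand-made matrix is for illustration only.

```python
# Toy (j=2, k=4)-regular parity-check matrix and a generic bit-flipping decoder:
# on each iteration, flip the bits that participate in the most unsatisfied checks.
import numpy as np

H = np.array([          # 4 checks x 8 bits; every column weight 2, every row weight 4
    [1, 1, 1, 1, 0, 0, 0, 0],
    [1, 1, 0, 0, 1, 1, 0, 0],
    [0, 0, 1, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1, 1, 1],
], dtype=int)

def bit_flip_decode(H, y, max_iters=20):
    x = y.copy()
    for _ in range(max_iters):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            break                              # all parity checks satisfied
        votes = H.T @ syndrome                 # failed checks each bit belongs to
        x = np.where(votes == votes.max(), x ^ 1, x)
    return x

codeword = np.zeros(8, dtype=int)              # the all-zero word satisfies H x = 0
received = codeword.copy()
received[5] ^= 1                               # one channel flip
print(bit_flip_decode(H, received))            # recovers the all-zero codeword
```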
A Histogram shifting-based blind reversible data hiding method with a histogram peak estimator This paper proposes a simple but efficient method, free from memorizing side information, for histogram shifting-based reversible data hiding (HS-RDH). To hide data into an image, a HS-RDH method modifies particular pixel values in the image, based on its tonal distribution, and the method not only extracts hidden data but also recovers the original image from the distorted image which conveys hidden data. Most HS-RDH methods, except for a few, have to memorize a set of image-dependent side information. The proposed method is free from memorizing side information by introducing two mechanisms: one is estimation of side information based on a simple statistic, and the other is hiding a part of the side information as well as the data in the image. This approach makes the proposed method superior to the conventional blind methods in terms of the quality of images which carry hidden data.
Automatic evaluation of summaries using N-gram co-occurrence statistics Following the recent adoption by the machine translation community of automatic evaluation using the BLEU/NIST scoring process, we conduct an in-depth study of a similar idea for evaluating summaries. The results show that automatic evaluation using unigram co-occurrences between summary pairs correlates surprisingly well with human evaluations, based on various statistical metrics; while direct application of the BLEU evaluation procedure does not always give good results.
The automated transformation of abstract specifications of numerical algorithms into efficient array processor implementations We present a set of program transformations which are applied automatically to convert abstract functional specifications of numerical algorithms into efficient implementations tailored to the AMT DAP array processor. The transformations are based upon a formal algebra of a functional array form, which provides a functional model of the array operations supported by the DAP programming language. The transformations are shown to be complete. We present specifications and derivations of two example algorithms: an algorithm for computing eigensystems and an algorithm for solving systems of linear equations. For the former, we compare the execution performance of the implementation derived by transformation with the performance of an independent, manually constructed implementation; the efficiency of the derived implementation matches that of the manually constructed implementation.
Integer transform based reversible watermarking incorporating block selection. • We use the invariant mean value of a block to evaluate the local complexity. • The block size can be set to a small value by the invariability of the mean value. • The reduced-size location map is created by the invariability of the mean value. • We can flexibly modify each pixel in a block using DE or HS.
1.2075
0.2075
0.006927
0.000503
0.00041
0.000147
0.000061
0.000028
0.000013
0.000003
0
0
0
0
A New Color Image Encryption Scheme Based on Enhanced Quadratic Chaotic Map In this study, an enhanced quadratic map (EQM) is proposed and has been applied in a new colour image encryption scheme. The performance evaluations show that the EQM has excellent performances such as better Lyapunov exponent and larger chaotic ranges when compared with the classical quadratic map. The sequences generated from this EQM are successfully used in a new proposed colour image encryption scheme with excellent confusion and diffusion properties. The encryption structure is based on the permutation–diffusion process, and then adopted on the classical permutation, it is characterised by a high speed of diffusion, which enables the encryption of the three components of the plaintext image at the same time, and these encrypted components are simultaneously related to each other. The proposed scheme is tested on the USC-SIPI image dataset and on the real-life image dataset; its effectiveness is also compared with five latterly proposed image encryption schemes. The simulation results indicate that the proposed scheme has the properties of large key space, a weaker correlation between neighbouring pixels, higher sensitivity towards key, greater randomness of pixels and the capacity to withstand statistical analysis, plaintext/chosen-plaintext attacks, and differential attacks, thus that it has higher security and can be appropriate for image encryption.
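The enhanced quadratic map (EQM) itself is not specified in the abstract above, so the sketch below only illustrates the general pattern it builds on: iterate a chaotic map from key-dependent initial conditions and use the resulting sequence to drive a permutation (confusion) stage. One common form of the classical quadratic map is used as a placeholder for the EQM, the sort-based permutation is a standard device rather than the paper's design, and the parameters `key_x0` and `key_r` are arbitrary illustrative keys.

```python
# Generic illustration of driving a permutation stage from a chaotic sequence.
# The map x_{n+1} = r - x_n^2 stands in for the paper's enhanced quadratic map.
import numpy as np

def quadratic_map(x0, r, n, burn_in=100):
    xs, x = [], x0
    for i in range(n + burn_in):
        x = r - x * x                    # classical quadratic map iteration
        if i >= burn_in:
            xs.append(x)
    return np.array(xs)

def permute_pixels(flat_image, key_x0=0.3141, key_r=1.95):
    seq = quadratic_map(key_x0, key_r, flat_image.size)
    perm = np.argsort(seq)               # key-dependent permutation of pixel positions
    return flat_image[perm], perm

img = np.arange(16, dtype=np.uint8)       # stand-in for a flattened image channel
scrambled, perm = permute_pixels(img)
restored = np.empty_like(scrambled)
restored[perm] = scrambled                # inverse permutation
print(np.array_equal(restored, img))      # True
```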
Data hiding in encrypted images based on predefined watermark embedding before encryption process. In this paper, we propose a novel approach which allows embedding a message into an encrypted image; a message that can be accessed whether the image is encrypted or not. To do so, the proposed solution relies on the insertion into the image of a predefined watermark, a “pre-watermark”, before the encryption process. Message insertion (resp. extraction) is then commonly conducted into (resp. from) the encrypted image. It is the impact of this data hiding process onto the “pre-watermark” that gives us access to the message into the spatial domain, i.e. after the decryption process. By doing so, the watermark processing process is independent of the knowledge of the encryption key and one only just has to know the watermarking key so as to embed the message and extract it from the encrypted or decrypted image. Reciprocally, encryption/decryption processes are completely independent from message embedding/extraction. We illustrate the feasibility of our approach considering the RC4 stream cipher algorithm and the Least Significant Bit substitution watermarking modulation. Experiments conducted on natural test images and ultrasound medical images demonstrate the general capabilities of our system to securely make available a message in both spatial and encrypted domains while minimizing image distortion. We further discuss the use of different encryption and watermarking algorithms and we illustrate how our system can be used for ensuring image integrity and authenticity control.
A Study on the Collusion Security of LUT-Based Client-Side Watermark Embedding. Recently, Celik et al. proposed a lookup-table (LUT)-based embedding mechanism for spread spectrum watermarks, in which the content distribution server sends an encrypted content to the client, then the client uses his or her personalized decryption LUT to decrypt the received content, meanwhile embeds his or her personalized watermark into the content. They also provided a brief analysis on the security of the LUT-based embedding in terms of the collusion attack on watermarked contents. However, we find that the LUT-based embedding is vulnerable to not only the collusion attack on watermarked contents but the collusion attack on decryption LUTs as well, due to the fact all clients share the same long-term encryption LUT. In this paper, we present a theoretical analysis on the collusion security of the LUT-based embedding mechanism. The analysis shows that the collusion attack on decryption LUTs is more effective than the collusion attack on watermarked contents. Based on our analysis, the content distribution system proposed by Celik et al. can only be used for package sale, which limits its applications. In order to extend the applications, we suggest that the encryption and decryption LUTs of the LUT-based client-side embedding should be set as short-term keys instead of long-term keys. Finally, simulations are carried out to illustrate and validate our theoretical analysis.
Design and implementation of a simple dynamical 4-D chaotic circuit with applications in image encryption. We present a simple yet highly dimensional hybrid diode bridge circuit network that can exhibit complex chaotic behaviours. Further, since our network is characterised by smooth fourth-order exponential nonlinearity, we employ a distinctive approach to assess its different properties: we examine the circuit stability near fixed points. Specifically, we evaluate dynamic complexity using the Lyapunov spectrum analysis, bifurcation analysis and phase space trajectories; additionally, we assess coexisting attractors in the parameter space using numerical and experimental analysis. Furthermore, we report assessments of our network in terms of remerging Feigenbaum trees and metastable chaos. Finally, these properties, and especially the chaotic series of the coexisting attractors, were exploited to implement a chaos-based image encryption protocol using S-box construction and PRNG generation mechanisms. To validate the performance of our protocol, we employed standard security analysis including correlation coefficient, pixel change rate, information entropy, time complexity and key space analysis, whose outcomes were compared against available state-of-the-art methods. Outcomes suggest promising applications for our chaotic circuit network in image encryption.
A novel one-dimensional chaotic map generator and its application in a new index representation-based image encryption scheme The fast growth in digital image transmission technologies requires more secure and effective image encryption schemes to provide essential security. In this paper, we present a novel one-dimensional chaotic map amplifier (1-DCMA). The evaluation of the proposed chaotic system shows that the 1-DCMA improve the chaotic behavior, control parameters’ structure, and sensitivity of the 1-D chaotic maps used as input. We further implement a chaotic map generated by the 1-DCMA in a new asymmetric image encryption scheme (Amp-Lg-IE). Using the secret key, the proposed encryption algorithm adds rows and implement a new index representation (IR) concept with shifting sequences to manipulate the pixels’ positions and values synchronously. Finally, we execute bit-level operations to obtain the ciphered image. The simulation and security analysis prove that the Amp-Lg-IE, in a satisfying time, can encrypt a plain image into an unidentified random-like one with high resistance to different types of threats and attacks.
Reversible data hiding in encrypted images using adaptive block-level prediction-error expansion As directly reserving room from the encrypted image for data embedding is difficult and inefficient, many encryption domain based reversible data hiding schemes have disadvantages such as small embedding rate and low visual quality of the directly decrypted image. In order to solve these problems, this paper first introduces a reversible data hiding method for natural images using the block-level prediction-error expansion. The method can embed secret data into 2 x 2 image blocks by exploiting the pixel redundancy within each block. Extending this concept to the encrypted domain, we then propose a reversible data hiding method in encrypted images using adaptive block-level prediction-error expansion (ABPEE-RDHEI). ABPEE-RDHEI encrypts the original image by block permutation to preserve spatial redundancy for data embedding, and applies a stream cipher to the block permutated image to further enhance the security level. Due to the adaptive pixel selection and iterative embedding processes, the proposed ABPEE-RDHEI can achieve a high embedding rate and pleasing visual quality of the marked decrypted images. Experimental results and analysis show that ABPEE-RDHEI has a better performance than several state-of-the-art methods.
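The primitive underlying block-level schemes of this kind is classical prediction-error expansion. The single-pixel sketch below shows only that primitive: the choice of predictor, the 2x2 block handling, adaptive pixel selection, overflow/underflow control and the encryption layer of the paper are all omitted, and the function names are illustrative.

```python
# Bare-bones prediction-error expansion (PEE) on one pixel: the prediction error is
# doubled and a payload bit is added, which the extractor can undo exactly.

def pee_embed(pixel, predicted, bit):
    error = pixel - predicted
    return predicted + 2 * error + bit          # expanded error carries the bit

def pee_extract(marked_pixel, predicted):
    expanded = marked_pixel - predicted
    bit = expanded & 1
    return predicted + (expanded >> 1), bit     # original pixel, recovered bit

marked = pee_embed(pixel=120, predicted=118, bit=1)   # error 2 -> 2*2 + 1 = 5
print(marked, pee_extract(marked, predicted=118))     # 123 (120, 1)
```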
Reversible hiding in DCT-based compressed images This paper presents a lossless and reversible steganography scheme for hiding secret data in each block of quantized discrete cosine transformation (DCT) coefficients in JPEG images. In this scheme, the two successive zero coefficients of the medium-frequency components in each block are used to hide the secret data. Furthermore, the scheme modifies the quantization table to maintain the quality of the stego-image. Experimental results also confirm that the proposed scheme can provide expected acceptable image quality of stego-images and successfully achieve reversibility.
Recursive reconstruction of high resolution image from noisy undersampled multiframes In several applications it is required to reconstruct a high-resolution noise-free image from multiple frames of undersampled low-resolution noisy images. Using the aliasing relationship between the undersampled frames and the reference image, an algorithm based on weighted recursive least-squares theory is developed in the wavenumber domain. This algorithm is efficient because interpolation and noise removal are performed recursively, and is highly suitable for implementation via the massively parallel computational architectures currently available. Success in the use of the algorithm is demonstrated through various simulated examples.
Transformation of logic programs: Foundations and techniques We present an overview of some techniques which have been proposed for the transformation of logic programs. We consider the so-called “rules + strategies” approach, and we address the following two issues: the correctness of some basic transformation rules w.r.t. a given semantics and the use of strategies for guiding the application of the rules and improving efficiency. We will also show through some examples the use and the power of the transformational approach, and we will briefly illustrate its relationship to other methodologies for program development.
Derivation of Logic Programs
FLUTE: Fast Lookup Table Based Rectilinear Steiner Minimal Tree Algorithm for VLSI Design In this paper, we present a very fast and accurate rectilinear Steiner minimal tree (RSMT) algorithm called fast lookup table estimation (FLUTE). FLUTE is based on a precomputed lookup table to make RSMT construction very fast and very accurate for low-degree nets (the degree of a net is the number of pins in the net). For high-degree nets, a net-breaking technique is proposed to reduce the net size until the table can be used. A scheme is also presented to allow users to control the tradeoff between accuracy and runtime. FLUTE is optimal for low-degree nets (up to degree 9 in our current implementation) and is still very accurate for nets up to degree 100. Therefore, it is particularly suitable for very large scale integration applications in which most nets have a degree of 30 or less. We show experimentally that, over 18 industrial circuits in the ISPD98 benchmark suite, FLUTE with default accuracy is more accurate than the Batched 1-Steiner heuristic and is almost as fast as a very efficient implementation of Prim's rectilinear minimum spanning tree algorithm.
mPL6: a robust multilevel mixed-size placement engine The most recent version of the mPL multilevel placement algorithm, mPL6, is reviewed. This version is derived from the mPL5 placer (ISPD05) and the Patoma floorplanner (ASPDAC05). It is also augmented by new techniques for detailed placement. As a result, it can handle mixed-size placement very effectively. First-choice clustering is used to construct a hierarchy of problem formulations. Generalized force-directed placement guides global placement at each level of the cluster hierarchy. Prior to interpolation of each coarse-level solution to its adjacent finer level, however, recursive, top-down displacement-minimizing floorplanning optimizes block orientations and checks that overlap can be removed at the current level. Where necessary, the floor-planner perturbs coarse-level solutions enough that legalization of the given placement can be assured. The resulting flow is scalable and robust, and it produces very low-wirelength solutions for known benchmark circuits.
Invertible spread-spectrum watermarking for image authentication and multilevel access to precision-critical watermarked images Invertible watermarking has been introduced in the literature for the purpose of image authentication. We present a spread-spectrum invertible watermarking system which can be used to authenticate images in any lossless format, i.e. establish their integrity. A second application of invertible watermarking is multilevel access to watermarked images: depending on her clearance, the image user can "clean" the marks of more or less parts of the image, so as to gain in precision. Both applications make sense for precision-critical images (e.g. military, satellite, medical, quality control, reverse engineering images) whose copyright should still be protected to some extent.
Overhead for independent net approach for Global Routing Global Routing is one of the major Electronic Design Automation steps and it is classified as an NP-hard problem. We verified that 61% of the nets in ISPD 2008's benchmarks are shorter than 128 length units. We propose a method to cluster these nets using an independent net approach to perform global routing in massively parallel systems. We demonstrate that separating nets in clusters according to its length and processing it in parallel can reduce by 67 the processing time for collision detection, if compared with a sequential non-clustered analysis.
1.2
0.2
0.2
0.2
0.2
0.04
0.001961
0
0
0
0
0
0
0
Embedded Fractional Nonlinear Supercapacitor Model And Its Parametric Estimation Method This paper deals with a supercapacitor (SC) onboard model and its online identification procedure for embedded applications. To take into account its nonlinear behavior, the strategy used in this paper consists, as the first step, in the application of the porous electrode theory to the SC and the approximation of the resulting model. A set of fractional linear systems which are represented by a differential equation involving fractional derivatives is then obtained. Each element of this set represents the behavior of the SC only around one operating voltage. A global nonlinear model is then deduced through an integration method. An online identification procedure has been developed for this nonlinear model. This time identification is based on the mean least square method. Its time behavior has been compared with that of an SC cell for a specific current profile with different levels.
Fractional Fuzzy Adaptive Sliding-Mode Control of a 2-DOF Direct-Drive Robot Arm This paper presents a novel parameter adjustment scheme to improve the robustness of fuzzy sliding-mode control achieved by the use of an adaptive neuro-fuzzy inference system (ANFIS) architecture. The proposed scheme utilizes fractional-order integration in the parameter tuning stage. The controller parameters are tuned such that the system under control is driven toward the sliding regime in the traditional sense. After a comparison with the classical integer-order counterpart, it is seen that the control system with the proposed adaptation scheme displays better tracking performance, and a very high degree of robustness and insensitivity to disturbances are observed. The claims are justified through some simulations utilizing the dynamic model of a 2-DOF direct-drive robot arm. Overall, the contribution of this paper is to demonstrate that the response of the system under control is significantly better for the fractional-order integration exploited in the parameter adaptation stage than that for the classical integer-order integration.
Using Neural Network Model Predictive Control for Controlling Shape Memory Alloy-Based Manipulator This paper presents a new setup and investigates neural model predictive and variable structure controllers designed to control the single-degree-of-freedom rotary manipulator actuated by shape memory alloy (SMA). SMAs are a special group of metallic materials and have been widely used in the robotic field because of their particular mechanical and electrical characteristics. SMA-actuated manipulators exhibit severe hysteresis, so the controllers should confront this problem and make the manipulator track the desired angle. In this paper, first, a mathematical model of the SMA-actuated robot manipulator is proposed and simulated. The controllers are then designed. The results set out the high performance of the proposed controllers. Finally, stability analysis for the closed-loop system is derived based on the dissipativity theory.
Robust Control of Four-Rotor Unmanned Aerial Vehicle With Disturbance Uncertainty This paper addresses the stability and tracking control problem of a quadrotor unmanned flying robot vehicle in the presence of modeling error and disturbance uncertainty. The input algorithms are designed for autonomous flight control with the help of an energy function. Adaptation laws are designed to learn and compensate the modeling error and external disturbance uncertainties. Lyapunov theorem shows that the proposed algorithms can guarantee asymptotic stability and tracking of the linear and angular motion of a quadrotor vehicle. Compared with the existing results, the proposed adaptive algorithm does not require an a priori known bound of the modeling errors and disturbance uncertainty. To illustrate the theoretical argument, experimental results on a commercial quadrotor vehicle are presented.
Achievable Performance Region for a Fractional-Order Proportional and Derivative Motion Controller
Shortcut deforestation in calculational form In functional programming, intermediate data structures are often used to "glue" together small programs. Deforestation is a program transformation to remove these intermediate data structures automatically. We present a simple algorithm for deforestation based on two fusion rules for hylomorphism, an expressive recursion pattern. A generic notation for hylomorphisms is introduced, where natural transformations are explicitly factored out, and it is used to represent programs. Our method...
Automated Theory Formation in Mathematics A program called "AM" is described which carries on simple mathematics research: defining and studying new concepts under the guidance of a large body of heuristic rules. The 250 heuristics communicate via an agenda mechanism, a global priority queue of small tasks for the program to perform and reasons why each task is plausible (e.g., "Find generalizations of 'primes', because 'primes' turned out to be so useful a concept"). Each concept is an active, structured knowledge module. One hundred very incomplete modules are initially supplied, each one corresponding to an elementary set-theoretic concept (e.g., union). This provides a definite but immense space which AM begins to explore. In one hour, AM rediscovers hundreds of common concepts (including singleton sets, natural numbers, arithmetic) and theorems (e.g., unique factorization).
Cooperative mobile robotics: antecedents and directions There has been increased research interest in systems composed of multiple autonomous mobile robots exhibiting cooperative behavior. Groups of mobile robots are constructed, with an aim to studying such issues as group architecture, resource conflict, origin of cooperation, learning, and geometric problems. As yet, few applications of cooperative robotics have been reported, and supporting theory is still in its formative stages. In this paper, we give a critical survey of existing works and discuss open problems in this field, emphasizing the various theoretical issues that arise in the study of cooperative robotics. We describe the intellectual heritages that have guided early research, as well as possible additions to the set of existing motivations.
From e-sex to e-commerce: Web search changes The Web has become a worldwide source of information and a mainstream business tool. Are human information needs and searching behaviors evolving along with Web content? As part of a body of research studying this question, we have analyzed three data sets culled from more than one million queries submitted by more than 200,000 users of the Excite Web search engine, collected in September 1997, December 1999, and May 2001. This longitudinal benchmark study shows that public Web searching is evolving in certain directions. Specifically, search topics have shifted from entertainment and sex to commerce and people, but there is little change in query lengths or frequency per user. Search topics have shifted, but there is little change in user search behaviors
Transformations of FP program schemes The perceived inefficiency in the execution of functional programming languages has been an obstacle to their widespread acceptance. Consequently, algorithms are often coded for efficient execution at the expense of clarity. This compromises the functional style, which is the prime advantage of such languages. We argue that high-level program transformations can relieve the programmer from concern for efficiency in many instances. We present several transformations applicable to FP program schemes, and show how these may be proven using fixpoint induction. We also show how specific subalgebras may be exploited to develop more specialised transformations, and suggest that this may be the most fruitful direction for further efforts to take. Comparison with earlier work on transformations reveals that the use of variables in LISP-like languages has often interfered with the identification of superficially dissimilar programs as instances of a common scheme. A variable-free notation such as FP has proven easier to work with.
Annotated production systems: a model for skill acquisition Annotated Production Systems provide a procedural model for skill acquisition by augmenting a production model of the skill with formal commentary describing plans, bugs, and interrelationships among various productions. This commentary supports efficient interpretation, self-debugging and self-improvement. The theory of annotated productions is developed by analyzing the skill of attitude instrument flying. An annotated production interpreter has been written that executes skill models which control a flight simulator. Preliminary evidence indicates that annotated productions effectively model certain bugs and certain learning behaviors characteristic of students.
Analytical placement: A linear or a quadratic objective function? This paper addresses the problem of cell placement which is considered crucial for layout quality. Based on the combined analytical and partitioning strategy successfully applied in the GORDIAN placement tool, we discuss the consequences of using linear or quadratic objective functions. By joining the linear objective with an efficient quadratic programming approach, and by applying a refined iterative partitioning scheme, we obtain placements of excellent quality. The effect of a quadratic and a linear objective function on the chip area after final routing is demonstrated for benchmark circuits and other circuits with up to 21000 cells.
Block-based reversible data embedding This paper proposes a block-based reversible data embedding method applied to grayscale cover images. The image is first divided into two areas: data embedding area and auxiliary information embedding area. The data embedding area is further partitioned into 3*3 blocks. According to two predefined thresholds, all the blocks are divided into three categories: smooth, normal or complex. The complex blocks do not embed any data. A smooth block will embed twice the data of a normal block. After predicting the edge direction in a smooth or normal block, Alattar's difference expansion of a generalized integer transform method is exploited to embed data into each vector in the block. Then, the location map and other auxiliary information are embedded in the auxiliary information embedding area using a novel least significant bit (LSB) substitution method. Our method not only improves the embedded image quality but also provides a higher payload capacity compared to Alattar's method. For some smoother images, the embedding capacity of the proposed method is higher than 1 bit/pixel in a single pass and also derives acceptable embedded image quality (PSNR=30). To the best of our knowledge, the proposed method is the first to do this.
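The classification step described above (two thresholds separating smooth, normal and complex blocks) is easy to illustrate. In the sketch below the variation measure, the threshold values `t1` and `t2`, and the function name are illustrative assumptions; the paper's actual edge-direction prediction and embedding via difference expansion are not reproduced.

```python
# Sketch of two-threshold block classification for reversible embedding: a block
# whose local variation is below t1 is "smooth" (carries the most payload), below
# t2 is "normal", and anything else is "complex" and skipped during embedding.
import numpy as np

def classify_block(block, t1=8, t2=24):
    variation = int(block.max()) - int(block.min())   # simple spread-based measure
    if variation < t1:
        return "smooth"      # embeds twice the data of a normal block
    if variation < t2:
        return "normal"
    return "complex"         # no data embedded here

block = np.array([[100, 102, 101],
                  [ 99, 100, 103],
                  [101, 100, 102]], dtype=np.uint8)
print(classify_block(block))   # smooth
```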
A novel algorithm for colour image steganography using a new intelligent technique based on three phases Steganography architecture with seven security layers. New steganography algorithm. Proposed new intelligent technique. Proposed seven layers of security. Extract byte characteristics. Construct image segmentation. A three-phase intelligent technique has been constructed to improve the data-hiding algorithm in colour images with imperceptibility. The first phase of the learning system (LS) has been applied in advance, whereas the other phases have been applied after the hiding process. The first phase has been constructed to estimate the number of bits to be hidden at each pixel (NBH); this phase is based on adaptive neural networks with an adaptive genetic algorithm using upwind adaptive relaxation (LSANN_AGAUpAR1). The LS of the second phase (LSANN_AGAUpAR2) has been introduced as a detector to check the performance of the proposed steganographic algorithm by creating a rich images model from available cover and stego images. The LS of the last phase (LSCANN_AGAUpAR3) has been implemented through three steps, and it is based on a concurrent approach to improve the stego image and defend against attacks. The adaptive image filtering and adaptive image segmentation algorithms have been introduced to randomly hide a compressed and encrypted secret message into a cover image. The NBH for each pixel has been estimated cautiously using 32 principle situations (PS) with their 6 branch situations (BS). These situations have been worked through seven layers of security to augment protection from attacks. In this paper, hiding algorithms have been produced to fight three types of attacks: visual, structural, and statistical attacks. The simulation results have been discussed and compared with new literature using data hiding algorithms for colour images. The results of the proposed algorithm can efficiently embed a large quantity of data, up to 12 bpp (bits per pixel), with better image quality.
1.208
0.208
0.208
0.208
0.208
0
0
0
0
0
0
0
0
0
Compile-time garbage collection by sharing analysis This paper describes an analysis technique to reduce the cost in processing time of the storage management operations implied by a program (possibly to zero). The goal of the compile-time optimizations is to replace as much as possible the allocation of new cells by the reuse of previously deallocated cells (in place updating). The technique is based on the detection of sharing information. We present several abstract interpretations on a domain of list patterns allowing to detect sharing information and necessity information; then we show how the infinite domain of patterns can be mapped onto a finite domain in order to achieve the analysis at compile-time. These pieces of information are then used to improve the reference counting semantics associated with particular programs. We illustrate the method developed in this paper with the naïve reverse example and we show that all the cell allocations involved in this program can be implemented as in place updating.
Applicative caching The "referential transparency" principle of applicative language expressions stipulates that a single value exists for all occurrences of an expression in a given context (where a context is a set of bindings of variables to values). In principle, each such value therefore needs to be computed only once. However, in applicative language systems supporting recursive programming or tasking notions, the bindings are not all precomputed and explicit. As a result, textual recognition of all multiple occurrences is precluded, with the unfortunate consequence that such occurrences are recomputed. We elaborate upon the early notion of "memo function" for solving this problem. We suggest syntactic and semantic constructs providing programmer control for avoiding recomputation, which is incorporated into a "building-block" approach.
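A memo function in today's terms is straightforward to sketch. The snippet below uses Python's functools.lru_cache purely as a convenient stand-in for the programmer-controlled caching constructs the abstract argues for; the call counter is only there to make the effect visible.

```python
# Memo-function sketch: the first evaluation of fib(n) in a given context is cached,
# and every later occurrence reuses the stored value instead of recomputing it.
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    global calls
    calls += 1               # counts actual evaluations (cache misses only)
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30), calls)        # 832040 31 -- versus roughly 2.7 million calls uncached
```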
Optimization of functional programs by grammar thinning We describe a new technique for optimizing first-order functional programs. Programs are represented as graph grammars, and optimization proceeds by counterexample: when a graph generated by the grammar is found to contain an unnecessary computation, the optimizer attempts to reformulate the grammar so that it never again generates any graph that contains that counterexample. This kind of program reformulation corresponds to an interesting problem on context-free grammars. Our reformulation technique is derived from an (approximate) solution to this CFG problem. An optimizer called Thinner is the proof of concept for this technique. Thinner is a fully automatic, source-to-source optimizer for a Lisp-like language of purely functional, first-order programs. Thinner rediscovers a wide variety of common compiler optimizations. It also finds other more exotic transformations, including the well-known Fibonacci reformulation and the Knuth-Morris-Pratt optimization.
Discovering auxiliary information for incremental computation
Transformational programming: applications to algorithms and systems Ten years ago Cheatham and Wegbreit [4] proposed a transformational program development methodology based on notions of top-down stepwise program refinement first expressed by Dijkstra [10] and Wirth [45]. A schema describing the process of this methodology is given in fig. 1. To develop a program by transformation, we first specify the program in as high a level of abstraction and as great a degree of clarity as our programming language admits. This high level problem statement program P is proved correct semimechanically according to some standard approach (see Floyd and Hoare [15, 21]). Next, using an interactive system equipped with a library of encoded transformations, each of which maps a correct program into another equivalent program, we select and apply transformations one at a time to successive versions of the program until we obtain a concrete, low level, efficient implementation version P'. The goals of transformational programming are to reduce programming labor, improve program reliability, and upgrade program performance. In order for labor to be reduced, the effort required to obtain P, prove it correct, and derive P' by transformation should be less than the effort required to code P from scratch, and also to debug it. Program reliability will be improved if P can be certified correct, and if each transformation preserves program meaning. Finally, program performance will be upgraded if transformations are directed towards increased efficiency. Experimental transformational systems that emphasize one or more aspects of the methodology outlined above have been implemented by Cheatham [5], Darlington [3], Loveman [27], Standish [41], Feather [14], Huet and Lang [11], and others. However, all of these systems fall short of the goals, because of a number of reasons that include: (1) inability to mechanize the checking of transformation applicability conditions; (2) reliance on large, unmanageable collections of low level transformations, and long arduous derivation sequences; (3) dependency on transformations whose potential for improving program performance is unpredictable; (4) use of source languages insufficiently high level to accommodate perspicuous initial program specifications and powerful algorithmic transformations. Yet, convincing evidence that this new methodology will succeed has come from recent advances in verification, program transformations, syntax directed editing systems, and high level languages. These advances, discussed below, represent partial solutions to the problems stated above, and could eventually be integrated into a single system. (1) The transformational approach to verification was pioneered by Gerhart [19] and strengthened by the results of Schwartz [39], Scherlis [36], Broy et al [2], Koenig and Paige [26, 31], Blaustein [1], and others. Due mainly to improved technology for the mechanization of proofs of enabling conditions that justify application of transformations, this approach is now at a point where it can be effectively used in a system. Such mechanization depends strongly on program analysis, and, in particular, on reanalyses after a program is modified. Attribute grammars [24] have been shown to be especially useful in facilitating program analysis [23]. Moreover, Reps [34] has discovered an algorithm that reevaluates attributes in optimal time after a program undergoes syntax directed editing changes (as are allowed on the Cornell Synthesizer [43]).
He has implemented his algorithm recently, andhas reported initial success2 There are encouraging indications that a transformationalsystem can be made to depend mainly on a small but powerfulcollection of transformations applied top-down fashion to programsspecified at various levels of abstraction from logic down toassembler. We envision such a system as a fairly conventionalsemiautomatic compiler which classes of transformations areselected semimechanically in a predetermined order, and arejustified by predicates supplied mechanically but provedsemimanually. Of particular importance is nondeterminism removalwhich has formulated by Sharir [40] could lead to a technique forturning naive, nondeterministic programs into deterministicprograms with emergent strategies. Such programs could then betransformed automatically by finite differencing [13, 16, 17, 18,29, 30, 31] and jamming [28, 31, 20] (which we have implemented)into programs whose data access paths are fully determined. TheSETL optimizer could improve these programs further byautomatically choosing efficient data structure representations andaggregations3 Of fundamental importance to the transformations justmentioned is the fact that they can be associated with speeduppredictions Fong and Ullman [16] were the first to characterize animportant class of algorithmic differencing transformations interms of accurate asymptotic speedup predictions, eg, they gaveconditions under which repeated calculation of a set former {x ins|k(x)} could be computed on O(#s) + cost(k) steps. By consideringstronger conditions and special cases for the boolean valuedsubpart k, Paige [31] later gave sharper speedup predictions (eg,either O(1) steps for each encounter of the set former or acumulative cost of O(#s) steps for every encounter) associated withanother differencing method. Both Morgenstern [28] and Paige [31]prove constant factor improvements due to their jammingtransformations (implemented by Morgenstern for the improvement offile processing, and by Paige for the optimization of programs).Constant factor speedup has also been observed for data structureselection by the method of basings but a supporting analytic studyhas not been presented [8, 37]4 Essential to the whole transformational process is a widespectrum programming language (or set of languages) that canexpress a program at every stage of development from the initialabstract specification down to its concrete implementationrealization. Since transformations applied to programs written atthe highest levels of abstraction are likely to make the mostfundamental algorithmic changes, it is important to stress abstractfeatures in our language. In addition to supportingtransformations, the highest level language dictions should supportlucid initial specifications, verification, and even programanalysts. Of special importance is SETL [38, 9], because itsabstract set theoretic dictions can model data structures andalgorithms easily, because its philosophy of avoiding hidden asymptotic costs facilitates program analysis, because its semanticsconforms to finite set theory and can accommodate a set theoreticprogram logic, and because it is wide spectrum. As is evidenced bythe work of Schwartz, Fong, Paige, and Sharir, SETL is also a richmedium for transformation.
A higher-order removal method
KIDS: a semiautomatic program development system The Kestrel Interactive Development System (KIDS), which provides automated support for the development of correct and efficient programs from formal specifications, is described. The system has components for performing algorithm design, deductive inference, program simplification, partial evaluation, finite differencing optimizations, data type refinement, compilation, and other development operations. Although their application is interactive, all of the KIDS operations are automatic except the algorithm design tactics, which require some interaction at present. Dozens of programs have been derived using the system, and it is believed that KIDS could be developed to the point where it becomes economical to use for routine programming. To illustrate the use of KIDS, the author traces the derivation of an algorithm for enumerating solutions to the k-queens problem. The initial algorithm that KIDS designed takes about 60 minutes on a SUN-4/110 to find all 92 solutions to the 8-queens problem instance. The final optimized version finds the same solutions in under one second.
Proving the correctness of recursion-based automatic program transformations This paper shows how the Improvement Theorem-a semantic condition for the total correctness of program transformation on higher-order functional programs-has practical value in proving the correctness of automatic techniques, including deforestation and supercompilation. This is aided by a novel formulation (and generalisation) of deforestation-like transformations, which also greatly adds to the modularity of the proof with respect to extensions to both the language and the transformation rules.
Program developments: formal explanations of implementations Automated program transformation systems are emerging as the basis for a new programming methodology in which high-level, understandable specifications are transformed into efficient programs. Subsequent modification of the original specification will be dealt with by reimplementation of the specification. For such a system to be practical, these reimplementations must occur relatively quickly and reliably in comparison with the original implementation. We believe that reimplementation requires that a formal document--the program development--be constructed during the development process explaining the resulting implementation to future
Transforming constraint logic programs We study à la Tamaki-Sato transformations of constraint logic programs. We give an operational and fixpoint semantics of our constraint logic programs, show that the operational semantics is sound and complete with respect to the fixpoint semantics; we then extend the Tamaki-Sato transformation system into a fold-unfold transformation system which can take care of constraints and we give a direct proof of its correctness which is simpler than the Tamaki-Sato proof.
Rate-constrained coder control and comparison of video coding standards A unified approach to the coder control of video coding standards such as MPEG-2, H.263, MPEG-4, and the draft video coding standard H.264/AVC (advanced video coding) is presented. The performance of the various standards is compared by means of PSNR and subjective testing results. The results indicate that H.264/AVC compliant encoders typically achieve essentially the same reproduction quality as encoders that are compliant with the previous standards while typically requiring 60% or less of the bit rate.
Multipurpose watermarking for image authentication and protection We propose a novel multipurpose watermarking scheme, in which robust and fragile watermarks are simultaneously embedded, for copyright protection and content authentication. By quantizing a host image's wavelet coefficients as masking threshold units (MTUs), two complementary watermarks are embedded using cocktail watermarking and they can be blindly extracted without access to the host image. For the purpose of image protection, the new scheme guarantees that, no matter what kind of attack is encountered, at least one watermark can survive well. On the other hand, for the purpose of image authentication, our approach can locate the part of the image that has been tampered with and tolerate some incidental processes that have been executed. Experimental results show that the performance of our multipurpose watermarking scheme is indeed superb in terms of robustness and fragility.
A DFM aware, space based router The days when a router simply connected up the nets, obeying simple width and spacing rules, are long gone. While modern routers are still expected to connect things up correctly (albeit with a much more complex set of design rules), they are also expected to balance many more objectives. Even in a digital design, routers are now expected to preserve or enhance timing, get rid of crosstalk violations, even out metal density, insert redundant vias, minimize critical area, and generate 'litho friendly' geometry where possible, just to name a few additional tasks. Analog and mixed signal design adds many more, and more complex, constraints. Managing this large set of often conflicting objectives requires a router built for this express purpose. This talk will discuss Cadence's 'Space Based Router', a new router designed with DFM and DFY in mind.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of the embedded cover image to ensure security of the secret data. The multilayer embedding of pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting, while the peak point itself remains unchanged. At the extraction end, we no longer need key information about the peak point; we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of the embedded cover image almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
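To make the mechanism above concrete, here is a minimal single-layer sketch of the classical peak/zero-bin histogram shifting that this family of methods builds on. The function names, the zero-bin search, and the raster-scan carrier order are illustrative assumptions, not the authors' exact algorithm, and overflow/underflow handling is omitted.

```python
import numpy as np

def hs_embed(img, bits):
    """Single-layer histogram shifting on an 8-bit grayscale image (sketch)."""
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())                        # most frequent gray level
    zero = peak + 1 + int(hist[peak + 1:].argmin())  # an empty bin to the right (assumed to exist)
    assert hist[zero] == 0 and len(bits) <= int(hist[peak])
    out = img.copy()
    out[(img > peak) & (img < zero)] += 1            # shift to free the bin peak+1
    flat = out.ravel()
    carriers = np.flatnonzero(img.ravel() == peak)   # capacity = count of peak-valued pixels
    for pos, b in zip(carriers, bits):
        if b:
            flat[pos] = peak + 1                     # '1' -> peak+1, '0' -> peak (unchanged)
    return out, peak, zero

def hs_extract(stego, peak, zero, n_bits):
    flat = stego.ravel()
    positions = np.flatnonzero((flat == peak) | (flat == peak + 1))
    bits = [int(flat[pos] == peak + 1) for pos in positions[:n_bits]]
    restored = stego.copy()
    restored[(stego > peak) & (stego <= zero)] -= 1  # undo the shift; embedded 1s fall back to peak
    return bits, restored
```

hs_extract returns both the extracted bits and the restored image, which is what makes the scheme reversible.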
1.072401
0.047142
0.035842
0.024774
0.013872
0.00365
0.000773
0.000181
0.000037
0
0
0
0
0
Synthesis by Completion The Knuth-Bendix completion procedure was introduced as a means of deriving canonical term-rewriting systems to serve as decision procedures for given equational theories. The procedure generates new rewrite rules to resolve ambiguities resulting from existing rules that overlap. We propose using this procedure to synthesize logic programs, as well as functional programs, from specifications and domain knowledge expressed as equivalence-preserving rewrite rules. An implementation is underway.
Communicating agents for applicative concurrent programming A good program methodology should allow easy proofs of program correctness and should also incorporate methods for improving program efficiency. We try to achieve both aims by proposing, in the framework of the applicative programming style, a language by which one can specify a system of computing agents, which communicate by sending and receiving messages.
Improvement of Recursive Programs from a Logic Programming Point of View
Inductive synthesis of equational programs An equational approach to the synthesis of functional and logic programs is taken. Typically, a target program contains equations that are only true in the standard model of the given domain rules. To synthesize such programs, induction is necessary. We propose heuristics for generalizing from a sequence of deductive consequences. These are combined with rewrite-based methods of inductive proof to derive provably correct programs.
Derivation of Graph and Pointer Algorithms We introduce operators and laws of an algebra of formal languages, a subalgebra of which corresponds to the algebra of (multiary) relations. This algebra is then used in the formal specification and derivation of some graph and pointer algorithms. This study is part of an attempt to single out a framework for program development at a very high level of discourse, close to informal reasoning but still with full formal precision.
A rewrite rule based approach for synthesizing abstract data types An approach for synthesizing data type implementations based on the theory of term rewriting systems is presented. A specification is assumed to be given as a system of equations; an implementation is derived from the specification as another system of equations. The proof based approach used for the synthesis consists of reversing the process of proving theorems (i.e. searching for appropriate theorems rather than proving the given ones). New tools and concepts to embody this reverse process are developed. In particular, the concept of expansion, which is a reverse of rewriting (or reduction), is defined and analyzed. The proposed system consists of a collection of inference rules (instantiation, simplification, expansion, and hypothesis testing) and two strategies for searching for theorems, depending upon whether the theorem being looked for is in the equational theory or in the inductive theory of the specification.
Preservation of stronger equivalence in unfold/fold logic program transformation This paper shows that Tamaki-Sato's unfold/fold transformation of Prolog programs preserves equivalence in a stronger sense than that of the usual least Herbrand model semantics, which Tamaki and Sato originally showed. Conventionally, the semantics of a Prolog program is defined by the least Herbrand model. However, the least Herbrand model does not always characterize what answer substitutions are returned. This paper proves that any program obtained from an initial program by applying Tamaki-Sato's transformation returns the same answer substitutions as the initial program for any given top-level goal.
On Using Mode Input-output for Transforming Logic Programs
A System for Proving Equivalences of Recursive Programs We present a system for proving equivalences of recursive programs based on program transformations, namely the fold/unfold method and a generalisation of this method.
Synthesis and transformation of logic programs using unfold/fold proofs We present a method for proving properties of definite logic programs. This method is called unfold/fold proof method because it is based on the unfold/fold transformation rules. Given a program P and two goals (that is, conjunctions of atoms) F(X,Y) and G(X,Z), where X, Y, and Z are pairwise disjoint vectors of variables, the unfold/fold proof method can be used to show that the equivalence formula ∀X(∃YF(X,Y)↔∃ZG(X,Z)) holds in the least Herbrand model of P. Equivalence formulas of that form can be used to justify goal replacement steps, which allow us to transform logic programs by replacing old goals, such as F(X,Y), by equivalent new goals, such as G(X,Z). These goal replacements preserve the least Herbrand model semantics if we find non-ascending unfold/fold proofs of the corresponding equivalence formulas, that is, unfold/fold proofs which ensure suitable well-founded orderings between the successful SLD-derivations of F(X,Y) and G(X,Z), respectively. We also present a method for program synthesis from implicit definitions. It can be used to derive a definite logic program for the predicate newp implicitly defined by an equivalence formula of the form ∀X(∃YF(X,Y)↔∃Z(H(X,Z),newp(X,Z))), such that the predicates occurring in the goals F(X,Y) and H(X,Z) are defined in a given program P, and newp is a predicate symbol not occurring in P. The set of clauses defining newp, say Eureka, allows us to prove that the above equivalence formula holds in the least Herbrand model of P∪Eureka using an unfold/fold proof. Thus, the correctness of our synthesis method derives from the one of the unfold/fold proof method. We finally illustrate our synthesis method through some examples of program specialization, program synthesis, and program transformation, which can all be viewed as program syntheses from implicit definitions.
Maintaining Sensing Coverage and Connectivity in Large Sensor Networks In this paper, we address the issues of maintaining sensing coverage and connectivity by keeping a minimal number of sensor nodes in the active mode in wireless sensor networks. We investigate the relationship between coverage and connectivity by solving the following two sub-problems. First, we prove that if the radio range is at least twice the sensing range, a complete coverage of a convex area implies connectivity among the working set of nodes. With such a proof, we can then focus only on the coverage problem. Second, we derive, under the ideal case in which node density is sufficiently high, a set of optimality conditions under which a subset of working sensor nodes can be chosen for full coverage. Based on the optimality conditions, we then devise a decentralized and localized density control algorithm, Optimal Geographical Density Control (OGDC), for density control in large scale sensor networks. Ns-2 simulations show that OGDC outperforms the PEAS algorithm (32), the hexagon-based GAF-like algorithm, and the sponsor area algorithm (28) with respect to the number of working nodes needed (sometimes at a 50% improvement), and achieves almost the same coverage as the algorithm with the best result.
Unfolding in CHR Program transformation is an appealing technique which makes it possible to improve run-time efficiency and space consumption and, more generally, to optimize a given program. Essentially it consists of a sequence of syntactic program manipulations which preserves some kind of semantic equivalence. One of the basic operations used by most program transformation systems is unfolding, which consists in the replacement of a procedure call by its definition. While there is a large body of literature on transformation and unfolding of sequential programs, very few papers have addressed this issue for concurrent languages and, to the best of our knowledge, no one has considered unfolding of CHR programs. This paper is a first attempt to define a correct unfolding system for CHR programs. We define an unfolding rule, show its correctness and discuss some conditions which can be used to delete an unfolded rule while preserving the program meaning.
RF interconnects for communications on-chip In this paper, we propose a new way of implementing on-chip global interconnect that would meet stringent challenges of core-to-core communications in latency, data rate, and re-configurability for future chip-microprocessors (CMP) with efficient area and energy overheads. We discuss the limitation of traditional RC-limited interconnects and possible benefits of multi-band RF-interconnect (RF-I) through on-chip differential transmission lines. The physical implementation of RF-I and its projected performance versus overhead as the function of CMOS technology scaling are discussed as well
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of the embedded cover image to ensure security of the secret data. The multilayer embedding of pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting, while the peak point itself remains unchanged. At the extraction end, we no longer need key information about the peak point; we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of the embedded cover image almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
1.017264
0.024087
0.017257
0.014656
0.009589
0.005471
0.001949
0.000156
0.000066
0.00001
0
0
0
0
What is the future for watermarking? (part I)
Progressive protection of JPEG2000 codestreams This paper presents an authentication scheme and an access control scheme for JPEG2000 image codestreams based on hash chains. Both schemes are fully compatible with the core part of the JPEG2000 standard. The authentication scheme allows users to verify the authenticity of sub-images progressively extracted from a single codestream protected with a single signature, and the access control scheme allows users to access sub-images partially decrypted from a single encrypted JPEG2000 codestream. In addition, the two schemes can be integrated to provide both authenticity and access control simultaneously. Our experiments demonstrate that the two schemes are very efficient and highly scalable.
Encryption of Composite Multimedia Contents for Access Control An encryption scheme is proposed that considers hierarchies in media, such as text, images, sound, and so on, in a composite multimedia content to enable versatile access control. In the proposed scheme, a content provider has only one managed key (the master key) for a particular composite multimedia content, and a user who is permitted to access reserved content entities in the composite content receives only one key that is subordinately generated from the master key. Another key generated from the identical master key is delivered to another user, and this permits that user to access different entities. This scheme introduces a new key concept, namely the "unusable key," to keep all entities encrypted in a particular medium and to simultaneously decrypt several entities in other media. The other new key, the "numbering key," is also used in this scheme to support simultaneous partial decryption of multiple images that are coded with a scalable coding technology. Simulation results show the effectiveness of the proposed scheme; in particular, the length of the managed master key and that of keys to be delivered to users are small.
Ongoing Innovation in Digital Watermarking Watermarking-based applications can help content owners balance business requirements with consumer choice.
Multibit versus multilevel embedding in high capacity difference expansion reversible watermarking The classical difference expansion (DE) reversible watermarking expands two times a difference in order to embed one bit of data. The upper limit of such DE schemes in a single embedding level is usually bounded by 1 bpp. In order to obtain more than 1 bpp, several embedding stages are chained. The multibit DE schemes expand n times the difference in order to embed up to log2 n bpp. This paper proposes a multibit extension of the difference expansion reversible watermarking and compares the results with the multilevel embedding case. The multibit reversible watermarking outperforms the multilevel one both in capacity and quality. Experimental results on standard test images are provided.
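A hedged sketch of the generalized (multibit) difference expansion idea described above, applied to a single pixel pair: the difference is expanded n times so that one base-n digit, i.e. up to log2(n) bits, is carried per pair. Overflow/underflow checks and the location map are omitted, and the function names are illustrative rather than the paper's exact construction.

```python
def de_embed_pair(x, y, digit, n=2):
    """Expand the pair's difference n times and add a digit in [0, n-1]."""
    assert 0 <= digit < n
    l = (x + y) // 2            # integer average, preserved by the embedding
    d = x - y
    d2 = n * d + digit          # expanded difference carrying the digit
    return l + (d2 + 1) // 2, l - d2 // 2

def de_extract_pair(x2, y2, n=2):
    """Recover the digit and the original pair (overflow handling ignored)."""
    l = (x2 + y2) // 2
    d2 = x2 - y2
    digit = d2 % n
    d = (d2 - digit) // n
    return (l + (d + 1) // 2, l - d // 2), digit

# n = 4 carries log2(4) = 2 bits per pair:
x2, y2 = de_embed_pair(100, 98, digit=3, n=4)
assert de_extract_pair(x2, y2, n=4) == ((100, 98), 3)
```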
Lossless data embedding--new paradigm in digital watermarking One common drawback of virtually all current data embedding methods is the fact that the original image is inevitably distorted due to data embedding itself. This distortion typically cannot be removed completely due to quantization, bit-replacement, or truncation at the grayscales 0 and 255. Although the distortion is often quite small and perceptual models are used to minimize its visibility, the distortion may not be acceptable for medical imagery (for legal reasons) or for military images inspected under nonstandard viewing conditions (after enhancement or extreme zoom). In this paper, we introduce a new paradigm for data embedding in images (lossless data embedding) that has the property that the distortion due to embedding can be completely removed from the watermarked image after the embedded data has been extracted. We present lossless embedding methods for the uncompressed formats (BMP, TIFF) and for the JPEG format. We also show how the concept of lossless data embedding can be used as a powerful tool to achieve a variety of nontrivial tasks, including lossless authentication using fragile watermarks, steganalysis of LSB embedding, and distortion-free robust watermarking.
Reversible image watermarking based on full context prediction This paper proposes a reversible image watermarking scheme of low distortion and relatively large capacity, wherein prediction-errors are modified at most by 1 to embed secret bits. Different from most existing predictors, where only a partial prediction context is available, we provide a full context for every pixel in our watermarking scheme. Predictors operating on full contexts are preciser and thus produce smaller prediction-errors, which are more favorable for data embedding. Experimental results also validate that the proposed scheme can achieve high image fidelity while providing relatively large capacity.
Reversible Image Data Hiding Based On Gradient Adjusted Prediction The present study illustrates a new lossless data hiding method for digital images using an image prediction technique. In the proposed method, which is based on gradient-adjusted prediction (GAP), prediction errors are first computed and then slightly modified by shifting; the modified errors are then used to embed the data. Experimental results demonstrate that the proposed method, called shifted gradient-adjusted prediction error (SGAPE), is capable of hiding more secret data while maintaining a high PSNR.
A Novel Reversible Data Hiding Scheme Based on Two-Dimensional Difference-Histogram Modification In this paper, based on two-dimensional difference-histogram modification, a novel reversible data hiding (RDH) scheme is proposed by using difference-pair-mapping (DPM). First, by considering each pixel-pair and its context, a sequence consisting of pairs of difference values is computed. Then, a two-dimensional difference-histogram is generated by counting the frequency of the resulting difference-pairs. Finally, reversible data embedding is implemented according to a specifically designed DPM. Here, the DPM is an injective mapping defined on difference-pairs. It is a natural extension of expansion embedding and shifting techniques used in current histogram-based RDH methods. By the proposed approach, compared with the conventional one-dimensional difference-histogram and one-dimensional prediction-error-histogram-based RDH methods, the image redundancy can be better exploited and an improved embedding performance is achieved. Moreover, a pixel-pair-selection strategy is also adopted to preferentially use the pixel-pairs located in smooth image regions to embed data. This can further enhance the embedding performance. Experimental results demonstrate that the proposed scheme outperforms some state-of-the-art RDH works.
Maximum likelihood estimation of length of secret message embedded using ±k steganography in spatial domain In this paper, we propose a new method for estimating the number of embedding changes for non-adaptive +/- K embedding in images. The method uses a high-pass FIR filter and then recovers an approximate message length using a Maximum Likelihood Estimator on those stego image segments where the filtered samples can be modeled using a stationary Generalized Gaussian random process. It is shown that for images with a low noise level, such as decompressed JPEG images, this method can accurately estimate the number of embedding changes even for K = 1 and for embedding rates as low as 0.2 bits per pixel. Although for raw, never compressed images the message length estimate is less accurate, when used as a scalar parameter for a classifier detecting the presence of +/- K steganography, the proposed method gave us relatively reliable results for embedding rates as low as 0.5 bits per pixel.
Capacity maximization for reversible data hiding based on dynamic programming approach Recently, an efficient reversible lossless data hiding algorithm by Ni et al. was presented. Their fast algorithm can recover the original image without any distortion and its PSNR lower bound is higher than that of all existing reversible data hiding algorithms. Instead of selecting the peak-valley pairs in a greedy way, this paper presents a dynamic programming-based reversible data hiding algorithm to determine the most suitable peak-valley pairs such that the embedding capacity objective can be maximized. Based on some artificial map images, experimental results demonstrate that our proposed algorithm achieves a 9% embedding capacity improvement ratio and has similar image quality performance when compared to Ni et al.'s algorithm, although it has some execution-time degradation. For natural images, the embedding capacity of Ni et al.'s algorithm is very close to the maximal embedding capacity obtained by our proposed algorithm. Furthermore, the comparison between our proposed dynamic programming-based algorithm and the reversible data hiding algorithm by Chang et al. is investigated.
An Experimental Analysis of a Compact Graph Representation In previous work we described a method for compactly representing graphs with small separators, which makes use of small separators, and presented preliminary experimental results. In this paper we extend the experimental results in several ways, including extensions for dynamic insertion and deletion of edges, a comparison of a variety of coding schemes, and an implementation of two applications using the representation. The results show that the representation is quite effective for a wide variety of real-world graphs, including graphs from finite-element meshes, circuits, street maps, router connectivity, and web links. In addition to significantly reducing the memory requirements, our implementation of the representation is faster than standard representations for queries. The byte codes we introduce lead to DFT times that are a factor of 2.5 faster than our previous results with gamma codes and a factor of between 1 and 9 faster than adjacency lists, while using a factor of between 3 and 6 less space.
A New Multilevel Framework for Large-Scale Interconnect-Driven Floorplanning We present in this paper a new interconnect-driven multilevel floorplanner, called interconnect-driven multilevel-floorplanning framework (IMF), to handle large-scale building-module designs. Unlike the traditional multilevel framework that adopts the "Lambda-shaped" framework (inaccurately called the "V-cycle" framework in the literature): bottom-up coarsening followed by top-down uncoarsening, the IMF, in contrast, works in the "V-shaped" manner: top-down uncoarsening (partitioning) followed by bottom-up coarsening (merging). The top-down partitioning stage iteratively partitions the floorplan region based on min-cut bipartitioning with exact net-weight modeling to reduce the number of global interconnections and, thus, the total wirelength. Then, the bottom-up merging stage iteratively applies fixed-outline floorplanning using simulated annealing for all regions and merges two neighboring regions recursively. Experimental results show that the IMF obtains the best published fixed-outline floorplanning results with the smallest average wirelength for the Microelectronics Center of North Carolina/Gigascale Systems Research Center benchmarks. In particular, IMF scales very well as the circuit size increases. The V-shaped multilevel framework outperforms the Lambda-shaped one in the optimization of global circuit effects, such as interconnection and crosstalk optimization, since the V-shaped framework considers the global configuration first and then processes down to local ones level by level, and thus, the global effects can be handled at earlier stages. The V-shaped multilevel framework is general and, thus, can be readily applied to other problems.
Hiding in encrypted images: a three tier security data hiding technique This paper presents a new crypto domain data hiding technique based on Intermediate Significant Bit Plane Embedding (ISBPE). The cover image is encrypted; the information to be secured is scrambled, and then embedded in the Intermediate Significant Bit (ISB) planes of encrypted cover image, at the locations determined by a Pseudorandom Address Vector (PAV). The pseudorandom embedding of the scrambled data in the ISB planes of encrypted image results in a three tier security of the data to be secured. The ISBPE embedding results in an important advantage that the proposed scheme becomes completely robust to commonly employed attack of Least Significant Bit (LSB) removal/replacement. A novel concept of embedding a very small size fragile watermark in addition to the secret information has been used which facilitates early tamper detection. This feature could save crucial processor time in critical situations of national security issues/warfare etc. Experimental results show that the proposed scheme is more robust to various signal processing attacks like Joint Picture Expert Group compression, Additive White Gaussian Noise and `salt and pepper' noise as compared to conventional LSB based embedding techniques. Comparison results with some well-known techniques show that besides providing high degree of security and robustness to various malicious attacks the proposed technique is capable of embedding a fairly large amount of secret data in the host image while maintaining a good stego-image quality.
1.089735
0.124209
0.124209
0.124209
0.008647
0.001115
0.000138
0.000043
0.000015
0.000004
0.000001
0
0
0
Lossless Compression for Space Imagery in a Dynamically Reconfigurable Architecture This paper presents a novel dynamically reconfigurable hardware architecture for lossless compression and its optimization for space imagery. The proposed system makes use of reconfiguration to support optimal modeling strategies adaptively for data with different dimensions. The advantage of the proposed system is the efficient combination of different compression functions. For image data, we propose a new multi-mode image model which can detect the local features of the image and use different modes to encode regions with different features. Experimental results show that our system improves compression ratios of space image while maintaining low complexity and high throughput.
Lossless Compression for RGB Color Still Images
Prediction based on backward adaptive recognition of local texture orientation and Poisson statistical model for lossless/near-lossless image compression. This paper is devoted to a prediction-based lossless/near-lossless image compression algorithm. Within this framework, there are three modules: the prediction model, the statistical model, and entropy coding. This paper focuses on the former two, and puts forward two new methods: a prediction model based on backward adaptive recognition of local texture orientation (BAROLTO), and a Poisson statistical model. As far as we know, BAROLTO is the best predictor in terms of efficiency. The Poisson model is designed to avoid context dilution to some extent and make use of a large neighborhood; therefore, we can capture more local correlation. Experiments show that our compression system (BP), based on BAROLTO prediction and the Poisson model, outperforms the products of IBM and HP significantly.
Lossless and near-lossless compression of still color images This paper proposes a unified coding algorithm for lossless and near-lossless color image compression that exploits the correlations between RGB signals. For lossless coding, a reversible color transform is proposed that removes the correlations between RGB signals while avoiding any finite word length limitation. Next, the lossless algorithm is extended to a unified coding algorithm of lossless and near-lossless compression that can control the distortion level in the magnitude on the RGB plane. Experimental results show the effectiveness of the proposed algorithm
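As a generic illustration of an exactly reversible color transform (the paper above proposes its own transform, which may differ), the JPEG2000 reversible color transform (RCT) decorrelates RGB using integer arithmetic only and is exactly invertible:

```python
def rct_forward(r, g, b):
    """JPEG2000 reversible color transform: integer-only and exactly invertible."""
    y = (r + 2 * g + b) // 4
    u = r - g
    v = b - g
    return y, u, v

def rct_inverse(y, u, v):
    g = y - (u + v) // 4
    return u + g, g, v + g   # (r, g, b)

assert rct_inverse(*rct_forward(37, 201, 90)) == (37, 201, 90)
```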
Entropy based merging of context models for efficient arithmetic coding The contextual coding of data requires in general a step which reduces the vast variety of possible contexts down to a feasible number. This paper presents a new method for non-uniform quantisation of contexts, which adaptively merges adjacent intervals as long as the increase of the contextual entropy is negligible. This method is incorporated in a framework for lossless image compression. In combination with an automatic determination of model sizes for histogram-tail truncation, the proposed approach leads to a significant gain in compression performance for a wide range of different natural images.
Recent Developments in Context-Based Predictive Techniques for Lossless Image Compression In this paper we describe some recent developments that have taken place in context-based predictive coding, in response to the JPEG/JBIG committee's recent call for proposals for a new international standard on lossless compression of continuous-tone images. We describe the different prediction techniques that were proposed and give a performance comparison. We describe the notion of context-based ...
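One concrete example of the context-based predictors surveyed in work of this kind is the median edge detector (MED) adopted in LOCO-I/JPEG-LS; the sketch below is added only for concreteness and is just one of several predictors the paper compares.

```python
def med_predict(w, n, nw):
    """LOCO-I / JPEG-LS median edge detector (MED) predictor.
    w, n, nw are the west, north, and north-west neighbours of the current pixel."""
    if nw >= max(w, n):
        return min(w, n)      # likely vertical/horizontal edge above or to the left
    if nw <= min(w, n):
        return max(w, n)
    return w + n - nw         # smooth region: planar prediction
```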
An axiomatic basis for computer programming In this paper an attempt is made to explore the logical foundations of computer programming by use of techniques which were first applied in the study of geometry and have later been extended to other branches of mathematics. This involves the elucidation of sets of axioms and rules of inference which can be used in proofs of the properties of computer programs. Examples are given of such axioms and rules, and a formal proof of a simple theorem is displayed. Finally, it is argued that important advantages, both theoretical and practical, may follow from a pursuance of these topics.
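Two of the best-known rules from this axiomatic system, written in modern Hoare-triple notation (a standard textbook rendering rather than a quotation from the paper): the assignment axiom and the composition rule.

```latex
\{P[E/x]\}\; x := E \;\{P\}
\qquad\qquad
\frac{\{P\}\,S_1\,\{Q\} \quad \{Q\}\,S_2\,\{R\}}{\{P\}\,S_1;\,S_2\,\{R\}}
```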
Formal methods: state of the art and future directions
Biogeography-Based Optimization Biogeography is the study of the geographical distribution of biological organisms. Mathematical equations that govern the distribution of organisms were first discovered and developed during the 1960s. The mindset of the engineer is that we can learn from nature. This motivates the application of biogeography to optimization problems. Just as the mathematics of biological genetics inspired the development of genetic algorithms (GAs), and the mathematics of biological neurons inspired the development of artificial neural networks, this paper considers the mathematics of biogeography as the basis for the development of a new field: biogeography-based optimization (BBO). We discuss natural biogeography and its mathematics, and then discuss how it can be used to solve optimization problems. We see that BBO has features in common with other biology-based optimization methods, such as GAs and particle swarm optimization (PSO). This makes BBO applicable to many of the same types of problems that GAs and PSO are used for, namely, high-dimension problems with multiple local optima. However, BBO also has some features that are unique among biology-based optimization methods. We demonstrate the performance of BBO on a set of 14 standard benchmarks and compare it with seven other biology-based optimization algorithms. We also demonstrate BBO on a real-world sensor selection problem for aircraft engine health estimation.
Using dynamic programming for solving variational problems in vision Dynamic programming is discussed as an approach to solving variational problems in vision. Dynamic programming ensures global optimality of the solution, is numerically stable, and allows for hard constraints to be enforced on the behavior of the solution within a natural and straightforward structure. As a specific example of the approach's efficacy, applying dynamic programming to the energy-minimizing active contours is described. The optimization problem is set up as a discrete multistage decision process and is solved by a time-delayed discrete dynamic programming algorithm. A parallel procedure for decreasing computational costs is discussed
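A minimal sketch of solving such a discrete multistage decision process by dynamic programming is shown below (a generic Viterbi-style minimizer; the paper's time-delayed variant for active contours handles second-order energy terms, which this sketch omits). The function name and cost-table layout are illustrative assumptions.

```python
import numpy as np

def multistage_minimize(unary, pairwise):
    """Globally minimize sum_i unary[i][s_i] + sum_i pairwise[i][s_i][s_{i+1}]
    over state sequences (s_0, ..., s_{n-1}) by dynamic programming."""
    n = len(unary)
    cost = np.asarray(unary[0], dtype=float)   # best cost of each state at stage 0
    back = []
    for i in range(1, n):
        # total[s, t]: best cost of reaching state t at stage i via state s at stage i-1
        total = (cost[:, None]
                 + np.asarray(pairwise[i - 1], dtype=float)
                 + np.asarray(unary[i], dtype=float)[None, :])
        back.append(total.argmin(axis=0))
        cost = total.min(axis=0)
    states = [int(cost.argmin())]
    for bp in reversed(back):                  # backtrack the optimal state sequence
        states.append(int(bp[states[-1]]))
    return list(reversed(states)), float(cost.min())
```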
Query chains: learning to rank from implicit feedback This paper presents a novel approach for using clickthrough data to learn ranked retrieval functions for web search results. We observe that users searching the web often perform a sequence, or chain, of queries with a similar information need. Using query chains, we generate new types of preference judgments from search engine logs, thus taking advantage of user intelligence in reformulating queries. To validate our method we perform a controlled user study comparing generated preference judgments to explicit relevance judgments. We also implemented a real-world search engine to test our approach, using a modified ranking SVM to learn an improved ranking function from preference data. Our results demonstrate significant improvements in the ranking given by the search engine. The learned rankings outperform both a static ranking function, as well as one trained without considering query chains.
Reasoning about multi-process systems with the box calculus The box calculus is a formalism for reasoning about the properties of multi-process systems which enables account to be taken of pragmatic as well as computational concerns. It was developed for the programming language Hume which explicitly distinguishes between coordination, based on concurrent boxes linked by wires, and expressions, based on polymorphic recursive functions. This chapter introduces Hume expressions and surveys classic techniques for reasoning about functional programs. It then explores Hume coordination and the box calculus, and examines how Hume programs may be systematically transformed while maintaining computational and pragmatic correctness.
Lossless data hiding based on companding technique and difference expansion of triplets A reversible data hiding scheme based on the companding technique and the difference expansion (DE) of triplets is proposed in this paper. The companding technique is employed to increase the number of the expandable triplets. The capacity consumed by the location map recording the expanded positions is largely decreased. As a result, the hiding capacity is considerably increased. The experimental results reveal that high hiding capacity can be achieved at low embedding distortion.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of the embedded cover image to ensure security of the secret data. The multilayer embedding of pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting, while the peak point itself remains unchanged. At the extraction end, we no longer need key information about the peak point; we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of the embedded cover image almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
1.24
0.24
0.24
0.048
0.01
0.00016
0
0
0
0
0
0
0
0
Context embedding for raster-scan rhombus based reversible watermarking Embedding not only into the current pixel, but also into the prediction context, was recently proposed as an improvement of difference expansion reversible watermarking algorithms. So far it has been shown that splitting the data between the current pixel and the prediction context decreases the embedding distortion, but increases the prediction error. This paper revisits context embedding for the case of pixel prediction on the rhombus composed of the two vertical and the two horizontal neighbors. For this case it appears that context embedding can be used not only to reduce the embedding distortion, but also to improve the prediction. The gain provided by the improvement of the prediction outperforms the one provided by the reduction of the embedding distortion. Experimental results are provided.
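For reference, the rhombus prediction itself (the average of the two vertical and two horizontal neighbours) can be sketched as follows; checkerboard splitting, border handling, and the context-embedding step discussed above are omitted, and the floor rounding used here is an assumption.

```python
import numpy as np

def rhombus_prediction_errors(img):
    """Predict each interior pixel from its four-neighbour rhombus and
    return the prediction errors (floor of the neighbour average)."""
    x = img.astype(np.int32)
    pred = (x[:-2, 1:-1] + x[2:, 1:-1] + x[1:-1, :-2] + x[1:-1, 2:]) // 4
    return x[1:-1, 1:-1] - pred
```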
Reversible Watermarking Algorithm with Distortion Compensation A novel reversible watermarking algorithm with two-stage data hiding strategy is presented in this paper. The core idea is two-stage data hiding (i.e., hiding data twice in a pixel of a cell), where the distortion after the first stage of embedding can be rarely removed, mostly reduced, or hardly increased after the second stage. Note that even the increased distortion is smaller compared to that of other methods under the same conditions. For this purpose, we compute lower and upper bounds from ordered neighboring pixels. In the first stage, the difference value between a pixel and its corresponding lower bound is used to hide one bit. The distortion can be removed, reduced, or increased by hiding another bit of data by using a difference value between the upper bound and the modified pixel. For the purpose of controlling capacity and reducing distortion, we determine appropriate threshold values. Finally, we present an algorithm to handle overflow/underflow problems designed specifically for two-stage embedding. Experimental study is carried out using several images, and the results are compared with well-known methods in the literature. The results clearly highlight that the proposed algorithm can hide more data with less distortion.
Reversible data hiding with context modeling, generalized expansion and boundary map This paper proposes a reversible data hiding scheme with high capacity-distortion efficiency, which embeds data by expanding prediction-errors. Instead of using the MED predictor as did in other schemes, a predictor with context modeling, which refines prediction-errors through an error feedback mechanism, is adopted to work out prediction-errors. The context modeling can significantly sharpen the distribution of prediction-errors, and benefit the embedding capacity and the image quality. To expand prediction-errors, the proposed scheme utilizes a generalized expansion, which enables it to provide capacities larger than 1 bpp (bits per pixel) without resorting to multiple embedding. Besides, a novel boundary map is proposed to record overflow-potential pixels. The boundary map is much shorter compared with either a location map or an overflow map even though it is not compressed. The combination of the context modeling, the generalized expansion and the boundary map makes the overall scheme efficient in pursuing large embedding capacity and high image quality. Experimental results demonstrate that the proposed scheme provides competitive capacity compared with other state-of-the-art schemes when the image quality is kept at the same level.
Histogram shifting based reversible data hiding method using directed-prediction scheme This paper aims at reducing the shifting distortion of the histogram shifting reversible data hiding method. Instead of calculating the prediction value symmetrically as is done in other schemes, a directed-prediction scheme based on the gradient-adjusted predictor (GAP), which includes two asymmetric predictors (the right and left GAPs), is designed to predict the pixel value asymmetrically. Then two asymmetric error histograms, with right and left skewness, are constructed by gathering the directed prediction errors, which effectively reduces the amount of pixels on the shifted side of the error histograms. Moreover, the optimal embedding points and thresholds are calculated by defining an evaluation index of the shifting distortion. Experimental results validate the effectiveness of the proposed method and demonstrate that it outperforms several previous methods in terms of payload and image quality.
The Scalar Scheme for Reversible Information-Embedding in Gray-Scale Signals: Capacity Evaluation and Code Constructions Reversible information-embedding (RIE) is a technique transforming host signals and the message into the stego-signals, and the stego-signals can be losslessly reversed to the host signals and the message. We consider the conditions: 1) the host signals are composed of gray-scale independent and identically distributed (i.i.d.) samples; 2) the mean squared error is adopted as the measure of distortion; and 3) the procedure is a scalar approach, i.e., the encoder only reads a host signal and then outputs the corresponding stego-signal in each iteration. In this paper, we propose an iterative algorithm to calculate the signal transition probabilities approximating the optimal rate-distortion bound. Then we propose an explicit implementation to embed a message in an i.i.d. host sequence. The experiments show that the proposed method closely approaches the expected rate-distortions in i.i.d. gray-scale signals. By the image prediction model, the proposed method can be applied to gray-scale images.
Capacity-approaching codes for reversible data hiding By reversible data hiding, the original cover can be losslessly restored after the embedded information is extracted. Kalker and Willems established a rate-distortion model for reversible data hiding, in which they proved the capacity bound and proposed a recursive code construction. In this paper we improve the recursive construction by designing a data embedding method for all-zero covers and a more efficient compression algorithm. We prove that the proposed codes can approach the capacity bound under various distortion constraints. We also apply this coding method to RS method for spatial images, and the experimental results show that the novel codes can significantly reduce the embedding distortion.
An Inpainting-Assisted Reversible Steganographic Scheme Using a Histogram Shifting Mechanism In this paper, we propose a novel prediction-based reversible steganographic scheme based on image inpainting. First, reference pixels are chosen adaptively according to the distribution characteristics of the image content. Then, the image inpainting technique based on partial differential equations is introduced to generate a prediction image that has similar structural and geometric information as the cover image. Finally, by using the two selected groups of peak points and zero points, the histogram of the prediction error is shifted to embed the secret bits reversibly. Since the same reference pixels can be exploited in the extraction procedure, the embedded secret bits can be extracted from the stego image correctly, and the cover image can be restored losslessly. Through the use of the adaptive strategy for choosing reference pixels and the inpainting predictor, the prediction accuracy is high, and more embeddable pixels are acquired. Thus, the proposed scheme provides a greater embedding rate and better visual quality compared with recently reported methods.
Industrial Wireless Sensor Networks 2016 The industrial wireless sensor network (IWSN) is the next frontier in the Industrial Internet of Things (IIoT), which is able to help industrial organizations to gain competitive advantages in industrial manufacturing markets by increasing productivity, reducing the costs, developing new products and services, and deploying new business models. The IWSN can bridge the gap between the existing industrial systems and cyber networks to offer both new challenges and opportunities for manufacturers.
A reversible compression code hiding using SOC and SMVQ indices Reversible data hiding recovers the original images without distortion after the secret data are extracted, so it has received considerable attention among researchers in recent years. This work presents a novel reversible data hiding scheme based on the search-order coding (SOC) algorithm and side match vector quantization (SMVQ). The compression performance is improved by encoding the index with either SOC or SMVQ during the data embedding phase. The index of an image block also is encoded by using a few bits, thus improving the compression rate. During the extracting phase, the receiver uses simple steps to extract the secret data and reconstruct the original SMVQ compression code. Our experimental results indicated that, using a size 64 state codebook, the average compression rate with our scheme was 0.41bpp, which is much better than the compression rates of Chang et al.'s, Yang and Lin's, and Yang et al.'s schemes, i.e., 0.50, 0.54, and 0.56bpp, respectively. Furthermore, in addition to reducing the time required for the embedding and extracting phases, the proposed scheme yields a higher embedding rate than the schemes of Yang and Lin and Yang et al.
Efficient reversible data hiding based on multiple histograms modification Prediction-error expansion (PEE) is the most successful reversible data hiding (RDH) technique, and existing PEE-based RDH methods are mainly based on the modification of one- or two-dimensional prediction-error histogram (PEH). The two-dimensional PEH based methods perform generally better than those based on one-dimensional PEH, however, their performance is still unsatisfactory since the PEH modification manner is fixed and independent of image content. In this paper, we propose a new RDH method based on PEE for multiple histograms. Unlike the previous methods, we consider in this work a sequence of histograms and devise a new embedding mechanism based on multiple histograms modification (MHM). A complexity measurement is computed for each pixel according to its context, and the pixels with a given complexity are collected together to generate a PEH. By varying the complexity to cover the whole image, a sequence of histograms can be generated. Then, two expansion bins are selected in each generated histogram and data embedding is realized based on MHM. Here, the expansion bins are adaptively selected considering the image content such that the embedding distortion is minimized. With such selected expansion bins, the proposed MHM-based RDH method works well. Experimental results show that the proposed method outperforms the conventional PEE and its miscellaneous extensions including both one- or two-dimensional PEH based ones.
Fault resilient and compression tolerant digital signature for image authentication Many watermarking schemes have been proposed to protect the authority of the image producer. These schemes can help the image producer to assert ownership when their images are illegally reproduced. A buyer may want to make sure that the received commodity is from the legal sender and prove its integrity. In this situation, digital signature schemes can meet these requirements. In this sender-receiver protocol, images are always compressed before transmission. So compression tolerant image-features have to be designed to replace the hash functions in conventional digital signature schemes. In this paper, a public-key based fault resilient and compression tolerant digital signature is proposed. The proposed method can not only verify the authenticity and the integrity of commodities, but also locate the illegal modifications. Furthermore, the proposed method can provide the capability of recovery if images are tampered with.
Universal coding, information, prediction, and estimation A connection between universal codes and the problems of prediction and statistical estimation is established. A known lower bound for the mean length of universal codes is sharpened and generalized, and optimum universal codes constructed. The bound is defined to give the information in strings relative to the considered class of processes. The earlier derived minimum description length criterion for estimation of parameters, including their number, is given a fundamental information, theoretic justification by showing that its estimators achieve the information in the strings. It is also shown that one cannot do prediction in Gaussian autoregressive moving average (ARMA) processes below a bound, which is determined by the information in the data.
Reliable provenance information for multimedia data using invertible fragile watermarks Today, more and more data is available in digital form, ranging from normal text to multimedia data such as image or video data. Since some data is of high sensitivity or undergoes legal restrictions, it is important to obtain more reliable information about the data origin and its transformations, known as data provenance. Unfortunately, current approaches for data provenance neither support multimedia data nor provide mechanisms to ensure the reliability of the provenance information. In this paper, we present an approach based on existing watermarking schemes evaluated by a database system. Hence, this approach ensures the reliability of multimedia data (e.g., fingerprint data) and its corresponding provenance information. Furthermore, we show how this approach can be applied within a specific database used for fingerprint verification.
A Survey on Parallel and Distributed Multi-Agent Systems. Simulation has become an indispensable tool for researchers to explore systems without having recourse to real experiments. Depending on the characteristics of the modeled system, methods used to represent the system may vary. Multi-agent systems are, thus, often used to model and simulate complex systems. Whatever modeling type used, increasing the size and the precision of the model increases the amount of computation, requiring the use of parallel systems when it becomes too large. In this paper, we focus on parallel platforms that support multi-agent simulations. Our contribution is a survey on existing platforms and their evaluation in the context of high performance computing. We present a qualitative analysis, mainly based on platform properties, then a performance comparison using the same agent model implemented on each platform.
1.040407
0.013769
0.013481
0.006781
0.003307
0.002255
0.001047
0.0005
0.000167
0.000083
0.000039
0.000004
0
0
Generating synthetic benchmark circuits for evaluating CAD tools For the development and evaluation of computer-aided design tools for partitioning, floorplanning, placement, and routing of digital circuits, a huge amount of benchmark circuits with suitable characteristic parameters is required. Observing the lack of industrial benchmark circuits available for use in evaluation tools, one could consider actually generating synthetic circuits. In this paper, we extend a graph-based benchmark generation method to include functional information. The use of a user-specified component library, together with the restriction that no combinational loops are introduced, now broadens the scope to timing-driven and logic optimizer applications. Experiments show that the resemblance between the characteristic Rent curve and the net degree distribution of real versus synthetic benchmark circuits is hardly influenced by the suggested extensions and that the resulting circuits are more realistic than before. An indirect validation verifies that existing partitioning programs have comparable behavior for both real and synthetic circuits. The problems of accounting for timing-aware characteristics in synthetic benchmarks are addressed in detail and suggestions for extensions are included.
A stochastic model for the interconnection topology of digital circuits Rent's rule has been successfully applied to a priori estimation of wire length distributions. However, this approach is very restrictive: the circuits are assumed to be homogeneous. In this paper, recursive clustering is described as a more advanced model for the partitioning behavior of digital circuits. It is applied to predict the variance of the terminal count distribution. First, the impact of the block degree distribution is analyzed with a simple model. A more refined model incorporates the effect of stochastic self similarity. Finally, the model is further extended to describe the effects of heterogeneity. This model is a promising candidate for more accurate a priori estimation tools.
On synthetic benchmark generation methods In the process of designing complex chips and systems, the use of benchmark designs is often necessary. However, the existing benchmark suites are not sufficient for the evaluation of new architectures and EDA tools; synthetic benchmark circuits are a viable alternative. In this paper, a systematic approach for the generation and evaluation of synthetic benchmark circuits is presented. A number of existing benchmark generation methods are examined using direct validation of size and topological parameters. This exposes certain features and drawbacks of the different methods. 1. INTRODUCTION VLSI technology continues to evolve. The development and evaluation of new technologies, architectures and electronic design automation (EDA) tools requires a large number of benchmark circuits. A careful selection allows the reduction of the number of circuits needed. A benchmark suite is a set of circuits that - in the ideal case - is representative for the circuit space, or at least that part at which the EDA tool is aimed. Benchmark suites form a standard comparison basis that is accepted throughout the EDA community. … with low-level physical descriptions. Since most of the benchmark generation methods that have appeared in the literature apply to FPGA circuits, we will focus this study on structural descriptions of technology-mapped circuits at the logic level. However, most of this section can be extrapolated to other descriptions at other levels of abstraction as well. A circuit is a graph that consists of nodes that are connected through nets. The nodes are fanin-restricted lookup tables (LUTs), sequential elements such as latches or flip-flops (FFs), and primary in- and outputs (PIs, POs). A number of parameters can be identified. We distinguish three classes. Size parameters: these express the size of the circuit and can be directly obtained from the netlist of the graph; examples are the number of nodes and nets, the number of unreachable nodes, the number of FFs, the number of primary in- and outputs, etc. Topological parameters: these determine the interconnection structure of the circuit. Some can be obtained directly, such as the average fanin and the fanin distribution of the LUTs, and the average net degree and net degree distribution. When using a bipartite graph representation where we distinguish between logic elements (LUTs) and sequential elements (FFs), we can define delay using the unit delay model. The delay distribution is an important characteristic, especially for timing-driven applications. The most important topological parameter is probably the Rent characteristic, which will be addressed more thoroughly in section 4. Functional parameters: at a detailed functional level, the logic function of every node is known. Typical functional parameters are the number of literals in a two-level sum-of-products expansion, or the number of BDD nodes for a given variable ordering. The redundancy can be defined as …, where …
Towards synthetic benchmark circuits for evaluating timing-driven CAD tools For the development and evaluation of CAD tools for partitioning, floorplanning, placement, and routing of digital circuits, a huge amount of benchmark circuits with suitable characteristic parameters is required. Observing the lack of industrial benchmark circuits for use in evaluation tools, one could consider actually generating such circuits. In this paper, we extend a graph-based benchmark generation method to include functional information. The use of a user-specified component library, together with the restriction that no combinational loops are introduced, now broadens the scope to timing-driven and logic optimizer applications. Experiments show that the resemblance between the characteristic Rent curve and the net degree distribution of real versus synthetic benchmark circuits is hardly influenced by the suggested extensions and that the resulting circuits are more realistic than before. However, the synthetic benchmark circuits are still very redundant compared to existing sets of real benchmarks. It is shown that a correlation exists between the degree of redundancy and key circuit parameters.
Automatic generation of synthetic sequential benchmark circuits The design of programmable logic architectures and supporting computer-aided design tools fundamentally requires both a good understanding of the combinatorial nature of netlist graphs and sufficient quantities of realistic examples to evaluate or benchmark the results. In this paper, the authors investigate these two issues. They introduce an abstract model for describing sequential circuits and a collection of statistical parameters for better understanding the nature of circuits. Based upon this model they introduce and formally define the signature of a circuit netlist and the signature equivalence of netlists. They give an algorithm (GEN) for generating sequential benchmark netlists, significantly expanding previous work (Hutton et al., 1998) which generated purely combinational circuits. By comparing synthetic circuits to existing benchmarks and random graphs they show that GEN circuits are significantly more realistic than random graphs. The authors further illustrate the viability of the methodology by applying GEN to a case study comparing two partitioning algorithms.
Optimality, scalability and stability study of partitioning and placement algorithms This paper studies the optimality, scalability and stability of state-of-the-art partitioning and placement algorithms. We present algorithms to construct two classes of benchmarks, one for partitioning and the other for placement, which have known upper bounds of their optimal solutions, and can match any given net distribution vector. Using these partitioning and placement benchmarks, we studied the optimality of state-of-the-art algorithms by comparing their solutions with the upper bounds of the optimal solutions, and their scalability and stability by varying the sizes and characteristics of the benchmarks. The conclusions from this study are: 1) State-of-the-art, multilevel two way partitioning algorithms scale very well and are able to find solutions very close to the upper bounds of the optimal solutions of our benchmarks. This suggests that existing circuit partitioning techniques are fairly mature. There is not much room for improvement for cutsize minimization for problems of the current sizes. Multiway partitioning algorithms, on the other hand, do not perform that well. Their results can be up to 18% worse than our estimated upper bounds. 2) The state-of-the-art placement algorithms produce significantly inferior results compared with the estimated optimal solutions. There is still significant room for improvement in circuit placement. 3) Existing placement algorithms are not stable. Their effectiveness varies considerably depending on the characteristics of the benchmarks. New hybrid techniques are probably needed for future generation placement engines that are more scalable and stable.
Analytical placement: A linear or a quadratic objective function? This paper addresses the problem of cell placement which is considered crucial for layout quality. Based on the combined analytical and partitioning strategy successfully applied in the GORDIAN placement tool, we discuss the consequences of using linear or quadratic objective functions. By joining the linear objective with an efficient quadratic programming approach, and by applying a refined iterative partitioning scheme, we obtain placements of excellent quality. The effect of a quadratic and a linear objective function on the chip area after final routing is demonstrated for benchmark circuits and other circuits with up to 21000 cells.
OpenTimer: A High-Performance Timing Analysis Tool We introduce in this paper OpenTimer, an open-source timing analysis tool that efficiently supports (1) both block-based and path-based timing propagations, (2) common path pessimism removal (CPPR), and (3) incremental processing. OpenTimer works on industry formats (e.g., .v, .spef, .lib, .sdc) and is designed to be parallel and portable. To further facilitate integration between timing and other electronic design automation (EDA) applications such as timing-driven placement and routing, OpenTimer provides a user-friendly application programming interface (API) for interactive analysis. Experimental results on industry benchmarks released from the TAU 2015 timing analysis contest have demonstrated remarkable results achieved by OpenTimer, especially in its order-of-magnitude speedup over existing timers.
Ripple 2.0: High quality routability-driven placement via global router integration Due to a significant mismatch between the objectives of wirelength and routing congestion, the routability issue is becoming more and more important in VLSI design. In this paper, we present a high quality placer Ripple 2.0 to solve the routability-driven placement problem. We will study how to make use of the routing path information in cell spreading and relieve congestion with tangled logic in detail. Several techniques are proposed, including (1) lookahead routing analysis with pin density consideration, (2) routing path-based cell inflation and spreading and (3) robust optimization on congested cluster. With the official evaluation protocol, Ripple 2.0 outperforms the top contestants on the ICCAD 2012 Contest benchmark suite.
A faster implementation of APlace APlace is a high quality, scalable analytical placer. This paper describes our recent efforts to improve APlace for speed and scalability. We explore various wirelength and density approximation functions. We speed up the placer using a hybrid usage of wirelength and density approximations during the course of multi-level placement, and obtain 2-2.5 times speedup of global placement on the IBM ISPD04 and ISPD05 benchmarks. Recent applications of the APlace framework to supply voltage degradation-aware placement and lens aberration-aware timing-driven placement are also briefly described.
Super-resolution image reconstruction: a technical overview A new approach toward increasing spatial resolution is required to overcome the limitations of the sensors and optics manufacturing technology. One promising approach is to use signal processing techniques to obtain an high-resolution (HR) image (or sequence) from observed multiple low-resolution (LR) images. Such a resolution enhancement approach has been one of the most active research areas, and it is called super resolution (SR) (or HR) image reconstruction or simply resolution enhancement. In this article, we use the term "SR image reconstruction" to refer to a signal processing approach toward resolution enhancement because the term "super" in "super resolution" represents very well the characteristics of the technique overcoming the inherent resolution limitation of LR imaging systems. The major advantage of the signal processing approach is that it may cost less and the existing LR imaging systems can be still utilized. The SR image reconstruction is proved to be useful in many practical cases where multiple frames of the same scene can be obtained, including medical imaging, satellite imaging, and video applications. The goal of this article is to introduce the concept of SR algorithms to readers who are unfamiliar with this area and to provide a review for experts. To this purpose, we present the technical review of various existing SR methodologies which are often employed. Before presenting the review of existing SR algorithms, we first model the LR image acquisition process.
Derivational software engineering
Algorithms To Automatically Quantify The Geometric Similarity Of Anatomical Surfaces We describe approaches for distances between pairs of two-dimensional surfaces (embedded in three-dimensional space) that use local structures and global information contained in interstructure geometric relationships. We present algorithms to automatically determine these distances as well as geometric correspondences. This approach is motivated by the aspiration of students of natural science to understand the continuity of form that unites the diversity of life. At present, scientists using physical traits to study evolutionary relationships among living and extinct animals analyze data extracted from carefully defined anatomical correspondence points (landmarks). Identifying and recording these landmarks is time consuming and can be done accurately only by trained morphologists. This necessity renders these studies inaccessible to non-morphologists and causes phenomics to lag behind genomics in elucidating evolutionary patterns. Unlike other algorithms presented for morphological correspondences, our approach does not require any preliminary marking of special features or landmarks by the user. It also differs from other seminal work in computational geometry in that our algorithms are polynomial in nature and thus faster, making pairwise comparisons feasible for significantly larger numbers of digitized surfaces. We illustrate our approach using three datasets representing teeth and different bones of primates and humans, and show that it leads to highly accurate results.
Automatic correctness proofs for logic program transformations The many approaches which have been proposed in the literature for proving the correctness of unfold/fold program transformations, consist in associating suitable well-founded orderings with the proof trees of the atoms belonging to the least Herbrand models of the programs. In practice, these orderings are given by 'clause measures', that is, measures associated with the clauses of the programs to be transformed. In the unfold/fold transformation systems proposed so far, clause measures are fixed in advance, independently of the transformations to be proved correct. In this paper we propose a method for the automatic generation of the clause measures which, instead, takes into account the particular program transformation at hand. During the transformation process we construct a system of linear equations and inequations whose unknowns are the clause measures to be found, and the correctness of the transformation is guaranteed by the satisfiability of that system. Through some examples we show that our method is able to establish in a fully automatic way the correctness of program transformations which, by using other methods, are proved correct at the expense of fixing sophisticated clause measures.
1.021659
0.024933
0.023007
0.021253
0.016667
0.006616
0.001509
0.000269
0.000109
0.000039
0
0
0
0
Final algebra semantics and data type extensions We consider the problem of data type extensions. Guttag, Horowitz, and Musser have pointed out that in this situation the naive initial algebra approach requires the data type to save too much information. We formulate a category of implementations of such an extension, and we show that such a category has a final object. The resulting semantics is closer to that of Hoare, since it can be argued that an abstract data type in the sense of Hoare is a final object in the category of representations of that type. We consider as an example the specification of integer arrays, and we show that our specification yields arrays as its abstract data type. The connection with initial algebra semantics is discussed.
m-EVES: A tool for verifying software This paper describes the development of a new tool for formally verifying software. The tool is called m-EVES and consists of a new language, called m-Verdi, for implementing and specifying software; a new logic, which has been proven sound; and a new theorem prover, called m-NEVER, which integrates many state-of-the-art techniques drawn from the theorem proving literature. Two simple examples are used to present the fundamental ideas embodied within the system.
Specifying Concurrent Program Modules A method for specifying program modules in a concurrent program is described. It is based upon temporal logic, but uses new kinds of temporal assertions to make the specifications simpler and easier to understand. The semantics of the specifications is described informally, and a sequence of examples are given culminating in a specification of three modules comprising the alternating-bit communication protocol. A formal semantics is given in the appendix.
A class of programming language mechanisms to facilitate multiple implementations of the same specification To facilitate construction and use of multiple implementations of the same specification, it is shown that an important class of programming language mechanisms, not present in languages such as Ada, is essential. Ways to enhance Ada with these mechanisms are proposed, and a small set of programming language mechanisms to facilitate the use of multiple implementations is presented. By means of these mechanisms, the functionality and the performance of software systems can be separated, specified, analyzed, and verified. It is argued, using realistic examples, that it is necessary to name specifications and implementations separately, to distinguish parameters of specifications and implementations, and for it to be possible for clients to associate different implementations with different instances of the same abstractions. To support these possibilities, it is shown that some basic language mechanisms are essential. How the proposed solutions can be extended to produce performance-parameterized implementations and reusable translators is outlined
A deductive approach to program synthesis Program synthesis is the systematic derivation of a program from a given specification. A deductive approach to program synthesis is presented for the construction of recursive programs. This approach regards program synthesis as a theorem-proving task and relies on a theorem-proving method that combines the features of transformation rules, unification, and mathematical induction within a single framework.
Developing the Generic Path Algorithmic Program and its Instantiations Using PAR Method
Proofs as programs The significant intellectual cost of programming is for problem solving and explaining, not for coding. Yet programming systems offer mechanical assistance for the coding process exclusively. We illustrate the use of an implemented program development system, called PRL ("pearl"), that provides automated assistance with the difficult part. The problem and its explained solution are seen as formal objects in a constructive logic of the data domains. These formal explanations can be executed at various stages of completion. The most incomplete explanations resemble applicative programs, the most complete are formal proofs.
A simple greedy algorithm for a class of shuttle transportation problems Greedy algorithms for combinatorial optimization problems are typically direct and efficient, but hard to prove optimality. The paper presents a special class of transportation problems where a supplier sends goods to a set of customers, returning to the source after each delivery. We show that these problems with different objective functions share a common structural property, and therefore a simple but powerful generic greedy algorithm yields optimal solutions for all of them.
Developing a linear algorithm for cubing a cyclic permutation A linear algorithm is developed for cubing a cyclic permutation stored as a function in an array. This continues work discussed in [1] and [2] on searching for disciplined methods for developing and describing algorithms that deal with complicated data structures such as linked lists. Here, a different representation of a cyclic permutation reveals a simple algorithm; then, an equally simple coordinate transformation is used to yield the final algorithm.
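The abstract does not reproduce the algorithm itself, so the sketch below only illustrates the problem being solved: for a cyclic permutation p stored as an array (p[i] is the successor of i), the cube can be computed in linear time by composing p with itself twice. The paper's contribution is a disciplined derivation of such an algorithm (in particular an in-place one), which this naive version does not attempt.

```python
def cube(p):
    """Return the cube of permutation p, where p[i] is the successor of i."""
    return [p[p[p[i]]] for i in range(len(p))]

# Example: the 5-cycle whose successor function is i -> (i + 1) mod 5.
p = [1, 2, 3, 4, 0]
assert cube(p) == [3, 4, 0, 1, 2]   # the cube's successor function is i -> (i + 3) mod 5
```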
An incremental algorithm for software analysis In programming environments aimed at “industrial strength” software development, there is a need for software tools which facilitate both design and maintenance. These tools should encourage experimentation with different system configurations which enable designers to a priori estimate the associated system complexity and judge the ease of accommodating enhancements. Maintainers should be able to check straightforwardly the ramifications of system changes due to enhancements or “bug fixes”. With interprocedural data flow information about the definition and use of global variables and parameters in a software system, tools can be built to perform these tasks.For large, complex systems, efficient methods for interprocedural analysis are necessarily incremental, as a software system is a dynamically evolving entity. Incremental algorithms update current information about a system in response to a change rather than re-calculating the information by re-analyzing the entire system. This paper reports our development of a general purpose incremental data flow analysis algorithm, which is applicable to both intraprocedural and interprocedural domains. It is based on interval analysis, a technique whose observed performance is linear for most programs; under reasonable assumptions about program flow graphs this linearity can be verified [20].
Maintaining Sensing Coverage and Connectivity in Large Sensor Networks In this paper, we address the issues of maintaining sensing coverage and connectivity by keeping a minimal number of sensor nodes in the active mode in wireless sensor networks. We investigate the relationship between coverage and connectivity by solving the following two sub-problems. First, we prove that if the radio range is at least twice the sensing range, a complete coverage of a convex area implies connectivity among the working set of nodes. With such a proof, we can then focus only on the coverage problem. Second, we derive, under the ideal case in which node density is sufficiently high, a set of optimality conditions under which a subset of working sensor nodes can be chosen for full coverage. Based on the optimality conditions, we then devise a decentralized and localized density control algorithm, Optimal Geographical Density Control (OGDC), for density control in large scale sensor networks. Ns-2 simulations show that OGDC outperforms the PEAS algorithm (32), the hexagon-based GAF-like algorithm, and the sponsor area algorithm (28) with respect to the number of working nodes needed (sometimes at a 50% improvement), and achieves almost the same coverage as the algorithm with the best result.
Helping People Find What They Don't Know - Recommendation Systems Help Users Find the Correct Words for a Successful Search.
An Effective Multi-level Immune Algorithm for Graph Bipartitioning An important application of graph partitioning is data clustering using a graph model: the pairwise similarities between all data objects form a weighted graph adjacency matrix that contains all necessary information for clustering. An effective multi-level algorithm based on AIS (artificial immune systems) for graph bipartitioning is proposed. During its coarsening phase, we adopt an improved matching approach based on the global information of the graph core to develop its guidance function. During its refinement phase, we exploit a hybrid immune refinement algorithm inspired by the CSA (clonal selection algorithm) and the affinity maturation of the AIS, which incorporates the early-exit FM (FM-EE) local improvement heuristic into the CSA. The algorithm is verified to be capable of finding a good approximate global bipartitioning. The success of our algorithm relies on exploiting both the CSA and the concept of the graph core. It is implemented in American National Standards Institute (ANSI) C and compared to MeTiS, a state-of-the-art partitioner in the literature. Our experimental evaluations show that it performs well and produces encouraging solutions on 18 graph benchmarks.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of the embedded cover image to ensure the security of the secret data. The multilayer embedding of a pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting, while the peak point itself remains unchanged. At the extraction end, we no longer need the key information about the peak point; we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of the embedded cover image become almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
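As a rough illustration of the embedding step described above, here is a minimal sketch that leaves the peak bin untouched and carries data in the two bins adjacent to it, shifting the rest of the histogram outward to make room. The exact bin mapping, overflow/underflow handling, payload bookkeeping, and the localization and multilayer steps are omitted, so this is not claimed to be the paper's precise algorithm.

```python
import numpy as np

def embed(pixels, bits):
    """Embed a bit list into the two bins adjacent to the histogram peak (illustrative)."""
    out = pixels.astype(np.int32)
    vals, counts = np.unique(out, return_counts=True)
    peak = int(vals[np.argmax(counts)])           # reference (peak) point, kept unchanged
    flat, k = out.ravel(), 0
    for idx, v in enumerate(flat):
        if v < peak - 1:
            flat[idx] = v - 1                     # shift the left part of the histogram
        elif v > peak + 1:
            flat[idx] = v + 1                     # shift the right part of the histogram
        elif v == peak - 1 and k < len(bits):
            flat[idx] = v - bits[k]; k += 1       # embed into the left neighbor bin
        elif v == peak + 1 and k < len(bits):
            flat[idx] = v + bits[k]; k += 1       # embed into the right neighbor bin
        # pixels equal to the peak stay unchanged, so the peak is still found at extraction
    return flat.reshape(out.shape), peak
```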
1.077837
0.034865
0.026478
0.004442
0.001329
0.000442
0.000223
0.000079
0.000018
0.000001
0
0
0
0
Topic analysis using a finite mixture model Addressed here is the issue of 'topic analysis' which is used to determine a text's topic structure, a representation indicating what topics are included in a text and how those topics change within the text. Topic analysis consists of two main tasks: topic identification and text segmentation. While topic analysis would be extremely useful in a variety of text processing applications, no previous study has so far sufficiently addressed it. A statistical learning approach to the issue is proposed in this paper. More specifically, topics here are represented by means of word clusters, and a finite mixture model, referred to as a stochastic topic model (STM), is employed to represent a word distribution within a text. In topic analysis, a given text is segmented by detecting significant differences between STMs, and topics are identified by means of estimation of STMs. Experimental results indicate that the proposed method significantly outperforms methods that combine existing techniques.
Topic-focused multi-document summarization using an approximate oracle score We consider the problem of producing a multi-document summary given a collection of documents. Since most successful methods of multi-document summarization are still largely extractive, in this paper, we explore just how well an extractive method can perform. We introduce an "oracle" score, based on the probability distribution of unigrams in human summaries. We then demonstrate that with the oracle score, we can generate extracts which score, on average, better than the human summaries, when evaluated with ROUGE. In addition, we introduce an approximation to the oracle score which produces a system with the best known performance for the 2005 Document Understanding Conference (DUC) evaluation.
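A minimal sketch of the kind of unigram-based oracle score the abstract refers to: a candidate sentence is scored by the estimated probability that its terms occur in the human reference summaries. The estimator and tokenization below are illustrative simplifications, not necessarily the exact formulation used in the paper.

```python
from collections import Counter

def term_probabilities(human_summaries):
    """Estimate P(term) as the fraction of human summaries containing the term."""
    n = len(human_summaries)
    counts = Counter(t for s in human_summaries for t in set(s.lower().split()))
    return {t: c / n for t, c in counts.items()}

def oracle_score(sentence, term_prob):
    """Average reference-summary probability of the sentence's terms."""
    terms = sentence.lower().split()
    return sum(term_prob.get(t, 0.0) for t in terms) / max(len(terms), 1)

# Usage: rank candidate sentences by oracle_score and greedily extract the top ones.
```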
Search result diversification Result diversification has recently attracted much attention as a means of increasing user satisfaction in recommender systems and web search. Many different approaches have been proposed in the related literature for the diversification problem. In this paper, we survey, classify and comparatively study the various definitions, algorithms and metrics for result diversification.
Generating templates of entity summaries with an entity-aspect model and pattern mining In this paper, we propose a novel approach to automatic generation of summary templates from given collections of summary articles. This kind of summary templates can be useful in various applications. We first develop an entity-aspect LDA model to simultaneously cluster both sentences and words into aspects. We then apply frequent subtree pattern mining on the dependency parse trees of the clustered and labeled sentences to discover sentence patterns that well represent the aspects. Key features of our method include automatic grouping of semantically related sentence patterns and automatic identification of template slots that need to be filled in. We apply our method on five Wikipedia entity categories and compare our method with two baseline methods. Both quantitative evaluation based on human judgment and qualitative comparison demonstrate the effectiveness and advantages of our method.
Data mining for improving textbooks We present our early explorations into developing a data mining based approach for enhancing the quality of textbooks. We describe a diagnostic tool to algorithmically identify deficient sections in textbooks. We also discuss techniques for algorithmically augmenting textbook sections with links to selective content mined from the Web. Our evaluation, employing widely-used textbooks from India, indicates that developing technological approaches to help improve textbooks holds promise.
Statistical source expansion for question answering A source expansion algorithm automatically extends a given text corpus with related content from large external sources such as the Web. The expanded corpus is not intended for human consumption but can be used in question answering (QA) and other information retrieval or extraction tasks to find more relevant information and supporting evidence. We propose an algorithm that extends a corpus of seed documents with web content, using a statistical model to select text passages that are both relevant to the topics of the seeds and complement existing information. In an evaluation on 1,500 hand-labeled web pages, our algorithm ranked text passages by relevance with 81% MAP, compared to 43% when relying on web search engine ranks alone and 75% when using a multi-document summarization algorithm. Applied to QA, the proposed method yields consistent and significant performance gains. We evaluated the impact of source expansion on over 6,000 questions from the Jeopardy! quiz show and TREC evaluations using Watson, a state-of-the-art QA system. Accuracy increased from 66% to 71% on Jeopardy! questions and from 59% to 64% on TREC questions.
How to share a secret In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k - 1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
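The construction behind this abstract is usually realized with polynomial interpolation over a finite field: the secret is the constant term of a random degree-(k-1) polynomial, each piece is a point on the polynomial, and any k points recover the secret by Lagrange interpolation. The following minimal Python sketch (the prime, share format, and function names are illustrative; it needs Python 3.8+ for the modular inverse via pow) shows the idea for small integer secrets.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime comfortably larger than the secrets used here

def make_shares(secret, k, n, prime=PRIME):
    """Split `secret` into n shares, any k of which suffice to reconstruct it."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(k - 1)]
    def f(x):                                   # evaluate the degree-(k-1) polynomial
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % prime
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover_secret(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 over exactly k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % prime
                den = (den * (xi - xj)) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

shares = make_shares(123456789, k=3, n=5)
assert recover_secret(shares[:3]) == 123456789   # any 3 of the 5 shares recover the secret
```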
Partitioning Nominal Attributes in Decision Trees To find the optimal branching of a nominal attribute at a node in an L-ary decision tree, one is often forced to search over all possible L-ary partitions for the one that yields the minimum impurity measure. For binary trees (L = 2), when there are just two classes, a short-cut search is possible that is linear in n, the number of distinct values of the attribute. For the general case in which the number of classes, k, may be greater than two, Burshtein et al. have shown that the optimal partition satisfies a condition that involves the existence of $\binom{L}{2}$ hyperplanes in the class probability space. We derive a property of the optimal partition for concave impurity measures (including in particular the Gini and entropy impurity measures) in terms of the existence of L vectors in the dual of the class probability space, which implies the earlier condition. Unfortunately, these insights still do not offer a practical search method when n and k are large, even for binary trees. We therefore present a new heuristic search algorithm to find a good partition. It is based on ordering the attribute's values according to their principal component scores in the class probability space, and is linear in n. We demonstrate the effectiveness of the new method through Monte Carlo simulation experiments and compare its performance against other heuristic methods.
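A minimal sketch, with illustrative names and numpy-based linear algebra, of the heuristic described at the end of this abstract: project each attribute value's class-probability vector onto the first principal component, sort the values by that score, and evaluate only the n-1 order-respecting binary splits with a weighted Gini criterion. Each attribute value is assumed to occur at least once; ties and multiway (L > 2) splits are not handled.

```python
import numpy as np

def gini(counts):
    p = counts / counts.sum()
    return 1.0 - np.sum(p * p)

def best_ordered_split(value_class_counts):
    """value_class_counts: (n_values, k_classes) array of class counts per attribute value."""
    C = np.asarray(value_class_counts, dtype=float)
    probs = C / C.sum(axis=1, keepdims=True)            # class-probability vector per value
    centered = probs - probs.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[0]                            # first principal component scores
    order = np.argsort(scores)                           # order values along that axis
    best_impurity, best_split = np.inf, None
    for cut in range(1, len(order)):                     # only the n-1 order-respecting splits
        left, right = C[order[:cut]].sum(axis=0), C[order[cut:]].sum(axis=0)
        n_l, n_r = left.sum(), right.sum()
        impurity = (n_l * gini(left) + n_r * gini(right)) / (n_l + n_r)
        if impurity < best_impurity:
            best_impurity = impurity
            best_split = (order[:cut].tolist(), order[cut:].tolist())
    return best_impurity, best_split
```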
Compact representations of separable graphs We consider the problem of representing graphs compactly while supporting queries efficiently. In particular, we describe a data structure for representing n-vertex unlabeled graphs that satisfy an O(n^c)-separator theorem, c < 1, using O(n) bits, which supports adjacency and degree queries in constant time, and neighbor listing in constant time per neighbor. This generalizes previous results for graphs with constant genus, such as planar graphs. We present experimental results using many "real world" graphs including 3-dimensional finite element meshes, link graphs of the web, internet router graphs, VLSI circuits, and street map graphs. Compared to adjacency lists, our approach reduces space usage by almost an order of magnitude, while supporting depth-first traversal in about the same running time.
A Model for Deliberation, Action, And Introspection This thesis investigates the problem of controlling or directing the reasoning and actions of a computer program. The basic approach explored is to view reasoning as a species of action, so that a program might apply its reasoning powers to the task of deciding what inferences to make as well as to deciding what other actions to take. A design for the architecture of reasoning programs is proposed. This architecture involves self-consciousness, intentional actions, deliberate adaptations, and a form of decision-making based on dialectical argumentation. A program based on this architecture inspects itself, describes aspects of itself to itself, and uses this self-reference and these self-descriptions in making decisions and taking actions. The program's mental life includes awareness of its own concepts, beliefs, desires, intentions, inferences, actions, and skills. All of these are represented by self-descriptions in a single sort of language, so that the program has access to all of these aspects of itself, and can reason about them in the same terms.
A Naïve Time Analysis and its Theory of Cost Equivalence Techniques for reasoning about extensional properties of functional programs are well understood but methods for analysing the underlying intensional or operational properties have been much neglected. This paper begins with the development of a simple but useful calculus for time analysis of non-strict functional programs with lazy lists. One limitation of this basic calculus is that the ordinary equational reasoning on functional programs is not valid. In order to buy back some of these equational properties we develop a non-standard operational equivalence relation called cost equivalence, by considering the number of computation steps as an 'observable' component of the evaluation process. We define this relation by analogy with Park's definition of bisimulation in CCS. This formulation allows us to show that cost equivalence is a contextual congruence (and thus is substitutive with respect to the basic calculus) and provides useful proof techniques for establishing cost-equivalence laws. It is shown that basic evaluation time can be derived by demonstrating a certain form of cost equivalence, and we give an axiomatization of cost equivalence which is complete with respect to this application. This shows that cost equivalence subsumes the basic calculus. Finally we show how a new operational interpretation of evaluation demands can be used to provide a smooth interface between this time analysis and more compositional approaches, retaining the advantages of both.
Software synthesis using generic architectures We describe a framework for synthesizing software systems based on abstracting software system designs, problem descriptions, and the design process. The result of such an abstraction process is a generic architecture and the customization knowledge for customizing the architecture for instances of a problem class. The customization knowledge is used to assist a designer in customizing the architecture as opposed to completely automating the design of systems. We illustrate our approach using an implemented example of a generic tracking architecture which we have customized in two different domains. We describe how the designs produced using KASE compare to the original designs of the two systems, discuss how our work relates to other approaches to software design, and describe current work and plans for extending KASE to other application areas.
Optimizing Fuzzy Logic Programs by Unfolding, Aggregation and Folding Multi-adjoint logic programming represents a very recent, extremely flexible attempt for introducing fuzzy logic into logic programming. Inspired by our previous experiences in the field of (declarative) program transformation, in this paper we propose the development of a fold/unfold based transformation system for optimizing such kind of fuzzy logic programs. The starting point is a set of unfolding-based transformations together with a reversible kind of fuzzy folding, that we have designed in the past. The present work substantially improves this last transformation operation by allowing the possibility of using rules belonging to different programs in a transformation sequence when performing a folding step, which is crucial to obtain better, recursive and elegant definitions of fuzzy predicates. In contrast with other declarative paradigms, in the fuzzy setting it is mandatory to pack sets of fuzzy predicates in tuples, if we really want the folding operation to proceed. This implies the need for re-defining the classical ''definition introduction'' transformation rule and introducing a completely new operation, that we call ''aggregation'', which is especially tailored for the new framework. Finally, we illustrate how the effects of appropriately applying our set of transformation rules (definition introduction, aggregation, folding, unfolding and facting) to a given program, are able to improve the execution of goals against transformed programs.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of the embedded cover image to ensure the security of the secret data. The multilayer embedding of a pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting, while the peak point itself remains unchanged. At the extraction end, we no longer need the key information about the peak point; we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of the embedded cover image become almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
1.213333
0.213333
0.213333
0.213333
0.213333
0.071111
0
0
0
0
0
0
0
0
A survey of strategies in rule-based program transformation systems Program transformation is the mechanical manipulation of a program in order to improve it relative to some cost function and is understood broadly as the domain of computation where programs are the data. The natural basic building blocks of the domain of program transformation are transformation rules expressing a 'one-step' transformation on a fragment of a program. The ultimate perspective of research in this area is a high-level, language parametric, rule-based program transformation system, which supports a wide range of transformations, admitting efficient implementations that scale to large programs. This situation has not yet been reached, as trade-offs between different goals need to be made. This survey gives an overview of issues in rule-based program transformation systems, focusing on the expressivity of rule-based program transformation systems and in particular on transformation strategies available in various approaches. The survey covers term rewriting, extensions of basic term rewriting, tree parsing strategies, systems with programmable strategies, traversal strategies, and context-sensitive rules.
Towards a Box Calculus for Hierarchical Hume.
The worker/wrapper transformation The worker/wrapper transformation is a technique for changing the type of a computation, usually with the aim of improving its performance. It has been used by compiler writers for many years, but the technique is little known in the wider functional programming community, and has never been described precisely. In this article we explain, formalise and explore the generality of the worker/wrapper transformation. We also provide a systematic recipe for its use as an equational reasoning technique for improving the performance of programs, and illustrate the power of this recipe using a range of examples.
Higher-order expression procedures
Driving in the Jungle Collapsed jungle evaluation is an evaluation strategy for functional programs that can give super-linear speedups compared to conventional evaluation strategies such as call-by-need. However, the former strategy may incur administrative evaluation overhead. We demonstrate how this overhead can be eliminated by transforming the program using a variation of positive supercompilation in which the transformation strategy is based on collapsed jungle evaluation. In penetrating the constant-factor barrier, we seem to be close to establishing a transformation technique that guarantees the efficiency of the transformed program. As a spin-off, we clarify the relationship between call-by-name, call-by-need and collapsed-jungle evaluation, showing that all three can be expressed as instances of a common semantics in which the variations -- differing only in efficiency -- are obtained by varying the degree of sharing in a DAG representation.
A Tiny Functional Language with Logical Features
Higher Order Matching For Program Transformation We present a simple, practical algorithm for higher-order matching in the context of automatic program transformation. Our algorithm finds more matches than the standard second-order matching algorithm of Huet and Lang, but it has an equally simple specification, and it is better suited to the transformation of programs in modern programming languages such as Haskell or ML. The algorithm has been implemented as part of the MAG system for transforming functional programs.
Unfold/fold transformations and loop optimization of logic programs Programs typically spend much of their execution time in loops. This makes the generation of efficient code for loops essential for good performance. Loop optimization of logic programming languages is complicated by the fact that such languages lack the iterative constructs of traditional languages, and instead use recursion to express loops. In this paper, we examine the application of unfold/fold transformations to three kinds of loop optimization for logic programming languages: recursion removal, loop fusion and code motion out of loops. We describe simple unfold/fold transformation sequences for these optimizations that can be automated relatively easily. In the process, we show that the properties of unification and logical variables can sometimes be used to generalize, from traditional languages, the conditions under which these optimizations may be carried out. Our experience suggests that such source-level transformations may be used as an effective tool for the optimization of logic programs.
An Efficient Unification Algorithm
Automated Deduction: Looking Ahead In this article, the body of a report on automated deduction is presented that notes some significant achievements and takes a studied look at the future of the field.
Robust fusion of irregularly sampled data using adaptive normalized convolution We present a novel algorithm for image fusion from irregularly sampled data. The method is based on the framework of normalized convolution (NC), in which the local signal is approximated through a projection onto a subspace. The use of polynomial basis functions in this paper makes NC equivalent to a local Taylor series expansion. Unlike the traditional framework, however, the window function of adaptive NC is adapted to local linear structures. This leads to more samples of the same modality being gathered for the analysis, which in turn improves signal-to-noise ratio and reduces diffusion across discontinuities. A robust signal certainty is also adapted to the sample intensities to minimize the influence of outliers. Excellent fusion capability of adaptive NC is demonstrated through an application of super-resolution image reconstruction.
Joint mobility and routing for lifetime elongation in wireless sensor networks Although many energy efficient/conserving routing protocols have been proposed for wireless sensor networks, the concentration of data traffic towards a small number of base stations remains a major threat to the network lifetime. The main reason is that the sensor nodes located near a base station have to relay data for a large part of the network and thus deplete their batteries very quickly. The solution we propose in this paper suggests that the base station be mobile; in this way, the nodes located close to it change over time. Data collection protocols can then be optimized by taking both base station mobility and multi-hop routing into account. We first study the former, and conclude that the best mobility strategy consists in following the periphery of the network (we assume that the sensors are deployed within a circle). We then consider jointly mobility and routing algorithms in this case, and show that a better routing strategy uses a combination of round routes and short paths. We provide a detailed analytical model for each of our statements, and corroborate it with simulation results. We show that the obtained improvement in terms of network lifetime is in the order of 500%.
An improved lossless data hiding scheme based on image VQ-index residual value coding Copyright protection and information security have become serious problems due to the ever growing amount of digital data over the Internet. Reversible data hiding is a special type of data hiding technique that guarantees not only the secret data but also the cover media can be reconstructed without any distortion. Traditional schemes are based on spatial, discrete cosine transformation (DCT) and discrete wavelet transformation (DWT) domains. Recently, some vector quantization (VQ) based reversible data hiding schemes have been proposed. This paper proposes an improved reversible data hiding scheme based on VQ-index residual value coding. Experimental results show that our scheme outperforms two recently proposed schemes, namely side-match vector quantization (SMVQ)-based data hiding and modified fast correlation vector quantization (MFCVQ)-based data hiding.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of the embedded cover image to ensure the security of the secret data. The multilayer embedding of a pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting, while the peak point itself remains unchanged. At the extraction end, we no longer need the key information about the peak point; we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of the embedded cover image become almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
1.07042
0.036667
0.019481
0.017815
0.002444
0.001204
0.000099
0.000004
0
0
0
0
0
0
A Fragile Watermark Method For Improving Medical Images Security To address the security of medical images during storage and transmission, the author provides a fragile watermarking method that achieves both integrity detection and accurate tamper localization. The method adopts image blocking, chaotic modulation of the watermark information, and password-based encryption. The watermark information is embedded into the least significant bits of each sub-block of the original medical image. Experimental results show that the algorithm has the following features: (1) the watermarked medical images have high and stable quality; (2) it is sensitive to manipulations, and any manipulation of any pixel can be detected; (3) tampering can be localized to a 2x2 pixel area; (4) the algorithm achieves blind detection with high security.
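A minimal sketch of block-wise fragile watermarking in the spirit of this abstract: for every 2x2 block, check bits derived from the block's upper bit planes (here with a keyed SHA-256 hash, standing in for the paper's chaotic modulation and password scheme) are written into the block's least significant bits; any mismatch at verification localizes tampering to that 2x2 block. All names and parameters are illustrative.

```python
import hashlib
import numpy as np

def block_bits(block, key):
    """Derive 4 check bits from the block's 7 most significant bit planes (keyed)."""
    digest = hashlib.sha256(key + (block >> 1).tobytes()).digest()
    return np.array([(digest[0] >> i) & 1 for i in range(4)],
                    dtype=block.dtype).reshape(2, 2)

def embed(img, key=b"password"):
    marked = img.copy()
    for i in range(0, marked.shape[0] - 1, 2):
        for j in range(0, marked.shape[1] - 1, 2):
            block = marked[i:i + 2, j:j + 2]
            bits = block_bits(block, key)    # independent of the LSB plane, so order is safe
            block &= 0xFE                    # clear the least significant bits
            block |= bits                    # write the check bits into the LSBs
    return marked

def verify(img, key=b"password"):
    tampered = []
    for i in range(0, img.shape[0] - 1, 2):
        for j in range(0, img.shape[1] - 1, 2):
            block = img[i:i + 2, j:j + 2]
            if not np.array_equal(block & 1, block_bits(block, key)):
                tampered.append((i, j))      # tampering localized to this 2x2 block
    return tampered
```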
Medical image authentication using SLT and IWT schemes Over the years, different watermarking techniques have been used for medical image authentication purposes. Some techniques have been presented to detect tampering in the medical image while others can also recover the tampered region after the tamper detection. Many of the previous medical image authentication schemes have successfully achieved their aims; however, the robustness of the authentication scheme against unintentional attacks has not been highlighted sufficiently. This paper presents a new medical image authentication scheme in which the medical image is divided into two regions (i.e., region of interest (ROI) and region of non-interest (RONI)). Then two watermarking methods based on Slantlet transform (SLT) are used to embed data in the ROI and the RONI. The proposed scheme can be used for tamper detection, localization, and recovery in addition to the data hiding. To generate the recovery information of the ROI, a new method has been proposed based on the integer wavelet transform (IWT) coefficients. The experiments that have been conducted to evaluate the proposed authentication scheme proved that it is efficient not only in achieving its main tasks that have been mentioned above but also in having robustness against unintentional attacks (i.e., JPEG compression, additive Gaussian noise (AGN), and salt-and-pepper noise) and that makes it more suitable for the practical applications.
Secured telemedicine using region-based watermarking with tamper localization. Medical images exchanged over public networks require a methodology to provide confidentiality for the image, authenticity of the image ownership and source of origin, and image integrity verification. To provide these three security requirements, we propose in this paper a region-based algorithm based on multiple watermarking in the frequency and spatial domains. Confidentiality and authenticity are provided by embedding robust watermarks in the region-of-non-interest (RONI) of the image using a blind scheme in the discrete wavelet transform and singular value decomposition domain (DWT-SVD). On the other hand, integrity is provided by embedding local fragile watermarks in the region-of-interest (ROI) of the image using a reversible scheme in the spatial domain. The integrity provided by the proposed algorithm is implemented on a block-level of the partitioned-image, thus enabling localized detection of tampered regions. The algorithm was evaluated with respect to imperceptibility, robustness, capacity, and tamper localization capability, using MRI, Ultrasound, and X-ray gray-scale medical images. Performance results demonstrate the effectiveness of the proposed algorithm in providing the required security services for telemedicine applications.
Lossless watermarking for verifying the integrity of medical images with tamper localization. Given the ease of alteration of digital data, integrity verification and tamper detection for medical images are becoming ever more important. In this paper, instead of using the conventional irreversible block-based watermarking approach to achieve tamper localization, we propose to incorporate such functionality into the region-based lossless watermarking scheme. This is achieved by partitioning an image into certain non-overlapping regions and appending the associated local authentication information directly into the watermark payload. A region of authentication, which can be flexibly specified by the user, is partitioned into small regions in a multilevel hierarchical manner. Such hierarchical structure allows the user to easily adjust the localization accuracy, and makes the tamper detection efficient. Experimental results demonstrate the effectiveness of tamper localization.
A region-based lossless watermarking scheme for enhancing security of medical data. This paper presents a lossless watermarking scheme in the sense that the original image can be exactly recovered from the watermarked one, with the purpose of verifying the integrity and authenticity of medical images. In addition, the scheme has the capability of not introducing any embedding-induced distortion in the region of interest (ROI) of a medical image. Difference expansion of adjacent pixel values is employed to embed several bits. A region of embedding, which is represented by a polygon, is chosen intentionally to prevent introducing embedding distortion in the ROI. Only the vertex information of the polygon is transmitted to the decoder for reconstructing the embedding region, which improves the embedding capacity considerably. The digital signature of the whole image is embedded for verifying the integrity of the image. An identifier present in the electronic patient record (EPR) is embedded for verifying authenticity by simultaneously processing the watermarked image and the EPR. In combination with a fingerprint system, the patient's fingerprint information is embedded into several image slices and then extracted for verifying authenticity.
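For concreteness, the difference-expansion step mentioned in the abstract above can be sketched as follows. This is the textbook integer-transform version for a single pixel pair, not the paper's region-based scheme; the function names are illustrative, and overflow/underflow handling and the usual location map are omitted.

```python
# Minimal sketch of difference expansion (DE) on one pixel pair.
# Illustrative only: real schemes add overflow/underflow checks and a
# location map, and restrict embedding to a chosen region of the image.

def de_embed(x, y, bit):
    """Embed one bit into the pair (x, y) by expanding their difference."""
    l = (x + y) // 2            # integer average, preserved by the transform
    h = x - y                   # pixel difference
    h_marked = 2 * h + bit      # expand the difference and append the bit
    return l + (h_marked + 1) // 2, l - h_marked // 2

def de_extract(x_marked, y_marked):
    """Recover the embedded bit and the original pair exactly."""
    l = (x_marked + y_marked) // 2
    h_marked = x_marked - y_marked
    bit = h_marked % 2
    h = h_marked // 2           # undo the expansion (floor division)
    return bit, l + (h + 1) // 2, l - h // 2

# Round trip: (100, 98) with bit 1 -> (102, 97) -> (1, 100, 98)
assert de_embed(100, 98, 1) == (102, 97)
assert de_extract(102, 97) == (1, 100, 98)
```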
Watermarking techniques used in medical images: a survey. The ever-growing numbers of medical digital images and the need to share them among specialists and hospitals for better and more accurate diagnosis require that patients' privacy be protected. As a result of this, there is a need for medical image watermarking (MIW). However, MIW needs to be performed with special care for two reasons. Firstly, the watermarking procedure cannot compromise the quality of the image. Secondly, confidential patient information embedded within the image should be flawlessly retrievable without risk of error after image decompressing. Despite extensive research undertaken in this area, there is still no method available to fulfill all the requirements of MIW. This paper aims to provide a useful survey on watermarking and offer a clear perspective for interested researchers by analyzing the strengths and weaknesses of different existing methods.
Wavelet-based copyright-protection scheme for digital images based on local features A lossless, robust copyright-protection scheme for digital images based on cryptography and watermarking is proposed. In the proposed wavelet-based scheme, the logo image is not embedded into the protected image. The secret key is generated during the embedding process using local features extracted from the perceptually prominent components of the host image after the digital wavelet transform. Digital signature and time-stamping technologies are then used for further protection. With the secret key, the logo image extraction process can be completed without the host image. Experimental results show that the proposed scheme is effective and robust against common image processing and geometric distortions.
Lossless data embedding--new paradigm in digital watermarking One common drawback of virtually all current data embedding methods is the fact that the original image is inevitably distorted due to data embedding itself. This distortion typically cannot be removed completely due to quantization, bit-replacement, or truncation at the grayscales 0 and 255. Although the distortion is often quite small and perceptual models are used to minimize its visibility, the distortion may not be acceptable for medical imagery (for legal reasons) or for military images inspected under nonstandard viewing conditions (after enhancement or extreme zoom). In this paper, we introduce a new paradigm for data embedding in images (lossless data embedding) that has the property that the distortion due to embedding can be completely removed from the watermarked image after the embedded data has been extracted. We present lossless embedding methods for the uncompressed formats (BMP, TIFF) and for the JPEG format. We also show how the concept of lossless data embedding can be used as a powerful tool to achieve a variety of nontrivial tasks, including lossless authentication using fragile watermarks, steganalysis of LSB embedding, and distortion-free robust watermarking.
Deforestation: transforming programs to eliminate trees An algorithm that transforms programs to eliminate intermediate trees is presented. The algorithm applies to any term containing only functions with definitions in a given syntactic form, and is suitable for incorporation in an optimizing compiler.
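As a loose illustration of what eliminating intermediate structures buys (this is not Wadler's transformation algorithm, which rewrites first-order functional programs given in a particular syntactic form), the sketch below computes the same value with and without materialising an intermediate list.

```python
# Illustration of the effect of deforestation/fusion, not the algorithm
# itself: both functions compute the same result, but the second never
# builds the intermediate list of squares.

def sum_of_squares_naive(xs):
    squares = [x * x for x in xs]   # intermediate structure is allocated
    return sum(squares)

def sum_of_squares_fused(xs):
    total = 0
    for x in xs:                    # producer and consumer fused into one loop
        total += x * x
    return total

assert sum_of_squares_naive(range(10)) == sum_of_squares_fused(range(10)) == 285
```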
A tutorial on cross-layer optimization in wireless networks This tutorial paper overviews recent developments in optimization-based approaches for resource allocation problems in wireless systems. We begin by overviewing important results in the area of opportunistic (channel-aware) scheduling for cellular (single-hop) networks, where easily implementable myopic policies are shown to optimize system performance. We then describe key lessons learned and the main obstacles in extending the work to general resource allocation problems for multihop wireless networks. Towards this end, we show that a clean-slate optimization-based approach to the multihop resource allocation problem naturally results in a "loosely coupled" cross-layer solution. That is, the algorithms obtained map to different layers [transport, network, and medium access control/physical (MAC/PHY)] of the protocol stack, and are coupled through a limited amount of information being passed back and forth. It turns out that the optimal scheduling component at the MAC layer is very complex, and thus needs simpler (potentially imperfect) distributed solutions. We demonstrate how to use imperfect scheduling in the cross-layer framework and describe recently developed distributed algorithms along these lines. We conclude by describing a set of open research problems
Realization of a geometry-theorem proving machine
Connectivity properties of a packet radio network model A model of a packet radio network in which transmitters with range R are distributed according to a two-dimensional Poisson point process with density D is examined. To ensure network connectivity, it is shown that πR²D, the expected number of nearest neighbors of a transmitter, must grow logarithmically with the area of the network. For an infinite area there exists an infinite connected component with nonzero probability if πR²D > N0, for some critical value N0. It is shown that 2.195 < N0 < 10.526.
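A quick numerical reading of the bound quoted above, under the quantities as stated in the abstract: if πR²D is the expected number of nearest neighbours, the range needed to reach a target value N0 at density D is R = sqrt(N0 / (πD)). The density value below is purely illustrative.

```python
import math

def range_for_expected_neighbors(n0, density):
    """Range R such that pi * R**2 * density == n0 expected nearest neighbours."""
    return math.sqrt(n0 / (math.pi * density))

# Illustrative density of 1e-4 transmitters per unit area, evaluated at the
# two bounds on the critical value N0 quoted in the abstract.
for n0 in (2.195, 10.526):
    print(f"N0 = {n0}: required range R = {range_for_expected_neighbors(n0, 1e-4):.1f}")
```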
Structural Synthesis of Programs from Refined User Requirements (Programming Boiler Control in NUT) The aim of this work is to demonstrate the feasibility of using a declarative language as a tool for automated implementation of requirements written in a semi-formal manner. The technique of structural synthesis of programs based on automatic proof search in intuitionistic propositional calculus implemented in the NUT system is used for solving the steam-boiler problem. The goal of the experiment is to bridge a gap between the language of requirements and an implementation. An appropriate set of concepts is developed for representing the problem, i.e. writing the requirements in a form understandable by the program synthesis tools. A complete implementation, including a simulator of the actual steam-boiler and a control panel, is written in NUT. For solving the problem, a technique of deductive synthesis of programs based on automatic proof search in intuitionistic propositional calculus, implemented in the NUT system, has been used (Tyu94). In the NUT system, an appropriate set of concepts represented as classes has to be developed for each kind of problem. The concepts for simulating the steam-boiler and programming a boiler model are: device, boiler, pump, pump controller, level meter, flow meter, etc. These concepts are represented as sets of equations taken directly from the requirements specification. Another set of concepts is required for implementing a control algorithm: propositions, rules and actions. The control algorithm is represented as a collection of rules written as relations in NUT. The rules are triggered by conjunctions of propositions describing a state of the system at an observable time moment. These rules are also derived directly from the requirements specification after introducing a proper collection of propositions and predicates which are implicitly present in the text of requirements. With the present experiment, we demonstrate that the presentation of requirements as given in the steam-boiler example enables us to extract rules immediately from the requirements text.
Integer transform based reversible watermarking incorporating block selection. • We use the invariant mean value of a block to evaluate the local complexity. • The block size can be set to a small value by the invariability of the mean value. • The reduced-size location map is created by the invariability of the mean value. • We can flexibly modify each pixel in a block using DE or HS.
1.24
0.24
0.12
0.028211
0.012114
0.00125
0.000556
0.000156
0
0
0
0
0
0
Transformation Rules and Strategies for Functional-Logic Programs This paper abstracts the contents of a PhD dissertation entitled 'Transformation Rules and Strategies for Functional-Logic Programs' which was recently defended. The techniques it develops are based on fold/unfold transformations and can be used to optimize integrated (functional-logic) programs for a wide class of applications. Experimental results show that typical examples in the field of Artificial Intelligence are successfully enhanced by our transformation system SYNTH. The thesis presents the first approach of this kind for declarative languages that integrate the best features of functional and logic programming.
Improved Tupling for Optimizing Multi-paradigm Declarative Programs This paper investigates the optimization by fold/unfold of declarative programs that integrate the best features from both functional and logic programming. Transformation sequences are guided by a mixed strategy which, in three low-level transformation phases, successfully combines two well-known heuristics -- composition and tupling -- thus avoiding the construction of intermediate data structures and redundant sub-computations. In particular, whereas composition is able to produce a single function definition for some nested (composed) functions, the tupling method merges non-nested function calls into a new function definition called a eureka. We solve the non-trivial problem of discovering the set of calls to be tupled in an incremental way, i.e. by chaining different eureka definitions where only non-nested calls sharing common variables are taken into account. Moreover, by appropriately combining both strategies, together with a simplification pre-process based on a kind of normalization, we automatically optimize a wide range of programs (with nested and/or non-nested function calls) at a very low cost.
Automatic Optimization of Multi-paradigm Declarative Programs This paper investigates the optimization by fold/unfold of functional-logic programs with operational semantics based on needed narrowing. Transformation sequences are automatically guided by tupling, a powerful strategy that avoids multiple accesses to data structures and redundant sub-computations. We systematically decompose in detail the internal structure of tupling in three low-level transformation phases (definition introduction, unfolding and abstraction with folding) that constitute the core of our automatic tupling algorithm. The resulting strategy is (strongly) correct and complete, efficient, elegant and realistic. In addition (and most important), our technique preserves the natural structure of multi-paradigm declarative programs, which contrasts with prior pure functional approaches that produce corrupt integrated programs with (forbidden) overlapping rules.
An Automatic Composition Algorithm for Functional Logic Programs Functional logic languages with a complete operational semantics are based on narrowing, which combines the instantiation of variables with the reduction of expressions. In this paper, we investigate the relationship between partial evaluation and more general transformations based on folding/unfolding. First, we show that the transformations obtained by partial evaluators can also be achieved by folding/unfolding using a particular kind of eurekas which can be mechanically attained. Then, we propose an algorithm (based on folding/unfolding) which starts with the automatic eureka generation and is able to perform program composition, i.e. it is able to produce a single function definition for some nested functions of the original program. This avoids the construction of intermediate data structures that are produced by the inner function and consumed as inputs by the outer function. As opposed to both partial evaluation and (general) fold/unfold transformations, strong correctness of the transformed programs holds w.r.t. goals which contain calls to the old function symbols--i.e. from the original program--as well as to the new ones--i.e. introduced during the transformation.
Rules + strategies for transforming lazy functional logic programs This work introduces a transformation methodology for functional logic programs based on needed narrowing, the optimal and complete operational principle for modern declarative languages which integrate the best features of functional and logic programming. We provide correctness results for the transformation system w.r.t. the set of computed values and answer substitutions and show that the prominent properties of needed narrowing--namely, the optimality w.r.t. the length of derivations and the number of computed solutions--carry over to the transformation process and the transformed programs. We illustrate the power of the system by adapting to our setting two well-known transformation strategies (composition and tupling). We also provide an implementation of the transformation system which, by means of some experimental results, highlights the potentiality of our approach.
Transformation in a Non-Strict Language: An Approach to Instantiation A problem arises when the usual rules of fold/unfold transformation are applied in a non-strict programming system. Case analysis by instantiation may alter strictness characteristics of the function being transformed, and hence alter the behaviour of programs. Although such behavioural changes can in general be quite subtle, they are all too apparent if the program is interactive, since I/O interleaving is closely tied to strictness properties. A two-phase solution to the problem is proposed. It comprises a suitable form of strictness analysis to determine whether a proposed instantiation is safe, and a procedure to re-formulate troublesome definitions so that, in effect, case analysis is shifted to a nearby safe context.
A Transformation System for Developing Recursive Programs A system of rules for transforming programs is described, with the programs in the form of recursion equations. An initially very simple, lucid, and hopefully correct program is transformed into a more efficient one by altering the recursion structure. Illustrative examples of program transformations are given, and a tentative implementation is described. Alternative structures for programs are shown, and a possible initial phase for an automatic or semiautomatic program-manipulation system is indicated.
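To make the idea of "altering the recursion structure" concrete, here is a small Python analogue of a tupling-style derivation (the system above works on recursion equations, not Python): the naive doubly recursive Fibonacci definition is replaced by an auxiliary pair-returning definition, cutting the work from exponential to linear.

```python
# Python analogue of a fold/unfold tupling step. The second definition is
# the kind of program such a derivation produces: one recursive call per
# step instead of two.

def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)   # exponential: two calls per step

def fib_pair(n):
    """Eureka definition: returns the tuple (fib(n), fib(n + 1))."""
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n - 1)                       # single recursive call
    return (b, a + b)

def fib_tupled(n):
    return fib_pair(n)[0]

assert [fib_naive(i) for i in range(12)] == [fib_tupled(i) for i in range(12)]
```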
Semantics and axiomatics of a simple recursive language. In this paper, we provide a simple recursive programming language with a semantics and a formal proof system, along the lines of [5], [17] and [23]. We show that the semantics used is the “best” possible if one admits the validity of Algol's copy rule, and that the proof system is complete with respect to the semantics. The definitions and methods used are meant to provide a basis for a broader theory of program schemas, which models parallel as well as sequential programs.
An incremental algorithm for software analysis In programming environments aimed at “industrial strength” software development, there is a need for software tools which facilitate both design and maintenance. These tools should encourage experimentation with different system configurations which enable designers to a priori estimate the associated system complexity and judge the ease of accommodating enhancements. Maintainers should be able to check straightforwardly the ramifications of system changes due to enhancements or “bug fixes”. With interprocedural data flow information about the definition and use of global variables and parameters in a software system, tools can be built to perform these tasks. For large, complex systems, efficient methods for interprocedural analysis are necessarily incremental, as a software system is a dynamically evolving entity. Incremental algorithms update current information about a system in response to a change rather than re-calculating the information by re-analyzing the entire system. This paper reports our development of a general purpose incremental data flow analysis algorithm, which is applicable to both intraprocedural and interprocedural domains. It is based on interval analysis, a technique whose observed performance is linear for most programs; under reasonable assumptions about program flow graphs this linearity can be verified [20].
Multilevel hypergraph partitioning: applications in VLSI domain In this paper, we present a new hypergraph-partitioning algorithm that is based on the multilevel paradigm. In the multilevel paradigm, a sequence of successively coarser hypergraphs is constructed. A bisection of the smallest hypergraph is computed and it is used to obtain a bisection of the original hypergraph by successively projecting and refining the bisection to the next level finer hypergraph. We have developed new hypergraph coarsening strategies within the multilevel framework. We evaluate their performance both in terms of the size of the hyperedge cut on the bisection, as well as on the run time for a number of very large scale integration circuits. Our experiments show that our multilevel hypergraph-partitioning algorithm produces high-quality partitioning in a relatively small amount of time. The quality of the partitionings produced by our scheme is on the average 6%-23% better than those produced by other state-of-the-art schemes. Furthermore, our partitioning algorithm is significantly faster, often requiring 4-10 times less time than that required by the other schemes. Our multilevel hypergraph-partitioning algorithm scales very well for large hypergraphs. Hypergraphs with over 100 000 vertices can be bisected in a few minutes on today's workstations. Also, on the large hypergraphs, our scheme outperforms other schemes (in hyperedge cut) quite consistently with larger margins (9%-30%).
A Novel Fast Image Encryption Scheme Based On 3d Chaotic Baker Maps Symmetric block encryption schemes, designed on invertible two-dimensional chaotic maps on a torus or a square, prove feasible and secure for real-time image encryption according to commonly used criteria given in the literature. In this paper, a typical map of this kind, namely, the baker map, is further extended to be three-dimensional and then used to speed up image encryption while retaining its high degree of security. The proposed algorithm is described in detail, along with its security analysis and implementation. Experimental results show that the three-dimensional baker map is 2-3 times faster than the two-dimensional one, showing its great potential in real-time image encryption applications.
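For orientation, a toy version of the two-dimensional, two-strip discretized baker map on an N×N grid is sketched below; the paper's contribution is the three-dimensional extension, which is not reproduced here. The equal strip widths, single iteration, and even N are simplifying assumptions, since key-dependent strip widths and repeated iteration are what an actual cipher would use.

```python
# Toy two-strip discretized baker map on an N x N grid (2D case only).
# Assumes N is even and both strips have width N/2; a real scheme derives
# unequal, key-dependent strip widths and iterates the permutation.

def baker_2strip(x, y, n):
    """Map grid position (x, y), with 0 <= x, y < n, to its new position."""
    if x < n // 2:
        return 2 * x + y % 2, y // 2                  # left strip -> top half
    return 2 * (x - n // 2) + y % 2, n // 2 + y // 2  # right strip -> bottom half

def scramble(image):
    """Apply one iteration of the map to a square image given as a 2D list."""
    n = len(image)
    out = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            nx, ny = baker_2strip(x, y, n)
            out[ny][nx] = image[y][x]
    return out

# Sanity check: the map is a permutation of the grid (no two cells collide).
n = 8
assert len({baker_2strip(x, y, n) for x in range(n) for y in range(n)}) == n * n
```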
Do you want to take notes?: identifying research missions in Yahoo! search pad Addressing users' information needs has been one of the main goals of Web search engines since their early days. In some cases, users cannot see their needs immediately answered by search results, simply because these needs are too complex and involve multiple aspects that are not covered by a single Web or search results page. This typically happens when users investigate a certain topic in domains such as education, travel or health, which often require collecting facts and information from many pages. We refer to this type of activity as "research missions". These research missions account for 10% of users' sessions and more than 25% of all query volume, as verified by a manual analysis that was conducted by Yahoo! editors. We demonstrate in this paper that such missions can be automatically identified on-the-fly, as the user interacts with the search engine, through careful runtime analysis of query flows and query sessions. The on-the-fly automatic identification of research missions has been implemented in Search Pad, a novel Yahoo! application that was launched in 2009, and that we present in this paper. Search Pad helps users keep track of results they have consulted. Its novelty, however, is that unlike previous note-taking products, it is automatically triggered only when the system decides, with a fair level of confidence, that the user is undertaking a research mission and thus is in the right context for gathering notes. Beyond the Search Pad specific application, we believe that changing the level of granularity of query modeling, from an isolated query to a list of queries pertaining to the same research mission, so as to better reflect a certain type of information needs, can be beneficial in a number of other Web search applications. Session-awareness is growing and it is likely to play, in the near future, a fundamental role in many on-line tasks: this paper presents a first step on this path.
Block-based reversible data embedding This paper proposes a block-based reversible data embedding method applied to grayscale cover images. The image is first divided into two areas: data embedding area and auxiliary information embedding area. The data embedding area is further partitioned into 3*3 blocks. According to two predefined thresholds, all the blocks are divided into three categories: smooth, normal or complex. The complex blocks do not embed any data. A smooth block will embed twice the data of a normal block. After predicting the edge direction in a smooth or normal block, Alattar's difference expansion of a generalized integer transform method is exploited to embed data into each vector in the block. Then, the location map and other auxiliary information are embedded in the auxiliary information embedding area using a novel least significant bit (LSB) substitution method. Our method not only improves the embedded image quality but also provides a higher payload capacity compared to Alattar's method. For some smoother images, the embedding capacity of the proposed method is higher than 1 bit/pixel in a single pass and also derives acceptable embedded image quality (PSNR=30). To the best of our knowledge, the proposed method is the first to do this.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of the embedded cover image to ensure the security of the secret data. The multilayer embedding of a pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting, while the peak point remains unchanged. At the extraction end, we no longer need key information about the peak point; we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of the embedded cover image almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
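For reference, the baseline peak/zero-point histogram shifting that the method above builds on can be sketched as follows. This is the textbook version operating on a flat list of 8-bit values, not the localized, peak-preserving multilayer variant described in the abstract; it assumes the peak is below 255, that the chosen zero bin is actually empty, and that the extractor knows the payload length.

```python
# Baseline peak/zero histogram shifting on a flat list of 8-bit values.
# Sketch only: assumes the zero bin is truly empty (otherwise a location
# map is needed) and that the extractor knows how many bits were embedded.

from collections import Counter

def hs_embed(pixels, bits):
    hist = Counter(pixels)
    peak = max(hist, key=hist.get)                                  # most frequent value
    zero = min(range(peak + 1, 256), key=lambda v: hist.get(v, 0))  # emptiest bin to the right
    out, it = [], iter(bits)
    for p in pixels:
        if peak < p < zero:
            out.append(p + 1)            # shift right to free the bin at peak + 1
        elif p == peak:
            out.append(p + next(it, 0))  # peak pixels carry the payload
        else:
            out.append(p)
    return out, peak, zero

def hs_extract(marked, peak, zero):
    bits, restored = [], []
    for p in marked:
        if p == peak:
            bits.append(0); restored.append(peak)
        elif p == peak + 1:
            bits.append(1); restored.append(peak)
        elif peak + 1 < p <= zero:
            restored.append(p - 1)       # undo the shift
        else:
            restored.append(p)
    return bits, restored                # truncate bits to the known payload length
```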
1.22
0.22
0.048842
0.03
0.012018
0.002
0.000316
0.000047
0
0
0
0
0
0
Program Development as a Formal Activity A methodology of program development by transformations is outlined. In particular, ways of representing the transformation rules are discussed, and the relationship between notions of their correctness and the semantic definition of programming languages is studied. How transformation techniques are complemented by the use of abstract data types and assertions is described. In the resulting calculus of transformations, the single rules not only represent design or optimization techniques, but they also incorporate verification principles. To illustrate this approach, the Warshall algorithm is developed by successive applications of transformations.
Self-redefinition as a program manipulation strategy This is an exploration of a constructive strategy for program improvement and synthesis. The first part recalls the unfolding-folding style of manipulation initiated by BURSTALL and DARLINGTON, with an application to proofs of equivalence of certain function compositions. The second part, in a more abstract way, presents three basic “forms” and their associated “transforms” constructed with this strategy in a hierarchical order; they may serve as goals of transformations. The last part associates self-redefinition with mixed strategies for program composition: symbolic macro replacement, logarithmic speed-up, and resolution of formal recurrences. Each situation where technique and method apply is illustrated with examples, and open problems are raised.
An exercise in program synthesis: algorithms for computing the transitive closure of a relation This paper contains the synthesis of several transitive closure algorithms (including Warshall's) from one common high level definition. For deriving recursion equations, Burstall's and Darlington's unfolding and folding technique is used. A special effort is made to treat the first step of the syntheses (i.e. finding appropriate recursion arguments) systematically.
Transformational Derivation of Parsing Algorithms Executable on Parallel Architectures
Structured Programming with go to Statements For serious students of structured programming, and also for language designers, Knuth's "Structured Programming with go to Statements" is probably the paper to read. It is by far the most complete description of structured programming of all the selections in this book. Even though it originally was published in late 1974, Knuth's ideas have not aged a bit. The title, as Knuth acknowledges, is somewhat of a misnomer: ". . . I should confess that the title of this article was chosen primarily to generate attention." Indeed, he is not arguing that structured programming always should be carried out with goto statements; instead, as he says, "what I am really doing is striving for a reasonably well-balanced viewpoint about the proper role of go to statements." Like a number of other authors whose views are presented in this book, Knuth seems deeply concerned that the average programmer will over-react to the "no goto" philosophy. Reflecting that concern, the first few pages of Knuth's paper give an indication of the fanaticism of its time --- a fanaticism that has largely died away. For history buffs, Knuth has carefully documented the origins of structured programming. He claims that the programming style now referred to as "structured programming" actually was first practiced by D.V. Schorre in 1960, and that was followed by publication of the work of Peter Naur in 1963 and of George Forsythe in 1964 --- all of this before the better-known works of Dijkstra, and of Böhm and Jacopini. There are three major themes in Knuth's paper, and a vast number of examples to illustrate each one. One theme --- a familiar one, if you've read many of the other papers in this book --- is that a program needs to be rewritten several times before it can be considered a truly good program. As Knuth says, ". . . I learned again that I should always keep looking for improvements, even when I have a satisfactory program." Indeed, one gets the impression that Knuth views this as the major virtue of structured programming: The requirement to eliminate gotos forces the programmer to rewrite and thus think more about what he was trying to accomplish with his program. A second theme --- one that you won't find in any of the other papers --- is that efficiency is important in some programming applications, and that the programmer needs the freedom to use gotos to optimize critical portions of a program. Actually, Martin Hopkins makes the same point in "A Case for the GOTO" [Paper 9], but he does so much less forcefully than Knuth. Knuth recognizes the danger of overemphasizing optimization; as he says, "premature optimization is the root of all evil." And, he does state that only three percent of the code in a typical program ever needs to be optimized; but for that critical three percent, he demonstrates that structured code often is twenty to thirty percent less efficient than equivalent code with gotos. Unfortunately, the point is repeated so many times, with so many examples, that the average reader is likely to conclude that Knuth is obsessed with efficiency. However, given the number of authors exhorting programmers to ignore efficiency altogether, it is probably very healthy to have someone argue for a careful consideration of program efficiency. Knuth's third main theme is that structured programming could be made much more practical with the addition of a few language constructs. In addition to constructs like LEAVE and BREAK, Knuth argues for a "situation" construct patterned after a proposal originally made by C.T. Zahn. In keeping with the heretical title of his paper, Knuth introduces a fourth theme: There are times when the programmer should put gotos into his code, rather than take them out. For example, gotos can be used to convert recursion to iteration; or to implement coroutines; or to eliminate Boolean variables by branching into common code. In this context, Knuth suggests the following strategy: First, write the program in a structured manner to convince yourself that it is correct; then, transform it into an efficient program, possibly by introducing some goto statements; and, finally, leave the original structured code behind as documentation, so that subsequent readers can understand how the transformation took place. Whether the average programmer would go through these steps in an orderly, formal way is something worth pondering. My own suspicion is that it won't work, but I'm often branded a skeptic. To conclude: It's probably best not to read Knuth's paper in one sitting, for you almost surely will become mentally fatigued by the eight major examples, each of which is rewritten three or four (or more!) times in an ALGOL-like pseudocode. Read the paper piece by piece, and do read the code --- you'll learn a lot from it!
Software Technology in the 1990's: Using a New Paradigm
Algorithm theories and design tactics Algorithm theories represent the structure common to a class of algorithms, such as divide-and-conquer or backtrack. An algorithm theory for a class A provides the basis for design tactics — specialized methods for designing A-algorithms from formal problem specifications. We illustrate this approach with recent work on the theory of global search algorithms and briefly mention several others. Several design tactics have been implemented in the KIDS/CYPRESS system and have been used to semiautomatically derive many algorithms.
Transformational programming: applications to algorithms and systems Ten years ago Cheatham and Wegbreit [4] proposed a transformational program development methodology based on notions of top-down stepwise program refinement first expressed by Dijkstra [10] and Wirth [45]. A schema describing the process of this methodology is given in fig. 1. To develop a program by transformation, we first specify the program in as high a level of abstraction and as great a degree of clarity as our programming language admits. This high level problem statement program P is proved correct semimechanically according to some standard approach (see Floyd and Hoare [15, 21]). Next, using an interactive system equipped with a library of encoded transformations, each of which maps a correct program into another equivalent program, we select and apply transformations one at a time to successive versions of the program until we obtain a concrete, low level, efficient implementation version P'. The goals of transformational programming are to reduce programming labor, improve program reliability, and upgrade program performance. In order for labor to be reduced, the effort required to obtain P, prove it correct, and derive P' by transformation should be less than the effort required to code P from scratch, and also to debug it. Program reliability will be improved if P can be certified correct, and if each transformation preserves program meaning. Finally, program performance will be upgraded if transformations are directed towards increased efficiency. Experimental transformational systems that emphasize one or more aspects of the methodology outlined above have been implemented by Cheatham [5], Darlington [3], Loveman [27], Standish [41], Feather [14], Huet and Lang [11], and others. However, all of these systems fall short of the goals, for a number of reasons that include: (1) inability to mechanize the checking of transformation applicability conditions; (2) reliance on large, unmanageable collections of low level transformations, and long arduous derivation sequences; (3) dependency on transformations whose potential for improving program performance is unpredictable; (4) use of source languages insufficiently high level to accommodate perspicuous initial program specifications and powerful algorithmic transformations. Yet, convincing evidence that this new methodology will succeed has come from recent advances in verification, program transformations, syntax directed editing systems, and high level languages. These advances, discussed below, represent partial solutions to the problems stated above, and could eventually be integrated into a single system. (1) The transformational approach to verification was pioneered by Gerhart [19] and strengthened by the results of Schwartz [39], Scherlis [36], Broy et al [2], Koenig and Paige [26, 31], Blaustein [1], and others. Due mainly to improved technology for the mechanization of proofs of enabling conditions that justify application of transformations, this approach is now at a point where it can be effectively used in a system. Such mechanization depends strongly on program analysis, and, in particular, on reanalyses after a program is modified. Attribute grammars [24] have been shown to be especially useful in facilitating program analysis [23]. Moreover, Reps [34] has discovered an algorithm that reevaluates attributes in optimal time after a program undergoes syntax directed editing changes (as are allowed on the Cornell Synthesizer [43]). He has implemented his algorithm recently, and has reported initial success. (2) There are encouraging indications that a transformational system can be made to depend mainly on a small but powerful collection of transformations applied in top-down fashion to programs specified at various levels of abstraction from logic down to assembler. We envision such a system as a fairly conventional semiautomatic compiler in which classes of transformations are selected semimechanically in a predetermined order, and are justified by predicates supplied mechanically but proved semimanually. Of particular importance is nondeterminism removal which, as formulated by Sharir [40], could lead to a technique for turning naive, nondeterministic programs into deterministic programs with emergent strategies. Such programs could then be transformed automatically by finite differencing [13, 16, 17, 18, 29, 30, 31] and jamming [28, 31, 20] (which we have implemented) into programs whose data access paths are fully determined. The SETL optimizer could improve these programs further by automatically choosing efficient data structure representations and aggregations. (3) Of fundamental importance to the transformations just mentioned is the fact that they can be associated with speedup predictions. Fong and Ullman [16] were the first to characterize an important class of algorithmic differencing transformations in terms of accurate asymptotic speedup predictions, e.g., they gave conditions under which repeated calculation of a set former {x in s | k(x)} could be computed in O(#s) + cost(k) steps. By considering stronger conditions and special cases for the boolean valued subpart k, Paige [31] later gave sharper speedup predictions (e.g., either O(1) steps for each encounter of the set former or a cumulative cost of O(#s) steps for every encounter) associated with another differencing method. Both Morgenstern [28] and Paige [31] prove constant factor improvements due to their jamming transformations (implemented by Morgenstern for the improvement of file processing, and by Paige for the optimization of programs). Constant factor speedup has also been observed for data structure selection by the method of basings, but a supporting analytic study has not been presented [8, 37]. (4) Essential to the whole transformational process is a wide spectrum programming language (or set of languages) that can express a program at every stage of development from the initial abstract specification down to its concrete implementation realization. Since transformations applied to programs written at the highest levels of abstraction are likely to make the most fundamental algorithmic changes, it is important to stress abstract features in our language. In addition to supporting transformations, the highest level language dictions should support lucid initial specifications, verification, and even program analysis. Of special importance is SETL [38, 9], because its abstract set theoretic dictions can model data structures and algorithms easily, because its philosophy of avoiding hidden asymptotic costs facilitates program analysis, because its semantics conforms to finite set theory and can accommodate a set theoretic program logic, and because it is wide spectrum. As is evidenced by the work of Schwartz, Fong, Paige, and Sharir, SETL is also a rich medium for transformation.
Planware - Domain-Specific Synthesis of High-Performance Schedulers Planware is a domain-specific generator of high-performance scheduling software, currently being developed at Kestrel Institute. Architecturally, Planware is an extension of the Specware system with domain-independent and domain-dependent parts. The domain-independent part includes a general algorithm design facility (including mechanisms to synthesize global-search and constraint propagation algorithms), as well as support for theorem-proving and witness finding. The domain-dependent part includes scheduling domain knowledge and architecture representations, and other domain-specific refinement knowledge that relates the scheduling domain to general algorithm design and data type refinement. Using Planware, the user interactively specifies a problem and then the system automatically generates a formal specification and refines it.
An Equivalence Preserving First Order Unfold/fold Transformation System An unfold/fold transformation system for first order programs is presented which comprises an unfolding rule, a folding rule and a replacement rule. It is intended to work with a first order theory specifying the meaning of primitives, on top of which new relations are built by programs. It preserves the provability relationship P |- G between a call-consistent program P and a goal formula G such that P is strict with respect to G. It also preserves the logical consequence relationship for three-valued logic.
The internet of things: a survey In recent years, the Internet of Things (IoT) has drawn significant research attention. IoT is considered as a part of the Internet of the future and will comprise billions of intelligent communicating `things'. The future of the Internet will consist of heterogeneously connected devices that will further extend the borders of the world with physical entities and virtual components. The Internet of Things (IoT) will empower the connected things with new capabilities. In this survey, the definitions, architecture, fundamental technologies, and applications of IoT are systematically reviewed. Firstly, various definitions of IoT are introduced; secondly, emerging techniques for the implementation of IoT are discussed; thirdly, some open issues related to the IoT applications are explored; finally, the major challenges which need addressing by the research community and corresponding potential solutions are investigated.
Sharing multiple secrets in visual cryptography The secret sharing schemes in conventional visual cryptography are characterized by encoding one shared secret into a set of random transparencies which reveal the secret to the human visual system when they are superimposed. In this paper, we propose a visual secret sharing scheme that encodes a set of x≥2 secrets into two circle shares such that no single share leaks the secrets and the x secrets can be obtained one by one by stacking the first share and the rotated second shares with x different rotation angles. This is the first true result that discusses the sharing ability in visual cryptography up to any general number of multiple secrets in two circle shares.
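For background, the conventional single-secret construction that this abstract contrasts itself with is the 2-out-of-2 visual secret sharing scheme with 2×2 subpixel expansion, sketched below; the paper's circular two-share, multi-secret construction is not reproduced here.

```python
# Basic 2-out-of-2 visual secret sharing with 2x2 subpixel expansion.
# Pixels: 0 = white, 1 = black; stacking two transparencies is pixel-wise OR.

import random

# Every pattern has exactly two black subpixels.
PATTERNS = [(0, 1, 1, 0), (1, 0, 0, 1), (0, 1, 0, 1),
            (1, 0, 1, 0), (0, 0, 1, 1), (1, 1, 0, 0)]

def share_pixel(secret_bit):
    """Return the 2x2 subpixel blocks (as 4-tuples) for the two shares."""
    block = random.choice(PATTERNS)
    if secret_bit == 0:                        # white: identical blocks, stack is half black
        return block, block
    return block, tuple(1 - b for b in block)  # black: complementary blocks, stack is all black

def stack(block_a, block_b):
    return tuple(a | b for a, b in zip(block_a, block_b))

# A white secret pixel stacks to 2 black subpixels, a black one to 4,
# while each share on its own looks like uniform random noise.
assert sum(stack(*share_pixel(0))) == 2
assert sum(stack(*share_pixel(1))) == 4
```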
Classification and retrieval of reusable components using semantic features Automated assistance for software component reuse involves supporting retrieval, adaptation and verification of software components. The informality of feature-based software classification schemes is an impediment to formally verifying the reusability of a software component. The use of formal specifications to model and retrieve reusable components alleviates the informality, but the formal reasoning required for retrieval introduces questions of scalability. To provide scalability, current retrieval systems resort to syntactic classification at some level of abstraction, abandoning the semantic information provided by the specification. We propose a methodology that shifts the overhead of formal reasoning from the retrieval to the classification phase of reuse. Software components are classified using semantic features that are derived from their formal specification. Retrieval of functionally similar components can then be accomplished based on the stored feature sets. Formal verification can be applied to precisely determine the reusability of the set of similar components.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of the embedded cover image to ensure the security of the secret data. The multilayer embedding of a pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting, while the peak point remains unchanged. At the extraction end, we no longer need key information about the peak point; we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of the embedded cover image almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
1.023825
0.034717
0.025053
0.020249
0.005209
0.002974
0.000944
0.000122
0.000038
0.000002
0
0
0
0
A general framework for robust watermarking security The analysis of the security of watermarking algorithms has received increasing attention since it has been recognized that the sole investigation of robustness issues is not enough to properly address the challenges set by practical applications. Such a security analysis, though, is still in its infancy, to the point that a general agreement has not yet been reached even on the most fundamental problems. The purpose of this paper is to provide a general security framework encompassing most of the problems encountered in real-world applications. By considering the amount of information the attacker has about the watermarking algorithm, we introduce the notion of fair and unfair attacks, so as to ease the classification of different systems and attacks. Though we recognize that many important differences exist between watermarking and cryptographic security, a large part of our work is inspired by the Diffie-Hellman paradigm, which is widely used in cryptography. For each class of systems great care is taken to describe both the attacker's and the watermarker's point of view, presenting the challenges raised by each system to these different actors. Finally, we try to outline some research directions which, in our view, deserve further analysis.
Security evaluation of informed watermarking schemes In this paper, a security evaluation of an important watermarking class based on quantization is given. Theoretical developments and practical simulations are used to measure the security level of watermarking techniques. We quantify the contribution of each observation available to the attacker to the total information gathered about the watermarking secrets. By exploiting the weaknesses of the watermarking techniques, we find that using Quantization Index Modulation (QIM) with a continuous secret key is equivalent to using a secret key with two states. The latter is easier to estimate than a continuous key. We then propose a secure version of a classical trellis-coded quantization watermarking scheme. The security is guaranteed thanks to the trellis path generated from the discrete key and the message. We show that the spread transform can represent a second or alternative security level for watermarking systems. It makes it possible to increase the watermarking security level and to keep the embedded message hard to read for unauthorised users.
Content-based image watermarking via public-key cryptosystems Digital watermarking is a technique to insert an information-carrying digital signature into a digital medium so that the signature can be extracted for a variety of purposes, including ownership authentication and content verification. We examine the weaknesses against common watermarking attacks of blockwise independent and content-based watermarking algorithms for image integrity verification, and implement a new and more secure invisible fragile public-key watermarking algorithm for color or grayscale images that increases the message digest size from the proposed 64 to 128 bits using the same small-size blocks and maintaining high-quality watermarked images and accurate localization of image changes. Our watermarking technique is capable of detecting any changes made to the image since the time it was stamped; any changes to the pixel values and also to the dimensions of the image will be automatically detected and localized. Our scheme consists of a watermark-insertion process that uses a private key to embed a watermark image into a cover image, and a watermark-extraction process that uses a public key to extract the watermark from the watermarked image. The embedded watermark can only be extracted by someone who has possession of a proper verification key.
Security of ownership watermarking of digital images based on singular value decomposition Recently watermarking algorithms of digital images based on singular value decomposition (SVD) have been proposed. Most SVD-based watermarking techniques use singular values as the embedding watermark information. These SVD-based techniques are advantageous for watermarking images since slight changes in the singular values do not significantly affect the image quality. However, it has been reported that these SVD-based watermarking algorithms, as implemented, suffer from a very high probability of false positive detections of watermarks. The false positive detection vulnerability of two recent SVD-based watermarking schemes is exposed. Two solutions are proposed to mitigate this vulnerability without changing the design principle of these watermarking algorithms.
A generic digital watermarking model This paper presents a generic model of digital watermarking techniques based on the following principles: steganography, spread spectrum communications technology, and perceptibility restriction and noise theory. Concepts and functional modules are developed in this paper to describe an abstract scheme of digital watermarking methods. This generic model provides a common basis to study many existing watermarking systems and to evaluate their performance and characteristics. A variety of watermarking methods can be derived from this generic model and several of them are discussed in this paper.
Fragile and robust watermarking by histogram specification This paper presents new results on regional image watermarking by exact histogram specification. The image is split into regions and for each region a watermark is specified. Watermarks are selected such that the original image histogram is preserved. The main improvement of the proposed regional scheme consists in marking the entire image (all the regions) with complementary watermarks. This procedure considerably increases watermarking robustness. The region selection strategy is discussed so that direct identification of regions and bordering effects are eliminated. The robustness or fragility of the proposed scheme depends on the specified histograms. In a general setting, exact histogram specification allows only certain graylevel values for the pixels of each region. Fragile watermarking is obtained when the sentinel pixels' region is allowed to take only certain discrete values. Thus, using sparse histograms, one achieves not only image authentication, but also, in case of any attack or malicious editing, the detection of the area where the image has been altered. On the contrary, watermarking robust against many attacks is obtained when the pixels of each region are allowed to take values on compact intervals of graylevels.
A review of medical image watermarking requirements for teleradiology Teleradiology allows medical images to be transmitted over electronic networks for clinical interpretation and for improved healthcare access, delivery, and standards. Although such remote transmission of the images is raising various new and complex legal and ethical issues, including image retention and fraud, privacy, malpractice liability, etc., considerations of the security measures used in teleradiology remain unchanged. Addressing this problem naturally warrants investigations on the security measures for their relative functional limitations and for the scope of considering them further. In this paper, starting with various security and privacy standards, the security requirements of medical images as well as expected threats in teleradiology are reviewed. This will make it possible to determine the limitations of the conventional measures used against the expected threats. Furthermore, we thoroughly study the utilization of digital watermarking for teleradiology. Following the key attributes and roles of various watermarking parameters, justification for watermarking over conventional security measures is made in terms of their various objectives, properties, and requirements. We also outline the main objectives of medical image watermarking for teleradiology and provide recommendations on suitable watermarking techniques and their characterization. Finally, concluding remarks and directions for future research are presented.
Secret and public key image watermarking schemes for image authentication and ownership verification We describe a watermarking scheme for ownership verification and authentication. Depending on the desire of the user, the watermark can be either visible or invisible. The scheme can detect any modification made to the image and indicate the specific locations that have been modified. If the correct key is specified in the watermark extraction procedure, then an output image is returned showing a proper watermark, indicating the image is authentic and has not been changed since the insertion of the watermark. Any modification would be reflected in a corresponding error in the watermark. If the key is incorrect, or if the image was not watermarked, or if the watermarked image is cropped, the watermark extraction algorithm will return an image that resembles random noise. Since it requires a user key during both the insertion and the extraction procedures, it is not possible for an unauthorized user to insert a new watermark or alter the existing watermark so that the resulting image will pass the test. We present secret key and public key versions of the technique
Issues and solution on distortion drift in reversible video data hiding Different from reversible image data hiding, most reversible video data hiding schemes have the particular problem that the distortion due to hidden data will spread and accumulate. In this paper, the problem of distortion drift caused by reversible data hiding in compressed video is analyzed, and a lossless drift compensation scheme is proposed to restrain the distortion for the first time. In order to ensure the reversibility, drift compensation signals are merged in the quantized DCT (Discrete Cosine Transform) coefficients of P-frames and the corresponding recovery mechanism is presented as well. Experimental results demonstrate that the proposed lossless drift compensation scheme significantly improves the video quality, and the original compressed video can be recovered exactly after the hidden data and compensation signals are removed. In addition, the proposed scheme does not depend on specific reversible data hiding method.
A high capacity reversible watermarking approach for authenticating images: Exploiting down-sampling, histogram processing, and block selection Amongst the various data hiding schemes, lossless data hiding has received substantial attention in recent years. Lossless watermarking, also known as reversible watermarking, not only ensures the extraction of the hidden information, called the watermark, but also the complete recovery of the original image. Recently, several interesting reversible block-based watermarking approaches using histogram processing have been reported. These approaches offer substantial watermarking capacity for a fixed image quality measure. However, there is considerable margin for improvement as far as the imperceptibility versus capacity tradeoff is concerned. Some watermarking applications, such as those related to integrity control, need high capacity. Therefore, we present a high capacity reversible watermarking approach based on histogram processing and block selection (RW-HPBS). The proposed RW-HPBS approach exploits the concept of down-sampling for effectively increasing the capacity and is useful for integrity control and authentication related applications. Two down-sampled versions, the reference and the data hiding, are obtained to create space for embedding. The authentication property is achieved by taking secondary advantage of the location map before it is compressed and embedded as auxiliary information. The effectiveness of the proposed method is shown through comparison with existing techniques. Experimental results show that the proposed RW-HPBS approach provides a better imperceptibility versus capacity tradeoff and can detect tampering and collage attacks.
Context-based, adaptive, lossless image coding We propose a context-based, adaptive, lossless image codec (CALIC). The codec obtains higher lossless compression of continuous-tone images than other lossless image coding techniques in the literature. This high coding efficiency is accomplished with relatively low time and space complexities. The CALIC puts heavy emphasis on image data modeling. A unique feature of the CALIC is the use of a large number of modeling contexts (states) to condition a nonlinear predictor and adapt the predictor to varying source statistics. The nonlinear predictor can correct itself via an error feedback mechanism by learning from its mistakes under a given context in the past. In this learning process, the CALIC estimates only the expectation of prediction errors conditioned on a large number of different contexts rather than estimating a large number of conditional error probabilities. The former estimation technique can afford a large number of modeling contexts without suffering from the context dilution problem of insufficient counting statistics as in the latter approach, nor from excessive memory use. The low time and space complexities are also attributed to efficient techniques for forming and quantizing modeling contexts
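A heavily simplified sketch of the kind of gradient-adjusted, context-conditioned prediction with error feedback that CALIC is built around. The neighbour set, the threshold, and the two-flag context below are deliberate simplifications and not the codec's actual predictor or context model.

```python
# Heavily simplified gradient-adjusted prediction with error feedback,
# loosely in the spirit of CALIC. The real codec uses more neighbours,
# several thresholds, quantized contexts and entropy coding.

from collections import defaultdict

def gap_predict(w, n, nw, ne, t=32):
    """Predict a pixel from its west, north, north-west and north-east neighbours."""
    d_h = abs(n - nw) + abs(ne - n)        # sample of horizontal variation (row above)
    d_v = abs(w - nw)                      # sample of vertical variation (column to the left)
    if d_v - d_h > t:                      # strong vertical change: likely horizontal edge
        return w
    if d_h - d_v > t:                      # strong horizontal change: likely vertical edge
        return n
    return (w + n) // 2 + (ne - nw) // 4   # smooth area: blended estimate

class FeedbackPredictor:
    """Learns the mean prediction error per (tiny) context and feeds it back."""
    def __init__(self):
        self.err_sum = defaultdict(int)
        self.count = defaultdict(int)

    def predict(self, w, n, nw, ne):
        base = gap_predict(w, n, nw, ne)
        ctx = (w > base, n > base)         # deliberately coarse two-flag context
        bias = self.err_sum[ctx] // self.count[ctx] if self.count[ctx] else 0
        return base + bias, ctx, base

    def update(self, ctx, actual, base):
        self.err_sum[ctx] += actual - base  # record the raw predictor's error in this context
        self.count[ctx] += 1
```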
Reusing and Interconnecting Software Components
Verify Results of Network Intrusion Alerts Using Lightweight Protocol Analysis When will we be secure? Nobody knows for sure, but it cannot happen before commercial security products and services possess not only enough functionality to satisfy customers' stated needs, but also sufficient assurance of quality, reliability, safety, ...
Correlation-aware cross-layer design for network management of wireless sensor networks The observations of the nodes of a wireless sensor network are spatiotemporally correlated. Sensor nodes can exploit this correlation to enhance network efficiency. However, energy-efficient collaboration is required for better network management. To save energy, sensor nodes alternate between Active and Sleep states. Nodes extract information from the medium access control layer, and use that information along with the correlation of observations as a means of energy-efficient collaboration and proper scheduling of their Active and Sleep states. Furthermore, sensor nodes use a non-deterministic reinforcement learning-based approach to reduce energy consumption and end-to-end delay by regulating the duration of their Sleep states. Extensive simulations have shown that the proposed cross-layer approach outperforms existing benchmark schemes in terms of end-to-end delay, data accuracy and energy efficiency.
1.034871
0.025722
0.025722
0.025722
0.013445
0.008575
0.002464
0.000538
0.000088
0.000034
0.00001
0
0
0
Tragic loss or good riddance? The impending demise of traditional scholarly journals s of the AMS might be appropriate, so that "proofs" that the Earth is flat, or that special relativity is a Zionist conspiracy, would be kept out. Discussions of whether Bacon wrote Shakespeare's plays might be accepted (since there are interesting statistical approaches to this question). There would also be digital signatures and digital timestamping, to provide authentication. The precise rules for how the system would function would have to be decided by experimentation. For example, one...
Cut Size Statistics of Graph Bisection Heuristics We investigate the statistical properties of cut sizes generated by heuristic algorithms which solve the graph bisection problem approximately. On an ensemble of sparse random graphs, we find empirically that the distribution of the cut sizes found by "local" algorithms becomes peaked as the number of vertices in the graphs becomes large. Evidence is given that this distribution tends toward a Gaussian whose mean and variance scales linearly with the number of vertices of the graphs. Given the distribution of cut sizes associated with each heuristic, we provide a ranking procedure that takes into account both the quality of the solutions and the speed of the algorithms. This procedure is demonstrated for a selection of local graph bisection heuristics.
Designing and reporting on computational experiments with heuristic methods This article discusses the design of computational experiments to test heuristic methods and provides reporting guidelines for such experimentation. The goal is to promote thoughtful, well-planned, and extensive testing of heuristics, full disclosure of experimental conditions, and integrity in and reproducibility of the reported results.
Hypergraph partitioning for VLSI CAD: methodology for heuristic development, experimentation and reporting We illustrate how technical contributions in the VLSI CAD partitioning literature can fail to provide one or more of: (i) reproducible results and descriptions, (ii) an enabling account of the key understanding or insight behind a given contribution, and (iii) experimental evidence that is not only contrasted with the state-of-the-art, but also meaningful in light of the driving application. Such failings can lead to reporting of spurious and misguided conclusions. For example, new ideas may appear promising in the context of a weak experimental testbed, but in reality do not advance the state of the art. The resulting inefficiencies can be detrimental to the entire research community. We draw on several models (chiefly from the metaheuristics community) (5) for experimental research and reporting in the area of heuristics for hard problems, and suggest that such practices can be adopted within the VLSI CAD community. Our focus is on hypergraph partitioning.
The ISPD98 circuit benchmark suite From 1985-1993, the MCNC regularly introduced and maintained circuit benchmarks for use by the Design Automation community. However, during the last five years, no new circuits have been introduced that can be used for developing fundamental physical design applications, such as partitioning and placement. The largest circuit in the existing set of benchmark suites has over 100,000 modules, but the second largest has just over 25,000 modules, which is small by today's standards. This paper introduces the ISPD98 benchmark suite which consists of 18 circuits with sizes ranging from 13,000 to 210,000 modules. Experimental results for three existing partitioners are presented so that future researchers in partitioning can more easily evaluate their heuristics.
ISPD 2018 Initial Detailed Routing Contest and Benchmarks. In advanced technology nodes, detailed routing becomes the most complicated and runtime-consuming stage. To spur detailed routing research, the ISPD 2018 initial detailed routing contest is hosted; it is the first ISPD contest on the detailed routing problem. In this contest, benchmarks synthesized with an industrial tool and library are released, which consider design rules such as spacing table, cut spacing, end-of-line spacing, and min-area rules. In addition, a global routing guide is provided with each benchmark, and detailed routers are required to honor the routing guides as much as possible while minimizing design-rule-checking (DRC) violations. The biggest benchmark released in this contest has near-millions of nets, so the runtime and memory scalability of detailed routers needs to be well addressed. To reduce routers' runtime, a deterministic multithreading framework is encouraged but optional in this contest.
Graph bisection algorithms with good average case behavior In the paper, we describe a polynomial time algorithm that, for every input graph, either outputs the minimum bisection of the graph or halts without output. More importantly, we show that the algorithm chooses the former course with high probability for many natural classes of graphs. In particular, for every fixed d ≧ 3, all sufficiently large n, and all b = o(n^(1−1/[(d+1)/2])), the algorithm finds the minimum bisection for almost all d-regular labelled simple graphs with 2n nodes and bisection width b. For example, the algorithm succeeds for almost all 5-regular graphs with 2n nodes and bisection width o(n^(2/3)). The algorithm differs from other graph bisection heuristics (as well as from many heuristics for other NP-complete problems) in several respects. Most notably: (i) the algorithm provides exactly the minimum bisection for almost all input graphs with the specified form, instead of only an approximation of the minimum bisection, (ii) whenever the algorithm produces a bisection, it is guaranteed to be optimal (i.e., the algorithm also produces a proof that the bisection it outputs is an optimal bisection), (iii) the algorithm works well both theoretically and experimentally, (iv) the algorithm employs global methods such as network flow instead of local operations such as 2-changes, and (v) the algorithm works well for graphs with small bisections (as opposed to graphs with large bisections, for which arbitrary bisections are nearly optimal).
Congestion estimation during top-down placement Congestion is one of the fundamental issues in very large scale integration physical design. In this paper, we propose two congestion-estimation approaches for early placement stages. First, we theoretically analyze the peak-congestion value of the design and experimentally validate the estimation approach. Second, we estimate regional congestion at the early stages of top-down placement. This is done by combining the wire-length distribution model and interregion wire estimation. Both approaches are based on the well-known Rent's rule, which is previously used for wirelength estimation. This is the first attempt to predict congestion using Rent's rule. The estimation results are compared with the layout after placement and global routing. Experiments on large industry circuits show that the early congestion estimation based on Rent's rule is a promising approach
Lens Aberration Aware Timing-Driven Placement Process variations due to lens aberrations are to a large extent systematic, and can be modeled for purposes of analyses and optimizations in the design phase. Traditionally, variations induced by lens aberrations have been considered random due to their small extent. However, as process margins reduce, and as improvements in reticle enhancement techniques control variations due to other sources with increased efficacy, lens aberration-induced variations gain importance. For example, our experiments indicate that lens aberration can result in up to 8% variation in cell delay. In this paper, we propose an aberration-aware timing-driven analytical placement approach that accounts for aberration-induced variations during placement. Our approach minimizes the design's cycle time and prevents hold-time violations under systematic aberration-induced variations. On average, the proposed placement technique reduces cycle time by ~ 5% at the cost of ~ 2% increase in wire length
PADE: a high-performance placer with automatic datapath extraction and evaluation through high dimensional data learning This work presents PADE, a new placer with automatic datapath extraction and evaluation. PADE applies novel data learning techniques to train, predict, and evaluate potential datapaths using high-dimensional data such as netlist symmetrical structures, initial placement hints and relative area. Extracted datapaths are mapped to bit-stack structures that are aligned and simultaneously placed with the random logic. Results show at least 7% average total Half-Perimeter Wire Length (HPWL) and 12% Steiner Wire Length (StWL) improvements on industrial hybrid benchmarks and at least 2% average total HPWL and 3% StWL improvements on ISPD 2005 contest benchmarks. To the best of our knowledge, this is the first attempt to link data learning, datapath extraction with evaluation, and placement and has the tremendous potential for pushing placement state-of-the-art for modern circuits which have datapath and random logics.
An Automatic Technique for Selection of Data Structures in SETL Programs
Supervised tensor learning Tensor representation is helpful to reduce the small sample size problem in discriminative subspace selection. As pointed out by this paper, this is mainly because the structure information of objects in computer vision research is a reasonable constraint to reduce the number of unknown parameters used to represent a learning model. Therefore, we apply this information to the vector-based learning and generalize the vector-based learning to the tensor-based learning as the supervised tensor learning (STL) framework, which accepts tensors as input. To obtain the solution of STL, the alternating projection optimization procedure is developed. The STL framework is a combination of the convex optimization and the operations in multilinear algebra. The tensor representation helps reduce the overfitting problem in vector-based learning. Based on STL and its alternating projection optimization procedure, we generalize support vector machines, minimax probability machine, Fisher discriminant analysis, and distance metric learning, to support tensor machines, tensor minimax probability machine, tensor Fisher discriminant analysis, and the multiple distance metrics learning, respectively. We also study the iterative procedure for feature extraction within STL. To examine the effectiveness of STL, we implement the tensor minimax probability machine for image classification. Compared with the minimax probability machine, the tensor version reduces the overfitting problem.
Balancing Traffic Load Using One-Turn Rectilinear Routing We consider the problem of load-balanced routing, where a dense network is modelled by a continuous square region and origin and destination nodes correspond to pairs of points in that region. The objective is to define a routing policy that assigns a continuous path to each origin-destination pair while minimizing the traffic, or load, passing through any single point. While the average load is minimized by straight-line routing, such a routing policy distributes the load non-uniformly, resulting in higher load near the center of the region. We consider one-turn rectilinear routing policies that divert traffic away from regions of heavier load, resulting in up to a 33% reduction in the maximum load while simultaneously increasing the path lengths by an average of less than 28%. Our policies are simple to implement, being both local and oblivious. We provide a lower bound that shows that no one-turn rectilinear routing policy can reduce the maximum load by more than 39% and we give a polynomial-time procedure for approximating the optimal randomized policy.
Linear time computation of the maximal linear and circular sums of multiple independent insertions into a sequence. The maximal sum of a sequence A of n real numbers is the greatest sum of all elements of any linearly contiguous and possibly empty subsequence of A. It can be computed in O(n) time by means of Kadane's algorithm. Letting A(x→p) denote the sequence which results from inserting a real number x just after element A[p−1], we show how the maximal sum of A(x→p) can be computed in O(1) worst-case time for any given x and p, provided that an O(n) time preprocessing step has already been executed on A. In particular, this implies that, given m pairs (x0,p0),…,(xm−1,pm−1), we can compute the maximal sums of sequences A(x0→p0),…,A(xm−1→pm−1) optimally in O(n+m) time, improving on the straightforward and suboptimal strategy of applying Kadane's algorithm to each sequence A(xi→pi), which takes a total of Θ(nm) time. We also show that the same time bound is attainable when circular subsequences of A(x→p) are taken into account. Our algorithms are easy to implement in practice, and they were motivated by a buffer minimization problem on wireless mesh networks.
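Kadane's algorithm, used above as the O(n) baseline, fits in a few lines. The sketch below allows the empty subsequence (so the result is never negative) and omits the preprocessing that enables the O(1) insertion queries; the function name is illustrative.

```python
def max_subarray_sum(a):
    """Kadane's algorithm: greatest sum of a contiguous (possibly empty)
    subsequence of a, computed in a single O(n) pass."""
    best = 0          # the empty subsequence contributes sum 0
    ending_here = 0   # best sum of a subsequence ending at the current element
    for x in a:
        ending_here = max(0, ending_here + x)
        best = max(best, ending_here)
    return best

# Example: the best contiguous run is 4 + (-1) + 5 = 8.
assert max_subarray_sum([2, -5, 4, -1, 5, -3]) == 8
```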
1.202987
0.068622
0.034978
0.014915
0.001203
0.000693
0.000302
0.000111
0.000063
0.000028
0
0
0
0
A semi-persistent clustering technique for VLSI circuit placement Placement is a critical component of today's physical synthesis flow with tremendous impact on the final performance of VLSI designs. However, it accounts for a significant portion of the over-all physical synthesis runtime. With complexity and netlist size of today's VLSI design growing rapidly, clustering for placement can provide an attractive solution to manage affordable placement runtime. Such clustering, however, has to be carefully devised to avoid any adverse impact on the final placement solution quality. In this paper we present a new bottom-up clustering technique, called best-choice, targeted for large-scale placement problems. Our best-choice clustering technique operates directly on a circuit hypergraph and repeatedly clusters the globally best pair of objects. Clustering score manipulation using a priority-queue data structure enables us to identify the best pair of objects whenever clustering is performed. To improve the runtime of priority-queue-based best-choice clustering, we propose a lazy-update technique for faster updates of clustering score with almost no loss of solution quality. We also discuss a number of effective methods for clustering score calculation, balancing cluster sizes, and handling of fixed blocks. The effectiveness of our best-choice clustering methodology is demonstrated by extensive comparisons against other standard clustering techniques such as Edge-Coarsening [12] and First-Choice [13]. All clustering methods are implemented within an industrial placer CPLACE [1] and tested on several industrial benchmarks in a semi-persistent clustering context.
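The central loop of a priority-queue-driven best-pair clustering pass, in the spirit of the abstract above, might look like the sketch below. The score function, the flat object representation, and the handling of stale heap entries are simplifying assumptions; the lazy-update refinement and the hypergraph details of the actual method are not reproduced.

```python
import heapq

def best_pair_clustering(objects, score, target_count):
    """Repeatedly merge the globally best-scoring pair of clusters.

    score(a, b) is a placeholder connectivity/size score (higher is better);
    a real placer derives it from the circuit hypergraph.  A max-heap (via
    negated scores) always yields the current best pair, and entries that
    refer to already-merged clusters are simply skipped when popped.
    """
    clusters = {i: [obj] for i, obj in enumerate(objects)}
    heap = []
    for i in clusters:
        for j in clusters:
            if i < j:
                heapq.heappush(heap, (-score(clusters[i], clusters[j]), i, j))

    next_id = len(objects)
    while len(clusters) > target_count and heap:
        _, i, j = heapq.heappop(heap)
        if i not in clusters or j not in clusters:
            continue                       # stale entry: one side was already merged
        merged = clusters.pop(i) + clusters.pop(j)
        clusters[next_id] = merged
        for k in clusters:                 # score the new cluster against survivors
            if k != next_id:
                heapq.heappush(heap, (-score(merged, clusters[k]), next_id, k))
        next_id += 1
    return list(clusters.values())
```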
Fast Wire Length Estimation by Net Bundling for Block Placement The wire length estimation is the bottleneck of packing based block placers. To cope with this problem, we present a fast wire length estimation method in this paper. The key idea is to bundle the 2-pin nets between block pairs, and measure the wire length bundle by bundle, instead of net by net. Previous bundling method (Chan et al., 2005) introduces a huge error which compromises the performance. We present an error-free bundling approach which utilizes the piecewise linear wire length function of a pair of blocks. With the function implemented into a lookup table, the wire length can be computed promptly and precisely by binary search. Furthermore, we show that 3-pin nets can also be bundled, resulting in a further speedup. The effectiveness of our method is verified by experiments
Net Cluster: A Net-Reduction-Based Clustering Preprocessing Algorithm for Partitioning and Placement The complexity and size of digital circuits have grown exponentially, and today's circuits can contain millions of logic elements. Clustering algorithms have become popular due to their ability to reduce circuit sizes, so that the circuit layout can be performed faster and with higher quality. This paper presents a deterministic net-reduction-based clustering algorithm called Net Cluster. The basic idea of the proposed technique is to put the emphasis on reducing the number of nets versus the number of cells, thereby capturing the natural clusters of a circuit. The proposed algorithm has proven a linear-time complexity of O(p), where p is the number of pins in a circuit. To demonstrate the effectiveness of the proposed clustering technique, it has been applied to multilevel partitioning and wire length-driven placement. The numerical experiments on the ISPD98 benchmark suite for partitioning and the ICCAD 2004 benchmark suite for placement demonstrate that by applying Net Cluster as a preprocessing step, the performance of state-of-the-art multilevel partitioners and placers can be further improved
Tutorial on congestion prediction With the increasing sophistication of circuits and specifically in the presence of IP blocks, new estimation methods are needed in the design flow of large-scale circuits. Up to now, a number of post-placement congestion estimation techniques in the presence of IP blocks have been presented. In this paper we present a unified approach for predicting wirelength, congestion and delay parameters early in the design flow. We also propose a methodology to integrate these prediction methods into the placement framework to handle the large complexity of the designs. The total wirelength affects three major parameters of today's chip design cycle: Chip Size, Clock Frequency, and Power Dissipation. Since the above parameters are largely affected by interconnect lengths, total wirelength is frequently used as a measure of the quality of the placement. For the placement and routing phases, the quality requirements are particularly stringent (19). For the result of these phases to be acceptable, accurate predictions of relevant post-layout circuit properties are an absolute necessity to limit the search of the vast solution space. Hence, CAD tools use estimation tools usually based on partitioning methodologies (39, 52, 53, 54, 55, 56). Furthermore, an integrated prediction of interconnect delay and length is needed to handle the complexity of today's designs. Minimizing the total routed wirelength is one of the fundamental goals in the VLSI placement stage. However, as VLSI circuits are growing in complexity and more importantly in the presence of an extremely large number of IP blocks, not only the wirelength but also the congestion needs to be emphasized at the placement. So far, all of the congestion estimation methods perform post-placement congestion estimation. However, in the presence of IP blocks, alleviating congestion after placement may result in an abrupt increase in wirelength; therefore, congestion needs to be estimated early enough to guide placement to avoid generating highly-congested and hence un-routable designs.
Reap what you sow: spare cells for post-silicon metal fix Post-silicon validation has recently become a major bottleneck in IC design. Several high profile IC designs have been taped-out with latent bugs, and forced the manufacturers to resort to additional design revisions. Such changes can be applied through metal fix; however, this is impractical without carefully pre-placed spare cells. In this work we perform the first comprehensive analysis of the issues related to spare-cell insertion, including the types of spare cells that should be used as well as their placement. In addition, we propose a new technique to measure the heterogeneity among signals and use it to determine spare-cell density. Finally, we integrate our findings into a novel multi-faceted approach that calculates regional demand for spare cells, identifies the most appropriate cell types, and places such cells into the layout. Our approach enables the use of metal fix at a much smaller delay cost, with a reduction of up to 37% compared to previous solutions
Metal-density driven placement for cmp variation and routability In this paper, we propose the first metal-density driven placement algorithm to reduce CMP variation and achieve higher routability. Based on an analytical placement framework, we use a probabilistic routing model to estimate the wire density during the placement. Then, the metal density and thickness are predicted by a predictive CMP model. The spreading forces are adjusted according to the metal density map to reduce the metal density variation. Experimental results show that our method reduces the topography variation by 12% and the number of dummy fills by 6% and achieves much better routability, compared with wirelength-driven placement
Ripple: A Robust and Effective Routability-Driven Placer The significant mismatch between the objective of wirelength and routing congestion makes the routability issue even more important in placement. In this paper, we describe a routability-driven placer called Ripple. Each step, including global placement, legalization, and detailed placement, is made to trade-off between routability and wirelength. We propose a robust and effective flow by using cell inflation to relieve routing congestion. Cell inflation has traditionally been used to deal with congestion and we will discuss how this technique can be used easily and robustly in the global placement. Besides, unlike many previous works that focus on different types of swapping strategies, we analyze and propose some simple and effective approaches when considering routability in the legalization and detailed placement steps. Experimental results show that Ripple is particularly effective in improving routability. When compared to the top results in the ISPD 2011 Contest and SimPLR, Ripple can obtain the smallest overflow and half-perimeter wirelength on average, while the congestion hot spots are also distributed sparsely in Ripple.
Detailed placement accounting for technology constraints Circuit placement involves the arrangement of a large number of cells which must be aligned to sites in rows without overlap. Placement is done via a sequence of optimization steps which include global placement, legalization and detailed placement. Global placement determines a rough position for each cell throughout the chip while optimizing objectives such as wirelength and routability. The rough placement is legalized and cells are aligned to sites in rows without overlap. Detailed placement attempts to further improve the placement while keeping the placement feasible. In reality, the placement of cells is more complicated than aligning cells to sites without overlap; detailed routability issues compound the placement problem by introducing issues such as pin shorts, pin access problems, and other spacing requirements. The importance of addressing these issues were highlighted during the recent ISPD2014 placement contest [1]. In many cases, detailed routability issues can be addressed during placement to avoid later problems. We describe our ISPD2014 contest legalizer and detailed placer (plus additional extensions) that can address many detailed routing issues without negatively impacting the quality of the final placement. Numerical results are presented to demonstrate the effectiveness of our techniques.
An analytic placer for mixed-size placement and timing-driven placement We extend the APlace wirelength-driven standard-cell analytic placement framework of A.A. Kennings and I.L. Markov (2002) to address timing-driven and mixed-size ("boulders and dust") placement. Compared with timing-driven industry tools, evaluated by commercial detailed routing and STA, we achieve an average of 8.4% reduction in cycle time and 7.5% reduction in wirelength for a set of six industry testcases. For mixed-size placement, we achieve an average of 4% wirelength reduction on ISPD02 mixed-size placement benchmarks compared to results of the leading-edge solver, Feng Shui (v2.4) (Khatkhate et al., 2004). We are currently evaluating our placer on industry testcases that combine the challenges of timing constraints, large instance sizes, and embedded blocks (both fixed and unfixed).
On Whitespace and Stability in Mixed-Size Placement and Physical Synthesis In the context of physical synthesis, large-scale standard-cell placement algorithms must facilitate incremental changes to layout, both local and global. In particular, flexible gate sizing, net buffering and detail placement require a certain amount of unused space in every region of the die. The need for "local" whitespace is further emphasized by temperature and power-density limits. Another requirement, the stability of placement results from run to run, is important to the convergence of physical synthesis loops. Indeed, logic resynthesis targeting local congestion in a given placement or particular critical paths may be irrelevant for another placement produced by the same or a different layout tool. In this work we offer solutions to the above problems. We show how to tie the results of a placer to a previously existing placement, and yet leave room for optimization. In our experiments this technique produces placements with similar congestion maps. We also show how to trade off wirelength for routability by manipulating whitespace. Empirically, our techniques improve circuit delay of sparse layouts in conjunction with physical synthesis. In the context of earlier proposed techniques for mixed-size placement, we tune a state-of-the-art recursive bisection placer to better handle regular netlists that offer a convenient way to represent memories, data paths and random-logic IP blocks. These modifications and better whitespace distribution improve results on recent mixed-size placement benchmarks.
Recursive reconstruction of high resolution image from noisy undersampled multiframes In several applications it is required to reconstruct a high-resolution noise-free image from multiple frames of undersampled low-resolution noisy images. Using the aliasing relationship between the undersampled frames and the reference image, an algorithm based on weighted recursive least-squares theory is developed in the wavenumber domain. This algorithm is efficient because interpolation and noise removal are performed recursively, and is highly suitable for implementation via the massively parallel computational architectures currently available. Success in the use of the algorithm is demonstrated through various simulated examples.
Combining belief networks and neural networks for scene segmentation We are concerned with the problem of image segmentation, in which each pixel is assigned to one of a predefined finite number of labels. In Bayesian image analysis, this requires fusing together local predictions for the class labels with a prior model of label images. Following the work of Bouman and Shapiro (1994), we consider the use of tree-structured belief networks (TSBNs) as prior models. The parameters in the TSBN are trained using a maximum-likelihood objective function with the EM algorithm and the resulting model is evaluated by calculating how efficiently it codes label images. A number of authors have used Gaussian mixture models to connect the label field to the image data. We compare this approach to the scaled-likelihood method of Smyth (1994) and Morgan and Bourlard (1995), where local predictions of pixel classification from neural networks are fused with the TSBN prior. Our results show a higher performance is obtained with the neural networks. We evaluate the classification results obtained and emphasize not only the maximum a posteriori segmentation, but also the uncertainty, as evidenced e.g., by the pixelwise posterior marginal entropies. We also investigate the use of conditional maximum-likelihood training for the TSBN and find that this gives rise to improved classification performance over the ML-trained TSBN
Algorithms To Automatically Quantify The Geometric Similarity Of Anatomical Surfaces We describe approaches for distances between pairs of two-dimensional surfaces (embedded in three-dimensional space) that use local structures and global information contained in interstructure geometric relationships. We present algorithms to automatically determine these distances as well as geometric correspondences. This approach is motivated by the aspiration of students of natural science to understand the continuity of form that unites the diversity of life. At present, scientists using physical traits to study evolutionary relationships among living and extinct animals analyze data extracted from carefully defined anatomical correspondence points (landmarks). Identifying and recording these landmarks is time consuming and can be done accurately only by trained morphologists. This necessity renders these studies inaccessible to non-morphologists and causes phenomics to lag behind genomics in elucidating evolutionary patterns. Unlike other algorithms presented for morphological correspondences, our approach does not require any preliminary marking of special features or landmarks by the user. It also differs from other seminal work in computational geometry in that our algorithms are polynomial in nature and thus faster, making pairwise comparisons feasible for significantly larger numbers of digitized surfaces. We illustrate our approach using three datasets representing teeth and different bones of primates and humans, and show that it leads to highly accurate results.
Automatic correctness proofs for logic program transformations The many approaches which have been proposed in the literature for proving the correctness of unfold/fold program transformations, consist in associating suitable well-founded orderings with the proof trees of the atoms belonging to the least Herbrand models of the programs. In practice, these orderings are given by 'clause measures', that is, measures associated with the clauses of the programs to be transformed. In the unfold/fold transformation systems proposed so far, clause measures are fixed in advance, independently of the transformations to be proved correct. In this paper we propose a method for the automatic generation of the clause measures which, instead, takes into account the particular program transformation at hand. During the transformation process we construct a system of linear equations and inequations whose unknowns are the clause measures to be found, and the correctness of the transformation is guaranteed by the satisfiability of that system. Through some examples we show that our method is able to establish in a fully automatic way the correctness of program transformations which, by using other methods, are proved correct at the expense of fixing sophisticated clause measures.
1.011235
0.016135
0.015333
0.009734
0.008
0.00568
0.003228
0.001696
0.000909
0.000111
0
0
0
0
A Theory of Historical Discovery: The Construction of Componential Models One of the major goals of 18th century chemistry was to determine the components of substances. In this paper we describe STAHL, a system that models significant portions of 18th century reasoning about compositional models. The system includes a number of heuristics for generating componential models from reactions, as well as error recovery mechanisms for dealing with inconsistent results. STAHL processes chemical reactions incrementally, and is therefore capable of reconstructing extended historic episodes, such as the century-long development of the phlogiston theory. We evaluate STAHL's heuristics in the light of historical data, and conclude that the same reasoning mechanisms account for a variety of historical achievements, including Black's models of mild alkali and Lavoisier's oxygen theory. STAHL explains the generation of competing accounts of the same reactions, since the system's reasoning chain depends on knowledge it has accumulated at earlier stages.
Generating Predictions to Aid the Scientific Discovery Process NGLAUBER is a system which models the scientific discovery of qualitative empirical laws. As such, it falls into the category of scientific discovery systems. However, NGLAUBER can also be viewed as a conceptual clustering system since it forms classes of objects and characterizes these classes. NGLAUBER differs from existing scientific discovery and conceptual clustering systems in a number of ways. 1. It uses an incremental method to group objects into classes. 2. These classes are formed based on the relationships between objects rather than just the attributes of objects. 3. The system describes the relationships between classes rather than simply describing the classes. 4. Most importantly, NGLAUBER proposes experiments by predicting future data. The experiments help the system guide itself through the search for regularities in the data.
Data-driven discovery of physical laws BACON.3 is a production system that discovers empirical laws. Although it does not attempt to model the human discovery process in detail, it incorporates some general heuristics that can lead to discovery in a number of domains. The main heuristics detect constancies and trends in data, and lead to the formulation of hypotheses and the definition of theoretical terms. Rather than making a hard distinction between data and hypotheses, the program represents information at varying levels of description. The lowest levels correspond to direct observations, while the highest correspond to hypotheses that explain everything so far observed. To take advantage of this representation, BACON.3 has the ability to carry out and relate multiple experiments, collapse hypotheses with identical conditions, ignore differences to let similar concepts be treated as equal, and to discover and ignore irrelevant variables. BACON.3 has shown its generality by rediscovering versions of the ideal gas law, Kepler's third law of planetary motion, Coulomb's law, Ohm's law, and Galileo's laws for the pendulum and constant acceleration.
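A caricature of the trend heuristic described above — when one variable falls as the other rises, try their product; when both rise together, try their ratio — is sketched below. The tolerance, the two-variable restriction, and the assumption that the data are sorted by x are simplifications, not features of BACON.3 itself.

```python
def find_invariant(xs, ys, tol=1e-6):
    """BACON-style sketch: propose y/x when y increases with x, and x*y when
    y decreases, then report the term if it is (nearly) constant.
    Assumes xs is given in increasing order."""
    increasing = ys[-1] > ys[0]
    terms = [y / x for x, y in zip(xs, ys)] if increasing else \
            [x * y for x, y in zip(xs, ys)]
    mean = sum(terms) / len(terms)
    if all(abs(t - mean) <= tol * abs(mean) for t in terms):
        return ("y/x" if increasing else "x*y", mean)
    return None

# Boyle's-law-style data: pressure times volume stays constant.
print(find_invariant([1.0, 2.0, 4.0], [8.0, 4.0, 2.0]))  # -> ('x*y', 8.0)
```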
Three facets of scientific discovery Scientific discovery is a complex process, and in this paper we consider three of its many facets - discovering laws of qualitative structure, finding quantitative relations between variables, and formulating structural models of reactions. We describe three discovery systems - GLAUBER, BACON, and DALTON - that address these three aspects of the scientific process. GLAUBER forms classes of objects based on regularities in qualitative data, and states abstract laws in terms of these classes. BACON includes heuristics for finding numerical laws, for postulating intrinsic properties, and for noting common divisors. DALTON formulates molecular models that account for observed reactions, taking advantage of theoretical assumptions to direct its search if they are available. We show how each of the programs is capable of rediscovering laws or models that were found in the early days of chemistry. Finally, we consider some possible interactions between these systems, and the need for an integrated theory of discovery.
Toward mechanical mathematics Results are reported here of a rather successful attempt of proving all theorems, totalling near 400, of Principia Mathematica which are strictly in the realm of logic, viz., the restricted predicate calculus with equality. A number of other problems of the same type are discussed. It is suggested that the time is ripe for a new branch of applied logic which may be called “inferential” analysis, which treats proofs as numerical analysis does calculations. This discipline seems capable, in the not too remote future, of leading to machine proofs of difficult new theorems. An easier preparatory task is to use machines to formalize proofs of known theorems. This line of work may also lead to mechanical checks of new mathematical results comparable to the debugging of a program.
Shortcut deforestation in calculational form In functional programming, intermediate data structures are often used to "glue" together small programs. Deforestation is a program transformation to remove these intermediate data structures automatically. We present a simple algorithm for deforestation based on two fusion rules for hylomorphism, an expressive recursion pattern. A generic notation for hylomorphisms is introduced, where natural transformations are explicitly factored out, and it is used to represent programs. Our method...
A real-time garbage collector based on the lifetimes of objects In previous heap storage systems, the cost of creating objects and garbage collection is independent of the lifetime of the object. Since objects with short lifetimes account for a large portion of storage use, it is worth optimizing a garbage collector to reclaim storage for these objects more quickly. The garbage collector should spend proportionately less effort reclaiming objects with longer lifetimes. We present a garbage collection algorithm that (1) makes storage for short-lived objects cheaper than storage for long-lived objects, (2) that operates in real time—object creation and access times are bounded, (3) increases locality of reference, for better virtual memory performance, (4) works well with multiple processors and a large address space.
Counterfeiting attacks on oblivious block-wise independent invisible watermarking schemes We describe a class of attacks on certain block-based oblivious watermarking schemes. We show that oblivious watermarking techniques that embed information into a host image in a block-wise independent fashion are vulnerable to a counterfeiting attack. Specifically, given a watermarked image, one can forge the watermark it contains into another image without knowing the secret key used for watermark insertion and in some cases even without explicitly knowing the watermark. We demonstrate successful implementations of this attack on a few watermarking techniques that have been proposed in the literature. We also describe a possible solution to this problem of block-wise independence that makes our attack computationally intractable
Image compression using address-vector quantization A novel vector quantization scheme, called the address-vector quantizer (A-VQ), is proposed. It is based on exploiting the interblock correlation by encoding a group of blocks together using an address-codebook. The address-codebook consists of a set of address-codevectors where each codevector represents a combination of addresses (indexes). Each element of this codevector is an address of an entry in the LBG-codebook, representing a vector quantized block. The address-codebook consists of two regions: one is the active (addressable) region, and the other is the inactive (nonaddressable) region. During the encoding process the codevectors in the address-codebook are reordered adaptively in order to bring the most probable address-codevectors into the active region. When encoding an address-codevector, the active region of the address-codebook is checked, and if such an address combination exists, its index is transmitted to the receiver. Otherwise, the address of each block is transmitted individually. The quality (SNR value) of the images encoded by the proposed A-VQ method is the same as that of a memoryless vector quantizer, but the bit rate would be reduced by a factor of approximately two when compared to a memoryless vector quantizer.
Program Transformation Systems Interest is increasing in the transformational approach to programming and in mechanical aids for supporting the program development process. Available aids range from simple editorlike devices to rather powerful interactive transformation systems and even to automatic synthesis tools. This paper reviews and classifies transformation systems and is intended to acquaint the reader with the current state of the art and provide a basis for comparing the different approaches. It is also designed to provide easy access to specific details of the various methodologies.
Constraints to stop deforestation Wadler's deforestation algorithm eliminates intermediate data structures from functional programs. To be suitable for inclusion in a compiler, deforestation must terminate on all programs. Several techniques exist to ensure termination of deforestation on all first-order programs, but general techniques for higher-order programs were introduced only recently first by Hamilton and then by Marlow. We present a new technique for ensuring termination of deforestation on all higher-order programs that allows useful transformation steps prohibited in Hamilton's and Marlow's techniques. The technique uses a constraint-based higher-order control-flow analysis. We also relate our technique to previous approaches to termination of first- and higher-order deforestation in some detail.
A robust error concealment technique using data hiding for image and video transmission over lossy channels A robust error concealment scheme using data hiding which aims at achieving high perceptual quality of images and video at the end-user despite channel losses is proposed. The scheme involves embedding a low-resolution version of each image or video frame into itself using spread-spectrum watermarking, extracting the embedded watermark from the received video frame, and using it as a reference for reconstruction of the parent image or frame, thus detecting and concealing the transmission errors. Dithering techniques have been used to obtain a binary watermark from the low-resolution version of the image/video frame. Multiple copies of the dithered watermark are embedded in frequencies in a specific range to make it more robust to channel errors. It is shown experimentally that, based on the frequency selection and scaling factor variation, a high-quality watermark can be extracted from a low-quality lossy received image/video frame. Furthermore, the proposed technique is compared to its two-part variant where the low-resolution version is encoded and transmitted as side information instead of embedding it. Simulation results show that the proposed concealment technique using data hiding outperforms existing approaches in improving the perceptual quality, especially in the case of higher loss probabilities.
A novel data hiding scheme based on modulus function Four criteria are generally used to evaluate the performance of a data hiding scheme: the embedding capacity, the visual quality of the stego-image, the security, and the complexity of the data-embedding algorithm. However, data hiding schemes seldom take all these factors into consideration. This paper proposes a novel data hiding scheme that uses a simple modulus function to address all the performance criteria listed above. According to the input secret keys, the encoder and decoder use the same set-generation functions Hr() and Hc() to first generate two sets Kr and Kc. A variant Cartesian product is then created using Kr and Kc. Each cover pixel then forms a pixel group with its neighboring pixels by exploiting an efficient modulus function; the secret data are then embedded or extracted via a mapping process between the variant of the Cartesian product and each pixel group. The proposed scheme offers several advantages, namely (1) the embedding capacity can be scaled, (2) a good visual quality of the stego-image can be achieved, (3) the computational cost of embedding or extracting the secret data is low and requires little memory space, (4) secret keys are used to protect the secret data, and (5) the problem of overflow or underflow does not occur, regardless of the nature of the cover pixels. We tested the performance of the proposed scheme by comparing it with Mielikainen's and Zhang and Wang's schemes for gray-scale images. The experimental results showed that our proposed scheme outperforms Mielikainen's in three respects, namely scalable embedding capacity, embedding rate, and level of security. Our data hiding scheme also achieved a higher embedding capacity than Zhang and Wang's. The proposed scheme can easily be applied to both gray-scale and color images. Analyses of its performance showed that our proposed scheme outperforms Tsai and Wang's in terms of its time complexity and memory space requirement.
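As a hedged illustration of the basic modulus-function mechanics (not the variant-Cartesian-product scheme described above), the sketch below hides one base-n digit per pixel by nudging the pixel to the closest value with the desired residue; the choice n = 4, the clamping, and the function names are assumptions for the example.

```python
def embed_digit(pixel, digit, n=4):
    """Adjust pixel minimally so that pixel % n == digit (0 <= digit < n).
    Clamping keeps the result a valid 8-bit intensity; a complete scheme
    treats the boundary values more carefully."""
    delta = (digit - pixel) % n
    if delta > n // 2:
        delta -= n                      # moving down is closer than moving up
    return min(max(pixel + delta, 0), 255)

def extract_digit(pixel, n=4):
    return pixel % n

stego = embed_digit(130, 3)             # 130 % 4 == 2, so nudge by +1 -> 131
assert stego == 131 and extract_digit(stego) == 3
```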
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of embedded cover image to ensure security of secret data. The multilayer embedding of pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting and the peak point keeps unchanged. In the extraction end, we no longer need the key information about the peak point, we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of embedded cover image become almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
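The classic single-pair histogram-shifting step that this method refines can be sketched as follows. The sketch assumes an empty bin exists to the right of the peak, assumes the payload length is transmitted separately, and omits the overflow bookkeeping of practical schemes; it is not the localized two-neighbor variant described above, and all names are illustrative.

```python
import numpy as np

def hs_embed(img, bits):
    """Single peak/zero-pair histogram shifting on an 8-bit image.
    Pixels strictly between peak and zero are shifted right by one to open
    a gap at peak + 1; each peak-valued pixel then absorbs one payload bit."""
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(np.argmax(hist))
    zero = peak + 1 + int(np.argmin(hist[peak + 1:]))   # assumed empty bin
    out = img.astype(np.int32)
    out[(out > peak) & (out < zero)] += 1
    flat = out.ravel()
    k = 0
    for i in range(flat.size):
        if k < len(bits) and flat[i] == peak:
            flat[i] += bits[k]          # bit 0: stay at peak, bit 1: move into the gap
            k += 1
    return flat.reshape(img.shape).astype(np.uint8), peak, zero

def hs_extract(stego, peak, zero, nbits):
    """Recover the first nbits payload bits and restore the original image."""
    flat = stego.astype(np.int32).ravel()
    bits = [int(v) - peak for v in flat if v in (peak, peak + 1)][:nbits]
    flat[(flat > peak) & (flat <= zero)] -= 1
    return bits, flat.reshape(stego.shape).astype(np.uint8)
```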
1.078333
0.078333
0.039167
0.03
0.000142
0
0
0
0
0
0
0
0
0
On Using Mode Input-output for Transforming Logic Programs
Top-down synthesis of recursive logic procedures for first-order logic specifications Derivation of logic programs from first-order logic specifications is nontrivial and tends to be done by hand. We describe a method for synthesising recursive logic procedures from their first-order logic specifications that is amenable to mechanisation. The method is strictly top-down and has been implemented as part of a user-guided synthesis system in Prolog.
Algorithm = logic + control The notion that computation = controlled deduction was first proposed by Pat Hayes [19] and more recently by Bibel [2] and Vaughan Pratt [31]. A similar thesis that database systems should be regarded as consisting of a relational component, which defines the logic of the data, and a control component, which stores and retrieves it, has been successfully argued by Codd [10]. Hewitt's argument [20] for the programming language PLANNER, though generally regarded as an argument against logic, can also be regarded as an argument for the thesis that algorithms be regarded as consisting of both logic and control components. In this paper we shall explore some of the useful consequences of that thesis.
Synthesis of eureka predicates for developing logic programs We consider the problem of inventing new predicates when developing logic programs by transformation. Those predicates, often called eureka predicates, improve program efficiency by eliminating redundant computations and avoiding multiple visits of data structures. It can be shown that no general method exists for inventing the required eureka predicates for a given initial program. We introduce here two strategies, the Loop Absorption Strategy and the Generalization Strategy, which in many cases determine the new predicates to be defined during program transformation. We study the properties of those strategies and we present some classes of programs in which they are successful.
Logic program derivation for a class of first order logic relations Logic programming has been an attempt to bridge the gap between specification and programming language and thus to simplify the software development process. Even though the only difference between a specification and a program in a logic programming framework is that of efficiency, there is still some conceptual distance to be covered between a naive, intuitively correct specification and an efficiently executable version of it. And even though some mechanical tools have been developed to assist in covering this distance, no fully automatic system for this purpose is yet known. In this paper we present a general class of first-order logic relations, which is a subset of the extended Horn clause subset of logic, for which we give mechanical means for deriving Horn logic programs, which are guaranteed to be correct and complete with respect to the initial specifications.
Some Thoughts on the Role of Examples in Program Transformation and its Relevance for Explanation-based Learning Explanation-based learning uses the same technique as partial evaluation, namely unfolding. However, it brings a new insight: an example can be used to guide the transformation process. In this paper, we further explore this insight and show how examples can be used to guide other kinds of program transformation, guiding not only the unfolding, but also the introduction of new predicates and the folding. On the other hand, we illustrate the more fundamental restructuring which is possible with program transformation and the relevance of completeness results to eliminate computationally inefficient knowledge.
Deriving very efficient algorithms for evaluating linear recurrence relations using the program transformation technique Using the program transformation technique we derive some algorithms for evaluating linear recurrence relations in logarithmic time. The particular case of the Fibonacci function is first considered and a comparison with the conventional matrix exponentiation algorithm is made. This comparison allows us also to contrast the transformation technique and the stepwise refinement technique underlining some interesting features of the former one. Through the examples given we also explain why those features are interesting for a useful and reliable program construction methodology.
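The logarithmic-time evaluation mentioned above for the Fibonacci case rests on the matrix identity [[1,1],[1,0]]^n = [[F(n+1),F(n)],[F(n),F(n-1)]]; a small repeated-squaring sketch is shown below (names illustrative, and not the paper's derivation, which proceeds by program transformation rather than explicit matrices).

```python
def fib(n):
    """Compute F(n) with O(log n) 2x2 matrix multiplications."""
    def mat_mul(a, b):
        return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
                [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

    result = [[1, 0], [0, 1]]                 # identity matrix
    base = [[1, 1], [1, 0]]
    while n > 0:
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result[0][1]                       # [[F(n+1), F(n)], [F(n), F(n-1)]]

assert [fib(i) for i in range(8)] == [0, 1, 1, 2, 3, 5, 8, 13]
```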
A simple calculus for program transformation (inclusive of induction) A basic purpose of transformation systems is the application of ‘correctness-preserving rules’ in order to derive from given programs new, ‘equivalent’ ones. An important aspect here is the usage of induction principles, without which transformation systems would have too limited power. The paper presents a formal system of ‘transformation rules’ that incorporates induction. This system is a kind of ‘Gentzen-style calculus’, impoverished, however, to a degree that just meets the needs of program transformation. Thus we achieve a basis for the design of transformation systems, which is both simple and sound.
Using Type Transformation in Database Implementation
A new approach to recursion removal Iterative forms are derived for a class of recursive functions, i.e. the recursion is “removed”. The transformation comprises first analysis of the defining equation of a recursive function and then synthesis of an imperative language loop from the primitive subexpressions so obtained. This initially leads to a two-loop program but further transformation provides a single loop version under appropriate conditions. The analysis-synthesis approach contrasts with previous methods using template matching and induces a constructive method which is better suited to mechanisation, although its implementation is not considered here.
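As a toy illustration of what recursion removal produces (not the paper's analysis-synthesis method), here is a directly recursive definition and its loop counterpart, where the pending work moves from the call stack into an accumulator.

```python
# Direct transcription of the recursive defining equations.
def total_rec(xs):
    if not xs:
        return 0
    return xs[0] + total_rec(xs[1:])

# After recursion removal: the pending additions are accumulated in a loop
# variable instead of on the call stack.
def total_iter(xs):
    acc = 0
    for x in xs:
        acc += x
    return acc

assert total_rec([1, 2, 3, 4]) == total_iter([1, 2, 3, 4]) == 10
```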
Scaling Personalized Web Search Recent web search techniques augment traditional text matching with a global notion of "importance" based on the linkage structure of the web, such as in Google's PageRank algorithm. For more refined searches, this global notion of importance can be specialized to create personalized views of importance—for example, importance scores can be biased according to a user-specified set of initially-interesting pages. Computing and storing all possible personalized views in advance is impractical, as is computing personalized views at query time, since the computation of each view requires an iterative computation over the web graph. We present new graph-theoretical results, and a new technique based on these results, that encode personalized views as partial vectors. Partial vectors are shared across multiple personalized views, and their computation and storage costs scale well with the number of views. Our approach enables incremental computation, so that the construction of personalized views from partial vectors is practical at query time. We present efficient dynamic programming algorithms for computing partial vectors, an algorithm for constructing personalized views from partial vectors, and experimental results demonstrating the effectiveness and scalability of our techniques.
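A plain power-iteration computation of one personalized view (not the partial-vector decomposition the abstract introduces) might look like the sketch below; the damping factor and tolerance are conventional choices rather than values from the paper, and the names are illustrative.

```python
def personalized_pagerank(out_links, preference, alpha=0.85, tol=1e-10):
    """Power iteration for a personalized PageRank vector.

    out_links[u] lists the pages u links to; preference is the user-specified
    teleport distribution that replaces the uniform jump vector of ordinary
    PageRank.  Dangling nodes redistribute their rank via the same preference.
    """
    nodes = list(out_links)
    rank = {u: preference.get(u, 0.0) for u in nodes}
    while True:
        new = {u: (1.0 - alpha) * preference.get(u, 0.0) for u in nodes}
        for u in nodes:
            targets = out_links[u]
            if targets:
                share = alpha * rank[u] / len(targets)
                for v in targets:
                    new[v] += share
            else:                                   # dangling node
                for v in nodes:
                    new[v] += alpha * rank[u] * preference.get(v, 0.0)
        if sum(abs(new[u] - rank[u]) for u in nodes) < tol:
            return new
        rank = new

# Importance biased toward page 'a'.
graph = {'a': ['b'], 'b': ['a', 'c'], 'c': ['a']}
print(personalized_pagerank(graph, {'a': 1.0}))
```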
Context-based, adaptive, lossless image coding We propose a context-based, adaptive, lossless image codec (CALIC). The codec obtains higher lossless compression of continuous-tone images than other lossless image coding techniques in the literature. This high coding efficiency is accomplished with relatively low time and space complexities. The CALIC puts heavy emphasis on image data modeling. A unique feature of the CALIC is the use of a large number of modeling contexts (states) to condition a nonlinear predictor and adapt the predictor to varying source statistics. The nonlinear predictor can correct itself via an error feedback mechanism by learning from its mistakes under a given context in the past. In this learning process, the CALIC estimates only the expectation of prediction errors conditioned on a large number of different contexts rather than estimating a large number of conditional error probabilities. The former estimation technique can afford a large number of modeling contexts without suffering from the context dilution problem of insufficient counting statistics as in the latter approach, nor from excessive memory use. The low time and space complexities are also attributed to efficient techniques for forming and quantizing modeling contexts
Placement: hot or not? Placement is considered a fundamental physical design problem in electronic design automation. It has been around so long that it is commonly viewed as a solved problem. However, placement is not just another design automation problem; placement quality is at the heart of design quality in terms of timing closure, routability, area, power and most importantly, time-to-market. Small improvements in placement quality often translate into large improvements further down the design closure stack. This paper makes the case that placement is a "hot topic" in design automation and presents several placement formulations related to routability, clocking, datapath, timing, and constraint management to drive years of research.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of embedded cover image to ensure security of secret data. The multilayer embedding of pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting and the peak point keeps unchanged. In the extraction end, we no longer need the key information about the peak point, we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of embedded cover image become almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
1.105038
0.014767
0.008163
0.006726
0.001538
0.00085
0.00047
0.000191
0.000033
0.000004
0
0
0
0
Kullback-Leibler approximation of spectral density functions We introduce a Kullback-Leibler (1968) -type distance between spectral density functions of stationary stochastic processes and solve the problem of optimal approximation of a given spectral density Ψ by one that is consistent with prescribed second-order statistics. In general, such statistics are expressed as the state covariance of a linear filter driven by a stochastic process whose spectral density is sought. In this context, we show (i) that there is a unique spectral density Φ which minimizes this Kullback-Leibler distance, (ii) that this optimal approximate is of the form Ψ/Q where the "correction term" Q is a rational spectral density function, and (iii) that the coefficients of Q can be obtained numerically by solving a suitable convex optimization problem. In the special case where Ψ = 1, the convex functional becomes quadratic and the solution is then specified by linear equations.
Histogram shape-based robust image watermarking method Developing a watermarking method that is robust to cropping attacks and random bending attacks (RBAs) is a challenging task in image watermarking. In this paper, we propose a histogram-based image watermarking method to tackle both cropping attacks and RBAs. In this method, the gray levels are first divided into groups. Secondly, the groups for watermark embedding are selected according to the number of pixels in them, which makes this method fully based on the histogram shape of the original image and adaptive to different images. Then the watermark bits are embedded by modifying the histogram of the selected groups. Since histogram shape is insensitive to cropping and independent of pixel positions, the proposed method is robust to cropping attacks and RBAs. Besides, it also has high robustness against other common attacks. Experimental results demonstrate the effectiveness of the proposed method.
Three stages prediction-error expansion reversible watermarking This paper proposes a three-stage difference expansion reversible watermarking scheme. In the first stage, a quarter of the pixels are estimated by using the median of the eight original neighbors of the 3×3 window. In the second stage, a quarter of the pixels are estimated as the average on the rhombus of the four horizontal and vertical original pixels. Finally, the remaining pixels are estimated on the rhombus context, using the modified pixels computed in the two previous stages. The experimental results show that the proposed scheme provides slightly better results than the classical two-stage reversible watermarking based on the rhombus context.
Prediction based reversible watermarking with contrast modification Reversible watermarking (RW) is used to embed some useful information into digital content in a reversible manner, where the reconstruction of the original image is highly desirable. This reversibility is important for almost all sensitive image application domains such as medical and military fields. To this aim, we propose a hybrid algorithm that offers a higher embedding bit rate with considerably low visual distortion. The proposed algorithm is a hybrid model of modified reversible contrast mapping (RCM) and prediction error expansion (PEE) used in RW. The benefit of RCM, which provides more embedding space, and that of the PEE algorithm, which allows a higher embedding bit rate, are combined. Both attributes are used here in one integrated form that allows a high data embedding rate with low embedding distortion. In a single-pass embedding, the proposed algorithm provides 1.33 bpp, 1.15 bpp, 1.024 bpp and 1.085 bpp for the Lena, Boat, Baboon and Elaine images, respectively.
Improved rhombus interpolation for reversible watermarking by difference expansion The paper proposes an interpolation error expansion reversible watermarking algorithm. The main novelty of the paper is a modified rhombus interpolation scheme. The four horizontal and vertical neighbors are considered and, depending on their values, the interpolated pixel is computed as the average of the horizontal pixels, of the vertical pixels or of the entire set of four pixels. Experimental results are provided. The proposed scheme outperforms the results obtained by using the average on the four horizontal and vertical neighbors and the ones obtained by using well known predictors as MED or GAP.
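The abstract does not give the exact rule for choosing among the horizontal average, the vertical average, or the full four-neighbor average, so the sketch below uses a hypothetical spread-based test purely to illustrate the idea of a context-dependent rhombus predictor.

```python
# A small sketch of a rhombus-style predictor in the spirit described above. The
# threshold test that picks the more consistent direction is an illustrative
# assumption; the paper's actual selection rule may differ.
import numpy as np

def rhombus_predict(img, y, x, thresh=10):
    up, down = int(img[y - 1, x]), int(img[y + 1, x])
    left, right = int(img[y, x - 1]), int(img[y, x + 1])
    h_spread, v_spread = abs(left - right), abs(up - down)
    if h_spread + thresh < v_spread:       # horizontal pair clearly more consistent
        return (left + right) // 2
    if v_spread + thresh < h_spread:       # vertical pair clearly more consistent
        return (up + down) // 2
    return (up + down + left + right) // 4

if __name__ == "__main__":
    img = np.arange(25).reshape(5, 5)
    pred = rhombus_predict(img, 2, 2)
    print(pred, pred - int(img[2, 2]))     # prediction and its error for the center pixel
```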
Reversible Watermarking: Current Status and Key Issues Over the past few years a number of research papers about reversible watermarks have been produced. Reversible watermarking is a novel category of watermarking schemes. It not only can strengthen the ownership of the original media but also can completely recover the original media from the watermarked media. This feature is suitable for some important media, such as medical and military images, because these kinds of media do not allow any losses. The aim of this paper is to define the purpose of reversible watermarking, reflecting recent progress, and provide some research issues for the future.
New directions in cryptography Two kinds of contemporary developments in cryptography are examined. Widening applications of teleprocessing have given rise to a need for new types of cryptographic systems, which minimize the need for secure key distribution channels and supply the equivalent of a written signature. This paper suggests ways to solve these currently open problems. It also discusses how the theories of communication and computation are beginning to provide the tools to solve cryptographic problems of long standing.
Shortest-path routing in arbitrary networks We introduce an on-line protocol which routes any set of N packets along shortest paths with congestion C and dilation D through an arbitrary network in O(C + D + log N) steps, with high probability. This time bound is optimal up to the additive log N, and it has previously only been reached for bounded-degree leveled networks.
Highly dynamic Destination-Sequenced Distance-Vector routing (DSDV) for mobile computers An ad-hoc network is the cooperative engagement of a collection of Mobile Hosts without the required intervention of any centralized Access Point. In this paper we present an innovative design for the operation of such ad-hoc networks. The basic idea of the design is to operate each Mobile Host as a specialized router, which periodically advertises its view of the interconnection topology with other Mobile Hosts within the network. This amounts to a new sort of routing protocol. We have investigated modifications to the basic Bellman-Ford routing mechanisms, as specified by RIP [5], to make it suitable for a dynamic and self-starting network mechanism as is required by users wishing to utilize ad hoc networks. Our modifications address some of the previous objections to the use of Bellman-Ford, related to the poor looping properties of such algorithms in the face of broken links and the resulting time dependent nature of the interconnection topology describing the links between the Mobile Hosts. Finally, we describe the ways in which the basic network-layer routing can be modified to provide MAC-layer support for ad-hoc networks.
A debate on teaching computing science
High-performance global routing with fast overflow reduction Global routing is an important step for physical design. In this paper, we develop a new global router, NTUgr, that contains three major steps: prerouting, initial routing, and enhanced iterative negotiation-based rip-up/rerouting (INR). The prerouting employs a two-stage technique of congestion-hotspot historical cost pre-increment followed by small bounding-box area routing. The initial routing is based on efficient iterative monotonic routing. For traditional INR, it has evolved as the main stream for the state-of-the-art global routers, which reveals its great ability to reduce the congestion and overflow. As pointed out by recent works, however, traditional INR may get stuck at local optima as the number of iterations increases. To remedy this deficiency, we replace INR by enhanced iterative forbidden-region rip-up/rerouting (IFR) which features three new techniques of (1) multiple forbidden regions expansion, (2) critical subnet rerouting selection, and (3) look-ahead historical cost increment. Experimental results show that NTUgr achieves high-quality results for the ISPD'07 and ISPD'08 benchmarks for both overflow and runtime.
Do you want to take notes?: identifying research missions in Yahoo! search pad Addressing users' information needs has been one of the main goals of Web search engines since their early days. In some cases, users cannot see their needs immediately answered by search results, simply because these needs are too complex and involve multiple aspects that are not covered by a single Web or search results page. This typically happens when users investigate a certain topic in domains such as education, travel or health, which often require collecting facts and information from many pages. We refer to this type of activity as "research missions". These research missions account for 10% of users' sessions and more than 25% of all query volume, as verified by a manual analysis that was conducted by Yahoo! editors. We demonstrate in this paper that such missions can be automatically identified on-the-fly, as the user interacts with the search engine, through careful runtime analysis of query flows and query sessions. The on-the-fly automatic identification of research missions has been implemented in Search Pad, a novel Yahoo! application that was launched in 2009, and that we present in this paper. Search Pad helps users keep track of results they have consulted. Its novelty however is that unlike previous note-taking products, it is automatically triggered only when the system decides, with a fair level of confidence, that the user is undertaking a research mission and thus is in the right context for gathering notes. Beyond the Search Pad specific application, we believe that changing the level of granularity of query modeling, from an isolated query to a list of queries pertaining to the same research missions, so as to better reflect a certain type of information needs, can be beneficial in a number of other Web search applications. Session-awareness is growing and it is likely to play, in the near future, a fundamental role in many on-line tasks: this paper presents a first step on this path.
Hiding Data Reversibly in an Image via Increasing Differences between Two Neighboring Pixels This paper proposes a simple, efficient method that, based on increasing the differences between two neighboring pixels, losslessly embeds a message into a host image. The point at which the number of pixel differences in the image is at a maximum is selected to embed the message. The selected difference is increased by 1 or left unchanged if the embedded bit is “1” or “0”, respectively. On the other hand, differences larger than the selected difference are increased by 1. Increasing a difference is done by adding 1 to or subtracting 1 from the pixel if its value is larger or smaller than its preceding pixel, respectively. Experimental results show that the proposed method can achieve a high payload capacity while the image distortion of the stego-image remains minimal.
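A compact 1-D sketch of the widening rule described above follows: the most frequent absolute difference between consecutive pixels carries the payload, and strictly larger differences are widened by one so that extraction stays unambiguous. Overflow handling, the 2-D scan order, and the bookkeeping of the original method are simplified assumptions.

```python
# A minimal 1-D sketch of difference-widening embedding for one image row. Decisions
# are made on the original differences; the stego row is rebuilt so that each new
# absolute difference equals the original one plus the decided growth, which is what
# a decoder needs to invert the process (message length is assumed to be known).
def embed_row(row, bits):
    row = [int(v) for v in row]
    diffs = [abs(row[i] - row[i - 1]) for i in range(1, len(row))]
    peak = max(set(diffs), key=diffs.count)          # most frequent difference value
    out, k = [row[0]], 0
    for i in range(1, len(row)):
        d = diffs[i - 1]
        grow = 0
        if d == peak and k < len(bits):
            grow, k = bits[k], k + 1                 # a '1' bit widens the peak difference by 1
        elif d > peak:
            grow = 1                                 # larger differences are always widened by 1
        step = 1 if row[i] >= row[i - 1] else -1     # widen away from the preceding pixel
        out.append(out[-1] + step * (d + grow))
    return out, peak, k

if __name__ == "__main__":
    stego, peak, used = embed_row([100, 100, 102, 103, 103, 108], [1, 0, 1])
    print(stego, peak, used)
```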
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of the embedded cover image to ensure the security of the secret data. The multilayer embedding of a pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting, while the peak point itself remains unchanged. At the extraction end, we no longer need key information about the peak point; we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of the embedded cover image almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
1.2
0.2
0.2
0.2
0.011111
0.002941
0
0
0
0
0
0
0
0
Higher-order statistical steganalysis of palette images In this paper, we describe a new higher-order steganalytic method called Pairs Analysis for detection of secret messages embedded in digital images. Although the approach is in principle applicable to many different steganographic methods as well as image formats, it is ideally suited to 8-bit images, such as GIF images, where message bits are embedded in LSBs of indices to an ordered palette. The EzStego algorithm(4) with random message spread and optimized palette order is used as an embedding archetype on which we demonstrate Pairs Analysis and compare its performance with the chi-square attacks(5,7) and our previously proposed RS steganalysis(2). Pairs Analysis enables more reliable and accurate message detection than previous methods. The method was tested on databases of GIF images of natural scenes, cartoons, and computer-generated images. The experiments indicate that the relative steganographic capacity of the EzStego algorithm with random message spread is less than 10% of the total image capacity (0.1 bits per pixel).
Immune programming This paper describes 'Immune Programming', a paradigm in the field of evolutionary computing taking its inspiration from principles of the vertebrate immune system. These principles are used to derive stack-based computer programs to solve a wide range of problems. An antigen is used to represent the programming problem to be addressed and may be provided in closed form or as an input/output mapping. An antibody set (a repertoire), wherein each member represents a candidate solution, is generated at random from a gene library representing computer instructions. Affinity, the fit of an antibody (a solution candidate) to the antigen (the problem), is analogous to shape-complementarity evident in biological systems. This measure is used to determine both the fate of individual antibodies, and whether or not the algorithm has successfully completed. When a repertoire has not yielded affinity relating algorithm completion, individual antibodies are replaced, cloned, or hypermutated. Replacement occurs according to a replacement probability and yields an entirely new randomly-generated solution candidate when invoked. This randomness (and that of the initial repertoire) provides diversity sufficient to address a wide range of problems. The chance of antibody cloning, wherein a verbatim copy is placed in the new repertoire, occurs proportionally to its affinity and according to a cloning probability. The chances of an effective (high-affinity) antibody being cloned is high, analogous to replication of effective pathogen-fighting antibodies in biological systems. Hypermutation, wherein probability-based replacement of the gene components within an antibody occurs, is also performed on high-affinity entities. However, the extent of mutation is inversely proportional to the antigenic affinity. The effectiveness of this process lies in the supposition that a candidate showing promise is likely similar to the ideal solution. This paper describes the paradigm in detail along with the underlying immune theories and their computational models. A set of sample problems are defined and solved using the algorithm, demonstrating its effectiveness and excellent convergent qualities. Further, the speed of convergence with respect to repertoire size limitations and probability parameters is explored and compared to stack-based genetic programming algorithms.
Detecting Low Embedding Rates This paper shows three approaches for detecting steganograms with low change density. MP3Stego is a steganographic algorithm with a very low embedding rate. The attack presented here is a statistical analysis of block sizes. It is able to detect 0.001 % of steganographic payload in MP3 files. The second approach is the use of hash functions to combine sample categories for the chi-square attack. One of these hash functions enables us to detect about 0.2 bits per pixel in true colour images. Another algorithm (Hide) was presented at the last workshop and constructed to be secure against visual and statistical chi-square attacks. The detection method for Hide combines the three colour components of each pixel to recognise an increased number of "neighbour colours".
Category attack for LSB steganalysis of JPEG images In this paper, we propose a new method for the detection of LSB embedding in JPEG images. We are motivated by a need to further research the idea of the chi-square attack. The new method simply uses the first-order statistics of DCT coefficients, but is more powerful at detecting random embedding in JPEG images. For evaluation, we used versions of Jsteg and Jphide with randomized embedding paths to generate stego images in our experiments. The results show that the proposed method outperforms the method of Zhang and Ping and is applicable to Jphide. The detection power of both proposed methods is compared to the blind classifier by Fridrich that uses 23 DCT features.
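As background, the sketch below implements the classic chi-square attack that the category attack refines: LSB embedding tends to equalize each pair of values (2k, 2k+1) in a first-order histogram, and the chi-square statistic measures how close the observed histogram is to that equalized state. This is the generic attack applied to a value histogram, not the paper's new category-based variant; the synthetic cover distribution in the demo is an assumption.

```python
# A minimal sketch of the classic chi-square attack on a first-order value histogram.
# A p-value close to 1 means the pairs (2k, 2k+1) are nearly equalized, which is the
# signature of random LSB embedding.
import numpy as np
from scipy.stats import chi2

def chi_square_attack(values):
    hist = np.bincount(np.asarray(values).ravel(), minlength=256)[:256]
    observed, expected = [], []
    for k in range(128):
        pair_sum = hist[2 * k] + hist[2 * k + 1]
        if pair_sum > 4:                               # skip sparsely populated pairs
            observed.append(hist[2 * k])
            expected.append(pair_sum / 2.0)
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    dof = len(observed) - 1
    return 1.0 - chi2.cdf(stat, dof)

if __name__ == "__main__":
    # Synthetic cover with a decaying histogram, then fully randomized LSBs as "stego".
    weights = np.exp(-np.arange(256) / 10.0)
    cover = np.random.choice(256, 100000, p=weights / weights.sum())
    stego = (cover & ~1) | np.random.randint(0, 2, cover.size)
    print(chi_square_attack(cover), chi_square_attack(stego))
```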
Secret sharing with multi-cover adaptive steganography More and more studies have been dedicated to investigating secret sharing with steganography. Unfortunately, no previous work has ever reported its capability to resist steganalysis. In this paper, we pose the multi-cover adaptive steganography problem. Two secret sharing methods for natural images based on multi-cover adaptive steganography have been proposed. The secret information is adaptively shared into textured regions of covers by a spatial +/-1 operation. In comparison to previous secret sharing methods, each of the proposed methods uses a simple share-constructing operation and each has lossless secret reconstruction and high quality shares. More importantly, the proposed methods are more secure in terms of resistance against state-of-the-art steganalysis techniques. In comparison to previous steganographic methods, the proposed methods hide secret bits among textured regions with different covers and are thus difficult to detect. Moreover, the proposed methods can be used to adaptively embed location-sensitive secrets (e.g., secret images) and require no stego key for extracting the encrypted message. These results also have significance in individual cover steganography. The experimental results show the effectiveness of the proposed methods.
STEGANALYSIS OF ADDITIVE NOISE MODELABLE INFORMATION HIDING The process of information hiding is modeled in the context of additive noise. Under an independence assumption, the histogram of the stegomessage is a convolution of the noise probability mass function (PMF) and the original histogram. In the frequency domain this convolution is viewed as a multiplication of the histogram characteristic function (HCF) and the noise characteristic function. Least significant bit, spread spectrum, and DCT hiding methods for images are analyzed in this framework. It is shown that these embedding methods are equivalent to a lowpass filtering of histograms that is quantified by a decrease in the HCF center of mass (COM). These decreases are exploited in a known scheme detection to classify unaltered and spread spectrum images using a bivariate classifier. Finally, a blind detection scheme is built that uses only statistics from unaltered images. By calculating the Mahalanobis distance from a test COM to the training distribution, a threshold is used to identify steganographic images. At an embedding rate of 1 b.p.p. greater than 95% of the stegoimages are detected with false alarm rate of 5%.
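The statistic at the heart of the method above, the center of mass (COM) of the histogram characteristic function (HCF), can be computed in a few lines. The sketch below does only that and omits the bivariate and blind classifiers; the noise amplitude in the demo is an illustrative assumption.

```python
# A small sketch of the HCF center of mass: the HCF is the DFT of the image histogram,
# and additive-noise-style embedding acts as a low-pass filter on the histogram, which
# tends to pull the COM toward zero.
import numpy as np

def hcf_center_of_mass(img):
    hist = np.bincount(np.asarray(img).ravel(), minlength=256)[:256]
    hcf = np.fft.fft(hist)
    mag = np.abs(hcf[:129])                  # non-negative frequencies up to N/2
    freqs = np.arange(129)
    return float((freqs * mag).sum() / mag.sum())

if __name__ == "__main__":
    cover = np.random.randint(0, 256, (256, 256))
    noisy = np.clip(cover + np.random.randint(-3, 4, cover.shape), 0, 255)  # +/-3 additive noise
    print(hcf_center_of_mass(cover), hcf_center_of_mass(noisy))
```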
New steganography algorithm to conceal a large amount of secret message using hybrid adaptive neural networks with modified adaptive genetic algorithm In this paper, we propose a new steganography algorithm using non-uniform adaptive image segmentation (NUAIS) with an intelligent computing technique to efficiently conceal a large amount of confidential messages (Smsg) in color images. The number of secret bits to be replaced is non-uniform from one byte to another; it is based on byte characteristics, which are extracted by using 16 byte levels (BL) together with the variance distribution of the Neighboring Eight Bytes (NEB) around the current byte. Four security layers are introduced to increase resistance against statistical and visual attacks. These layers are designed to conceal Smsg imperceptibly, with lower distortion of the color plane and high protection of Smsg. The proposed intelligent technique, using hybrid adaptive neural networks with a modified adaptive genetic algorithm employing uniform adaptive relaxation (ANN_AGAUAR), works as the fourth security layer to improve the quality of the stego image (Is). The results are discussed and compared with previous steganography algorithms; they demonstrate that the proposed algorithm can efficiently conceal up to four secret bits per byte with better visual quality.
Reversible watermarking algorithm using sorting and prediction This paper presents a reversible or lossless watermarking algorithm for images without using a location map in most cases. This algorithm employs prediction errors to embed data into an image. A sorting technique is used to record the prediction errors based on magnitude of its local variance. Using sorted prediction errors and, if needed, though rarely, a reduced size location map allows us to embed more data into the image with less distortion. The performance of the proposed reversible watermarking scheme is evaluated using different images and compared with four methods: those of Kamstra and Heijmans, Thodi and Rodriguez, and Lee et al. The results clearly indicate that the proposed scheme can embed more data with less distortion.
Inverted pattern approach to improve image quality of information hiding by LSB substitution Capacity and invisibility are two targets of the methods for information hiding. Because these two targets contradict each other, to hide large messages into the cover image and remain invisible is an interesting challenge. The simple least-significant-bit (LSB) substitution approach, which embeds secret messages into the LSB of pixels in cover images, usually embeds huge secret messages. After a large message is embedded, the quality of the stego-image will be significantly degraded. In this paper, a new LSB-based method, called the inverted pattern (IP) LSB substitution approach, is proposed to improve the quality of the stego-image. Each section of secret images is determined to be inverted or not inverted before it is embedded. The decisions are recorded by an IP for the purpose of extracting data and the pattern can be seen as a secret key or an extra data to be re-embedded. The experimental results show that our proposed method runs fast and has better results than that of previous works.
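A minimal sketch of the inverted-pattern idea follows: each section of the secret bit stream is embedded either as-is or bit-inverted, whichever changes fewer cover LSBs, and the per-section decision is recorded in an inversion pattern (IP) needed for extraction. The section size and single-LSB depth are assumptions rather than the paper's exact settings.

```python
# A small sketch of inverted-pattern (IP) LSB substitution: per section, embed the
# chunk plain or inverted depending on which causes fewer LSB changes, and keep the
# decision bits as the IP (which must itself be stored or re-embedded).
import numpy as np

def ip_lsb_embed(pixels, secret_bits, section=8):
    pixels = np.asarray(pixels, dtype=np.int32).copy()
    ip = []
    for s in range(0, len(secret_bits), section):
        chunk = np.array(secret_bits[s:s + section])
        target = pixels[s:s + len(chunk)]
        err_plain = np.sum((target & 1) != chunk)          # changed pixels if embedded as-is
        err_inv = np.sum((target & 1) != (1 - chunk))      # changed pixels if embedded inverted
        invert = int(err_inv < err_plain)
        ip.append(invert)
        embedded = (1 - chunk) if invert else chunk
        pixels[s:s + len(chunk)] = (target & ~1) | embedded
    return pixels, ip

if __name__ == "__main__":
    cover = np.random.randint(0, 256, 32)
    secret = list(np.random.randint(0, 2, 32))
    stego, ip = ip_lsb_embed(cover, secret)
    print(ip, int(np.sum(stego != cover)))                 # IP and number of modified pixels
```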
An image watermarking scheme in wavelet domain with optimized compensation of singular value decomposition via artificial bee colony Digital image watermarking is the process of authenticating a digital image by embedding a watermark into it and thereby protecting the image from copyright infringement. This paper proposes a novel robust image watermarking scheme developed in the wavelet domain based on the singular value decomposition (SVD) and artificial bee colony (ABC) algorithm. The host image is transformed into an invariant wavelet domain by applying redistributed invariant wavelet transform, subsequently the low frequency sub-band of wavelet transformed image is segmented into non-overlapping blocks. The most suitable embedding blocks are selected using the human visual system for the watermark embedding. The watermark bits are embedded into the target blocks by modifying the first column coefficients of the left singular vector matrix of SVD decomposition with the help of a threshold and the visible distortion caused by the embedding is compensated by modifying the coefficients of the right singular vector matrix employing compensation parameters. Furthermore, ABC is employed to obtain the optimized threshold and compensation parameters. Experimental results, compared with the related existing schemes, demonstrated that the proposed scheme not only possesses the strong robustness against image manipulation attacks, but also, is comparable to other schemes in term of visual quality.
Joint reversible data hiding and image encryption The image encryption process is joined with reversible data hiding in this paper, where the data to be hidden are modulated by different secret keys selected for encryption. To extract the hidden data from the ciphertext, the different tentative decryption results are tested against a typical random distribution in both the spatial and frequency domains, and the goodness-of-fit degrees are compared to extract one hidden bit. The encryption-based data hiding process is inherently reversible. Experiments demonstrate the proposed scheme's effectiveness on natural and textural images, both in gray-level and binary forms.
A 9/7 wavelet-based lossless data hiding In this paper, a lossless data-hiding approach is presented based on quantized coefficients of discrete wavelet transform (DWT) in the frequency domain to embed secret message. Using the quantized coefficients for 9/7 wavelet filter in DWT, we embed secret data into the successive zero coefficients of the medium-high frequency components in each reconstructed block for 3-level 2-D DWT of a cover-image. The procedures of the proposed system include embedment, extraction, and restoration. Experimental results show that the proposed method can achieve high embedding capacity and acceptable image quality of stego-image, and data reversibility.
A Graph-Oriented Knowledge Representation and Unification Technique for Automatically Selecting and Invoking Software Functions An interface to information systems that can automatically select, sequence, and invoke the sources needed to satisfy a user’s request can have great practical value. It can spare the user from the need to know what information is available from each of the sources, and how to access them. We have developed and implemented a graph-oriented technique for representing software modules and databases, along with unification and search algorithms that enable an interface to perform this automatic programming function. The approach works for a large class of useful requests, in a tractable amount of run time. The approach permits the logical integration of pre-existing batch application programs and databases. It may also be used in other situations requiring automatic selection of software functions to obtain information specified in a declarative expression.
A semi-fragile lossless data hiding scheme based on multi-level histogram shift in image integer wavelet transform domain A semi-fragile lossless data hiding (LDH) scheme based on histogram distribution shift in the integer wavelet transform (IWT) domain is proposed in this paper. In the proposed scheme, the transform approximation image is divided into non-overlapping blocks. In each block, the differences between the neighboring elements are computed and a histogram is made of the difference values. The secret data are embedded into the blocks based on a multi-level shifting mechanism of the histogram. The proposed method enables the exact recovery of the original host signal upon extracting the embedded information, provided the watermarked image is not affected by any other process. The performance of the proposed scheme is evaluated with respect to imperceptibility, robustness, and data payload capacity by simulations. Compared with state-of-the-art techniques, the proposed method achieves higher performance.
1.022996
0.022629
0.013096
0.011111
0.004921
0.002477
0.000695
0.000183
0.000058
0.000015
0.000004
0
0
0
A new 1D chaotic system for image encryption This paper introduces a simple and effective chaotic system using a combination of two existing one-dimension (1D) chaotic maps (seed maps). Simulations and performance evaluations show that the proposed system is able to produce many 1D chaotic maps with larger chaotic ranges and better chaotic behaviors compared with their seed maps. To investigate its applications in multimedia security, a novel image encryption algorithm is proposed. Using the same set of security keys, this algorithm is able to generate a completely different encrypted image each time it is applied to the same original image. Experiments and security analysis demonstrate the algorithm's excellent performance in image encryption and against various attacks. Highlights: This work proposes a new chaotic system with a simple and effective structure. The proposed system generates many new 1D chaotic maps using two existing maps as seed maps. The proposed chaotic system has much larger chaotic ranges and better chaotic behaviors compared with its seed maps. This work also introduces a new image encryption algorithm based on the proposed chaotic system. Using the same security keys, the algorithm can transform an original image into different encrypted images in each execution.
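To illustrate the general idea of building new 1D maps from seed maps, the sketch below combines the Logistic and Sine maps by summing their outputs modulo 1. The actual combination structure and parameterization of the proposed system may well differ, so this is only a hedged illustration of the concept.

```python
# A small sketch of combining two 1D seed maps (Logistic and Sine) into a new 1D
# chaotic sequence by a modulo-1 sum. The resulting keystream-like sequence could
# drive pixel permutation or diffusion in an encryption scheme.
import math

def logistic(x, r=3.99):
    return r * x * (1.0 - x)

def sine_map(x, a=0.99):
    return a * math.sin(math.pi * x)

def combined_sequence(x0, n, r=3.99, a=0.99):
    seq, x = [], x0
    for _ in range(n):
        x = (logistic(x, r) + sine_map(x, a)) % 1.0    # combine the two seed maps
        seq.append(x)
    return seq

if __name__ == "__main__":
    print(combined_sequence(0.31, 5))
```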
On the Relation of Random Grid and Deterministic Visual Cryptography Visual cryptography is a special type of secret sharing. Two models of visual cryptography have been independently studied: 1) deterministic visual cryptography, introduced by Naor and Shamir, and 2) random grid visual cryptography, introduced by Kafri and Keren. In this paper, we show that there is a strict relation between these two models. In particular, we show that to any random grid scheme corresponds a deterministic scheme and vice versa. This allows us to use results known in a model also in the other model. By exploiting the (many) results known in the deterministic model, we are able to improve several schemes and to provide many upper bounds for the random grid model and by exploiting some results known for the random grid model, we are also able to provide new schemes for the deterministic model. A side effect of this paper is that future new results for any one of the two models should not ignore, and in fact be compared with, the results known in the other model.
Image encryption: Generating visually meaningful encrypted images. To protect image contents, most existing encryption algorithms are designed to transform an original image into a texture-like or noise-like image which is, however, an obvious visual sign indicating the presence of an encrypted image and thus results in a significantly large number of attacks. To address this problem, this paper proposes a new image encryption concept to transform an original image into a visually meaningful encrypted one. As an example of the implementation of this concept, we introduce an image encryption system. Simulation results and security analysis demonstrate excellent encryption performance of the proposed concept and system.
Data hiding in encrypted images based on predefined watermark embedding before encryption process. In this paper, we propose a novel approach which allows embedding a message into an encrypted image; a message that can be accessed whether the image is encrypted or not. To do so, the proposed solution relies on the insertion into the image of a predefined watermark, a “pre-watermark”, before the encryption process. Message insertion (resp. extraction) is then commonly conducted into (resp. from) the encrypted image. It is the impact of this data hiding process onto the “pre-watermark” that gives us access to the message into the spatial domain, i.e. after the decryption process. By doing so, the watermark processing process is independent of the knowledge of the encryption key and one only just has to know the watermarking key so as to embed the message and extract it from the encrypted or decrypted image. Reciprocally, encryption/decryption processes are completely independent from message embedding/extraction. We illustrate the feasibility of our approach considering the RC4 stream cipher algorithm and the Least Significant Bit substitution watermarking modulation. Experiments conducted on natural test images and ultrasound medical images demonstrate the general capabilities of our system to securely make available a message in both spatial and encrypted domains while minimizing image distortion. We further discuss the use of different encryption and watermarking algorithms and we illustrate how our system can be used for ensuring image integrity and authenticity control.
Image Encryption Algorithm Based on Logistic and Two-Dimensional Lorenz In recent years, experts and scholars in the field of information security have attached great importance to the security of image information and have proposed many image encryption algorithms with higher security. In order to further improve the security level of image encryption algorithms, this paper proposes a new image encryption algorithm based on two-dimensional Lorenz and Logistic maps. Encryption tests on several classic images show that the algorithm has high security and strong robustness. This paper also analyzes the security of the encryption algorithm through histogram analysis, information entropy, correlation examination, differential attack, key sensitivity tests, key space analysis, noise attacks, and contrast analysis. By comparing the image encryption algorithm proposed in this paper with some existing image encryption algorithms, the encryption algorithm is shown to have a large key space, strong key sensitivity, small correlation coefficients and high contrast. In addition, the encryption algorithm can also resist noise attacks.
Integrated chaotic systems for image encryption. •We propose two integrated chaotic systems (ICS) to generate different chaotic maps.•A new image encryption algorithm is developed using ICS.•We provide a theoretical study of ICS and extensive security analysis of proposed encryption algorithm.
Reversible data hiding based on reducing invalid shifting of pixels in histogram shifting. •A reversible data hiding scheme for effectively reducing distortion is proposed.•Reduce the number of invalid shifting pixels in histogram shifting.•The proposed method has a higher embedding capacity.
Reversible data hiding with histogram-based difference expansion for QR code applications In this paper, we propose a new reversible data hiding algorithm with an application to quick response (QR) codes. QR codes are random-looking patterns that are commonly observed on the corner of posters or webpages. QR codes aim at convenience-oriented applications for mobile phone users. People can use their mobile phone cameras to capture the QR code at the corner of a web page, and then the hyperlink corresponding to the QR code can be accessed instantly. Since a QR code looks like random noise and occupies a corner of the original image, its existence can greatly reduce the value of the original content. Thus, how to retain the value of the original image, while keeping the capability of instant access to webpages, is the major concern of this paper. With the aid of our reversible data hiding technique, the QR code can be hidden into the original image, and a considerable increase in embedding capacity can be expected. Next, we propose a scheme such that when the image containing the QR code is browsed, the hyperlink corresponding to the QR code is accessed first. Then, the QR code vanishes and the original image is recovered to retain the information conveyed therein. Simulation results demonstrate the applicability of the proposed algorithm.
Secret image sharing with steganography and authentication A novel approach to secret image sharing based on a (k, n)-threshold scheme with the additional capabilities of steganography and authentication is proposed. A secret image is first processed into n shares which are then hidden in n user-selected camouflage images. It is suggested to select these camouflage images to contain well-known contents, like famous character images, well-known scene pictures, etc., to increase the steganographic effect for the security protection purpose. Furthermore, an image watermarking technique is employed to embed fragile watermark signals into the camouflage images by the use of parity-bit checking, thus providing the capability of authenticating the fidelity of each processed camouflage image, called a stego-image. During the secret image recovery process, each stego-image brought by a participant is first verified for its fidelity by checking the consistency of the parity conditions found in the image pixels. This helps to prevent the participant from incidental or intentional provision of a false or tampered stego-image. The recovery process is stopped if any abnormal stego-image is found. Otherwise, the secret image is recovered from k or more authenticated stego-images. Some effective techniques for handling large images as well as for enhancing security protection are employed, including pixelwise processing of the secret image in secret sharing, use of parts of camouflage images as share components, adoption of prime-number modular arithmetic, truncation of large image pixel values, randomization of parity check policies, etc. Consequently, the proposed scheme as a whole offers a high secure and effective mechanism for secret image sharing that is not found in existing secret image sharing methods. Good experimental results proving the feasibility of the proposed approach are also included.
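The (k, n)-threshold core underlying schemes of this kind can be sketched with Shamir-style polynomial sharing over a prime field, as below. The steganographic camouflage and the parity-based authentication layer of the paper are not reproduced, and the prime 251 with pixel truncation is a common convention rather than a detail taken from the abstract.

```python
# A minimal sketch of (k, n)-threshold sharing of one value (e.g. a pixel): a random
# degree-(k-1) polynomial over GF(251) hides the secret in its constant term, n share
# values are handed out, and any k shares reconstruct the secret by Lagrange
# interpolation at x = 0.
import random

P = 251  # prime modulus often used for 8-bit pixel sharing (values >= 251 need truncation)

def make_shares(secret, k, n):
    coeffs = [secret % P] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P   # Lagrange basis at x = 0
    return secret

if __name__ == "__main__":
    shares = make_shares(123, k=3, n=5)
    print(reconstruct(shares[:3]), reconstruct(shares[1:4]))    # any 3 shares recover 123
```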
Perceptual aspects in data hiding In this paper, a new methodology for the secure embedding of data in a video sequence is presented. To guarantee the imperceptibility of the embedded data, we propose a novel method for selecting the frame regions that may be considered perceptually non relevant. For each frame of the video, a saliency analysis is performed based on features that are thought to be relevant to the Human Vision System. In particular, the local contrast, the color, and the motion information have been considered. By weighting all these features, an importance map is built to drive the embedding procedure. Subjective experiment results show that the artifacts caused by this localized embedding procedure are considered by the subjects to be less annoying than if the data hiding is performed on the whole frame. Nevertheless robustness is achieved.
Reversible data hiding for high quality images exploiting interpolation and direction order mechanism Reversible data hiding can restore the original image from the stego image without any distortion after the embedded secret message is exactly extracted. In this paper, a novel, reversible, data hiding scheme for high quality images is proposed in spatial domain. To increase embedding capacity and enhance image quality, the proposed scheme classifies all pixels as wall pixels and non-wall pixels. For wall pixel, the interpolation error is used to embed secret data over the interpolation prediction method. In contrast, the difference value between the non-wall pixel and its parent pixel, which is defined by the direction order, is computed to hide secret data based on the histogram shifting. The experimental results demonstrate that our scheme provides a larger payload and a better image quality than some existing schemes. Moreover, the performance of the proposed scheme is more stable for different images.
A new robust color image watermarking using local quaternion exponent moments. Desynchronization attacks that cause displacement between embedding and detection are usually difficult for watermark to survive. It is a challenging work to design a robust image watermarking scheme against desynchronization attacks, especially for color host images. In this paper, we propose a robust color image watermarking scheme based on local quaternion exponent moments. The proposed scheme has the following advantages: (1) the stable and uniform color image feature points are extracted by the new color image detector, in which the probability density gradient and color invariance model are used, (2) the affine invariant local feature regions are constructed adaptively according to the variation of local probability density and (3) the effective quaternion exponent moments are derived and introduced to embed watermark in the color image, which consider the correlation between different color channels. Experiments are carried out on a color image set of 100 images collected from Internet, and the preliminary results show that the proposed color image watermarking is not only invisible and robust against common image processing operations such as sharpening, noise adding, and JPEG compression, but also robust against the desynchronization attacks.
Edge separability-based circuit clustering with application to multilevel circuit partitioning In this paper, we propose a new efficient O(n log n) connectivity-based bottom-up clustering algorithm called edge separability-based clustering (ESC). Unlike existing bottom-up algorithms that are based on local connectivity information of the netlist, ESC exploits more global connectivity information using edge separability to guide the clustering process, while carefully monitoring cluster area balance. Exact computation of the edge separability λ(e) for a given edge e=(x,y) in an edge-weighted undirected graph G is equivalent to finding the maximum flow between x and y. Since the currently best known time bound for solving the maximum flow problem is O(mn log(n^2/m)), due to Goldberg and Tarjan (Goldberg and Tarjan, 1988), the computation of λ(e) for all edges in G requires O(m^2 n log(n^2/m)) time. However, we show that a simple and efficient algorithm CAPFOREST (Nagamochi and Ibaraki, 1992) can be used to provide a good approximation of edge separability (within a 9.1% empirical error bound) for all edges in G without using any network flow computation in O(n log n) time. Our experimental results based on large-scale benchmark circuits demonstrate the effectiveness of using edge separability in the context of a multilevel partitioning framework for cutsize minimization. We observe that exploiting edge separability yields better quality partitioning solutions compared to existing clustering algorithms (Sun and Sechen, 1993), (Cong and Smith, 1993), (Huang and Kahng, 1995), (Ng et al., 1987), (Wei and Cheng, 1991), (Shin and Kim, 1993), (Schuler and Ulrich, 1972), (Karypis et al., 1997), that rely on local connectivity information. In addition, our ESC-based iterative improvement based multilevel partitioning algorithm LR/ESC-PM provides comparable results to the state-of-the-art hMetis package (Karypis et al., 1997), (Karypis and Kumar, 1999).
Novel Image Authentication Scheme for AMBTC-Compressed Images This paper proposes a novel image authentication scheme based on absolute moment block truncation coding (AMBTC). In the proposed scheme, for each compressed image block we use the parity of the bitmap (BM) to generate the authentication code. For image authentication, data hiding is adopted to embed the authentication code to quantization levels of each image block compressed by absolute moment block truncation coding. The embedding position is generated by a pseudo-random number generator for security concerned. Experimental results demonstrated that the proposed scheme achieved high-quality embedded images and good detection accuracy, with stable performance and high expansibility.
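For context, the sketch below shows plain AMBTC compression of one block, which is the representation the authentication scheme operates on: a low quantization level, a high quantization level, and a bitmap. The authentication-code generation and its embedding into the quantization levels are not reproduced here.

```python
# A brief sketch of AMBTC for a single block: pixels at or above the block mean form
# the bitmap's 1-group, and each group is represented by its own mean (the high and
# low quantization levels).
import numpy as np

def ambtc_block(block):
    block = np.asarray(block, dtype=float)
    mean = block.mean()
    bitmap = (block >= mean).astype(np.uint8)
    high = block[bitmap == 1].mean()                           # mean of the brighter group
    low = block[bitmap == 0].mean() if (bitmap == 0).any() else high
    return int(round(low)), int(round(high)), bitmap

def ambtc_decode(low, high, bitmap):
    return np.where(bitmap == 1, high, low)

if __name__ == "__main__":
    block = np.array([[12, 200, 14, 190],
                      [11, 198, 13, 188],
                      [12, 202, 15, 191],
                      [10, 199, 12, 189]])
    low, high, bm = ambtc_block(block)
    print(low, high)
    print(ambtc_decode(low, high, bm))
```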
1.030418
0.03149
0.03149
0.028571
0.028571
0.014286
0.003571
0.000922
0.000149
0.000025
0.000005
0
0
0
Reversible data hiding based on PDE predictor. •Propose a new predictor based on PDE (partial difference expanding) which is image-content dependent.•More accurate than the conventional predictors and hence result in better performance than the state-of-the-art reversible data hiding methods.•High applicability since PDE predictor can be well incorporated in various PEE schemes.
Capacity improvement of reversible data hiding scheme through better prediction and double cycle embedding process For better protection of surreptitious information from intruders, steganography is now a widely used technique for hiding information through bit embedding inside pictures, speech and other multimedia content. Among many different steganography schemes, prediction-error based reversible data hiding methods are more effective for security reasons. Nevertheless, the techniques presented so far can embed only a limited number of message bits in a cover medium and thus limit the length of the covert message to be sent through embedding. In this paper, the authors present a novel embedding process, called the double cycle embedding scheme, where the embedding capacity is increased significantly by modifying the pixel-value estimation scheme and the block-variance calculation technique, selecting the two highest peaks in the histogram regardless of their positions, and embedding twice in the same cover image. The scheme also modifies the pixel-prediction technique for better prediction and generalizes the block sizes in the cover image to increase the robustness of the policy. The experimental results demonstrate that the proposed scheme outperforms other competing schemes with an embedding gain of 40%~700% depending on the cover image. The experimental results also demonstrate that the scheme exhibits stronger resistance against several statistical attacks, e.g., the generalized Benford law and SPAM features.
Enhancing the embedding payload by handling the affair of association and mapping of block pixels through prediction errors histogram. Images are intentionally distorted in many reversible data hiding schemes. This distortion is performed either by encrypting before data embedment or by applying histogram association mapping (HAM) during data concealment. The latter minimizes the processing complexity and eliminates the requirement of sharing secret keys. In HAM, the pixels of each block that belong to a segment of the gray scale are associated and mapped by the message bits to another gray segment. The number of embedded bits is determined by the range of the block pixels, i.e., the limit within which the pixels spread out. Though smaller ranges contribute more to the embedding payload, the frequency of such ranges decreases in typical images. However, a predictor, if applied to the block pixels, clusters the prediction errors together. Consequently, the range of the absolute values of the prediction errors becomes smaller. The proposed scheme, therefore, applies HAM to the prediction errors and shifts the block pixels accordingly. Experiments were conducted using three different standard image datasets to examine the enhancement in the embedding payload and to investigate the performance for different block sizes. The investigation reveals that the average payload of the proposed scheme is 1.5 to 2 times that of the competing one in all cases.
Facial Biometrics Using Nontensor Product Wavelet And 2d Discriminant Techniques A new facial biometric scheme is proposed in this paper. Three steps are included. First, a new nontensor product bivariate wavelet is utilized to get different facial frequency components. Then a modified 2D linear discriminant technique (M2DLD) is applied on these frequency components to enhance the discrimination of the facial features. Finally, support vector machine (SVM) is adopted for classification. Compared with the traditional tensor product wavelet, the new nontensor product wavelet can detect more singular facial features in the high-frequency components. Earlier studies show that the high-frequency components are sensitive to facial expression variations and minor occlusions, while the low-frequency component is sensitive to illumination changes. Therefore, there are two advantages of using the new nontensor product wavelet compared with the traditional tensor product one. First, the low-frequency component is more robust to the expression variations and minor occlusions, which indicates that it is more efficient in facial feature representation. Second, the corresponding high-frequency components are more robust to the illumination changes, subsequently it is more powerful for classification as well. The application of the M2DLD on these wavelet frequency components enhances the discrimination of the facial features while reducing the feature vectors dimension a lot. The experimental results on the AR database and the PIE database verified the efficiency of the proposed method.
Optimum reversible data hiding and permutation coding This paper is mainly devoted to investigating the connection between binary reversible data hiding and permutation coding. We start by undertaking an approximate combinatorial analysis of the embedding capacity of reversible watermarking in the binary Hamming case, which asymptotically shows that optimum reversible watermarking must involve not only “writing on dirty paper”, as in any blind data hiding scenario, but also writing on the dirtiest parts of the paper. The asymptotic analysis leads to the information-theoretical result given by Kalker and Willems more than a decade ago. Furthermore, the novel viewpoint of the problem suggests a near-optimum reversible watermarking algorithm for the low embedding distortion regime based on permutation coding. A practical implementation of permutation coding, previously proposed in the context of maximum-rate perfect steganography of memoryless hosts, can be used to implement the algorithm. The paper concludes with a discussion on the evaluation of the general rate-distortion bound for reversible data hiding.
Adaptive reversible watermarking with improved embedding capacity Embedding capacity is one of the most important issues in reversible watermarking. However, the theoretical maximum embedding capacity of most reversible watermarking algorithms is only 1.0 bits per pixel (bpp). To achieve a higher capacity, we have to modify the least significant bit (LSB) multiple times, which definitely lowers the quality of the embedded image. To this end, this paper proposes a novel reversible watermarking algorithm by employing histogram shifting and adaptive embedding. Specifically, the amount of embedded watermark is adaptively determined in terms of the context of each pixel. For pixels with small prediction error, we modify the second, third and even the fourth LSBs as well to embed more than one watermark bit. Consequently, the proposed method achieves an embedding capacity larger than 1.0 bpp in single-pass embedding while bringing relatively low embedding distortion. The superiority of the proposed method is experimentally verified by comparison with other existing schemes.
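A small sketch of the adaptive-depth idea follows: the number of bits a pixel may carry grows as its prediction error shrinks. The thresholds, the simple left-neighbor predictor, and the plain LSB replacement used here are illustrative assumptions only; the paper's actual construction is reversible and based on histogram shifting, which this sketch does not reproduce.

```python
# A minimal sketch of context-adaptive embedding depth: smooth contexts (small
# prediction error) carry more payload bits than busy ones.
def embed_depth(pred_error, thresholds=(2, 6, 14)):
    """Return how many LSBs may carry payload for a given prediction error."""
    e = abs(int(pred_error))
    if e <= thresholds[0]:
        return 3          # very smooth context: up to 3 payload bits
    if e <= thresholds[1]:
        return 2
    if e <= thresholds[2]:
        return 1
    return 0              # busy context: leave the pixel untouched

def adaptive_lsb_embed(row, bits):
    row = [int(v) for v in row]
    k = 0
    for i in range(1, len(row)):
        depth = embed_depth(row[i] - row[i - 1])       # left neighbor as a crude predictor
        for b in range(depth):
            if k == len(bits):
                return row, k
            row[i] = (row[i] & ~(1 << b)) | (bits[k] << b)
            k += 1
    return row, k

if __name__ == "__main__":
    stego, used = adaptive_lsb_embed([100, 101, 100, 140, 141], [1, 0, 1, 1, 0, 1])
    print(stego, used)
```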
Complete Separable Reversible Data Hiding in Encrypted Image. Reversible data hiding in encrypted images (RDHEI) is an emerging technology since it has good potential for practical applications such as encrypted image authentication, content owner identification and privacy protection. However, a key problem with many existing published works is that the embedded data can only be extracted either before or after image decryption. In this paper, a complete separable reversible data hiding scheme in encrypted images is proposed. Additional data can be embedded into a cipher image which is encrypted by RC4 and can be extracted error-free both from the cipher domain and the plaintext domain. Moreover, the proposed method is simpler to compute, while offering better performance. The results demonstrate that a larger payload, better image quality, and error-free data extraction as well as image recovery are achieved.
A reversible watermarking for authenticating 2D vector graphics based on bionic spider web. Based on the principle of predation using a spider web, a novel semi-blind and semi-fragile reversible watermarking scheme for authenticating 2D vector graphics is proposed. To resolve the deficiency in robustness that results from embedding the watermark in file-stream order, this scheme embeds a geometrically correlated watermark by adding vertices to entities. Firstly, two spider webs are generated based on the convex hull of the 2D vector graphics and a secret key. Secondly, the first spider web is intersected with the 2D vector graphics, and some vertices are added at the intersections. After that, the second spider web is intersected with the 2D vector graphics, and the number of intersections between every cobweb and the entities is recorded. Authentication is carried out by checking the intersection states and the number of intersections between the 2D vector graphics and the two spider webs. Experimental results and analysis show that the scheme can localize tampering at the entity level, provided that the convex hull is preserved. Meanwhile, the watermarking keeps the shape of the 2D vector graphics, and it is robust against translation, scaling, rotation and even entity rearrangement. It provides a novel means for the content protection of 2D vector graphics.
Data hiding based on the similarity between neighboring pixels with reversibility The technique of reversible data hiding recovers the original image from a stego-image without distortion after the hidden data are extracted. A natural image usually contains several smooth areas. The difference between two adjacent pixels has a high probability of being a small value. Therefore, this study proposed a novel reversible data hiding method, Adjacent Pixel Difference (APD), which employs the histogram of the pixel difference sequence to increase the embedding capacity. Experimental results reveal that APD achieves a high embedded capacity and still maintains a high stego-image quality. Furthermore, the stego-image quality and embedded capacity of the APD method outperform those of NSAS, F&S and T&H.
Adaptive reversible image watermarking algorithm based on DE. In order to improve the embedding rate of reversible watermarking algorithms for digital images and enhance the imperceptibility of the watermarked image, an adaptive reversible image watermarking algorithm based on DE is proposed. By analyzing the traditional DE algorithm and the generalized DE algorithm, an improved difference expansion algorithm is proposed. Through the analysis of image texture features, the improved algorithm is used for embedding and extracting the watermark. At the same time, in order to improve the embedding capacity and visual quality, the improved algorithm is further optimized in this paper. Simulation results show that the proposed algorithm not only achieves blind extraction, but also significantly increases the embedding capacity and imperceptibility. Moreover, compared with similar algorithms, it is easy to implement, and the quality of the watermarked images is high.
Dual imaging-based reversible hiding technique using LSB matching The dual-imaging technique has often been used in reversible data hiding recently. It creates two copies of the original image for embedding a message, to achieve a high hiding capacity. Image quality and hiding capacity in the dual-imaging method are affected by the embedding (or pixel modification) rules. Hence, the hiding rules play a pivotal role in such techniques. In recent years, the dual-imaging technique has mainly been applied with modulo-function hiding methods, which use a modulo function to establish rule tables and take the mapped values as camouflage pixel values. This study proposes an alternative approach based on least-significant-bit matching (LSB matching) for embedding with the dual-imaging technique. It employs seven rules to camouflage pixel modifications. According to the experimental results, the proposed method maintains rather high camouflage image quality with a high hiding capacity. The quality of the two camouflage images is also above average. Besides, better image quality is also obtained for natural images with various characteristics.
Fragile image watermarking using a gradient image for improved localization and security A fragile watermarking algorithm for image authentication and tamper detection is proposed. It uses a gradient image and its structure to achieve localization and security requirements. It provides superior localization with greater security against many attacks including vector quantization attack.
Recursive bisection based mixed block placement Many current designs contain a large number of standard cells intermixed with larger macro blocks. The range of size in these "mixed block" designs complicates the placement process considerably; traditional methods produce results that are far from satisfactory. In this paper we extend the traditional recursive bisection standard cell placement tool Feng Shui to directly consider mixed block designs. On a set of recent benchmarks, the new version obtains placements with wire lengths substantially lower than other current tools. Compared to Feng Shui 2.4, the placements of a Capo-based approach have 29% higher wire lengths, while the placements of mPG are 26% higher. Run times of our tool are also lower, and the general approach is scalable.
BonnPlace: A Self-Stabilizing Placement Framework We present a new algorithm for VLSI placement. Our tool BonnPlace incorporates a partitioning-based legalization into a force-directed loop by iteratively pulling circuits towards their positions in a legalized placement. This self-stabilizing algorithm combines the accuracy of partitioning-based methods with the stability of force-directed placement strategies. Using information from earlier iterations, it is capable of improving netlength as well as more involved objective functions like routability and timing behavior. In contrast to previous techniques, we legalize with higher effort, which allows us to reduce the number of iterations. Performance is further improved by adapting a clustering heuristic that takes into account the current cell positions, both for clustering and unclustering. We tested our tool on recent instances from industry and on publicly available benchmark suites. In particular on the routability-driven placement instances of the DAC 2012 contest, our algorithm produces the best known results.
Scores (score_0 to score_13): 1.004347, 0.005031, 0.005031, 0.004517, 0.003952, 0.002824, 0.002012, 0.001538, 0.000951, 0.000453, 0.000093, 0.000007, 0, 0
Synthesis of list algorithms by mechanical proving. We address the automation of the processes of algorithm synthesis and systematic exploration of the theory of lists. Our focus is on methods, techniques, inference rules and strategies for computer-based synthesis of list algorithms based on proving. Starting from the specification of the problem (input and output conditions), a synthesis statement is built: "for any list satisfying the input condition, there exists a list satisfying the output condition". The main difficulty is to find a constructive proof of this statement, from which the corresponding algorithm is easily extracted as a set of conditional equalities. In more detail, we aim at computer automation of the proof of the existence of the sorted version of the input list. By using different proof methods we automatically synthesize five sorting algorithms: Selection-Sort, Insertion-Sort, Quick-Sort, Merge-Sort, and a novel algorithm, which we call Unbalanced-Merge-Sort, as well as the auxiliary functions used in the sorting algorithms. The theory we use is first order, and mostly contains formulae which are equivalent to Horn clauses. Therefore, except for induction, SLD resolution style inferences are in principle sufficient for performing the proofs. However, for most of the proofs this leads to a very large search space. Therefore we introduce several novel inference rules and specific strategies, which are based on the properties of lists, and which we developed in the course of this case study on sorting. Moreover, during the process of algorithm synthesis we explore the theory of lists by introducing (automatically proving, and then using) the necessary properties. When the knowledge base does not contain the auxiliary functions needed for the respective version of the algorithm, then the proof fails and from this failure a new proof goal is created, which is the synthesis statement for the missing auxiliary functions ("cascading").
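To make the synthesis statement above concrete, here is a small illustrative sketch (not produced by the prover described in the paper) of the common specification "the output is a sorted permutation of the input" together with two of the algorithms it admits, Insertion-Sort and Selection-Sort; the Python function names are hypothetical.

```python
def is_sorted(xs):
    """The output condition of the synthesis statement."""
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def insertion_sort(xs):
    """One algorithm satisfying the specification: insert each element
    into an already sorted prefix."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def selection_sort(xs):
    """Another algorithm satisfying the same specification: repeatedly
    move the minimum of the remaining elements to the output."""
    rest, out = list(xs), []
    while rest:
        m = min(rest)
        rest.remove(m)
        out.append(m)
    return out

assert is_sorted(insertion_sort([3, 1, 2])) and is_sorted(selection_sort([3, 1, 2]))
```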
Remarks on "A Synthesis of Several Sorting Algorithms" by John Darlington In his paper "A Synthesis of Several Sorting Algorithms," John Darlington presents syntheses for six different sorting algorithms, together with a family tree of sorting algorithms, and mentions a symmetry between Quick Sort, Selection Sort, Merge Sort, and Insertion Sort. In our own attempts to codify programming knowledge, we have developed a slightly different family tree which shows similar symmetries, and which also shows that Bubble Sort and Sinking Sort can be viewed as in-place versions of Selection Sort and Insertion Sort, thus adding another symmetry to those noted by Darlington.
Derivation of sorting algorithms from a specification
A synthesis of several sorting algorithms We synthesise versions of six well known sorting algorithms from a common specification using program transformation techniques. On the way to the sorting algorithms we synthesise three algorithms for generating permutations thus building up a family tree for the sorts exposing certain relationships between them.
New directions in cryptography Two kinds of contemporary developments in cryptography are examined. Widening applications of teleprocessing have given rise to a need for new types of cryptographic systems, which minimize the need for secure key distribution channels and supply the equivalent of a written signature. This paper suggests ways to solve these currently open problems. It also discusses how the theories of communication and computation are beginning to provide the tools to solve cryptographic problems of long standing.
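The key observation of the paper above, that two parties can agree on a secret over a public channel, is easy to sketch. The toy-sized Python exchange below uses a deliberately small prime for readability; it is an illustration of the Diffie-Hellman idea, not a deployable implementation (real systems use standardized groups of 2048 bits or more).

```python
import secrets

# Toy-sized public parameters chosen for readability only; 2**64 - 59 is prime
# but far too small to be secure.
p = 2**64 - 59
g = 2

a = secrets.randbelow(p - 2) + 1        # Alice's private exponent
b = secrets.randbelow(p - 2) + 1        # Bob's private exponent
A = pow(g, a, p)                        # sent over the public channel
B = pow(g, b, p)                        # sent over the public channel

shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob       # both sides derive the same secret
```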
How to share a secret In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k - 1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
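The (k, n) threshold construction described above is short enough to sketch directly: shares are points on a random polynomial of degree k - 1 whose constant term is the secret, and any k points recover it by Lagrange interpolation at zero. The prime modulus and function names below are illustrative choices, not part of the paper.

```python
import random

P = 2**127 - 1   # a Mersenne prime; any prime larger than the secret works

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):       # Horner evaluation of the polynomial
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def recover(shares):
    """Lagrange interpolation at x = 0 yields the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
assert recover(shares[:3]) == 123456789
```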
A calculus of refinements for program derivations A calculus of program refinements is described, to be used as a tool for the step-by-step derivation of correct programs. A derivation step is considered correct if the new program preserves the total correctness of the old program. This requirement is expressed as a relation of (correct) refinement between nondeterministic program statements. The properties of this relation are studied in detail. The usual sequential statement constructors are shown to be monotone with respect to this relation and it is shown how refinement between statements can be reduced to a proof of total correctness of the refining statement. A special emphasis is put on the correctness of replacement steps, where some component of a program is replaced by another component. A method by which assertions can be added to statements to justify replacements in specific contexts is developed. The paper extends the weakest precondition technique of Dijkstra to proving correctness of larger program derivation steps, thus providing a unified framework for the axiomatic, the stepwise refinement and the transformational approach to program construction and verification.
Digital watermarking: algorithms and applications Digital watermarking of multimedia content has become a very active research area over the last several years. A general framework for watermark embedding and detection/decoding is presented here along with a review of some of the algorithms for different media types described in the literature. We highlight some of the differences based on application such as copyright protection, authentication, tamper detection, and data hiding as well as differences in technology and system requirements for different media types such as digital images, video, audio and text
A new polynomial-time algorithm for linear programming We present a new polynomial-time algorithm for linear programming. The running time of this algorithm is O(n^3.5 L^2), as compared to O(n^6 L^2) for the ellipsoid algorithm. We prove that given a polytope P and a strictly interior point a ∈ P, there is a projective transformation of the space that maps P, a to P', a' having the following property. The ratio of the radius of the smallest sphere with center a', containing P' to the radius of the largest sphere with center a' contained in P' is O(n). The algorithm consists of repeated application of such projective transformations each followed by optimization over an inscribed sphere to create a sequence of points which converges to the optimal solution in polynomial time.
The JPEG still picture compression standard A joint ISO/CCITT committee known as JPEG (Joint Photographic Experts Group) has been working to establish the first international compression standard for continuous-tone still images, both grayscale and color. JPEG's proposed standard aims to be generic, to support a wide variety of applications for continuous-tone images. To meet the differing needs of many applications, the JPEG standard includes two basic compression methods, each with various modes of operation. A DCT (discrete cosine transform)-based method is specified for `lossy' compression, and a predictive method for `lossless' compression. JPEG features a simple lossy technique known as the Baseline method, a subset of the other DCT-based modes of operation. The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications. The author provides an overview of the JPEG standard, and focuses in detail on the Baseline method
Higher Order Deforestation Intermediate data structures are widely used in functional programs. Programs which use these intermediate structures are usually a lot easier to understand, but they result in loss of efficiency at run-time. In order to reduce these run-time costs, a transformation algorithm called deforestation was proposed by Wadler which could eliminate intermediate structures. However, this transformation algorithm was formulated only for first order functional programs. In this paper, it is shown how the original deforestation algorithm can be extended to deal with higher order functional programs. A treeless form of higher order expression is defined which creates no intermediate structures. Higher order treeless form is an easily recognised form of expression, and any function definition can easily be generalised so that it is in this form. It is shown that the higher order deforestation algorithm will terminate if all function definitions are in this form. This algorithm is then compared with related work.
Feedback utilization control in distributed real-time systems with end-to-end tasks An increasing number of distributed real-time systems face the critical challenge of providing quality of service guarantees in open and unpredictable environments. In particular, such systems often need to enforce utilization bounds on multiple processors in order to avoid overload and meet end-to-end deadlines even when task execution times are unpredictable. While recent feedback control real-time scheduling algorithms have shown promise, they cannot handle the common end-to-end task model where each task is comprised of a chain of subtasks distributed on multiple processors. This paper presents the end-to-end utilization control (EUCON) algorithm that adaptively maintains desired CPU utilization through performance feedback loops. EUCON is based on a model predictive control approach that models utilization control on a distributed platform as a multivariable constrained optimization problem. A multi-input-multi-output model predictive controller is designed based on a difference equation model that describes the dynamic behavior of distributed real-time systems. Both control theoretic analysis and simulations demonstrate that EUCON can provide robust utilization guarantees when task execution times deviate from estimation or vary significantly at runtime.
Anti-Forensics with Steganographic Data Embedding in Digital Images E-forensics investigates and extracts confidential information from electronic products; anti-forensics, in contrast, aims to make the evidence in those products imperceptible and undetectable. This work presents an anti-forensic steganography method that can embed and extract messages from images. The highlight of exploiting modification direction (HoEMD) and adaptive EMD (AdEMD) methods use the modulo operation and take into account the sensitive nature of the human visual system. The HoEMD approach exploits the pixel directions: a pixel with a larger change implies more pixel directions and, ultimately, a larger embedding capacity. The pixel differencing in the proposed AdEMD method is used to evaluate whether a pixel located in an edge area can tolerate a larger change than a pixel located in a smooth area. To successfully extract a message, the differencing value is kept on the same level before and after data is concealed, using a delicate adjusting phase. In contrast with the PVD and LSB replacement method of Wu et al., the LSB substitution method, and Lee and Chen's data hiding scheme based on a modulus function, the proposed steganography system has a larger embedding capacity and a higher image quality. Effectiveness of the proposed steganography schemes against a previous blind steganalyzer is demonstrated using the statistical attack of Chi-square analysis.
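Both HoEMD and AdEMD build on the basic exploiting-modification-direction idea of carrying one (2n+1)-ary digit per group of n pixels while changing at most one pixel by one grey level. The sketch below shows that basic principle only; the paper's direction-exploiting and edge-adaptive refinements, as well as boundary-pixel handling, are omitted, and the function names are hypothetical.

```python
def emd_embed(group, digit):
    """Embed one digit in base (2n+1) into a group of n pixel values by
    changing at most one pixel by +/-1 (boundary values 0/255 not handled)."""
    n = len(group)
    base = 2 * n + 1
    f = sum((i + 1) * g for i, g in enumerate(group)) % base
    out = list(group)
    s = (digit - f) % base
    if s == 0:
        return out                  # the group already carries the digit
    if s <= n:
        out[s - 1] += 1             # raising pixel s adds s to the extraction value
    else:
        out[base - s - 1] -= 1      # lowering pixel (base - s) adds s modulo base
    return out

def emd_extract(group):
    n = len(group)
    return sum((i + 1) * g for i, g in enumerate(group)) % (2 * n + 1)

assert emd_extract(emd_embed([100, 101, 102, 103], 7)) == 7
```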
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of the embedded cover image to ensure security of the secret data. The multilayer embedding of a pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting while the peak point remains unchanged. At the extraction end, we no longer need the key information about the peak point; we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of the embedded cover image become almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
Scores (score_0 to score_13): 1.2, 0.066667, 0.033333, 0.005405, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Fast Analytic Placement using Minimum Cost Flow Many current integrated circuit designs, such as those released for the ISPD 2005 [14] placement contest, are extremely large and can contain a great deal of white space. These new placement problems are challenging; analytic placers perform well, but can suffer from high run times. In this paper, we present a new placement tool called Vaastu. Our approach combines continuous and discrete optimization techniques. We utilize network flows, which incorporate the more realistic half-perimeter wire length objective, to facilitate module spreading in conjunction with a log-sum-exponential function based analytic approach. Our approach obtains wire length results that are competitive with the best known results, but with much lower run times.
A morphing approach to address placement stability Traditionally, research in global placement has focused on relatively few simple metrics, such as pure wirelength or routability estimates. However, in the real world today, designs are driven by not-so-simple issues such as timing and crosstalk. The future holds even more difficulties as physical models for devices and interconnects become increasingly complex and unpredictable. Adoption of an iterative methodology, where one incrementally fixes design errors, is a basic approach to tackling these problems. However, developers of placement algorithms have long neglected the need for a tool which can be easily adopted into an incremental design flow. We propose a novel placement approach called grid morphing, which is specifically tailored for an incremental approach to placement. In particular, our technique focuses on the stability of the placement, which is critical for minimizing perturbation of the final placement under changes applied to the input netlist. We compare the stability of our approach to existing placement tools, and show through experiments that our approach still delivers good results under traditional placement metrics.
Mixed-size placement with fixed macrocells using grid-warping Grid-warping is a placement strategy based on a novel physical analogy: rather than move the gates to optimize their location, it elastically deforms a model of the 2-D chip surface on which the gates have been coarsely placed via a standard quadratic solve. Although the original warping idea works well for cell-based placement, it works poorly for mixed-size placements with large, fixed macrocells. The new problem is how to avoid elastically deforming gates into illegal overlaps with these background objects. We develop a new lightweight mechanism called "geometric hashing" which relocates gates to avoid these overlaps, but is efficient enough to embed directly in the nonlinear warping optimization. Results from a new placer (WARP3) running on the ISPD 2005 benchmark suite show both good quality and scalability.
Temperature-aware global placement This paper describes a deterministic placement method for standard cells which minimizes total power consumption and leads to a smooth temperature distribution over the die. It is based on the Quadratic Placement formulation, where the overall weighted net length is minimized. Two innovations are introduced to achieve the above goals. First, overall power consumption is minimized by shortening nets with a high power dissipation. Second, cells are spread over the placement area such that the die temperature profile inside the package is flattened. Experimental results show a significant reduction of the maximum temperature on the die and a reduction of total power consumption.
Mixed block placement via fractional cut recursive bisection Recursive bisection is a popular approach for large scale circuit placement problems, combining a high degree of scalability with good results. In this paper, we present a bisection-based approach for both standard cell and mixed block placement; in contrast to prior work, our horizontal cut lines are not restricted to row boundaries. This technique, which we refer to as a fractional cut, simplifies mixed block placement and also avoids a narrow region problem encountered in standard cell placement. Our implementation of these techniques in the placement tool Feng Shui 2.6 retains the speed and simplicity for which bisection is known, while making it competitive with leading methods on standard cell designs. On mixed block placement problems, we obtain substantial improvements over recently published work. Half perimeter wire lengths are reduced by 29% on average, compared to a flow based on Capo and Parquet; compared to mPG-ms, wire lengths are reduced by 26% on average.
BonnPlace: Placement of Leading-Edge Chips by Advanced Combinatorial Algorithms BonnPlace is the placement tool of the University of Bonn, Germany. It is continuously used in the industry for the placement of most complex chips. Global placement is based on quadratic placement and multisection. Legalization of macros and standard cells uses minimum cost flow and dynamic programming algorithms. We describe details of our implementation and present new experimental results.
Fast and robust quadratic placement combined with an exact linear net model This paper presents a robust quadratic placement approach, which offers both high-quality placements and excellent computational efficiency. The additional force which distributes the modules on the chip in force-directed quadratic placement is separated into two forces: hold force and move force. Both of these forces are determined without any heuristics. Based on this novel systematic force implementation, we show that our iterative placement algorithm converges to an overlap-free placement. In addition, engineering change order (ECO) is efficiently supported by our placer. To handle the important trade-off between CPU time and placement quality, a deterministic quality control is presented. In addition, a new linear net model is proposed, which accurately models the half-perimeter wirelength (HPWL) in the quadratic cost function of quadratic placement. HPWL in general is a linear metric for netlength and represents an efficient and common estimation for routed wirelength. Compared with the classical clique net model, our linear net model reduces memory usage by 75%, CPU time by 23% and netlength by 8%, which is measured by the HPWL of all nets. Using the ISPD-2005 benchmark suite for comparison, our placer combined with the new linear net model has just 5.9% higher netlength but is 16x faster than APlace, which offers the best netlength in this benchmark. Compared to Capo, our placer has 9.2% lower netlength and is 5.4x faster. In the recent ISPD-2006 placement contest, in which quality is mainly determined by netlength and CPU time, our placer together with the new net model produced excellent results.
Post-placement rewiring and rebuffering by exhaustive search for functional symmetries Separate optimizations of logic and layout have been thoroughly studied in the past and are well documented for common benchmarks. However, to be competitive, modern circuit optimizations must use physical and logic information simultaneously. In this work, we propose new algorithms for rewiring and rebuffering - a post-placement optimization that reconnects pins of a given netlist without changing the logic function and gate locations. These techniques are compatible with separate layout and logic optimizations, and appear independent of them. In particular, when the new optimization is applied before or after detailed placement, it approximately doubles the improvement in wirelength. Our contributions are based on exhaustive search for functional symmetries in sub-circuits consisting of several gates. Our graph-based symmetry finding is more comprehensive than previously known algorithms - it detects permutational and phase-shift symmetries on multiple input and output wires, as well as hybrid symmetries, creating more opportunities for rewiring and rebuffering.
Design and implementation of move-based heuristics for VLSI hypergraph partitioning We summarize the techniques of implementing move-based hypergraph partitioning heuristics and evaluating their performance in the context of VLSI design applications. Our first contribution is a detailed software architecture, consisting of seven reusable components, that allows flexible, efficient and accurate assessment of the practical implications of new move-based algorithms and partitioning formulations. Our second contribution is an assessment of the modern context for hypergraph partitioning research for VLSI design applications. In particular, we discuss the current level of sophistication in implementation know-how and experimental evaluation, and we note how requirements for real-world partitioners - if used as motivation for research - should affect the evaluation of prospective contributions. Two "implicit decisions" in the implementation of the Fiduccia-Mattheyses heuristic are used to illustrate the difficulty of achieving meaningful experimental evaluation of new algorithmic ideas.
Highly efficient gradient computation for density-constrained analytical placement methods Recent analytical global placers use density constraints to approximate non-overlap constraints and show very successful results. In this paper we unify a wide range of density smoothing techniques that we call global smoothing, and present a highly efficient method to compute the gradient of such smoothed densities used in several well-known analytical placers [3, 5, 7]. Our method reduces the complexity of the gradient computation by a factor of n compared to a naïve method, where n is the number of modules. Furthermore, with this efficient gradient computation we can come up with an efficient nonlinear programming-based placement framework, which supersedes the existing force-directed placement methods [4, 7]. An application of our technique, as the engine of a multilevel placer, achieved 13% and 15% wirelength improvement compared with SCAMPI [13] and mPL6 [3] on the IBM-HB+ benchmark [13].
Lightweight Run-Time Code Generation Run-time code generation is an alternative and complement to compile-time program analysis and optimization. Static analyses are inherently imprecise because most interesting aspects of run-time behavior are uncomputable. By deferring aspects of compilation to run time, more precise information about program behavior can be exploited, leading to greater opportunities for code improvement. The cost of performing optimization at run time is of paramount importance, since it must be repaid by...
Invertible authentication watermark for JPEG images We present two new invertible watermarking methods for authentication of digital images in the JPEG format. While virtually all previous authentication watermarking schemes introduced some small amount of non-invertible distortion in the image, the new methods are invertible in the sense that, if the image is deemed authentic, the distortion due to authentication can be completely removed to obtain the original image data. The first technique is based on lossless compression of biased bit-streams derived from the quantized JPEG coefficients. The second technique modifies the quantization matrix to enable lossless embedding of one bit per DCT coefficient. Both techniques are fast and can be used for general distortion-free (invertible) data embedding. The new methods provide new information assurance tools for integrity protection of sensitive imagery, such as medical images or high-importance military images viewed under non-standard conditions when usual criteria for visibility do not apply
The List Introduction Strategy for the Derivation of Logic Programs We present a new program transformation strategy based on the introduction of lists. This strategy is an extension of the tupling strategy which is based on the introduction of tuples of fixed length. The list introduction strategy overcomes some of the limitations of the tupling strategy and, in particular, it makes it possible to transform general recursive programs into linear recursive ones also in cases when this transformation cannot be performed by the tupling strategy. The linear...
Steganography Based On Grayscale Images Using (5,3) Hamming Code Steganography is a technique to hide secret data in cover images securely. This technique is used for secret communication. However, steganography is not as strong as watermarking against various attacks. The "Hamming+1" scheme is a well-known scheme in steganography. In this paper, we propose a new data hiding scheme that shows better performance compared to "Hamming+1". The proposed scheme conceals 3 bits per 5 pixels of an image. The experimental results show that the proposed scheme achieves a 0.599 bpp embedding payload and a higher visual quality of stego images compared to the previous schemes.
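For readers unfamiliar with Hamming-code matrix embedding, the sketch below illustrates the underlying idea with the standard (7,4) code: three syndrome bits are hidden in seven cover LSBs by flipping at most one bit. This is only an illustration of the general mechanism; the paper's (5,3) construction and its "Hamming+1" comparison are not reproduced here.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column i holds the binary
# representation of i + 1.
H = np.array([[(i >> 2) & 1 for i in range(1, 8)],
              [(i >> 1) & 1 for i in range(1, 8)],
              [i & 1 for i in range(1, 8)]])

def embed(cover_bits, msg_bits):
    """Hide 3 message bits in 7 cover LSBs by flipping at most one bit."""
    x = np.array(cover_bits) % 2
    d = (H.dot(x) + np.array(msg_bits)) % 2   # syndrome XOR message
    pos = 4 * int(d[0]) + 2 * int(d[1]) + int(d[2])
    if pos:                                   # pos == 0: nothing to change
        x[pos - 1] ^= 1
    return x

def extract(stego_bits):
    """The embedded message is simply the syndrome of the stego bits."""
    return H.dot(np.array(stego_bits) % 2) % 2
```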
Scores (score_0 to score_13): 1.033674, 0.033717, 0.016913, 0.013521, 0.005024, 0.003055, 0.001916, 0.000419, 0.000142, 0.00004, 0, 0, 0, 0
Secret-Fragment-Visible Mosaic Image–A New Computer Art and Its Application to Information Hiding A new type of computer art image called secret-fragment-visible mosaic image is proposed, which is created automatically by composing small fragments of a given image to become a target image in a mosaic form, achieving an effect of embedding the given image visibly but secretly in the resulting mosaic image. This effect of information hiding is useful for covert communication or secure keeping of secret images. To create a mosaic image of this type from a given secret color image, the 3-D color space is transformed into a new 1-D colorscale, based on which a new image similarity measure is proposed for selecting from a database a target image that is the most similar to the given secret image. A fast greedy search algorithm is proposed to find a similar tile image in the secret image to fit into each block in the target image. The information of the tile image fitting sequence is embedded into randomly-selected pixels in the created mosaic image by a lossless LSB replacement scheme using a secret key; without the key, the secret image cannot be recovered. The proposed method, originally designed for dealing with color images, is also extended to create grayscale mosaic images which are useful for hiding text-type grayscale document images. An additional measure to enhance the embedded data security is also proposed. Good experimental results show the feasibility of the proposed method.
A sufficient condition based on the Cauchy-Schwarz inequality for efficient template matching The paper proposes a technique aimed at reducing the number of calculations required to carry out an exhaustive template matching process based on the Normalized Cross Correlation (NCC). The technique deploys an effective sufficient condition, relying on the recently introduced concept of bounded partial correlation, that allows rapid elimination of the points that cannot provide a better cross-correlation score with respect to the current best candidate. In this paper we devise a novel sufficient condition based on the Cauchy-Schwarz inequality and compare the experimental results with those attained using the standard NCC-based template matching algorithm and the already known sufficient condition based on the Jensen inequality.
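A rough sketch of the elimination idea follows: after accumulating the correlation over the first rows of a candidate window, the contribution of the remaining rows is bounded via the Cauchy-Schwarz inequality, so a candidate that cannot beat the current best score is abandoned early. The code uses the non-mean-removed form of NCC, recomputes window energies naively rather than with running sums, and is an assumed simplification rather than the authors' algorithm.

```python
import numpy as np

def ncc_with_bound(image, tmpl):
    """Exhaustive template matching with a Cauchy-Schwarz elimination test:
    after a partial row-wise correlation sum, the best possible remaining
    contribution is bounded by sqrt(residual template energy * residual
    window energy), so hopeless candidate positions are dropped early."""
    image = image.astype(float)
    tmpl = tmpl.astype(float)
    th, tw = tmpl.shape
    ih, iw = image.shape
    t_norm = np.sqrt((tmpl ** 2).sum())
    # t_tail[r] = sqrt of the template energy from row r to the last row.
    t_tail = np.sqrt(np.cumsum((tmpl ** 2).sum(axis=1)[::-1])[::-1])
    best_score, best_pos = -1.0, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            win = image[y:y + th, x:x + tw]
            w_norm = np.sqrt((win ** 2).sum())
            if w_norm == 0:
                continue
            partial = 0.0
            for r in range(th):
                partial += float((win[r] * tmpl[r]).sum())
                if r + 1 < th:
                    w_tail = np.sqrt((win[r + 1:] ** 2).sum())
                    # Upper bound on the full correlation at this position.
                    if (partial + t_tail[r + 1] * w_tail) / (t_norm * w_norm) <= best_score:
                        break
            else:
                score = partial / (t_norm * w_norm)
                if score > best_score:
                    best_score, best_pos = score, (y, x)
    return best_pos, best_score
```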
Image Encryption by Novel Cryptosystem Using Matrix Transformation. The Hill cipher is a famous symmetric cryptosystem that has several advantages for data encryption. However, the Hill cipher algorithm cannot encrypt images that contain large areas of a single color; it therefore does not hide all features of the image, which reveals patterns of the plaintext. Moreover, it can be easily broken with a known-plaintext attack, revealing weak security. In this paper, a novel cryptosystem that overcomes these disadvantages is used to encrypt images. The novel cryptosystem uses a randomly generated self-invertible matrix as the encryption key for each block encryption, which also eliminates the computational complexity involved in finding the inverse of the matrix during decryption. The proposed variant yields higher security and significantly superior encryption quality compared to the original one.
Colour image watermarking based on wavelet and QR decomposition. In this work, a new watermarking algorithm for colour images is proposed. The proposed algorithm divides a cover image into its three colour bands of red, green and blue, and the following steps are then applied to each channel separately. First, each colour band is divided into small patches and the entropy of each patch is calculated. A threshold is then derived from the average entropy of all patches, and the subsequent processing is applied to all patches whose entropy is lower than the threshold. A wavelet representation of each such patch is obtained by applying a discrete wavelet transform. Singular value decomposition, orthogonal-triangular (QR) decomposition, and a chirp z-transform are then used to embed a watermark into the cover image. Several signal processing attacks are applied to the watermarked images in order to assess the robustness of the algorithm. The proposed algorithm is compared with one conventional and two state-of-the-art algorithms. Experimental results show the superiority of the proposed algorithm over the other algorithms in the area of image watermarking.
Image camouflage by reversible image transformation. •Reduce the information needed to record block indexes by a non-uniform clustering algorithm, and thus improve the quality of the camouflage image by allowing a small block size. •Modify the previous color transformation to be reversible, so the secret image can be transformed into a freely selected target image, yielding a camouflage image with good visual quality that serves as the disguise of the secret image, while the secret image can be restored without any loss. •Because of the reversibility, we can further realize multi-round transformation, that is, transform the camouflage image into another target image.
Reversible data hiding in encrypted AMBTC images. Signal processing in the encrypted domain has attracted a lot of attention due to the requirement for content security and privacy protection. Reversible data hiding in encrypted images (RDH-EI) is also a hot topic. However, the majority of the published techniques are designed for uncompressed images rather than JPEG-, VQ- and BTC-compressed images. In this paper, for the first time, a RDH-EI method for AMBTC images is proposed. In the proposed method, the higher mean and lower mean of a triple in an AMBTC-compressed image are encrypted by using stream cipher at first. Then, additional data can be embedded into the redundant space by using prediction error histogram modification technique. Experimental results and analysis demonstrate that, with the marked cipher-image, legal receivers are able to extract embedded data exactly by using a data hiding key, decrypt it to recover an image very similar to the original one by using an image encryption key, or extract additional data and recover the original image error free with both keys. The proposed method is applicable to real-time transmission due to the simple implementation of the algorithm and low computational complexity.
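As background for the scheme above, the AMBTC triple it encrypts and embeds into can be sketched in a few lines: each block is represented by a bitmap plus a lower mean and a higher mean. The snippet below shows plain AMBTC coding of one block only; the stream-cipher encryption and prediction-error-histogram embedding of the paper are not modeled, and the function names are illustrative.

```python
import numpy as np

def ambtc_encode(block):
    """AMBTC coding of one block: returns (lower mean, higher mean, bitmap)."""
    m = block.mean()
    bitmap = block >= m
    hi = int(round(block[bitmap].mean()))
    lo = int(round(block[~bitmap].mean())) if (~bitmap).any() else hi
    return lo, hi, bitmap

def ambtc_decode(lo, hi, bitmap):
    """Reconstruct the block from its AMBTC triple."""
    return np.where(bitmap, hi, lo).astype(np.uint8)

block = np.array([[10, 12], [200, 210]], dtype=np.uint8)
lo, hi, bm = ambtc_encode(block)     # lo = 11, hi = 205
```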
Reversible data hiding in medical images with enhanced contrast in texture area. •Explain why the PSNR metric is not consistent with the human visual system. •Embed the message preferentially into texture areas to help accurate diagnosis. •Propose a message sparse representation method to decrease the embedding distortion.
Hiding data in images by simple LSB substitution In this paper, a data hiding scheme by simple LSB substitution is proposed. By applying an optimal pixel adjustment process to the stego-image obtained by the simple LSB substitution method, the image quality of the stego-image can be greatly improved with low extra computational complexity. The worst case mean-square-error between the stego-image and the cover-image is derived. Experimental results show that the stego-image is visually indistinguishable from the original cover-image. The obtained results also show a significant improvement with respect to a previous work.
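A minimal sketch of plain k-bit LSB substitution, the baseline the paper improves on, is given below; the optimal pixel adjustment step that the paper adds to reduce distortion is intentionally left out, and the function names are hypothetical.

```python
import numpy as np

def lsb_embed(cover, bits, k=1):
    """Replace the k lowest bits of successive pixels with message bits."""
    flat = cover.astype(np.uint8).ravel().copy()
    n_pixels = min(len(bits) // k, flat.size)
    for i in range(n_pixels):
        chunk = 0
        for b in bits[i * k:(i + 1) * k]:
            chunk = (chunk << 1) | (b & 1)
        flat[i] = (int(flat[i]) >> k << k) | chunk
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits, k=1):
    """Read the message back from the k lowest bits of successive pixels."""
    bits = []
    for v in stego.ravel():
        for j in range(k - 1, -1, -1):
            if len(bits) < n_bits:
                bits.append((int(v) >> j) & 1)
        if len(bits) >= n_bits:
            return bits
    return bits
```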
Achieving Efficient Cloud Search Services: Multi-Keyword Ranked Search Over Encrypted Cloud Data Supporting Parallel Computing Cloud computing is becoming increasingly popular. Large amounts of data are outsourced to the cloud by data owners motivated by access to large-scale computing resources and economic savings. To protect data privacy, sensitive data should be encrypted by the data owner before outsourcing, which renders the traditional and efficient plain-text keyword search techniques useless. Designing a searchable encryption scheme over encrypted cloud data that is efficient in terms of both accuracy and search time is therefore a very challenging task. In this paper, for the first time, we propose a practical, efficient, and flexible searchable encryption scheme which supports both multi-keyword ranked search and parallel search. To support multi-keyword search and result relevance ranking, we adopt the Vector Space Model (VSM) to build the searchable index and achieve accurate search results. To improve search efficiency, we design a tree-based index structure which supports parallel search to take advantage of the powerful computing capacity and resources of the cloud server. With our designed parallel search algorithm, the search efficiency is well improved. We propose two secure searchable encryption schemes to meet different privacy requirements in two threat models. Extensive experiments on a real-world dataset validate our analysis and show that our proposed solution is very efficient and effective in supporting multi-keyword ranked parallel searches.
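The ranking side of the scheme above rests on the Vector Space Model. The plaintext sketch below builds tf-idf vectors and ranks documents by inner product with the query vector; the secure index encryption, the tree-based index, and the parallel search of the paper are deliberately not modeled, and the helper names are assumptions.

```python
import math
from collections import Counter

def build_index(docs):
    """Plaintext VSM index: tf-idf weight vectors, one per document."""
    n = len(docs)
    df = Counter(w for d in docs for w in set(d.split()))
    idf = {w: math.log(n / c) for w, c in df.items()}
    index = [{w: tf * idf[w] for w, tf in Counter(d.split()).items()} for d in docs]
    return index, idf

def rank(query, index, idf):
    """Rank documents by inner product with the query's idf vector."""
    qv = {w: idf.get(w, 0.0) for w in query.split()}
    scores = [sum(vec.get(w, 0.0) * qw for w, qw in qv.items()) for vec in index]
    return sorted(range(len(index)), key=lambda i: -scores[i])

index, idf = build_index(["reversible data hiding", "cloud keyword search", "ranked keyword search"])
print(rank("keyword search", index, idf))   # the search-related documents rank first
```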
An Efficient MSB Prediction-Based Method for High-Capacity Reversible Data Hiding in Encrypted Images. Reversible data hiding in encrypted images (RDHEI) is an effective technique to embed data in the encrypted domain. An original image is encrypted with a secret key and during or after its transmission, it is possible to embed additional information in the encrypted image, without knowing the encryption key or the original content of the image. During the decoding process, the secret message can b...
A reversible data hiding scheme for VQ indices using histogram shifting of prediction errors Abstract Concomitant with the rapid advancements in information technology, the issue of secure data transmission through the Internet has become increasingly important. Hiding data in images is an important technique in digital media; it facilitates confidential data transfer to receivers and renders the data virtually undetectable by third parties. In this paper, a novel reversible data hiding scheme based on vector quantization (VQ) is proposed. First, codebook sorting is employed to enhance the correlation of neighbor indices and then a prediction error technique is used to generate a high peak histogram. The secret data are then embedded via histogram shifting of prediction errors. Our proposed scheme utilizes only one codebook, unlike other similar methods, and the stego carrier (index table after embedding) can be decompressed without secret data extraction to avoid detection by third parties. Experimental results indicate that the proposed scheme has better visual image quality and greater embedding capacity than recently proposed schemes of a similar nature.
Immune programming This paper describes 'Immune Programming', a paradigm in the field of evolutionary computing taking its inspiration from principles of the vertebrate immune system. These principles are used to derive stack-based computer programs to solve a wide range of problems. An antigen is used to represent the programming problem to be addressed and may be provided in closed form or as an input/output mapping. An antibody set (a repertoire), wherein each member represents a candidate solution, is generated at random from a gene library representing computer instructions. Affinity, the fit of an antibody (a solution candidate) to the antigen (the problem), is analogous to shape-complementarity evident in biological systems. This measure is used to determine both the fate of individual antibodies, and whether or not the algorithm has successfully completed. When a repertoire has not yielded affinity relating algorithm completion, individual antibodies are replaced, cloned, or hypermutated. Replacement occurs according to a replacement probability and yields an entirely new randomly-generated solution candidate when invoked. This randomness (and that of the initial repertoire) provides diversity sufficient to address a wide range of problems. The chance of antibody cloning, wherein a verbatim copy is placed in the new repertoire, occurs proportionally to its affinity and according to a cloning probability. The chances of an effective (high-affinity) antibody being cloned is high, analogous to replication of effective pathogen-fighting antibodies in biological systems. Hypermutation, wherein probability-based replacement of the gene components within an antibody occurs, is also performed on high-affinity entities. However, the extent of mutation is inversely proportional to the antigenic affinity. The effectiveness of this process lies in the supposition that a candidate showing promise is likely similar to the ideal solution. This paper describes the paradigm in detail along with the underlying immune theories and their computational models. A set of sample problems are defined and solved using the algorithm, demonstrating its effectiveness and excellent convergent qualities. Further, the speed of convergence with respect to repertoire size limitations and probability parameters is explored and compared to stack-based genetic programming algorithms.
ISPD 2009 clock network synthesis contest Clock network synthesis (CNS) is one of the most important design challenges in high performance synchronized VLSI designs. However, without appropriate problem examples and real-world objectives, research can become less relevant to industrial design flows. To address the needs of the research community, we organized a clock network synthesis contest and released a benchmark suite. Since the full-specification physical and electrical requirements of a leading-edge processor clock distribution would be cumbersome and impractical for this contest, we keep the problem formulation familiar to academia; that is, to synthesize, buffer, and tune a clock distribution. However, the objective function has been modified to appropriately include the increasing importance of robustness to variation, in addition to the typical performance and power metrics. The paper briefly describes the ISPD clock network synthesis contest and the benchmark suite.
A Reversible Data Hiding Method Using Inverse S-Scan Order and Histogram Shifting Secure delivery of secret data over the Internet is an important research topic in digital communications. Data encryption and steganography are two common techniques used for sending secret data over a public computer network. Steganography utilizes a digital medium as a cover in the delivery of secret data. For medical and military image applications, reversibility is an important requirement in data embedding method design. This paper presents a reversible data hiding method. The proposed method utilizes the inverse S-scan order, pixel segmentation, and histogram shifting strategies to embed secret data into a cover image. The experimental results indicate that the proposed method can further enhance the data embedding capacity as compared with the method by Li, et al.
Scores (score_0 to score_13): 1.025931, 0.024, 0.024, 0.024, 0.018046, 0.01, 0.002671, 0.000835, 0.000177, 0.000048, 0.000009, 0, 0, 0
Perimeter-degree: a priori metric for directly measuring and homogenizing interconnection complexity in multilevel placement In this paper, we describe an accurate metric (perimeter-degree) for measuring interconnection complexity and effective use of it for controlling congestion in a multilevel framework. Perimeter-degree is useful for uniformly spreading interconnection density. In modern designs, interconnects consume significant area and power. By making interconnect spread homogeneous, it is possible to improve routability as well as power dissipation distribution. Most of the existing congestion minimization heuristics are a posteriori. In this work, we extend and complement our previous work [16] on a priori congestion minimization techniques. In [16], we identified and used perimeter-degree for constructing congestion-friendly clusters. This paper extends that work by unveiling perimeter-degree based whitespace allocation techniques. We show why "number of external nets" is not a desirable candidate for identifying potential regions of high interconnect density and provide perimeter-degree as a possible alternative. We also provide empirical evidence for the effectiveness of perimeter-degree in effectively identifying congested regions even before they are formed. By implicitly allocating resources to these potential high interconnect density regions, 19% reduction in congestion was achieved. Traditionally, bin capacity bounds are expressed in units of area. In a true interconnect centric approach we ignore area and instead use interconnect complexity as weights for clusters and capacity bounds for bins. This technique creates a placement with homogeneous interconnect density, but slightly unbalanced utilization. On average, this novel interconnect complexity driven scheme reduces congestion by 26%.
PRICE: power reduction by placement and clock-network co-synthesis for pulsed-latch designs Pulsed latches have emerged as a popular technique to reduce the power consumption and delay for clock networks. However, the current physical synthesis flow for pulsed latches still performs circuit placement and clock-network synthesis separately, which limits achievable power reduction. This paper presents the first work in the literature to perform placement and clock-network co-synthesis for pulsed-latch designs. With the interplay between placement and clock-network synthesis, the clock-network power and timing can be optimized simultaneously. Novel progressive network forces are introduced to globally guide the placer for iterative improvements, while the clock-network synthesizer makes use of updated latch locations to optimize power and timing locally. Experimental results show that our framework can substantially minimize power consumption and improve timing slacks, compared to existing synthesis flows.
Activity and register placement aware gated clock network design Clock gating is one of the most effective techniques to reduce clock network power dissipation. Although it has already been studied considerably, most of the previous works are restricted to either logic level or clock routing stage. Due to the restriction, clock gating often meets the trouble of wirelength overhead and frequent control signal switching, both of which degrade its effectiveness. Furthermore, previous design flows which insert gate logics after placement introduce a lot of overlaps, especially when there are lots of gate logics inserted. In this work, we propose a new design flow for low power gated clock network construction, in order to minimize the clock wirelength and the activity of control signals, and to eliminate the overlaps incurred by the gate logics. Our method begins with a coarse placement followed by soft register clustering. Then, we perform clock tree topology construction and zero skew clock routing to further reduce the power and the clock skew. Last, the gated clock network is fed back to the placer for incremental placement. Experimental results on ISCAS89 benchmarks demonstrate that our method outperforms previous algorithm of activity aware register placement in clock wirelength and clock power reduction with signal nets wirelength and signal nets power increase within 5% and 3%, respectively
Fast legalization for standard cell placement with simultaneous wirelength and displacement minimization Legalization is one of the most critical steps in modern placement designs. Since several objectives like wirelength, routability, or temperature are already optimized in global placement stage, the objective of legalization is not only to align the cells overlap-free to the rows, but also to preserve the solution of global placement, i.e., the displacement of cells needs to be minimized. However, minimizing displacement only is not enough for current timing-driven SoC designs. Blind displacement minimization may increase the half-perimeter wirelength (HPWL) of nets significantly that degrades the chip performance. In this paper, we propose a fast legalization algorithm for standard cell placement with simultaneous wirelength and displacement minimization. The main contributions of our work are: (1) a fast row selection technique by using k-medoid clustering approach; (2) an exact linear wirelength model to minimize both wirelength and total displacement; (3) a constant time approach to determine the median in trial placement stage. Compared with the state-of-the-art legalization algorithms, experimental results show that our legalizer acquires much better achievement in terms of HPWL, total and maximum displacements, and running time on legalized NTUplace3 global placement results on both ISPD 2005 and 2006 placement contest benchmarks.
A rigorous framework for convergent net weighting schemes in timing-driven placement We present a rigorous framework that defines a class of net weighting schemes in which unconstrained minimization is successively performed on a weighted objective. We show that, provided certain goals are met in the unconstrained minimization, these net weighting schemes are guaranteed to converge to the optimal solution of the original timing-constrained placement problem. These are the first results that provide conditions under which a net weighting scheme will converge to a timing optimal placement. We then identify several weighting schemes that satisfy the given convergence properties and implement them, with promising results: a modification of the weighting scheme given in [11] results in consistently improved delay over the original, 4% on average, without increase in computation time.
Metal-density driven placement for cmp variation and routability In this paper, we propose the first metal-density driven placement algorithm to reduce CMP variation and achieve higher routability. Based on an analytical placement framework, we use a probabilistic routing model to estimate the wire density during the placement. Then, the metal density and thickness are predicted by a predictive CMP model. The spreading forces are adjusted according to the metal density map to reduce the metal density variation. Experimental results show that our method reduces the topography variation by 12% and the number of dummy fills by 6% and achieves much better routability, compared with wirelength-driven placement
Hierarchical whitespace allocation in top-down placement Increased transistor density in modern commercial ICs typically originates in new manufacturing and defect prevention technologies. Additionally, better utilization of such low-level transistor density may result from improved software that makes fewer assumptions about physical layout in order to reliably automate the design process. In particular, recent layouts tend to have large amounts of whitespace, which is not handled properly by older tools. We observe that a major computational difficulty arises in partitioning-driven top-down placement when regions of a chip lack whitespace. This tightens balance constraints for min-cut partitioning and hampers move-based local-search heuristics such as Fiduccia-Mattheyses. However, the local lack of whitespace is often caused by very unbalanced distribution of whitespace during previous partitioning, and this concern is emphasized in chips with large overall whitespace. This paper focuses on accurate computation of tolerances to ensure smooth operation of common move-based iterative partitioners, while avoiding cell overlaps. We propose a mathematical model of hierarchical whitespace allocation in placement, which results in a simple computation of partitioning tolerance purely from relative whitespace in the block and the number of rows in the block. Partitioning tolerance slowly increases as the placer descends to lower levels, and relative whitespace in all blocks is limited from below (unless partitioners return "illegal" solutions), thus preventing cell overlaps. This facilitates good use of whitespace when it is scarce and prevents very dense regions when large amounts of whitespace are available. Our approach improves the use of the available whitespace during global placement, thus leading to smaller whitespace requirements. Existing techniques, particularly those based on simulated annealing, can be applied after global placement to bias whitespace with respect to particular concerns, such as routing congestion, heat dissipation, crosstalk noise and DSM yield improvement.
ePlace: Electrostatics Based Placement Using Nesterov's Method ePlace is a generalized analytic algorithm to handle large-scale standard-cell and mixed-size placement. We use a novel density function based on electrostatics to remove overlap and Nesterov's method to minimize the nonlinear cost. Steplength is estimated as the inverse of Lipschitz constant, which is determined by our dynamic prediction and backtracking method. An approximated preconditioner is proposed to resolve the difference between large macros and standard cells, while an annealing engine is devised to handle macro legalization followed by placement of standard cells. The above innovations are integrated into our placement prototype ePlace, which outperforms the leading-edge placers on respective standard-cell and mixed-size benchmark suites. Specifically, ePlace produces 2.83%, 4.59% and 7.13% shorter wirelength while runs 3.05×, 2.84× and 1.05× faster than BonnPlace, MAPLE and NTUplace3-unified in average of ISPD 2005, ISPD 2006 and MMS circuits, respectively.
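The solver at the heart of the abstract above is Nesterov's accelerated gradient method with the step length taken as the inverse of a Lipschitz constant. The generic sketch below shows only that update rule on an arbitrary smooth objective; the electrostatic density function, preconditioning, and backtracking prediction of ePlace are not reproduced, and the function signature is hypothetical.

```python
import numpy as np

def nesterov_minimize(grad, x0, lipschitz, steps=200):
    """Generic Nesterov accelerated gradient descent with the step length
    taken as the inverse of a (given) Lipschitz constant."""
    x = np.asarray(x0, dtype=float)
    y, t = x, 1.0
    step = 1.0 / lipschitz
    for _ in range(steps):
        x_new = y - step * grad(y)                     # gradient step at the look-ahead point
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)      # momentum extrapolation
        x, t = x_new, t_new
    return x

# Example on a convex quadratic 0.5 * x^T A x, whose Lipschitz constant is ||A||.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
x_min = nesterov_minimize(lambda v: A.dot(v), np.array([5.0, -4.0]), lipschitz=3.0)
```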
FLUTE: fast lookup table based wirelength estimation technique Wirelength estimation is an important tool to guide the design optimization process in early design stages. In this paper, we present a wirelength estimation technique called FLUTE. Our technique is based on a pre-computed lookup table to make wirelength estimation very fast and very accurate for low-degree nets. We show experimentally that for FLUTE, RMST, and HPWL, the average errors in wirelength are 0.72%, 4.23%, and -8.71%, respectively, and the normalized runtimes are 1, 1.24, and 0.16, respectively.
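Since HPWL is the baseline metric quoted above, a one-function sketch of it may be useful; the representation of pins as (x, y) tuples is an assumption.

```python
def hpwl(pins):
    """Half-perimeter wirelength of one net, given its pins as (x, y) tuples."""
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

assert hpwl([(0, 0), (3, 1), (1, 4)]) == 3 + 4
```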
Congestion prediction in early stages Routability optimization has become a major concern in the physical design cycle of VLSI circuits. Due to the recent advances in VLSI technology, interconnect has become a dominant factor of the overall performance of a circuit. In order to optimize interconnect cost, we need a good congestion estimation method to predict routability in the early stages of the design cycle. Many congestion models have been proposed but there's still a lot of room for improvement. Some existing models [6] are dependent on parameters that are related to the actual congestion of the circuits. Besides, routers will perform rip-up and re-route operations to prevent overflow but most models do not consider this case. The outcome is that the existing models will usually under-estimate the routability. In this paper, we propose a new congestion model to solve the above problems. The estimation process is divided into three steps: preliminary estimation, detailed estimation and congestion redistribution. We have compared our new model and some existing models with the actual congestion measures obtained by global routing some placement results with a publicly available maze router [2]. Results show that our model has significant improvement in prediction accuracy over the existing models.
The temporal logic of programs A unified approach to program verification is suggested, which applies to both sequential and parallel programs. The main proof method suggested is that of temporal reasoning in which the time dependence of events is the basic concept. Two formal systems are presented for providing a basis for temporal reasoning. One forms a formalization of the method of intermittent assertions, while the other is an adaptation of the tense logic system Kb, and is particularly suitable for reasoning about concurrent programs.
Back to direct style II: first-class continuations We continue to investigate the direct-style transformation by extending it to programs requiring call-with-current-continuation (a.k.a. call/cc). The direct style (DS) and the continuation-passing style (CPS) transformations form a Galois connection. This pair of functions has a place in the programmer's toolbox - yet we are not aware of the existence of any other DS transformer. Starting from our DS transformer towards pure, call-by-value functional terms (Scheme), we extend it with a counting analysis to detect non-canonical occurrences of a continuation. The declaration of such a continuation is translated into a call/cc and its application into the application of the corresponding first-class continuation. We also present staged versions of the DS and of the CPS transformations, where administrative reductions are separated from the actual translation, and where the actual translations are carried out by local, structure-preserving rewriting rules. These staged transformations are used to prove the Galois connection. Together, the CPS and the DS transformations enlarge the class of programs that can be manipulated on a semantic basis. We illustrate this point with partial evaluation, by specializing a Scheme program with respect to a static part of its input. The program uses coroutines. This illustration achieves a first: a static coroutine is executed statically and its computational content is inlined in the residual program.
Tradeoffs between stretch factor and load balancing ratio in routing on growth restricted graphs A graph has growth rate k if the number of nodes in any subgraph with diameter r is bounded by O(r^k). The communication graphs of wireless networks and peer-to-peer networks often have small growth rate. In this paper we study the tradeoff between two quality measures for routing in growth restricted graphs. The two measures we consider are the stretch factor, which measures the lengths of the routing paths, and the load balancing ratio, which measures how evenly the traffic is distributed. We show that if the routing algorithm is required to use paths with stretch factor c, then its load balancing ratio is bounded by O((n/c)^(1-1/k)), where k is the graph's growth rate. We illustrate our results by focusing on the unit disk graph for modeling wireless networks, in which two nodes have direct communication if their distance is below a certain threshold. We show that if the maximum density of the nodes is bounded by ρ, there exists a routing scheme such that the stretch factor of the routing paths is at most c, and the maximum load on the nodes is at most O(min(√ρn/c, n/c)) times the optimum. In addition, the bound on the load balancing ratio is tight in the worst case. As a special case, when the density is bounded by a constant, shortest path routing has a load balancing ratio of O(√n). The result extends to k-dimensional unit ball graphs and graphs with growth rate k. We also discuss algorithmic issues for load balanced short path routing and for load balanced routing in spanner graphs.
GIS Spatial Data Updating Algorithm Based on Digital Watermarking Technology
score_0–score_13: 1.028528, 0.037113, 0.025081, 0.024748, 0.01871, 0.012717, 0.006637, 0.003566, 0.001338, 0.000121, 0, 0, 0, 0
Lossless Data Hiding in the Spatial Domain for High Quality Images A lossless data embedding method that inserts data into images in the spatial domain is proposed in this paper. Although a lossless data embedding method initially distorts the original image to embed data into it, it can later both extract the hidden data and restore the original image from the image in which the data are embedded. To guarantee the losslessness of data embedding, all pixel values after embedding must remain within the dynamic range of pixels. Because the proposed method modifies some pixels to embed data and leaves the other pixels at their original values in the spatial domain, it can easily keep all pixel values after embedding within the dynamic range. Thus, both the capacity and the image quality of the generated images are improved simultaneously. Moreover, the proposed method uses only one parameter, based on the statistics of pixel blocks, to embed and extract data. Thanks to this single parameter, the method requires neither reference images nor memorization of the positions of the pixels in which data are hidden in order to extract the embedded data. In addition, the proposed method can control the capacity for hidden data and the quality of the images conveying hidden data by adjusting this one parameter. Simulation results show the effectiveness of the proposed method; in particular, it offers images with superior image quality compared to conventional methods.
A frequency domain approach to registration of aliased images with application to super-resolution Super-resolution algorithms reconstruct a high-resolution image from a set of low-resolution images of a scene. Precise alignment of the input images is an essential part of such algorithms. If the low-resolution images are undersampled and have aliasing artifacts, the performance of standard registration algorithms decreases. We propose a frequency domain technique to precisely register a set of aliased images, based on their low-frequency, aliasing-free part. A high-resolution image is then reconstructed using cubic interpolation. Our algorithm is compared to other algorithms in simulations and practical experiments using real aliased images. Both show very good visual results and prove the attractivity of our approach in the case of aliased input images. A possible application is to digital cameras where a set of rapidly acquired images can be used to recover a higher-resolution final image.
Robust fusion of irregularly sampled data using adaptive normalized convolution We present a novel algorithm for image fusion from irregularly sampled data. The method is based on the framework of normalized convolution (NC), in which the local signal is approximated through a projection onto a subspace. The use of polynomial basis functions in this paper makes NC equivalent to a local Taylor series expansion. Unlike the traditional framework, however, the window function of adaptive NC is adapted to local linear structures. This leads to more samples of the same modality being gathered for the analysis, which in turn improves signal-to-noise ratio and reduces diffusion across discontinuities. A robust signal certainty is also adapted to the sample intensities to minimize the influence of outliers. Excellent fusion capability of adaptive NC is demonstrated through an application of super-resolution image reconstruction.
A comparative analysis of image fusion methods There are many image fusion methods that can be used to produce high-resolution multispectral images from a high-resolution panchromatic image and low-resolution multispectral images. Starting from the physical principle of image formation, this paper presents a comprehensive framework, the general image fusion (GIF) method, which makes it possible to categorize, compare, and evaluate the existing...
Multisensor Remote Sensing Image Fusion Using Stationary Wavelet Transform: Effects Of Basis And Decomposition Level Stationary wavelet transform is an efficient algorithm for remote sensing image fusion. In this paper, we investigate the effects of orthogonal/biorthogonal filters and decomposition depth on using stationary wavelet analysis for fusion. Spectral discrepancy and spatial distortion are used as quality measures. Empirical results lead to some recommendations on the wavelet filter parameters for use in remote sensing image fusion applications.
An Efficient Block Encryption Cipher Based on Chaotic Maps for Secure Multimedia Applications This paper presents an efficient chaotic-based block encryption cipher (CBBEC), which is a symmetric encryption algorithm designed with the potential of increasing security and improving performance. It is a hybrid mixture from chaos theory and cryptography. Its salient features include the use of eight working registers providing capability to handle 256-bits plaintext/ciphertext block sizes, employing a chaotic logistic map in key scheduling to generate session key used in encryption/decryption, the essential heavy use of data-dependent rotations and the inclusion of integer multiplication as an additional primitive operation. The use of multiplication with eight working registers greatly increases the diffusion achieved per round, allowing for greater security, fewer rounds and increased throughput. Comparative performance evaluation of the proposed chaotic-based block encryption cipher CBBEC with Rijndael, RC6 and RC5 is addressed. Experimental results show that the proposed CBBEC is a fast block cipher, showing its great potential in real-time encryption applications.
Application of artificial neural network in detection of DOS attacks A single attack can cause a heavy loss in computer and network systems, so its prevention is essential. Precise detection is very important to prevent such losses. Such detection is a pivotal part of security tools such as intrusion detection systems, intrusion prevention systems, and firewalls. Therefore, an approach is provided in this paper to analyze denial-of-service attacks by using a supervised neural network. The methodology uses sampled data from the Kddcup99 dataset, an attack database that is a standard for judging attack detection tools. The system uses a multilayer perceptron architecture and resilient backpropagation for its training and testing. The developed system is then applied to denial-of-service attacks. Moreover, its performance is compared to other neural network approaches, showing higher accuracy and precision in detection rate.
On a Pattern-Oriented Model for Intrusion Detection Operational security problems, which are often the result of access authorization misuse, can lead to intrusion in secure computer systems. We motivate the need for pattern-oriented intrusion detection, and present a model that tracks both data and privilege flows within secure systems to detect context-dependent intrusions caused by operational security problems. The model allows the uniform representation of various types of intrusion patterns, such as those caused by unintended use of foreign programs and input data, imprudent choice of default privileges, and use of weak protection mechanisms. As with all pattern-oriented models, this model cannot be used to detect new, unanticipated intrusion patterns that could be detected by statistical models. For this reason, we expect that this model will complement, not replace, statistical models for intrusion detection.
Adaptive reversible data hiding based on a local smoothness estimator A novel reversible watermarking (RW) scheme based on a local smoothness estimator and a multi-step embedding strategy is proposed in this paper. All the pixels are divided into four equal parts. Correspondingly, the watermark embedding process is separated into four independent steps. Thus each step is performed to embed watermark information into its corresponding image part. In each step, for each to-be-embedded pixel, a local smoothness estimator defined as the variance of its total neighbors is presented to estimate its local smoothness. An obvious advantage of introducing this estimator is that it can accurately determine which pixels lie in smooth regions. In fact, accurate determination means a decrease in embedding distortion. At a low embedding rate (ER), modifications induced by difference expansion (DE) are made only to those pixels located in smooth regions. Hence, the proposed method can obtain high embedding capacity while maintaining good visual quality. As the ER is gradually increased, adaptive embedding is employed. In adaptive embedding, for each to-be-embedded pixel, 1 or 2 bits are adaptively embedded according to the strength of the relationship among all the pixels surrounding it. The experimental results demonstrate that the proposed method is effective.
Fragile image watermarking using a gradient image for improved localization and security A fragile watermarking algorithm for image authentication and tamper detection is proposed. It uses a gradient image and its structure to achieve localization and security requirements. It provides superior localization with greater security against many attacks including vector quantization attack.
On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit We develop improved algorithms to construct good low-density parity-check codes that approach the Shannon limit very closely. For rate 1/2, the best code found has a threshold within 0.0045 dB of the Shannon limit of the binary-input additive white Gaussian noise channel. Simulation results with a somewhat simpler code show that we can achieve within 0.04 dB of the Shannon limit at a bit error rate of 10^-6 using a block length of 10^7.
Image restoration of compressed image using classified vector quantization To reduce communication bandwidth or storage space, image compression is needed. However, the subjective quality of compressed images may be unacceptable, and improving the quality of compressed images may be desirable. This paper extends and modifies classified vector quantization (CVQ) to improve the quality of compressed images. The process consists of two phases: the encoding phase and the decoding phase. The encoding procedure needs a codebook for the encoder, which transforms a compressed image into a set of codeword indices. The decoding phase requires a different codebook for the decoder, which enhances a compressed image from a set of codeword indices. Using CVQ to improve a compressed image's quality differs from the existing algorithm, which cannot reconstruct the high-frequency components of compressed images. The experimental results show that the image quality is improved dramatically. For images in the training set, the improvement in PSNR is about 3 dB. For images outside the training set, the improvement in PSNR is about 0.57 dB, which is comparable to the existing method.
On functional languages and parallel computers This paper is a discussion of functional languages and parallel computers. It is aimed at an audience that has a background in computer architecture, but not necessarily in the area of functional languages. It therefore constitutes an introductory survey of functional languages, on the one hand, and a non-introductory discussion of parallel computers, on the other. The aim is to highlight some important issues regarding the use and adequacy of these languages and also the design of parallel computers to interpret them. The concluding thesis put forth is twofold: one, that to widen their scope of applicability, functional languages need to include more features of nondeterminism and may need to be integrated with features from conventional languages; two, that the right sort of architectures for such extended languages may well be less-specialised ones with a von Neumann flavour.
Adaptive predictor combination for lossless image coding This paper is concerned with adaptive prediction for lossless image coding. A new predictor which is an adaptive combination of a set of fixed predictors with a transform domain LMS based predictor is proposed. When a context-based arithmetic encoder is used to encode the prediction error, the compression performance of the proposed algorithm is better than that of the state-of-the-art algorithms and is close to TMW at a fraction of its computational complexity.
score_0–score_13: 1.009804, 0.014286, 0.014286, 0.014286, 0.014286, 0.014286, 0.009762, 0.007366, 0.000555, 0.000095, 0.000024, 0.000001, 0, 0
Feature mining and pattern classification for steganalysis of LSB matching steganography in grayscale images In this paper, we present a scheme based on feature mining and pattern classification to detect LSB matching steganography in grayscale images, which is a very challenging problem in steganalysis. Five types of features are proposed. In comparison with other well-known feature sets, the set of proposed features performs the best. We compare different learning classifiers and deal with the issue of feature selection that is rarely mentioned in steganalysis. In our experiments, the combination of a dynamic evolving neural fuzzy inference system (DENFIS) with a feature selection of support vector machine recursive feature elimination (SVMRFE) achieves the best detection performance. Results also show that image complexity is an important reference to evaluation of steganalysis performance.
Immune programming This paper describes 'Immune Programming', a paradigm in the field of evolutionary computing taking its inspiration from principles of the vertebrate immune system. These principles are used to derive stack-based computer programs to solve a wide range of problems. An antigen is used to represent the programming problem to be addressed and may be provided in closed form or as an input/output mapping. An antibody set (a repertoire), wherein each member represents a candidate solution, is generated at random from a gene library representing computer instructions. Affinity, the fit of an antibody (a solution candidate) to the antigen (the problem), is analogous to shape-complementarity evident in biological systems. This measure is used to determine both the fate of individual antibodies, and whether or not the algorithm has successfully completed. When a repertoire has not yielded affinity relating algorithm completion, individual antibodies are replaced, cloned, or hypermutated. Replacement occurs according to a replacement probability and yields an entirely new randomly-generated solution candidate when invoked. This randomness (and that of the initial repertoire) provides diversity sufficient to address a wide range of problems. The chance of antibody cloning, wherein a verbatim copy is placed in the new repertoire, occurs proportionally to its affinity and according to a cloning probability. The chances of an effective (high-affinity) antibody being cloned is high, analogous to replication of effective pathogen-fighting antibodies in biological systems. Hypermutation, wherein probability-based replacement of the gene components within an antibody occurs, is also performed on high-affinity entities. However, the extent of mutation is inversely proportional to the antigenic affinity. The effectiveness of this process lies in the supposition that a candidate showing promise is likely similar to the ideal solution. This paper describes the paradigm in detail along with the underlying immune theories and their computational models. A set of sample problems are defined and solved using the algorithm, demonstrating its effectiveness and excellent convergent qualities. Further, the speed of convergence with respect to repertoire size limitations and probability parameters is explored and compared to stack-based genetic programming algorithms.
Maximum likelihood estimation of length of secret message embedded using ±k steganography in spatial domain In this paper, we propose a new method for estimating the number of embedding changes for non-adaptive ±K embedding in images. The method uses a high-pass FIR filter and then recovers an approximate message length using a Maximum Likelihood Estimator on those stego image segments where the filtered samples can be modeled using a stationary Generalized Gaussian random process. It is shown that for images with a low noise level, such as decompressed JPEG images, this method can accurately estimate the number of embedding changes even for K = 1 and for embedding rates as low as 0.2 bits per pixel. Although for raw, never compressed images the message length estimate is less accurate, when used as a scalar parameter for a classifier detecting the presence of ±K steganography, the proposed method gave us relatively reliable results for embedding rates as low as 0.5 bits per pixel.
Image steganalysis based on moments of characteristic functions using wavelet decomposition, prediction-error image, and neural network In this paper, a general blind steganalysis system is proposed, in which the statistical moments of characteristic functions of the prediction-error image, the test image, and their wavelet subbands are selected as features. Artificial neural network is utilized as the classifier. The performance of the proposed steganalysis system is significantly superior to the prior arts.
Image Complexity and Feature Extraction for Steganalysis of LSB Matching Steganography In this paper, we present a scheme for steganalysis of LSB matching steganography based on feature extraction and pattern recognition techniques. Shape parameter of Generalized Gaussian Distribution (GGD) in the wavelet domain is introduced to measure image complexity. Several statistical pattern recognition algorithms are applied to train and classify the feature sets. Comparison of our method and others indicates our method is highly competitive. It is highly efficient for color image steganalysis. It is also efficient for grayscale steganalysis in the low image complexity domain.
Exploring Steganography: Seeing the Unseen Steganography is an ancient art of hiding information. Digital technology gives us new ways to apply steganographic techniques, including one of the most intriguing—that of hiding information in digital images.
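The most familiar image-hiding technique in this vein is least-significant-bit (LSB) substitution; a minimal Python sketch, not taken from the article and ignoring capacity and detectability concerns:

def lsb_embed(pixels, bits):
    # Hide a bit list in the least significant bits of a list of 8-bit pixels.
    stego = list(pixels)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b  # clear the LSB, then set it to the message bit
    return stego

def lsb_extract(stego, n_bits):
    # Recover the first n_bits hidden by lsb_embed.
    return [p & 1 for p in stego[:n_bits]]

cover = [120, 121, 122, 123]
print(lsb_extract(lsb_embed(cover, [1, 0, 1, 1]), 4))  # -> [1, 0, 1, 1]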
A Novel Difference Expansion Transform for Reversible Data Embedding Reversible data embedding theory has marked a new epoch for data hiding and information security. In a reversible scheme, both the original data and the embedded data can be completely restored. The difference expansion transform is a remarkable breakthrough in reversible data-hiding schemes. The difference expansion method achieves high embedding capacity and keeps distortion low. This paper shows that the difference expansion method with the simplified location map and new expandability can achieve more embedding capacity while keeping the distortion at the same level as the original expansion method. Performance of the proposed scheme in this paper is shown to be better than the original difference expansion scheme by Tian and its improved version by Kamstra and Heijmans. This improvement is made possible by exploiting the quasi-Laplace distribution of the difference values.
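For context, the basic difference expansion transform due to Tian, on which the improvements above build, expands the difference of a pixel pair to carry one bit while keeping the integer average fixed; a minimal Python sketch that omits the expandability check and location map:

def de_embed(x, y, bit):
    # Embed one bit into the pixel pair (x, y) by difference expansion.
    l = (x + y) // 2        # integer average, preserved by the transform
    h = x - y               # difference, expanded to carry the bit
    h2 = 2 * h + bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    # Recover the bit and the original pixel pair from an embedded pair.
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit, h = h2 & 1, h2 // 2
    return bit, (l + (h + 1) // 2, l - h // 2)

print(de_embed(206, 201, 1))               # -> (209, 198)
print(de_extract(*de_embed(206, 201, 1)))  # -> (1, (206, 201))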
An adaptive prediction-error expansion oriented reversible information hiding scheme Data hiding in digital images can be used in secure communication, copyright protection, etc. For some important images, such as medical and military images, the original images must be recovered after extracting the embedded data, because distortions are unacceptable for these kinds of images. In this paper, we propose a reversible data hiding method based on prediction-error expansion. Each pixel of the cover image, excluding the first row and the first column, is predicted by its top and left neighboring pixels in raster-scanning order. The relationship between the prediction error and a pre-determined threshold decides whether the current pixel is embeddable or not. Since the proposed prediction process produces small prediction errors, our method can achieve a high embedding rate and good visual quality of the stego image through the expansion of the prediction error. During extraction and recovery, the same prediction process is conducted, and then the embedded secret data and the cover image can be recovered correctly. A histogram squeezing technique is utilized to prevent underflow and overflow problems. Experimental results show that the proposed method provides better performance than some other methods.
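A minimal Python sketch of this kind of prediction-error expansion embedder, using the mean of the top and left neighbors as the predictor and a threshold T to decide embeddability (illustrative only, not the paper's exact scheme; a real implementation also records the payload size and squeezes the histogram to avoid overflow/underflow):

def pee_embed(img, bits, T=2):
    # img: list of rows of ints. Pixels outside the first row and column are
    # predicted from their top and left neighbors; prediction errors within
    # [-T, T] are expanded to carry one bit each, larger errors are shifted.
    out = [row[:] for row in img]
    k = 0
    for i in range(1, len(out)):
        for j in range(1, len(out[0])):
            if k >= len(bits):
                return out
            pred = (out[i - 1][j] + out[i][j - 1]) // 2
            e = out[i][j] - pred
            if -T <= e <= T:                 # embeddable: expand the error
                out[i][j] = pred + 2 * e + bits[k]
                k += 1
            elif e > T:                      # shift large positive errors away
                out[i][j] += T + 1
            else:                            # shift large negative errors away
                out[i][j] -= T
    return out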
Procedural content generation for games: A survey Hundreds of millions of people play computer games every day. For them, game content—from 3D objects to abstract puzzles—plays a major entertainment role. Manual labor has so far ensured that the quality and quantity of game content matched the demands of the playing community, but is facing new scalability challenges due to the exponential growth over the last decade of both the gamer population and the production costs. Procedural Content Generation for Games (PCG-G) may address these challenges by automating, or aiding in, game content generation. PCG-G is difficult, since the generator has to create the content, satisfy constraints imposed by the artist, and return interesting instances for gamers. Despite a large body of research focusing on PCG-G, particularly over the past decade, ours is the first comprehensive survey of the field of PCG-G. We first introduce a comprehensive, six-layered taxonomy of game content: bits, space, systems, scenarios, design, and derived. Second, we survey the methods used across the whole field of PCG-G from a large research body. Third, we map PCG-G methods to game content layers; it turns out that many of the methods used to generate game content from one layer can be used to generate content from another. We also survey the use of methods in practice, that is, in commercial or prototype games. Fourth and last, we discuss several directions for future research in PCG-G, which we believe deserve close attention in the near future.
A Location Map-Free Reversible Data Hiding Method For Specific Area Embedding This paper proposes a reversible data hiding method that embeds data into specific areas such as image edges. Though a reversible data hiding method distorts an original image to hide data, it not only extracts the hidden data but also restores the original image from the distorted image. The proposed method memorizes only one parameter for data extraction and image restoration, and no location map recording the watermarked positions is required. By a simple modification to the conventional method that has the above-mentioned features, the proposed method gains a new function: hiding data in only specific areas.
Genetic algorithm based methodology for breaking the steganalytic systems. Steganalytic techniques are used to detect whether an image contains a hidden message. By analyzing various image features between stego-images (the images containing hidden messages) and cover-images (the images containing no hidden messages), a steganalytic system is able to detect stego-images. In this paper, we present a new concept of developing a robust steganographic system by artificially counterfeiting statistic features instead of the traditional strategy by avoiding the change of statistic features. We apply genetic algorithm based methodology by adjusting gray values of a cover-image while creating the desired statistic features to generate the stego-images that can break the inspection of steganalytic systems. Experimental results show that our algorithm can not only pass the detection of current steganalytic systems, but also increase the capacity of the embedded message and enhance the peak signal-to-noise ratio of stego-images.
New Framework for Reversible Data Hiding in Encrypted Domain. In the past more than one decade, hundreds of reversible data hiding (RDH) algorithms have been reported. Via exploring the correlation between the neighboring pixels (or coefficients), extra information can be embedded into the host image reversibly. However, these RDH algorithms cannot be accomplished in encrypted domain directly, since the correlation between the neighboring pixels will disappear after encryption. In order to accomplish RDH in encrypted domain, specific RDH schemes have been designed according to the encryption algorithm utilized. In this paper, we propose a new simple yet effective framework for RDH in encrypted domain. In the proposed framework, the pixels in a plain image are first divided into sub-blocks with the size of m x n. Then, with an encryption key, a key stream (a stream of random or pseudorandom bits/bytes that are combined with a plaintext message to produce the encrypted message) is generated, and the pixels in the same sub-block are encrypted with the same key stream byte. After the stream encryption, the encrypted m x n sub-blocks are randomly permutated with a permutation key. Since the correlation between the neighboring pixels in each sub-block can be well preserved in the encrypted domain, most of those previously proposed RDH schemes can be applied to the encrypted image directly. One of the main merits of the proposed framework is that the RDH scheme is independent of the image encryption algorithm. That is, the server manager (or channel administrator) does not need to design a new RDH scheme according to the encryption algorithm that has been conducted by the content owner; instead, he/she can accomplish the data hiding by applying the numerous RDH algorithms previously proposed to the encrypted domain directly.
Multiple controlled mobile elements (data mules) for data collection in sensor networks Recent research has shown that using a mobile element to collect and carry data mechanically from a sensor network has many advantages over static multihop routing. We also have an implementation that employs a single mobile element. But network scalability and traffic may make a single mobile element insufficient. In this paper we investigate the use of multiple mobile elements. In particular, we present a load balancing algorithm which tries to balance the number of sensor nodes each mobile element services. We show by simulation the benefits of load balancing.
A new color QR code forward compatible with the standard QR code decoder This paper proposes a new color QR code that is forward compatible with standard QR code decoders, increasing the conveyable capacity of encoded information. The proposed method allocates three standard bicolor QR codes to the color channels of the YCbCr color space so that the QR code in the Y channel can be decoded by a standard QR decoder. In addition, a proprietary decoder further decodes two more QR codes in the Cb and Cr channels. The proposed method is based on the standard bicolor QR code in its encoding and decoding processes, whereas conventional methods for increasing the conveyable capacity require complex proprietary codecs or different technology. Experimental results show the effectiveness of the proposed method.
score_0–score_13: 1.045369, 0.028918, 0.021576, 0.019084, 0.009663, 0.002524, 0.000222, 0.000097, 0.000055, 0.000033, 0.000015, 0.000001, 0, 0
A Fast Image Encoding Algorithm Based On The Pyramid Structure Of Codewords This article presents a very simple and efficient algorithm for codeword search in vector quantization encoding. The algorithm uses a 2-pixel-merging norm pyramid structure to speed up the closest-codeword search process. The authors first derive a condition to eliminate unnecessary matching operations from the search procedure. Then, based on this elimination condition, a fast search algorithm is suggested. Simulation results show that the proposed search algorithm reduces the encoding complexity while maintaining the same encoding quality as that of the full search algorithm. It is also found that the proposed algorithm outperforms the existing search algorithms.
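The flavor of such elimination conditions can be illustrated at the coarsest pyramid level, the vector mean: by the Cauchy-Schwarz inequality, sqrt(N)*|mean(x) - mean(c)| is a lower bound on the Euclidean distance between a vector x and a codeword c, so any codeword whose bound already exceeds the current best distance can be rejected without computing the full distance. A minimal Python sketch of this idea (a generic mean-based test, not the paper's multi-level pyramid):

import math

def nearest_codeword(x, codebook):
    # Full-search VQ with a mean-based elimination test.
    # For vectors of dimension N, sqrt(N) * |mean(x) - mean(c)| <= ||x - c||,
    # so a codeword failing this cheap test cannot beat the best match so far.
    n = len(x)
    mx = sum(x) / n
    best_i, best_d = -1, float("inf")
    for i, c in enumerate(codebook):
        bound = math.sqrt(n) * abs(mx - sum(c) / n)
        if bound >= best_d:          # cannot improve on the current best distance
            continue
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, c)))
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d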
Efficient binary conversion for Paillier encrypted values We consider the framework of secure n-party computation based on threshold homomorphic cryptosystems as put forth by Cramer, Damgård, and Nielsen at Eurocrypt 2001. When used with Paillier's cryptosystem, this framework allows for efficient secure evaluation of any arithmetic circuit defined over ℤ_N, where N is the RSA modulus of the underlying Paillier cryptosystem. In this paper, we extend the scope of the framework by considering the problem of converting a given Paillier encryption of a value x ∈ ℤ_N into Paillier encryptions of the bits of x. We present solutions for the general case in which x can be any integer in {0,1,...,N – 1}, and for the restricted case in which x < N/(n·2^κ) for a security parameter κ. In the latter case, we show how to extract the ℓ least significant bits of x (in encrypted form) in time proportional to ℓ, typically saving a factor of (log_2 N)/ℓ compared to the general case. Thus, intermediate computations that rely in an essential way on the binary representations of their input values can be handled without enforcing that the entire computation is done bitwise. Typical examples involve the relational operators such as
An E-learning System Based on Secure Data Storage Services in Cloud Computing Affordable computers, web resources, and educational content make it possible to deliver education on demand on cloud infrastructure. Therefore, there is a need to redesign the educational system to better meet these needs. The appearance of cloud-based services supports the creation of latest-generation e-learning systems that store multimedia data in the cloud; this draws attention from academia and the research community, which may thereby gain access to high-quality resources. Even though the merits of cloud services are attractive, the physical possession of users' data is under security risk with respect to data correctness. This poses many new security challenges which have not been well explored. This paper focuses mainly on distributed data storage security for an e-learning system, which has always been an important aspect of service quality. To ensure the correctness of users' data within the cloud, an adaptable and effective auditing mechanism addresses these challenges and distributes erasure-coded data for the e-learning web application. Extensive analysis shows that the auditing scheme achieves quick data error correction and localization of servers subject to malicious data modification attacks.
A coordinated mechanism for multimode user equipment accessing wireless sensor network User Equipment (UE), as a basic device in cellular networks, may contain different types of sensor modules for specified uses; such a device is called a multimode UE. With its natural mobility, this new type of UE resembles an enhanced mobile sensor with more energy and more flexibility. Since a Wireless Sensor Network (WSN) can be deployed easily, as no major infrastructure or human control is needed, the integration of the cellular network and the wireless sensor network becomes attractive by combining the sensing part and the connection part in a flexible way, which expands both scalability and service applications for the networks. In this paper, we propose a coordinated accessing mechanism for UE interacting with the WSN. Through the analysis of the accessing problem, we can optimise the contention period for accessing in order to enhance the heterogeneous network performance. The simulation results show that the proposed accessing mechanism outperforms the conventional method in an integrated network.
An Approach For Developing An Interoperability Mechanism Between Cloud Providers Due to the presence of numerous cloud service providers, the requirement is emerging for interoperability between them so that companies can choose multiple suppliers to fit their needs. This paper provides an approach to address the problem of cloud interoperability. We aim at facilitating the collaboration among providers by proposing an architecture based on an agent society, to support and ensure the data portability and interoperability. For that, we define a two-phase migration protocol that enables data portability by permitting providers to exchange data regardless of their infrastructure, tools and platforms, according to a specific demand in order to satisfy companies' needs.
Fully homomorphic encryption with relatively small key and ciphertext sizes We present a fully homomorphic encryption scheme which has both relatively small key and ciphertext size. Our construction follows that of Gentry by producing a fully homomorphic scheme from a “somewhat” homomorphic scheme. For the somewhat homomorphic scheme the public and private keys consist of two large integers (one of which is shared by both the public and private key) and the ciphertext consists of one large integer. As such, our scheme has smaller message expansion and key size than Gentry’s original scheme. In addition, our proposal allows efficient fully homomorphic encryption over any field of characteristic two.
Reversibility improved data hiding in encrypted images A novel reversible data hiding technique in encrypted images is presented in this paper. Instead of embedding data in encrypted images directly, some pixels are estimated before encryption so that additional data can be embedded in the estimating errors. A benchmark encryption algorithm (e.g. AES) is applied to the rest of the pixels of the image, and a special encryption scheme is designed to encrypt the estimating errors. Without the encryption key, one cannot get access to the original image. However, provided with the data hiding key only, one can embed additional data in, or extract it from, the encrypted image without any knowledge of the original image. Moreover, the data extraction and image recovery are free of errors for all images. Experiments demonstrate the feasibility and efficiency of the proposed method, especially in terms of embedding rate versus Peak Signal-to-Noise Ratio (PSNR).
Data-driven discovery of physical laws BACON.3 is a production system that discovers empirical laws. Although it does not attempt to model the human discovery process in detail, it incorporates some general heuristics that can lead to discovery in a number of domains. The main heuristics detect constancies and trends in data, and lead to the formulation of hypotheses and the definition of theoretical terms. Rather than making a hard distinction between data and hypotheses, the program represents information at varying levels of description. The lowest levels correspond to direct observations, while the highest correspond to hypotheses that explain everything so far observed. To take advantage of this representation, BACON.3 has the ability to carry out and relate multiple experiments, collapse hypotheses with identical conditions, ignore differences to let similar concepts be treated as equal, and to discover and ignore irrelevant variables. BACON.3 has shown its generality by rediscovering versions of the ideal gas law, Kepler's third law of planetary motion, Coulomb's law, Ohm's law, and Galileo's laws for the pendulum and constant acceleration.
Evolution strategies – A comprehensive introduction This article gives a comprehensive introduction to one of the main branches of evolutionary computation – the evolution strategies (ES), whose history dates back to the 1960s in Germany. Starting from a survey of the history, the philosophical background is explained in order to make it understandable why ES are realized in the way they are. Basic ES algorithms and design principles for variation and selection operators as well as theoretical issues are presented, and future branches of ES research are discussed.
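As a concrete illustration of the basic ES loop (not taken from the article), a minimal (1+1)-ES in Python on a toy objective, using the classic 1/5-success rule to adapt the mutation strength:

import random

def one_plus_one_es(f, x0, sigma=1.0, iters=200):
    # Minimize f with a (1+1)-ES: mutate, keep the better of parent and
    # offspring, and adapt sigma with the 1/5-success rule every 20 iterations.
    x, fx, successes = list(x0), f(x0), 0
    for t in range(1, iters + 1):
        y = [xi + random.gauss(0, sigma) for xi in x]
        fy = f(y)
        if fy <= fx:                 # selection: offspring replaces parent
            x, fx, successes = y, fy, successes + 1
        if t % 20 == 0:              # 1/5-success rule for step-size adaptation
            sigma *= 1.5 if successes > 4 else 1 / 1.5
            successes = 0
    return x, fx

def sphere(v):
    return sum(c * c for c in v)

print(one_plus_one_es(sphere, [5.0, -3.0]))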
An introduction to partial evaluation Partial evaluation provides a unifying paradigm for a broad spectrum of work in program optimization, compiling, interpretation, and the generation of automatic program generators [Bjørner et al. 1987; Ershov 1992; and Jones et al. 1993]. It is a program optimization technique, perhaps better called program specialization, closely related to but different from Jørring and Scherlis' staging transformations [1986]. It emphasizes, in comparison with Burstall and Darlington [1977], Jørring and Scherlis [1986], and other program transformation work, full automation and the generation of program generators as well as the transformation of single programs. Much partial evaluation work to date has concerned automatic compiler generation from an interpretive definition of a programming language, but it also has important applications to scientific computing, logic programming, metaprogramming, and expert systems; some pointers are given later.
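The classic textbook illustration of partial evaluation (not from the article) is specializing a general power function with respect to a known, static exponent; a hand-written Python sketch of the residual program a specializer would produce:

# General program: both arguments are dynamic.
def power(x, n):
    result = 1
    for _ in range(n):
        result *= x
    return result

# Residual program after specializing power with respect to the static input
# n = 3: the loop over the known exponent has been unfolded away.
def power_3(x):
    return x * x * x

assert power(5, 3) == power_3(5) == 125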
Issues in global routing In this paper, we present several shortcomings of current global routers and propose enhancements to remedy the problems. We propose that global routers incorporate aspects of both placement and detail routing in order to improve the quality of the global routing. In addition, we propose the use of a constrained die methodology to complement fixed and variable die design flows. This paper outlines the algorithms present in the commercial global routers available for InternetCAD.com, Inc.
Implementation and extensibility of an analytic placer Automated cell placement is a critical problem in very large scale integration (VLSI) physical design. New analytical placement methods that simultaneously spread cells and optimize wirelength have recently received much attention from both academia and industry. A novel and simple objective function for spreading cells over the placement area is described in the patent of Naylor et al. (U.S. Pat. 6301693). When combined with a wirelength objective function, this allows efficient simultaneous cell spreading and wirelength optimization using nonlinear optimization techniques. In this paper, we implement an analytic placer (APlace) according to these ideas (which have other precedents in the open literature), and conduct in-depth analysis of characteristics and extensibility of the placer. Our contributions are as follows. 1) We extend the objective functions described in (Naylor et al., U.S. Patent 6301693) with congestion information and implement a top-down hierarchical (multilevel) placer (APlace) based on them. For IBM-ISPD04 circuits, the half-perimeter wirelength of APlace outperforms that of FastPlace, Dragon, and Capo, respectively, by 7.8%, 6.5%, and 7.0% on average. For eight IBM-PLACE v2 circuits, after the placements are detail-routed using Cadence WRoute, the average improvement in final wirelength is 12.0%, 8.1%, and 14.1% over QPlace, Dragon, and Capo, respectively. 2) We extend the placer to address mixed-size placement and achieve an average of 4% wirelength reduction on ten ISPD'02 mixed-size benchmarks compared to results of the leading-edge solver, FengShui. 3) We extend the placer to perform timing-driven placement. Compared with timing-driven industry tools, evaluated by commercial detailed routing and static timing analysis, we achieve an average of 8.4% reduction in cycle time and 7.5% reduction in wirelength for a set of six industry testcases. 4) We also extend the placer to perform input/output-core coplacement and constraint handing for mixed-signal designs. Our paper aims to, and empirically demonstrates, that the APlace framework is a general, and extensible platform for "spatial embedding" tasks across many aspects of system physical implementation.
Eureka definitions for free! or disagreement points for fold/unfold transformations The fold/unfold framework of Burstall and Darlington is a very powerful framework for transforming function definitions in the form of recursion equation schemes. This may be used to transform a function so as to improve the efficiency of its implementation. However, for this to work the user must supply so-called Eureka definitions and it may require some ingenuity to construct these. This paper shows that a class of these Eureka definitions can be derived in a rather systematic way.
A survey on search results diversification techniques The quantity of information placed on the web has been greater than before and is increasing rapidly day by day. Searching through the huge amount of data and finding the most relevant and useful result set involves searching, ranking, and presenting the results. Most of the users probe into the top few results and neglect the rest. In order to increase user's satisfaction, the presented result set should not only be relevant to the search topic, but should also present a variety of perspectives, that is, the results should be different from one another. The effectiveness of web search and the satisfaction of users can be enhanced through providing various results of a search query in a certain order of relevance and concern. The technique used to avoid presenting similar, though relevant, results to the user is known as a diversification of search results. This article presents a survey of the approaches used for search result diversification. To this end, this article not only provides a technical survey of existing diversification techniques, but also presents a taxonomy of diversification algorithms with respect to the types of search queries.
score_0–score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.05, 0.002469, 0, 0, 0, 0, 0, 0, 0
Oblivious routing on geometric networks We study oblivious routing in which the packet paths are constructed independently of each other. We give a simple oblivious routing algorithm for geometric networks in which the nodes are embedded in the Euclidean plane. In our algorithm, a packet path is constructed by first choosing a random intermediate node in the space between the source and destination, and then the packet is sent to its destination through the intermediate node. We analyze the performance of the algorithm in terms of the stretch and congestion of the resulting paths. We show that the stretch is constant, and the congestion is near optimal when the network paths can be chosen to be close to the geodesic lines that connect the end points of the paths. We give applications of our general result to the mesh topology and uniformly distributed disc graphs. Previous oblivious routing algorithms with near optimal congestion use many intermediate nodes and do not control the stretch.
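The path-construction idea described here, routing each packet through a random intermediate node between source and destination, can be illustrated on a 2D mesh with a Valiant-style two-phase sketch in Python (not the paper's geometric-network algorithm; dimension-order routing is used for each phase):

import random

def xy_route(src, dst):
    # Dimension-order (X then Y) path on a 2D mesh from src to dst.
    (x, y), (tx, ty) = src, dst
    path = [(x, y)]
    while x != tx:
        x += 1 if tx > x else -1
        path.append((x, y))
    while y != ty:
        y += 1 if ty > y else -1
        path.append((x, y))
    return path

def oblivious_route(src, dst):
    # Two-phase oblivious path selection: route to a random intermediate node
    # chosen inside the bounding box of src and dst, then continue to dst.
    (sx, sy), (dx, dy) = src, dst
    mid = (random.randint(min(sx, dx), max(sx, dx)),
           random.randint(min(sy, dy), max(sy, dy)))
    return xy_route(src, mid) + xy_route(mid, dst)[1:]

print(oblivious_route((0, 0), (4, 6)))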
Balancing traffic load in wireless networks with curveball routing We address the problem of balancing the traffic load in multi-hop wireless networks. We consider a point-to-point communicating network with a uniform distribution of source-sink pairs. When routing along shortest paths, the nodes that are centrally located forward a disproportionate amount of traffic. This translates into increased congestion and energy consumption. However, the maximum load can be decreased if the packets follow curved paths. We show that the optimum such routing scheme can be expressed in terms of geometric optics and computed by linear programming. We then propose a practical solution, which we call Curveball Routing which achieves results not much worse than the optimum. We evaluate our solution at three levels of fidelity: a Java high-level simulator, the ns2 simulator, and the Intel Mirage Sensor Network Testbed. Simulation results using the high-level simulator show that our solution successfully avoids the crowded center of the network, and reduces the maximum load by up to 40%. At the same time, the increase of the expected path length is minimal, i.e., only 8% on average. Simulation results using the ns2 simulator show that our solution can increase throughput on moderately loaded networks by up to 15%, while testbed results show a reduction in peak energy usage by up to 25%. Our prototype suggests that our solution is easily deployable.
Balancing Traffic Load Using One-Turn Rectilinear Routing We consider the problem of load-balanced routing, where a dense network is modelled by a continuous square region and origin and destination nodes correspond to pairs of points in that region. The objective is to define a routing policy that assigns a continuous path to each origin-destination pair while minimizing the traffic, or load, passing through any single point. While the average load is minimized by straight-line routing, such a routing policy distributes the load non-uniformly, resulting in higher load near the center of the region. We consider one-turn rectilinear routing policies that divert traffic away from regions of heavier load, resulting in up to a 33% reduction in the maximum load while simultaneously increasing the path lengths by an average of less than 28%. Our policies are simple to implement, being both local and oblivious. We provide a lower bound that shows that no one-turn rectilinear routing policy can reduce the maximum load by more than 39% and we give a polynomial-time procedure for approximating the optimal randomized policy.
Optimal Oblivious Path Selection on the Mesh In the oblivious path selection problem, each packet in the network independently chooses a path, which is an important property if the routing algorithm is to be independent of the traffic distribution. The quality of the paths is determined by the congestion C, the maximum number of paths crossing an edge, and the dilation D, the maximum path length. So far, the oblivious algorithms studied in the literature have focused on minimizing the congestion while ignoring the dilation. An open problem is to give algorithms for networks in which C and D can be controlled simultaneously. Here, we solve this problem for the d-dimensional mesh. We present an oblivious algorithm for which C and D are both within O(d2) of optimal. The algorithm uses randomization, and we show that the number of random bits required per packet is within O(d) of the minimum number of random bits required by any algorithm that obtains the same congestion. For fixed d, our algorithm is asymptotically optimal.
A practical algorithm for constructing oblivious routing schemes In a (randomized) oblivious routing scheme the path chosen for a request between a source s and a target t is independent from the current traffic in the network. Hence, such a scheme consists of probability distributions over s-t paths for every source-target pair s,t in the network. In a recent result [11] it was shown that for any undirected network there is an oblivious routing scheme that achieves a polylogarithmic competitive ratio with respect to congestion. Subsequently, Azar et al. [4] gave a polynomial time algorithm that for a given network constructs the best oblivious routing scheme, i.e. the scheme that guarantees the best possible competitive ratio. Unfortunately, the latter result is based on the Ellipsoid algorithm; hence it is unpractical for large networks. In this paper we present a combinatorial algorithm for constructing an oblivious routing scheme that guarantees a competitive ratio of O(log^4 n) for undirected networks. Furthermore, our approach yields a proof for the existence of an oblivious routing scheme with competitive ratio O(log^3 n), which is much simpler than the original proof from [11].
On-line load balancing with applications to machine scheduling and virtual circuit routing
Reusing and Interconnecting Software Components
Arity Raiser and its Use in Program Specialization Experiments on generating compilers by specializing specializers with respect to interpreters have shown that the compilers thus obtained have a natural structure only if the specializer does variable splitting. Variable splitting can result in a residual program using several variables to represent the values of a single variable of the original program. In the case of functional programming variable splitting is done by raising the arities of functions. The paper describes the structure and principles of operation of an arity raiser dealing with programs in a subset of pure Lisp.
Putting theories together to make specifications
Calculating path algorithms A calculational derivation is given of two abstract path algorithms. The first is an all-pairs algorithm, two well-known instances of which are Warshall's (reachability) algorithm and Floyd's shortest-path algorithm; instances of the second are Dijkstra's shortest-path algorithm and breadth-first/depth-first search of a directed graph. The basis for the derivations is the algebra of regular languages.
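One familiar instance of these abstract path algorithms is Floyd's all-pairs shortest-path algorithm; a minimal Python sketch of the standard textbook version (not the calculational derivation itself):

def floyd_warshall(dist):
    # All-pairs shortest paths.
    # dist: n x n matrix with dist[i][j] = edge weight (float('inf') if absent)
    # and dist[i][i] = 0. Returns the matrix of shortest-path lengths.
    # Replacing (min, +) by boolean (or, and) gives Warshall's reachability
    # algorithm, the other instance mentioned in the abstract.
    n = len(dist)
    d = [row[:] for row in dist]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = float("inf")
print(floyd_warshall([[0, 3, INF], [INF, 0, 1], [2, INF, 0]]))
# -> [[0, 3, 4], [3, 0, 1], [2, 5, 0]]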
Duce, An Oracle-based Approach to Constructive Induction Duce is a Machine Learning system which suggests high-level domain features to the user (or oracle) on the basis of a set of example object descriptions. Six transformation operators are used to successively compress the given examples by generalisation and feature construction. In this paper Duce is illustrated by way of its construction of a simple animal taxonomy and a hierarchical parity checker. However, Duce's main achievement has been the restructuring of a substantial expert system for deciding whether positions within the chess endgame of King-and-Pawn-on-a7 v. King-and-Rook (KPa7KR) are won-for-White or not. The new concepts suggested by Duce for the chess expert system hierarchy were found to be meaningful by the chess expert Ivan Bratko. An existing manually created KPa7KR solution, which was the basis of a recent Ph.D. thesis, is compared to the structure interactively created by Duce.
DUNE - a multilayer gridless routing system Advances in very large scale integration technologies present two challenges for routing problems: (1) the higher integration of transistors due to shrinking feature sizes and (2) the requirement for off-grid routing due to the variable-width variable-spacing design rules imposed by optimization techniques. In this paper, we present a multilayer gridless detailed routing system for deep submicrometer physical designs. Our detailed routing system uses a hybrid approach consisting of two parts: (1) an efficient variable-width variable-spacing detailed routing engine and (2) a wire-planning algorithm providing high-level guidance as well as ripup and reroute capabilities. Our gridless routing engine is based on an efficient point-to-point gridless routing algorithm using an implicit representation of a nonuniform grid graph. We proved that such a graph guarantees a gridless connection of minimum cost in the multilayer variable-width and variable-spacing routing problem. A novel data structure using a two-level slit tree plus interval tree in combination with a cache structure is developed to support efficient queries into the connection graph. Our experiments show that this data structure is very efficient in memory usage while very fast in answering maze-expansion-related queries. Our detailed routing system also features a coarse grid-based wire-planning algorithm that uses exact gridless design rules (variable-width and variable-spacing) to accurately estimate the routing resources and distribute nets into routing regions. The wire-planning method also enables efficient ripup and reroute in gridless routing. Unlike previous approaches for gridless routing that explore alternatives of blocked nets by gradually tightening the design rules, our planning-based approach can take the exact gridless rules and resolve the congestion and blockage at a higher level. Our experimental results show that using the wire-planning algorithm in our detailed routing system can improve the routability and also speed up the runtime by 3 to 17 times.
A novel data hiding scheme based on modulus function Four criteria are generally used to evaluate the performance of a data hiding scheme: the embedding capacity, the visual quality of the stego-image, the security, and the complexity of the data-embedding algorithm. However, data hiding schemes seldom take all these factors into consideration. This paper proposes a novel data hiding scheme that uses a simple modulus function to address all the performance criteria listed above. According to the input secret keys, the encoder and decoder use the same set-generation functions H_r() and H_c() to first generate two sets K_r and K_c. A variant Cartesian product is then created using K_r and K_c. Each cover pixel then forms a pixel group with its neighboring pixels by exploiting an efficient modulus function; the secret data are then embedded or extracted via a mapping process between the variant of the Cartesian product and each pixel group. The proposed scheme offers several advantages, namely (1) the embedding capacity can be scaled, (2) a good visual quality of the stego-image can be achieved, (3) the computational cost of embedding or extracting the secret data is low and requires little memory space, (4) secret keys are used to protect the secret data, and (5) the problem of overflow or underflow does not occur, regardless of the nature of the cover pixels. We tested the performance of the proposed scheme by comparing it with Mielikainen's and Zhang and Wang's schemes for gray-scale images. The experimental results showed that our proposed scheme outperforms Mielikainen's in three respects, namely scalable embedding capacity, embedding rate, and level of security. Our data hiding scheme also achieved a higher embedding capacity than Zhang and Wang's. The proposed scheme can easily be applied to both gray-scale and color images. Analyses of its performance showed that our proposed scheme outperforms Tsai and Wang's in terms of its time complexity and memory space requirement.
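A generic flavor of modulus-function embedding (not the paper's variant-Cartesian-product scheme) hides a base-m digit in a pixel by moving it to the nearest value congruent to the digit modulo m; a minimal Python sketch:

def mod_embed(pixel, digit, m=4):
    # Embed one base-m digit by moving the pixel to the nearest value whose
    # remainder modulo m equals the digit. Boundary pixels near 0 or 255 would
    # need extra care that is omitted in this sketch.
    delta = (digit - pixel) % m
    if delta > m // 2:
        delta -= m          # move down instead of up when that is closer
    return pixel + delta

def mod_extract(pixel, m=4):
    return pixel % m

p = mod_embed(130, 3)
print(p, mod_extract(p))  # -> 131 3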
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of the embedded cover image to ensure the security of the secret data. The multilayer embedding of a pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting, while the peak point itself remains unchanged. At the extraction end, we no longer need the key information about the peak point; we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of the embedded cover image almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
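The underlying histogram-shifting step that this method localizes can be sketched generically in Python as follows (a single peak/zero pair, assuming zero > peak, no pixel equal to the zero value, and a payload length known to the decoder; this is not the paper's two-neighbor variant):

def hs_embed(pixels, bits, peak, zero):
    # Shift histogram bins strictly between peak and zero up by one, then let
    # pixels equal to the peak value carry one payload bit each.
    out, k = [], 0
    for p in pixels:
        if peak < p < zero:
            out.append(p + 1)                 # make room next to the peak
        elif p == peak and k < len(bits):
            out.append(p + bits[k])           # peak pixels carry the payload
            k += 1
        else:
            out.append(p)
    return out

def hs_extract(stego, peak, zero):
    # Recover the payload bits and undo the shift to restore the cover pixels.
    bits, orig = [], []
    for p in stego:
        if p == peak:
            bits.append(0)
            orig.append(peak)
        elif p == peak + 1:
            bits.append(1)
            orig.append(peak)
        elif peak + 1 < p <= zero:
            orig.append(p - 1)                # undo the shift
        else:
            orig.append(p)
    return bits, orig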
score_0–score_13: 1.027552, 0.026562, 0.025, 0.022464, 0.014771, 0.007553, 0, 0, 0, 0, 0, 0, 0, 0
Dimensions in program synthesis Program Synthesis, which is the task of discovering programs that realize user intent, can be useful in several scenarios: enabling people with no programming background to develop utility programs, helping regular programmers automatically discover tricky/mundane details, program understanding, discovery of new algorithms, and even teaching. This paper describes three key dimensions in program synthesis: expression of user intent, space of programs over which to search, and the search technique. These concepts are illustrated by brief descriptions of various program synthesis projects that target synthesis of a wide variety of programs such as standard undergraduate textbook algorithms (e.g., sorting, dynamic programming), program inverses (e.g., decoders, deserializers), bitvector manipulation routines, deobfuscated programs, graph algorithms, text-manipulating routines, mutual exclusion algorithms, etc.
Algorithmic program diagnosis The notion of program correctness with respect to an interpretation is defined for a class of programming languages. Under this definition, if a program terminates with an incorrect output then it contains an incorrect procedure. Algorithms for detecting incorrect procedures are developed. These algorithms formalize what experienced programmers may know already. A logic program implementation of these algorithms is described. Its performance suggests that the algorithms can be the backbone of debugging aids that go far beyond what is offered by current programming environments. Applications of algorithmic debugging to automatic program construction are explored.
Alchemy: transmuting base alloy specifications into implementations Alloy specifications are used to define lightweight models of systems. We present Alchemy, which compiles Alloy specifications into implementations that execute against persistent databases. Alchemy translates a subset of Alloy predicates into imperative update operations, and it converts facts into database integrity constraints that it maintains automatically in the face of these imperative actions. In addition to presenting the semantics and an algorithm for this compilation, we present the tool and outline its application to a non-trivial specification. We also discuss lessons learned about the relationship between Alloy specifications and imperative implementations.
Sketching concurrent data structures We describe PSketch, a program synthesizer that helps programmers implement concurrent data structures. The system is based on the concept of sketching, a form of synthesis that allows programmers to express their insight about an implementation as a partial program: a sketch. The synthesizer automatically completes the sketch to produce an implementation that matches a given correctness criterion. PSketch is based on a new counterexample-guided inductive synthesis algorithm (CEGIS) that generalizes the original sketch synthesis algorithm from Solar-Lezama et al. to cope efficiently with concurrent programs. The new algorithm produces a correct implementation by iteratively generating candidate implementations, running them through a verifier, and if they fail, learning from the counterexample traces to produce a better candidate; converging to a solution in a handful of iterations. PSketch also extends Sketch with higher-level sketching constructs that allow the programmer to express her insight as a "soup" of ingredients from which complicated code fragments must be assembled. Such sketches can be viewed as syntactic descriptions of huge spaces of candidate programs (over 10^8 candidates for some sketches we resolved). We have used the PSketch system to implement several classes of concurrent data structures, including lock-free queues and concurrent sets with fine-grained locking. We have also sketched some other concurrent objects including a sense-reversing barrier and a protocol for the dining philosophers problem; all these sketches resolved in under an hour.
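CEGIS is described above only at a high level; the following toy sketch (an assumption-laden illustration, not PSketch itself) shows the candidate/verify/learn loop for a trivial synthesis problem: choosing an integer hole value c so that the candidate f(x) = x + c matches a black-box specification on every input of a bounded domain.

    import itertools

    DOMAIN = range(-50, 51)
    spec = lambda x: 3 * ((x + 7) - x) + x        # black-box spec; behaves like x + 21

    def verify(c):
        """Return a counterexample input, or None if x + c == spec(x) on the whole domain."""
        for x in DOMAIN:
            if x + c != spec(x):
                return x
        return None

    def synthesize():
        examples = []                              # counterexamples gathered so far
        for c in itertools.count(start=-100):      # naive enumeration of hole values
            if c > 100:
                return None                        # give up outside the search bound
            if all(x + c == spec(x) for x in examples):   # inductive step: fit known examples
                cex = verify(c)                            # verification step
                if cex is None:
                    return c                               # candidate survives verification
                examples.append(cex)                       # learn from the counterexample

    print(synthesize())                            # prints 21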
Planware - Domain-Specific Synthesis of High-Performance Schedulers Planware is a domain-specific generator of high-performance scheduling software, currently being developed at Kestrel Institute. Architecturally, Planware is an extension of the Specware system with domain-independent and domain-dependent parts. The domain-independent part includes a general algorithm design facility (including mechanisms to synthesize global-search and constraint propagation algorithms), as well as support for theorem-proving and witness finding. The domain-dependent part includes scheduling domain knowledge and architecture representations, and other domain-specific refinement knowledge that relates the scheduling domain to general algorithm design and data type refinement. Using Planware, the user interactively specifies a problem and then the system automatically generates a formal specification and refines it.
Automatic programming for streams II: transformational implementation ΦNIX is an automatic programming system, now under development, for writing programs which interact with external devices through temporally-ordered streams of values. Abstract specifications are stated in terms of constraints on the values of input and output streams. The target language is the Stream Machine, a language which includes concurrently executing processes communicating and synchronizing through streams. ΦNIX produces programs by repeatedly transforming abstract specifications through successively more concrete forms until concrete Stream Machine programs are produced. An example which ΦNIX has successfully implemented involves three major steps: transforming the specification into an applicative expression, transforming the applicative expression into three imperative processes, and merging the processes into a single process. Each major step involves several other transformation steps that reformulate and simplify intermediate expressions.
Transformational Implementation: An Example A system for mechanically transforming formal program specifications into efficient implementations under interactive user control is described and illustrated through a detailed example. The potential benefits and problems of this approach to software implementation are discussed.
Structured Programming with go to Statements For serious students of structured programming, and also for language designers, Knuth's "Structured Programming with go to Statements" is probably the paper to read. It is by far the most complete description of structured programming of all the selections in this book. Even though it originally was published in late 1974, Knuth's ideas have not aged a bit. The title, as Knuth acknowledges, is somewhat of a misnomer: " . . . I should confess that the title of this article was chosen primarily to generate attention." Indeed, he is not arguing that structured programming always should be carried out with goto statements; instead, as he says, "what I am really doing is striving for a reasonably well-balanced viewpoint about the proper role of go to statements." Like a number of other authors whose views are presented in this book, Knuth seems deeply concerned that the average programmer will over-react to the "no goto" philosophy. Reflecting that concern, the first few pages of Knuth's paper give an indication of the fanaticism of its time --- a fanaticism that has largely died away. For history buffs, Knuth has carefully documented the origins of structured programming. He claims that the programming style now referred to as "structured programming" actually was first practiced by D.V. Schorre in 1960, and that was followed by publication of the work of Peter Naur in 1963 and of George Forsythe in 1964 -- all of this before the better-known works of Dijkstra, and of Böhm and Jacopini. There are three major themes in Knuth's paper, and a vast number of examples to illustrate each one. One theme --- a familiar one, if you've read many of the other papers in this book --- is that a program needs to be rewritten several times before it can be considered a truly good program. As Knuth says, " . . . I learned again that I should always keep looking for improvements, even when I have a satisfactory program." Indeed, one gets the impression that Knuth views this as the major virtue of structured programming: The requirement to eliminate gotos forces the programmer to rewrite and thus think more about what he was trying to accomplish with his program. A second theme --- one that you won't find in any of the other papers --- is that efficiency is important in some programming applications, and that the programmer needs the freedom to use gotos to optimize critical portions of a program. Actually, Martin Hopkins makes the same point in "A Case for the GOTO" [Paper 9], but he does so much less forcefully than Knuth. Knuth recognizes the danger of overemphasizing optimization; as he says, "premature optimization is the root of all evil." And, he does state that only three percent of the code in a typical program ever needs to be optimized; but for that critical three percent, he demonstrates that structured code often is twenty to thirty percent less efficient than equivalent code with gotos. Unfortunately, the point is repeated so many times, with so many examples, that the average reader is likely to conclude that Knuth is obsessed with efficiency. However, given the number of authors exhorting programmers to ignore efficiency altogether, it is probably very healthy to have someone argue for a careful consideration of program efficiency. Knuth's third main theme is that structured programming could be made much more practical with the addition of a few language constructs.
In addition to constructs like LEAVE and BREAK, Knuth argues for a "situation" construct patterned after a proposal originally made by C.T. Zahn. In keeping with the heretical title of his paper, Knuth introduces a fourth theme: There are times when the programmer should put gotos into his code, rather than take them out. For example, gotos can be used to convert recursion to iteration; or to implement coroutines; or to eliminate Boolean variables by branching into common code. In this context, Knuth suggests the following strategy: First, write the program in a structured manner to convince yourself that it is correct; then, transform it into an efficient program, possibly by introducing some goto statements; and, finally, leave the original structured code behind as documentation, so that subsequent readers can understand how the transformation took place. Whether the average programmer would go through these steps in an orderly, formal way is something worth pondering. My own suspicion is that it won't work, but I'm often branded a skeptic. To conclude: It's probably best not to read Knuth's paper in one sitting, for you almost surely will become mentally fatigued by the eight major examples, each of which is rewritten three or four (or more!) times in an ALGOL-like pseudocode. Read the paper piece by piece, and do read the code --- you'll learn a lot from it!
Folding by similarity. A formal specification can describe software models which are difficult to program. Transformational methods based on fold/unfold strategies have been proposed to palliate this problem. The objective of applying transformations is to filter out a new version of the specification where recursion may be introduced by a folding step. Among many problems, the "eureka" about when and how to define a new predicate is difficult to find automatically. We propose a new version of the folding rule which decides automatically how to introduce new predicates in a specification. Our method is based on finding similarities between formulas represented as parsing trees and it constitutes an assistance to the complex problem of deriving recursive specifications from...
A debate on teaching computing science
Generating query substitutions We introduce the notion of query substitution, that is, generating a new query to replace a user's original search query. Our technique uses modifications based on typical substitutions web searchers make to their queries. In this way the new query is strongly related to the original query, containing terms closely related to all of the original terms. This contrasts with query expansion through pseudo-relevance feedback, which is costly and can lead to query drift. This also contrasts with query relaxation through boolean or TFIDF retrieval, which reduces the specificity of the query. We define a scale for evaluating query substitution, and show that our method performs well at generating new queries related to the original queries. We build a model for selecting between candidates, by using a number of features relating the query-candidate pair, and by fitting the model to human judgments of relevance of query suggestions. This further improves the quality of the candidates generated. Experiments show that our techniques significantly increase coverage and effectiveness in the setting of sponsored search.
QoS-Aware Service Composition in Dino A major advantage offered by Web services technologies is the ability to dynamically discover and invoke services. This ability is particularly important for operations of many applications executing in open dynamic environments. The QoS properties of the required and provided services play a significant role in dynamic discovery and invocation of services in open dynamic environments. In this paper, we discuss our approach to QoS specification and service provider selection, in the context of our work on the Dino project. The service provider selection algorithm used in Dino takes into account the relative benefit offered by a provider with respect to the requester-specified QoS criteria, and the trustworthiness of the provider. We explain our approach using an example from the automotive domain.
Adjustable prediction-based reversible data hiding Reversible data hiding has received considerable interest recently in the field of information hiding. However, most difference expansion (DE)-based schemes suffer from the problem that the image cannot be restored to its original form without a location map. Reversible data hiding schemes with a location map cannot effectively raise the hiding capacity; in addition, they incur extra computational cost during embedding and extraction. This study presents a reversible data hiding scheme that is free of a location map; thus the computational cost associated with embedding and extraction is lower than that of most DE-based schemes. Furthermore, our scheme solves the problem of the low embedding rate of Tseng and Hsieh's scheme because it takes advantage of the large number of relatively small prediction errors to raise the embedding capacity. Experimental results demonstrate that the proposed scheme achieves a larger payload and better image quality than some existing schemes via an adjustable control factor. The performance of the proposed scheme is more stable across different images.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of the embedded cover image to ensure the security of the secret data. The multilayer embedding of a pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting while the peak point remains unchanged. At the extraction end, we no longer need key information about the peak point; we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of the embedded cover image become almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
1.068889
0.066667
0.066667
0.035
0.006061
0.000022
0.000002
0
0
0
0
0
0
0
Reversible Image Hiding Scheme for High Quality Based on Histogram Shifting In 2006, Ni et al. proposed a reversible image hiding scheme using histogram shifting. Their scheme modified the pixel values of the cover-image between the peak point and the zero point in the histogram. The PSNR of almost every histogram-shifting-based method was about 48 dB because many pixels that do not hide secret data were modified. In this paper, we improve Ni et al.'s scheme using gradient-adjusted prediction (GAP) and a modulo operation to achieve higher hiding capacity and image quality. In the experimental results, the hiding capacity of the proposed scheme is superior to Ni et al.'s scheme, and the image quality of the proposed scheme is increased by about 7 dB compared with Ni et al.'s scheme.
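The Ni et al.-style baseline referred to above is easy to state concretely; here is a minimal sketch of the peak/zero histogram-shifting step (embedding and bit extraction only; restoring the cover image, overflow handling, and transmission of the peak/zero pair are omitted, and the code assumes the minimum-frequency bin is actually empty).

    import numpy as np

    def hs_embed(img, bits):
        """Peak/zero histogram shifting: shift the bins strictly between peak and zero by one
        to free the bin next to the peak, then embed one bit per peak-valued pixel."""
        img = img.astype(np.int32).copy()
        hist = np.bincount(img.ravel(), minlength=256)
        peak, zero = int(hist.argmax()), int(hist.argmin())   # argmin should be an empty bin
        step = 1 if peak < zero else -1
        lo, hi = sorted((peak, zero))
        img[(img > lo) & (img < hi)] += step                  # make room next to the peak
        carriers = np.flatnonzero(img == peak)                # capacity = number of peak pixels
        for i, b in zip(carriers, bits):
            img.flat[i] += step * b                           # peak stays for 0, moves for 1
        return img.astype(np.uint8), peak, zero

    def hs_extract(stego, peak, zero, n_bits):
        step = 1 if peak < zero else -1
        flat = stego.astype(np.int32).ravel()
        carriers = np.flatnonzero((flat == peak) | (flat == peak + step))
        return [int(flat[i] == peak + step) for i in carriers][:n_bits]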
Adaptive reversible data hiding based on block median preservation and modification of prediction errors In this paper, two enhanced reversible data hiding methods are proposed; both of them are based on two novel reversible data hiding techniques. A recent predictor is adopted to achieve better data hiding capability in the first predictive reversible data hiding scheme, whereas the other scheme uses a new approach that considers the nature of different images to classify the smoothness of each image block, so that more secret data can be hidden in smooth regions rather than non-smooth ones, resulting in better embedding capability. The experiments verify that these schemes outperform the original reversible data hiding algorithms and some state-of-the-art reversible data hiding schemes.
Block-based reversible data hiding with multi-round estimation and difference alteration In this paper, we propose a new reversible data hiding algorithm, easy to implement, based on the alteration of difference values related to the original image. Three major categories can make reversible data hiding possible: one is to modify the histogram of original images, the second is to alter the difference between neighboring pixels, and the third is to apply a prediction method and modify the difference between original and predicted pixels. By integrating the advantages of these categories, and by extending the flexibility to acquire the difference values for data hiding, we employ block-based, multi-round prediction to seek enhanced performance. Simulations with conventional and medical images demonstrate the superiority of our algorithm over existing ones in the literature. With our algorithm, enhanced capacity, better image quality, and, most important of all, reversibility can be achieved. It is also notable for its ease of implementation in medical applications in hospitals.
A high capacity reversible watermarking approach for authenticating images: Exploiting down-sampling, histogram processing, and block selection Amongst the various data hiding schemes, lossless data hiding has achieved substantial attention in the recent years. Lossless watermarking also known as reversible watermarking, not only ensures the extraction of the hidden information, called watermark, but also the complete recovery of the original image as well. Recently, several interesting reversible block-based watermarking approaches using histogram processing have been reported. These approaches offer substantial watermarking capacity for a fixed image quality measure. However, there is considerable margin of improvement as regards the imperceptibility versus capacity tradeoff is concerned. Some of the watermarking applications such as those related to integrity-control need high capacity. Therefore, we present a high capacity reversible watermarking approach based on histogram processing and block selection (RW-HPBS). The proposed RW-HPBS approach exploits the concept of down sampling for effectively increasing the capacity and is useful for integrity control and authentication related applications. Two down-sampled versions, the reference and the data hiding, are obtained to create space for embedding. Authentication property is achieved by taking a secondary advantage of the location map before being compressed and embedded as an auxiliary information. The effectiveness of the proposed method is shown through comparison with the existing techniques. Experimental results show that the proposed RW-HPBS approach provides better imperceptibility versus capacity tradeoff and can detect tampering and collage attack.
A local variance-controlled reversible data hiding method using prediction and histogram-shifting The stego image quality produced by the histogram-shifting based reversible data hiding technique is high; however, it often suffers from lower embedding capacity compared to other types of reversible data hiding techniques. In 2009, Tsai et al. solved this problem by exploiting the similarity of neighboring pixels to construct a histogram of prediction errors; data embedding is done by shifting the error histogram. However, Tsai et al.'s method does not fully exploit the correlation of the neighboring pixels. In this paper, a set of basic pixels is employed to improve the prediction accuracy, thereby increasing the payload. To further improve the image quality, a threshold is used to select only low-variance blocks to join the embedding process. According to the experimental results, the proposed method provides a better or comparable stego image quality than Tsai et al.'s method and other existing reversible data hiding methods under the same payload.
Information Hiding Based on Block Match Coding for Vector Quantization-Compressed Images In this paper, a novel highly efficient lossless data hiding scheme is proposed to achieve the goal of hiding secret data into vector quantization (VQ)-compressed images that can be losslessly reconstructed after the secret data is extracted in the decoder. For VQ-compressed images, the index-modifying and the side-match VQ (SMVQ) techniques can be applied to encode indices and hide secret data. In general, data hiding accompanied by the SMVQ technique can yield a higher embedding capacity and a lower bit rate, yet more time consuming. In contrast, data hiding accompanied by the index-modifying technique can yield a lower embedding capacity and a higher bit rate, yet less time consuming. To possess the advantages of the two techniques while removing the shortcomings, the neighboring processed indices are employed to speed up the processes of generating state codebooks required for encoding and hiding. To evaluate the effectiveness of this approach, various test images are employed in the experiments. As documented in the experimental results, it is shown that the performance of the proposed scheme is superior to former schemes in terms of compression ratio, embedding rate, processing efficiency, and embedding capacity.
Reversible image hiding scheme using predictive coding and histogram shifting In this paper, a reversible image hiding scheme based on histogram shifting for medical images is proposed. As we know, the histogram-based reversible data hiding is limited by the hiding capacity, which is influenced by the overhead of position information that has to be embedded in the host image. To solve this problem, the similarity of neighboring pixels in the images was explored by using the prediction technique and the residual histogram of the predicted errors of the host image was used to hide the secret data in the proposed scheme. In addition, the overlapping between peak and zero pairs was used to further increase the hiding capacity. According to the experimental results, a higher hiding capacity was obtained and a good quality stego-image was preserved in the proposed scheme. The hiding capacity provided by the proposed scheme was approximately three times that of the original histogram-based method. Compared to the histogram-based method, the quality of the stego-image improved about 1.5dB when the same amounts of secret data were embedded.
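The prediction step above is only described qualitatively; as an illustration of how a residual (prediction-error) histogram is obtained, the sketch below uses the simple median edge detector (MED) predictor from JPEG-LS as a stand-in for whatever predictor a given scheme actually uses. The point is that the error histogram is sharply peaked around zero, which is what gives prediction-based histogram shifting its extra capacity.

    import numpy as np

    def med_predict(img):
        """Median edge detector (MED): predict each pixel from its W, N and NW neighbours."""
        img = img.astype(np.int32)
        pred = np.zeros_like(img)
        h, w = img.shape
        for y in range(1, h):
            for x in range(1, w):
                a, b, c = img[y, x - 1], img[y - 1, x], img[y - 1, x - 1]   # W, N, NW
                if c >= max(a, b):
                    pred[y, x] = min(a, b)
                elif c <= min(a, b):
                    pred[y, x] = max(a, b)
                else:
                    pred[y, x] = a + b - c
        return pred

    def error_histogram(img):
        """Histogram of prediction errors, skipping the unpredicted first row and column."""
        errors = (img.astype(np.int32) - med_predict(img))[1:, 1:]
        values, counts = np.unique(errors, return_counts=True)
        return dict(zip(values.tolist(), counts.tolist()))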
Separable Reversible Data Hiding in Encrypted JPEG Bitstreams. While most techniques of reversible data hiding in encrypted images (RDH-EI) are developed for uncompressed images, this paper provides a separable reversible data hiding protocol for encrypted JPEG bitstreams. We first propose a JPEG encryption algorithm, which enciphers an image to a smaller size and keeps the format compliant to JPEG decoder. After a content owner uploads the encrypted JPEG bit...
Generalized lossless data hiding by multiple predictors Digital watermarking is used to prove the ownership and the authenticity of the media. However, as some applications, such as medical and military imaging, are sensitive to distortion, this highlights the need for lossless watermarking. In this paper, we propose a new lossless data hiding algorithm by using multiple predictors, which extends and generalizes our previous watermarking idea (Yip, 2005). By using different predictors with different characteristics, we can choose the embedding location to be a low-variance region or a high-variance region. The PSNR and the payload capacity are high and there are no "salt-and-pepper" artifacts.
Steganalysis of least significant bit matching using multi-order differences This paper presents a learning-based steganalysis/detection method to attack spatial-domain least significant bit (LSB) matching steganography in grayscale images, which is the prototype of many sophisticated steganographic methods. We model the message embedded by LSB matching as independent noise added to the image, and theoretically prove that LSB matching smoothes the histogram of multi-order differences. Because of the dependency among neighboring pixels, the histogram of low-order differences can be approximated by a Laplace distribution. The smoothness caused by LSB matching is especially apparent at the peak of the histogram. Consequently, the low-order differences of image pixels are calculated. The co-occurrence matrix is utilized to model the differences with small absolute values in order to extract features. Finally, support vector machine classifiers are trained with the features so as to identify a test image as either an original or a stego image. The proposed method is evaluated on LSB matching and its improved version HUGO. In addition, the proposed method is compared with state-of-the-art steganalytic methods. The experimental results demonstrate the reliability of the new detector.
Reversible steganographic method using SMVQ approach based on declustering Reversible steganography allows an original image to be completely restored after the extraction of hidden data embedded in a cover image. In this paper, we propose a reversible scheme for VQ-compressed images that is based on a declustering strategy and takes advantage of the local spatial characteristics of the image. The main advantages of our method are ease of implementation, low computational demands, and no requirement for auxiliary data. The experimental results show that the embedding capacity of the proposed method is controlled by the number of declustered groups and the spatial characteristics of the cover image. The results also show that the proposed method has the best embedding capacity and the least running time compared with other methods.
Synthesis: Dreams → Programs Deductive techniques are presented for deriving programs systematically from given specifications. The specifications express the purpose of the desired program without giving any hint of the algorithm to be employed. The basic approach is to transform the specifications repeatedly according to certain rules, until a satisfactory program is produced. The rules are guided by a number of strategic controls. These techniques have been incorporated in a running program-synthesis system, called DEDALUS.
Calculating encoding and decoding functions for prefix codes The transformational model of program development allows obtaining programs correct by construction. When developing nontrivial programs, three activities arise: program synthesis, program transformation, and verification of properties; in addition, specification, use and implementation of ADTs must typically be considered all through these activities. The purpose of this article is to illustrate the need for all these activities by means of a nontrivial problem: encoding and decoding with prefix codes. Our exposition is rather informal, not being committed to any particular program transformation system, and it should be understood by any programmer looking forward to develop functional programs in a systematic way, while avoiding the burden of any particular system. In particular, we show the stages in the complete transformational development of a functional program, dealing successively with correctness and time efficiency, as well as the key decisions that are adopted.
Secure and imperceptible digital image steganographic algorithm based on diamond encoding in DWT domain. This paper presents a new efficient embedding algorithm in the wavelet domain of digital images based on the diamond encoding (DE) scheme. Current discrete wavelet transform (DWT) steganography adds an unacceptable distortion to the images and is considered ineffective in terms of security. Applying the DE scheme to the current DWT steganographic methods solves the problems of these methods, reduces the distortion added to the images, and thus improves the embedding efficiency. The proposed algorithm first converts the secret image into a sequence of base-5 digits. After that, the cover image is transformed into the DWT domain and segmented into 2 × 1 coefficient pairs. The DE scheme is then used to change at most one coefficient of each coefficient pair to embed the base-5 digits. Experimental results depict that the proposed algorithm is more efficient in embedding compared to other methods in terms of embedding payload and image quality. Moreover, the proposed algorithm is attacked by well-known steganalysis software. Results show that the proposed algorithm is secure against the powerful universal steganalyzer “ensemble classifier” and the histogram attack. The results also reveal that the proposed algorithm is robust against different image processing attacks such as compression, added noise, and cropping attacks.
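The diamond encoding step itself is not detailed above; the sketch below illustrates the k = 1 case as commonly formulated (embedding base 2k^2 + 2k + 1 = 5, with characteristic value f(p1, p2) = (3*p1 + p2) mod 5). It should be read as an approximation of the idea rather than the paper's exact wavelet-domain procedure, and coefficient saturation handling is ignored.

    def to_base5(data: bytes):
        """Re-express a byte string as a stream of base-5 digits (least significant digit first)."""
        digits = []
        for byte in data:
            value = byte
            for _ in range(4):                     # 5**4 = 625 > 256, so 4 digits cover one byte
                digits.append(value % 5)
                value //= 5
        return digits

    def diamond_embed(p1, p2, digit):
        """Adjust at most one value of the pair so that (3*p1 + p2) % 5 == digit (k = 1 case)."""
        for d1, d2 in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:   # the radius-1 diamond
            q1, q2 = p1 + d1, p2 + d2
            if (3 * q1 + q2) % 5 == digit:         # the five moves cover all five residues
                return q1, q2

    def diamond_extract(q1, q2):
        return (3 * q1 + q2) % 5

    # toy usage on one coefficient pair
    q1, q2 = diamond_embed(57, 130, 4)
    assert diamond_extract(q1, q2) == 4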
1.206778
0.015296
0.01477
0.013786
0.008115
0.003667
0.001054
0.000333
0.000106
0.000032
0.000001
0
0
0
High capacity, reversible data hiding in medical images In this paper we introduce a highly efficient reversible data hiding technique. It is based on dividing the image into tiles and shifting the histograms of each image tile between its minimum and maximum frequency. Data are then inserted at the pixel level with the largest frequency to maximize data hiding capacity. It exploits the special properties of medical images, where the histogram of their non-overlapping image tiles mostly peak around some gray values and the rest of the spectrum is mainly empty. The zeros (or minima) and peaks (maxima) of the histograms of the image tiles are then relocated to embed the data. The grey values of some pixels are therefore modified. High capacity, high fidelity, reversibility and multiple data insertions are the key requirements of data hiding in medical images. We show how histograms of image tiles of medical images can be exploited to achieve these requirements. Compared with data hiding method in the whole image, our scheme can result in 30%-200% capacity improvement with still better image quality, depending on the medical image content.
Reversible watermarking method based on adaptive thresholding and companding technique Embedding and extraction of secret information as well as the restoration of the original un-watermarked image are highly desirable in sensitive applications such as military, medical, and law enforcement imaging. This paper presents a novel reversible watermarking approach for digital images using integer-to-integer wavelet transform, companding technique, and adaptive thresholding, enabling it to embed and recover the secret information as well as restore the image to its pristine state. The proposed method takes advantage of block-based watermarking and iterative optimization of threshold for companding which avoids histogram pre-and postprocessing. Consequently, it reduces the associated overhead usually required in most of the reversible watermarking techniques. As a result, it generates less distortion between the watermarked and the original image. Experimental results on regular as well as medical images show that the proposed method outperforms the existing reversible watermarking approaches reported in the literature.
Hiding Biometric Data With the wide spread utilization of biometric identification systems, establishing the authenticity of biometric data itself has emerged as an important research issue. The fact that biometric data is not replaceable and is not secret, combined with the existence of several types of attacks that are possible in a biometric system, make the issue of security/integrity of biometric data extremely critical. We introduce two applications of an amplitude modulation-based watermarking method, in which we hide a user's biometric data in a variety of images. This method has the ability to increase the security of both the hidden biometric data (e.g., eigen-face coefficients) and host images (e.g., fingerprints). Image adaptive data embedding methods used in our scheme lead to low visibility of the embedded signal. Feature analysis of host images guarantees high verification accuracy on watermarked (e.g., fingerprint) images.
A Reversible Image Authentication Method Without Memorization Of Hiding Parameters This paper proposes a novel reversible image authentication method that does not memorize the parameters for extracting embedded authentication data from an image. Although the proposed method distorts an image to hide authentication data in it, it recovers the original image from the distorted image unless the image has been tampered with, i.e., it is reversible. By comparing extracted data and data generated from the restored image, this method detects image tampering and further localizes tampered regions by the unit of a block. The proposed method extracts hidden data without memorization of the parameters used in its algorithm. This feature makes the proposed method practical, whereas any method memorizing parameters faces severe problems with the storage and management of parameters as their number increases, whether from providing accurate tamper localization or from application to a huge image collection, e.g., video sequences. Simulation results show the effectiveness of the proposed method.
Medical image integrity control combining digital signature and lossless watermarking Enforcing protection of medical content becomes a major issue of computer security. Since medical contents are more and more widely distributed, it is necessary to develop security mechanism to guarantee their confidentiality, integrity and traceability in an autonomous way. In this context, watermarking has been recently proposed as a complementary mechanism for medical data protection. In this paper, we focus on the verification of medical image integrity through the combination of digital signatures with such a technology, and especially with Reversible Watermarking (RW). RW schemes have been proposed for images of sensitive content for which any modification may affect their interpretation. Whence, we compare several recent RW schemes and discuss their potential use in the framework of an integrity control process in application to different sets of medical images issued from three distinct modalities: Magnetic Resonance Images, Positron Emission Tomography and Ultrasound Imaging. Experimental results with respect to two aspects including data hiding capacity and image quality preservation, show different limitations which depend on the watermark approach but also on image modality specificities.
A region-based lossless watermarking scheme for enhancing security of medical data. This paper presents a lossless watermarking scheme in the sense that the original image can be exactly recovered from the watermarked one, with the purpose of verifying the integrity and authenticity of medical images. In addition, the scheme has the capability of not introducing any embedding-induced distortion in the region of interest (ROI) of a medical image. Difference expansion of adjacent pixel values is employed to embed several bits. A region of embedding, which is represented by a polygon, is chosen intentionally to prevent introducing embedding distortion in the ROI. Only the vertex information of a polygon is transmitted to the decoder for reconstructing the embedding region, which improves the embedding capacity considerably. The digital signature of the whole image is embedded for verifying the integrity of the image. An identifier presented in electronic patient record (EPR) is embedded for verifying the authenticity by simultaneously processing the watermarked image and the EPR. Combining with fingerprint system, patient's fingerprint information is embedded into several image slices and then extracted for verifying the authenticity.
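Difference expansion of adjacent pixel values is the embedding primitive used above; as a refresher, the classic expansion step on a single pixel pair looks roughly like the sketch below (the expandability test and the overflow/underflow checks that a real scheme must add are deliberately left out).

    def de_embed(x, y, bit):
        """Embed one bit into the pixel pair (x, y) by expanding their difference."""
        l = (x + y) // 2          # integer average, kept invariant by the transform
        h = x - y                 # difference
        h2 = 2 * h + bit          # expanded difference now carries the bit in its LSB
        return l + (h2 + 1) // 2, l - h2 // 2

    def de_extract(x2, y2):
        """Recover the bit and the original pixel pair."""
        l = (x2 + y2) // 2
        h2 = x2 - y2
        bit = h2 & 1
        h = h2 >> 1               # undo the expansion
        return bit, l + (h + 1) // 2, l - h // 2

    x2, y2 = de_embed(100, 97, 1)
    assert de_extract(x2, y2) == (1, 100, 97)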
LSB matching revisited This letter proposes a modification to the least-significant-bit (LSB) matching, a steganographic method for embedding message bits into a still image. In the LSB matching, the choice of whether to add or subtract one from the cover image pixel is random. The new method uses the choice to set a binary function of two cover pixels to the desired value. The embedding is performed using a pair of pixels as a unit, where the LSB of the first pixel carries one bit of information, and a function of the two pixel values carries another bit of information. Therefore, the modified method allows embedding the same payload as LSB matching but with fewer changes to the cover image. The experimental results of the proposed method show better performance than traditional LSB matching in terms of distortion and resistance against existing steganalysis.
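A compact sketch of the pairwise rule summarized above, with the binary function taken as f(a, b) = LSB(floor(a/2) + b); this is my reading of the method, and boundary cases (pixel values 0 and 255) are ignored.

    import random

    def f(a, b):
        return ((a // 2) + b) & 1                  # binary function of the two cover pixels

    def embed_pair(p1, p2, m1, m2):
        """Embed two message bits into a pixel pair, changing at most one pixel by +/-1."""
        if (p1 & 1) == m1:
            if f(p1, p2) != m2:
                p2 += random.choice((-1, 1))       # only the second pixel moves; f flips either way
        else:
            p1 = p1 - 1 if f(p1 - 1, p2) == m2 else p1 + 1   # moving p1 fixes both bits at once
        return p1, p2

    def extract_pair(p1, p2):
        return p1 & 1, f(p1, p2)

    p1, p2 = embed_pair(154, 37, 1, 0)
    assert extract_pair(p1, p2) == (1, 0)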
Robust watermarking of cartographic images We present a method (Text-Based Geometric Normalization--TBGN) which, by exploiting the particular content of cartographic images, namely text content, permits to cope with global geometric transformations. First, text is extracted from the to-be-marked map, then text orientation and size are exploited to normalize the image geometry prior to watermark insertion. Watermarking is performed by means of any of the existing algorithms ensuring good robustness against image processing tools. At the decoder side, text is extracted again from the map and used to normalize image geometry. Owing to the robustness of text features with respect to common image manipulations, and to the likely spreading of text all across the digital map, the proposed system exhibits an excellent robustness.
Reversible data hiding using integer wavelet transform and companding technique This paper presents a novel reversible data-embedding method for digital images using integer wavelet transform and companding technique. This scheme takes advantage of the Laplacian-like distribution of integer wavelet coefficients in high frequency subbands, which facilitates the selection of compression and expansion functions and keeps the distortion small between the marked image and the original one. Experimental results show that this scheme outperforms the state-of-the-art reversible data hiding schemes.
Reversible Watermarking Based on Statistical Quantity Histogram The histogram shifting based reversible watermarking techniques have attracted increasing interests due to their low computational complexity, high visual quality and considerable capacity. However, those methods suffer from unstable performance because they fail to consider the diversity of grayscale histograms for various images. For this purpose, we develop a novel histogram shifting based method by introducing a block statistical quantity (BSQ). The similarity of BSQ distributions for different images reduces the diversity of grayscale histograms and guarantees the stable performance of the proposed method. We also adopt different embedding schemes to prevent the issues of overflow and underflow. Moreover, by selecting the block size, the capacity of the proposed watermarking scheme becomes adjustable. The experimental results of performance comparisons with other existing methods are provided to demonstrate the superiority of the proposed method.
High capacity reversible data hiding scheme based upon discrete cosine transformation In this paper, we propose a reversible data hiding scheme based on the varieties of coefficients of the discrete cosine transformation of an image. Cover images are decomposed into several different frequencies, and the high-frequency parts are embedded with secret data. We use integer mapping to implement our 2-dimensional discrete cosine transformation. Thus, the image recovered from the modified coefficients can be transformed back to the correct data-hidden coefficients. Since the distribution of 2-dimensional DCT coefficients looks close to a Gaussian distribution centralized at zero, it is a natural candidate for embedding secret data using the histogram shifting approach. Thus, our approach shifts the positive coefficients around zero to the right and the negative coefficients around zero to the left in order to leave a space to hide the secret data. The experimental comparisons show that, compared to Chang et al.'s and Lin et al.'s methods, the embedding capacity and stego-image quality of the proposed method are greatly improved.
New visual secret sharing schemes using probabilistic method Visual secret sharing (VSS) scheme is a perfectly secure method that protects a secret image by breaking it into shadow images (called shadows). Unlike other threshold schemes, a VSS scheme can be easily decoded by the human visual system without the knowledge of cryptography and cryptographic computations. However, the size of shadow images (i.e., the number of columns of the black and white matrices in the VSS scheme [Naor, Shamir, Visual cryptography, Advances in Cryptology-EUROCRYPT'94, Lecture Notes in Computer Science, vol. 950, Springer-Verlag, 1995, p. 1]) will be expanded. Most recent papers about VSS schemes are dedicated to getting a higher contrast or a smaller shadow size. In this paper, we use the frequency of white pixels to show the contrast of the recovered image. Our scheme is non-expansible and can be easily implemented on the basis of a conventional VSS scheme. The term non-expansible means that the sizes of the original image and shadows are the same.
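For the simplest (2, 2) case, a probabilistic, non-expansible scheme in the spirit described above can be sketched as follows (a toy rendering, not the authors' exact construction): a white secret pixel yields identical random share pixels and a black one yields complementary pixels, so stacking (modelled as OR over black values) keeps every black pixel black while a recovered white pixel is white with probability 1/2, and each shadow on its own is uniformly random.

    import random

    def share_pixel(secret_black):
        """Return one pixel (0 = white, 1 = black) for each of the two shadows."""
        r = random.randint(0, 1)
        return (r, r) if not secret_black else (r, 1 - r)

    def stack(s1, s2):
        return s1 | s2                      # OR models overlaying the two transparencies

    secret_row = [0, 1, 1, 0, 1, 0, 0, 1]   # 0 = white, 1 = black
    shadow1, shadow2 = zip(*(share_pixel(p) for p in secret_row))
    recovered = [stack(a, b) for a, b in zip(shadow1, shadow2)]
    # every black secret pixel stays black; white ones come out white about half the time
    assert all(r == 1 for r, p in zip(recovered, secret_row) if p == 1)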
Declarative Program Transformation: A Deforestation Case-Study. Software engineering has to reconcile modularity with efficiency. One way to grapple with this dilemma is to automatically transform a modular-specified program into an efficient-implementable one. This is the aim of deforestation transformations which get rid of intermediate data structure constructions that occur when two functions are composed. Beyond classical compile time optimization, these transformations are undeniable tools for generic programming and software component...
Robust reversible data hiding in encrypted image: Using fractal coding approach This paper presents a robust version of the reversible data hiding method in encrypted images previously proposed by Zhang [8] and Wien [9]. In the original method, the image is partitioned into non-overlapping blocks, each divided into two neighbor sets. In these sets, one bit is embedded by flipping three LSBs of predefined pixels. The data extraction can be done by examining the block smoothness of neighboring blocks. However, this method is very sensitive to noise and attacks, because noise and malicious activity usually alter adjacent blocks in the same manner, so neighboring blocks no longer have the same smoothness. In this paper we propose a new method to select non-neighbor sets to decrease the effects of noise and therefore reduce the bit error rate in the encrypted image. These non-neighbor sets are selected by finding the most similar pairs as performed in the encoding phase of fractal image compression. Experimental results show that the proposed method outperforms previous approaches in the case of Gaussian noise contamination and results in a lower bit error rate.
1.072243
0.033345
0.022438
0.016589
0.009524
0.002778
0.000627
0.000114
0.000049
0.000025
0.000008
0
0
0
Data Hiding: Current Status and Key Issues
Video data hiding for managing privacy information in surveillance systems From copyright protection to error concealment, video data hiding has found usage in a great number of applications. In this work, we introduce the detailed framework of using data hiding for privacy information preservation in a video surveillance environment. To protect the privacy of individuals in a surveillance video, the images of selected individuals need to be erased, blurred, or re-rendered. Such video modifications, however, destroy the authenticity of the surveillance video. We propose a new rate-distortion-based compression-domain video data hiding algorithm for the purpose of storing that privacy information. Using this algorithm, we can safeguard the original video as we can reverse the modification process if proper authorization can be established. The proposed data hiding algorithm embeds the privacy information in optimal locations that minimize the perceptual distortion and bandwidth expansion due to the embedding of privacy data in the compressed domain. Both reversible and irreversible embedding techniques are considered within the proposed framework and extensive experiments are performed to demonstrate the effectiveness of the techniques.
Enhanced watermarking scheme based on texture analysis This paper proposes a new approach in digital watermarking applications that can be adapted for embedding either fragile or robust watermarking in a digital image in the spatial domain or in the frequency domain. The main objective of the proposed scheme is to explore the amount of texture or edge pixels belonging to the host image in order to insert more information while preserving the robustness of the scheme without degrading the visual quality of the watermarked image. The host image is divided into blocks and each block can be subdivided into sub-blocks according to its texture analysis. The number of sub-blocks that each block will be divided into depends on the amount of texture or edge pixels presented by it. The numerical results show that the proposed scheme is better in JPEG compression attacks, and far exceeds others in watermark size capacity.
A Novel DWT Based Blind Watermarking for Image Authentication.
Progressive sharing for a secret image Based on the wavelet transform, a new progressive sharing scheme is proposed to share a secret image into several shadow images using SPIHT encoding processes and Shamir's threshold scheme. Quality refinement of the recovered image is achieved by the data consumed from the threshold number (r) of shadow images and each single shadow image reveals no information about the secret image. The size of each shadow image is smaller than 1/r of the secret image and any number of shadow images that is less than r reveals no information about the secret image. The proposed approach is secure for image sharing and provides excellent peak signal-to-noise ratio (PSNR) versus rate performance. Experimental results have demonstrated the promising performance of this method in progressive sharing.
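The sharing step relies on Shamir's (r, n) threshold scheme; a compact reference sketch over the prime field GF(257), sharing one byte at a time, is shown below (the SPIHT coding and the progressive-refinement logic of the paper are not modelled).

    import random

    P = 257                                   # prime field large enough for byte values

    def make_shares(secret_byte, r, n):
        """Shamir (r, n): hide the byte as the constant term of a random degree r-1 polynomial."""
        coeffs = [secret_byte] + [random.randrange(P) for _ in range(r - 1)]
        def poly(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, poly(x)) for x in range(1, n + 1)]

    def recover(shares):
        """Lagrange interpolation at x = 0 from any r shares."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % P
                    den = (den * (xi - xj)) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P   # Fermat inverse of den
        return secret

    shares = make_shares(173, r=3, n=5)
    assert recover(random.sample(shares, 3)) == 173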
A novel data hiding scheme for color images using a BSP tree In this paper, we propose a novel data hiding technique for color images using a BSP (Binary Space Partitioning) tree. First, we treat the RGB values at each pixel as a three-dimensional (3D) virtual point in the XYZ coordinates and a bounding volume is employed to enclose them. Using predefined termination criteria, we construct a BSP tree by recursively decomposing the bounding volume into voxels containing one or several 3D virtual points. The voxels are then further categorized into eight subspaces, each of which is numbered and represented as three-digit binary characters. In the embedding process, we first traverse the BSP tree, locating a leaf voxel; then we embed every three bits of the secret message into the points inside the leaf voxel. This is realized by translating a point's current position to the corresponding numbered subspace. Finally, we transform the data-embedded 3D points to the stego color image. Our technique is a blind extraction scheme, where embedded messages can be extracted without the aid of the original cover image. It achieves high data capacity, equivalent to at least three times the number of pixels in the cover image. The stego image causes insignificant visual distortion under this high data capacity embedding scheme. In addition, we can take advantage of the properties of tree data structure to improve the security of the embedding process, making it difficult to extract the secret message without the secret key provided. Finally, when we adaptively modify the thresholds used to construct the BSP tree, our technique can be robust against attacks including image cropping, pixel value perturbation, and pixel reordering. But, the scheme is not robust against image compression, blurring, scaling, sharpening, and rotation attacks.
Secret Image Sharing Using Grayscale Payload Decomposition and Irreversible Image Steganography. To provide an added security level, most of the existing reversible as well as irreversible image steganography schemes emphasize encrypting the secret image (payload) before embedding it into the cover image. The complexity of encryption for a large payload, where the embedding algorithm itself is complex, may adversely affect the steganographic system. Schemes that can induce the same level of distortion as any standard encryption technique with lower computational complexity can improve the performance of stego systems. In this paper, we propose a secure secret image sharing scheme which bears minimal computational complexity. The proposed scheme, as a replacement for encryption, diversifies the payload into different matrices which are embedded into the carrier image (cover image) using a bitwise XOR operation. The payload is a grayscale image which is divided into a frequency matrix, an error matrix, and a sign matrix. The frequency matrix is scaled down using a mapping algorithm to produce a Down Scaled Frequency (DSF) matrix. The DSF matrix, error matrix, and sign matrix are then embedded in different cover images using a bitwise XOR operation between the bit planes of the matrices and the respective cover images. Analysis of the proposed scheme shows that it effectively camouflages the payload with minimum computation time.
A steganographic method based upon JPEG and particle swarm optimization algorithm In this paper, a novel steganographic method, based on JPEG and Particle Swarm Optimization algorithm (PSO), is proposed. In order to improve the quality of stego-images, an optimal substitution matrix for transforming the secret messages is first derived by means of the PSO algorithm. The standard JPEG quantization table is also modified to contain more secret messages. The transformed messages are then hidden in the DC-to-middle frequency components of the quantized DCT coefficients of the cover-image. Finally, a JPEG file with secret messages is generated through JPEG entropy coding. We compare our algorithm with Chang et al.'s JPEG-based steganographic algorithm. The experimental results show that our proposed method has larger message capacity and better image quality than Chang et al.'s. In addition, our method also has a high security level.
Multimedia watermarking techniques Multimedia watermarking technology has evolved very quickly during the last few years. A digital watermark is information that is imperceptibly and robustly embedded in the host data such that it cannot be removed. A watermark typically contains information about the origin, status, or recipient of the host data. In this tutorial paper, the requirements and applications for watermarking are reviewed. Applications include copyright protection, data monitoring, and data tracking. The basic concepts of watermarking systems are outlined and illustrated with proposed watermarking methods for images, video, audio, text documents, and other media. Robustness and security aspects are discussed in detail. Finally, a few remarks are made about the state of the art and possible future developments in watermarking technology
Data embedding in random domain A universal data embedding method based on histogram mapping, called DeRand (Data embedding in Random domain), is proposed. DeRand theoretically defines redundancy in any digital signal by applying the universal parser such that high-entropy random signals can certainly be utilized for data embedding. First, DeRand recursively parses a random signal into a set of tuples, each of certain length, until there exist some tuples of zero occurrence in the histogram. Then, tuples that occur in the histogram are associated with those of zero occurrence. Next, a tuple (of non-zero occurrence) is mapped to its corresponding associated tuple to embed "1", while the tuple is left unmodified to embed "0". DeRand is universal, reversible, applicable to any random signal and scalable in terms of embedding capacity and signal quality. Experimental results show that DeRand achieves an embedding capacity of up to 4909 bits in a random signal of size 256 Kbytes. In addition, the quality of the processed signal ranges from 0.0075 to 395.67 in terms of MSE. Highlights: Address the general problem of data embedding methods in handling random signals. Survey potential applications of data embedding in the random domain. Theoretically prove that redundancy can certainly be located in any random signal. Propose a data embedding method that utilizes the redundancy as defined in the previous point. Achieve reversibility, scalable embedding capacity and scalable quality degradation.
Information Hiding Based on Block Match Coding for Vector Quantization-Compressed Images In this paper, a novel highly efficient lossless data hiding scheme is proposed to achieve the goal of hiding secret data into vector quantization (VQ)-compressed images that can be losslessly reconstructed after the secret data is extracted in the decoder. For VQ-compressed images, the index-modifying and the side-match VQ (SMVQ) techniques can be applied to encode indices and hide secret data. In general, data hiding accompanied by the SMVQ technique can yield a higher embedding capacity and a lower bit rate, yet more time consuming. In contrast, data hiding accompanied by the index-modifying technique can yield a lower embedding capacity and a higher bit rate, yet less time consuming. To possess the advantages of the two techniques while removing the shortcomings, the neighboring processed indices are employed to speed up the processes of generating state codebooks required for encoding and hiding. To evaluate the effectiveness of this approach, various test images are employed in the experiments. As documented in the experimental results, it is shown that the performance of the proposed scheme is superior to former schemes in terms of compression ratio, embedding rate, processing efficiency, and embedding capacity.
Novel image encryption technique and its application in progressive transmission
Parallelizing CAD: a timely research agenda for EDA The relative decline of single-threaded processor performance, coupled with the ongoing shift towards on chip parallelism requires that CAD applications run efficiently on parallel microprocessors. We believe that an ad hoc approach to parallelizing CAD applications will not lead to satisfactory results: neither in terms of return on engineering investment nor in terms of the computational efficiency of end applications. Instead, we propose that a key area of CAD research is to identify the design patterns underlying CAD applications and then build CAD application frameworks that aid efficient parallel software implementations of these design patterns. Our initial results indicate that parallel patterns exist in a broad range of CAD problems. We believe that frameworks for these patterns will enable CAD to successfully capitalize on increased processor performance through parallelism.
Using additive interpolation-error expansion for reversible digital watermarking in audio signals Data hiding is a technique that embeds an imperceptible and statistically undetectable signal into digital content. Reversible watermarking is able to restore the original signal without distortion after data extraction; this property is useful in several applications such as military communications, medical diagnostics and legal evidence. In this paper, a reversible audio watermarking scheme using an interpolation technique is proposed. Experimental results show the high capacity and auditory transparency of the proposed scheme. Moreover, a multi-embedding approach is explored. Accordingly, a new subject for research is identified.
1.019216
0.027588
0.023111
0.013669
0.006874
0.004593
0.002032
0.000995
0.00029
0.000081
0.000025
0.000001
0
0
The narrowing-driven approach to functional logic program specialization Partial evaluation is a semantics-based program optimization technique which has been investigated within different programming paradigms and applied to a wide variety of languages. Recently, a partial evaluation framework for functional logic programs has been proposed. In this framework, narrowing-the standard operational semantics of integrated languages-is used to drive the partial evaluation process. This paper surveys the essentials of narrowing-driven partial evaluation.
The Supercompiler SCP4: General Structure Supercompilation is a program transformation technique introduced in the 1970s by V. Turchin [13,14,16]. His ideas were studied by a number of authors for a long time. We constructed an experimental supercompiler for the functional language Refal-5 [15]. The Scp4 project was discussed with V. Turchin. He initiated and supported our work. Scp4 has been implemented once again using Refal-5. Sources of Scp4 and the whole version of this paper are available for immediate download [8,9,17]. A user manual on Scp4 and reports on several interesting experiments can be found in [4,5,6]. Scp4 is the first experimental supercompiler for the real functional language Refal-5. The principal new tool in Scp4 is an online analysis of global properties of folded components of the meta-tree MTr of all potential computations. Let a program P and a parameterized input of P be given. Then such a pair defines a partial mapping. A supercompiler is a transformer of such pairs. The transformer must preserve the map values on the map domain. Scp4 unfolds a potentially infinite tree of all possible computations. In the process, it reduces the redundancy that could be present in the original program. It folds the tree into a finite graph of states and transitions between possible configurations of the computing system. It analyses global properties of the graph and specializes this graph w.r.t. these properties. The resulting definition is constructed solely based on the meta-interpretation of the source program rather than by a step-by-step transformation of the program. The size of the Scp4 system is about 19500 lines of commented source code (800 KB).
Positive supercompilation for a higher order call-by-value language Previous deforestation and supercompilation algorithms may introduce accidental termination when applied to call-by-value programs. This hides looping bugs from the programmer, and changes the behavior of a program depending on whether it is optimized or not. We present a supercompilation algorithm for a higher-order call-by-value language and we prove that the algorithm both terminates and preserves termination properties. This algorithm utilizes strictness information for deciding whether to substitute or not and compares favorably with previous call-by-name transformations.
Cost-augmented narrowing-driven specialization The aim of many program transformers is to improve efficiency while preserving program meaning. Correctness issues have been dealt with extensively. However, very little attention has been paid to formally establish the improvements achieved by these transformers. In this work, we introduce the scheme of a narrowing-driven partial evaluator enhanced with abstract costs. They are "abstract" in the sense that they measure the number of basic operations performed during a computation rather than actual execution times. Thus, we have available a setting in which one can discuss the effects of the program transformer in a precise framework and, moreover, to quantify these effects. Our scheme may serve as a basis to develop speedup analyses and cost-guided transformers. An implementation of the cost-augmented specializer has been undertaken, which demonstrates the practicality of our approach.
Deforestation for Higher-Order Functions Deforestation is an automatic transformation scheme for functional programs which attempts to remove unnecessary intermediate data structures. The algorithm presented here is a variant of the original, adapted for a higher order language. A detailed description of how this may be implemented in an optimising compiler is also given. 1 Introduction Program transformation is one of the most powerful tools available to the functional compiler writer. Because of the absence of assignment and...
A Naïve Time Analysis and its Theory of Cost Equivalence Techniques for reasoning about extensional properties of functional programs are well understood but methods for analysing the underlying intensional or operational properties have been much neglected. This paper begins with the development of a simple but useful calculus for time analysis of non-strict functional programs with lazy lists. One limitation of this basic calculus is that the ordinary equational reasoning on functional programs is not valid. In order to buy back some of these equational properties we develop a non-standard operational equivalence relation called cost equivalence, by considering the number of computation steps as an 'observable' component of the evaluation process. We define this relation by analogy with Park's definition of bisimulation in ccs. This formulation allows us to show that cost equivalence is a contextual congruence (and thus is substitutive with respect to the basic calculus) and provides useful proof techniques for establishing cost-equivalence laws. It is shown that basic evaluation time can be derived by demonstrating a certain form of cost equivalence, and we give an axiomatization of cost equivalence which is complete with respect to this application. This shows that cost equivalence subsumes the basic calculus. Finally we show how a new operational interpretation of evaluation demands can be used to provide a smooth interface between this time analysis and more compositional approaches, retaining the advantages of both.
Partial Evaluation of Computation Process, Revisited
A System for Proving Equivalences of Recursive Programs We present a system for proving equivalences of recursive programs based on program transformations, namely the fold/unfold method and a generalisation of this method.
An Implementation Methodology for Semantic Data Base Models
The Algebra of Functional Programs: Function Level Reasoning, Linear Equations, and Extended Definitions
Integrated coverage and connectivity configuration in wireless sensor networks An effective approach for energy conservation in wireless sensor networks is scheduling sleep intervals for extraneous nodes, while the remaining nodes stay active to provide continuous service. For the sensor network to operate successfully, the active nodes must maintain both sensing coverage and network connectivity. Furthermore, the network must be able to configure itself to any feasible degrees of coverage and connectivity in order to support different applications and environments with diverse requirements. This paper presents the design and analysis of novel protocols that can dynamically configure a network to achieve guaranteed degrees of coverage and connectivity. This work differs from existing connectivity or coverage maintenance protocols in several key ways: 1) We present a Coverage Configuration Protocol (CCP) that can provide different degrees of coverage requested by applications. This flexibility allows the network to self-configure for a wide range of applications and (possibly dynamic) environments. 2) We provide a geometric analysis of the relationship between coverage and connectivity. This analysis yields key insights for treating coverage and connectivity in a unified framework: this is in sharp contrast to several existing approaches that address the two problems in isolation. 3) Finally, we integrate CCP with SPAN to provide both coverage and connectivity guarantees. We demonstrate the capability of our protocols to provide guaranteed coverage and connectivity configurations, through both geometric analysis and extensive simulations.
Secure reversible visible image watermarking with authentication This paper proposes a secure reversible visible watermarking approach. The proposed pixel mapping function superposes a binary watermark image on a host image to create an intermediate visible watermarked image. Meanwhile, an almost inverse function generates the recovery data for restoring the original pixels. To prevent unauthorized users from approximating the original pixels in the watermarked region, this method adds an integer sequence in the intermediate watermarked image. The sequence is composed of integers generated by two random variables having normal distributions with zero means and distinct variances. The variances facilitate a trade-off between the watermark transparency and the noise generated by unauthorized users. The proposed method also uses Lagrange multipliers to find the optimized variances for the trade-off. Finally, this method uses reversible data embedding to embed the recovery data and hash value for reversibility and authentication, respectively. Experimental results show the watermark visibility for test images along with the watermark transparency for different variances. Using the optimized variances, the watermarked image is at the balance between the watermark transparency and the unauthorized-user-generating noise.
Property-preserving program refinement During the development and maintenance process, a program changes form, often being refined as specifications and implementation decisions are realized. A correctness proof built in parallel with an original program can be extended to a proof of the refined program by showing equivalences between the original and refined program. This paper illustrates two examples of property-preserving refinement, partial evaluation and generalization, and explores the correctness-preserving equivalences underpinning those refinement techniques. We plan to explore ways in which the informal reasoning behind these and similar program refinement tasks may be captured to extend the proof for an original program into a proof of the refined program.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of the embedded cover image to ensure the security of the secret data. Multilayer embedding with a pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting, while the peak point remains unchanged. At the extraction end, we no longer need key information about the peak point; we can directly find the peak point from the histogram to extract the secret data. We also exploit the localization to make the histogram of the embedded cover image almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
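A minimal NumPy sketch of the embedding step described above: the peak bin is left untouched, its two neighbouring bins carry the payload, and the rest of the histogram is shifted outwards by one level to make room. Boundary cases (a peak near 0 or 255) and the overflow/underflow handling a full scheme would need are deliberately ignored here.

```python
import numpy as np

def embed_local_hs(img, bits):
    """Histogram shifting that keeps the peak bin intact and embeds into its two
    neighbouring bins (a sketch of the idea, ignoring overflow handling)."""
    img = img.astype(np.int32)
    hist = np.bincount(img.ravel(), minlength=256)
    p = int(hist.argmax())                     # peak value, left unchanged
    out = img.copy()
    # make room: shift everything below p-1 down and above p+1 up by one level
    out[img < p - 1] -= 1
    out[img > p + 1] += 1
    flat, orig, k = out.ravel(), img.ravel(), 0
    for i in range(flat.size):
        if k >= len(bits):
            break
        if orig[i] == p - 1:                   # left neighbour of the peak carries a bit
            flat[i] = p - 1 if bits[k] == 0 else p - 2
            k += 1
        elif orig[i] == p + 1:                 # right neighbour of the peak carries a bit
            flat[i] = p + 1 if bits[k] == 0 else p + 2
            k += 1
    return flat.reshape(img.shape), k
```

Because the peak bin is never modified, the extractor can re-identify it directly from the marked histogram, which is exactly the point the highlights above emphasise.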
1.052292
0.083328
0.083328
0.035353
0.021866
0.010526
0.003346
0.000371
0.000062
0
0
0
0
0
High capacity reversible data hiding scheme based upon discrete cosine transformation In this paper, we propose a reversible data hiding scheme based on the varieties of coefficients of discrete cosine transformation of an image. Cover images are decomposed into several different frequencies, and the high-frequency parts are embedded with secret data. We use integer mapping to implement our 2-dimensional discrete cosine transformation. Thus, the image recovered from the modified coefficients can be transformed back to the correct data-hidden coefficients. Since the distribution of 2-dimensional DCT coefficients is close to a Gaussian distribution centered at zero, it is a natural candidate for embedding secret data using the histogram shifting approach. Thus, our approach shifts the positive coefficients around zero to the right and the negative coefficients around zero to the left in order to leave a space to hide the secret data. The experimental comparisons show that, compared to Chang et al.'s and Lin et al.'s methods, the embedding capacity and stego-image quality of the proposed method are greatly improved.
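The shifting idea in the abstract above can be illustrated on any block of integer transform coefficients. The sketch below is a simplified zero-centred variant (the paper's integer-mapped 2-D DCT and its exact bin usage are not reproduced): non-zero coefficients are pushed away from zero, and the freed bins next to zero receive the payload carried by the zero-valued coefficients.

```python
import numpy as np

def embed_in_coeffs(coeffs, bits):
    """Zero-centred histogram shifting on integer transform coefficients:
    non-zero values are shifted away from zero to empty the +/-1 bins, then
    zero-valued coefficients carry one bit each (0 -> stay 0, 1 -> +/-1)."""
    c = coeffs.astype(np.int64)
    c[c > 0] += 1                      # shift positives to the right
    c[c < 0] -= 1                      # shift negatives to the left
    flat, k, sign = c.ravel(), 0, 1
    for i in range(flat.size):
        if k >= len(bits):
            break
        if flat[i] == 0:               # zero coefficients are the embedding sites
            if bits[k] == 1:
                flat[i] = sign         # alternate +1 / -1 to keep the histogram balanced
                sign = -sign
            k += 1
    return flat.reshape(coeffs.shape), k

def extract_from_coeffs(marked, n_bits):
    """Inverse of embed_in_coeffs: recover the payload and the original coefficients."""
    c = marked.astype(np.int64)
    flat, bits = c.ravel(), []
    for i in range(flat.size):
        if len(bits) < n_bits and flat[i] in (0, 1, -1):
            bits.append(0 if flat[i] == 0 else 1)
            flat[i] = 0                # embedded sites all come from zero coefficients
    flat[flat > 1] -= 1                # undo the outward shifts
    flat[flat < -1] += 1
    return bits, flat.reshape(marked.shape)
```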
A Lossless Data Hiding based on Discrete Haar Wavelet Transform In this paper, a lossless data-hiding scheme is presented based on quantized coefficients of the discrete wavelet transform (DWT) in the frequency domain to embed a secret message. Using the quantized DWT-based method, we embed secret data into the successive zero coefficients of the medium-high frequency components in each reconstructed block of a 3-level 2-D DWT of a cover image. The procedures of the proposed system mainly include embedding, extraction, and restoration. Experimental results show that the proposed method can achieve high embedding capacity, acceptable stego-image quality, and data reversibility.
Reversible Watermarking Scheme with Visibility Using Randomize Distance Digital watermarking is widely used to protect the copyright of digital media. The watermarking approach usually embeds a copyright mark into the target media to claim ownership. Any ownership dispute can be judged by detecting the existence of the copyright mark. In this paper, we propose a reversible visible watermarking method, which embeds a binary watermark image into gray-scale images to create a visible watermark. Rather than using complex calculations, the method simply changes pixel values to achieve the visible watermark. Besides, a reversible steganographic method is used to embed the watermark information into the watermarked images. The watermark information can be used to recover the original images.
A Reversible Data Hiding Scheme for Block Truncation Compressions Based on Histogram Modification A novel, reversible, data hiding method is presented in this paper. The proposed method focuses on hiding secret data in block truncation coding (BTC) compressed images. Conventionally, each block of an image compressed by BTC is encoded by two representative gray levels and a bitmap. The secret data can be embedded according to the order of each pair of gray levels. Moreover, we generated the histogram of an image by different types of each block's bitmap and slightly modified the values to embed data. The proposed method was compared with some existing reversible data hiding schemes, and the experimental results demonstrate that our method can embed more secret data while preserving high image quality after embedding.
Reversible data hiding scheme based on the Haar discrete wavelet transform and interleaving prediction method. Although many data hiding schemes have been proposed in the frequency domain, the tradeoff between hiding capacity and image quality is still an open problem. In this paper, we propose a novel reversible data hiding scheme based on the Haar discrete wavelet transform (DWT) and an interleaving-prediction method. First, a one-level Haar DWT is applied to the cover image, and four sub-bands (LL, LH, HL and HH) are obtained. Three of these sub-bands are chosen for embedding. After that, the wavelet coefficients of the chosen sub-bands are zig-zag scanned and two adjacent coefficients are used for prediction. The secret data is embedded in the prediction errors, which are the differences between the original and predicted values of the wavelet coefficients. The experimental results show that our scheme has good performance compared with other existing reversible data hiding schemes.
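For reference, a one-level 2-D Haar decomposition such as the one this scheme starts from can be written in a few lines of NumPy. This sketch assumes even image dimensions and a floating-point averaging/differencing convention; the integer-to-integer version and the band-naming convention used in the paper may differ.

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar transform: returns the approximation band and
    three detail bands obtained by averaging/differencing pairs of pixels
    (assumes even height and width; naming conventions for LH/HL vary)."""
    a = img.astype(np.float64)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2     # row low-pass
    hi = (a[:, 0::2] - a[:, 1::2]) / 2     # row high-pass
    LL = (lo[0::2, :] + lo[1::2, :]) / 2   # approximation
    LH = (lo[0::2, :] - lo[1::2, :]) / 2   # detail bands
    HL = (hi[0::2, :] + hi[1::2, :]) / 2
    HH = (hi[0::2, :] - hi[1::2, :]) / 2
    return LL, LH, HL, HH
```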
Image interpolation based high capacity reversible data hiding scheme. In this paper, we propose a new interpolation technique which considers all the neighboring pixels as well as their impact on the reference pixels to provide a better quality interpolated image, and a new data hiding scheme which embeds the secret data in the interpolated pixels by taking into account the human visual system so that the quality of the resultant image is maintained. The proposed interpolation technique is an improvement of the existing neighbor mean interpolation (NMI) technique in such a way that the interpolated image has more resemblance to the input image. The proposed interpolation technique has a low computational cost, like NMI, as it does not perform much computation during estimation, unlike B-Spline, Bilinear Interpolation, etc. The proposed data hiding scheme falls into the category of reversible data hiding schemes, as the input image can be reconstructed after extraction of the entire secret data at the receiver side. Thus, it reduces the communication cost. Furthermore, the proposed data hiding scheme identifies the smooth and complex regions of the interpolated (or cover) image by dividing the same into blocks. It then embeds more bits into the complex regions of the image so that data hiding capacity as well as the image quality can be enhanced. The experimental results show that the percentage increment in the PSNR value and capacity of the proposed scheme with respect to Chang et al.'s method is in the range of 0.26 to 30.60% and 0.87 to 73.82%, respectively. Moreover, the modified NMI yields higher PSNRs than other interpolating methods such as NMI, BI, and ENMI.
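A simplified neighbour-mean up-scaling in the spirit of the NMI baseline mentioned above (the paper's modified NMI averages a different set of neighbours and is not reproduced here): original pixels are kept on the even grid and each new pixel takes the mean of its surrounding originals. The data hiding step would then write payload bits into these interpolated positions.

```python
import numpy as np

def neighbor_mean_interpolate(img):
    """Up-scale a grayscale image to (2H-1) x (2W-1): originals stay at even
    coordinates, in-between pixels take the mean of their original neighbours."""
    img = img.astype(np.int32)
    h, w = img.shape
    out = np.zeros((2 * h - 1, 2 * w - 1), dtype=np.int32)
    out[::2, ::2] = img
    out[::2, 1::2] = (img[:, :-1] + img[:, 1:]) // 2          # horizontal neighbours
    out[1::2, ::2] = (img[:-1, :] + img[1:, :]) // 2          # vertical neighbours
    out[1::2, 1::2] = (img[:-1, :-1] + img[:-1, 1:] +
                       img[1:, :-1] + img[1:, 1:]) // 4       # four diagonal originals
    return out
```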
A Gradient Based Predictive Coding for Lossless Image Compression Natural, continuous-tone images have a very important property of high correlation of adjacent pixels. Images which we wish to compress are usually non-stationary and can be reasonably modeled as smooth and textured areas separated by edges. This property has been successfully exploited in LOCO-I and CALIC by applying gradient based predictive coding as a major de-correlation tool. However, they only examine the horizontal and vertical gradients, and assume the local edge can only occur in these two directions. Their over-simplified assumptions hurt the robustness of the prediction in more complex areas. In this paper, we propose an accurate gradient selective prediction (AGSP) algorithm which is designed to perform robustly around any type of image texture. Our method measures local texture information by comparison and selection of normalized scalar representations of the gradients in four directions. An adaptive predictor is formed based on the local gradient information and immediate causal pixels. Local texture properties are also exploited in the context modeling of the prediction error. The results we obtained on a test set of several standard images are encouraging. On average, our method achieves a compression ratio significantly better than CALIC without noticeably increasing computational complexity.
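The sketch below illustrates the general gradient-selective idea (estimate directional activity from causal neighbours and predict along the flattest direction). It is a simplified stand-in, not the actual AGSP predictor; the particular gradient measures and predictor choices are illustrative.

```python
import numpy as np

def gradient_selective_predict(img):
    """For each pixel (raster order), estimate local gradients along four
    directions from causal neighbours and predict along the flattest one."""
    img = img.astype(np.int32)
    h, w = img.shape
    pred = img.copy()                      # border pixels are left unpredicted
    for i in range(1, h):
        for j in range(1, w - 1):
            W, N, NW, NE = img[i, j-1], img[i-1, j], img[i-1, j-1], img[i-1, j+1]
            candidates = {
                abs(N - NW): W,            # horizontally smooth -> predict from W
                abs(W - NW): N,            # vertically smooth   -> predict from N
                abs(W - N):  NW,           # 45-degree direction -> predict from NW
                abs(N - NE): (W + N) // 2, # 135-degree direction -> blend W and N
            }
            pred[i, j] = candidates[min(candidates)]
    errors = img - pred                    # prediction errors fed to the entropy coder
    return pred, errors
```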
Enhanced watermarking scheme based on texture analysis This paper proposes a new approach in digital watermarking applications that can be adapted for embedding either a fragile or a robust watermark in a digital image in the spatial domain or in the frequency domain. The main objective of the proposed scheme is to explore the amount of texture or edge pixels belonging to the host image in order to insert more information while preserving the robustness of the scheme without degrading the visual quality of the watermarked image. The host image is divided into blocks and each block can be subdivided into sub-blocks according to its texture analysis. The number of sub-blocks that each block will be divided into depends on the amount of texture or edge pixels it contains. The numerical results show that the proposed scheme is more robust against JPEG compression attacks, and far exceeds others in watermark size capacity.
Matrix embedding for large payloads Matrix embedding is a previously introduced coding method that is used in steganography to improve the embedding efficiency (increase the number of bits embedded per embedding change). Higher embedding efficiency translates into better steganographic security. This gain is more important for long messages than for shorter ones because longer messages are, in general, easier to detect. In this paper, we present two new approaches to matrix embedding for large payloads suitable for practical steganographic schemes-one based on a family of codes constructed from simplex codes and the second one based on random linear codes of small dimension. The embedding efficiency of the proposed methods is evaluated with respect to theoretically achievable bounds
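Matrix embedding is easiest to see with the classic binary Hamming-code construction, which embeds k message bits into n = 2^k - 1 cover bits by flipping at most one of them. The sketch below shows that baseline, not the simplex-code or random-linear-code constructions this paper introduces.

```python
import numpy as np

def hamming_matrix_embed(cover_bits, msg_bits):
    """Matrix embedding with a binary Hamming code: k message bits are carried by
    n = 2**k - 1 cover bits with at most one bit flipped (a standard example of
    the syndrome-coding idea, not the paper's new codes)."""
    k = len(msg_bits)
    n = 2 ** k - 1
    assert len(cover_bits) == n
    # parity-check matrix whose i-th column is the binary expansion of i+1
    H = np.array([[(i + 1) >> b & 1 for i in range(n)] for b in range(k)])
    x, m = np.array(cover_bits), np.array(msg_bits)
    syndrome = H.dot(x) % 2
    d = (syndrome + m) % 2                     # difference between current and wanted syndrome
    y = x.copy()
    if d.any():
        pos = sum(int(bit) << b for b, bit in enumerate(d)) - 1
        y[pos] ^= 1                            # flip exactly one cover bit
    assert np.array_equal(H.dot(y) % 2, m)     # receiver extracts m as the syndrome of y
    return y
```

For example, with k = 3 the receiver recovers the 3 message bits as the syndrome of the 7 received bits, so 3 bits are embedded at the cost of at most one change.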
Block-based reversible data embedding This paper proposes a block-based reversible data embedding method applied to grayscale cover images. The image is first divided into two areas: a data embedding area and an auxiliary information embedding area. The data embedding area is further partitioned into 3×3 blocks. According to two predefined thresholds, all the blocks are divided into three categories: smooth, normal or complex. The complex blocks do not embed any data. A smooth block will embed twice the data of a normal block. After predicting the edge direction in a smooth or normal block, Alattar's difference expansion of a generalized integer transform method is exploited to embed data into each vector in the block. Then, the location map and other auxiliary information are embedded in the auxiliary information embedding area using a novel least significant bit (LSB) substitution method. Our method not only improves the embedded image quality but also provides a higher payload capacity compared to Alattar's method. For some smoother images, the embedding capacity of the proposed method is higher than 1 bit/pixel in a single pass and also yields acceptable embedded image quality (PSNR = 30). To the best of our knowledge, the proposed method is the first to do this.
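A small sketch of the block-classification step described above, using the max-min range of each 3×3 tile as an illustrative activity measure; the abstract fixes only the two thresholds and the smooth/normal/complex labels that control how much data each block carries, not the measure itself.

```python
import numpy as np

def classify_blocks(img, t_smooth, t_normal, block=3):
    """Partition the image into block x block tiles and label each one smooth,
    normal or complex by comparing a local activity measure against two thresholds."""
    h, w = img.shape
    labels = {}
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            tile = img[i:i + block, j:j + block].astype(np.int32)
            activity = int(tile.max() - tile.min())   # illustrative measure
            if activity <= t_smooth:
                labels[(i, j)] = 'smooth'    # embeds twice the payload of a normal block
            elif activity <= t_normal:
                labels[(i, j)] = 'normal'
            else:
                labels[(i, j)] = 'complex'   # carries no payload
    return labels                            # labels then drive the per-block embedding
```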
Design of capacity-approaching irregular low-density parity-check codes We design low-density parity-check (LDPC) codes that perform at rates extremely close to the Shannon capacity. The codes are built from highly irregular bipartite graphs with carefully chosen degree patterns on both sides. Our theoretical analysis of the codes is based on the work of Richardson and Urbanke (see ibid., vol.47, no.2, p.599-618, 2000). Assuming that the underlying communication channel is symmetric, we prove that the probability densities at the message nodes of the graph possess a certain symmetry. Using this symmetry property we then show that, under the assumption of no cycles, the message densities always converge as the number of iterations tends to infinity. Furthermore, we prove a stability condition which implies an upper bound on the fraction of errors that a belief-propagation decoder can correct when applied to a code induced from a bipartite graph with a given degree distribution. Our codes are found by optimizing the degree structure of the underlying graphs. We develop several strategies to perform this optimization. We also present some simulation results for the codes found which show that the performance of the codes is very close to the asymptotic theoretical bounds
The Gaussian watermarking game Watermarking models a copyright protection mechanism where an original source sequence or "covertext" is modified before distribution to the public in order to embed some extra information. The embedding should be transparent (i.e., the modified data sequence or "stegotext" should be similar to the covertext) and robust (i.e., the extra information should be recoverable even if the stegotext is modified further, possibly by a malicious "attacker"). We compute the coding capacity of the watermarking game for a Gaussian covertext and squared-error distortions. Both the public version of the game (covertext known to neither attacker nor decoder) and the private version of the game (covertext unknown to attacker but known to decoder) are treated. While the capacity of the former cannot, of course, exceed the capacity of the latter, we show that the two are, in fact, identical. These capacities depend critically on whether the distortion constraints are required to be met in expectation or with probability one. In the former case, the coding capacity is zero, whereas in the latter it coincides with the value of related zero-sum dynamic mutual information games of complete and perfect information. We also compute the capacity when the attacker is restricted to additive attacks. This capacity turns out to be strictly larger than the watermarking capacity, thus demonstrating that additive attacks are suboptimal. In fact, under the additive attack restriction, capacity turns out to coincide with the capacity of Costa's (1983) model for "writing on dirty paper," thus demonstrating that in Costa's model, the independent and identically distributed (i.i.d.) Gaussian "noise" is the most malevolent power-limited "noise". Additionally, Costa's observation that in the presence of i.i.d. Gaussian "noise," an i.i.d. Gaussian "dirt" process that is noncausally known to the transmitter (but not receiver) does not reduce capacity, is extended to general ergodic "dirt" and to stationary (but not necessarily white) Gaussian "noise"
Factorization with morton-ordered quadtree matrices for memory re-use and parallelism Quadtree matrices using Morton-order storage provide natural blocking on every level of a memory hierarchy. Writing the natural recursive algorithms to take advantage of this blocking results in code that honors the memory hierarchy without the need for transforming the code. Furthermore, the divide-and-conquer algorithm breaks problems down into independent computations. These independent computations can be dispatched in parallel for straightforward parallel processing. Proof-of-concept is given by an algorithm for factorization based on Givens rotations for quadtree matrices in Morton-order storage. The algorithms deliver positive results, competing with and even beating the LAPACK equivalent.
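The Morton (Z-order) index that underlies this storage layout is just a bit interleaving of the row and column indices; the helper below is a generic illustration, not code from the paper.

```python
def morton_index(row, col, bits=16):
    """Interleave the bits of (row, col) to obtain the Morton (Z-order) index."""
    idx = 0
    for b in range(bits):
        idx |= ((row >> b) & 1) << (2 * b + 1)   # row bits go to odd positions
        idx |= ((col >> b) & 1) << (2 * b)       # column bits go to even positions
    return idx
```

Sorting matrix entries by this index keeps every aligned 2×2, 4×4, ... sub-block contiguous in memory, which is what gives the quadtree algorithms their natural blocking at every level of the hierarchy.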
Using additive interpolation-error expansion for reversible digital watermarking in audio signals Data hiding is a technique that embeds an imperceptible and statistically undetectable signal into digital content. Reversible watermarking is able to restore the original signal without distortion after data extraction; this property is useful in several applications such as military communications, medical diagnostics and legal evidence. In this paper, a reversible audio watermarking scheme using an interpolation technique is proposed. Experimental results show the high capacity and auditory transparency of the proposed scheme. Moreover, a multi-embedding approach is explored, opening a new subject for research.
1.009402
0.01523
0.009146
0.008
0.006
0.004
0.002011
0.001067
0.000467
0.000116
0.000028
0
0
0
Incremental physical design Incremental modification and optimization in VLSI Computer-Aided Design (CAD) is of fundamental importance. However, it has not been investigated as a discipline. Previous research and development effort is very unfocused and incomplete. Comprehensive study of incremental algorithms and solutions in the context of CAD tool development is an open area of research with a great deal of potential. Full understanding and focused participation in research and development in the area of incremental and dynamic CAD would help us cope with the complexity of present-day VLSI systems and facilitate concurrent optimization. In this paper we formulate and survey fundamental problems in incremental physical design. Preliminary solutions to a subset of these problems will be outlined.
On mismatches between incremental optimizers and instance perturbations in physical design tools The incremental, "construct by correction" design methodology has become widespread in constraint-dominated DSM design. We study the problem of ECO for physical design domains in the general context of incremental optimization. We observe that an incremental design methodology is typically built from a full optimizer that generates a solution for an initial instance, and an incremental optimizer that generates a sequence of solutions corresponding to a sequence of perturbed instances. Our hypothesis is that in practice, there can be a mismatch between the strength of the incremental optimizer and the magnitude of the perturbation between successive instances. When such a mismatch occurs, the solution quality will degrade -- perhaps to the point where the incremental optimizer should be replaced by the full optimizer. We document this phenomenon for three distinct domains -- partitioning, placement and routing -- using leading industry and academic tools. Our experiments show that current CAD tools may not be correctly designed for ECO-dominated design processes. Thus, compatibility between optimizer and instance perturbation merits attention both as a research question and as a matter of industry design practice.
FastPlace 2.0: an efficient analytical placer for mixed-mode designs In this paper, we present FastPlace 2.0 - an extension to the efficient analytical standard-cell placer - FastPlace [15], to address the mixed-mode placement problem. The main contributions of our work are: (1) Extensions to the global placement framework of FastPlace to handle mixed-mode designs. (2) An efficient and optimal minimum perturbation macro legalization algorithm that is applied after global placement to resolve overlaps among the macros. (3) An efficient legalization scheme to legalize the standard cells among the placeable segments created after fixing the movable macros. On the ISPD 02 Mixed-Size placement benchmarks [31], our algorithm is 16.8X and 7.8X faster than state-of-the-art academic placers Capo 9.1 and Fengshui 5.0 respectively. Correspondingly, we are, on average, 12% and 3% better in terms of wirelength over the respective placers.
Fast and accurate rectilinear steiner minimal tree algorithm for VLSI design In this paper, we present a very fast and accurate rectilinear Steiner minimal tree (RSMT) algorithm called FLUTE. The algorithm is an extension of the wirelength estimation approach by fast lookup table [1]. The main contribution of this paper is a new net breaking technique which is much better than the one in [1]. A scheme is also presented to allow users to control the tradeoff between accuracy and runtime. FLUTE is optimal for nets up to degree 9 and is still very accurate for nets up to degree 100. So it is particularly suitable for VLSI applications in which most nets have a degree 30 or less. We show experimentally that over 18 industrial circuits in the ISPD98 benchmark suite, FLUTE with default accuracy is more accurate than the Batched 1-Steiner heuristic and is almost as fast as a very efficient implementation of Prim's rectilinear minimum spanning tree (RMST) algorithm. By adjusting the accuracy parameter, the error can be further reduced with only a small increase in runtime (e.g., 2.7x error reduction with 2.2x runtime increase).
Toward better wireload models in the presence of obstacles Wirelength estimation techniques typically contain a site density function that enumerates all possible path sites for each wirelength in an architecture and an occupation probability function that assigns a probability to each of these paths to be occupied by a wire. In this paper, we apply a generating polynomial technique to derive complete expressions for site density functions which take effects of layout region aspect ratio and the presence of obstacles into account. The effect of an obstacle is separated into two parts: the terminal redistribution effect and the blockage effect. The layout region aspect ratio and the obstacle area are observed to have a much larger effect on the wirelength distribution than the obstacle's aspect ratio and location. Accordingly, we suggest that these two parameters be included as indices of lookup tables in wireload models. Our results apply to a priori wirelength estimation schemes in chip planning tools to improve parasitic estimation accuracy and timing closure; this is particularly relevant for system-on-chip designs where IP blocks are combined with row-based layout.
Algorithms for large-scale flat placement This is a survey on the algorithms which are part of a program for flat placement of large-scale VLSI processor chips. The basis is a quadratic optimization approach combined with a new quadrisection algorithm. In contrast to most previous quadratic placement methods, no min-cut objective is used at all. Based on a quadratic placement, a completely new algorithm finds a four-way partitioning meeting capacity constraints and minimizing the total movement.
The ISPD98 circuit benchmark suite From 1985-1993, the MCNC regularly introduced and maintained circuit benchmarks for use by the Design Automation community. However, during the last five years, no new circuits have been introduced that can be used for developing fundamental physical design applications, such as partitioning and placement. The largest circuit in the existing set of benchmark suites has over 100,000 modules, but the second largest has just over 25,000 modules, which is small by today's standards. This paper introduces the ISPD98 benchmark suite which consists of 18 circuits with sizes ranging from 13,000 to 210,000 modules. Experimental results for three existing partitioners are presented so that future researchers in partitioning can more easily evaluate their heuristics.
Large-scale placement by grid-warping Grid-warping is a new placement algorithm based on a strikingly simple idea: rather than move the gates to optimize their location, we elastically deform a model of the 2-D chip surface on which the gates have been roughly placed, "stretching" it until the gates arrange themselves to our liking. Put simply: we move the grid, not the gates. Deforming the elastic grid is a surprisingly simple, low-dimensional nonlinear optimization, and augments a traditional quadratic formulation. A preliminary implementation, WARP1, is already competitive with most recently published placers, e.g., placements that average 4% better wirelength, 40% faster than GORDIAN-L-DOMINO.
On legalization of row-based placements Cell overlaps in the results of global placement are guaranteed to prevent successful routing. However, common techniques for fixing these problems may endanger routing in a different way --- through increased wirelength and congestion. We evaluate several such techniques with routability of row-based placements in mind, and propose new ones that, in conjunction with our detail placer, improve overall routability and routed wirelength. Our generic two-phase approach for resolving illegal placements calls for (i) balancing the numbers of cells in rows, (ii) removing overlaps within rows through a generic dynamic programming procedure. Relevant objectives include minimum total perturbation, minimum wirelength increase and minimum maximum movement. Additionally, we trace cell overlaps in min-cut placement to vertical cuts and show that, if bisection cut directions are varied, overlaps anti-correlate with improved wirelength. Empirical validation is performed using placers Capo and Cadence QPlace, followed by various legalizers and detail placers, with subsequent routing by Cadence WarpRoute. We use a number of IBMv2 benchmarks with routing information. Our legalizer reduces both Capo and QPlace placements' wirelength by up to 4% compared to results of Capo legalized by Cadence's QPlace in the ECO mode.
Designing and reporting on computational experiments with heuristic methods This article discusses the design of computational experiments to test heuristic methods and provides reporting guidelines for such experimentation. The goal is to promote thoughtful, well-planned, and extensive testing of heuristics, full disclosure of experimental conditions, and integrity in and reproducibility of the reported results.
Fragile watermarking scheme using a hierarchical mechanism This paper proposes a novel fragile watermarking scheme with a hierarchical mechanism, in which pixel-derived and block-derived watermark data are carried by the least significant bits of all pixels. On the receiver side, after identifying the blocks containing tampered content, the watermark data hidden in the rest blocks are exploited to exactly locate the tampered pixels. Moreover, using exhaustive attempts, the proposed scheme is capable of recovering the original watermarked version without any error.
Reversible data embedding into images using wavelet techniques and sorting. The proliferation of digital information in our society has enticed a lot of research into data-embedding techniques that add information to digital content, like images, audio, and video. In this paper, we investigate high-capacity lossless data-embedding methods that allow one to embed large amounts of data into digital images (or video) in such a way that the original image can be reconstructed from the watermarked image. We present two new techniques: one based on least significant bit prediction and Sweldens' lifting scheme and another that is an improvement of Tian's technique of difference expansion. The new techniques are then compared with various existing embedding methods by looking at capacity-distortion behavior and capacity control.
A survey on image steganography and steganalysis Steganography and steganalysis are important topics in information hiding. Steganography refers to the technology of hiding data into digital media without drawing any suspicion, while steganalysis is the art of detecting the presence of steganography. This paper provides a survey on steganography and steganalysis for digital images, mainly covering the fundamental concepts, the progress of steganographic methods for images in spatial representation and in JPEG format, and the development of the corresponding steganalytic schemes. Some commonly used strategies for improving steganographic security and enhancing steganalytic capability are summarized and possible research trends are discussed.
Steganography Based On Grayscale Images Using (5,3) Hamming Code Steganography is a technique to hide secret data in cover images securely. This technique is used for secret communication. However, steganography is not as strong as watermarking against various attacks. The "Hamming+1" scheme is a well-known scheme in steganography. In this paper, we propose a new data hiding scheme that shows better performance compared to "Hamming+1". The proposed scheme conceals 3 bits per 5 pixels of an image. The experimental results show that the proposed scheme achieves a 0.599 bpp embedding payload and a higher visual quality of stego images compared to previous schemes.
1.057125
0.051279
0.010056
0.005452
0.004274
0.003371
0.001216
0.000301
0.000119
0.000045
0
0
0
0
An improved lossless data hiding scheme based on image VQ-index residual value coding Copyright protection and information security have become serious problems due to the ever growing amount of digital data over the Internet. Reversible data hiding is a special type of data hiding technique that guarantees not only the secret data but also the cover media can be reconstructed without any distortion. Traditional schemes are based on spatial, discrete cosine transformation (DCT) and discrete wavelet transformation (DWT) domains. Recently, some vector quantization (VQ) based reversible data hiding schemes have been proposed. This paper proposes an improved reversible data hiding scheme based on VQ-index residual value coding. Experimental results show that our scheme outperforms two recently proposed schemes, namely side-match vector quantization (SMVQ)-based data hiding and modified fast correlation vector quantization (MFCVQ)-based data hiding.
An Improved Histogram-Shifting-Imitated reversible data hiding based on HVS characteristics. In this paper, we propose a reversible data hiding scheme to hide a secret message into a cover image by considering the characteristics of the Human Visual System (HVS) in order to improve the visual imperceptibility. The human eyes are more sensitive to changes in low intensity pixels than in higher intensity ones. Therefore, we divide the intensity levels (0–255) into four groups: the first group contains intensity levels 0–79; the second, third, and fourth groups contain, respectively, the 80–151, 152–215, and 216–255 intensity levels. We further divide the first group into segments of 2 elements, and the second, third, and fourth groups into segments of 3, 4, and 5 elements, respectively. After constructing the segments, we scan the image in raster order to identify the peak points for each segment, which are used to embed the secret data. The secret data is also divided into four segments according to the identified peak points per group. The first segment of data is converted into base-2 representation, and the second, third and fourth segments of secret data into base-3, base-4, and base-5 representations, respectively. The first segment of secret data is embedded into the peak points belonging to the first group; the second, third and fourth group secret data is embedded into the peak points of the second, third and fourth groups, respectively. Thus, our scheme makes the least changes to the pixels belonging to the first group, which have the lowest intensity values, and the most to the fourth group pixels, which have the highest intensity values. Experimentally, our scheme provides a better quality stego-image and hides more secret data than other state-of-the-art schemes. We also build a location map for all the peak points to ensure the reversibility of the proposed scheme.
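The group/segment partition described above can be captured in a few lines. The sketch below only maps an intensity value to its group, its segment and the numeral base used for the secret digits, leaving out the peak-point search and the actual embedding; the function name is illustrative.

```python
def group_and_base(value):
    """Map a pixel intensity to its HVS group, the segment containing it, and the
    numeral base (equal to the segment size) used for the secret digits there."""
    groups = [          # (low, high, segment size == embedding base)
        (0, 79, 2),
        (80, 151, 3),
        (152, 215, 4),
        (216, 255, 5),
    ]
    for low, high, size in groups:
        if low <= value <= high:
            seg_low = low + ((value - low) // size) * size
            return (low, high), (seg_low, min(seg_low + size - 1, high)), size
    raise ValueError("intensity out of range")
```

For example, group_and_base(100) returns the group (80, 151), the segment (98, 100) and base 3, matching the partition sizes listed in the abstract.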
Capacity improvement of reversible data hiding scheme through better prediction and double cycle embedding process For better protection of surreptitious information from intruders, steganography is now a widely used technique for hiding information through bit embedding inside pictures, speech and other multimedia content. Among many different steganography schemes, prediction-error-based reversible data hiding methods are more effective for security reasons. Nevertheless, such techniques presented so far can embed only a limited number of message bits in a cover medium and thus limit the length of the covert message to be sent through embedding. In this paper, the authors present a novel embedding process, called the double cycle embedding scheme, where embedding capacity is increased significantly by modifying the scheme of estimation of pixel values and the block-variance calculation technique, selecting the two highest peaks in the histogram regardless of their positions, and embedding twice in the same cover image. The scheme also modifies the pixel-prediction technique for better predictions and generalizes the block sizes in the cover image to increase the robustness of the policy. The experimental results demonstrate that the proposed scheme outperforms other competing schemes with an embedding gain of 40% to 700% depending on the cover image. The experimental results also demonstrate that the scheme exhibits stronger resistance against several statistical attacks, e.g., the generalized Benford law and SPAM features.
A Novel DWT Based Blind Watermarking for Image Authentication.
An adaptive data hiding scheme with high embedding capacity and visual image quality based on SMVQ prediction through classification codebooks This study exploits the characteristics of image blocks to develop an adaptive data hiding scheme that is based on SMVQ prediction. Since human eyes are highly sensitive to smooth regions of images, changes in smooth regions cause great distortion and attract the attention of interceptors. Hence, this study proposes a data embedding scheme for embedding secret data into edge blocks and non-sufficiently smooth blocks. The experimental results show that the proposed scheme improves the quality of the stego-image and the embedding capacity.
Lossless data hiding based on companding technique and difference expansion of triplets A reversible data hiding scheme based on the companding technique and the difference expansion (DE) of triplets is proposed in this paper. The companding technique is employed to increase the number of the expandable triplets. The capacity consumed by the location map recording the expanded positions is largely decreased. As a result, the hiding capacity is considerably increased. The experimental results reveal that high hiding capacity can be achieved at low embedding distortion.
Adaptive embedding techniques for VQ-compressed images An embedding algorithm, which can adaptively embed a binary message into a VQ-compressed image, is proposed in this paper. The proposed algorithm is divided into three phases. In the codeword grouping phase, a new group of codewords is initiated by the two most similar codewords which do not belong to any group. For each codeword which does not belong to any group, if the codeword is similar to all of the codewords in the group, it will be added to the group. In the embedding phase, each codeword in a group will be assigned to embed a certain sub-message whose length is determined by the number of codewords in the group. The more codewords a group has, the higher the embedding capacity of a codeword in the group will be. In the extracting phase, given a codeword and the number of codewords in the group to which the codeword belongs, the embedded message can be extracted from the codeword by simply determining the order of the codeword in the group. Experimental results show that the proposed algorithm performs better than previous algorithms with regard to embedding capacity and image quality. For the test images, when the embedding capacity is less than 5bits per codeword index, the difference of the PSNR values between the stego-image and its VQ-compressed cover image will be no more than 5dB on average.
Block-based reversible data embedding This paper proposes a block-based reversible data embedding method applied to grayscale cover images. The image is first divided into two areas: a data embedding area and an auxiliary information embedding area. The data embedding area is further partitioned into 3×3 blocks. According to two predefined thresholds, all the blocks are divided into three categories: smooth, normal or complex. The complex blocks do not embed any data. A smooth block will embed twice the data of a normal block. After predicting the edge direction in a smooth or normal block, Alattar's difference expansion of a generalized integer transform method is exploited to embed data into each vector in the block. Then, the location map and other auxiliary information are embedded in the auxiliary information embedding area using a novel least significant bit (LSB) substitution method. Our method not only improves the embedded image quality but also provides a higher payload capacity compared to Alattar's method. For some smoother images, the embedding capacity of the proposed method is higher than 1 bit/pixel in a single pass and also yields acceptable embedded image quality (PSNR = 30). To the best of our knowledge, the proposed method is the first to do this.
Data Hiding: Current Status and Key Issues
Active steganalysis for interpolation-error based reversible data hiding Interpolation-error based (IEB) reversible data hiding, which combines difference expansion and histogram-shifting techniques to embed secret data into interpolation-error histograms of a stego-image with high payload and low distortion, has been proposed recently. In this paper, an active steganalysis scheme is proposed by analyzing and modeling histogram abnormality in the interpolation-error domain of sub-sampled images with generalized Gaussian distribution (GGD) features. A support vector machine (SVM) classifier is trained on the estimated GGD parameters, and then a quantitative algorithm is presented to estimate the embedding length and locations. The experimental results show that the proposed active steganalysis scheme is effective in not only detecting the IEB method but also estimating its message length and embedding locations.
Improving various reversible data hiding schemes via optimal codes for binary covers. In reversible data hiding (RDH), the original cover can be losslessly restored after the embedded information is extracted. Kalker and Willems established a rate-distortion model for RDH, in which they proved out the rate-distortion bound and proposed a recursive code construction. In our previous paper, we improved the recursive construction to approach the rate-distortion bound. In this paper, we generalize the method in our previous paper using a decompression algorithm as the coding scheme for embedding data and prove that the generalized codes can reach the rate-distortion bound as long as the compression algorithm reaches entropy. By the proposed binary codes, we improve three RDH schemes that use binary feature sequence as covers, i.e., an RS scheme for spatial images, one scheme for JPEG images, and a pattern substitution scheme for binary images. The experimental results show that the novel codes can significantly reduce the embedding distortion. Furthermore, by modifying the histogram shift (HS) manner, we also apply this coding method to one scheme that uses HS, showing that the proposed codes can be also exploited to improve integer-operation-based schemes.
Medical image integrity control combining digital signature and lossless watermarking Enforcing the protection of medical content has become a major issue in computer security. Since medical contents are more and more widely distributed, it is necessary to develop security mechanisms to guarantee their confidentiality, integrity and traceability in an autonomous way. In this context, watermarking has been recently proposed as a complementary mechanism for medical data protection. In this paper, we focus on the verification of medical image integrity through the combination of digital signatures with such a technology, and especially with Reversible Watermarking (RW). RW schemes have been proposed for images of sensitive content for which any modification may affect their interpretation. Hence, we compare several recent RW schemes and discuss their potential use in the framework of an integrity control process applied to different sets of medical images obtained from three distinct modalities: Magnetic Resonance Imaging, Positron Emission Tomography and Ultrasound Imaging. Experimental results with respect to two aspects, data hiding capacity and image quality preservation, show different limitations which depend on the watermarking approach as well as on the specificities of the image modality.
Segmentation of the left ventricle of the heart in 3-D+t MRI data using an optimized nonrigid temporal model. Modern medical imaging modalities provide large amounts of information in both the spatial and temporal domains and the incorporation of this information in a coherent algorithmic framework is a significant challenge. In this paper, we present a novel and intuitive approach to combine 3-D spatial and temporal (3-D + time) magnetic resonance imaging (MRI) data in an integrated segmentation algorith...
Reversible De-Identification for lossless image compression using Reversible Watermarking De-Identification is a process which can be used to ensure privacy by concealing the identity of individuals captured by video surveillance systems. One important challenge is to make the obfuscation process reversible so that the original image/video can be recovered by persons in possession of the right security credentials. This work presents a novel Reversible De-Identification method that can be used in conjunction with any obfuscation process. The residual information needed to reverse the obfuscation process is compressed, authenticated, encrypted and embedded within the obfuscated image using a two-level Reversible Watermarking scheme. The proposed method ensures an overall single-pass embedding capacity of 1.25 bpp, where 99.8% of the images considered required less than 0.8 bpp while none of them required more than 1.1 bpp. Experimental results further demonstrate that the proposed method managed to recover and authenticate all images considered.
1.00865
0.008621
0.008416
0.00707
0.004585
0.003697
0.002548
0.001313
0.000488
0.000113
0.000041
0.000003
0
0
Parametric datatype-genericity Datatype-generic programs are programs that are parametrized by a datatype or type functor: whereas polymorphic programs abstract from the "integers" in "lists of integers", datatype-generic programs abstract from the "lists of". There are two main styles of datatype-generic programming: the Algebra of Programming approach, characterized by structured recursion operators arising from initial algebras and final coalgebras, and the Generic Haskell approach, characterized by case analysis over the structure of a datatype. We show that the former enjoys a kind of higher-order naturality, relating the behaviours of generic functions at different types; in contrast, the latter is ad hoc, with no coherence required or provided between the various clauses of a definition. Moreover, the naturality properties arise "for free", simply from the parametrized types of the generic functions: we present a higher-order parametricity theorem for datatype-generic operators.
Emulating C++0x concepts. A library for the emulation of C++0x concepts developed using the emerging C++11 programming language is presented. The library integrates existing techniques for concept checking and template metaprogramming to provide a uniform interface to defining and using concepts. The purpose of this work is to establish a concrete foundation for experimentation of design techniques for concepts and to motivate and evaluate language design. The viability of the approach is demonstrated by applying it to characterize a number of previously identified usability problems with concepts in the proposed C++0x language. In particular, issues related to the use of explicit and automatic concepts in generic library design from the perspective of this experiment are examined. Issues related to concept refinement, default implementations of requirements, and the generation of error messages are also discussed.
A language for generic programming in the large Generic programming is an effective methodology for developing reusable software libraries. Many programming languages provide generics and have features for describing interfaces, but none completely support the idioms used in generic programming. To address this need we developed the language G. The central feature of G is the concept, a mechanism for organizing constraints on generics that is inspired by the needs of modern C++ libraries. G provides modular type checking and separate compilation (even of generics). These characteristics support modular software development, especially the smooth integration of independently developed components. In this article we present the rationale for the design of G and demonstrate the expressiveness of G with two case studies: porting the Standard Template Library and the Boost Graph Library from C++ to G. The design of G shares much in common with the concept extension proposed for the next C++ Standard (the authors participated in its design) but there are important differences described in this article.
Formal derivation of a generic algorithmic program for solving a class of extremum problems In this paper, we formally derive, using the PAR method, a generic algorithmic program for solving a class of extremum problems which can be abstracted into an algebraic structure called a semiring. Some typical algorithms, such as the minimal sum problem, the maximal product problem and the longest ascending segment problem, are all instances of the generic algorithmic program. We put emphasis on the algorithmic derivation rather than the algorithmic verification. The derivations not only embody the designer's insight but also reveal the main ideas and ingenuity of these algorithms. The example in this paper confirms that the PAR method is a simple and practical approach to deriving highly trustworthy generic algorithmic programs.
Formal derivation of graph algorithmic programs using partition-and-recur In this paper, we derive, by presenting some suitable notations, three typical graph algorithms and corresponding programs using a unified approach, partition-and-recur. We put emphasis on the derivation rather than the algorithms themselves. The main ideas and ingenuity of these algorithms are revealed by formula deduction. Success in these examples gives us more evidence that partition-and-recur is a simple and practical approach and developing enough suitable notations is the key in designing and deriving efficient and correct algorithmic programs.
Population Classification in Fire Evacuation: A Multiobjective Particle Swarm Optimization Approach In an emergency evacuation operation, accurate classification of the evacuee population can provide important information to support the responders in decision making; and therefore, makes a great contribution in protecting the population from potential harm. However, real-world data of fire evacuation is often noisy, incomplete, and inconsistent, and the response time of population classification is very limited. In this paper, we propose an effective multiobjective particle swarm optimization method for population classification in fire evacuation operations, which simultaneously optimizes the precision and recall measures of the classification rules. We design an effective approach for encoding classification rules, and use a comprehensive learning strategy for evolving particles and maintaining diversity of the swarm. Comparative experiments show that the proposed method performs better than some state-of-the-art methods for classification rule mining, especially on the real-world fire evacuation dataset. This paper also reports a successful application of our method in a real-world fire evacuation operation that recently occurred in China. The method can be easily extended to many other multiobjective rule mining problems.
An algorithm to generate a random cyclic permutation The purpose of this work is to present an algorithm to generate a random cyclic permutation. The proof of its correctness, the complexity and the results of the chi-square distribution test are given.
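The abstract above does not reproduce the algorithm itself. For orientation only, the sketch below shows Sattolo's variant of the Fisher-Yates shuffle, a standard way to draw a uniformly random cyclic permutation in Python; it is an illustrative stand-in, not necessarily the paper's algorithm, and the function name is ours.

    import random

    def random_cyclic_permutation(n):
        """Return a uniformly random cyclic permutation of 0..n-1 (Sattolo's algorithm)."""
        perm = list(range(n))
        # Like Fisher-Yates, but j is drawn strictly below i, which forces one single n-cycle.
        for i in range(n - 1, 0, -1):
            j = random.randrange(i)          # 0 <= j < i, never j == i
            perm[i], perm[j] = perm[j], perm[i]
        return perm

    if __name__ == "__main__":
        p = random_cyclic_permutation(8)
        # Follow the cycle starting from 0; for a cyclic permutation it visits all 8 elements.
        seen, x = set(), 0
        while x not in seen:
            seen.add(x)
            x = p[x]
        print(p, "cycle length:", len(seen))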
Knowledge-based software design using design schemas Design schemas provide a means for abstracting software designs into broadly reusable components that can be assembled and refined into new software designs. This paper describes a knowledge-based software development paradigm that is based on the design schema representation. It combines design schemas, domain knowledge, and various types of rules to assist in the quick generation of software designs from user specifications. A prototypical environment, IDeA (Intelligent Design Aid), is described that supports the knowledge-based paradigm. The schema-based techniques used in IDeA are presented along with some examples of their use.
An incremental algorithm for software analysis In programming environments aimed at “industrial strength” software development, there is a need for software tools which facilitate both design and maintenance. These tools should encourage experimentation with different system configurations which enable designers to a priori estimate the associated system complexity and judge the ease of accommodating enhancements. Maintainers should be able to check straightforwardly the ramifications of system changes due to enhancements or “bug fixes”. With interprocedural data flow information about the definition and use of global variables and parameters in a software system, tools can be built to perform these tasks. For large, complex systems, efficient methods for interprocedural analysis are necessarily incremental, as a software system is a dynamically evolving entity. Incremental algorithms update current information about a system in response to a change rather than re-calculating the information by re-analyzing the entire system. This paper reports our development of a general purpose incremental data flow analysis algorithm, which is applicable to both intraprocedural and interprocedural domains. It is based on interval analysis, a technique whose observed performance is linear for most programs; under reasonable assumptions about program flow graphs this linearity can be verified [20].
Multilevel hypergraph partitioning: applications in VLSI domain In this paper, we present a new hypergraph-partitioning algorithm that is based on the multilevel paradigm. In the multilevel paradigm, a sequence of successively coarser hypergraphs is constructed. A bisection of the smallest hypergraph is computed and it is used to obtain a bisection of the original hypergraph by successively projecting and refining the bisection to the next level finer hypergraph. We have developed new hypergraph coarsening strategies within the multilevel framework. We evaluate their performance both in terms of the size of the hyperedge cut on the bisection, as well as on the run time for a number of very large scale integration circuits. Our experiments show that our multilevel hypergraph-partitioning algorithm produces high-quality partitionings in a relatively small amount of time. The partitionings produced by our scheme are on average 6%-23% better in quality than those produced by other state-of-the-art schemes. Furthermore, our partitioning algorithm is significantly faster, often requiring 4-10 times less time than that required by the other schemes. Our multilevel hypergraph-partitioning algorithm scales very well for large hypergraphs. Hypergraphs with over 100 000 vertices can be bisected in a few minutes on today's workstations. Also, on the large hypergraphs, our scheme outperforms other schemes (in hyperedge cut) quite consistently with larger margins (9%-30%).
A linear quadtree compression scheme for image encryption A private key encryption scheme for two-dimensional image data is proposed in this work. This scheme is designed on the basis of the lossless data compression principle. The proposed scheme is developed to have both data encryption and compression performed simultaneously. For the lossless data compression effect, the quadtree data structure is used to represent the image; for the encryption purpose, various scanning sequences of image data are provided. The scanning sequences comprise a private key for encryption. Twenty-four possible combinations of scanning sequences are defined for accessing the four quadrants, thereby making available 24^n x 4^(n(n-1)/2) possibilities to encode an image of resolution 2^n x 2^n. The security of the proposed encryption scheme therefore relies on the computational infeasibility of an exhaustive search approach. Three images of 512 x 512 pixels are used to verify the feasibility of the proposed scheme. The testing results and analysis demonstrate the characteristics of the proposed scheme. This scheme can be applied to problems of data storage or transmission in a public network. (C) 1997 Elsevier Science B.V.
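As a rough illustration of the keyed scanning-sequence idea only (not the paper's exact linear-quadtree encoding; the key layout and names below are assumptions), the following recursion emits the pixels of a 2^n x 2^n image in a key-dependent quadrant order drawn from the 24 possible orderings mentioned in the abstract.

    import itertools
    import numpy as np

    # The 24 possible visiting orders of the four quadrants (the "twenty-four combinations").
    QUAD_ORDERS = list(itertools.permutations(range(4)))  # 4! = 24

    def keyed_quadtree_scan(img, key):
        """Emit pixels of a 2^n x 2^n image in a key-dependent quadtree scan order.

        `key` is a list of indices into QUAD_ORDERS, one per recursion level
        (a simplification of a per-node key; assumed here for illustration).
        """
        def scan(block, level):
            h, w = block.shape
            if h == 1 and w == 1:
                yield int(block[0, 0])
                return
            half = h // 2
            quads = [block[:half, :half], block[:half, half:],
                     block[half:, :half], block[half:, half:]]
            order = QUAD_ORDERS[key[level % len(key)]]
            for q in order:
                yield from scan(quads[q], level + 1)
        return list(scan(img, 0))

    if __name__ == "__main__":
        img = np.arange(16, dtype=np.uint8).reshape(4, 4)   # toy 2^2 x 2^2 image
        cipher_stream = keyed_quadtree_scan(img, key=[5, 17])
        print(cipher_stream)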
Implementation and extensibility of an analytic placer Automated cell placement is a critical problem in very large scale integration (VLSI) physical design. New analytical placement methods that simultaneously spread cells and optimize wirelength have recently received much attention from both academia and industry. A novel and simple objective function for spreading cells over the placement area is described in the patent of Naylor et al. (U.S. Pat. 6301693). When combined with a wirelength objective function, this allows efficient simultaneous cell spreading and wirelength optimization using nonlinear optimization techniques. In this paper, we implement an analytic placer (APlace) according to these ideas (which have other precedents in the open literature), and conduct an in-depth analysis of the characteristics and extensibility of the placer. Our contributions are as follows. 1) We extend the objective functions described in (Naylor et al., U.S. Patent 6301693) with congestion information and implement a top-down hierarchical (multilevel) placer (APlace) based on them. For IBM-ISPD04 circuits, the half-perimeter wirelength of APlace outperforms that of FastPlace, Dragon, and Capo, respectively, by 7.8%, 6.5%, and 7.0% on average. For eight IBM-PLACE v2 circuits, after the placements are detail-routed using Cadence WRoute, the average improvement in final wirelength is 12.0%, 8.1%, and 14.1% over QPlace, Dragon, and Capo, respectively. 2) We extend the placer to address mixed-size placement and achieve an average of 4% wirelength reduction on ten ISPD'02 mixed-size benchmarks compared to results of the leading-edge solver, FengShui. 3) We extend the placer to perform timing-driven placement. Compared with timing-driven industry tools, evaluated by commercial detailed routing and static timing analysis, we achieve an average of 8.4% reduction in cycle time and 7.5% reduction in wirelength for a set of six industry testcases. 4) We also extend the placer to perform input/output-core coplacement and constraint handling for mixed-signal designs. Our paper aims to show, and empirically demonstrates, that the APlace framework is a general and extensible platform for "spatial embedding" tasks across many aspects of system physical implementation.
Profiling an Incremental Data Flow Analysis Algorithm Incremental data flow analysis algorithms have been designed to deal efficiently with change in evolving software systems. These algorithms document the current state of a software system by incorporating change effects into previously derived information describing the definition and use of data in the system. Unfortunately, the performance of these algorithms cannot, in general, be characterized by analytic predictions of their expected behavior. It is possible, however, to observe their performance empirically and predict their average behavior. The authors report on experiments on the empirical profiling of a general-purpose, incremental data flow analysis algorithm. The algorithm, dominator based and coded in C, was applied to statistically significant numbers of feasible, random software systems of moderate size. The experimental results, with quantifiable confidence limits, substantiate the claim that incremental analyses are viable and grow more valuable as a software system grows in size.
New high-performance reversible data hiding method for VQ indices based on improved locally adaptive coding scheme. •We propose a new data hiding method using improved locally adaptive coding (ILAS).•ILAS uses the two-dimensional structure of image and correlation of adjacent indices.•Our proposed ILAS can significantly improve the performance of LAS.•Experiments show our method has a better performance compared with other methods.
1.244
0.244
0.244
0.244
0.022982
0.011
0.0044
0.000111
0
0
0
0
0
0
A Robust and Secure Video Steganography Method in DWT-DCT Domains Based on Multiple Object Tracking and ECC. Over the past few decades, the art of secretly embedding and communicating digital data has gained enormous attention because of the technological development in both digital contents and communication. The imperceptibility, hiding capacity, and robustness against attacks are three main requirements that any video steganography method should take into consideration. In this paper, a robust and secure video steganographic algorithm in the discrete wavelet transform (DWT) and discrete cosine transform (DCT) domains based on the multiple object tracking (MOT) algorithm and error correcting codes is proposed. The secret message is preprocessed by applying both Hamming and Bose, Chaudhuri, and Hocquenghem codes for encoding the secret data. First, the motion-based MOT algorithm is applied to the host videos to distinguish the regions of interest in the moving objects. Then, the data hiding process is performed by concealing the secret message in the DWT and DCT coefficients of all motion regions in the video, depending on the foreground masks. Our experimental results illustrate that the suggested algorithm not only improves the embedding capacity and imperceptibility but also enhances its security and robustness by encoding the secret message and withstanding various attacks.
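The scheme above encodes the secret message with Hamming and BCH codes before embedding. As a minimal illustration of that preprocessing step only, a systematic Hamming(7,4) encoder is sketched below; the paper's actual code parameters and its BCH stage are not assumed here, and the names are ours.

    import numpy as np

    # Systematic generator matrix for Hamming(7,4): codeword = [d1 d2 d3 d4 p1 p2 p3]
    # with p1 = d1^d2^d4, p2 = d1^d3^d4, p3 = d2^d3^d4.
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

    def hamming74_encode(bits):
        """Encode a bit list (length a multiple of 4) into Hamming(7,4) codewords."""
        data = np.asarray(bits, dtype=np.uint8).reshape(-1, 4)
        return (data @ G) % 2          # GF(2) arithmetic via mod-2

    if __name__ == "__main__":
        secret = [1, 0, 1, 1, 0, 1, 0, 0]
        print(hamming74_encode(secret))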
Fast retrieval of hidden data using enhanced hidden Markov model in video steganography. •The hidden Markov model is enhanced in the proposed system for fast retrieval of hidden data from video files.•Data embedding and retrieval processes are performed using the conditional states and state transition dynamics between the video frames.•It enhances the retrieval data rate with minimized computation cost.
A reversible data hiding method for H.264 with Shamir's (t, n)-threshold secret sharing. This paper proposes a new robust reversible data hiding scheme for H.264. The embedded data is first distributed into n sub-secrets with a matrix equation by using Shamir's (t, n)-threshold secret sharing to improve the robustness of the embedded data. Then we choose the blocks with prediction modes and embed the sub-secrets into the coefficients of the 4×4 discrete cosine transform (DCT) blocks of the selected frames which meet our conditions, to avert distortion drift. The experimental results show that this new robust reversible data hiding algorithm achieves greater robustness, effectively averts intra-frame distortion drift, and obtains good visual quality.
Compressed and raw video steganography techniques: a comprehensive survey and analysis. In the last two decades, the science of covertly concealing and communicating data has acquired tremendous significance due to the technological advancement in communication and digital content. Steganography is the art of concealing secret data in a particular interactive media transporter, e.g., text, audio, image, and video data, in order to build a covert communication between authorized parties. Nowadays, video steganography techniques have become important in many video-sharing and social networking applications such as Livestreaming, YouTube, Twitter, and Facebook because of the noteworthy development of advanced video over the Internet. The performance of any steganographic method ultimately relies on the imperceptibility, hiding capacity, and robustness. In the past decade, many video steganography methods have been proposed; however, the literature lacks sufficient survey articles that discuss all techniques. This paper presents a comprehensive study and analysis of numerous cutting edge video steganography methods and their performance evaluations from the literature. Both compressed and raw video steganography methods are surveyed. In the compressed domain, video steganography techniques are categorized according to the video compression stages as venues for data hiding such as intra frame prediction, inter frame prediction, motion vectors, transformed and quantized coefficients, and entropy coding. On the other hand, raw video steganography methods are classified into spatial and transform domains. This survey suggests current research directions and recommendations to improve on existing video steganography techniques.
A reversible video steganography algorithm for MVC based on motion vector In this paper, we present a reversible video steganography scheme for hiding secret data in the motion vector of each block in 3D MVC videos. Under this approach, the idea of the inner product is introduced to achieve reversibility. By establishing the inner product between the motion vector and the modulation vector and setting the embedding conditions, we embed 1 bit of data into each motion vector, and the proposed algorithm is reversible. Moreover, in order to avoid distortion drift, we only embed data into b4-frames, taking advantage of the coding features of 3D MVC videos. Experimental results also confirm that the proposed scheme can provide acceptable video quality for stego-videos and successfully achieve reversibility.
Secret Image Sharing Using Grayscale Payload Decomposition and Irreversible Image Steganography. To provide an added security level most of the existing reversible as well as irreversible image steganography schemes emphasize on encrypting the secret image (payload) before embedding it to the cover image. The complexity of encryption for a large payload where the embedding algorithm itself is complex may adversely affect the steganographic system. Schemes that can induce same level of distortion, as any standard encryption technique with lower computational complexity, can improve the performance of stego systems. In this paper, we propose a secure secret image sharing scheme, which bears minimal computational complexity. The proposed scheme, as a replacement for encryption, diversifies the payload into different matrices which are embedded into carrier image (cover image) using bit X-OR operation. A payload is a grayscale image which is divided into frequency matrix, error matrix, and sign matrix. The frequency matrix is scaled down using a mapping algorithm to produce Down Scaled Frequency (DSF) matrix. The DSF matrix, error matrix, and sign matrix are then embedded in different cover images using bit X-OR operation between the bit planes of the matrices and respective cover images. Analysis of the proposed scheme shows that it effectively camouflages the payload with minimum computation time.
Data embedding in digital images using critical functions. In this paper, “uniform embedding” (independent of image contents and pixel correlations while embedding) and “adaptive embedding” (dependent on image contents and pixel correlations while embedding) in image steganography are investigated. A compact steganographic embedding function is proposed to ensure correctness and efficiency, and a pixel correlation function is utilized to discriminate the image smoothness. Two feasible image steganographic frameworks using these critical functions are presented, and some well-known image steganographic methods can be derived from the proposed frameworks. The effectiveness of the proposed frameworks is experimentally validated by constructing and testing some special data hiding methods in the case of four neighboring pixels as a processing unit. Experimental results show that the proposed methods can achieve better visual performance and statistical undetectability compared with the prior works. Another promising merit of our work is the potential to provide steganographers with general-purpose strategies for acquiring new image steganographic methods.
Efficient Steganographic Embedding by Exploiting Modification Direction A novel method of steganographic embedding in digital images is described, in which each secret digit in a (2n+1)-ary notational system is carried by n cover pixels and, at most, only one pixel is increased or decreased by 1. In other words, the (2n+1) different ways of modification to the cover pixels correspond to the (2n+1) possible values of a secret digit. Because the directions of modification are fully exploited, the proposed method provides high embedding efficiency that is better than previous techniques.
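The (2n+1)-ary rule described above has a compact closed form: with f(p) equal to the weighted sum of the pixels modulo (2n+1), at most one pixel moves by one to make f equal the secret digit. A minimal sketch follows; the variable names are ours and saturation at pixel values 0/255 is not handled.

    def emd_embed(pixels, digit):
        """Embed one secret digit in base (2n+1) into a group of n cover pixels.

        f(p) = (sum_i i * p_i) mod (2n+1), i = 1..n.  At most one pixel changes by +/-1.
        """
        n = len(pixels)
        base = 2 * n + 1
        f = sum((i + 1) * p for i, p in enumerate(pixels)) % base
        s = (digit - f) % base
        out = list(pixels)
        if s != 0:
            if s <= n:
                out[s - 1] += 1          # increase pixel number s by one
            else:
                out[base - s - 1] -= 1   # decrease pixel number (2n+1-s) by one
        return out

    def emd_extract(pixels):
        """Recover the embedded digit from a group of n stego pixels."""
        n = len(pixels)
        return sum((i + 1) * p for i, p in enumerate(pixels)) % (2 * n + 1)

    if __name__ == "__main__":
        group = [100, 101, 102, 103]          # n = 4, so digits are base-9
        stego = emd_embed(group, digit=7)
        assert emd_extract(stego) == 7
        print(group, "->", stego)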
A solution to line-routing problems on the continuous plane This paper discusses a new line-routing algorithm. The algorithm has been programmed in FORTRAN II for the IBM 7094 and in FORTRAN IV for the IBM 360/65. It has given good results when applied to many line-routing problems such as mazes, printed circuit boards, substrates, and PERT diagrams. The main advantages of this algorithm, which is based on the continuous plane, over conventional algorithms based on the discrete plane are twofold: 1. Since the algorithm is based on the continuous plane, there is theoretically no limit to the degree of precision used to describe the position of points. In practice, the only factor restricting the precision is the magnitude of the largest (or smallest) number which may be stored in a computer. As a result, the nodes on a printed circuit board, for example, can be input with mil accuracy. If this feat were to be accomplished by existing methods on a 9×9 inch board, a matrix of 81,000,000 cells would have to be stored (and searched) in the computer. 2. The algorithm stores only line segments; therefore to find a path, only the segments that are currently defined need be investigated. Usually with conventional methods, every cell that lies on every possible minimal path must be investigated. The net result is that this algorithm is much faster than the conventional method.
Fundamental-Studies - Equivalences And Transformations Of Regular Systems - Applications To Recursive Program Schemes And Grammars This work presents a unified theory of recursive program schemes, context-free grammars, grammars on arbitrary algebraic structures and, in fact, recursive definitions of all kind by means of regular systems. The equivalences of regular systems associated with either all their solutions or their least solutions (in all domains of appropriate type satisfying a set of algebraic laws expressed by equations) are systematically investigated and characterized (in some cases) in terms of system transformations by folding, unfolding and rewriting according to the equational algebraic laws. Grammars are better characterized in terms of polynomial systems which are regular systems involving the operation of set union, and the same questions are raised for them. We also examine conditions insuring the uniqueness of the solution of a regular or of a polynomial system. This theory applies to grammars of many kinds which generate trees, graphs, etc. We formulate some classical transformations of context-free grammars in terms of correct transformations which only use folding, unfolding and algebraic laws and we immediately obtain their correctness.
Total correctness by local improvement in program transformation The goal of program transformation is to improve efficiency while preserving meaning. One of the best known transformation techniques is Burstall and Darlington's unfold-fold method. Unfortunately the unfold-fold method itself guarantees neither improvement in efficiency nor total correctness. The correctness problem for unfold-fold is an instance of a strictly more general problem: transformation by locally equivalence-preserving steps does not necessarily preserve (global) equivalence. This paper presents a condition for the total correctness of transformations on recursive programs, which, for the first time, deals with higher-order functional languages (both strict and non-strict) including lazy data structures. The main technical result is an improvement theorem which says that if the local transformation steps are guided by certain optimisation concerns (a fairly natural condition for a transformation), then correctness of the transformation follows. The improvement theorem makes essential use of a formalised improvement theory; as a rather pleasing corollary it also guarantees that the transformed program is a formal improvement over the original. The theorem has immediate practical consequences: • It is a powerful tool for proving the correctness of existing transformation methods for higher-order functional programs, without having to ignore crucial factors such as memoization or folding. We have applied the theorem to obtain a particularly simple proof of correctness for a higher-order variant of deforestation. • It yields a simple syntactic method for guiding and constraining the unfold/fold method in the general case so that total correctness (and improvement) is always guaranteed.
Multiple description coding using pairwise correlating transforms. The objective of multiple description coding (MDC) is to encode a source into multiple bitstreams supporting multiple quality levels of decoding. In this paper, we only consider the two-description case, where the requirement is that a high-quality reconstruction should be decodable from the two bitstreams together, while lower, but still acceptable, quality reconstructions should be decodable from either of the two individual bitstreams. This paper describes techniques for meeting MDC objectives in the framework of standard transform-based image coding through the design of pairwise correlating transforms. The correlation introduced by the transform helps to reduce the distortion when only a single description is received, but it also increases the bit rate beyond that prescribed by the rate-distortion function of the source. We analyze the relation between the redundancy (i.e., the extra bit rate) and the single description distortion using this transform-based framework. We also describe an image coder that incorporates the pairwise transform and show its redundancy-rate-distortion performance for real images.
Quantifying academic placer performance on custom designs There have been significant prior efforts to quantify the performance of academic placement algorithms, primarily by creating artificial test cases that attempt to mimic real designs, such as the PEKO benchmark containing known optima [5]. The idea was to create benchmarks with a known optimal solution and then measure how far existing placers were from the known optimum. Since the benchmarks do not necessarily correspond to properties of real VLSI netlists, the conclusions were met with some skepticism. This work presents two custom constructed datapath designs that perform common logic functions, with hand-designed layouts for each. The new generation of academic placers is then compared against them to see how the placers perform for these design styles. Experiments show that all academic placers produce wirelengths significantly greater than those of the manual solutions; their solutions have wirelengths 1.75 to 4.88 times greater. These testcases will be released publicly to stimulate research into automatically solving structured datapath placement problems.
Lossless image data embedding in plain areas This letter presents a lossless data hiding scheme for digital images which uses an edge detector to locate plain areas for embedding. The proposed method takes advantage of the well-known gradient adjacent prediction utilized in image coding. In the suggested scheme, prediction errors and edge values are first computed and then, excluding the edge pixels, prediction error values are slightly modified through shifting the prediction errors to embed data. The aim of the proposed scheme is to decrease the number of modified pixels to improve transparency by keeping the edge pixel values of the image. The experimental results have demonstrated that the proposed method is capable of hiding more secret data than the known techniques at the same PSNR, thus proving that using an edge detector to locate plain areas for lossless data embedding can enhance the performance in terms of data embedding rate versus the PSNR of marked images with respect to the original image.
1.24
0.24
0.12
0.08
0.048
0.02
0.005714
0.000816
0
0
0
0
0
0
Improving Detailed Routability and Pin Access with 3D Monolithic Standard Cells. We study the impact of using 3D monolithic (3DM) standard cells on improving detailed routability and pin access. We propose a design flow which transforms standard rows of single-tier "2D" cells into rows of standard 3DM cells folded into two tiers. The transformation preserves layout characteristics such as overall area and number of metal layers for signal routing (i.e., M2 and above). It also creates redundant pins and free routing tracks in the two tiers used by the 3DM cells. We then propose an Integer Linear Program which routes as many nets as possible on the free 3DM routing tracks, leaving the rest of the nets to be routed via a standard global and detailed router on the metal layers dedicated for signal routing. Our experiments show significant improvement in detailed routability metrics using 3DM cells compared to using 2D standard cells.
Assessing Benefits of a Buried Interconnect Layer in Digital Designs. In sub-15 nm technology nodes, local metal layers have witnessed extremely high congestion leading to pin-access-limited designs, and hence affecting the chip area and related performance. In this paper, we assess the benefits of adding a buried interconnect layer below the device layers for the purpose of reducing cell area, improving pin access, and reducing chip area. After adding the buried layer to a projected 7 nm standard cell library, results show ~9%–13% chip area reduction and 126% pin access improvement. This shows that buried interconnect, as an integration primitive, is very promising as an alternative method to density scaling.
Physical Layout after Half a Century: From Back-Board Ordering to Multi-Dimensional Placement and Beyond. Innovations and advancements in physical design (PD) over the past half century have contributed significantly to the progress of modern VLSI design. While "Moore's Law" and "Dennard Scaling" have been slowing down recently, the physical design community has encountered a set of challenges and opportunities. This article is presented on the occasion of the Lifetime Achievement Award given to Dr. Satoshi Goto by ISPD 2017. Dr. Goto's career in VLSI design sets an exemplary role model for young engineers. Thus, we use his contributions as a thread to describe our personal view of physical layout, from early back-board ordering to recent multi-dimensional placement and the future.
Pin Accessibility-Driven Detailed Placement Refinement. The significantly increased number of routing design rules at sub-20nm nodes has made pin access one of the most critical challenges in detailed routing. Resolving pin access issues in detailed routing stage may be too late due to the fixed pin locations, especially in the area with high pin density. In placement stage when cell movement is allowed, the consideration of pin access has more flexibility. We propose a refinement stage after detailed placement to improve pin access. To respect the given placement solution, the refinement techniques are restricted to cell flipping, same-row adjacent cell swap, and cell shifting. A cost function is presented to model pin access for each pin-to-pin connection. Based on the cost function, two phases are proposed to improve pin access for all the connections simultaneously. In the first phase, we refine the placement by cell flipping and same-row adjacent cell swap. The problem is solved by dynamic programming row by row. In the second phase, only cell shifting is used, and a linear program is formulated to further refine the placement. Experimental results demonstrate that the proposed detailed placement refinement can improve pin access and reduce unroutable nets by about 33% in the detailed routing stage.
MCFRoute: a detailed router based on multi-commodity flow method Detailed routing is an important stage in VLSI physical design. Due to the high routing complexity, it is difficult for existing routing methods to guarantee total completion without design rule checking violations (DRCs), and it generally takes several days for designers to fix the remaining DRCs. Studies have shown that the low routing quality partly results from the non-optimal net-ordering nature of traditional sequential methods. In this paper, a novel concurrent detailed routing algorithm is presented that overcomes the net-order problem. Based on the multi-commodity flow (MCF) method, the detailed routing problem with complex design rule constraints is formulated as an integer linear programming (ILP) problem. Experiments show that the proposed algorithm is capable of reducing design rule violations while introducing no negative effects on wirelength and via count. Implemented as a detailed router following track assignment, the algorithm can reduce the DRCs by 38%, while wirelength and via count are reduced by 3% and 2.7%, respectively, compared with an industry tool. Additionally, the algorithm is adopted as an incremental detailed router to refine a routing solution, and experimental results show that the number of DRCs that the industry tool cannot fix is further reduced by half. Utilizing the independence between subregions, an efficient parallelization algorithm is implemented that achieves a close-to-linear speedup.
Secure spread spectrum watermarking for multimedia This paper presents a secure (tamper-resistant) algorithm for watermarking images, and a methodology for digital watermarking that may be generalized to audio, video, and multimedia data. We advocate that a watermark should be constructed as an independent and identically distributed (i.i.d.) Gaussian random vector that is imperceptibly inserted in a spread-spectrum-like fashion into the perceptually most significant spectral components of the data. We argue that insertion of a watermark under this regime makes the watermark robust to signal processing operations (such as lossy compression, filtering, digital-analog and analog-digital conversion, requantization, etc.), and common geometric transformations (such as cropping, scaling, translation, and rotation) provided that the original image is available and that it can be successfully registered against the transformed watermarked image. In these cases, the watermark detector unambiguously identifies the owner. Further, the use of Gaussian noise ensures strong resilience to multiple-document, or collusional, attacks. Experimental results are provided to support these claims, along with an exposition of pending open problems.
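A minimal sketch of the insertion and detection rules summarized above: an i.i.d. Gaussian watermark is mixed multiplicatively into the largest-magnitude AC coefficients of a whole-image DCT (v_i' = v_i(1 + alpha w_i), one of the insertion variants the paper discusses), and detection correlates the watermark extracted with the help of the original image against the stored watermark. The parameter values, function names, and the use of scipy here are illustrative assumptions, not the paper's reference implementation.

    import numpy as np
    from scipy.fft import dctn, idctn

    def embed_ss_watermark(img, n=1000, alpha=0.1, seed=7):
        """Mix an i.i.d. Gaussian watermark into the n largest-magnitude AC DCT coefficients."""
        coeffs = dctn(img.astype(float), norm="ortho")
        flat = coeffs.ravel()
        idx = np.argsort(np.abs(flat))[::-1]          # rank coefficients by magnitude
        idx = idx[idx != 0][:n]                        # skip the DC term at flat index 0
        w = np.random.default_rng(seed).standard_normal(n)
        flat[idx] *= (1.0 + alpha * w)                 # v_i' = v_i (1 + alpha * w_i)
        return idctn(flat.reshape(coeffs.shape), norm="ortho"), w

    def detect_ss_watermark(marked, original, w, n=1000, alpha=0.1):
        """Non-blind similarity test: extract a candidate watermark and correlate it with w."""
        co = dctn(original.astype(float), norm="ortho").ravel()
        cm = dctn(marked.astype(float), norm="ortho").ravel()
        idx = np.argsort(np.abs(co))[::-1]
        idx = idx[idx != 0][:n]
        w_hat = (cm[idx] - co[idx]) / (alpha * co[idx])
        return float(w_hat @ w / np.sqrt(w_hat @ w_hat))

    if __name__ == "__main__":
        img = np.random.default_rng(0).integers(0, 256, (64, 64))
        marked, w = embed_ss_watermark(img)
        print("similarity:", detect_ss_watermark(marked, img, w))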
Wire routing by optimizing channel assignment within large apertures The purpose of this paper is to introduce a new wire routing method for two layer printed circuit boards. This technique has been developed at the University of Illinois Center for Advanced Computation and has been programmed in ALGOL for a B5500 computer. The routing method is based on the newly developed channel assignment algorithm and requires many via holes. The primary goals of the method are short execution time and high wireability. Actual design specifications for ILLIAC IV Control Unit boards have been used to test the feasibility of the routing technique. Tests have shown that this algorithm is very fast and can handle large boards.
Synthesizing Systolic Arrays with Control Signals from Recurrence Equations We present a technique for synthesizing systolic arrays which have non-uniform data flow governed by control signals. The starting point for the synthesis is an Affine Recurrence Equation—a generalization of the simple recurrences encountered in mathematics. A large class of programs, including most (single and multiple) nested-loop programs can be described by such recurrences. In this paper we extend our earlier work (Rajopadhye and Fujimoto 1986) in two principal directions. Firstly, we characterize a class of transformations called data pipelining and show that they yield recurrences that have linear conditional expressions governing the computation. Secondly, we discuss the synthesis of systolic arrays that have non-uniform data flow governed by control signals. We show how to derive the control signals in such arrays by applying similar pipelining transformations to these linear conditional expressions. The approach is illustrated by deriving the Guibas-Kung-Thompson architecture for computing the cost of optimal string parenthesization.
Algorithm 245: Treesort
Special purpose automatic programming for hidden surface elimination In many applications of three dimensional computer graphics, the appearance of the same scene must be computed repeatedly for many different positions of the viewer. This paper concerns a scheme for exploiting this property of an application for the purpose of improving the efficiency of the hidden surface computation. The scheme involves a kind of automatic programming: for each scene to be considered, a different special purpose program is automatically constructed. The special purpose program then takes the position of the viewer as input, and generates a suitable description of the scene with hidden surfaces removed as output. Since the special purpose program has a very limited task to perform - it is adapted to handling just one scene - it can be much faster than any general purpose algorithm would be for the same scene. The paper describes a method by which special purpose programs for hidden surface elimination can be constructed in a fully automatic manner. The method has been implemented, and results of experiments are given.
Incremental Data-Flow Analysis An incremental update algorithm modifies the solution of a problem that has been changed, rather than re-solving the entire problem. ACINCF and ACINCB are incremental update algorithms for forward and backward data-flow analysis, respectively, based on our equations model of Allen-Cocke interval analysis. In addition, we have studied their performance on a “nontoy” structured programming language L. Given a set of localized program changes in a program written in L, we identify a priori the nodes in its flow graph whose corresponding data-flow equations may be affected by the changes. We characterize these possibly affected nodes by their corresponding program structures and their relation to the original change sites, and do so without actually performing the incremental updates. Our results can be refined to characterize the reduced equations possibly affected if structured loop exit mechanisms are used, either singly or together, thereby relating richness of programming language usage to the ease of incremental updating.
A data-hiding technique with authentication, integration, and confidentiality for electronic patient records. A data-hiding technique called the "bipolar multiple-number base" was developed to provide capabilities of authentication, integration, and confidentiality for an electronic patient record (EPR) transmitted among hospitals through the Internet. The proposed technique is capable of hiding those EPR related data such as diagnostic reports, electrocardiogram, and digital signatures from doctors or a hospital into a mark image. The mark image could be the mark of a hospital used to identify the origin of an EPR. Those digital signatures from doctors and a hospital could be applied for the EPR authentication. Thus, different types of medical data can be integrated into the same mark image. The confidentiality is ultimately achieved by decrypting the EPR related data and digital signatures with an exact copy of the original mark image. The experimental results validate the integrity and the invisibility of the hidden EPR related data. This newly developed technique allows all of the hidden data to be separated and restored perfectly by authorized users.
High Capacity Lossless Data Hiding Based On Histogram Modification This letter introduces a lossless reversible data hiding method. The original image is recovered without any distortion from the marked image following the extraction of the hidden data. The main algorithm is based on the relocation of the zeros (or minima) and peaks of the histograms of the blocks of the original image to embed the data. It leads to the modification of the grey values of some pixels. It can embed more data than many of the existing reversible data hiding algorithms. The PSNR of the proposed method is better than that of many of the existing reversible data hiding techniques. A selection of the extensive experimental results is presented to prove its validity.
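A minimal sketch of the peak/zero histogram-shifting step described above, for a single peak/zero pair on one grayscale block. Transmission of the peak and zero values, handling of a non-empty minimum bin, and multi-block bookkeeping are omitted, and the names are ours.

    import numpy as np

    def hs_embed(block, bits):
        """Histogram-shifting embedding with one peak/zero pair.

        Pixels strictly between the peak and zero bins are shifted by one to empty
        the bin next to the peak; peak pixels then carry one bit each
        (0: stay at the peak, 1: move into the emptied bin).
        Returns (marked block, peak, zero, number of embedded bits).
        """
        block = block.astype(np.int16)
        hist = np.bincount(block.ravel(), minlength=256)
        peak = int(hist.argmax())
        zero = int(hist.argmin())                 # ideally an empty bin
        lo, hi = (peak, zero) if peak < zero else (zero, peak)
        step = 1 if peak < zero else -1
        block[(block > lo) & (block < hi)] += step   # shift the range between peak and zero
        k = 0
        for pos, v in np.ndenumerate(block):          # embed into peak pixels in scan order
            if k >= len(bits):
                break
            if v == peak:
                block[pos] += step * bits[k]
                k += 1
        return block.astype(np.uint8), peak, zero, k

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        img = rng.integers(60, 200, (32, 32)).astype(np.uint8)
        marked, peak, zero, k = hs_embed(img, [1, 0, 1, 1, 0, 1])
        print("peak", peak, "zero", zero, "embedded", k, "bits")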
Efficient aerial data collection with UAV in large-scale wireless sensor networks Data collection from deployed sensor networks can be performed with a static sink, a ground-based mobile sink, or an Unmanned Aerial Vehicle (UAV) based mobile aerial data collector. Considering large-scale sensor networks and the peculiarity of the deployed environments, aerial data collection based on a controllable UAV has more advantages. In this paper, we have designed a basic framework for aerial data collection, which includes the following five components: deployment of networks, node positioning, anchor point searching, fast path planning for the UAV, and data collection from the network. We have identified the key challenges in each of them and have proposed efficient solutions. This includes the proposal of a Fast Path Planning with Rules (FPPWR) algorithm based on grid division, to increase the efficiency of path planning while guaranteeing the length of the path to be relatively short. We have designed and implemented a simulation platform for aerial data collection from sensor networks and have validated the performance efficiency of the proposed framework based on the following parameters: time consumption of the aerial data collection, flight path distance, and volume of collected data.
1.1
0.1
0.1
0.05
0.025
0
0
0
0
0
0
0
0
0
Localized Lossless Authentication Watermark (LAW) A novel framework is proposed for lossless authentication watermarking of images which allows authentication and recovery of original images without any distortions. This overcomes a significant limitation of traditional authentication watermarks that irreversibly alter image data in the process of watermarking and authenticate the watermarked image rather than the original. In particular, authenticity is verified before full reconstruction of the original image, whose integrity is inferred from the reversibility of the watermarking procedure. This reduces computational requirements in situations when either the verification step fails or the zero-distortion reconstruction is not required. A particular instantiation of the framework is implemented using a hierarchical authentication scheme and the lossless generalized-LSB data embedding mechanism. The resulting algorithm, called localized lossless authentication watermark (LAW), can localize tampered regions of the image; has a low embedding distortion, which can be removed entirely if necessary; and supports public/private key authentication and recovery options. The effectiveness of the framework and the instantiation is demonstrated through examples.
Digital Watermarking - 9th International Workshop, IWDW 2010, Seoul, Korea, October 1-3, 2010, Revised Selected Papers
Robust Lossless Watermarking Based On Circular Interpretation Of Bijective Transformations For The Protection Of Medical Databases In this paper, we adapt the image lossless watermarking modulation proposed by De Vleeschouwer et al., based on the circular interpretation of bijective modulations, to the protection of medical relational databases. Our scheme modulates the numerical attributes of the database. It is suited for either copyright protection, integrity control or traitor tracing, being robust to most common database attacks, such as the addition and removal of tuples and the modification of attributes' values. Conducted experiments on a medical database of inpatient hospital stay records illustrate the overall performance of our method and its suitability to the requirements of the medical domain.
Reversible Data Hiding for Audio Based on Prediction Error Expansion This paper proposes a reversible data hiding method for digital audio using prediction error expansion technique. Firstly, the prediction error of the original audio is obtained by applying an integer coefficient predictor. Secondly, a location map is set up to record the expandability of all audio samples, and then it is compressed by lossless compression coding and taken as a part of secret information. Finally, the reconstructed secret information is embedded into the audio using prediction error expansion technique. After extracting the embedded information, the original audio can be perfectly restored. Experimental results show that the proposed algorithm can achieve high embedding capacity while keeping good quality of the stego-audio.
Lossless Watermarking Of Categorical Attributes For Verifying Medical Data Base Integrity In this article, we propose a new lossless or reversible watermarking approach that allows the embedding of a message within the categorical data of a relational database. The reversibility property of our scheme is achieved by adapting the well-known histogram shifting modulation. Based on this algorithm, we derive a system for verifying the integrity of the database content, that is, detecting the addition, removal or modification of any tuples or attributes. Such a content integrity check is independent of the manner in which the database is stored or structured. We illustrate the overall capability of our method and its constraints of deployment considering one medical database of inpatient hospital stay records. In particular, we reversibly watermark ICD-10 diagnostic codes.
Database Relation Watermarking Resilient against Secondary Watermarking Attacks There has been tremendous interest in watermarking multimedia content during the past two decades, mainly for proving ownership and detecting tampering. Digital fingerprinting, which deals with identifying malicious user(s), has also received significant attention. While extensive work has been carried out in watermarking of images, other multimedia objects still have enormous research potential. Watermarking database relations is one of the several areas which demand research focus owing to the commercial implications of database theft. Recently, there has been little progress in database watermarking, with most of the watermarking schemes modeled after the irreversible database watermarking scheme proposed by Agrawal and Kiernan. Reversibility is the ability to re-generate the original (unmarked) relation from the watermarked relation using a secret key. As explained in our paper, reversible watermarking schemes provide greater security against secondary watermarking attacks, where an attacker watermarks an already marked relation in an attempt to erase the original watermark. This paper proposes an improvement over the reversible and blind watermarking scheme presented in [5], identifying and eliminating a critical problem with the previous model. Experiments show that the average watermark detection rate is around 91%, even with the attacker distorting half of the attributes. The current scheme provides security against secondary watermarking attacks.
Capacity bounds and constructions for reversible data-hiding An undesirable side effect of many watermarking and data-hiding schemes is that the host signal into which auxiliary data is embedded is distorted. Finding an optimal balance between the amount of information embedded and the induced distortion is therefore an active field of research. With the rediscovery of Costa's (1983) seminal paper entitled Writing on dirty paper, there has been considerable progress in understanding the fundamental limits of the capacity versus distortion of watermarking and data-hiding schemes. For some applications, however, no distortion resulting from auxiliary data, however small, is allowed. In these cases the use of reversible data-hiding methods provide a way out. A reversible data-hiding scheme is defined as a scheme that allows complete and blind restoration (i.e. without additional signaling) of the original host data. Practical reversible data-hiding schemes have been proposed by Fridrich et al. (2002), but little attention has been paid to the theoretical limits. It is the purpose of this paper to repair this situation and to provide some first results on the limits of reversible data-hiding. Admittedly, the examples provided in this paper are toy examples, but they are indicative of more practical schemes that will be presented in subsequent papers.
Invertible authentication watermark for JPEG images We present two new invertible watermarking methods for authentication of digital images in the JPEG format. While virtually all previous authentication watermarking schemes introduced some small amount of non-invertible distortion in the image, the new methods are invertible in the sense that, if the image is deemed authentic, the distortion due to authentication can be completely removed to obtain the original image data. The first technique is based on lossless compression of biased bit-streams derived from the quantized JPEG coefficients. The second technique modifies the quantization matrix to enable lossless embedding of one bit per DCT coefficient. Both techniques are fast and can be used for general distortion-free (invertible) data embedding. The new methods provide new information assurance tools for integrity protection of sensitive imagery, such as medical images or high-importance military images viewed under non-standard conditions when usual criteria for visibility do not apply
Robust lossless data hiding: Analysis and evaluation
Reversible steganographic method using SMVQ approach based on declustering Reversible steganography allows an original image to be completely restored after the extraction of hidden data embedded in a cover image. In this paper, we propose a reversible scheme for VQ-compressed images that is based on a declustering strategy and takes advantage of the local spatial characteristics of the image. The main advantages of our method are ease of implementation, low computational demands, and no requirement for auxiliary data. The experimental results show that the embedding capacity of the proposed method is controlled by the number of declustered groups and the spatial characteristics of the cover image. The results also show that the proposed method has the best embedding capacity and the least running time compared with other methods.
The first 50 years of electronic watermarking Electronic watermarking can be traced back as far as 1954. The last 10 years has seen considerable interest in digital watermarking, due, in large part, to concerns about illegal piracy of copyrighted content. In this paper, we consider the following questions: is the interest warranted? What are the commercial applications of the technology? What scientific progress has been made in the last 10 years? What are the most exciting areas for research? And where might the next 10 years take us? In our opinion, the interest in watermarking is appropriate. However, we expect that copyright applications will be overshadowed by applications such as broadcast monitoring, authentication, and tracking content distributed within corporations. We further see a variety of applications emerging that add value to media, such as annotation and linking content to the Web. These latter applications may turn out to be the most compelling. Considerable progress has been made toward enabling these applications--perceptual modelling, security threats and countermeasures, and the development of a bag of tricks for efficient implementations. Further progress is needed in methods for handling geometric and temporal distortions. We expect other exciting developments to arise from research in informed watermarking.
Asymmetric-histogram based reversible information hiding scheme using edge sensitivity detection •An edge sensitivity analysis method is applied to reduce the prediction error.•The asymmetric-histogram shifting is used to restore the error value.•Complementary mechanism can get better image quality in multi-level embedding.•Better information capacity is achieved through two predictive methods.
On synthetic benchmark generation methods In the process of designing complex chips and systems, the use of benchmark designs is often necessary. However, the existing benchmark suites are not sufficient for the evaluation of new architectures and EDA tools; synthetic benchmark circuits are a viable alternative. In this paper, a systematic approach for the generation and evaluation of synthetic benchmark circuits is presented. A number of existing benchmark generation methods are examined using direct validation of size and topological parameters. This exposes certain features and drawbacks of the different methods. 1. INTRODUCTION VLSI technology continues to evolve. The development and evaluation of new technologies, architectures and electronic design automation (EDA) tools requires a large number of benchmark circuits. A careful selection allows the reduction of the number of circuits needed. A benchmark suite is a set of circuits that - in the ideal case - is representative for the circuit space, or at least that part at which the EDA tool is aimed. Benchmark suites form a standard comparison basis that is accepted throughout the EDA community. with low level physical descriptions. Since most of the benchmark generation methods that have appeared in literature apply to FPGA circuits, we will focus this study on structural descriptions of technology mapped circuits at the logic level. However, most of this section can be extrapolated to other descriptions at other levels of abstraction as well. A circuit is a graph that consists of nodes that are connected through nets. The nodes are fanin-restricted lookup tables (LUTs), sequential elements, such as latches or flip flops (FFs) and primary in- and outputs (PIs, POs). A number of parameters can be identified. We distinguish three classes: size parameters These parameters express the size of the circuit, and can be directly obtained from the netlist of the graph. Examples are: the number of nodes and nets, the number of unreachable nodes, the number of FFs, the number of primary in- and outputs, etc. topological parameters These parameters determine the interconnection structure of the circuit. Some can be obtained directly, such as the average fanin and the fanin distribution of the LUTs, and the average net degree and net degree distribution. When using a bipartite graph representation where we distinguish between logic elements (LUTs) and sequential elements (FFs), we can define delay using the unit delay model. The delay distribution is an important characteristic, especially for timing-driven applications. The most important topological parameter is probably the Rent characteristic, which will be addressed more thoroughly in section 4. functional parameters At a detailed functional level, the logic function of every node is known. Typical functional parameters are: the number of literals in a two level sum-of-product expansion, or the number of BDD nodes for a given variable ordering. The redundancy can be defined as , where
A Webpage Data Hiding Method by Using Tag and CSS Attribute Setting Computer network connections have become the most important way for people to contact each other, share information, and transmit private data. Because the Internet is not secure enough, data hiding techniques provide a good way to deliver secret data securely. An HTML webpage can be used not only to advertise a company's products but also to share someone's experience or knowledge. An HTML file is different from a digital image because it is composed of tags rather than pixels. CSS provides additional options that help HTML authors create colorful web pages. The proposed method utilizes the properties of both HTML and CSS to achieve the goal of secret data delivery. The experimental results indicate that the proposed method has a larger embedding capacity than others.
1.014973
0.016893
0.011765
0.005946
0.003929
0.002419
0.000863
0.000177
0.000035
0.000022
0.000011
0.000002
0
0
Hiding message into DNA sequence through DNA coding and chaotic maps. The paper proposes an improved reversible substitution method to hide data into deoxyribonucleic acid (DNA) sequence, and four measures have been taken to enhance the robustness and enlarge the hiding capacity, such as encode the secret message by DNA coding, encrypt it by pseudo-random sequence, generate the relative hiding locations by piecewise linear chaotic map, and embed the encoded and encrypted message into a randomly selected DNA sequence using the complementary rule. The key space and the hiding capacity are analyzed. Experimental results indicate that the proposed method has a better performance compared with the competing methods with respect to robustness and capacity.
Histogram-bin-shifting-based reversible watermarking for colour images Histogram-bin-shifting has previously been shown to be an effective method of reversibly watermarking greyscale images. For colour image reversible watermarking, the histogram-bin-shifting technique can be extended trivially to the RGB colour space. However, direct application of histogram-bin-shifting to the RGB colour image components results in relatively poor performance of the watermarking algorithm. In order to improve the performance of the algorithm in terms of embedding capacity and distortion while preserving the inherent computational simplicity of the histogram-bin-shifting technique, the authors propose a technique of shifting the frequency histogram bins of transformed colour components. In this study, the authors consider the YCbCr colour-space. Experimental results on standard test images prove that the proposed technique achieves high embedding capacity with considerably low distortion.
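The scheme above shifts histogram bins of transformed (YCbCr) colour components instead of raw RGB channels. Since a reversible scheme ultimately needs an exactly invertible mapping, the sketch below shows the integer reversible colour transform (RCT) from JPEG 2000 as a related, lossless alternative; it is given purely for illustration and is not claimed to be the transform used in the paper.

    def rct_forward(r, g, b):
        """JPEG 2000 reversible colour transform (integer, exactly invertible)."""
        y = (r + 2 * g + b) // 4
        cb = b - g
        cr = r - g
        return y, cb, cr

    def rct_inverse(y, cb, cr):
        g = y - (cb + cr) // 4          # Python's // floors, matching the forward pass
        r = cr + g
        b = cb + g
        return r, g, b

    if __name__ == "__main__":
        for rgb in [(12, 200, 77), (255, 0, 128), (3, 3, 3)]:
            assert rct_inverse(*rct_forward(*rgb)) == rgb
        print("round-trip OK")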
A DNA-based data hiding technique with low modification rates In 2010, Shiu et al. proposed three DNA-based reversible data hiding schemes with high embedding capacity. However, their schemes were not focused on DNA modification rate or the expansion problem. Therefore, we propose a novel reversible data hiding scheme based on histogram technique to solve the weaknesses of Shiu et al.'s schemes. The proposed scheme transforms the DNA sequence into a binary string and then combines several bits into a decimal integer. These decimal integers are used to generate a histogram. Afterwards, the proposed scheme uses a histogram technique to embed secret data. The experimental results show that the modification rate of our proposed scheme is 69 % lower than that of Shiu et al.'s schemes for the same embedding capacity. In addition, the length of the DNA sequence remains unchanged in the proposed scheme.
Nested quantization index modulation for reversible watermarking and its application to healthcare information management systems. Digital watermarking has attracted much research interest in healthcare information management systems for access control, protection of patients' data, and information retrieval. The well-known quantization index modulation (QIM)-based watermarking has its limitations, as the host image is destroyed; however, the recovery of medical images is essential to avoid misdiagnosis. In this paper, we propose nested QIM-based watermarking, which is preferable to QIM-based watermarking for medical image applications because the host image can be exactly reconstructed. The capacity of the embedded watermark can be increased by taking advantage of the proposed nested structure. The algorithm and mathematical model of the nested QIM-based watermarking, including the forward and inverse models, are presented. Owing to the algorithms and architectures of the forward and inverse nested QIM, concurrent programs and special processors for nested QIM-based watermarking are easily implemented.
A novel high-capacity reversible watermarking scheme A novel reversible data hiding scheme is proposed in this article. Each pixel is predicted by its right neighboring pixel in scan order to obtain its prediction error. A companding technique is then introduced to largely increase the number of prediction errors available for embedding. Accordingly, a location map recording the available positions can be compressed into a short bitstream. By largely decreasing the capacity consumed by the compressed location map, a high hiding capacity is achieved. A series of experiments is conducted to verify the feasibility and effectiveness of the proposed approach.
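For orientation, a bare-bones prediction-error expansion with the right-neighbour predictor might look as follows; the companding step and the compressed location map, which are the contributions above, are not modelled, and overflow handling is omitted.

```python
def pee_embed(row, bits):
    """Toy prediction-error expansion: pixel i is predicted by its original right
    neighbour, the error is doubled and one payload bit is appended.
    Overflow/underflow handling (the job of the location map) is omitted."""
    out = list(row)
    m = min(len(bits), len(row) - 1)          # the last pixel carries no payload
    for i in range(m):
        e = row[i] - row[i + 1]
        out[i] = row[i + 1] + 2 * e + bits[i]
    return out, m

def pee_extract(stego, m):
    """Recover payload and original row, restoring right-to-left so that each
    prediction uses an already-recovered neighbour."""
    rec, bits = list(stego), [0] * m
    for i in range(m - 1, -1, -1):
        e2 = stego[i] - rec[i + 1]            # equals 2*e + bit
        bits[i] = e2 & 1
        rec[i] = rec[i + 1] + (e2 >> 1)
    return bits, rec

row = [100, 103, 101, 99, 98]
stego, m = pee_embed(row, [1, 0, 1])
bits, recovered = pee_extract(stego, m)
assert bits == [1, 0, 1] and recovered == row
```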
Separable data hiding in encrypted images based on compressive sensing and discrete Fourier transform. Reversible data hiding in encrypted images has become an effective and popular way to preserve the security and privacy of users' personal images. Recently, Xiao et al. first presented reversible data hiding in encrypted images using the modern signal processing technique of compressive sensing (CS). However, the quality of the decrypted image is not high enough. In this paper, a new method of separable data hiding in encrypted images is proposed using CS and the discrete Fourier transform, which takes full advantage of both the real and imaginary coefficients to ensure good recovery and provide a flexible payload. Compared with the original work, the proposed method obtains better image quality when concealing the same embedding capacity. Furthermore, image decryption and data extraction are separable in the proposed method, and the secret data can be extracted relatively accurately.
Reversible watermarking of 2D-vector data This paper presents a reversible watermarking scheme for 2D-vector data (point coordinates), which are popularly used in geographical information related applications. The scheme exploits the high correlation among points in the same polygon in a map and achieves reversibility through an 8-point integer DCT, which ensures that the original 2D-vector data can be watermarked during the embedding process and then perfectly restored during the extraction process, with the watermark accurately extracted at the same time. In our scheme, we use an efficient highest-frequency-coefficient modification technique in the integer DCT domain to modulate the watermark bit "0" or "1", which can be determined during extraction without any additional indication information. To alleviate the visual distortion in the watermarked map caused by the coefficient modification, we propose an improved reversible watermarking scheme based on the original coefficient modification technique. Combined with this improved scheme, the embedding capacity can be greatly increased while the watermarking distortion is reduced compared to the original coefficient modification scheme. To control the distortion between the watermarked 2D-vector data and its original version, we design an error estimation method to prevent data that would cause large distortions in the coefficient modification process from being watermarked. This distortion limitation scheme can finely adjust the watermarking distortion and capacity of the proposed coefficient-modification-based reversible watermarking scheme according to different practical requirements.
Reversible Data Hiding in Encrypted Images by Reversible Image Transformation. With the popularity of outsourcing data to the cloud, it is vital to protect the privacy of data and, at the same time, enable the cloud server to easily manage the data. Under such demands, reversible data hiding in encrypted images (RDH-EI) attracts more and more researchers' attention. In this paper, we propose a novel framework for RDH-EI based on reversible image transformation (RIT). Different from all previous encryption-based frameworks, in which the ciphertexts may attract the attention of the curious cloud, the RIT-based framework allows the user to transform the content of the original image into the content of another target image of the same size. The transformed image, which looks like the target image, is used as the "encrypted image" and is outsourced to the cloud. Therefore, the cloud server can easily embed data into the "encrypted image" with any RDH method for plaintext images. Thus a client-free scheme for RDH-EI can be realized, that is, the data-embedding process executed by the cloud server is independent of the processes of both encryption and decryption. Two RDH methods, a traditional RDH scheme and a unified embedding and scrambling scheme, are adopted to embed the watermark in the encrypted image, which can satisfy different needs for image quality and large embedding capacity, respectively.
Efficient Steganographic Embedding by Exploiting Modification Direction A novel method of steganographic embedding in digital images is described, in which each secret digit in a (2n+1)-ary notational system is carried by n cover pixels and, at most, only one pixel is increased or decreased by 1. In other words, the (2n+1) different ways of modifying the cover pixels correspond to the (2n+1) possible values of a secret digit. Because the directions of modification are fully exploited, the proposed method provides higher embedding efficiency than previous techniques.
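The EMD rule is compact enough to state directly: for a group of n cover pixels the extraction function is f = (sum of i times the i-th pixel) mod (2n+1), and at most one pixel is moved by plus or minus 1 so that f equals the secret digit. A minimal sketch:

```python
def emd_embed(pixels, digit):
    """Embed one (2n+1)-ary digit into a group of n pixels, changing at most one
    pixel by 1. Saturated pixels (0 or 255) would need special handling."""
    n = len(pixels)
    base = 2 * n + 1
    f = sum((i + 1) * p for i, p in enumerate(pixels)) % base
    out = list(pixels)
    s = (digit - f) % base
    if s == 0:
        return out                      # the group already carries the digit
    if s <= n:
        out[s - 1] += 1                 # +1 on the pixel with weight s raises f by s
    else:
        out[base - s - 1] -= 1          # -1 on the pixel with weight 2n+1-s lowers f by 2n+1-s
    return out

def emd_extract(pixels):
    n = len(pixels)
    return sum((i + 1) * p for i, p in enumerate(pixels)) % (2 * n + 1)

group = [101, 99, 100, 102]             # n = 4, so secret digits are base 9
stego = emd_embed(group, 7)
assert emd_extract(stego) == 7
```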
A Sorting Problem
A fast algorithm for active contours A method of controlling snakes that combines speed, flexibility, and simplicity is presented. It is compared to the original variational calculus method of M. Kass et al. (1987) and the dynamic programming method developed by A.A. Amini et al. (1988) and found to be comparable in final results, while being faster than dynamic programming and more stable and flexible for including hard constraints than the variational calculus approach. The introduction of the concept of curvature highlights the problem of how to approximate curvature when a curve is represented by a set of discrete points. The advantages and disadvantages of a number of different approximations of curvature are pointed out
Introduction of Sensor Spectral Response Into Image Fusion Methods. Application to Wavelet-Based Methods Usual image fusion methods inject features from a high spatial resolution panchromatic sensor into every low spatial resolution multispectral band trying to preserve spectral signatures and improve spatial resolution to that of the panchromatic sensor. The objective is to obtain the image that would be observed by a sensor with the same spectral response (i.e., spectral sensitivity and quantum eff...
Wavelet Based Texture Segmentation Of Multi-Modal Tomographic Images This paper presents a segmentation pipeline for computer-based automatic analysis of multi-modal tomographic images. It is a computer based support for the localization of pathological tissues such as brain tumors. The segmentation pipeline of the presented approach includes texture analysis, classification with a modified Kohonen Feature Map, a collection of classifiers and knowledge based morphological postprocessing. Furthermore this paper presents a statistical investigation that compares the wavelet transform to classical texture analysis methods. Patient data which was acquired using magnetic resonance imaging (MRI) and computer tomography (CT) is used for this investigation. (C) 1997 Elsevier Science Ltd.
Dual-image-based reversible data hiding method using center folding strategy In recent years, dual-image techniques have been widely used in reversible data hiding. Most data hiding methods focus on finding the best predictive value to reduce the degree of stego image distortion. After finding a predicted value, most methods convert the secret data directly into secret symbols, using the decimal or other numeral systems, and then add these to (or subtract them from) the predictive value (or pixel) to complete the embedding. However, this study found that the secret data are a very important factor affecting image quality. If the value of the secret symbols is too large, it may cause greater differences between the stego pixels and the original pixels. Hence, this study employs the center folding strategy to reduce the value of the secret symbols. The reduced symbols are then embedded in two stego images through an averaging method, to maintain image quality. In addition, underflow/overflow pixels are replaced with the original pixels to reduce unnecessary image distortion. The experimental results show that the proposed method performs fairly well compared to other dual-image techniques. For the same embedding capacity, its image quality is also better, on average, than that of other methods by at least 2 dB. Furthermore, the proposed method determines the embedding capacity with the k value: the larger the k value, the higher the embedding capacity. The experimental results show that for the same image quality, the embedding capacity of the proposed method is better, on average, than that of other methods by at least 0.5 bits per pixel. The proposed method is applicable not only to experimental images, but also works well for general images in terms of embedding capacity, image quality and execution time. This research presents a dual-image-based reversible hiding technique. The center folding strategy is applied to achieve low distortion. The reduced symbols are embedded in two stego images through an averaging method. The performance of the proposed method is better than that of other methods. The proposed method can adapt to different image characteristics.
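A minimal sketch of the center folding step and the averaging-based dual-image embedding described above; the exact rounding convention and the underflow/overflow replacement are assumptions, and recovery relies on the identities p = floor((p1 + p2)/2) and folded value = p1 - p2.

```python
import math

def cf_embed(pixel, digit, k):
    """Center folding: shift a k-bit digit into [-2**(k-1), 2**(k-1) - 1], then
    split it across two stego pixels around the original value.
    Underflow/overflow replacement, as described above, is omitted."""
    folded = digit - 2 ** (k - 1)
    p1 = pixel + math.ceil(folded / 2)
    p2 = pixel - math.floor(folded / 2)
    return p1, p2

def cf_extract(p1, p2, k):
    pixel = (p1 + p2) // 2              # the ceil/floor halves cancel on average
    digit = (p1 - p2) + 2 ** (k - 1)    # p1 - p2 reproduces the folded value
    return pixel, digit

p1, p2 = cf_embed(120, digit=5, k=3)    # folded value: 5 - 4 = 1
assert cf_extract(p1, p2, k=3) == (120, 5)
```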
Scores: 1.24, 0.08, 0.048, 0.021818, 0.01, 0.005, 0.001611, 0.000476, 0.000009, 0, 0, 0, 0, 0
A robust error concealment technique using data hiding for image and video transmission over lossy channels A robust error concealment scheme using data hiding which aims at achieving high perceptual quality of images and video at the end-user despite channel losses is proposed. The scheme involves embedding a low-resolution version of each image or video frame into itself using spread-spectrum watermarking, extracting the embedded watermark from the received video frame, and using it as a reference for reconstruction of the parent image or frame, thus detecting and concealing the transmission errors. Dithering techniques have been used to obtain a binary watermark from the low-resolution version of the image/video frame. Multiple copies of the dithered watermark are embedded in frequencies in a specific range to make it more robust to channel errors. It is shown experimentally that, based on the frequency selection and scaling factor variation, a high-quality watermark can be extracted from a low-quality lossy received image/video frame. Furthermore, the proposed technique is compared to its two-part variant where the low-resolution version is encoded and transmitted as side information instead of embedding it. Simulation results show that the proposed concealment technique using data hiding outperforms existing approaches in improving the perceptual quality, especially in the case of higher loss probabilities.
The Lightweight User Datagram Protocol (UDP-Lite)
Robust source decoding of variable-length encoded video data taking into account source constraints We first provide an analysis of available redundancy remaining in compressed video data. This residual redundancy has two origins: the variable-length code (VLC) syntax and the source constraints. We compute the two in terms of equivalent redundancy bits. Then, a VLC decoding algorithm able to exploit both VLC syntax and source constraints for improving the decoding performance is outlined. Finally, it is shown that by taking this redundancy into account, video decoding over additive white Gaussian noise channels is substantially improved.
Iterative error detection and correction of H.263 coded video for wireless networks Due to the use of variable length code (VLC), a single bit error in the discrete cosine transform (DCT)-based coded bit stream, such as H.263, may propagate up to the end of the group of blocks (GOBs) or slice. In this paper, we propose an iterative error detection and correction algorithm for the slice mode of the H.263 bit stream. The visibly erroneous macroblocks (MBs) in the decoded frames are detected by checking a set of error detection conditions derived from the redundant information (such as neighboring MBs and inner-DCT block similarity measure) inherent within the frame. In each slice, the part of the bit stream before the first erroneous MB is decoded in the conventional manner. The remaining part after this MB is redecoded iteratively, skipping one bit at a time until a decodable subbit stream is found. In case an MB is nondecodable, it is replaced with gray level. Once all the slices are checked, the frame is reconstructed and again checked for erroneous MBs. This process is repeated until the decoder detects no more corrupted MB. The proposed step-by-step decoding technique limits the error into a few MBs only, which can easily be concealed by any error concealment technique. The simulation results demonstrate that our scheme can recover the corrupted frames under the bit error rates up to 1% over binary symmetric channel (BSC), and improve the concealed picture quality by 4-6 dB over the conventional methods.
A fragile watermark error detection scheme for wireless video communications In video communications over error-prone channels, compressed video streams are extremely sensitive to bit errors. Often random and burst bit errors impede correct decoding of parts of a received bitstream. Video decoders normally utilize error concealment techniques to repair a damaged decoded frame, but the effectiveness of these error concealment schemes relies heavily on correctly locating errors in the bitstream. In this paper, we propose a fragile watermark-based error detection and localization scheme called "force even watermarking (FEW)". A fragile watermark is forced onto quantized DCT coefficients at the encoder. If at the decoder side the watermark is no longer intact, errors exist in the bitstream associated with a particular macro-block (MB). Thanks to the watermark, bitstream errors can accurately be located at MB level, which facilitates proper error concealment. This paper describes the algorithm, model and analysis of the watermarking procedure. Our simulation results show that compared to the syntax-based error detection schemes, the proposed FEW scheme significantly improves the error detection capabilities of the video decoder, while the peak signal-to-noise ratio loss and additional computational costs due to watermark embedding and extraction are small.
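The core of force even watermarking can be sketched in a few lines: the encoder forces selected quantized DCT coefficients to even values, and the decoder flags a macroblock as corrupted as soon as one of those coefficients turns up odd. Watermarking every AC coefficient, as done below, is a simplification of the published selection policy.

```python
import numpy as np

def few_embed(qcoeffs: np.ndarray) -> np.ndarray:
    """Force the selected quantized DCT coefficients of a macroblock to even values.
    Here every AC coefficient is watermarked; the real scheme picks a subset to
    limit the quality loss."""
    wm = qcoeffs.copy()
    ac = np.ones_like(wm, dtype=bool)
    ac.flat[0] = False                              # leave the DC coefficient untouched
    odd = ac & (wm % 2 != 0)
    wm[odd] -= np.sign(wm[odd])                     # move odd values one step toward zero
    return wm

def few_check(qcoeffs: np.ndarray) -> bool:
    """True if the macroblock still satisfies the even-parity watermark."""
    ac = np.ones_like(qcoeffs, dtype=bool)
    ac.flat[0] = False
    return not np.any(qcoeffs[ac] % 2 != 0)

block = np.array([[12, 3, 0], [-5, 2, 1], [0, 0, -1]])
marked = few_embed(block)
assert few_check(marked)                            # an intact block passes
corrupted = marked.copy()
corrupted[1, 1] += 1
assert not few_check(corrupted)                     # a bit error is detected
```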
H.264/AVC over IP H.264 is the ITU-T's new, nonbackward compatible video compression Recommendation that significantly outperforms all previous video compression standards. It consists of a video coding layer (VCL) which performs all the classic signal processing tasks and generates bit strings containing coded macroblocks, and a network adaptation layer (NAL) which adapts those bit strings in a network friendly way. The paper describes the use of H.264 coded video over best-effort IP networks, using RTP as the real-time transport protocol. After a description of the environment, the error-resilience tools of H.264 and the draft specification of the RTP payload format are introduced. Next the performance of several possible VCL- and NAL-based error-resilience tools of H.264 are verified in simulations.
Reversible fragile watermarking for locating tampered blocks in JPEG images This paper proposes a novel fragile watermarking scheme for JPEG image authentication. The watermark is generated by folding the hash results of quantized coefficients, and each block is used to carry two watermark bits using a reversible data-hiding method. Because modification to the cover is small, the visual quality of watermarked image is satisfactory. On the receiver side, one may attempt to extract the watermark and recover the original content. By measuring mismatch between the watermark data extracted from the received image and derived from the recovered content, the blocks containing fake content can be located accurately, while the original information in the other blocks is retrieved without any error as long as the tampered area is not extensive.
Self-embedding fragile watermarking with restoration capability based on adaptive bit allocation mechanism In this paper, we propose a novel fragile watermarking scheme with content restoration capability. Authentication-bits are produced using the image hashing method with a folding operation. The low-frequency component of the nonsubsampled contourlet transform (NSCT) coefficients is used to encode the restoration-bits for each block by the adaptive bit allocation mechanism. During the bit allocation, all the blocks are categorized into different types according to their degree of smoothness, and, complex blocks, which are deemed to have higher priority than smooth blocks, are allocated more bits. Two algorithms are utilized to adjust the block classification and the binary representations in order to guarantee that the numbers of the self-embedding authentication-bits and restoration-bits are exactly suitable for 1-LSB embedding capacity. On the receiver side, the extracted authentication-bits and the decoded restoration-bits are used to localize and restore the tampered blocks, respectively. Due to the low embedding volume, the visual quality of the watermarked image is satisfactory. Experimental results also show that the proposed scheme provides high restoration quality.
Counterfeiting attacks on oblivious block-wise independent invisible watermarking schemes We describe a class of attacks on certain block-based oblivious watermarking schemes. We show that oblivious watermarking techniques that embed information into a host image in a block-wise independent fashion are vulnerable to a counterfeiting attack. Specifically, given a watermarked image, one can forge the watermark it contains into another image without knowing the secret key used for watermark insertion and in some cases even without explicitly knowing the watermark. We demonstrate successful implementations of this attack on a few watermarking techniques that have been proposed in the literature. We also describe a possible solution to this problem of block-wise independence that makes our attack computationally intractable
A reversible data hiding scheme for VQ indices using locally adaptive coding Data hiding is designed to solve the problem of secure information exchange through public networks such as the Internet. In this paper, we present an improved reversible data hiding scheme that can recover the original VQ indices after data extraction. As with Chang et al.'s scheme, our proposed scheme also depends on the locally adaptive coding scheme. However, experimental results confirm that the hiding capacity of our proposed scheme is around 1.36 bpi for most digital images, which is typically higher than that of Chang et al.'s [17]. Moreover, the average compression rate achieved with our proposed scheme is 0.49 bpp, which outperforms Lin and Chang's scheme (0.50 bpp), Tsai's scheme (0.50 bpp), Chang et al.'s scheme (0.53 bpp), and Yang and Lin's scheme (0.53 bpp).
Secure reversible visible image watermarking with authentication This paper proposes a secure reversible visible watermarking approach. The proposed pixel mapping function superposes a binary watermark image on a host image to create an intermediate visible watermarked image. Meanwhile, an almost inverse function generates the recovery data for restoring the original pixels. To prevent unauthorized users from approximating the original pixels in the watermarked region, this method adds an integer sequence in the intermediate watermarked image. The sequence is composed of integers generated by two random variables having normal distributions with zero means and distinct variances. The variances facilitate a trade-off between the watermark transparency and the noise generated by unauthorized users. The proposed method also uses Lagrange multipliers to find the optimized variances for the trade-off. Finally, this method uses reversible data embedding to embed the recovery data and hash value for reversibility and authentication, respectively. Experimental results show the watermark visibility for test images along with the watermark transparency for different variances. Using the optimized variances, the watermarked image is at the balance between the watermark transparency and the unauthorized-user-generating noise.
Modulo arithmetic-based image watermarking and its theoretical analysis of image-quality This paper proposes a watermarking method that hides an integer in an image by modifying the sum of the quantized discrete cosine transform coefficients. Because modulo arithmetic-based modulation is applied to the integer to be embedded, less image distortion is achieved. Moreover, coefficients not chosen to embed the integer are restored to their pre-quantized form, even though they were once quantized. The image quality of a watermarked image is theoretically analyzed and is found to be superior to that of image-quality-guaranteed watermarking methods.
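The abstract gives only the high-level rule (the embedded integer is encoded in the sum of selected quantized coefficients modulo some base), so the following is a stripped-down illustration of that idea rather than the paper's algorithm:

```python
def embed_modulo_sum(coeffs, message, base):
    """Adjust a block so that sum(coeffs) % base == message, touching a single
    coefficient by the smallest possible amount (a deliberate simplification)."""
    out = list(coeffs)
    delta = (message - sum(out)) % base
    if delta:
        # move up by delta or down by base - delta, whichever change is smaller
        out[0] += delta if delta <= base - delta else -(base - delta)
    return out

def extract_modulo_sum(coeffs, base):
    return sum(coeffs) % base

block = [14, -3, 2, 0, 1]
marked = embed_modulo_sum(block, message=5, base=8)
assert extract_modulo_sum(marked, base=8) == 5
```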
A hierarchical SOM-based intrusion detection system Purely based on a hierarchy of self-organizing feature maps (SOMs), an approach to network intrusion detection is investigated. Our principal interest is to establish just how far such an approach can be taken in practice. To do so, the KDD benchmark data set from the International Knowledge Discovery and Data Mining Tools Competition is employed. Extensive analysis is conducted in order to assess the significance of the features employed, the partitioning of training data and the complexity of the architecture. Contributions that follow from such a holistic evaluation of the SOM include recognizing that (1) best performance is achieved using a two-layer SOM hierarchy, based on all 41 features from the KDD data set; (2) only 40% of the original training data is sufficient for training purposes; and (3) the 'Protocol' feature provides the basis for a switching parameter, thus supporting modular solutions to the detection problem. The ensuing detector provides false positive and detection rates of 1.38% and 90.4% under test conditions, where this represents the best performance to date of a detector based on an unsupervised learning algorithm.
Adaptive predictor combination for lossless image coding This paper is concerned with adaptive prediction for lossless image coding. A new predictor which is an adaptive combination of a set of fixed predictors with a transform domain LMS based predictor is proposed. When a context-based arithmetic encoder is used to encode the prediction error, the compression performance of the proposed algorithm is better than that of the state-of-the-art algorithms and is close to TMW at a fraction of its computational complexity.
Scores: 1.026639, 0.029, 0.029, 0.027629, 0.017631, 0.005473, 0.001851, 0.000147, 0.000053, 0.000025, 0.00001, 0, 0, 0
Random Walks for Image Segmentation A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs.
Variational B-spline level-set: a linear filtering approach for fast deformable model evolution. In the field of image segmentation, most level-set-based active-contour approaches take advantage of a discrete representation of the associated implicit function. We present in this paper a different formulation where the implicit function is modeled as a continuous parametric function expressed on a B-spline basis. Starting from the active-contour energy functional, we show that this formulation allows us to compute the solution as a restriction of the variational problem on the space spanned by the B-splines. As a consequence, the minimization of the functional is directly obtained in terms of the B-spline coefficients. We also show that each step of this minimization may be expressed through a convolution operation. Because the B-spline functions are separable, this convolution may in turn be performed as a sequence of simple 1-D convolutions, which yields an efficient algorithm. As a further consequence, each step of the level-set evolution may be interpreted as a filtering operation with a B-spline kernel. Such filtering induces an intrinsic smoothing in the algorithm, which can be controlled explicitly via the degree and the scale of the chosen B-spline kernel. We illustrate the behavior of this approach on simulated as well as experimental images from various fields.
Visualization of 3D ultrasound data It is suggested that ultrasound data acquisition will play an increasing role in the future of medical imaging. Unlike magnetic resonance imaging (MRI) and computerized tomography (CT), ultrasound offers interactive visualization of underlying anatomy in real time. Additionally, ultrasound equipment costs far less and does not use ionizing radiation or require specialized facilities. The different methods for multidimensional medical imaging and scientific visualization are reviewed. Several volume visualization algorithms are discussed. They are multiplexer slicing, surface fitting, volume rendering, data classification, and viewing and shading. Three-dimensional ultrasound data display methods are also discussed.
Embedding Overlap Priors in Variational Left Ventricle Tracking. We propose to embed overlap priors in variational tracking of the left ventricle (LV) in cardiac magnetic resonance (MR) sequences. The method consists of evolving two curves toward the LV endo- and epicardium boundaries. We derive the curve evolution equations by minimizing two functionals each containing an original overlap prior constraint. The latter measures the conformity of the overlap between the nonparametric (kernel-based) intensity distributions within the three target regions-LV cavity, myocardium and background-to a prior learned from a given segmentation of the first frame. The Bhattacharyya coefficient is used as an overlap measure. Different from existing intensity-driven constraints, the proposed priors do not assume implicitly that the overlap between the intensity distributions within different regions has to be minimal. This prevents both the papillary muscles from being included erroneously in the myocardium and the curves from spilling into the background. Although neither geometric training nor preprocessing were used, quantitative evaluation of the similarities between automatic and independent manual segmentations showed that the proposed method yields a competitive score in comparison with existing methods. This allows more flexibility in clinical use because our solution is based only on the current intensity data, and consequently, the results are not bounded to the characteristics, variability, and mathematical description of a finite training set. We also demonstrate experimentally that the overlap measures are approximately constant over a cardiac sequence, which allows to learn the overlap priors from a single frame.
Statistical deformable model-based segmentation of image motion. We present a statistical method for the motion-based segmentation of deformable structures undergoing nonrigid movements. The proposed approach relies on two models describing the shape of interest, its variability, and its movement. The first model corresponds to a statistical deformable template that constrains the shape and its deformations. The second model is introduced to represent the optical flow field inside the deformable template. These two models are combined within a single probability distribution, which enables to derive shape and motion estimates using a maximum likelihood approach. The method requires no manual initialization and is demonstrated on synthetic data and on a medical X-ray image sequence.
A survey on super-resolution imaging. The key objective of super-resolution (SR) imaging is to reconstruct a higher-resolution image based on a set of images, acquired from the same scene and denoted as ‘low-resolution’ images, to overcome the limitation and/or ill-posed conditions of the image acquisition process for facilitating better content visualization and scene recognition. In this paper, we provide a comprehensive review of SR image and video reconstruction methods developed in the literature and highlight the future research challenges. The SR image approaches reconstruct a single higher-resolution image from a set of given lower-resolution images, and the SR video approaches reconstruct an image sequence with a higher-resolution from a group of adjacent lower-resolution image frames. Furthermore, several SR applications are discussed to contribute some insightful comments on future SR research directions. Specifically, the SR computations for multi-view images and the SR video computation in the temporal domain are discussed.
A Unified Framework for Atlas Matching Using Active Appearance Models We propose to use statistical models of shape and texture as deformable anatomical atlases. By training on sets of labelled examples these can represent both the mean structure and appearance of anatomy in medical images, and the allowable modes of deformation. Given enough training examples such a model should be able to synthesise any image of normal anatomy. By finding the parameters which minimise the difference between the synthesised model image and the target image we can locate all the modelled structure. This potentially time-consuming step can be solved rapidly using the Active Appearance Model (AAM). In this paper we describe the models and the AAM algorithm and demonstrate the approach on structures in MR brain cross-sections.
Quadruple Difference Expansion-Based Reversible Data Hiding Method for Digital Images Lossless data hiding is a special type of data hiding technique that guarantees not only the secret data but also that the cover media can be reconstructed without any distortion. A recent lossless data hiding technique, proposed by Hong Lin Jin, is based on hiding only one data bit in the spatial domain of a gray-level image. However, this method uses double difference expansion to embed bits, which results in a small embedding capacity. For this purpose, we propose an improved algorithm with the potential of increasing the payload capacity and maintaining good image quality. The proposed improved algorithm is characterized by two aspects. First, the proposed improved reversible data hiding scheme is enhanced to exhibit data hiding in color palette images. Second, the embedding level is improved by using quadruple difference expansion to guarantee the embedding of 2-bit data into color images. Experiments with the proposed improved method have been conducted over several well-known test images. The results show that the proposed improved method significantly improves the embedding capacity over Hong Lin Jin's scheme by 15-35% for grayscale images and 20-46% for color images while still maintaining the quality of the stego-images.
Tailored reversible watermarking schemes for authentication of electronic clinical atlas. It is accepted that digital watermarking is quite relevant in medical imaging. However, due to the special nature of clinical practice, it is often required that watermarking not introduce irreversible distortions to medical images. The electronic clinical atlas has such a need of "lossless" watermarking. We present two tailored reversible watermarking schemes for the clinical atlas by exploiting its inherent characteristics. We have implemented the schemes and our experimental results look very promising.
A new polynomial-time algorithm for linear programming We present a new polynomial-time algorithm for linear programming. The running time of this algorithm is O(n^3.5 L^2), as compared to O(n^6 L^2) for the ellipsoid algorithm. We prove that given a polytope P and a strictly interior point a in P, there is a projective transformation of the space that maps P, a to P', a' having the following property: the ratio of the radius of the smallest sphere with center a' containing P' to the radius of the largest sphere with center a' contained in P' is O(n). The algorithm consists of repeated application of such projective transformations, each followed by optimization over an inscribed sphere, to create a sequence of points which converges to the optimal solution in polynomial time.
Software development: two approaches to animation of Z specifications using Prolog Formal methods rely on the correctness of the formal requirements specification, but this correctness cannot be proved. This paper discusses the use of software tools to assist in the validation of formal specifications and advocates a system by which Z specifications may be animated as Prolog programs. Two Z/Prolog translation strategies are explored; formal program synthesis and structure simulation. The paper explains why the former proved to be unsuccessful and describes the techniques developed for implementing the latter approach, with the aid of case studies
Optimal source codes for geometrically distributed integer alphabets (Corresp.) Let P(i) = (1 - theta)theta^i be a probability assignment on the set of nonnegative integers, where theta is an arbitrary real number, 0 < theta < 1. We show that an optimal binary source code for this probability assignment is constructed as follows. Let l be the integer satisfying theta^l + theta^(l+1) <= 1 < theta^l + theta^(l-1), and represent each nonnegative integer i as i = lj + r, where j = floor(i/l), the integer part of i/l, and r = i mod l. Encode j by a unary code (i.e., j zeros followed by a single one), and encode r by a Huffman code, using codewords of length floor(log2 l) for r < 2^(floor(log2 l)+1) - l, and length floor(log2 l) + 1 otherwise. An optimal code for the nonnegative integers is the concatenation of those two codes.
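The construction above is the Golomb code with parameter l; a direct transcription of the encoder might look like this (decoding simply mirrors the unary and truncated-binary parts):

```python
def golomb_parameter(theta: float) -> int:
    """Smallest l with theta**l + theta**(l + 1) <= 1, as required above."""
    l = 1
    while theta ** l + theta ** (l + 1) > 1:
        l += 1
    return l

def golomb_encode(i: int, l: int) -> str:
    j, r = divmod(i, l)                       # i = l*j + r
    unary = "0" * j + "1"                     # j zeros followed by a single one
    if l == 1:
        return unary                          # the remainder carries no information
    b = (l - 1).bit_length()                  # ceil(log2(l))
    cutoff = 2 ** b - l
    # truncated binary: the first `cutoff` remainders use b - 1 bits, the rest use b bits
    if r < cutoff:
        return unary + format(r, "b").zfill(b - 1)
    return unary + format(r + cutoff, "b").zfill(b)

theta = 0.8
l = golomb_parameter(theta)                   # l = 3 for theta = 0.8
print(l, [golomb_encode(i, l) for i in range(6)])
```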
Geometrically invariant image watermarking using Polar Harmonic Transforms This paper presents an invariant image watermarking scheme by introducing the Polar Harmonic Transform (PHT), which is a recently developed orthogonal moment method. Similar to Zernike moment (ZM) and pseudo-Zernike moment (PZM) approaches, PHT is defined on a circular domain. The magnitudes of PHTs are invariant to image rotation and scaling. Furthermore, the PHTs are free of numerical instability, so they are more suitable for watermarking. In this paper, the invariant properties of PHTs are investigated. During embedding, a subset of the accurate PHTs are modified according to the binary watermark sequence. Then a compensation image is formatted by reconstructing the modified PHT vector. The final watermarked image is obtained by adding the compensation image to the original image. In the decoder, the watermark can be retrieved from the magnitudes of the PHTs directly. Experimental results illustrate that the proposed scheme out-performs ZM/PZM based schemes in terms of embedding capacity and watermark robustness and is also robust to both geometric and signal processing based attacks. (C) 2012 Elsevier Inc. All rights reserved.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of the embedded cover image to ensure the security of the secret data. The multilayer embedding of a pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting, while the peak point remains unchanged. At the extraction end, we no longer need the key information about the peak point; we can directly find the peak point from the histogram to extract the secret data. We also exploit localization to make the histogram of the embedded cover image almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
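For orientation, the snippet below shows a plain single-peak histogram-shifting pass, the classic baseline this method builds on; the localization step and the use of the two neighbours of the peak, which are the contributions above, are not reproduced.

```python
import numpy as np

def hs_embed(img: np.ndarray, bits, peak: int) -> np.ndarray:
    """Plain histogram shifting: pixels above the peak shift right by one to open a
    gap, and peak-valued pixels each carry one bit (peak -> peak or peak + 1).
    Overflow at 255 is ignored in this toy version."""
    out = img.astype(np.int32)
    out[out > peak] += 1                        # open an empty bin at peak + 1
    flat = out.ravel()
    bit_iter = iter(bits)
    for idx in np.flatnonzero(flat == peak):
        b = next(bit_iter, None)
        if b is None:
            break
        flat[idx] += b                          # bit 1 -> peak + 1, bit 0 -> stays at peak
    return out

def hs_extract(stego: np.ndarray, peak: int, n_bits: int):
    flat = stego.ravel()
    carriers = np.flatnonzero((flat == peak) | (flat == peak + 1))[:n_bits]
    bits = [int(flat[i] == peak + 1) for i in carriers]
    rec = stego.copy()
    rec[rec == peak + 1] = peak                 # undo the bit embedding
    rec[rec > peak + 1] -= 1                    # close the gap
    return bits, rec

img = np.array([[3, 5, 5], [7, 5, 2], [5, 9, 5]])
stego = hs_embed(img, [1, 0, 1, 1], peak=5)
bits, rec = hs_extract(stego, peak=5, n_bits=4)
assert bits == [1, 0, 1, 1] and np.array_equal(rec, img)
```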
Scores: 1.084444, 0.075556, 0.075556, 0.075556, 0.075556, 0.066667, 0.06, 0.033333, 0.000013, 0, 0, 0, 0, 0
Minimizing Congestion in General Networks A principal task in parallel and distributed systems is to reduce the communication load in the interconnection network, as this is usually the major bottleneck for the performance of distributed applications. In this paper we introduce a framework for solving on-line problems that aim to minimize the congestion (i.e., the maximum load of a network link) in general topology networks. We apply this framework to the problem of on-line routing of virtual circuits and to a dynamic data management problem. For both scenarios we achieve a competitive ratio of O(log^3 n) with respect to the congestion of the network links. Our on-line algorithm for the routing problem has the remarkable property that it is oblivious, i.e., the path chosen for a virtual circuit is independent of the current network load. Oblivious routing strategies can easily be implemented in distributed environments and have therefore been intensively studied for certain network topologies such as meshes, tori and hypercubic networks. This is the first oblivious path selection algorithm that achieves a polylogarithmic competitive ratio in general networks.
Optimal oblivious routing in polynomial time A recent seminal result of Racke is that for any network there is an oblivious routing algorithm with a polylog competitive ratio with respect to congestion. Unfortunately, Racke's construction is not polynomial time. We give a polynomial time construction that guarantees Racke's bounds, and more generally gives the true optimal ratio for any network.
A polynomial-time tree decomposition to minimize congestion Räcke recently gave a remarkable proof showing that any undirected multicommodity flow problem can be routed in an oblivious fashion with congestion that is within a factor of O(log^3 n) of the best off-line solution to the problem. He also presented interesting applications of this result to distributed computing. Maggs, Miller, Parekh, Ravi and Wu have shown that such a decomposition also has an application to speeding up iterative solvers of linear systems. Räcke's construction finds a decomposition tree of the underlying graph, along with a method to obliviously route in a hierarchical fashion on the tree. The construction, however, uses exponential-time procedures to build the decomposition. The non-constructive nature of his result was remedied, in part, by Azar, Cohen, Fiat, Kaplan, and Räcke, who gave a polynomial time method for building an oblivious routing strategy. Their construction was not based on finding a hierarchical decomposition, and this precludes its application to iterative methods for solving linear systems. In this paper, we show how to compute a hierarchical decomposition and a corresponding oblivious routing strategy in polynomial time. In addition, our decomposition gives an improved competitive ratio for congestion of O(log^2 n log log n). In an independent result in this conference, Bienkowski, Korzeniowski, and Räcke give a polynomial-time method for constructing a decomposition tree with competitive ratio O(log^4 n). We note that our original submission used essentially the same algorithm, and we appreciate them allowing us to present this improved version.
Shortest-path routing in arbitrary networks We introduce an on-line protocol which routes any set of N packets along shortest paths with congestion C and dilation D through an arbitrary network in O(C + D + log N) steps, with high probability. This time bound is optimal up to the additive log N, and it has previously only been reached for bounded-degree leveled networks.
Optimal Oblivious Path Selection on the Mesh In the oblivious path selection problem, each packet in the network independently chooses a path, which is an important property if the routing algorithm is to be independent of the traffic distribution. The quality of the paths is determined by the congestion C, the maximum number of paths crossing an edge, and the dilation D, the maximum path length. So far, the oblivious algorithms studied in the literature have focused on minimizing the congestion while ignoring the dilation. An open problem is to give algorithms for networks in which C and D can be controlled simultaneously. Here, we solve this problem for the d-dimensional mesh. We present an oblivious algorithm for which C and D are both within O(d2) of optimal. The algorithm uses randomization, and we show that the number of random bits required per packet is within O(d) of the minimum number of random bits required by any algorithm that obtains the same congestion. For fixed d, our algorithm is asymptotically optimal.
A Scheme For Fast Parallel Communication
Generative programming: methods, tools, and applications
AntHocNet: an adaptive nature-inspired algorithm for routing in mobile ad hoc networks In this paper, we describe AntHocNet, an algorithm for routing in mobile ad hoc networks. It is a hybrid algorithm, which combines reactive path setup with proactive path probing, maintenance and improvement. The algorithm is based on the nature-inspired ant colony optimisation framework. Paths are learned by guided Monte Carlo sampling using ant-like agents communicating in a stigmergic way. In an extensive set of simulation experiments, we compare AntHocNet with AODV, a reference algorithm in the field. We show that our algorithm can outperform AODV on different evaluation criteria. AntHocNet's performance advantage is visible over a broad range of possible network scenarios, and increases for larger, sparser and more mobile networks. Copyright (c) 2005 AEIT.
A practical theory of programming Programs are predicates, programming is proving, and termination is timing.
Special purpose automatic programming for hidden surface elimination In many applications of three dimensional computer graphics, the appearance of the same scene must be computed repeatedly for many different positions of the viewer. This paper concerns a scheme for exploiting this property of an application for the purpose of improving the efficiency of the hidden surface computation. The scheme involves a kind of automatic programming: for each scene to be considered, a different special purpose program is automatically constructed. The special purpose program then takes the position of the viewer as input, and generates a suitable description of the scene with hidden surfaces removed as output. Since the special purpose program has a very limited task to perform - it is adapted to handling just one scene - it can be much faster than any general purpose algorithm would be for the same scene. The paper describes a method by which special purpose programs for hidden surface elimination can be constructed in a fully automatic manner. The method has been implemented, and results of experiments are given.
Annotated production systems: a model for skill acquisition Annotated Production Systems provide a procedural model for skill acquisition by augmenting a production model of the skill with formal commentary describing plans, bugs, and interrelationships among the various productions. This commentary supports efficient interpretation, self-debugging and self-improvement. The theory of annotated productions is developed by analyzing the skill of attitude instrument flying. An annotated production interpreter has been written that executes skill models which control a flight simulator. Preliminary evidence indicates that annotated productions effectively model certain bugs and certain learning behaviors characteristic of students.
Iterative water-filling for Gaussian vector multiple access channels We develop an efficient iterative water-filling algorithm to find an optimal transmit spectrum for maximum sum capacity in a Gaussian multiple access channel with vector inputs and a vector output. The iterative algorithm converges from any starting point and reaches within (K-1)/2 nats per output dimension from the K-user sum capacity after just one iteration
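A small numerical sketch of the iterative water-filling idea for the special case of parallel scalar sub-channels (the paper treats general vector inputs and outputs): each user in turn water-fills its power against the noise plus the interference currently created by the other users. The gains, budgets, and unit noise power below are illustrative assumptions.

```python
import numpy as np

def waterfill(noise, power, iters=60):
    """Single-user water-filling over parallel channels: p_j = max(0, mu - noise_j),
    with the water level mu found by bisection so the powers sum to `power`."""
    lo, hi = 0.0, noise.min() + power
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - noise).sum() > power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - noise)

def iterative_waterfilling(gains, powers, rounds=50):
    """gains[k, j]: gain of user k on sub-channel j; powers[k]: power budget of user k.
    Each user in turn water-fills against noise plus the other users' interference
    (background noise power is taken to be 1)."""
    K, J = gains.shape
    p = np.zeros((K, J))
    for _ in range(rounds):
        for k in range(K):
            interference = 1.0 + (gains * p).sum(axis=0) - gains[k] * p[k]
            p[k] = waterfill(interference / gains[k], powers[k])
    return p

gains = np.array([[1.0, 0.5, 0.2], [0.3, 1.2, 0.8]])
p = iterative_waterfilling(gains, powers=np.array([3.0, 2.0]))
print(np.round(p, 3), p.sum(axis=1))            # each row sums to its power budget
```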
Adaptive DE-based reversible steganographic technique using bilinear interpolation and simplified location map In this paper, an adaptive DE-based reversible steganographic scheme with bilinear interpolation and a simplified location map is proposed. The traditional reversible difference expansion (DE) scheme suffers from two problems: the number of embeddable locations is insufficient, and the embedding payload control capability in single-layer embedding is weak. For the first problem, the kernel of bilinear interpolation is applied to effectively increase the number of embeddable locations while the quality of the stego-image is maintained at a good level. In addition, the proposed simplified location map is used with the existing adaptive embedding rule to address the second problem, so that the secret data can be adaptively embedded and the load of additional information can be reduced. The experimental results revealed that the proposed scheme presented better visual quality of the stego-image and carried a larger embedding payload than some other revised DE schemes, such as Alattar's and Lee's schemes.
Reversible data hiding based on local histogram shifting with multilayer embedding We use the two neighboring points of the peak point to embed the secret data. We improve the histogram of the embedded cover image to ensure the security of the secret data. The multilayer embedding of a pyramid structure is adopted to increase the capacity. Our proposed method has a higher embedding capacity with lower distortion. In this paper, we present a new reversible data hiding method based on histogram shifting using localization. Our proposed method selects the peak point as the reference point, then uses the two neighboring points of the peak point to achieve secret data embedding based on histogram shifting, while the peak point remains unchanged. At the extraction end, we no longer need the key information about the peak point; we can directly find the peak point from the histogram to extract the secret data. We also exploit localization to make the histogram of the embedded cover image almost the same as the histogram of the original cover image. The embedding capacity is also increased rapidly by the localization with multilayer embedding. Experimental results show that our proposed method is effective and superior.
Scores: 1.028109, 0.029224, 0.028587, 0.027935, 0.022852, 0.011532, 0, 0, 0, 0, 0, 0, 0, 0
Genetic algorithm and difference expansion based reversible watermarking for relational databases In this paper, we present a new robust and reversible watermarking approach for the protection of relational databases. Our approach is based on the idea of difference expansion and utilizes a genetic algorithm (GA) to improve watermark capacity and reduce distortion. The proposed approach is reversible; therefore, distortion introduced after watermark insertion can be fully restored. Using the GA, different attributes are explored to meet the optimal criteria rather than selecting less effective attributes for watermark insertion. Checking only the distortion tolerance of two attributes for a selected tuple may not be useful for watermark capacity and distortion; therefore, the distortion tolerance of different attributes is explored. Distortion caused by difference expansion can help an attacker predict the watermarked attribute. Thus, we have incorporated tuple- and attribute-wise distortion in the fitness function of the GA, making it tough for an attacker to predict the watermarked attribute. From the experimental analysis, it is concluded that the proposed technique provides improved capacity and reduced distortion compared to existing approaches. The problems of false positives and of changes in attribute order at the detection side are also resolved. Additionally, the proposed technique is resilient against a wide range of attacks such as addition, deletion, sorting, bit flipping, tuple-wise-multifaceted, attribute-wise-multifaceted, and additive attacks.
The Distortion Control Method of Reversible Contrast Mapping Hiding Scheme Reversible information hiding embeds secret information in multimedia so that, through an extraction algorithm, the concealed information can be extracted and the original host media can be fully recovered. Coltuc et al. developed a simple and efficient reversible hiding scheme based on reversible contrast mapping (RCM), which does not need a location map to recover the host image. However, their scheme cannot fully control the positions where information is embedded, so the quality of the stego image is poor. Therefore, this paper proposes an enhanced hiding scheme based on Kamstra et al.'s distortion control technique to improve Coltuc et al.'s scheme. The experimental results show that the image quality of the proposed scheme is better than that of Coltuc et al.'s scheme for a limited amount of embedded data.
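For reference, the reversible contrast mapping itself is the pair transform (x, y) -> (2x - y, 2y - x); the sketch below shows only the forward and inverse maps, leaving out the LSB payload embedding and the control bits of the full scheme.

```python
def rcm_forward(x: int, y: int):
    """Reversible contrast mapping of a pixel pair (payload embedding not shown)."""
    xp, yp = 2 * x - y, 2 * y - x
    if not (0 <= xp <= 255 and 0 <= yp <= 255):
        raise ValueError("pair outside the transformable domain")
    return xp, yp

def rcm_inverse(xp: int, yp: int):
    """Exact inverse: 2*xp + yp = 3*x and xp + 2*yp = 3*y, so both divisions are exact."""
    return (2 * xp + yp) // 3, (xp + 2 * yp) // 3

pair = (100, 110)
assert rcm_inverse(*rcm_forward(*pair)) == pair
```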
Procedural content generation for games: A survey Hundreds of millions of people play computer games every day. For them, game content—from 3D objects to abstract puzzles—plays a major entertainment role. Manual labor has so far ensured that the quality and quantity of game content matched the demands of the playing community, but is facing new scalability challenges due to the exponential growth over the last decade of both the gamer population and the production costs. Procedural Content Generation for Games (PCG-G) may address these challenges by automating, or aiding in, game content generation. PCG-G is difficult, since the generator has to create the content, satisfy constraints imposed by the artist, and return interesting instances for gamers. Despite a large body of research focusing on PCG-G, particularly over the past decade, ours is the first comprehensive survey of the field of PCG-G. We first introduce a comprehensive, six-layered taxonomy of game content: bits, space, systems, scenarios, design, and derived. Second, we survey the methods used across the whole field of PCG-G from a large research body. Third, we map PCG-G methods to game content layers; it turns out that many of the methods used to generate game content from one layer can be used to generate content from another. We also survey the use of methods in practice, that is, in commercial or prototype games. Fourth and last, we discuss several directions for future research in PCG-G, which we believe deserve close attention in the near future.
A High Payload Histogram-Based Reversible Watermarking Using Linear Prediction Reversible watermarking is an efficient method of embedding secret information into a digital work for the purpose of copyright protection, certification, tracking, etc. This paper presents a high payload reversible watermarking algorithm for digital images using linear prediction. Prediction error and histogram shifting are used for embedding data. Different from previous methods, we utilize linear prediction to obtain the initial prediction, which is then modified by calculating the variance of adjacent pixels to get a more accurate prediction image. Since the prediction-error histogram is more concentrated, the proposed method can achieve larger capacity and less distortion by using histogram shifting. The experimental results indicate the validity of the proposed method compared with other schemes.
Extracting multiple features in the CID color space for face recognition. This correspondence presents a novel face recognition method that extracts multiple features in the color image discriminant (CID) color space, where three new color component images, D1, D2, and D3, are derived using an iterative algorithm. As different color component images in the CID color space display different characteristics, three different image encoding methods are presented to effectively extract features from the component images for enhancing pattern recognition performance. To further improve classification performance, the similarity scores due to the three color component images are fused for the final decision making. Experimental results using two large-scale face databases, namely, the face recognition grand challenge (FRGC) version 2 database and the FERET database, show the effectiveness of the proposed method.
Lossless Watermarking Of Categorical Attributes For Verifying Medical Data Base Integrity In this article, we propose a new lossless or reversible watermarking approach that allows the embedding of a message within categorical data of relational database. The reversibility property of our scheme is achieved by adapting the well known histogram shifting modulation. Based on this algorithm we derive a system for verifying the integrity of the database content, it means detecting addition, removal or modification of any t-uples or attributes. Such a content integrity check is independent of the manner the database is stored or structured. We illustrate the overall capability of our method and its constraints of deployment considering one medical database of inpatient hospital stay records. Especially, we reversibly watermark ICD-10 diagnostic codes.
Efficient adaptive prediction based reversible image watermarking In this paper, we propose a new reversible watermarking algorithm based on additive prediction-error expansion which can recover original image after extracting the hidden data. Embedding capacity of such algorithms depend on the prediction accuracy of the predictor. We observed that the performance of a predictor based on full context prediction is preciser as compared to that of partial context prediction. In view of this observation, we propose an efficient adaptive prediction (EAP) method based on full context, that exploits local characteristics of neighboring pixels much effectively than other prediction methods reported in literature. Experimental results demonstrate that the proposed algorithm has a better embedding capacity and also gives better Peak Signal to Noise Ratio (PSNR) as compared to state-of-the-art reversible watermarking schemes.
Reversible data hiding with contrast enhancement and tamper localization for medical images. Digital transmission of medical images often involves hiding crucial information in some parts of the images which should be later extracted to authenticate ownership and identity. In this paper, a new reversible data hiding (RDH) algorithm for medical images is proposed. The primary objective of the algorithm is to achieve contrast enhancement of the region of interest (ROI) without introducing distortion, and achieve tamper localization against attacks on the ROI. First, the background and ROI of the medical image are segmented using Otsu's automatic optimal thresholding method. To reduce the visual distortion, an improved scheme for preprocessing is applied to reduce the number of disordered pixels. By expanding the peak-pairs of the ROI histogram, data embedding along with distortion-less contrast enhancement of the ROI is achieved. The feature-bit matrix generated from the ROI is embedded into the least significant bits (LSBs) of the background pixels. At the receiving end, the tampered contents from the ROI of the detected image can be located using a difference matrix between the feature-bit matrix generated from the ROI and that extracted from the background. In the absence of tampering, the original ROI can be completely restored after the embedded data is extracted. Experimental results demonstrate that in comparison with some state-of-the-art RDH algorithms, the proposed algorithm achieves better performance in terms of contrast enhancement of ROI, preserving visual quality of the background and tamper localization.
Reversible watermarking techniques: an overview and a classification This paper presents an overview of the reversible watermarking techniques that have appeared in the literature over approximately the last five years. In addition, a general classification of the algorithms, based on their characteristics and on the embedding domain, is given in order to provide a structured presentation that is easily accessible to the interested reader. Algorithms are assigned to a category and discussed, with the aim of supplying the main information regarding their embedding and decoding procedures. Basic considerations on the achieved results are given as well.
A Novel Reversible Data Hiding Scheme Using Ripple Strategy and Histogram Shifting In this paper, we propose a reversible data hiding method based on pixel-difference histogram shifting and a ripple strategy. Because local areas of a natural image have similar pixel-value distributions, the pixel differences are narrowed down to a small range. The ripple strategy calculates each pixel difference by subtracting the pixels in the inner ripples from the pixels in the outer ripple. We also found that the three highest pixel-difference bins most frequently occur at -1, 0, and 1, so the proposed method does not need to record the peak-point and zero-point information for secret extraction and image recovery. The experimental results show that the proposed method outperforms Huang and Chang's method in terms of embedding capacity.
An Improved Reversible Difference Expansion Watermarking Algorithm In this paper, we propose an improved reversible watermarking algorithm that uses a simplified location map. The embedding method is based on the Alattar integer transform [3], extended to the case of four pixels (quads). Because the simplified location map we propose covers only the necessary quads, the improved watermarking algorithm has a larger embedding capacity than the work of Alattar [3]. Simulation results show that our embedding strategy achieves better performance for all tested images.
Multiple layer data hiding scheme based on difference expansion of quad. Highlights: the performance of the difference-expansion-of-quad scheme can be improved by applying IRDE; multiple-layer embedding can be achieved with a reversible data hiding scheme; combining difference expansion of quad with IRDE improves performance in multiple-layer embedding; the combination of difference expansion of quad with IRDE performs better than its combination with RDE in both single-layer and multiple-layer embedding. Over the past few years, data hiding schemes have been developing rapidly. In general, data hiding performs well on common images but does not provide satisfying results on distortion-sensitive images such as medical, military, or forensic images, because embedding data into an image can cause permanent (irreversible) distortion after extraction. As a solution, a dedicated scheme is required for embedding data into an image, namely reversible data hiding (RDH). One well-known RDH scheme is difference expansion, which is simple and easy to implement. In this study, a new scheme, multiple-layer embedding based on difference expansion of quad, is proposed, which focuses on increasing the capacity and visual quality of data hiding by reducing pixel difference values with improved reduced difference expansion (IRDE). The proposed scheme has been evaluated on 14 grayscale images, consisting of six common images and eight medical images. Results show that the proposed scheme has higher capacity and better visual quality than the original scheme and similar previous schemes.
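For orientation, the following hedged sketch shows classic difference expansion on a single pixel pair, the basic idea that the quad-based and IRDE variants above build on; expandability tests, the location map, and the reduced-difference step are omitted.

```python
def de_embed(a, b, bit):
    """Tian-style difference expansion on a pixel pair (no overflow check)."""
    l = (a + b) // 2              # integer average
    h = a - b                     # difference
    h2 = 2 * h + bit              # expanded difference carrying one bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(a2, b2):
    """Recover the embedded bit and the original pair."""
    l = (a2 + b2) // 2
    h2 = a2 - b2
    bit = h2 % 2                  # floor-mod, also correct for negative h2
    h = (h2 - bit) // 2
    return (l + (h + 1) // 2, l - h // 2), bit
```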
rCOS: theory and tool for component-based model driven development We present the roadmap of the development of the rCOS theory and its tool support for component-based model driven software development (CB-MDD). We first discuss the motivation for using CB-MDD and its need for a theoretical foundation and tool support, followed by a discussion of the concepts, techniques, and design decisions in the research on the theory and the development of the prototype tool. The concepts, techniques, and decisions discussed here have been formalized and published; references to those publications are provided with explanations. Based on initial experience with a case study and the preliminary rCOS tool development, the further development trajectory, leading to tighter integration with transformation and analysis plug-ins, is delineated.
Formal program developments directed by operational properties evaluation Transformation systems make it possible to support developments leading from an initial specification to a final program in a totally formal way. Transformations use valid properties of input objects to produce new, equivalent ones. Most transformations use functional properties to increase the efficiency of programs; in doing so, they affect nonfunctional properties which, more often than not, are not formally expressed. The problem the author addresses is to recognize situations in which transformations can be carried out on the basis of the evaluation of a defined nonfunctional property; the aim is to relate program transformations to the evaluation of nonfunctional properties. Indeed, a particular transformation tactic can be applied when a given property does not hold. The DEVA language has been used to support experiments in the development of programs.
1.029099
0.028694
0.028694
0.028694
0.014616
0.010159
0.007143
0.002865
0.000605
0.000089
0.000031
0.000001
0
0
Cheating in Visual Cryptography A secret sharing scheme allows a secret to be shared among a set of participants, P, such that only authorized subsets of P can recover the secret, but any unauthorized subset cannot recover the secret. In 1995, Naor and Shamir proposed a variant of secret sharing, called visual cryptography, where the shares given to participants are xeroxed onto transparencies. If X is an authorized subset of P, then the participants in X can visually recover the secret image by stacking their transparencies together without performing any computation. In this paper, we address the issue of cheating by dishonest participants, called cheaters, in visual cryptography. The experimental results demonstrate that cheating is possible when the cheaters form a coalition in order to deceive honest participants. We also propose two simple cheating prevention visual cryptographic schemes.
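As background for the stacking operation described above, the sketch below generates shares with the basic Naor-Shamir (2, 2) construction (pixel expansion m = 2); it illustrates only the plain scheme, not the cheating attacks or the prevention schemes proposed in the paper.

```python
import random

PATTERNS = [(1, 0), (0, 1)]       # the two 2-subpixel patterns (1 = black)

def make_shares(secret_row):
    """secret_row: iterable of 0 (white) / 1 (black) secret pixels.
    Returns two share rows, each twice as wide (pixel expansion m = 2)."""
    share1, share2 = [], []
    for pixel in secret_row:
        p = random.choice(PATTERNS)
        share1.extend(p)
        # white: identical patterns (stack shows 1 black subpixel of 2)
        # black: complementary patterns (stack shows 2 black subpixels)
        share2.extend(p if pixel == 0 else tuple(1 - s for s in p))
    return share1, share2

def stack(share1, share2):
    """Simulate stacking transparencies: a subpixel is black if either share is black."""
    return [a | b for a, b in zip(share1, share2)]
```

Stacking the two transparencies therefore shows a lighter block for a white secret pixel and a fully black block for a black one, which is the contrast the human visual system exploits.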
Cheating immune (2, n)-threshold visual secret sharing Cheating in secret sharing has been considered in several papers. Recently, cheating in visual cryptography was considered in [10], where (2,n)-threshold visual cryptography schemes are provided. In this paper we provide new (2,n)-threshold visual cryptography schemes. Our model is different from the one considered in [10]; in particular, we aim at constructing cheating-immune schemes without the use of extra information, such as additional shares or images, as done in [10]. We provide a formal definition of cheating which requires that a group of cheaters be able to deterministically force an honest participant to reconstruct a wrong secret. The (2,n)-threshold schemes that we provide do not allow such cheating, regardless of the number of cheaters.
A Probabilistic Visual Secret Sharing Scheme for Grayscale Images with Voting Strategy In 2007, Wang et al. proposed two visual secret sharing schemes based on Boolean operations. One is a probabilistic (2, n) secret sharing (SS) scheme, called the (2, n) ProbVSS scheme, for binary images, and the other is a deterministic (n, n) SS scheme for grayscale images. Their schemes can solve the problems of computational complexity and pixel expansion at the same time. To provide a (2, n) ProbVSS scheme that achieves better reconstructed grayscale image quality than Wang et al.'s scheme without significantly increasing computational complexity, we apply a voting strategy and a least-significant-bits-abandoning approach in combination with Wang et al.'s first scheme to handle grayscale images. Experimental results confirm that the reconstructed grayscale image quality of the proposed scheme is better than that achieved by Wang et al.'s original scheme.
Inverse halftoning algorithm using edge-based lookup table approach. The inverse halftoning algorithm is used to reconstruct a gray image from an input halftone image. Based on the recently published lookup table (LUT) technique, this paper presents a novel edge-based LUT method for inverse halftoning which improves the quality of the reconstructed gray image. The proposed method first uses the LUT-based inverse halftoning method as a preprocessing step to transform the given halftone image to a base gray image, and then the edges are extracted and classified from the base gray image. According to these classified edges, a novel edge-based LUT is built up to reconstruct the gray image. Based on a set of 30 real training images with both low-and high-frequency contents, experimental results demonstrated that the proposed method achieves a better image quality when compared to the currently published two methods, by Chang et al. and Meşe and Vaidyanathan.
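A hedged sketch of the pattern-keyed lookup table at the core of LUT-based inverse halftoning is shown below: each binary neighbourhood maps to the average gray value observed over training pairs. The edge extraction and classification described in the abstract are not reproduced, and the 3x3 template size is an assumption.

```python
import numpy as np
from collections import defaultdict

def build_lut(halftones, grays, size=3):
    """Build a LUT mapping a size*size binary neighbourhood to the mean gray
    value observed at its centre over the training pairs (simplified sketch,
    no edge classes)."""
    sums, counts = defaultdict(float), defaultdict(int)
    r = size // 2
    for ht, gr in zip(halftones, grays):
        h, w = ht.shape
        for i in range(r, h - r):
            for j in range(r, w - r):
                key = tuple(ht[i - r:i + r + 1, j - r:j + r + 1].ravel())
                sums[key] += float(gr[i, j])
                counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

def inverse_halftone(ht, lut, size=3, default=128.0):
    """Reconstruct a gray image by looking up each neighbourhood pattern;
    unseen patterns fall back to a default gray level."""
    r = size // 2
    out = np.full(ht.shape, default, dtype=float)
    h, w = ht.shape
    for i in range(r, h - r):
        for j in range(r, w - r):
            key = tuple(ht[i - r:i + r + 1, j - r:j + r + 1].ravel())
            out[i, j] = lut.get(key, default)
    return out
```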
Efficient visual secret sharing scheme for color images A k-out-of-n visual secret sharing scheme (VSSS) resolves the visual variant of the k-out-of-n secret sharing problem, where only k or more out of n participants can reveal the secret by the human visual system without any cryptographic computation. The best pixel expansion of the general k-out-of-n VSSS for c-colored images was c × m, by Yang and Laih [New colored visual secret sharing schemes, Des. Codes Cryptogr. 24 (2000) 325-335], where m is the pixel expansion of an existing binary k-out-of-n VSSS. Regarding the c-colored n-out-of-n scheme, the best pixel expansion is (c-1)2^(n-1) - c + 2 and c(c-1)2^(n-2) - c when n is odd and even, respectively, by Blundo et al. [Improved schemes for visual cryptography, Des. Codes Cryptogr. 24 (2001) 255-278]. In this paper, we propose a new c-colored k-out-of-n VSSS using a pixel expansion of ⌈log2 c⌉ × m, which is more efficient than ever.
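A quick numeric check of the pixel-expansion comparison, assuming a binary k-out-of-n scheme with expansion m:

```python
import math

def pixel_expansions(c, m):
    """Compare the earlier c*m expansion with the proposed ceil(log2 c)*m."""
    return {"Yang-Laih (c*m)": c * m,
            "proposed (ceil(log2 c)*m)": math.ceil(math.log2(c)) * m}

# e.g. for c = 8 colours and a binary scheme with m = 4 subpixels:
# {'Yang-Laih (c*m)': 32, 'proposed (ceil(log2 c)*m)': 12}
print(pixel_expansions(8, 4))
```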
Visual cryptography for color images Visual cryptography, an emerging cryptography technology, uses the characteristics of human vision to decrypt encrypted images. It needs neither cryptography knowledge nor complex computation. For security concerns, it also ensures that hackers cannot perceive any clues about a secret image from individual cover images. Since Naor and Shamir proposed the basic model of visual cryptography, researchers have published many related studies. Most of these studies, however, concentrate on binary images; few of them proposed methods for processing gray-level and color images. This paper proposes three methods for visual cryptography of gray-level and color images based on past studies in black-and-white visual cryptography, the halftone technology, and the color decomposition method. Our methods not only retain the advantages of black-and-white visual cryptography, which exploits the human visual system to decrypt secret images without computation, but also have the backward compatibility with the previous results in black-and-white visual cryptography, such as the t out of n threshold scheme, and can be applied to gray-level and color images easily.
Sharing and hiding secret images with size constraint This paper presents a method for sharing and hiding secret images. The method is modified from the (t, n) threshold scheme (Comput. Graph. 26(5) (2002) 765). The given secret image is shared and n shadow images are thus generated. Each shadow image is hidden in an ordinary image so as not to attract an attacker's attention. Any t of the n hidden shadows can be used to recover the secret image. The size of each stego image (in which a shadow image is hidden) is about 1/t of that of the secret image, avoiding the need for much storage space and transmission time (in the sense that the total size of t stego images is about the size of the secret image). Experimental results indicate that the qualities of both the recovered secret image and the stego images that contain the hidden shadows are acceptable. The photographers who work in enemy areas can use this system to transmit photographs.
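A hedged sketch of the polynomial-based (t, n) sharing this method builds on (Thien-Lin style, working modulo the prime 251): every group of t pixel values becomes the coefficients of a degree-(t-1) polynomial, and each shadow stores one evaluation, so each shadow is roughly 1/t the size of the secret. Pixel values of 251-255 and the hiding of shadows inside cover images are not handled here.

```python
def share_pixels(pixels, t, n, p=251):
    """Split a flat list of pixel values (each < p) into n shadow lists.
    Leftover pixels (if len is not a multiple of t) are ignored in this sketch."""
    shadows = [[] for _ in range(n)]
    for k in range(0, len(pixels) - t + 1, t):
        coeffs = pixels[k:k + t]                 # t pixels = polynomial coefficients
        for x in range(1, n + 1):                # shadow x stores the value at x
            y = sum(c * pow(x, e, p) for e, c in enumerate(coeffs)) % p
            shadows[x - 1].append(y)
    return shadows

def recover_group(xs, ys, p=251):
    """Recover one group of t secret pixels (the polynomial coefficients) from
    t shadow values ys taken at distinct points xs, by solving the Vandermonde
    system modulo p with Gauss-Jordan elimination."""
    t = len(xs)
    M = [[pow(x, e, p) for e in range(t)] + [y % p] for x, y in zip(xs, ys)]
    for col in range(t):
        piv = next(r for r in range(col, t) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, p)            # modular inverse (Python >= 3.8)
        M[col] = [v * inv % p for v in M[col]]
        for r in range(t):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[col])]
    return [M[r][t] for r in range(t)]
```

Recovering the g-th pixel group from any t shadows amounts to calling recover_group with those shadows' x-values and their g-th entries.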
Artifact reduction of JPEG coded images using mean-removed classified vector quantization Image compression techniques are frequently applied to reduce network bandwidth and storage space. At higher compression ratios, annoying artifacts may be generated, degrading the perceptual quality of the compressed images. This paper modifies mean-removed classified vector quantization (MRCVQ) to reduce the artifacts of JPEG coded images. The algorithm consists of four phases: mean removal, encoding, decoding, and mean restoration. The mean removal phase removes the mean values of the compressed image blocks. The encoding phase uses a codebook for the encoder, which transforms a mean-removed compressed image into a set of codeword indices. The decoding phase uses a different codebook for the decoder, which enhances the mean-removed compressed image reconstructed from the set of codeword indices. Finally, the mean values are restored in the mean restoration phase. The experimental results show that the proposed approach can effectively remove the artifacts caused by high compression and significantly improve the perceptual quality. Compared with existing methods, our approach usually performs much better in terms of computing time, storage space, and PSNR.
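A minimal numpy sketch of the mean-removal and codeword-search steps, assuming pre-trained codebooks of mean-removed codewords; the block classification and the construction of the decoder-side enhancement codebook are not reproduced.

```python
import numpy as np

def mrcvq_encode_block(block, enc_codebook):
    """Mean removal + nearest-codeword search for one image block.
    enc_codebook: (K, block.size) array of mean-removed codewords."""
    mean = float(block.mean())
    residual = block.astype(float).ravel() - mean            # mean removal
    dists = np.sum((enc_codebook - residual) ** 2, axis=1)   # squared distances
    return mean, int(np.argmin(dists))

def mrcvq_decode_block(mean, index, dec_codebook, shape):
    """Mean restoration on top of the decoder-side codeword (which, per the
    abstract, may come from a different, enhancement-oriented codebook)."""
    return (dec_codebook[index] + mean).reshape(shape)
```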
The JPEG still picture compression standard A joint ISO/CCITT committee known as JPEG (Joint Photographic Experts Group) has been working to establish the first international compression standard for continuous-tone still images, both grayscale and color. JPEG's proposed standard aims to be generic, to support a wide variety of applications for continuous-tone images. To meet the differing needs of many applications, the JPEG standard includes two basic compression methods, each with various modes of operation. A DCT (discrete cosine transform)-based method is specified for `lossy' compression, and a predictive method for `lossless' compression. JPEG features a simple lossy technique known as the Baseline method, a subset of the other DCT-based modes of operation. The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications. The author provides an overview of the JPEG standard, and focuses in detail on the Baseline method
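To make the DCT-based Baseline path concrete, here is a small sketch of the forward 8x8 DCT-II with orthonormal scaling followed by uniform quantisation; the single step size q is a placeholder, not the standard's luminance quantisation table, and entropy coding is omitted.

```python
import numpy as np

N = 8

def dct2(block):
    """Orthonormal 2-D DCT-II of an 8x8 block (sketch, not optimised)."""
    x = np.arange(N)
    # basis[u, i] = C(u) * cos((2*i + 1) * u * pi / (2*N))
    basis = np.cos((2 * x[None, :] + 1) * x[:, None] * np.pi / (2 * N))
    basis *= np.sqrt(2.0 / N)
    basis[0, :] = np.sqrt(1.0 / N)
    return basis @ block @ basis.T

def quantise(coeffs, q=16):
    """Uniform quantisation with a single placeholder step size q."""
    return np.round(coeffs / q).astype(int)

block = np.arange(64, dtype=float).reshape(8, 8) - 128   # level-shifted sample block
print(quantise(dct2(block)))
```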
Adaptive reversible data hiding with pyramidal structure. In this paper, we propose an adaptive algorithm for reversible data hiding that employs the characteristics and pyramidal relationships of original images. The major goal of reversible data hiding is to preserve the reversibility of the algorithm. By using the pyramidal structure to explore the inherent characteristics of original images, regions with different smoothness levels can be determined, and data hiding can then be performed adaptively with a pre-determined threshold that balances output image quality and embedding capacity. On the one hand, a larger capacity can be hidden in smoother regions with limited degradation of output image quality. On the other hand, the size of the location map, which serves as the side information for keeping reversibility, can be reduced when embedding into smoother or less smooth regions of the original image. By carefully manipulating the difference values between layers of the pyramidal structure, secret information can be embedded effectively. With our method, we observe better performance than related methods, with enhanced image quality, more embedding capacity, and a comparable amount of side information for decoding. More importantly, the reversibility of our method is guaranteed, meaning that both the original image and the secret information can be perfectly recovered at the decoder. Simulation results demonstrate that the proposed method outperforms conventional algorithms.
Lossless Data Hiding Using Integer Wavelet Transform and Threshold Embedding Technique This paper presents a new lossless data hiding method for digital images using an integer wavelet transform and a threshold embedding technique. Data are embedded into the least significant bit-plane (LSB) of the high-frequency CDF (2, 2) integer wavelet coefficients whose magnitudes are smaller than a certain predefined threshold. Histogram modification is applied as a preprocessing step to prevent overflow/underflow. Experimental results show that this scheme outperforms the prior art in terms of a larger payload (at the same PSNR) or a higher PSNR (at the same payload).
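The coefficient-selection idea can be sketched as follows: coefficients with magnitude below a threshold T are expanded to carry one bit, while the remaining coefficients are shifted away so that decoding stays unambiguous. This is a simplified, hedged reading of threshold embedding; the CDF (2, 2) integer wavelet transform and the histogram-modification preprocessing are not reproduced, and the payload length is assumed to be known to the decoder.

```python
def threshold_embed(coeffs, bits, T):
    """Embed bits into coefficients c with |c| < T (sketch, no capacity check)."""
    out, it = [], iter(bits)
    for c in coeffs:
        if abs(c) < T:
            out.append(2 * c + next(it, 0))   # expand: carries one bit
        elif c >= T:
            out.append(c + T)                 # shift positives out of the range
        else:
            out.append(c - T)                 # shift negatives out of the range
    return out

def threshold_extract(marked, T):
    """Recover the bits and the original coefficients."""
    bits, restored = [], []
    for c in marked:
        if abs(c) < 2 * T:
            b = c % 2                         # floor-mod also works for negative c
            bits.append(b)
            restored.append((c - b) // 2)
        elif c >= 2 * T:
            restored.append(c - T)
        else:
            restored.append(c + T)
    return bits, restored
```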
A System for Assisting Program Transformation
A transformation method for dynamic-sized tabulation Tupling is a transformation tactic to obtain new functions without redundant calls and/or multiple traversals of common inputs. It achieves this by allowing each set (tuple) of function calls to be computed recursively from its previous set. In previous works by Chin and Khoo [8,9], a safe (terminating) fold/unfold transformation algorithm was developed for some classes of functions which are guaranteed to be successfully tupled. However, these classes of functions currently use tables for eliminating the redundant calls. As shown by Richard Bird in [3], there are also other classes of programs whose redundant calls can only be eliminated by using tabulation. This paper proposes a new solution to dynamic-sized tabulation as an extension of the tupling tactic. Our extension uses constructs that can be viewed either as dynamic-sized tables or as applications of the higher-order generalisation technique to facilitate tupling. Significant speedups could be obtained after the transformed programs were vectorised, as confirmed by experiment.
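A small Python illustration of the tupling idea (not the authors' fold/unfold algorithm or their dynamic-sized tables): the naive recursion repeats calls, while the tupled version computes each pair of adjacent results from the previous pair.

```python
def fib_naive(n):
    """Redundant calls: fib_naive(n-2) is recomputed inside fib_naive(n-1)."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_pair(n):
    """Tupled version: returns (fib(n), fib(n-1)) computed from the previous tuple."""
    if n == 1:
        return (1, 0)
    a, b = fib_pair(n - 1)
    return (a + b, a)

def fib_tupled(n):
    return n if n < 1 else fib_pair(n)[0]

assert [fib_naive(i) for i in range(10)] == [fib_tupled(i) for i in range(10)]
```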
Efficient aerial data collection with UAV in large-scale wireless sensor networks Data collection from deployed sensor networks can be performed with a static sink, a ground-based mobile sink, or an Unmanned Aerial Vehicle (UAV) based mobile aerial data collector. Considering the scale of large sensor networks and the peculiarities of the deployment environments, aerial data collection based on a controllable UAV has more advantages. In this paper, we design a basic framework for aerial data collection, which includes the following five components: network deployment, node positioning, anchor point searching, fast path planning for the UAV, and data collection from the network. We identify the key challenges in each of them and propose efficient solutions. This includes the proposal of a Fast Path Planning with Rules (FPPWR) algorithm based on grid division, which increases the efficiency of path planning while guaranteeing that the path length remains relatively short. We design and implement a simulation platform for aerial data collection from sensor networks and validate the performance efficiency of the proposed framework with respect to the following parameters: time consumption of aerial data collection, flight path distance, and volume of collected data.
1.063413
0.072028
0.072028
0.037817
0.021614
0.010822
0.000125
0.000014
0.000003
0
0
0
0
0