{"id": "41379", "source": "Generate a title for the following abstract of a paper: Food image recognition is challenging due to the diversity of food, and color, light, view angles' effect on food image. The recognition process is also a computation heavy process. Therefore, We also propose to use pervasive cloud computing paradigm to improve the performance of food image recognition. Based on empirical and experimental explorations, we propose to use SIFT(Scale Invariant Feature Transform) and Gabor descriptors as food image features and KMeans algorithm for feature clustering. Evaluations show that the proposed approach can give acceptable recognition rate with good performance gains.", "target": "Towards a Pervasive Cloud Computing Based Food Image Recognition"} | |
{"id": "412232", "source": "Generate a title for the following abstract of a paper: The problem of high sensitivity in modeling is well known. Small perturbations in the model parameters may result in large, undesired changes in the model behavior. A number of authors have considered the issue of sensitivity in feedforward neural networks from a probabilistic perspective. Less attention has been given to such issues in recurrent neural networks. In this article, we present a new recurrent neural network architecture, that is capable of significantly improved parameter sensitivity properties compared to existing recurrent neural networks. The new recurrent neural network generalizes previous architectures by employing alternative discrete-time operators in place of the shift operator normally used. An analysis of the model demonstrates the existence of parameter sensitivity in recurrent neural networks and supports the proposed architecture. The new architecture performs significantly better than previous recurrent neural networks, as shown by a series of simple numerical experiments.", "target": "A low-sensitivity recurrent neural network"} | |
{"id": "41322", "source": "Generate a title for the following abstract of a paper: The function decomposition described can identify subsets of existing features and discover nongiven functions that map these subsets to a new feature, also, it can organize the existing and new features into a hierarchy. The authors demonstrate their Hierarchy Induction Tool (HINT) system on a housing loan-allocation application. Methods for switching circuit design often implicitly deal with feature transformation. Such methods construct a circuit to implement a given or partially given tabulated Boolean function. The authors' function-decomposition method can discover and construct a hierarchy of new features that one can add to the original dataset or transform into a hierarchy of less complex datasets. The method allows the decomposition to deal with nominal-feature (that is, not necessarily binary) functions", "target": "Feature transformation by function decomposition"} | |
{"id": "41914", "source": "Generate a title for the following abstract of a paper: A recently developed topological mesh modeling approach allows users to change topol- ogy of orientable 2-manifold meshes and to create unusual faces. Handle-faces are one of such faces that are commonly created during topology changes. This paper shows that vertex insertion and corner cutting subdivision schemes can efiectively be used to recon- struct handle-faces. These reconstructions efiectively show the structure of these unusual faces. The paper has three contributions. First, we develop a new corner cutting scheme, which provides a tension parameter to control the shape of subdivided surface. Second, we develop careful and e-cient remeshing algorithms for our corner cutting scheme that use only the basic operations provided by our topological mesh modeling approach. This implementation ensures that our new corner cutting scheme preserves topological robust- ness. Finally, a comparative study shows that the corner cutting schemes create better handles and holes than the well-known Catmull-Clark scheme.", "target": "A NEW CORNER CUTTING SCHEME WITH TENSION AND HANDLE-FACE RECONSTRUCTION"} | |
{"id": "412472", "source": "Generate a title for the following abstract of a paper: This paper analyzes the translation quality of machine translation systems for 10 language pairs translating between Czech, English, French, German, Hungarian, and Spanish. We report the translation quality of over 30 diverse translation systems based on a large-scale manual evaluation involving hundreds of hours of effort. We use the human judgments of the systems to analyze automatic evaluation metrics for translation quality, and we report the strength of the correlation with human judgments at both the system-level and at the sentence-level. We validate our manual evaluation methodology by measuring intra- and inter-annotator agreement, and collecting timing information.", "target": "Further meta-evaluation of machine translation"} | |
{"id": "4126", "source": "Generate a title for the following abstract of a paper: Economic and social sciences will drive Internet protocols and services into the future.", "target": "Computational challenges in e-commerce"} | |
{"id": "411083", "source": "Generate a title for the following abstract of a paper: Conventional instruction sets or directly interpretable languages (DILs) have not been designed with high-level languages (HLLs) in mind. The modern design problem is to derive a space-time efficient DIL for a HLL processing system. In this paper, we present our approach to the problem of designing well-matched, space-time efficient DILs. A systematic, syntax- and semantics-directed DIL design methodology is presented. It calls for an incremental transformation of the source HLL, until a suitable target DIL is obtained. At the heart of the methodology is a canonic set of language transformations. An experimental study, involving several systematically derived DILs is carried out in order to characterize the relative merits and disadvantages of various sequences of transformations. Various space, time and interpretability trade-offs implied by the transformations are studied.", "target": "Design of instruction set architectures for support of high-level languages"} | |
{"id": "411408", "source": "Generate a title for the following abstract of a paper: In case of insufficient data samples in high-dimensional classification problems, sparse scatters of samples tend to have\n many \u2018holes\u2019\u2014regions that have few or no nearby training samples from the class. When such regions lie close to inter-class\n boundaries, the nearest neighbors of a query may lie in the wrong class, thus leading to errors in the Nearest Neighbor classification\n rule. The K-local hyperplane distance nearest neighbor (HKNN) algorithm tackles this problem by approximating each class with\n a smooth nonlinear manifold, which is considered to be locally linear. The method takes advantage of the local linearity assumption\n by using the distances from a query sample to the affine hulls of query\u2019s nearest neighbors for decision making. However,\n HKNN is limited to using the Euclidean distance metric, which is a significant limitation in practice. In this paper we reformulate\n HKNN in terms of subspaces, and propose a variant, the Local Discriminative Common Vector (LDCV) method, that is more suitable\n for classification tasks where the classes have similar intra-class variations. We then extend both methods to the nonlinear\n case by mapping the nearest neighbors into a higher-dimensional space where the linear manifolds are constructed. This procedure\n allows us to use a wide variety of distance functions in the process, while computing distances between the query sample and\n the nonlinear manifolds remains straightforward owing to the linear nature of the manifolds in the mapped space. We tested\n the proposed methods on several classification tasks, obtaining better results than both the Support Vector Machines (SVMs)\n and their local counterpart SVM-KNN on the USPS and Image segmentation databases, and outperforming the local SVM-KNN on the\n Caltech visual recognition database.", "target": "Manifold Based Local Classifiers: Linear and Nonlinear Approaches"} | |
{"id": "411728", "source": "Generate a title for the following abstract of a paper: This paper presents a robust digital image watermarking scheme using feature point detection and watermark template match. A scale interactive model based filter is used to extract the feature points of original image, based on which a watermark template is constructed and embedded adaptively into the local region of these points. Watermark decision is made by computing the statistical correlation between the watermark and the embedded region. Because the proposed feature detection is robust against JPEG compression, filtering, noise addition and geometric distortions, the proposed watermarking scheme can achieve good performance against these attacks, and experimental results also demonstrate the superiority.", "target": "Feature based watermarking using watermark template match."} | |
{"id": "41356", "source": "Generate a title for the following abstract of a paper: Discriminative sequential learning models like Conditional Random Fields (CRFs) have achieved significant success in several areas such as natural language processing or information extraction. Their key advantage is the ability to capture various non--independent and overlapping features of inputs. However, several unexpected pitfalls have a negative influence on the model's performance; these mainly come from an imbalance among classes/labels, irregular phenomena, and potential ambiguity in the training data. This paper presents a data--driven approach that can deal with such hard--to--predict data instances by discovering and emphasizing rare--but--important associations of statistics hidden in the training data. Mined associations are then incorporated into these models to deal with difficult examples. Experimental results of English phrase chunking and named entity recognition using CRFs show a significant improvement in accuracy. In addition to the technical perspective, our approach also highlights a potential connection between association mining and statistical learning by offering an alternative strategy to enhance learning performance with interesting and useful patterns discovered from large dataset.", "target": "Improving discriminative sequential learning with rare--but--important associations"} | |
{"id": "412298", "source": "Generate a title for the following abstract of a paper: This paper presents two topics. The first is an overview of our recently started project called \"experiential supplement\", which is to transfer human experiences by recording and processing them to be acceptable by others. The second is sensing technologies for producing experiential supplements in the context of learning. Because a basic activity of learning is reading, we also deal with sensing of reading. Methods for quantifying the reading in terms of the number of read words, the period of reading, type of read documents, identifying read words are shown with experimental results. As for learning, we propose methods for estimating the English ability, confidence in answers to English questions, and estimating unknown words. The above are sensed by various sensors including eye trackers, EOG, EEG, and the first person vision.\n\n", "target": "Quantified reading and learning for sharing experiences."} | |
{"id": "41130", "source": "Generate a title for the following abstract of a paper: Recent years have witnessed the explosive growth of online social networks (OSNs). They provide powerful IT-innovations for online social activities such as organizing contacts, publishing content, and sharing interests between friends who may never meet before. As more and more people become active users of OSNs, one may ponder questions such as (1) Do OSNs indeed improve our sociability? (2) To what extent can we expand our offline social spectrum in OSNs? (3) Can we identify some interesting user behaviors in OSNs? Our work in this paper attempts to answer these interesting questions. First, we systematically validate the existence of a new Dunbar@?s number in OSNs, which is ranging from 200 to 300 empirically. To reach this, we conduct local-structure analysis as well as user-interaction analysis on extensive real-world OSNs. Second, based on this new number, we divide OSN users into two categories: the rational and the aggressive, and find that rational users intend to develop close and reciprocated relationship, whereas aggressive users have no consistent behaviors. Third, we propose a simple model to highlight the constraints of time and cognition that may affect the evolution of OSNs heavily. Finally, we discuss the potential use of our findings for viral marketing and privacy management in OSNs.", "target": "Being rational or aggressive? A revisit to Dunbar's number in online social networks"} | |
{"id": "412418", "source": "Generate a title for the following abstract of a paper: There are various kinds of evolutionary computations (ECs) and they have their own merits and demerits. For example, PSO (Particle Swarm Optimization) shows high ability during initial period in general, whereas DE (Differential Evolution) shows high ability especially in the latter period in search to find more accurate solutions. This paper proposes a novel and integrated framework to effectively combine the merits of several evolutionary computations. There are five distinctive features in the proposed framework. 1) There are several individual pools, and each pool corresponds to one EC. 2) Parents do not necessarily belong to the same EC: for example, a GA type individual can be a spouse of a PSO type individual. 3) Each incorporated EC has its own evaluated value (EV), and it changes according to the best fitness value at each generation. 4) The number of individuals in each EC changes according to the EV. 5) All of the individuals have their own lifetime to avoid premature convergence; when an individual meets lifetime, the individual reselect EC, and the probability of each EC to be selected depends on the EV. In the proposed framework, therefore, more individuals are allotted to the ECs which show higher performance than the other at each generation: effective usage of individuals is enabled. In this way, this framework can make use of merits of incorporated ECs. Original GA, original PSO and original DE are used to construct a simple proposed framework-based system. We carried out experiments using well-known benchmark functions. The results show that the new system outperformed there incorporated ECs in 9 functions out of 13 functions.", "target": "An integrated framework of hybrid evolutionary computations"} | |
{"id": "41419", "source": "Generate a title for the following abstract of a paper: It is crucial to ensure correct process model executions. However, existing process testing approaches struggle with the verification of concurrent resource access patters that can lead to concurrency faults, such as, deadlocks or data corruption during runtime. Thus, we provide a concurrency verification approach that exploits recorded executions to verify the most frequently occurring concurrent resource access patterns with low test execution time. A prototypical implementation along with real life and artificial process execution logs is utilized for an evaluation.", "target": "A Testing Approach for Hidden Concurrencies Based on Process Execution Logs."} | |
{"id": "41383", "source": "Generate a title for the following abstract of a paper: A graph is an interval graph tf and only if each of Its verttces can be associated with an interval on the real hne m such a way that two vertices are adjacent m the graph exactly when the corresponding mtervals have a nonempty mtersectmn An effictent algonthrn for testing tsomorpinsm of interval graphs ts unplemented using a data structure called a PQ-tree. The algorithm runs m O(n + e) steps for graphs having n vemces and e edges It is shown that for a somewhat larger class of graphs, namely the chordal graphs, lsomorpinsm is as hard as for general graphs", "target": "A Linear Time Algorithm for Deciding Interval Graph Isomorphism"} | |
{"id": "41177", "source": "Generate a title for the following abstract of a paper: The number and the importance of Web applications have increased rapidly over the last years. At the same time, the quantity and impact of security vulnerabilities in such applications have grown as well. Since manual code reviews are time-consuming, error-prone and costly, the need for automated solutions has become evident. In this paper, we address the problem of vulnerable Web applications by means of static source code analysis. More precisely, we use flow-sensitive, interprocedural and context-sensitive data flow analysis to discover vulnerable points in a program. In addition, alias and literal analysis are employed to improve the correctness and precision of the results. The presented concepts are targeted at the general class of taint-style vulnerabilities and can be applied to the detection of vulnerability types such as SQL injection, cross-site scripting, or command injection. Pixy, the open source prototype implementation of our concepts, is targeted at detecting cross-site scripting vulnerabilities in PHP scripts. Using our tool, we discovered and reported 15 previously unknown vulnerabilities in three web applications, and reconstructed 36 known vulnerabilities in three other web applications. The observed false positive rate is at around 50% (i.e., one false positive for each vulnerability) and therefore, low enough to permit effective security audits.", "target": "Pixy: A Static Analysis Tool for Detecting Web Application Vulnerabilities (Short Paper)"} | |
{"id": "41881", "source": "Generate a title for the following abstract of a paper: Sum-product networks (SPNs) are a promising avenue for probabilistic modeling and have been successfully applied to various tasks. However, some theoretic properties about SPNs are not yet well understood. In this paper we fill some gaps in the theoretic foundation of SPNs. First, we show that the weights of any complete and consistent SPN can be transformed into locally normalized weights without changing the SPN distribution. Second, we show that consistent SPNs cannot model distributions significantly (exponentially) more compactly than decomposable SPNs. As a third contribution, we extend the inference mechanisms known for SPNs with finite states to generalized SPNs with arbitrary input distributions.", "target": "On Theoretical Properties of Sum-Product Networks."} | |
{"id": "41814", "source": "Generate a title for the following abstract of a paper: A credal network associates sets of probability distributions with directed acyclic graphs. Under strong independence assumptions, inference with credal networks is equivalent to a signomial program under linear constraints, a problem that is NP-hard even for categorical variables and polytree mod- els. We describe an approach for inference with polytrees that is based on branch-and-bound optimization/search algorithms. We use bounds gener- ated by Tessem's A/R algorithm, and consider various branch-and-bound schemes.", "target": "Inference in Credal Networks with Branch-and-Bound Algorithms"} | |
{"id": "411731", "source": "Generate a title for the following abstract of a paper: When choosing a testing technique, practitioners want to know which one will detect the faults that matter most to them in the programs that they plan to test. Do empirical evaluations of testing techniques provide this information? More often than not, they report how many faults in a carefully chosen \"representative\" sample the evaluated techniques detect. But the population of faults that such a sample would represent depends heavily on the faults' context or environment---as does the cost of failing to detect those faults. If empirical studies are to provide information that a practitioner can apply outside the context of the study, they must characterize the faults studied in a way that translates across contexts. A testing technique's fault-detecting abilities could then be interpreted relative to the fault characterization. In this paper, we present a list of criteria that a fault characterization must meet in order to be fit for this task, and we evaluate several well-known fault characterizations against the criteria. Two families of characterizations are found to satisfy the criteria: those based on graph models of programs and those based on faults' detection by testing techniques.", "target": "Faults' context matters"} | |
{"id": "41571", "source": "Generate a title for the following abstract of a paper: How to render very complex datasets, and yet maintain interactive response times, is a hot topic in computer graphics. The MagicSphere idea originated as a solution to this problem, but its potential goes much further than this original scope. In fact, it has been designed as a very generical 3D widget: it defines a spherical volume of interest in the dataset modeling space. Then, several filters can be associated with the MagicSphere, which apply different visualization modalities to the data contained in the volume of interest. The visualization of multi-resolution datasets is selected here as a case study and an ad hoc filter has been designed, the MultiRes filter. Some results of a prototipal implementation are presented and discussed.", "target": "Magicsphere - An Insight Tool For 3d Data Visualization"} | |
{"id": "41396", "source": "Generate a title for the following abstract of a paper: We describe in this paper the different approaches tested for the Photo Annotation task for CLEF 2011. We experimented state of the art techniques, by proposing late fusions of several classifiers trained on several features extracted from the images. The classifiers are SVMs and the late fusion is a simple addition of classification probabilities coming from the SVMs. The results obtained place our runs in the middle of the pack, with our best visual-based MAP at 0.337 We also integrated of Flickr human annotations, leading to a large increase of the MAP with a value of 0.377.", "target": "LIG-MRIM at Image Photo Annotation Task in ImageCLEF 2011."} | |
{"id": "41108", "source": "Generate a title for the following abstract of a paper: This paper explores the possibility of a grass roots approach to engaging people in community change initiatives by designing simple interactive exploratory prototypes for use by communities over time that support shared action. The prototype is gradually evolved in response to community use, fragments of data gathered through the prototype, and participant feedback with the goal of building participation in community change initiatives. A case study of a system to support ridesharing is discussed. The approach is compared and contrasted to a traditional IT systems procurement approach.", "target": "Designing for participation in local social ridesharing networks: grass roots prototyping of IT systems"} | |
{"id": "411881", "source": "Generate a title for the following abstract of a paper: Current mobility management approaches are highly centralized and hierarchical. They make use of centralized anchors located in core networks, responsible for forwarding traffic to/from mobile nodes' location. Emerging networks standards developed for new broadband cellular radio interfaces are use flat architecture where control functions are deployed in network edges such as evolved base stations. In order to better fit to such distributed logic while optimizing handover efficiency and minimizing traffic encapsulation, we propose a new scheme designed for mobility management in flat architectures. Our scheme dynamically anchors mobile nodes' traffic in access nodes, depending on their location when sessions are set up. A first estimation shows that we can obtain low handover delays, promising for future adoption and development of our distributed and dynamic approach.", "target": "A Distributed Dynamic Mobility Management Scheme Designed for Flat IP Architectures"} | |
{"id": "411839", "source": "Generate a title for the following abstract of a paper: Logic is currently the target of the majority of the upcoming efforts towards the realization of the Semantic Web vision, namely making the content of the Web accessible not only to humans, as it is today, but to machines as well. Defeasible reasoning, a rule-based approach to reasoning with incomplete and conflicting information, is a powerful tool in many Semantic Web applications. Despite its strong mathematical background, logic, in general, and defeasible logic, in particular, may overload the user with tons of additional complex semantic relationships among data and metadata of the Semantic Web. To this end, a comprehensible, visual representation of these semantic relationships (rules) would help users understand them and make more use of them. This paper presents VDR-DEVICE, a defeasible reasoning system, designed specifically for the Semantic Web environment. VDR-DEVICE is an integrated development environment for deploying and visualizing defeasible logic rule bases on top of RDF Schema ontologies. The system consists of a number of sub-components, which, though developed autonomously, are combined efficiently, forming a flexible framework. The system employs a defeasible reasoning system that supports direct importing and processing of RDF data and RDF Schema ontologies as well as a number of user-friendly rule base and ontology visualization modules.", "target": "Deploying defeasible logic rule bases for the semantic web"} | |
{"id": "41456", "source": "Generate a title for the following abstract of a paper: Behavioral entrainment is an important, naturally-occurring dynamic phenomenon in human interactions. In this paper, we carry out two quantitative analyses of the vocal entrainment phenomenon in the context of studying conflictual marital interactions. We investigate the role of vocal entrainment in reflecting different dimensions of couple-specific behaviors, such as withdrawal, that are commonly-used in assessing the effectiveness on the outcome of couple therapy. The results indicate a statistically-significant relation between these behaviors and vocal entrainment, as quantified using our proposed unsupervised signal-derived computational framework. We further demonstrate the potential of the signal-based vocal entrainment framework in characterizing influential factors in distressed couples relationship satisfaction outcomes.", "target": "Using measures of vocal entrainment to inform outcome-related behaviors in marital conflicts"} | |
{"id": "41902", "source": "Generate a title for the following abstract of a paper: Atlases have a tremendous impact on the study of anatomy and function, such as in neuroimaging, or cardiac analysis. They provide a means to compare corresponding measurements across populations, or model the variability in a population. Current approaches to construct atlases rely on examples that show the same anatomical structure (e. g., the brain). If we study large heterogeneous clinical populations to capture subtle characteristics of diseases, we cannot assume consistent image acquisition any more. Instead we have to build atlases from imaging data that show only parts of the overall anatomical structure. In this paper we propose a method for the automatic contruction of an un-biased whole body atlas from so-called fragments. Experimental results indicate that the fragment based atlas improves the representation accuracy of the atlas over an initial whole body template initialization.", "target": "Constructing an un-biased whole body atlas from clinical imaging data by fragment bundling."} | |
{"id": "412413", "source": "Generate a title for the following abstract of a paper: In this paper, we consider the problem of locating an Automated Guided Vehicle (AGV) which moves on a plane in an industrial environment by means of Ultra-Wide Band (UWB) signaling from fixed Anchors Nodes (ANs) situated in the (three-dimensional) space. An analytical approach to optimize, under proper (realistic) constraints, the placement of the ANs used to locate the AGV is proposed. Analytical results are confirmed by simulations.", "target": "Optimized Anchors Placement: An Analytical Approach In Uwb-Based Tdoa Localization"} | |
{"id": "411393", "source": "Generate a title for the following abstract of a paper: We introduce a new cryptographic tool: multiset hash functions. Unlike standard hash functions which take strings as input, multiset hash functions operate on multisets (or sets). They map multisets of arbitrary finite size to strings (hashes) of fixed length. They are incremental in that, when new members are added to the multiset, the hash can be updated in time proportional to the change. The functions may be multiset-collision resistant in that it is difficult to find two multisets which produce the same hash, or just set-collision resistant in that it is difficult to find a set and a multiset which produce the same hash. We demonstrate how set-collision resistant multiset hash functions make an existing offline memory integrity checker secure against active adversaries. We improve on this checker such that it can use smaller time stamps without increasing the frequency of checks. The improved checker uses multiset-collision resistant multiset hash functions.", "target": "Incremental Multiset Hash Functions and Their Application to Memory Integrity Checking"} | |
{"id": "41511", "source": "Generate a title for the following abstract of a paper: OWL has inherent limitations in expressing many Object Oriented (OO) features such as default inheritance, conflict resolution in multiple inheritance, method inheritance and encapsulation. But these features are essential to model real world phenomena, a fact that is underscored by the presence of many object oriented programming languages. OWL chooses not to support these features possibly because of its two design decisions (i) decidability and (ii) low computational complexity. Usually more expressive language tends to be computationally expensive, and in some cases, the language becomes either semi-decidable or undecidable. In this paper, we extend OWL to OWL$++ by supporting two inheritance modes, overriding and inflating, and three inheritance types, value, code and null. Our goal is to increase the expressive power of OWL as well as to maintain it as a computationally efficient decidable language. We demonstrate this by taking translational semantics of OWL++ to OWL. It allows us to build our OWL++ reasoner by using existing technologies such as Jena that works on top of well known OWL reasoners such as Pellet.", "target": "OWL that can Choose to Inherit and Hide it Too"} | |
{"id": "412233", "source": "Generate a title for the following abstract of a paper: Speaker recognition remains a challenging task under noisy conditions. Inspired by auditory perception, computational auditory scene analysis (CASA) typically segregates speech by producing a binary time-frequency mask. We first show that a recently introduced speaker feature, Gammatone Frequency Cepstral Coefficient, performs substantially better than conventional speaker features under noisy conditions. To deal with noisy speech, we apply CASA separation and then either reconstruct or marginalize corrupted components indicated by the CASA mask. Both methods are effective. We further combine them into a single system depending on the detected signal to noise ratio (SNR). This system achieves significant performance improvements over related systems under a wide range of SNR conditions.", "target": "Robust speaker identification using a CASA front-end"} | |
{"id": "41418", "source": "Generate a title for the following abstract of a paper: The capability to safely interrupt business process activities is an important requirement for advanced process-aware information systems. Indeed, exceptions stemming from the application environment often appear while one or more application-related process activities are running. Safely interrupting an activity consists of preserving its context, i.e., saying the data associated with this activity. This is important since possible solutions for an exceptional situation are often based on the current data context of the interrupted activity. In this paper, a data classification scheme based on data relevance and on data update frequency is proposed and discussed with respect to two different real-world applications. Taking into account this classification. a correctness criterion for interrupting running activities while preserving their context is proposed and analyzed.", "target": "Preserving the Context of Interrupted Business Process Activities"} | |
{"id": "412261", "source": "Generate a title for the following abstract of a paper: We propose a novel Sorted Switching Median Filter (i.e. SSMF) for effectively denoising extremely corrupted images while preserving the image details. The center pixel is considered as ''uncorrupted'' or ''corrupted'' noise in the detecting stage. The corrupted pixels that possess more noise-free surroundings will have higher processing priority in the SSMF sorting and filtering stages to rescue the heavily noisy neighbors. Five noise models are considered to assess the performance of the proposed SSMF algorithm. Several extensive simulation results conducted on both grayscale and color images with a wide range (from 10% to 90%) of noise corruption clearly show that the proposed SSMF substantially outperforms all other existing median-based filters.", "target": "Using Sorted Switching Median Filter to remove high-density impulse noises"} | |
{"id": "411139", "source": "Generate a title for the following abstract of a paper: The growing amount of web-based attacks poses a severe threat to the security of web applications. Signature-based detection techniques increasingly fail to cope with the variety and complexity of novel attack instances. As a remedy, we introduce a protocol-aware reverse HTTP proxy TokDoc (the token doctor), which intercepts requests and decides on a per-token basis whether a token requires automatic \"healing\". In particular, we propose an intelligent mangling technique, which, based on the decision of previously trained anomaly detectors, replaces suspicious parts in requests by benign data the system has seen in the past. Evaluation of our system in terms of accuracy is performed on two real-world data sets and a large variety of recent attacks. In comparison to state-of-the-art anomaly detectors, TokDoc is not only capable of detecting most attacks, but also significantly outperforms the other methods in terms of false positives. Runtime measurements show that our implementation can be deployed as an inline intrusion prevention system.", "target": "TokDoc: a self-healing web application firewall"} | |
{"id": "411003", "source": "Generate a title for the following abstract of a paper: Large knowledge graphs increasingly add value to various applications that require machines to recognize and understand queries and their semantics, as in search or question answering systems. Latent variable models have increasingly gained attention for the statistical modeling of knowledge graphs, showing promising results in tasks related to knowledge graph completion and cleaning. Besides storing facts about the world, schema-based knowledge graphs are backed by rich semantic descriptions of entities and relation-types that allow machines to understand the notion of things and their semantic relationships. In this work, we study how type-constraints can generally support the statistical modeling with latent variable models. More precisely, we integrated prior knowledge in form of type-constraints in various state of the art latent variable approaches. Our experimental results show that prior knowledge on relation-types significantly improves these models up to 77% in link-prediction tasks. The achieved improvements are especially prominent when a low model complexity is enforced, a crucial requirement when these models are applied to very large datasets. Unfortunately, type-constraints are neither always available nor always complete e.g., they can become fuzzy when entities lack proper typing. We show that in these cases, it can be beneficial to apply a local closed-world assumption that approximates the semantics of relation-types based on observations made in the data.", "target": "Type-Constrained Representation Learning in Knowledge Graphs"} | |
{"id": "412465", "source": "Generate a title for the following abstract of a paper: Algorithms for classifying one-factorizations of regular graphs are studied. The smallest open case is currently graphs of order 12; one-factorizations of r-regular graphs of order 12 are here classified for r less than or equal to 6 and r = 10, 11. Two different approaches are used for regular graphs of small degree; these proceed one-factor by one-factor and vertex by vertex, respectively. For degree r = 11, we have one-factorizations of K-12. These have earlier been classified, but a new approach is presented which views these as certain triple systems on 4n - 1 points and utilizes an approach developed for classifying Steiner triple systems. Some properties of the classified one-factorizations are also tabulated.", "target": "One-Factorizations of Regular Graphs of Order 12"} | |
{"id": "411556", "source": "Generate a title for the following abstract of a paper: The problem of implementing socially intelligent agents has been widely investigated in the field of both Embodied Conversational Agents (ECAs) and Social Robots that have the advantage of offering to people the possibility to relate with computer media at a social level. We focus our study on the recognition of the social response of users to embodied agents in the context of ambient intelligence. In this paper we describe how we extended a model for recognizing the social attitude in natural conversation from text by adding two additional knowledge sources: speech and gestures.", "target": "Towards a Model for Recognising the Social Attitude in Natural Interaction with Embodied Agents"} | |
{"id": "411126", "source": "Generate a title for the following abstract of a paper: Designated verifier signatures (DVS) allow a signer to create a signature whose validity can only be verified by a specific entity chosen by the signer. In addition, the chosen entity, known as the designated verifier, cannot convince any body that the signature is created by the signer. Multi-designated verifiers signatures (MDVS) are a natural extension of DVS in which the signer can choose multiple designated verifiers. DVS and MDVS are useful primitives in electronic voting and contract signing. In this paper, we investigate various aspects of MDVS and make two contributions. Firstly, we revisit the notion of unforgeability under rogue key attack on MDVS. In this attack scenario, a malicious designated verifier tries to forge a signature that passes through the verification of another honest designated verifier. A common counter-measure involves making the knowledge of secret key assumption (KOSK) in which an adversary is required to produce a proof-of-knowledge of the secret key. We strengthened the existing security model to capture this attack and propose a new construction that does not rely on the KOSK assumption. Secondly, we propose a generic construction of strong MDVS.", "target": "(Strong) multi-designated verifiers signatures secure against rogue key attack"} | |
{"id": "411550", "source": "Generate a title for the following abstract of a paper: In this paper, we present deterministic and probabilistic methods for simulating PRAM computations on linear arrays with reconfigurable pipelined bus systems (LARPBS). The following results are established in this paper. (1) Each step of a p-processor PRAM with m=O(p) shared memory cells can be simulated by a p-processors LARPBS in O(log p) time, where the constant in the big-O notation is small. (2) Each step of a p-processor PRAM with m=\u03a9(p) shared memory cells can be simulated by a p-processors LARPBS in O(log m) time. (3) Each step of a p-processor PRAM can be simulated by a p-processor LARPBS in O(log p) time with probability larger than 1\u22121/p^c for all c0. (4) As an interesting byproduct, we show that a p-processor LARPBS can sort p items in O(log p) time, with a small constant hidden in the big-O notation. Our results indicate that an LARPBS can simulate a PRAM very efficiently.", "target": "Efficient Deterministic and Probabilistic Simulations of PRAMs on Linear Arrays with Reconfigurable Pipelined Bus Systems"} | |
{"id": "41952", "source": "Generate a title for the following abstract of a paper: This paper addresses the automatic recognition of handwritten temperature values in weather records. The localization of table cells is based on line detection using projection profiles. Further, a stroke-preserving line removal method which is based on gradient images is proposed. The presented digit recognition utilizes features which are extracted using a set of filters and a Support Vector Machine classifier. It was evaluated on the MNIST and the USPS dataset and our own database with about 17,000 RGB digit images. An accuracy of 99.36% per digit is achieved for the entire system using a set of 84 weather records. ", "target": "Digit Recognition in Handwritten Weather Records"} | |
{"id": "41102", "source": "Generate a title for the following abstract of a paper: In this work we present the principles and experimental demonstration of a BB84 quantum key distribution (QKD) one way system using fainted pulses and a quadrature phase-shift-keying (QPSK) format including a time- multiplexed unmodulated carrier reference at Aliceu0027s end, and a differential homodyne reception at Bobu0027s end. We also describe the secure electronics interface subsystem concept for the interaction with upper layers in an IP application.", "target": "Towards Quantum Key Distribution System using Homodyne Detection with Differential Time-Multiplexed Reference"} | |
{"id": "411470", "source": "Generate a title for the following abstract of a paper: We consider a router on the Internet analyzing the statistical properties of a TCP/IP packet stream. A fundamental difficulty with measuring traffic behavior on the Internet is that there is simply too much data to be recorded for later analysis, on the order of gigabytes a second. As a result, network routers can collect only relatively few statistics about the data. The central problem addressed here is to use the limited memory of routers to determine essential features of the network traffic stream. A particularly difficult and representative subproblem is to determine the top k categories to which the most packets belong, for a desired value of k and for a given notion of categorization such as the destination IP address.We present an algorithm that deterministically finds (in particular) all categories having a frequency above 1/(m+1) using m counters, which we prove is best possible in the worst case. We also present a sampling-based algorithm for the case that packet categories follow an arbitrary distribution, but their order over time is permuted uniformly at random. Under this model, our algorithm identifies flows above a frequency threshold of roughly 1/\u9a74nm with high probability, where m is the number of counters and n is the number of packets observed. This guarantee is not far off from the ideal of identifying all flows (probability 1/n), and we prove that it is best possible up to a logarithmic factor. We show that the algorithm ranks the identified flows according to frequency within any desired constant factor of accuracy.", "target": "Frequency Estimation of Internet Packet Streams with Limited Space"} | |
{"id": "412069", "source": "Generate a title for the following abstract of a paper: Many blind source separation algorithms only try to find the estimation of original signals and the de-mixing system. However, finding the mixing system is also an important task, which is a representation for the recording environment. Direct inversing the mixing system will introduce many noises. We proposed a correlation based backward-forward combined algorithm to do blind source separation. In the proposed algorithm, an alternative optimization strategy is applied. The estimation of mixing/de-mixing system is used as an initial guess to optimize the de-mixing/mixing system iteratively. This alternative optimization algorithm can obtain better estimation for mixing and de-mixing system at the same time. In addition, it can reduce the chance to be tracked into a local minimal. As a result, we can get a better estimation for the sources. Good experimental results were obtained on both simulated and real signals.", "target": "Backward-Forward Combined Convolutive Blind Source Separation"} | |
{"id": "412196", "source": "Generate a title for the following abstract of a paper: In this paper, the homogeneous weights of matrix product codes over finite principal ideal rings are studied and a lower bound for the minimum homogeneous weights of such matrix product codes is obtained.", "target": "Homogeneous weights of matrix product codes over finite principal ideal rings"} | |
{"id": "41653", "source": "Generate a title for the following abstract of a paper: This paper caters the need of acquiring the principal objects, characters, and scenes from a video in order to entertain the image based query. The movie frames are divided into frames with 2D representative images called \"key frames\". Various regions in a key frame are marked as key objects according to their textures and shapes. These key objects serve as a catalogue of regions to be searched and matched from rest of the movie, using viewpoint invariant regions calculation, providing the location, size, and orientation of all the objects occurring in the movie in the form of a set of structures collaborating as video profile. The profile provides information about occurrences of every single key object from every frame of the movie it exists in. This information can further ease streaming of objects over various network-based viewing qualities. Hence, the method provides an effective reduced profiling approach of automatic logging and viewing information through query by example (QBE) procedure, and deals with video streaming issues at the same time.", "target": "Key Objects Based Profile for a Content-Based Video Information Retrieval and Streaming System Using Viewpoint Invariant Regions"} | |
{"id": "411718", "source": "Generate a title for the following abstract of a paper: . A strategy in solving the shape from shading problem forthe shape and albedo recovery of book surfaces under the fully perspectiveenvironment is proposed. The whole recovery process is composedof three sequential steps : preprocessing, recovery of apparent shape, andortho-image generation. A set of pure shade and albedo images are separatedin preprocessing step. Implicit equations governing the shadingand observation have been transformed into explicit ones. Direct and uniquerecovery... ", "target": "A Divide-and-Conquer Strategy in Recovering Shape of Book Surface from Shading"} | |
{"id": "41895", "source": "Generate a title for the following abstract of a paper: The goal of data mining algorithm is to discover useful information embedded in large databases. Frequent itemset mining and sequential pattern mining are two important data mining problems with broad applications. Perhaps the most efficient way to solve these problems sequentially is to apply a pattern-growth algorithm, which is a divide-and-conquer algorithm [9, 10]. In this paper, we present a framework for parallel mining frequent itemsets and sequential patterns based on the divide-and-conquer strategy of pattern growth. Then, we discuss the load balancing problem and introduce a sampling technique, called selective sampling, to address this problem. We implemented parallel versions of both frequent itemsets and sequential pattern mining algorithms following our framework. The experimental results show that our parallel algorithms usually achieve excellent speedups.", "target": "A sampling-based framework for parallel data mining"} | |
{"id": "41636", "source": "Generate a title for the following abstract of a paper: The increasing ability to quickly collect and cheaply store large volumes of data, and the need for extracting concise information to be efficiently manipulated and intuitively analyzed, are posing new requirements for Database Management Systems (DBMS) in both industrial and scientific applications. A common approach to deal with huge data volumes is to reduce the available information to knowledge artifacts (i.e., clusters, rules, etc.), hereafter called patterns, through data processing methods (pattern recognition, data mining, knowledge extraction). Patterns reduce the number and size of the original information to manageable size while preserving as much as possible its hidden / interesting content. In order to efficiently and effectively deal with patterns, academic groups and industrial consortiums have recently devoted efforts towards modeling, storage, retrieval, analysis and manipulation of patterns with results mainly in the areas of Inductive Databases and Pattern Base Management Systems (PBMS).", "target": "Report on the International Workshop on Pattern Representation and Management (PaRMa'04)"} | |
{"id": "411378", "source": "Generate a title for the following abstract of a paper: Infinite families of planar cubic hypohamiltonian and hypotraceable graphs are described and these are used to prove that the maximum degree and the maximum number of edges in a hypohamiltonian graph with n vertices are approximately n2 and n24, respectively. Also, the possible order of a cubic hypohamiltonian graph is determined.", "target": "Planar cubic hypohamiltonian and hypotraceable graphs"} | |
{"id": "411138", "source": "Generate a title for the following abstract of a paper: Traditional structured analysis and design methods have been criticized because the methods lack formality to provide a design for rigorous development. Several approaches have been developed for verifying a design by integrating traditional structured analysis and design methods with formal specification languages. However, the integration of traditional structured analysis and design methods with formal specification languages may confuse the problem definition and understanding with the attempt to define system structures and algorithms in the design phase. In addition, formal specification languages may add complexity to the design due to the complexity of the language syntax and semantics. In this paper, we are presenting an approach to verification of a structured design in HOS without using formal specification languages. Rules for the partial and total correctness of the design are also discussed.", "target": "Formal Verification of Structured Analysis and Design in HOS"} | |
{"id": "41122", "source": "Generate a title for the following abstract of a paper: In recent years, substantial progress has been achieved in the area of volume visualization on irregular grids, which is mainly based on tetrahedral meshes. Even moderately fine tetrahedral meshes consume several mega-bytes of storage. For archivation and transmission compression algorithms are essential. In scientific applications lossless compression schemes are of primary interest. This paper introduces a new lossless compression scheme for the connectivity of tetrahedral meshes. Our technique can handle all tetrahedral meshes in three dimensional euclidean space even with non manifold border. We present compression and decompression algorithms which consume for reasonable meshes linear time in the number of tetrahedra. The connectivity is compressed to less than 2.4 bits per tetrahedron for all measured meshes. Thus a tetrahedral mesh can almost be reduced to the vertex coordinates, which consume in a common representation about one quarter of the total storage space.We complete our work with solutions for the compression of vertex coordinates and additional attributes, which might be attached to the mesh.", "target": "Tetrahedral mesh compression with the cut-border machine"} | |