id (stringlengths 7-7) | title (stringlengths 3-578) | abstract (stringlengths 0-16.7k) | keyphrases (sequence) | prmu (sequence) |
---|---|---|---|---|
2rfYoEf | An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition | This article provides an overview of the first BioASQ challenge, a competition on large-scale biomedical semantic indexing and question answering (QA), which took place between March and September 2013. BioASQ assesses the ability of systems to semantically index very large numbers of biomedical scientific articles, and to return concise and user-understandable answers to given natural language questions by combining information from biomedical articles and ontologies. | [
"semantic indexing",
"question answering",
"bioasq competition",
"hierarchical text classification",
"information retrieval",
"passage retrieval",
"multi-document text summarization"
] | [
"P",
"P",
"R",
"U",
"M",
"U",
"U"
] |
-XXxufK | Regreening the Metropolis: Pathways to More Ecological Cities: Keynote Address | Eighty percent of the American population now lives in metropolitan regions whose geographic extent continues to expand even as many core cities and inner-tier suburbs lose middle-class populations, jobs, and tax base. Urban sprawl and the socioeconomic polarizing of metropolitan America have been fostered by public policies including (1) federal subsidies for new infrastructure on the urban fringe; (2) tax policies that favor home ownership over rental properties; (3) local zoning codes; and (4) federal and state neglect of older urban neighborhoods. In the face of diminished access to nature outside of metropolitan areas, locally based efforts to protect and restore greenspaces within urban areas seek to make older communities more habitable and more ecological. Some pathways to more ecological cities include the following | [
"urban ecology",
"city nature",
"urban biodiversity",
"spirit of place"
] | [
"R",
"R",
"M",
"M"
] |
48zubXF | A modified runs test for symmetry | We propose a modification of the Modarres-Gastwirth test for the hypothesis of symmetry about a known center. By means of a Monte Carlo study we show that the modified test outperforms the original Modarres-Gastwirth test for a wide spectrum of asymmetrical alternatives drawn from the lambda family and for all assayed sample sizes. We also show that our test is the best among the runs tests we have compared. | [
"runs test",
"test of symmetry",
"generalized lambda family",
"power",
"primary 62g10",
"secondary 62g30"
] | [
"P",
"R",
"M",
"U",
"U",
"U"
] |
57msgzz | Probability-based approaches to VLSI circuit partitioning | Iterative-improvement two-way min-cut partitioning is an important phase in most circuit placement tools, and finds use in many other computer-aided design (CAD) applications. Most iterative improvement techniques for circuit netlists, like the Fiduccia-Mattheyses (FM) method, compute the gains of nodes using local netlist information that is only concerned with the immediate improvement in the cutset. This can lead to misleading gain information. Krishnamurthy suggested a lookahead (LA) gain calculation method to ameliorate this situation; however, as we show, it leaves room for improvement. We present here a probabilistic gain computation approach called probabilistic partitioner (PROP) that is capable of capturing the future implications of moving a node at the current time. We also propose an extended algorithm SHRINK-PROP that increases the probability of removing recently "perturbed" nets (nets whose nodes have been moved for the first time) from the cutset. Experimental results on medium- to large-size ACM/SIGDA benchmark circuits show that PROP and SHRINK-PROP outperform previous iterative-improvement methods like FM (by about 30% and 37%, respectively) and LA (by about 27% and 34%, respectively). Both PROP and SHRINK-PROP also obtain much better cutsizes than many recent state-of-the-art partitioners like EIG1, WINDOW, MELO, PARABOLI, GFM and CMetis (by 4.5% to 67%). Our empirical timing results reveal that PROP is appreciably faster than most recent techniques. We also obtain results on the more recent ISPD-98 benchmark suite that show similar substantial mincut improvements by PROP and SHRINK-PROP over FM (24% and 31%, respectively). It is also noteworthy that SHRINK-PROP's results are within 2.5% of those obtained by hMetis, one of the best multilevel partitioners. However, the multilevel paradigm is orthogonal to SHRINK-PROP. Further, since it is a "flat" partitioner, it has advantages over hMetis in partition-driven placement applications. | [
"vlsi circuit",
"min-cut partitioning",
"iterative improvement",
"probabilistic gain",
"clustering effect"
] | [
"P",
"P",
"P",
"P",
"U"
] |
dX1XqyC | ShengBTE: A solver of the Boltzmann transport equation for phonons | ShengBTE is a software package for computing the lattice thermal conductivity of crystalline bulk materials and nanowires with diffusive boundary conditions. It is based on a full iterative solution to the Boltzmann transport equation. Its main inputs are sets of second- and third-order interatomic force constants, which can be calculated using third-party ab-initio packages. Dirac delta distributions arising from conservation of energy are approximated by Gaussian functions. A locally adaptive algorithm is used to determine each process-specific broadening parameter, which renders the method fully parameter free. The code is free software, written in Fortran and parallelized using MPI. A complementary Python script to help compute third-order interatomic force constants from a minimum number of ab-initio calculations, using a real-space finite-difference approach, is also publicly available for download. Here we discuss the design and implementation of both pieces of software and present results for three example systems: Si, InAs and lonsdaleite. Program title: ShengBTE. Catalogue identifier: AESL_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AESL_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License, version 3. No. of lines in distributed program, including test data, etc.: 292052. No. of bytes in distributed program, including test data, etc.: 1989781. Distribution format: tar.gz. Programming language: Fortran 90, MPI. Computer: Non-specific. Operating system: Unix/Linux. Has the code been vectorized or parallelized?: Yes, parallelized using MPI. RAM: Up to several GB. Classification: 7.9. External routines: LAPACK, MPI, spglib (http://spglib.sourceforge.net/). Nature of problem: Calculation of thermal conductivity and related quantities, determination of scattering rates for allowed three-phonon processes. Solution method: Iterative solution, locally adaptive Gaussian broadening. Running time: Up to several hours on several tens of processors | [
"boltzmann transport equation",
"phonon",
"thermal conductivity"
] | [
"P",
"P",
"P"
] |
-LHbKSy | Tracing impact in a usability improvement process | Analyzing usability improvement processes as they take place in real-life organizations is necessary to understand the practice of usability work. This paper describes a case study where the usability of an information system is improved and a relationship between the improvements and the evaluation efforts is established. Results show that evaluation techniques complemented each other by suggesting different kinds of usability improvement. Among the techniques applied, a combination of questionnaires and Metaphors of Human Thinking (MOT) showed the largest mean impact, and MOT produced the largest number of impacts. Logging of real-life use of the system over 6 months indicated six aspects of improved usability, where significant differences among evaluation techniques were found. Concerning five of the six aspects, Think Aloud evaluations and the above-mentioned combination of questionnaire and MOT performed equally well, and better than MOT alone. Based on the evaluations 40 redesign proposals were developed and 30 of these were implemented. Four of the implemented redesigns were considered especially important. These evolved with inspiration from multiple evaluations and were informed by stakeholders with different kinds of expertise. Our results suggest that practitioners should not rely on isolated evaluations. Instead, complementary techniques should be combined, and people with different expertise should be involved. (C) 2007 Elsevier B.V. All rights reserved. | [
"usability improvement process",
"case study",
"questionnaire",
"metaphors of human thinking",
"usability engineering",
"think loud"
] | [
"P",
"P",
"P",
"P",
"M",
"M"
] |
4sJYLN3 | HOMAN, a learning based negotiation method for holonic multi-agent systems | Holonic multi-agent systems are a special category of multi-agent systems that best fit environments with numerous agents and high complexity. As in general multi-agent systems, the agents in a holonic system may negotiate with each other. These systems have their own characteristics and structure, for which a specific negotiation mechanism is required. This mechanism should be simple, fast and operable in real-world applications. It would be better to equip negotiators with a learning method which can efficiently use the available information. The learning method should itself be fast, too. Additionally, this mechanism should match the special characteristics of holonic multi-agent systems. In this paper, we introduce such a negotiation method. Experimental results demonstrate the efficiency of this new approach. | [
"negotiation",
"holonic multi-agent systems",
"semi-cooperative",
"agreement",
"regression"
] | [
"P",
"P",
"U",
"U",
"U"
] |
1edzP1F | the portable common runtime approach to interoperability | Operating system abstractions do not always reach high enough for direct use by a language or applications designer. The gap is filled by language-specific runtime environments, which become more complex for richer languages (CommonLisp needs more than C++, which needs more than C). But language-specific environments inhibit integrated multi-lingual programming, and also make porting hard (for instance, because of operating system dependencies). To help solve these problems, we have built the Portable Common Runtime (PCR), a language-independent and operating-system-independent base for modern languages. PCR offers four interrelated facilities: storage management (including universal garbage collection), symbol binding (including static and dynamic linking and loading), threads (lightweight processes), and low-level I/O (including network sockets). PCR is common because these facilities simultaneously support programs in several languages. PCR supports C, Cedar, Scheme, and CommonLisp intercalling and runs pre-existing C and CommonLisp (Kyoto) binaries. PCR is portable because it uses only a small set of operating system features. The PCR source code is available for use by other researchers and developers. | [
"portability",
"runtime",
"interoperability",
"operating system",
"systems",
"abstraction",
"direct",
"use",
"language",
"applications",
"design",
"environments",
"complexity",
"integrability",
"program",
"dependencies",
"help",
"storage management",
"collect",
"binding",
"dynamic",
"linking",
"thread",
"process",
"network",
"support",
"scheme",
"feature",
"research",
"developer",
"source-code"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"U"
] |
-jH8-z2 | Efficient keyword search over virtual XML views | Emerging applications such as personalized portals, enterprise search, and web integration systems often require keyword search over semi-structured views. However, traditional information retrieval techniques are likely to be expensive in this context because they rely on the assumption that the set of documents being searched is materialized. In this paper, we present a system architecture and algorithm that can efficiently evaluate keyword search queries over virtual (unmaterialized) XML views. An interesting aspect of our approach is that it exploits indices present on the base data and thereby avoids materializing large parts of the view that are not relevant to the query results. Another feature of the algorithm is that by solely using indices, we can still score the results of queries over the virtual view, and the resulting scores are the same as if the view was materialized. Our performance evaluation using the INEX data set in the Quark (Bhaskar et al. in Quark: an efficient XQuery full-text implementation. In: SIGMOD, 2006) open-source XML database system indicates that the proposed approach is scalable and efficient. | [
"keyword search",
"xml views",
"document projections",
"document pruning",
"top-k"
] | [
"P",
"P",
"M",
"M",
"U"
] |
2Ghuhss | A novel clustering method on time series data | Time series is a very popular type of data which exists in many domains. Clustering time series data has a wide range of applications and has attracted researchers from a wide range of disciplines. In this paper a novel algorithm for shape-based time series clustering is proposed. By using principles from complex networks, it can reduce the size of the data and improve efficiency without degrading the clustering quality. Firstly, a one-nearest-neighbor network is built based on the similarity of time series objects; in this step, triangle distance is used to measure the similarity. In the neighbor network, each node represents one time series object and each link denotes a neighbor relationship between nodes. Secondly, the nodes with high degrees are chosen and used for clustering. In the clustering process, the dynamic time warping distance function and a hierarchical clustering algorithm are applied. Thirdly, experiments are executed on synthetic and real data. The results show that the proposed algorithm performs well in terms of both efficiency and effectiveness. (C) 2011 Elsevier Ltd. All rights reserved. | [
"clustering",
"time series",
"dynamic time warping",
"nearest neighbor network"
] | [
"P",
"P",
"P",
"M"
] |
3VTZwih | An Improved Floating-to-Fixed-Point Conversion Scheme for DCT Quantization Algorithm | Conventional fixed-point implementation of the DCT coefficients quantization algorithm in video compression may result in deteriorated image quality. The paper investigates this problem and proposes an improved floating-to-fixed-point conversion scheme. With a proper scaling factor and a newly established look-up table, the proposed fixed-point scheme achieves bit-wise consistency with the floating-point realization. Experimental results verify the validity of the proposed method. | [
"floating-to-fixed-point conversion",
"quantization",
"video compression",
"discrete cosine transform"
] | [
"P",
"P",
"P",
"U"
] |
-3tsAWm | A radial basis function network approach for the computation of inverse continuous time variant functions | This paper presents an efficient approach for the fast computation of inverse continuous time variant functions with the proper use of Radial Basis Function Networks (RBFNs). The approach is based on implementing RBFNs for computing inverse continuous time variant functions via an overall damped least squares solution that includes a novel null space vector for singularity prevention. The singularity-avoidance null space vector is derived by developing a sufficiency condition for singularity prevention, which leads to certain characterizing matrices and an associated performance index. | [
"radial basis functions network",
"artificial neural networks",
"inverse functions"
] | [
"P",
"M",
"R"
] |
-xPhNq2 | Cryptography on smart cards | This article presents an overview of the cryptographic primitives that are commonly implemented on smart cards. We also discuss attacks that can be mounted on smart cards as well as countermeasures against such attacks. | [
"cryptography",
"smart cards"
] | [
"P",
"P"
] |
4wjfFqN | The antecedents of customer satisfaction and its link to complaint intentions in online shopping: An integration of justice, technology, and trust | Complaint behaviors are critical to maintaining customer loyalty in an online market. They provide insight into the customer's experience of service failure and help to redress the failures. Previous studies have shown the importance of customer satisfaction as a mediator for complaint intentions. It is important to examine the antecedents of customer satisfaction and its link to complaint intentions. Online shoppers are both buyers of products/services and users of web-based systems. Trust also plays a critical role in forming a psychological state with positive or negative feelings toward e-vendors. In this context, there are three major concerns: justice, technology and trust. This study proposes a research model to combine these issues, in order to investigate complaint intentions. Data were collected from an online survey wherein subjects were encouraged to reflect on recent service failure experiences. The results from testing a structural equation model indicate that distributive and interactional justice contribute significantly to customer satisfaction and, in turn, to complaint intentions, but procedural justice does not. Technology-based features and trust are also important in determining the two target variables. The implications for managers and scholars are also discussed. | [
"customer satisfaction",
"complaint intention",
"online shopping",
"trust",
"justice theory",
"expectationconfirmation model"
] | [
"P",
"P",
"P",
"P",
"M",
"M"
] |
3tCn&9v | to divide and conquer search ranking by learning query difficulty | Learning to rank plays an important role in information retrieval. In most of the existing solutions for learning to rank, all the queries with their returned search results are learnt and ranked with a single model. In this paper, we demonstrate that it is highly beneficial to divide queries into multiple groups and conquer search ranking based on query difficulty. To this end, we propose a method which first characterizes a query using a variety of features extracted from user search behavior, such as the click entropy, the query reformulation probability. Next, a classification model is built on these extracted features to assign a score to represent how difficult a query is. Based on this score, our method automatically divides queries into groups, and trains a specific ranking model for each group to conquer search ranking. Experimental results on RankSVM and RankNet with a large-scale evaluation dataset show that the proposed method can achieve significant improvement in the task of web search ranking. | [
"query difficulty",
"learning to rank"
] | [
"P",
"P"
] |
21vW&rx | Defect reduction in PCB contract manufacturing operations | This study addresses the identification and improvement of a defect-reducing process step in plated-through-hole (PTH) technology of printed circuit board (PCB) assemblies. The process step discussed is a step in which the substrates are baked prior to assembly. While this step is developed to address defect problems faced by both OEMs and contract manufacturers alike, this paper discusses an experiment designed to improve the effect of the baking step that was performed at a PCB contract manufacturing facility. Furthermore, due to the tremendous variations in product complexity, a relatively new statistical process control chart which tracks defects per millions of opportunities (DPMO), was used to help evaluate the results. (C) 1997 Elsevier Science Ltd. | [
"contract manufacturing",
"printed circuit boards",
"statistical process control",
"electronics manufacturing",
"plated-throughhole technology",
"quality control",
"dpmo chart",
"outgassing",
"yields"
] | [
"P",
"P",
"P",
"M",
"M",
"M",
"R",
"U",
"U"
] |
1fDXJGT | a new form of dos attack in a cloud and its avoidance mechanism | Data center networks are typically grossly under-provisioned. This is not a problem in a corporate data center, but it could be a problem in a shared infrastructure, such as a co-location facility or a cloud infrastructure. If an application is deployed in such an infrastructure, the application owners need to take into account the infrastructure limitations. They need to build in counter-measures to ensure that the application is secure and it meets its performance requirements. In this paper, we describe a new form of DOS attack, which exploits the network under-provisioning in a cloud infrastructure. We have verified that such an attack could be carried out in practice in one cloud infrastructure. We also describe a mechanism to detect and avoid this new form of attack. | [
"dos attack",
"bandwidth estimation"
] | [
"P",
"U"
] |
2GqQ:AV | interdisciplinary applications of mathematical modeling | We demonstrate applications of numerical integration and visualization algorithms in diverse fields including psychological modeling (biometrics); in high energy physics for the study of collisions of elementary particles; and in medical physics for regulating the dosage of proton beam radiation therapy. We discuss the problems and solution methods, as supported by numerical results. | [
"numerical integration and visualization",
"proton beam radiation therapy",
"sensory discriminal process",
"duo-trio method",
"feynman diagram",
"adaptive partitioning algorithm"
] | [
"P",
"P",
"U",
"M",
"U",
"M"
] |
36ZxhLX | The probability ranking principle revisited | A theoretic framework for multimedia information retrieval is introduced which guarantees optimal retrieval effectiveness. In particular, a Ranking Principle for Distributed Multimedia-Documents (RPDM) is described together with an algorithm that satisfies this principle. Finally, the RPDM is shown to be a generalization of the Probability Ranking Principle (PRP), which guarantees optimal retrieval effectiveness in the case of text document retrieval. The PRP justifies theoretically the relevance ranking adopted by modern search engines. In contrast to the classical PRP, the new RPDM takes into account transmission and inspection time, and most importantly, aspectual recall rather than simple recall. | [
"probability ranking principle",
"multimedia information retrieval",
"relevance ranking",
"optimal search performance",
"maximum retrieval effectiveness"
] | [
"P",
"P",
"P",
"M",
"M"
] |
-u5EZwc | how users associate wireless devices | In a wireless world, users can establish connections between devices spontaneously, and unhampered by cables. However, in the absence of cables, what is the natural interaction to connect one device with another? A wide range of device association techniques have been demonstrated, but it has remained an open question what actions users would spontaneously choose for device association. We contribute a study eliciting device association actions from non-technical users without premeditation. Over 700 user-defined actions were collected for 37 different device combinations. We present a classification of user-defined actions, and observations of the users' rationale. Our findings indicate that there is no single most spontaneous action; instead five prominent categories of user-defined actions were found. | [
"wireless devices",
"device association",
"spontaneous interaction",
"input actions"
] | [
"P",
"P",
"R",
"M"
] |
3neBVar | Analyticity of weighted central paths and error bounds for semidefinite programming | The purpose of this paper is two-fold. Firstly, we show that every Cholesky-based weighted central path for semidefinite programming is analytic under strict complementarity. This result is applied to homogeneous cone programming to show that the central paths defined by the known class of optimal self-concordant barriers are analytic in the presence of strictly complementary solutions. Secondly, we consider a sequence of primal-dual solutions that lies within a prescribed neighborhood of the central path of a pair of primal-dual semidefinite programming problems, and converges to the respective optimal faces. Under the additional assumption of strict complementarity, we derive two necessary and sufficient conditions for the sequence of primal-dual solutions to converge linearly with their duality gaps. | [
"error bound",
"semidefinite programming",
"homogeneous cone programming",
"weighted analytic center"
] | [
"P",
"P",
"P",
"M"
] |
14Goc4P | Multi-Class Blue Noise Sampling | Sampling is a core process for a variety of graphics applications. Among existing sampling methods, blue noise sampling remains popular thanks to its spatial uniformity and absence of aliasing artifacts. However, research so far has been mainly focused on blue noise sampling with a single class of samples. This could be insufficient for common natural as well as man-made phenomena requiring multiple classes of samples, such as object placement, imaging sensors, and stippling patterns. | [
"multi-class",
"blue noise",
"sampling",
"poisson hard/soft disk",
"dart throwing",
"relaxation"
] | [
"P",
"P",
"P",
"U",
"U",
"U"
] |
5-TY2HM | A note on the inventory models for deteriorating items with ramp type demand rate | In this research we study the inventory models for deteriorating items with ramp type demand rate. We first clearly point out some questionable results that appeared in (Mandal, B., Pal, A.K., 1998. Order level inventory system with ramp type demand rate for deteriorating items. Journal of Interdisciplinary Mathematics 1, 49-66 and Wu, K.S., Ouyang, L.Y., 2000. A replenishment policy for deteriorating items with ramp type demand rate (Short Communication). Proceedings of National Science Council ROC (A) 24, 279-286). We then resolve the problem by offering a rigorous and efficient method to derive the optimal solution. In addition, we also propose an extended inventory model with ramp type demand rate and its optimal feasible solution to amend the incompleteness in the previous work. Moreover, we also propose an effective inventory replenishment policy for this kind of inventory model. We believe that our work will provide a solid foundation for the further study of this important class of inventory models with ramp type demand rate. | [
"inventory",
"deteriorating item",
"ramp type demand rate"
] | [
"P",
"P",
"P"
] |
4r4ZxiC | Efficient multiple faces tracking based on Relevance Vector Machine and Boosting learning | A multiple faces tracking system is presented based on Relevance Vector Machine (RVM) and Boosting learning. In this system, a face detector based on Boosting learning is used to detect faces at the first frame, and the face motion model and color model are created. The face motion model consists of a set of RVMs that learn the relationship between the motion of the face and its appearance, and the face color model is the 2D histogram of the face region in CrCb color space. In the tracking process different tracking methods (RVM tracking, local search, giving up tracking) are used according to the different states of faces, and the states are changed according to the tracking results. When the full image search condition is satisfied, a full image search is started in order to find newly appearing faces and formerly occluded faces. In the full image search and local search, the similarity matrix is introduced to help match faces efficiently. Experimental results demonstrate that this system can (a) automatically find newly appearing faces; (b) recover from occlusion, for example, if the faces are occluded by others and reappear or leave the scene and return; (c) run with high computational efficiency, at about 20 frames/s. (C) 2008 Elsevier Inc. All rights reserved. | [
"multiple faces tracking",
"face tracking",
"relevance vector machine",
"boosting",
"face detection",
"real-time tracking",
"probabilistic algorithms",
"adaboost"
] | [
"P",
"P",
"P",
"P",
"R",
"M",
"U",
"U"
] |
2:aZ4ku | a design flow for application-specific networks on chip with guaranteed performance to accelerate soc design and verification | Systems on chip (SOC) are composed of intellectual property blocks (IP) and interconnect. While mature tooling exists to design the former, tooling for interconnect design is still a research area. In this paper we describe an operational design flow that generates and configures application-specific network on chip (NOC) instances, given application communication requirements. The NOC can be simulated in SystemC and RTL VHDL. An independent performance verification tool verifies analytically that the NOC instance (hardware) and its configuration (software) together meet the application performance requirements. The Æthereal NOC's guaranteed performance is essential to replace time-consuming simulation by fast analytical performance validation. As a result, application-specific NOCs that are guaranteed to meet the application's communication requirements are generated and verified in minutes, reducing the number of design iterations. A realistic MPEG SOC example substantiates our claims. | [
"design",
"flow",
"applications",
"network",
"performance",
"verification",
"intellectual property",
"interconnect",
"tool",
"research",
"paper",
"configurability",
"communication",
"requirements",
"simulation",
"systemc",
"hardware",
"software",
"iter",
"examples",
"system on-chip",
"timing",
"network on-chip"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"M",
"U",
"M"
] |
4NzSBHc | Homing-pigeon-based messaging: multiple pigeon-assisted delivery in delay-tolerant networks | In this paper, we consider the applications of delay-tolerant networks (DTNs), where the nodes in a network are located in separated areas, and in each separated area, there exists (at least) an anchor node that provides regional network coverage for the nearby nodes. The anchor nodes are responsible for collecting and distributing messages for the nodes in the vicinity. This work proposes to use a set of messengers (named pigeons) that move around the network to deliver messages among multiple anchor nodes. Each source node (anchor node or Internet access point) owns multiple dedicated pigeons, and each pigeon takes a round trip starting from its home (i.e., the source) through the destination anchor nodes and then returns home, disseminating the messages on its way. We name this the homing-pigeon-based messaging (HoPM) scheme. The HoPM scheme differs from prior schemes in that each messenger is completely dedicated to its home node for providing messaging service. We obtained the average message delay of the HoPM scheme in DTN through theoretical analysis with three different pigeon scheduling schemes. The analytical model was validated by simulations. We also studied the effects of several key parameters on the system performance and compared the results with previous solutions. The results allowed us to better understand the impacts of different scheduling schemes on the system performance of HoPM and demonstrated that our proposed scheme outperforms the previous ones. Copyright (c) 2011 John Wiley & Sons, Ltd. | [
"delay-tolerant network",
"messenger scheduling",
"partitioned wireless network",
"homing-pigeon messaging system",
"queueing theory",
"traffic modeling and mobility management"
] | [
"P",
"R",
"M",
"M",
"U",
"M"
] |
3bQJmFi | The West Nile Virus Encephalitis Outbreak in the United States (1999-2000) | Viruses cause most forms of encephalitis. The two main types responsible for epidemic encephalitis are enteroviruses and arboviruses. The City of New York reports about 10 cases of encephalitis yearly. Establishing a diagnosis is often difficult. In August 1999, a cluster of five patients with fever, confusion, and weakness were admitted to a community hospital in Flushing, New York. Flaccid paralysis developed in four of the five patients, and they required ventilatory support. Three, less severe, cases presented later in the same month. An investigation was conducted by the New York City (NYC) and New York State (NYS) health departments and the national Centers for Disease Control and Prevention (CDC). The West Nile virus (WNV) was identified as the etiologic agent. WNV is an arthropod-borne flavivirus, with a geographic distribution in Africa, the Middle East, and southwestern Asia. It has also been isolated in Australia and sporadically in Europe but never in the Americas. The majority of people infected have no symptoms. Fever, severe myalgias, headache, conjunctivitis, lymphadenopathy, and a roseolar rash can occur. Rarely, encephalitis or meningitis is seen. The NYC outbreak resulted in the first cases of WNV infection in the Western Hemisphere and the first arboviral infection in NYC since yellow fever in the nineteenth century. The WNV is now a public health concern in the United States. | [
"west nile virus",
"encephalitis",
"arbovirus"
] | [
"P",
"P",
"U"
] |
1JZscg6 | Existence results for impulsive neutral second-order stochastic evolution equations with nonlocal conditions | In this paper we consider a class of impulsive neutral second-order stochastic evolution equations with nonlocal initial conditions in a real separable Hilbert space. Sufficient conditions for the existence of mild solutions are established by operator theory and the Sadovskii fixed point theorem. An example is provided to illustrate the theory. (C) 2011 Elsevier Ltd. All rights reserved. | [
"stochastic evolution equations",
"nonlocal condition",
"impulsive equation"
] | [
"P",
"P",
"R"
] |
-7neibd | Distribution network design: New problems and related models | We study some complex distribution network design problems, which involve facility location, warehousing, transportation and inventory decisions. Several realistic scenarios are investigated. Two kinds of mathematical programming formulations are proposed for all the introduced problems, together with a proof of their correctness. Some formulations extend models proposed by Perl and Daskin (1985) for some warehouse location-routing problems; other formulations are based on flow variables and constraints. | [
"distribution",
"location-routing",
"integer linear programming models"
] | [
"P",
"P",
"M"
] |
-9bxE5& | Lyapunov-based nonlinear controllers for obstacle avoidance with a planar n-link doubly nonholonomic manipulator | A mobile manipulator is a robotic system made up of two components; a mobile platform and a manipulator mounted on the platform equipped with non-deformable wheels. Such a combined system requires complex design and control. This paper considers the autonomous navigation problem of a nonholonomic mobile platform and an n-link nonholonomic manipulator fixed to the platform. For this planar n-link doubly nonholonomic manipulator, we present the first ever set of nonlinear continuous controllers for obstacle avoidance. The controllers provide a collision-free trajectory within a constrained workspace cluttered with fixed obstacles of different shapes and sizes whilst satisfying the nonholonomic and kinodynamic constraints associated with the robotic system. An advantage of the proposed method is the ease at which the acceleration-based control laws can be derived from the Lyapunov function. The effectiveness of the nonholonomic planner is demonstrated via computer simulations. (C) 2012 Elsevier B.V. All rights reserved. | [
"n-link doubly nonholonomic manipulators",
"kinodynamic constraints",
"lyapunov-based control scheme",
"artificial potential fields",
"lyapunov stability"
] | [
"P",
"P",
"M",
"U",
"M"
] |
eotyBCZ | Automatic analysis of trabecular bone structure from knee MRI | We investigated the feasibility of quantifying osteoarthritis (OA) by analysis of the trabecular bone structure in low-field knee MRI. Generic texture features were extracted from the images and subsequently selected by sequential floating forward selection (SFFS), following a fully automatic, uncommitted machine-learning based framework. Six different classifiers were evaluated in cross-validation schemes and the results showed that the presence of OA can be quantified by a bone structure marker. The performance of the developed marker reached a generalization area-under-the-ROC (AUC) of 0.82, which is higher than the established cartilage markers known to relate to the OA diagnosis. | [
"bone structure",
"mri",
"oa",
"machine learning",
"texture analysis",
"classification",
"feature selection"
] | [
"P",
"P",
"P",
"U",
"R",
"U",
"R"
] |
3joZ2fF | An application of fuzzy sets theory to the EOQ model with imperfect quality items | This article investigates the inventory problem for items received with imperfect quality, where, upon arrival of an order lot, a 100% screening process is performed and the items of imperfect quality are sold as a single batch at a discounted price, prior to receiving the next shipment. The objective is to determine the optimal order lot size that maximizes the total profit. We first propose a model with a fuzzy defective rate. Then, a model with both a fuzzy defective rate and fuzzy annual demand is presented. For each case, we employ the signed distance, a ranking method for fuzzy numbers, to find the estimate of the total profit per unit time in the fuzzy sense, and then derive the corresponding optimal lot size. Numerical examples are provided to illustrate the results of the proposed models. | [
"fuzzy set",
"imperfect quality",
"inventory",
"signed distance"
] | [
"P",
"P",
"P",
"P"
] |
1&vc3kz | Maximin performance of binary-input channels with uncertain noise distributions | We consider uncertainty classes of noise distributions defined by a bound on the divergence with respect to a nominal noise distribution. The noise that maximizes the minimum error probability for binary-input channels is found. The effect of the reduction in uncertainty brought about by knowledge of the signal-to-noise ratio is also studied. The particular class of Gaussian nominal distributions provides an analysis tool for near-Gaussian channels. Asymptotic behavior of the least favorable noise distribution and resulting error probability are studied in a variety of scenarios, namely: asymptotically small divergence with and without power constraint; asymptotically large divergence with and without power constraint; and asymptotically large signal-to-noise ratio. | [
"least favorable noise",
"detection",
"gaussian error probability",
"hypothesis testing",
"kullback-leibler divergence"
] | [
"P",
"U",
"R",
"U",
"M"
] |
3BfHpY- | Alleviating the problem of local minima in Backpropagation through competitive learning | The backpropagation (BP) algorithm is widely recognized as a powerful tool for training feedforward neural networks (FNNs). However, since the algorithm employs the steepest descent technique to adjust the network weights, it suffers from a slow convergence rate and often produces suboptimal solutions, which are the two major drawbacks of BP. This paper proposes a modified BP algorithm which can remarkably alleviate the problem of local minima confronted by the standard BP (SBP). As one output of the modified training procedure, a bucket of all the candidate weight-matrix solutions found during training is acquired, among which the best solution is chosen competitively based upon performance on a validation dataset. Simulations are conducted on four benchmark classification tasks to compare and evaluate the classification performances and generalization capabilities of the proposed modified BP and SBP. | [
"local minima",
"competitive learning",
"backpropagation (bp)",
"feedforward neural networks (fnns)",
"classification"
] | [
"P",
"P",
"P",
"P",
"P"
] |
4A3Bc2n | Towards categorical models for fairness: fully abstract presheaf semantics of SCCS with finite delay | We present a presheaf model for the observation of infinite as well as finite computations. We give a concrete representation of the presheaf model as a category of generalised synchronisation trees and show that it is coreflective in a category of generalised transition systems, which are a special case of the general transition systems of Hennessy and Stirling. This can be viewed as a first step towards representing fairness in categorical models for concurrency. The open map bisimulation is shown to coincide with extended bisimulation of Hennessy and Stirling, which is essentially fair CTL*-bisimulation. We give a denotational semantics of Milner's SCCS with finite delay in the presheaf model, which differs from previous semantics by giving the meanings of recursion by final coalgebras and meanings of finite delay by initial algebras of the process equations for delay. Finally, we formulate Milner's operational semantics of SCCS with finite delay in terms of generalised transition systems and prove that the presheaf semantics is fully abstract with respect to extended bisimulation. (C) 2002 Published by Elsevier Science B.V. | [
"fairness",
"finite delay",
"concurrency",
"open maps",
"full abstraction"
] | [
"P",
"P",
"P",
"P",
"M"
] |
2mZP9L3 | Determining efficient temperature sets for the simulated tempering method | In statistical physics, the efficiency of tempering approaches strongly depends on ingredients such as the number of replicas R, reliable determination of weight factors, and the set of used temperatures, T_R = {T_1, T_2, ..., T_R}. For the simulated tempering (ST) in particular (useful due to its generality and conceptual simplicity), the latter aspect (closely related to the actual R) may be a key issue in problems displaying metastability and trapping in certain regions of the phase space. To determine T_R's leading to accurate thermodynamics estimates while trying to minimize the simulation computational time, here a fixed exchange frequency scheme is considered for the ST. From the temperature of interest T_1, successive T's are chosen so that the exchange frequency between any adjacent pair T_r and T_{r+1} has the same value f. By varying the f's and analyzing the T_R's through relatively inexpensive tests (e.g., time decay towards the steady regime), an optimal situation in which the simulations visit much faster and more uniformly the relevant portions of the phase space is determined. As illustrations, the proposal is applied to three lattice models, BEG, Bell-Lavis, and Potts, in the hard case of extreme first-order phase transitions, always giving very good results, even for R = 3. Also, comparisons with other protocols (constant entropy and arithmetic progression) to choose the set T_R are undertaken. The fixed exchange frequency method is found to be consistently superior, especially for small R's. Finally, distinct instances where the prescription could be helpful (in second-order transitions and for the parallel tempering approach) are briefly discussed. | [
"simulated tempering",
"strong first-order phase transitions",
"monte carlo methods",
"replica temperatures optimal values"
] | [
"P",
"M",
"M",
"R"
] |
4m:RehH | A numerical method for solving variable coefficient elliptic equation with interfaces | A new 2nd order accurate numerical method on non-body-fitting grids is proposed for solving the variable coefficient elliptic equation in disjoint subdomains Ω separated by interfaces Γ. The variable coefficients, the source term, and hence the solution itself and its derivatives may be discontinuous across the interfaces. Jump conditions in the solution and its co-normal derivative at the interface are prescribed. Instead of smooth, the interfaces are only required to be Lipschitz continuous as submanifolds. A weak formulation is developed, and the existence, uniqueness and regularity of the solutions are studied. The numerical method is derived by discretizing the weak formulation. The method is different from traditional finite element methods. Extensive numerical experiments are presented and show that the method is 2nd order accurate in the solution and 1st order accurate in its gradient in the L∞ norm if the interface is C^2 and the solutions are C^2 on the closures of the subdomains. The method can handle problems where the solutions and/or the interfaces are weaker than C^2. For example, u ∈ H^2(Ω), Γ is Lipschitz continuous, and their singularities coincide; see Example 18 in Section 4. The accuracies of the method under various circumstances are listed in Table 19. | [
"65n30",
"35j25"
] | [
"U",
"U"
] |
3zD8v6E | Attitudes of community pharmacists, university-based pharmacists, and students toward on-line information resources | The study sought to explore the attitudes of community pharmacists, university-based pharmacists, and pharmacy students before and after exposure to computerized systems of on-line information services. A 42-item attitudinal survey was administered to 21 community pharmacists, 7 university clinical pharmacist faculty, and 17 senior pharmacy students, prior to and at the end of a year of access to Grateful Med(R) and BRS Colleague(R). Few significant differences were noted among the participants at baseline. No significant interaction-effect differences for type of participant or system used were found. Participants were generally positive about computers in general, the accuracy of on-line information services, their impact on knowledge and confidence, and their usefulness for pharmacists. | [
"attitudes",
"pharmacists",
"computers",
"drug information"
] | [
"P",
"P",
"P",
"M"
] |
3&n-2Yo | Comparison of several approaches to the linear approximation of the yield condition and application to the robust design of plane frames for the case of uncertainty | Since the yield condition for frame structures is non-linear, piecewise linear approximations are needed in order to apply linear optimization methods. Four approaches are presented and compared. After the theoretical consideration and comparison of the different approximation methods, they are applied to the robust design of an 18-bar frame in case of uncertainty. Here, the less restrictive methods yield the cheapest design, as expected. It will be shown, that the approximation from inside of first level does not cause much higher costs than the other methods. But since its constraints are sufficient in contrast to other approximations, it is recommended. | [
"yield condition",
"plane frame",
"piecewise linear approximation",
"robust optimal design",
"stochastic uncertainty",
"stochastic applied load"
] | [
"P",
"P",
"P",
"R",
"M",
"M"
] |
2eb:zqV | alpha-RST: a generalization of rough set theory | The paper presents a transition from the crisp rough set theory to a fuzzy one, called Alpha Rough Set Theory or, in short, alpha-RST. All basic concepts of rough set theory are extended, i.e., information system, indiscernibility, dependency, reduction, core, definability, approximations and boundary. The resulting theory takes into account fuzzy data and allows the approximation of fuzzy concepts. Besides, the control of knowledge granularity is natural in alpha-RST, which is based on a parameterized indiscernibility relation. alpha-RST is developed to recognize non-deterministic relationships using notions such as alpha-dependency, alpha-reduct and so forth. On the other hand, we introduce a notion of relative dependency as an alternative to the absolute definability presented in rough set theory. The extension alpha-RST leads naturally to the new concept of alpha rough sets, which represents sets with fuzzy non-empty boundaries. (C) 2000 Elsevier Science Inc. All rights reserved. | [
"rough sets",
"fuzzy sets",
"attributes dependency",
"concept approximation"
] | [
"P",
"R",
"M",
"R"
] |
2QxtusY | A Class of Differential Vector Variational Inequalities in Finite Dimensional Spaces | In this paper, we introduce and study a class of differential vector variational inequalities in finite dimensional Euclidean spaces. We establish a relationship between differential vector variational inequalities and differential scalar variational inequalities. Under various conditions, we obtain the existence and linear growth of solutions to the scalar variational inequalities. In particular we prove existence theorems for Carathéodory weak solutions of the differential vector variational inequalities. Furthermore, we give a convergence result on the Euler time-dependent procedure for solving the initial-value differential vector variational inequalities. | [
"differential vector variational inequality",
"existence",
"linear growth",
"carathodory weak solution",
"euler time-stepping procedure"
] | [
"P",
"P",
"P",
"P",
"M"
] |
1EoRppT | Adaptive hypermedia | Adaptive hypermedia is a relatively new direction of research on the crossroads of hypermedia and user modeling. Adaptive hypermedia systems build a model of the goals, preferences and knowledge of each individual user, and use this model throughout the interaction with the user, in order to adapt to the needs of that user. The goal of this paper is to present the state of the art in adaptive hypermedia at the eve of the year 2000, and to highlight some prospects for the future. This paper attempts to serve both the newcomers and the experts in the area of adaptive hypermedia by building on an earlier comprehensive review (Brusilovsky, 1996; Brusilovsky, 1998). | [
"adaptation",
"hypermedia",
"user model",
"hypertext",
"user profile",
"adaptive presentation",
"adaptive navigation support",
"web-based systems"
] | [
"P",
"P",
"P",
"U",
"M",
"R",
"M",
"M"
] |
4WTV6nr | framing design in the third paradigm | This paper develops vocabulary to discuss the phenomena related to the new design paradigm, which considers designing as a situated and constructive activity of meaning making rather than as problem solving. The paper studies how design projects proceed from the fuzzy early phases towards the issues of central relevance to designing. A central concept is framing, and it is elaborated with examples from two case studies. Several aspects of framing are explicated, exploratory, anticipatory and social framing, and related concepts of 'focusing', 'priming', and 'grounding' are explained. The paper concludes that understanding designing as a situated and constructive making of meaning has bearings on how designing needs to be supported. | [
"design framing",
"reflective practice",
"user-centered design",
"user-driven innovation"
] | [
"R",
"U",
"M",
"U"
] |
276V-9- | Interval evaluations in the analytic hierarchy process by possibility analysis | Since a pairwise comparison matrix in the Analytic Hierarchy Process (AHP) is based on human intuition, the given matrix will always include inconsistent elements violating the transitivity property. We propose the Interval AHP, by which interval weights can be obtained. The widths of the estimated interval weights represent inconsistency in the judgment data. Since interval weights can be obtained from inconsistent data, the proposed Interval AHP is more appropriate to human judgment. Assuming crisp values in a pairwise comparison matrix, the interval comparisons including the given crisp comparisons can be obtained by applying the Linear Programming (LP) approach. Using an interval preference relation, the Interval AHP for crisp data can be extended to an approach for interval data, allowing the expression of the uncertainty of human judgment in pairwise comparisons. | [
"interval evaluations",
"possibility analysis",
"ahp"
] | [
"P",
"P",
"P"
] |
ys1A3qh | A model for real-time failure prognosis based on hidden Markov model and belief rule base | As one of the most important aspects of condition-based maintenance (CBM), failure prognosis has attracted increasing attention with the growing demand for higher operational efficiency and safety in industrial systems. Currently there are no effective methods that can predict a hidden failure of a system in real time when there are influences from changes of environmental factors and no accurate mathematical model of the system is available for prognosis, due to its intrinsic complexity and operation in a potentially uncertain environment. Therefore, this paper focuses on developing a new hidden Markov model (HMM) based method which can deal with this problem. Although an accurate model between environmental factors and a failure process is difficult to obtain, some expert knowledge can be collected and represented by a belief rule base (BRB), which is in fact an expert system. As such, combining the HMM with the BRB, a new prognosis model is proposed to predict the hidden failure in real time even when there are influences from changes of environmental factors. In the proposed model, the HMM is used to capture the relationships between the hidden failure and monitored observations of a system. The BRB is used to model the relationships between the environmental factors and the transition probabilities among the hidden states of the system, including the hidden failure, which is the main contribution of this paper. Moreover, a recursive algorithm for online updating of the prognosis model is developed. An experimental case study is examined to demonstrate the implementation and potential applications of the proposed real-time failure prognosis method. | [
"failure prognosis",
"hidden markov model",
"belief rule base",
"environmental factors",
"expert systems"
] | [
"P",
"P",
"P",
"P",
"P"
] |
-KtNmRE | Computing the Volume of a Union of Balls: A Certified Algorithm | Balls and spheres are amongst the simplest 3D modeling primitives, and computing the volume of a union of balls is an elementary problem. Although a number of strategies addressing this problem have been investigated in several communities, we are not aware of any robust algorithm, and present the first such algorithm. Our calculation relies on the decomposition of the volume of the union into convex regions, namely the restrictions of the balls to their regions in the power diagram. Theoretically, we establish a formula for the volume of a restriction, based on Gauss' divergence theorem. The proof being constructive, we develop the associated algorithm. On the implementation side, we carefully analyse the predicates and constructions involved in the volume calculation, and present a certified implementation relying on interval arithmetic. The result is certified in the sense that the exact volume belongs to the interval computed. Experimental results are presented on hand-crafted models illustrating various difficulties, as well as on the 58,898 models found in the tenth of July 2009 release of the Protein Data Bank. | [
"union of balls",
"algorithms",
"volume calculation",
"interval arithmetic",
"design",
"reliability",
"theory",
"computational geometry",
"alpha-shapes",
"medial axis transform",
"structural biology",
"protein modeling",
"macro-molecular models",
"van der waals models",
"certified numerics",
"c plus plus design"
] | [
"P",
"P",
"P",
"P",
"U",
"U",
"U",
"M",
"U",
"U",
"U",
"R",
"M",
"M",
"M",
"U"
] |
-xYzxyM | Simple polynomial multiplication algorithms for exact conditional tests of linearity in a logistic model | The linear logistic model is often employed in the analysis of binary response data. The well-known asymptotic chi-square and likelihood ratio tests are usually used to test the assumption of linearity in such a model. For small, sparse, or skewed data, the asymptotic theory is however dubious and exact conditional chi-square and likelihood ratio tests may provide reliable alternatives. In this article, we propose efficient polynomial multiplication algorithms to compute exact significance levels as well as exact powers of these tests. Two options, namely the cell- and stage-wise approaches, in implementing these algorithms will be discussed. When sample sizes are large, we propose an efficient Monte Carlo method for estimating the exact significance levels and exact powers. Real data are used to demonstrate the performance with an application of the proposed algorithms. | [
"polynomial multiplication algorithm",
"exact significance level",
"dose-response data",
"exact power computation"
] | [
"P",
"P",
"M",
"R"
] |
4zepaV7 | On the definitions of anonymity for ring signatures | This paper studies the relations among several definitions of anonymity for ring signature schemes in the same attack environment. It is shown that one intuitive and two technical definitions we consider are asymptotically equivalent, and the indistinguishability-based technical definition is the strongest, i.e., the most secure when achieved, when the exact reduction cost is taken into account. We then extend our result to the threshold case where a subset of members cooperate to create a signature. The threshold setting makes the notion of anonymity more complex and yields a greater variety of definitions. We explore several notions and observe that a certain relation does not seem to hold, unlike in the simple single-signer case. Nevertheless, we see that an indistinguishability-based definition is the most favorable in the threshold case. We also study the notion of linkability and present a simple scheme that achieves both anonymity and linkability. | [
"anonymity",
"ring signature",
"linkability"
] | [
"P",
"P",
"P"
] |
3KJvLmA | scalable proximity estimation and link prediction in online social networks | Proximity measures quantify the closeness or similarity between nodes in a social network and form the basis of a range of applications in social sciences, business, information technology, computer networks, and cyber security. It is challenging to estimate proximity measures in online social networks due to their massive scale (with millions of users) and dynamic nature (with hundreds of thousands of new nodes and millions of edges added daily). To address this challenge, we develop two novel methods to efficiently and accurately approximate a large family of proximity measures. We also propose a novel incremental update algorithm to enable near real-time proximity estimation in highly dynamic social networks. Evaluation based on a large amount of real data collected in five popular online social networks shows that our methods are accurate and can easily scale to networks with millions of nodes. To demonstrate the practical values of our techniques, we consider a significant application of proximity estimation: link prediction, i.e., predicting which new edges will be added in the near future based on past snapshots of a social network. Our results reveal that (i) the effectiveness of different proximity measures for link prediction varies significantly across different online social networks and depends heavily on the fraction of edges contributed by the highest degree nodes, and (ii) combining multiple proximity measures consistently yields the best link prediction accuracy. | [
"link prediction",
"social network",
"proximity measure",
"matrix factorization",
"embedding",
"sketch"
] | [
"P",
"P",
"P",
"U",
"U",
"U"
] |
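
As a small illustration of proximity-based link prediction, the sketch below ranks non-adjacent node pairs of a toy graph by the classical Adamic-Adar measure; the paper's contribution — scalable approximation and incremental updating of such measures — is not reproduced here.

    import math
    from itertools import combinations

    edges = [("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "e")]
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    def adamic_adar(u, v):
        # Shared neighbours count more when they have low degree.
        return sum(1.0 / math.log(len(adj[w])) for w in adj[u] & adj[v]
                   if len(adj[w]) > 1)

    candidates = [(u, v) for u, v in combinations(adj, 2) if v not in adj[u]]
    ranked = sorted(candidates, key=lambda p: adamic_adar(*p), reverse=True)
    print(ranked[:3])   # most likely new edges under this proximity measure
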
1XY11pz | Applications of regional strain energy in compliant structure design for energy absorption | Topology optimization of regional strain energy is studied in this paper. Unlike the conventional mean compliance formulation, this paper considers two main functions of structure: rigidity and compliance. For normal usages, rigidity is chosen as the design objective. For compliant design, a portion of the structure absorbs energy, while another part maintains the structural integrity. Therefore, we implemented a regional strain energy formulation for topology optimization. Sensitivity to regional strain energy is derived from the adjoint method. Numerical results from the proposed formulation are presented. | [
"compliant structure",
"energy absorption",
"topology optimization"
] | [
"P",
"P",
"P"
] |
3S4FFrA | conversion of control dependence to data dependence | Program analysis methods, especially those which support automatic vectorization, are based on the concept of interstatement dependence where a dependence holds between two statements when one of the statements computes values needed by the other. Powerful program transformation systems that convert sequential programs to a form more suitable for vector or parallel machines have been developed using this concept [AllK 82, KKLW 80]. The dependence analysis in these systems is based on data dependence. In the presence of complex control flow, data dependence is not sufficient to transform programs because of the introduction of control dependences. A control dependence exists between two statements when the execution of one statement can prevent the execution of the other. Control dependences do not fit conveniently into dependence-based program translators. One solution is to convert all control dependences to data dependences by eliminating goto statements and introducing logical variables to control the execution of statements in the program. In this scheme, action statements are converted to IF statements. The variables in the conditional expression of an IF statement can be viewed as inputs to the statement being controlled. The result is that control dependences between statements become explicit data dependences expressed through the definitions and uses of the controlling logical variables. This paper presents a method for systematically converting control dependences to data dependences in this fashion. The algorithms presented here have been implemented in PFC, an experimental vectorizer written at Rice University. | [
"conversation",
"control dependence",
"control",
"dependencies",
"data dependence",
"data",
"program analysis",
"program",
"method",
"support",
"vectorization",
"concept",
"values",
"program transformation",
"transformation",
"systems",
"parallel",
"dependence analysis",
"presence",
"complexity",
"control flow",
"fit",
"translation",
"variability",
"scheme",
"action",
"express",
"definition",
"paper",
"algorithm",
"experimentation"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P"
] |
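
The conversion can be illustrated at toy scale: the branch condition is captured in a logical guard variable, the guarded statement reads that variable, and the control dependence has become a data dependence. A hypothetical three-statement example:

    # Original:             Converted:
    #   if p: goto L1         g1 = not p
    #   S1                    if g1: S1
    #   L1: S2                S2

    def run_converted(p, x):
        g1 = not p          # guard variable carries the branch condition
        if g1:              # S1 now data-depends on g1
            x = x + 1       # S1
        x = x * 2           # S2 executes unconditionally
        return x

    assert run_converted(True, 3) == 6    # S1 skipped
    assert run_converted(False, 3) == 8   # S1 executed
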
-s6:Xo6 | An integration scheme for electromagnetic scattering using plane wave edge elements | Finite element techniques for the simulation of electromagnetic wave propagation are, like all conventional element based approaches for wave problems, limited by the ability of the polynomial basis to capture the sinusoidal nature of the solution. The Partition of Unity Method (PUM) has recently been applied successfully, in finite and boundary element algorithms, to wave propagation. In this paper, we apply the PUM approach to the edge finite elements in the solution of Maxwell's equations. The electric field is expanded in a set of plane waves, the amplitudes of which become the unknowns, allowing each element to span a region containing multiple wavelengths. However, it is well known that, with PUM enrichment, the burden of computation shifts from the solver to the evaluation of oscillatory integrals during matrix assembly. A full electromagnetic scattering problem is not simulated or solved in this paper. This paper is an addition to the work of Ledger and concentrates on efficient methods of evaluating the oscillatory integrals that arise. A semi-analytical scheme of Filon character is presented. | [
"edge elements",
"partition of unity",
"maxwells equations",
"oscillatory integrals"
] | [
"P",
"P",
"P",
"P"
] |
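
A Filon-type rule of the kind referred to integrates the oscillatory kernel exactly against a low-order interpolant of the smooth factor, so accuracy does not degrade as the frequency grows. A minimal sketch for a piecewise-linear interpolant (assuming omega != 0; this is not the paper's actual scheme):

    import cmath, math

    def filon_linear(f, a, b, omega, n):
        iw = 1j * omega
        def antider(x, c0, c1):          # integral of (c0 + c1*x) e^{i w x}
            return cmath.exp(iw * x) * ((c0 + c1 * x) / iw - c1 / iw**2)
        total = 0.0 + 0.0j
        h = (b - a) / n
        for k in range(n):
            x0, x1 = a + k * h, a + (k + 1) * h
            c1 = (f(x1) - f(x0)) / h     # linear interpolant f ~ c0 + c1*x
            c0 = f(x0) - c1 * x0
            total += antider(x1, c0, c1) - antider(x0, c0, c1)
        return total

    # integral of cos(x) e^{i*50*x} over [0, pi]
    print(filon_linear(math.cos, 0.0, math.pi, 50.0, 64))
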
347aqaK | Search-based metamodel matching with structural and syntactic measures | Metamodel matching using search-based software engineering. The use of syntactic measures improves the results of metamodel matching. We compared our approach to four ontology-based approaches. Our results show that our search-based approach was significantly better than state-of-the-art matching tools. | [
"search-based software engineering",
"model matching",
"simulated annealing"
] | [
"P",
"M",
"U"
] |
-6snK1b | A recursion-based broadcast paradigm in wormhole routed networks | A novel broadcast technique for wormhole-routed parallel computers based on recursion is presented in this paper. It works by partitioning the interconnection graph into a number of higher-level subgraphs. Then, we identify the Transmission SubGraph (TSG) in each subgraph. Both the higher-level subgraphs and the TSGs are recursively defined, i.e., we split each level i subgraph into several level i + 1 subgraphs and identify level i + 1 TSGs accordingly. We first split and scatter the source message into the TSG of the original graph. Next, in each recursive round, message transmissions are from lower-level TSGs to higher-level TSGs, and all transmissions at the same level happen concurrently. The algorithm proceeds recursively from lower-level subgraphs to higher-level subgraphs until each highest-level subgraph (a single node) gets the complete message. We have applied this general paradigm to a number of topologies including two- or higher-dimensional meshes/tori and hypercubes. Our results show considerable improvements over all other algorithms for a wide range of message sizes under both one-port and all-port models. | [
"wormhole routing",
"mesh",
"torus",
"hypercube",
"massive parallel computer",
"one-to-all broadcast",
"parallel processing"
] | [
"P",
"P",
"P",
"P",
"M",
"M",
"M"
] |
4ir2ZyZ | Stress analysis of three-dimensional contact problems using the boundary element method | This paper presents a technique based on the boundary element method[1] to analyse three-dimensional contact problems. The formulation is implemented for the frictionless and infinite friction conditions. Following a review of the basic nature of contact problems, the analytical basis of the direct formulation of the boundary element method is described. The numerical implementation employs linear triangular elements for the representation of the boundary and variables of the bodies in contact. Opposite nodal points in similar element pairs are defined on the two surfaces in the area which are expected to come into contact under the increasing load. The use of appropriate contact conditions enables the integral equations for the two bodies to be coupled together. Following an iteration procedure, the size of the contact zone is determined by finding a boundary solution compatible with the contact conditions. Different examples have been analysed in order to verify the applicability of the proposed method to various contact situations. The results have been compared with those obtained using the finite element method in conjunction with the ABAQUS[2] and IDEAS[3] packages which are shown to be in good agreement. | [
"stress analysis",
"three-dimensional contact problems",
"boundary element method"
] | [
"P",
"P",
"P"
] |
3DLJXzC | co-evolving application code and design models by exploiting meta-data | Evolvability and adaptability are intrinsic properties of today's software applications. Unfortunately, the urgency of evolving/adapting a system often drives the developer to directly modify the application code, neglecting to update its design models. Moreover, most development environments support code refactoring without supporting the refactoring of the design information. Refactoring, evolution and, in general, every change to the code should be reflected in the design models, so that these models consistently represent the application and can be used as documentation in subsequent maintenance steps. Code evolution should therefore involve not only the application code but also its design models. Unfortunately, co-evolving the application code and its design is hard to carry out automatically, since there is an evident and notorious gap between these two representations. We propose a new approach to code evolution (in particular to code refactoring) that supports the automatic co-evolution of the design models. The approach relies on a set of predefined meta-data that the developer should use to annotate the application code and to highlight the refactoring performed on the code. These meta-data are then retrieved through reflection and used to automatically and coherently update the application design models. | [
"meta-data",
"refactoring",
"reflection",
"co-evolution",
"software evolution"
] | [
"P",
"P",
"P",
"P",
"R"
] |
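
In a reflective language the mechanism can be sketched directly: an annotation attaches refactoring meta-data to a code element, and a separate pass retrieves it by reflection to drive the design-model update. The decorator name and its fields below are hypothetical, not the paper's actual meta-data set.

    import inspect

    def refactored(kind, old_name):
        def mark(obj):
            obj.__refactoring__ = {"kind": kind, "old_name": old_name}
            return obj
        return mark

    @refactored(kind="rename_method", old_name="calc")
    def compute_total(items):
        return sum(items)

    # "Model co-evolution" pass: scan for annotated elements by reflection.
    for name, obj in list(globals().items()):
        meta = getattr(obj, "__refactoring__", None)
        if meta and inspect.isfunction(obj):
            print(f"update model: {meta['old_name']} -> {name} ({meta['kind']})")
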
3ysG6Xa | Differential Effects of Donepezil on Methamphetamine and Cocaine Dependencies | Donepezil, a choline esterase inhibitor, has been widely used as a medicine for Alzheimer's disease. Recently, a study showed that donepezil inhibited addictive behaviors induced by cocaine, including cocaine-conditioned place preference (CPP) and locomotor sensitization to cocaine. In the present study, we investigated the effects of donepezil on methamphetamine (METH)-induced behavioral changes in mice. In counterbalanced CPP tests, the intraperitoneal (i.p.) administration of 3 mg/kg donepezil prior to 2 mg/kg METH i.p. failed to inhibit METH CPP, whereas pretreatment with 3 mg/kg donepezil abolished the CPP for cocaine (10 mg/kg, i.p.). Similarly, in locomotor sensitization experiments, i.p. administration of 1 mg/kg donepezil prior to 2 mg/kg METH i.p. failed to inhibit locomotor sensitivity to METH, whereas pretreatment with 1 mg/kg donepezil significantly inhibited locomotor sensitivity to cocaine (10 mg/kg, i.p.). These results suggest that donepezil may be a useful tool for treating cocaine dependence but not for treating METH dependence. The differences in the donepezil effects on addictive behaviors induced by METH and cocaine might be due to differences in the involvement of acetylcholine in the mechanisms of METH and cocaine dependencies. | [
"donepezil",
"methamphetamine",
"cocaine",
"sensitization",
"mice",
"conditioned place preference"
] | [
"P",
"P",
"P",
"P",
"P",
"M"
] |
4-EAhsp | GraphClus, a MATLAB program for cluster analysis using graph theory | Cluster analysis is used in numerous scientific disciplines. A method of cluster analysis based on graph theory is discussed and a MATLAB (TM) code for its implementation is presented. The algorithm is based on the number of variables that are similar between samples. By changing the similarity criterion in a stepwise fashion, a hierarchical group structure develops, and can be displayed by a dendrogram. Three indexes describe the homogeneity of a given variable in a group, the heterogeneity of that variable between two groups, and the usefulness of that variable in distinguishing two groups. The algorithm is applied to both a synthetic dataset and a set of trace element analyses of lavas from Mount Etna in order to compare GraphClus to other cluster analysis algorithms. (C) 2008 Elsevier Ltd. All rights reserved. | [
"cluster analysis",
"graph theory",
"classification"
] | [
"P",
"P",
"U"
] |
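
The graph-theoretic clustering idea — link samples that agree on at least a given number of variables, take connected components as clusters, and lower the threshold stepwise to grow a hierarchy — fits in a few lines; the data below are made up.

    samples = {
        "s1": [1, 0, 1, 1], "s2": [1, 0, 1, 0],
        "s3": [0, 1, 0, 0], "s4": [0, 1, 1, 0],
    }

    def similar(a, b):
        return sum(x == y for x, y in zip(a, b))

    def clusters(threshold):
        names = list(samples)
        adj = {n: {m for m in names if m != n
                   and similar(samples[n], samples[m]) >= threshold}
               for n in names}
        seen, comps = set(), []
        for n in names:
            if n in seen:
                continue
            stack, comp = [n], set()
            while stack:                 # depth-first search for one component
                v = stack.pop()
                if v not in comp:
                    comp.add(v)
                    stack.extend(adj[v] - comp)
            seen |= comp
            comps.append(sorted(comp))
        return comps

    for t in (4, 3, 2):                  # stepwise similarity criterion
        print(t, clusters(t))            # hierarchy emerges as t decreases
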
4QMBuVN | A Gaussian function model for simulation of complex environmental sensing | Sensors can be used to sense not only simple behavior but also complex ones. Previous work has demonstrated how agent-based modeling can be used to model sensing of complex behavior in Complex Environments. | [
"gaussian function",
"environmental sensing",
"complex adaptive system",
"mathematical model"
] | [
"P",
"P",
"M",
"M"
] |
2DyhaPA | An integer programming-based search technique for error-prone structures of LDPC codes | In this paper, an efficient, general framework is presented for finding common, devastating error-prone structures (EPS) of any finite-length low-density parity-check (LDPC) code. The smallest stopping set for the binary erasure channel (BEC), the smallest fully absorbing set, the smallest absorbing set, and the smallest elementary trapping set for the binary symmetric channel (BSC) are found and the dominant EPS are enumerated. The method involves integer programming optimization techniques, which guarantees that the results are provably optimal. | [
"integer programming",
"stopping sets",
"absorbing sets",
"trapping sets"
] | [
"P",
"P",
"P",
"P"
] |
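
The stopping-set search, for instance, is naturally an integer program: binary variables select variable nodes, and each check node must be touched either zero or at least two times. A sketch using the PuLP modelling library (assumed installed) on a hypothetical parity-check matrix:

    import pulp

    H = [[1, 1, 0, 1, 0],      # rows = checks, columns = variable nodes
         [0, 1, 1, 0, 1],
         [1, 0, 1, 1, 0]]

    n = len(H[0])
    prob = pulp.LpProblem("smallest_stopping_set", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n)]
    prob += pulp.lpSum(x)                  # objective: stopping-set size
    prob += pulp.lpSum(x) >= 1             # rule out the empty set
    for c, row in enumerate(H):
        nbrs = [i for i in range(n) if row[i]]
        y = pulp.LpVariable(f"y{c}", cat="Binary")   # check c is touched
        prob += pulp.lpSum(x[i] for i in nbrs) >= 2 * y
        prob += pulp.lpSum(x[i] for i in nbrs) <= len(nbrs) * y

    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    print("smallest stopping set:", [i for i in range(n) if x[i].value() == 1])
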
55Ckc5g | Chemosensitization of tumors by resveratrol | Because tumors develop resistance to chemotherapeutic agents, the cancer research community continues to search for effective chemosensitizers. One promising possibility is to use dietary agents that sensitize tumors to the chemotherapeutics. In this review, we discuss that the use of resveratrol can sensitize tumor cells to chemotherapeutic agents. The tumors shown to be sensitized by resveratrol include lung carcinoma, acute myeloid leukemia, promyelocytic leukemia, multiple myeloma, prostate cancer, oral epidermoid carcinoma, and pancreatic cancer. The chemotherapeutic agents include vincristine, adriamycin, paclitaxel, doxorubicin, cisplatin, gefitinib, 5-fluorouracil, velcade, and gemcitabine. The chemosensitization of tumor cells by resveratrol appears to be mediated through its ability to modulate multiple cell-signaling molecules, including drug transporters, cell survival proteins, cell proliferative proteins, and members of the NF-κB and STAT3 signaling pathways. Interestingly, this nutraceutical has also been reported to suppress apoptosis induced by paclitaxel, vincristine, and daunorubicin in some tumor cells. The potential mechanisms underlying this dual effect are discussed. Overall, studies suggest that resveratrol can be used to sensitize tumors to standard cancer chemotherapeutics. | [
"chemosensitization",
"tumor",
"resveratrol",
"apoptosis",
"cancer therapy",
"chemoresistance"
] | [
"P",
"P",
"P",
"P",
"M",
"U"
] |
4nLEENV | Towards Scalable Summarization of Consumer Videos Via Sparse Dictionary Selection | The rapid growth of consumer videos requires an effective and efficient content summarization method to provide a user-friendly way to manage and browse the huge amount of video data. Compared with most previous methods that focus on sports and news videos, the summarization of personal videos is more challenging because of its unconstrained content and the lack of any pre-imposed video structures. We formulate video summarization as a novel dictionary selection problem using sparsity consistency, where a dictionary of key frames is selected such that the original video can be best reconstructed from this representative dictionary. An efficient global optimization algorithm is introduced to solve the dictionary selection model with a convergence rate of O(1/K^2) (where K is the iteration counter), in contrast to the O(1/sqrt(K)) rate of traditional sub-gradient descent methods. Our method provides a scalable solution for both key frame extraction and video skim generation, because one can select an arbitrary number of key frames to represent the original videos. Experiments on a human labeled benchmark dataset and comparisons to the state-of-the-art methods demonstrate the advantages of our algorithm. | [
"video summarization",
"key frame",
"video skim",
"group sparse",
"lasso",
"scene analysis",
"video analysis"
] | [
"P",
"P",
"P",
"M",
"U",
"U",
"M"
] |
Cp2kePz | Sufficient completeness verification for conditional and constrained TRS | We present a procedure for checking sufficient completeness of conditional and constrained term rewriting systems containing axioms for constructors which may be constrained (by e.g. equalities, disequalities, ordering, membership, ...). Such axioms allow one to specify complex data structures like e.g. sets, sorted lists or powerlists. Our approach is integrated into a framework for inductive theorem proving based on tree grammars with constraints, a formalism which permits an exact representation of languages of ground constructor terms in normal form. The procedure is presented by an inference system which is shown sound and complete. A precondition of one inference of this system refers to an (undecidable) property called strong ground reducibility, which is discharged to the above inductive theorem proving system. We have successfully applied our method to several examples, yielding readable proofs and, in case of a negative answer, a counter-example suggesting how to complete the specification. Moreover, we show that it is a decision procedure when the TRS is unconditional but constrained, for an expressive class of constrained constructor axioms. (C) 2011 Elsevier B.V. All rights reserved. | [
"sufficient completeness",
"conditional and constrained term rewriting",
"tree grammars",
"narrowing"
] | [
"P",
"P",
"P",
"U"
] |
8PwUjqf | Scheduling Parallel Programs by Work Stealing with Private Deques | Work stealing has proven to be an effective method for scheduling parallel programs on multicore computers. To achieve high performance, work stealing distributes tasks between concurrent queues, called deques, which are assigned to each processor. Each processor operates on its deque locally except when performing load balancing via steals. Unfortunately, concurrent deques suffer from two limitations: 1) local deque operations require expensive memory fences in modern weak-memory architectures, 2) they can be very difficult to extend to support various optimizations and the flexible forms of task distribution strategies needed by many applications, e.g., those that do not fit nicely into the divide-and-conquer, nested data parallel paradigm. For these reasons, there has been a lot of recent interest in implementations of work stealing with non-concurrent deques, where deques remain entirely private to each processor and load balancing is performed via message passing. Private deques eliminate the need for memory fences from local operations and enable the design and implementation of efficient techniques for reducing task-creation overheads and improving task distribution. These advantages, however, come at the cost of communication. It is not known whether work stealing with private deques enjoys the theoretical guarantees of concurrent deques and whether it can be effective in practice. In this paper, we propose two work-stealing algorithms with private deques and prove that the algorithms guarantee theoretical bounds similar to those of work stealing with concurrent deques. For the analysis, we use a probabilistic model and consider a new parameter, the branching depth of the computation. We present an implementation of the algorithm as a C++ library and show that it compares well to Cilk on a range of benchmarks. Since our approach relies on private deques, it enables implementing flexible task creation and distribution strategies. As a specific example, we show how to implement task coalescing and steal-half strategies, which can be important in fine-grain, non-divide-and-conquer algorithms such as graph algorithms, and apply them to the depth-first-search problem. | [
"work stealing",
"nested parallelism",
"dynamic load balancing"
] | [
"P",
"R",
"M"
] |
43DYDVx | Multilevel Huffman coding: An efficient test-data compression method for IP cores | A new test-data compression method suitable for cores of unknown structure is introduced in this paper. The proposed method encodes the test data provided by the core vendor using a new, very effective compression scheme based on multilevel Huffman coding. Each Huffman codeword corresponds to three different kinds of information, and thus, significant compression improvements compared to the already known techniques are achieved. A simple architecture is proposed for decoding the compressed data on chip. Its hardware overhead is very low and comparable to that of the most efficient methods in the literature. Moreover, the major part of the decompressor can be shared among different cores, which reduces the hardware overhead of the proposed architecture considerably. Additionally, the proposed technique offers increased probability of detection of unmodeled faults since the majority of the unknown values of the test sets are replaced by pseudorandom data generated by a linear feedback shift register. | [
"test-data compression",
"embedded testing techniques",
"huffman encoding",
"intellectual property (ip) cores",
"linear feedback shift registers (lfsrs)"
] | [
"P",
"M",
"R",
"M",
"M"
] |
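
For reference, a plain single-level Huffman coder over fixed-size test-data blocks is shown below; the paper's multilevel scheme, in which one codeword carries three kinds of information, is not attempted here.

    import heapq
    from collections import Counter

    def huffman_code(blocks):
        # Heap entries: (frequency, tie-breaker, {symbol: codeword-so-far}).
        heap = [(freq, i, {sym: ""}) for i, (sym, freq)
                in enumerate(Counter(blocks).items())]
        heapq.heapify(heap)
        nxt = len(heap)
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)
            f2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + w for s, w in c1.items()}
            merged.update({s: "1" + w for s, w in c2.items()})
            heapq.heappush(heap, (f1 + f2, nxt, merged))
            nxt += 1
        return heap[0][2]

    test_data = "0010 0010 1111 0010 0000 1111 0010 0000".split()
    code = huffman_code(test_data)
    encoded = "".join(code[b] for b in test_data)
    print(code, len(encoded), "bits vs", 4 * len(test_data), "uncompressed")
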
2pyiTp8 | Variable selection in regression models using nonstandard optimisation of information criteria | The question of variable selection in a regression model is a major open research topic in econometrics. Traditionally two broad classes of methods have been used. One is sequential testing and the other is information criteria. The advent of large datasets used by institutions such as central banks has exacerbated this model selection problem. A solution in the context of information criteria is provided in this paper. The solution rests on the judicious selection of a subset of models for consideration using nonstandard optimisation algorithms for information criterion minimisation. In particular, simulated annealing and genetic algorithms are considered. Both a Monte Carlo study and an empirical forecasting application to UK CPI inflation suggest that the proposed methods are worthy of further consideration. | [
"information criteria",
"model selection",
"simulated annealing",
"genetic algorithms",
"forecasting",
"inflation"
] | [
"P",
"P",
"P",
"P",
"P",
"P"
] |
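
A minimal version of the proposed search is easy to sketch: simulated annealing over variable-subset masks with BIC as the energy. The data below are synthetic and the cooling schedule is arbitrary, not the paper's tuned settings.

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 200, 10
    X = rng.normal(size=(n, p))
    y = X[:, 0] - 2 * X[:, 3] + rng.normal(size=n)   # true model: vars 0, 3

    def bic(mask):
        k = int(mask.sum())
        if k == 0:
            rss = float(y @ y)
        else:
            beta, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
            resid = y - X[:, mask] @ beta
            rss = float(resid @ resid)
        return n * np.log(rss / n) + k * np.log(n)

    mask = rng.random(p) < 0.5
    best, best_bic, temp = mask.copy(), bic(mask), 5.0
    for step in range(2000):
        cand = mask.copy()
        cand[rng.integers(p)] ^= True                # flip one variable in/out
        delta = bic(cand) - bic(mask)
        if delta < 0 or rng.random() < np.exp(-delta / temp):
            mask = cand
            if bic(mask) < best_bic:
                best, best_bic = mask.copy(), bic(mask)
        temp *= 0.995                                # geometric cooling
    print("selected variables:", np.flatnonzero(best))
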
3YVSBLa | Highly nonlinear photonic crystal fiber with ultrahigh birefringence using a nano-scale slot core | A new type of slot photonic crystal fiber is proposed. An ultrahigh nonlinear coefficient up to 3.5739 × 10^4 W^-1 km^-1 can be achieved for the quasi-TM mode. The modal birefringence at 1.55 μm is up to 0.5015. The proposed PCF is suitable for all-optical signal processing. | [
"nonlinearity",
"photonic crystal fiber",
"birefringence",
"slot core",
"chromatic dispersion"
] | [
"P",
"P",
"P",
"P",
"U"
] |
2YVBV-f | what will system level design be when it grows up? | We have seen a growing new interest in Electronic System Level (ESL) architectures, design methods, tools and implementation fabrics in the last few years. But the picture of which types of, and approaches to, building embedded systems will become the most widely-accepted norms in the future remains fuzzy at best. Everyone wants to know where systems and system design are going "when it grows up", if it ever "grows up". Some of the key questions that need to be answered include which applications will be key system drivers, what SW & HW architectures will suit best, how programmable and configurable will they be, will systems designers need to deal with physical implementation issues or will that be hidden behind fabric abstractions and programming models, and what will those abstractions and models be? Moreover, will these abstractions stabilize and still be useful as the underlying technology keeps developing at high speed? This panel consists of proponents of a number of alternative visions for where we will end up, and how we will get there. | [
"process variability",
"system-level compensation",
"parametric yield"
] | [
"U",
"U",
"U"
] |
4qLAcei | The effectiveness of bootstrap methods in evaluating skewed auditing populations: A simulation study | This article describes a comparison among four bootstrap methods: the percentile, reflective, bootstrap-t, and variance stabilized bootstrap-t using a simple new stabilization procedure. The four methods are employed in constructing upper confidence bounds for the mean error in a wide variety of audit populations. The simulation results indicate that the variance stabilized bootstrap-t bound is to be preferred. It exhibits reliable coverage while maintaining reasonable tightness. | [
"confidence bounds",
"dollar unit sampling",
"t-pivot"
] | [
"P",
"U",
"U"
] |
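
The bootstrap-t upper bound at the heart of the comparison can be sketched as follows; the variance-stabilisation step that the article recommends is omitted, and the skewed audit-style errors are simulated.

    import numpy as np

    rng = np.random.default_rng(1)
    # Mostly-zero, heavily skewed error amounts, as in audit populations.
    errors = rng.exponential(scale=50.0, size=120) * (rng.random(120) < 0.15)

    def se(x):
        return x.std(ddof=1) / np.sqrt(len(x))

    B, tstats = 4000, []
    for _ in range(B):
        b = rng.choice(errors, size=len(errors), replace=True)
        if b.std(ddof=1) > 0:            # skip degenerate all-zero resamples
            tstats.append((b.mean() - errors.mean()) / se(b))

    alpha = 0.05
    t_lo = np.quantile(tstats, alpha)    # lower quantile gives the upper bound
    upper = errors.mean() - t_lo * se(errors)
    print(f"95% bootstrap-t upper bound for mean error: {upper:.2f}")
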
4PfVZ9: | Evaluation of arctic multibeam sonar data quality using nadir crossover error analysis and compilation of a full-resolution data product | Characterize uncertainty in multi-source multibeam data sets. Highest spatial resolution compilation for the Canada Basin and Chukchi Borderland. Fully resolvable pdf for interpretation of Arctic seafloor morphology. | [
"multibeam",
"canada basin",
"chukchi",
"arctic ocean",
"crossover analysis",
"ecs"
] | [
"P",
"P",
"P",
"M",
"R",
"U"
] |
4VbM69o | Software Trace Cache for commercial applications | In this paper we address the important problem of instruction fetch for future wide-issue superscalar processors. Our approach focuses on understanding the interaction between software and hardware techniques targeting an increase in the instruction fetch bandwidth. That is the objective, for instance, of the Hardware Trace Cache (HTC). We design a profile-based code reordering technique which targets a maximization of the sequentiality of instructions, while still trying to minimize instruction cache misses. We call our software approach the Software Trace Cache (STC). We evaluate our software approach, and then compare it with the HTC and the combination of both techniques. Our results on PostgreSQL show that for large codes with few loops and deterministic execution sequences the STC offers better results than an HTC. Also, both the software and hardware approaches combine well to obtain improved results. | [
"software trace cache",
"instruction fetch",
"code layout"
] | [
"P",
"P",
"M"
] |
1zz6wSm | Goal state optimization algorithm considering computational resource constraints and uncertainty in task execution time | A search methodology with goal state optimization considering computational resource constraints is proposed. The combination of an extended graph search methodology and parallelization of task execution and online planning makes it possible to solve the problem. The uncertainty of the task execution time is also considered. The problem can be solved by utilizing a random-based and/or a greedy-based graph-searching methodology. The proposed method is evaluated using a rearrangement problem of 20 movable objects with uncertainty in the task execution time, and the effectiveness is shown with simulation results. | [
"resource constraints",
"graph searching",
"robot motion planning",
"parallelization of action and plan",
"rearrangement planning"
] | [
"P",
"P",
"M",
"M",
"R"
] |
3-cvVp: | A 270-MHz CMOS quadrature modulator for a GSM transmitter | This paper describes a 270-MHz CMOS quadrature modulator (QMOD) for a global system for mobile communications (GSM) transmitter. The QMOD consists of two attenuators and two doubly-balanced modulators (DBMs) and is fabricated using a 0.35-μm CMOS process. A carrier leakage level of -35.7 dBc and an image rejection level of -45.1 dBc are achieved. Its total chip area is 880 μm × 550 μm and it consumes 1.0 mA from a 3.0 V power supply. | [
"cmos gsm transmitter qmod"
] | [
"R"
] |
-C5PZ7i | Mining multi-tag association for image tagging | Automatic media tagging plays a critical role in modern tag-based media retrieval systems. Existing tagging schemes mostly perform tag assignment based on community contributed media resources, where the tags are provided by users interactively. However, such social resources usually contain dirty and incomplete tags, which severely limit the performance of these tagging methods. In this paper, we propose a novel automatic image tagging method aiming to automatically discover more complete tags associated with information importance for test images. Given an image dataset, all the near-duplicate clusters are discovered. For each near-duplicate cluster, all the tags occurring in the cluster form the cluster's "document". Given a test image, we firstly initialize the candidate tag set from its near-duplicate cluster's document. The candidate tag set is then expanded by considering the implicit multi-tag associations mined from all the clusters' documents, where each cluster's document is regarded as a transaction. To further reduce noisy tags, a visual relevance score is also computed for each candidate tag to the test image based on a new tag model. Tags with very low scores can be removed from the final tag set. Extensive experiments conducted on a real-world web image dataset-NUS-WIDE, demonstrate the promising effectiveness of our approach. | [
"image tagging",
"tag completion",
"tag denoising",
"weighted association rule mining"
] | [
"P",
"R",
"M",
"M"
] |
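
The multi-tag association step can be sketched as simple co-occurrence mining over the clusters' tag documents: treat each document as a transaction and expand the candidate set with tags of high association confidence. The transactions and threshold below are hypothetical.

    from collections import Counter
    from itertools import combinations

    transactions = [
        {"beach", "sea", "sunset"}, {"beach", "sea", "sand"},
        {"sea", "sunset", "sky"},   {"beach", "sand"},
    ]

    pair_count, item_count = Counter(), Counter()
    for t in transactions:
        item_count.update(t)
        pair_count.update(frozenset(p) for p in combinations(sorted(t), 2))

    def confidence(a, b):                # conf(a -> b)
        return pair_count[frozenset((a, b))] / item_count[a]

    def expand(tags, min_conf=0.5):
        extra = {b for a in tags for b in item_count
                 if b not in tags and confidence(a, b) >= min_conf}
        return tags | extra

    print(expand({"beach"}))   # pulls in strongly associated tags like "sea"
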
wQUJT2i | Improve the performance of co-training by committee with refinement of class probability estimations | Semi-supervised learning is a popular machine learning technique where only a small number of labeled examples are available and a large pool of unlabeled examples can be obtained easily. In co-training by committee, a paradigm of semi-supervised learning, it is necessary to pick out a fixed number of most confident examples according to the ranking of class probability values at each iteration. Unfortunately, the class probability values may repeat, which results in the problem that some unlabeled instances share the same probability and will be picked out randomly. This brings a negative effect on the improvement of the performance of classifiers. In this paper, we propose a simple method to deal with this problem under the intuition that different probabilities are crucial. The distance metric between unlabeled instances and labeled instances can be combined with the probabilities of class membership of committee. Two distance metrics are considered to assign each unlabeled example a unique probability value. In order to prove that our method can get higher-quality examples and reduce the introduction of noise, a data editing technique is used to compare with our method. Experimental results verify the effectiveness of our method and the data editing technique, and also confirm that the method for the first distance metric is generally better than the data editing technique. | [
"co-training",
"class probability",
"semi-supervised learning",
"distance metric",
"data editing",
"ensemble learning"
] | [
"P",
"P",
"P",
"P",
"P",
"M"
] |
1f1rjbp | A unified RANS-LES model: Computational development, accuracy and cost | Large eddy simulation (LES) is computationally extremely expensive for the investigation of wall-bounded turbulent flows at high Reynolds numbers. A way to reduce the computational cost of LES by orders of magnitude is to combine LES equations with Reynolds-averaged Navier-Stokes (RANS) equations used in the near-wall region. A large variety of such hybrid RANS-LES methods are currently in use such that there is the question of which hybrid RANS-LES method represents the optimal approach. The properties of an optimal hybrid RANS-LES model are formulated here by taking reference to fundamental properties of fluid flow equations. It is shown that unified RANS-LES models derived from an underlying stochastic turbulence model have the properties of optimal hybrid RANS-LES models. The rest of the paper is organized in two parts. First, a priori and a posteriori analyses of channel flow data are used to find the optimal computational formulation of the theoretically derived unified RANS-LES model and to show that this computational model, which is referred to as linear unified model (LUM), does also have all the properties of an optimal hybrid RANS-LES model. Second, a posteriori analyses of channel flow data are used to study the accuracy and cost features of the LUM. The following conclusions are obtained. (i) Compared to RANS, which require evidence for their predictions, the LUM has the significant advantage that the quality of predictions is relatively independent of the RANS model applied. (ii) Compared to LES, the significant advantage of the LUM is a cost reduction of high-Reynolds number simulations by a factor of 0.07 Re^0.46. For coarse grids, the LUM has a significant accuracy advantage over corresponding LES. (iii) Compared to other usually applied hybrid RANS-LES models, it is shown that the LUM provides significantly improved predictions. | [
"unified ransles models",
"rans",
"les",
"stochastic turbulence model",
"channel flow application"
] | [
"P",
"P",
"P",
"P",
"M"
] |
-tBF7Z8 | On Kelly networks with shuffling | We consider Kelly networks with shuffling of customers within each queue. Specifically, each arrival, departure or movement of a customer from one queue to another triggers a shuffle of the other customers at each queue. The shuffle distribution may depend on the network state and on the customer that triggers the shuffle. We prove that the stationary distribution of the network state remains the same as without shuffling. In particular, Kelly networks with shuffling have the product form. Moreover, the insensitivity property is preserved for symmetric queues. | [
"shuffling",
"product form",
"insensitivity",
"symmetric queues"
] | [
"P",
"P",
"P",
"P"
] |
-xAMmx- | log-based receiver-reliable multicast for distributed interactive simulation | Reliable multicast communication is important in large-scale distributed applications. For example, reliable multicast is used to transmit terrain and environmental updates in distributed simulations. To date, proposed protocols have not supported these applications' requirements, which include wide-area data distribution, low-latency packet loss detection and recovery, and minimal data and management overhead within fine-grained multicast groups, each containing a single data source. In this paper, we introduce the notion of Log-Based Receiver-reliable Multicast (LBRM) communication, and we describe and evaluate a collection of log-based receiver-reliable multicast optimizations that provide an efficient, scalable protocol for high-performance simulation applications. We argue that these techniques provide value to a broader range of applications and that the receiver-reliable model is an appropriate one for communication in general. | [
"multicast",
"distributed",
"interaction",
"simulation",
"reliability",
"communication",
"large-scale",
"distributed application",
"applications",
"examples",
"update",
"distributed simulation",
"requirements",
"data distributed",
"data",
"detection",
"recovery",
"minimal",
"management",
"group",
"paper",
"evaluation",
"value",
"collect",
"optimality",
"efficiency",
"scalability",
"high-performance",
"model",
"general",
"log",
"latency",
"packet-loss"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"U",
"U",
"U"
] |
4Z4gEhn | design of relational views over network schemas | An algorithm is presented for designing relational views over network schemas to: (1) support general query and update capability, (2) preserve the information content of the data base and (3) provide independence from its physical organization. The proposed solution is applicable to many existing CODASYL databases without data or schema conversion. The particular declarations of a CODASYL schema which supply sources of logical data definition are first identified. Then the view design algorithm is derived on the basis of a formal analysis of the semantic constraints established by these declarations. A new form of data structure diagram is also introduced to visualize these constraints. | [
"design",
"relation",
"views",
"network",
"schema",
"algorithm",
"support",
"general",
"query",
"update",
"informal",
"contention",
"data base",
"data",
"physical",
"organization",
"database",
"conversation",
"definition",
"formalism",
"analysis",
"semantic",
"constraint",
"data structures",
"diagrams",
"visualization"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P"
] |
4Mjoit2 | On the coverings by tolerance classes | A tolerance is a reflexive and symmetric, but not necessarily transitive, binary relation. Contrary to what happens with equivalence relations, when dealing with tolerances one must distinguish between blocks (maximal subsets where the tolerance is a total relation) and classes (the class of an element is the set of those elements tolerable with it). Both blocks and classes of a tolerance on a set define coverings of this set, but not every covering of a set is defined in this way. The characterization of those coverings that are families of blocks of some tolerance has been known for more than a decade now. In this paper we give a characterization of those coverings of a finite set that are families of classes of some tolerance. | [
"tolerance",
"class",
"block",
"similarity relation",
"neighborhood"
] | [
"P",
"P",
"P",
"M",
"U"
] |
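
The class/block distinction is easy to demonstrate computationally: the class of x collects everything tolerable with x, while blocks are maximal cliques of the tolerance graph. A brute-force check on a hypothetical five-element tolerance:

    from itertools import combinations

    elems = range(5)
    pairs = {(0, 1), (1, 2), (2, 3), (0, 2), (3, 4)}      # hypothetical
    def tol(a, b):                       # reflexive, symmetric, not transitive
        return a == b or (a, b) in pairs or (b, a) in pairs

    classes = {a: {b for b in elems if tol(a, b)} for a in elems}

    def total(s):                        # tolerance is total on subset s
        return all(tol(a, b) for a, b in combinations(s, 2))

    cliques = [set(s) for r in range(1, 6) for s in combinations(elems, r)
               if total(s)]
    blocks = [s for s in cliques if not any(s < t for t in cliques)]

    print("classes:", classes)   # class of x = everything tolerable with x
    print("blocks:", blocks)     # maximal, so generally smaller than classes
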
-raY57D | Boundary conditions control for a shallow-water model | A variational data assimilation technique was used to estimate optimal discretization of interpolation operators and derivatives in the nodes adjacent to the rigid boundary. Assimilation of artificially generated observational data in the shallow-water model in a square box and assimilation of real observations in the model of the Black sea are discussed. It is shown in both experiments that controlling the discretization of operators near a rigid boundary can bring the model solution closer to observations, both within the assimilation window and beyond it. This type of control also makes it possible to improve the climatic variability of the model. Copyright (c) 2011 John Wiley & Sons, Ltd. | [
"boundary conditions",
"variational data assimilation",
"shallow water model",
"black sea model"
] | [
"P",
"P",
"M",
"R"
] |
-7YjLK- | A simple local smoothing scheme in strongly singular boundary integral representation of potential gradient | A new approach for computation of the potential gradient at and near the boundary is introduced. A strongly singular boundary integral representation of potential gradient, whose integral density is the potential gradient, is derived and analysed. Applying the concept of the osculating circle, a local smoothing procedure which computes a continuous approximation of potential gradient from the results of a 2D Boundary Element Method (BEM) analysis using linear elements is proposed and evaluated. This approximation is used in the integral representation derived as an integral density which fulfills the continuity requirements. Numerical experiments demonstrate, for quasiuniform meshes, an O(h^2) accuracy of potential gradient computed by both the local smoothing procedure on smooth parts of the boundary and by the integral representation on smooth boundary parts and near smooth boundary parts for points inside the domain. A consequence of the latter result is that no significant increase in the error appears near the boundary, the boundary layer effect thus being eliminated in this approach. | [
"boundary element method",
"potential gradient computation",
"boundary layer effect",
"potential theory",
"superconvergence"
] | [
"P",
"P",
"P",
"M",
"U"
] |
2s1Z-J5 | functional modularity for genetic programming | In this paper we introduce, formalize, and experimentally validate a novel concept of functional modularity for Genetic Programming (GP). We rely on the module definition that is most natural for GP: a piece of program code (subtree). However, as opposed to syntax-based approaches that abstract from the actual computation performed by a module, we also analyze its semantics using a set of fitness cases. In particular, the central notion of this approach is the subgoal, an entity that embodies a module's desired semantics and is used to evaluate module candidates. As the cardinality of the space of all subgoals is exponential with respect to the number of fitness cases, we introduce monotonicity to assess a subgoal's potential utility for searching for good modules. For a given subgoal and a sample of modules, monotonicity measures the correlation between the subgoal's distance from a module's semantics and the fitness of the solution the module is part of. In the experimental part we demonstrate how these concepts may be used to describe and quantify the modularity of two simple problems of Boolean function synthesis. In particular, we conclude that monotonicity usefully differentiates two problems with different natures of modularity, allows us to tell apart the useful subgoals from the other ones, and may potentially be used for problem decomposition and to enhance the efficiency of evolutionary search. | [
"modularity",
"genetic programming",
"problem decomposition"
] | [
"P",
"P",
"P"
] |
pRzceJS | On the global output convergence of a class of recurrent neural networks with time-varying inputs | This paper studies the global output convergence of a class of recurrent neural networks with globally Lipschitz continuous and monotone nondecreasing activation functions and locally Lipschitz continuous time-varying inputs. We establish two sufficient conditions for global output convergence of this class of neural networks. Symmetry in the connection weight matrix is not required in the present results which extend the existing ones. (c) 2004 Elsevier Ltd. All rights reserved. | [
"global output convergence",
"recurrent neural networks",
"time-varying input",
"lipschitz continuity",
"lyapunov diagonal stability",
"optimization"
] | [
"P",
"P",
"P",
"P",
"U",
"U"
] |
1Yg-z-g | answering approximate queries over autonomous web databases | To deal with the problem of empty or too few answers returned from a Web database in response to a user query, this paper proposes a novel approach to provide relevant and ranked query results. Based on the user's original query, we speculate how much the user cares about each specified attribute and assign a corresponding weight to it. This original query is then rewritten as an approximate query by relaxing the query criteria range. The relaxation order of all specified attributes and the relaxed degree on each specified attribute are varied with the attribute weights. For the approximate query results, we generate users' contextual preferences from the database workload and use them to create a priori orders of tuples in an off-line preprocessing step. Only a few representative orders are saved, each corresponding to a set of contexts. Then, these orders and associated contexts are used at query time to expeditiously provide ranked answers. Results of a preliminary user study demonstrate that our query relaxation and results ranking methods can capture the user's preferences effectively. The efficiency and effectiveness of our approach are also demonstrated by experimental results. | [
"web database",
"query relaxation",
"top-k.",
"query results ranking"
] | [
"P",
"P",
"U",
"R"
] |
-:R2yVw | Stochastic finite learning of the pattern languages | The present paper proposes a new learning model-called stochastic finite learning-and shows the whole class of pattern languages to be learnable within this model. This main result is achieved by providing a new and improved average-case analysis of the Lange-Wiehagen (New Generation Computing, 8, 361-370) algorithm learning the class of all pattern languages in the limit from positive data. The complexity measure chosen is the total learning time, i.e., the overall time taken by the algorithm until convergence. The expectation of the total learning time is carefully analyzed and exponentially shrinking tail bounds for it are established for a large class of probability distributions. For every pattern π containing k different variables it is shown that Lange and Wiehagen's algorithm possesses an expected total learning time of O(α̂^k E[Λ] log_{1/β}(k)), where α̂ and β are two easily computable parameters arising naturally from the underlying probability distributions, and E[Λ] is the expected example string length. Finally, assuming a bit of domain knowledge concerning the underlying class of probability distributions, it is shown how to convert learning in the limit into stochastic finite learning. | [
"stochastic finite learning",
"pattern languages",
"average-case analysis",
"learning in the limit",
"inductive learning"
] | [
"P",
"P",
"P",
"P",
"M"
] |
11WLFRZ | researches on scheme of pairwise key establishment for distributed sensor networks | Security schemes of pairwise key establishment, which enable sensors to communicate with each other securely, play a fundamental role in research on security issues in wireless sensor networks. A new kind of cluster-deployed sensor network distribution model is presented, and based on it, an innovative Hierarchical Hypercube model - H(k,u,m,v,n) - and the mapping relationship between cluster-deployed sensor networks and the H(k,u,m,v,n) are proposed. By utilizing the nice properties of the H(k,u,m,v,n) model, a new general framework for pairwise key predistribution and a new pairwise key establishment algorithm are designed, which combine the idea of a KDC (Key Distribution Center) with polynomial pool schemes. Furthermore, the performance of the newly proposed pairwise key establishment algorithm is carefully examined. Theoretical analysis and experimental results show that the new algorithm has better performance and provides a higher probability for sensors to establish pairwise keys, compared with previous related work. | [
"pairwise key",
"sensor networks",
"hierarchical hypercube model",
"key predistribution",
"key pool"
] | [
"P",
"P",
"P",
"P",
"R"
] |
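
The polynomial-pool ingredient mentioned above rests on symmetric bivariate polynomials; the sketch below shows that single ingredient (with a made-up coefficient matrix and node ids) and does not model the paper's hierarchical hypercube routing.

    # f(x, y) = sum c_ij x^i y^j over GF(P) with c_ij == c_ji, so f(u, v) ==
    # f(v, u): node u stores the share f(u, .), and any two nodes can derive
    # the same pairwise key without further communication.
    P = 2_147_483_647                    # a Mersenne prime as the field size

    C = [[5, 7, 11],                     # symmetric coefficient matrix
         [7, 3, 13],
         [11, 13, 2]]

    def share(u):
        """Coefficients (in y) of f(u, y), the share stored by node u."""
        return [sum(C[i][j] * pow(u, i, P) for i in range(3)) % P
                for j in range(3)]

    def pairwise_key(my_share, other_id):
        return sum(cj * pow(other_id, j, P) for j, cj in enumerate(my_share)) % P

    u, v = 1001, 2002
    assert pairwise_key(share(u), v) == pairwise_key(share(v), u)
    print("shared key:", pairwise_key(share(u), v))
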
2tPp6q8 | PROBABILISTIC QUANTUM KEY DISTRIBUTION | This work presents a new concept in quantum key distribution called the probabilistic quantum key distribution (PQKD) protocol, which is based on the measurement uncertainty in quantum phenomena. It allows two mutually untrusted communicants to negotiate an unpredictable key that has a randomness guaranteed by the laws of quantum mechanics. In contrast to conventional QKD (e.g., BB84) in which one communicant has to trust the other for key distribution or quantum key agreement (QKA) in which the communicants have to artificially contribute subkeys to a negotiating key, PQKD is a natural and simple method for distributing a secure random key. The communicants in the illustrated PQKD take Einstein-Podolsky-Rosen (EPR) pairs as quantum resources and then use entanglement swapping and Bell-measurements to negotiate an unpredictable key. | [
"quantum key distribution",
"quantum key agreement",
"quantum information",
"quantum cryptography"
] | [
"P",
"P",
"M",
"M"
] |
-FYgAkY | An experiment with reflective middleware to support grid-based flood monitoring | Flooding is a growing problem, which affects more than 10% of the U.K. population. The cost of damage caused by flooding correlates closely with the warning time given before a flood event, making flood monitoring and prediction critical to minimizing the cost of flood damage. This paper describes a wireless sensor network (WSN) for flood warning, which is capable of not only integrating with remote fixed-network grids for computationally intensive flood modelling purposes but also performing on-site grid computation. This functionality is supported by the reflective and component-based GridKit middleware, which provides support for both WSN and grid application domains. Copyright (c) 2007 John Wiley & Sons, Ltd. | [
"middleware",
"grid",
"wsn"
] | [
"P",
"P",
"P"
] |
2bt55Ag | rate control for delay-sensitive traffic in multihop wireless networks | We propose two multipath rate control algorithms that guarantee bounded end-to-end delay in multihop wireless networks. Our work extends the previous research on optimal rate control and scheduling in multihop wireless networks, to support inelastic delay requirements. Using the relationship between dual variables and packet delay, we develop two alternative solutions that are independent from any queuing model assumption, contrary to the previous research. In the first solution, we derive lower bounds on source rates that achieve the required delay bounds. We then develop a distributed algorithm comprising scheduling and rate control functions, which requires each source to primarily check the feasibility of its QoS before initiating its session. In the second solution we eliminate the admission control phase by developing an algorithm that converges to the utility function weights that ensure the required delay bounds for all flows. Both solutions carry out scheduling at slower timescale than rate control, and consequently are more efficient than previous cross-layer algorithms. We show through numerical examples that even when there are no delay constraints, the proposed algorithms significantly reduce the delay compared to the previous solutions. | [
"rate control",
"delay",
"multihop wireless networks",
"qos",
"cross-layer optimization"
] | [
"P",
"P",
"P",
"P",
"R"
] |
3x3widU | Optimality of KLT for high-rate transform coding of Gaussian vector-scale mixtures: Application to reconstruction, estimation, and classification | The Karhunen-Loeve transform (KLT) is known to be optimal for high-rate transform coding of Gaussian vectors for both fixed-rate and variable-rate encoding. The KLT is also known to be suboptimal for some non-Gaussian models. This paper proves high-rate optimality of the KLT for variable-rate encoding of a broad class of non-Gaussian vectors: Gaussian vector-scale mixtures (GVSM), which extend the Gaussian scale mixture (GSM) model of natural signals. A key concavity property of the scalar GSM (same as the scalar GVSM) is derived to complete the proof. Optimality holds under a broad class of quadratic criteria, which include mean-squared error (MSE) as well as generalized f-divergence loss in estimation and binary classification systems. Finally, the theory is illustrated using two applications: signal estimation in multiplicative noise and joint optimization of classification/reconstruction systems. | [
"estimation",
"classification",
"karhunen-loeve transform (klt)",
"gaussian scale mixture",
"mean-squared error (mse)",
"f-divergence",
"multiplicative noise",
"chernoff distance",
"high-resolution quantization",
"quadratic criterion"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"U",
"U",
"M"
] |
4-77vE7 | The Norepinephrine Transporter and Pheochromocytoma | Pheochromocytomas are rare neuroendocrine tumors of chromaffin cell origin that synthesize and secrete excess quantities of catecholamines and other vasoactive peptides. Pheochromocytomas also express the norepinephrine transporter (NET), a molecule that is used clinically as a means of incorporating radiolabelled substrates such as 131I-MIBG (iodo-metaiodobenzylguanidine) into pheochromocytoma tumor cells. This allows the diagnostic localization of these tumors and, more recently, 131I-MIBG has been used in trials in the treatment of pheochromocytoma, potentially giving rise to NET as a therapeutic target. However, because of varying levels or activities of the transporter, the ability of 131I-MIBG to be consistently incorporated into tumor cells is limited, and therefore various strategies to increase NET functional activity are being investigated, including the use of traditional chemotherapeutic agents such as cisplatin or doxorubicin. Other aspects of NET discussed in this short review include the regulation of the transporter and how novel protein-protein interactions between NET and structures such as syntaxin 1A may hold the key to innovative ways to increase the therapeutic value of 131I-MIBG. | [
"norepinephrine transporter",
"cisplatin",
"pc12 cells",
"uptake assay"
] | [
"P",
"P",
"M",
"U"
] |
2T1JQ-D | MetaEasy: A Meta-Analysis Add-In for Microsoft Excel | Meta-analysis is a statistical methodology that combines or integrates the results of several independent clinical trials considered by the analyst to be 'combinable' (Huque 1988). However, completeness and user-friendliness are uncommon both in specialised meta-analysis software packages and in mainstream statistical packages that have to rely on user-written commands. We implemented the meta-analysis methodology in a Microsoft Excel add-in which is freely available and incorporates more meta-analysis models (including the iterative maximum likelihood and profile likelihood) than are usually available, while paying particular attention to the user-friendliness of the package. | [
"meta-analysis",
"excel",
"maximum likelihood",
"profile likelihood",
"forest plot",
"vba"
] | [
"P",
"P",
"P",
"P",
"U",
"U"
] |
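
The computations such a tool automates are short; for instance, inverse-variance pooling with a DerSimonian-Laird random-effects step looks as follows in Python (the effect sizes and variances below are made up):

    import math

    effects = [0.30, 0.10, 0.45, 0.25]       # per-study effect estimates
    variances = [0.04, 0.02, 0.09, 0.03]

    w = [1 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance

    w_re = [1 / (v + tau2) for v in variances]    # random-effects weights
    random_eff = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    print(f"fixed={fixed:.3f}  random={random_eff:.3f} "
          f"(95% CI {random_eff - 1.96 * se:.3f}..{random_eff + 1.96 * se:.3f})")
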
22wCGV6 | Adaptive data collection strategies for lifetime-constrained wireless sensor networks | Communication is a primary source of energy consumption in wireless sensor networks. Due to resource constraints, the sensor nodes may not have enough energy to report every reading to the base station over a required network lifetime. This paper investigates data collection strategies in lifetime-constrained wireless sensor networks. Our objective is to maximize the accuracy of data collected by the base station over the network lifetime. Instead of sending sensor readings periodically, the relative importance of the readings is considered in data collection: the sensor nodes send data updates to the base station when the new readings differ more substantially from the previous ones. We analyze the optimal update strategy and develop adaptive update strategies for both individual and aggregate data collections. We also present two methods to cope with message losses in wireless transmission. To make full use of the energy budgets, we design an algorithm to allocate the numbers of updates allowed to be sent by the sensor nodes based on their topological relations. Experimental results using real data traces show that, compared with the periodic strategy, adaptive strategies significantly improve the accuracy of data collected by the base station. | [
"data collection",
"sensor network",
"network lifetime",
"energy efficiency",
"data accuracy"
] | [
"P",
"P",
"P",
"M",
"R"
] |
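
The basic value-deviation rule behind such adaptive strategies is a one-line decision per reading: transmit only when the new value departs from the last reported one by more than a threshold. A toy sketch with synthetic readings:

    def updates_sent(readings, threshold):
        last, sent = None, []
        for t, r in enumerate(readings):
            if last is None or abs(r - last) > threshold:
                sent.append((t, r))      # transmit to the base station
                last = r                 # base station now tracks this value
        return sent

    readings = [20.0, 20.1, 20.2, 23.5, 23.6, 19.0, 19.1, 19.0]
    for th in (0.0, 0.5, 2.0):           # larger threshold -> fewer updates
        print(f"threshold={th}: {len(updates_sent(readings, th))} updates")
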
Q9iK&11 | Image fusion-based contrast enhancement | The goal of contrast enhancement is to improve visibility of image details without introducing unrealistic visual appearances and/or unwanted artefacts. While global contrast-enhancement techniques enhance the overall contrast, their dependences on the global content of the image limit their ability to enhance local details. They also result in significant change in image brightness and introduce saturation artefacts. Local enhancement methods, on the other hand, improve image details but can produce block discontinuities, noise amplification and unnatural image modifications. To remedy these shortcomings, this article presents a fusion-based contrast-enhancement technique which integrates information to overcome the limitations of different contrast-enhancement algorithms. The proposed method balances the requirement of local and global contrast enhancements and a faithful representation of the original image appearance, an objective that is difficult to achieve using traditional enhancement methods. Fusion is performed in a multi-resolution fashion using Laplacian pyramid decomposition to account for the multi-channel properties of the human visual system. For this purpose, metrics are defined for contrast, image brightness and saturation. The performance of the proposed method is evaluated using visual assessment and quantitative measures for contrast, luminance and saturation. The results show the efficiency of the method in enhancing details without affecting the colour balance or introducing saturation artefacts and illustrate the usefulness of fusion techniques for image enhancement applications. | [
"contrast enhancement",
"luminance",
"image fusion",
"pyramidal image decomposition",
"gaussian pyramid decomposition",
"image blending"
] | [
"P",
"P",
"R",
"R",
"M",
"M"
] |
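As a hedged illustration of the fusion machinery described above, here is a minimal Laplacian-pyramid blend of two pre-enhanced inputs (say, a globally equalised and a locally enhanced version of the same image). It uses a uniform scalar weight in place of the paper's contrast, brightness and saturation metrics, which would supply per-level, per-pixel weights; the function names and the weighting scheme are ours.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    """Build a Laplacian pyramid: band-pass detail layers plus a residual."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)          # detail (band-pass) layer
        cur = down
    pyr.append(cur)                   # low-frequency residual
    return pyr

def fuse(img_a, img_b, weight_a, levels=4):
    """Blend two enhanced versions level by level, then collapse the pyramid."""
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    fused = [weight_a * a + (1 - weight_a) * b for a, b in zip(pa, pb)]
    out = fused[-1]
    for layer in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(layer.shape[1], layer.shape[0])) + layer
    return np.clip(out, 0, 255).astype(np.uint8)

# globally_eq and locally_eq would be, e.g., histogram-equalised and CLAHE outputs:
# result = fuse(globally_eq, locally_eq, weight_a=0.5)
```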
48QDwre | Reachability analysis for uncertain SSPs | Stochastic Shortest Path problems (SSPs) can be efficiently dealt with by the Real-Time Dynamic Programming algorithm (RTDP). Yet, RTDP requires that a goal state is always reachable. This article presents an algorithm checking for goal reachability, especially in the complex case of an uncertain SSP where only a possible interval is known for each transition probability. This gives an analysis method for determining if SSP algorithms such as RTDP are applicable, even if the exact model is not known. As this is a time-consuming algorithm, we also present a simple process that often speeds it up dramatically. Yet, the main improvement still needed is to turn to a symbolic analysis in order to avoid a complete state-space enumeration. | [
"reachability analysis",
"stochastic shortest-path problems",
"uncertain model"
] | [
"P",
"M",
"R"
] |
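The article's algorithm is not reproduced here, but a conservative version of the question can be sketched: if every state can reach the goal using only transitions whose lower probability bound is strictly positive (transitions present in every admissible model), the goal is reachable however the intervals are resolved. The data layout and function below are our own illustration of that sufficient test, not the paper's full analysis.

```python
from collections import deque

def surely_reaching_states(transitions, goal):
    """transitions[s][a][s2] = (low, high): interval bounds on Pr(s2 | s, a).
    Backward BFS from the goal over transitions whose lower bound is > 0,
    i.e. transitions that exist in every admissible model. Returns states
    from which the goal is certainly reachable; sufficient, not necessary."""
    certified = {goal}
    frontier = deque([goal])
    while frontier:
        target = frontier.popleft()
        for s, actions in transitions.items():
            if s in certified:
                continue
            # s is certified if some action surely can move it into the
            # already-certified set
            if any(bounds[target][0] > 0
                   for bounds in actions.values() if target in bounds):
                certified.add(s)
                frontier.append(s)
    return certified

# Tiny example: from s0, action 'a' certainly can move to the goal 'g'
mdp = {"s0": {"a": {"g": (0.2, 0.6), "s0": (0.4, 0.8)}}}
print(surely_reaching_states(mdp, "g"))   # {'g', 's0'}
```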
4QmLSWU | embodiment in brain-computer interaction | With emerging opportunities for using Brain-Computer Interaction (BCI) in gaming applications, there is a need to understand the opportunities and constraints of this interaction paradigm. To complement existing laboratory-based studies, there is also a call for the study of BCI in real world contexts. In this paper we present such a real world study of a simple BCI game called MindFlex, played as a social activity in the home. In particular, drawing on the philosophical traditions of embodied interaction, we highlight the importance of considering the body in BCI and not simply what is going on in the head. The study shows how people use bodily actions to facilitate control of brain activity but also to make their actions and intentions visible to, and interpretable by, others playing and watching the game. It is the public availability of these bodily actions during BCI that allows action to be socially organised, understood and coordinated with others and through which social relationships can be played out. We discuss the implications of this perspective and findings for BCI. | [
"brain-computer interaction",
"gaming",
"play",
"embodied interaction"
] | [
"P",
"P",
"P",
"P"
] |
aSmKLcb | formally measuring agreement and disagreement in ontologies | Ontologies are conceptual models of particular domains, and domains can be modeled differently, representing different opinions, beliefs or perspectives. In other words, ontologies may disagree with particular pieces of information and with one another. Assessing such agreements and disagreements is very useful in a variety of scenarios, in particular when integrating external elements of information into existing ones. In this paper, we present a set of measures to evaluate the agreement and disagreement of an ontology with a statement or with other ontologies. Our work goes beyond the naive approach of checking for logical inconsistencies: it relies on a complete formal framework based on the semantics of the considered ontologies. Experiments carried out on several concrete scenarios show the validity of our approach and the usefulness of measuring agreement and disagreement in ontologies. | [
"agreement",
"disagreement",
"ontologies",
"controversy",
"consensus"
] | [
"P",
"P",
"P",
"U",
"U"
] |
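For contrast with the paper's semantics-based measures, the naive baseline it improves upon can be sketched as simple statement counting. The set representation and the contradicts oracle below are hypothetical stand-ins (e.g., for a DL reasoner), not the paper's framework.

```python
def naive_agreement(statements_a, statements_b, contradicts):
    """Naive baseline: score two ontologies by shared vs. contradicted
    statements. statements_* are sets of asserted triples; contradicts(s, t)
    is a hypothetical oracle, e.g. a reasoner checking that asserting both
    statements together causes logical inconsistency."""
    shared = len(statements_a & statements_b)
    conflicts = sum(1 for s in statements_a for t in statements_b
                    if contradicts(s, t))
    total = len(statements_a | statements_b)
    if total == 0:
        return 0.0, 0.0
    return shared / total, conflicts / total   # (agreement, disagreement)
```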
2e7Cp7f | Minimal Realizations of Linear Systems: The "Shortest Basis" Approach | Given a discrete-time linear system C, a shortest basis for C is a set of linearly independent generators for C with the least possible lengths. A basis B is a shortest basis if and only if it has the predictable span property (i.e., it has the predictable delay and degree properties, and is non-catastrophic), or alternatively if and only if it has the subsystem basis property (for any interval J, the generators in B whose spans lie in J form a basis for the subsystem C(J)). The dimensions of the minimal state spaces and minimal transition spaces of C are simply the numbers of generators in a shortest basis B that are active at any given state or symbol time, respectively. A minimal linear realization for C in controller canonical form follows directly from a shortest basis for C, and a minimal linear realization for C in observer canonical form follows directly from a shortest basis for the orthogonal system C⊥. This approach seems conceptually simpler than that of classical minimal realization theory. | [
"minimal realizations",
"linear systems"
] | [
"P",
"P"
] |
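The counting result stated above can be written symbolically; the notation (delay d_i, degree e_i, state space Sigma_k) and the exact activity convention are ours, chosen to match the prose rather than taken from the paper.

```latex
% Let B = \{g_1,\dots,g_m\} be a shortest basis for C, where generator
% g_i has delay d_i and degree e_i, i.e. span [d_i, e_i]. Then
\dim \Sigma_k \;=\; \bigl|\{\, i : d_i < k \le e_i \,\}\bigr|,
% i.e. the minimal state space \Sigma_k at state time k has one dimension
% per generator whose span is active (started before k, not yet ended) at k.
```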
15tE-Ea | A Low-Latency Multi-layer Prefix Grouping Technique for Parallel Huffman Decoding of Multimedia Standards | Huffman coding is a popular and important lossless compression scheme for various multimedia applications. This paper presents a low-latency parallel Huffman decoding technique with efficient memory usage for multimedia standards. First, the multi-layer prefix grouping technique is proposed for sub-group partitioning; it exploits the prefix characteristics of Huffman codewords to solve the problem of table-size explosion. Second, a two-level table lookup approach is introduced, which branches promptly to the correct sub-group via the level-1 table lookup and decodes the symbols via the level-2 table lookup. Third, two optimization approaches are developed: one reduces the branch cycles, and the other runs the two-level and direct table lookup approaches in parallel to fully exploit VLIW parallel processing. An AAC Huffman decoding example is realized on the Parallel Architecture Core DSP (PAC DSP) processor. Simulation results show that, compared with the linear search method, the proposed technique reduces decoding cycles by about 89% and table size by about 33%. | [
"prefix grouping",
"multimedia",
"huffman coding",
"parallel processing",
"vliw dsp processor"
] | [
"P",
"P",
"P",
"P",
"R"
] |
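To make the two-level lookup concrete, here is a toy Python sketch with a hand-built four-symbol code (A=0, B=10, C=110, D=111), not an AAC codebook. A level-1 table on the first two bits either decodes a short symbol directly or names a sub-group plus the number of extra bits to fetch; a level-2 table finishes the job. Table layouts and names are our own illustration.

```python
def decode(bits, level1, level2, p1):
    """Two-level table-lookup Huffman decoding sketch. level1 maps the first
    p1 bits to either ('sym', symbol, codeword_length) for short codewords
    or ('grp', group_id, p2) naming a sub-group and how many more bits to
    read; level2[group_id] maps the next p2 bits to (symbol, total_length).
    Assumes the stream lets a full p1-bit index be read; real decoders pad."""
    out, pos = [], 0
    while pos + p1 <= len(bits):
        idx = int(bits[pos:pos + p1], 2)
        entry = level1[idx]
        if entry[0] == 'sym':
            _, sym, length = entry
        else:
            _, gid, p2 = entry
            idx2 = int(bits[pos + p1:pos + p1 + p2], 2)
            sym, length = level2[gid][idx2]
        out.append(sym)
        pos += length                  # advance by the true codeword length
    return out

# Toy code A=0, B=10, C=110, D=111; '11' prefixes fall through to group 0
level1 = {0b00: ('sym', 'A', 1), 0b01: ('sym', 'A', 1),
          0b10: ('sym', 'B', 2), 0b11: ('grp', 0, 1)}
level2 = {0: {0: ('C', 3), 1: ('D', 3)}}
print(decode('010110111', level1, level2, p1=2))   # ['A', 'B', 'C', 'D']
```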