id | title | abstract | keyphrases | prmu
---|---|---|---|---|
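In keyphrase-generation datasets, `prmu` labels conventionally classify each keyphrase against its source text as Present, Reordered, Mixed, or Unseen (e.g., Boudin & Gallina, 2021). Assuming this dataset follows that convention, the sketch below shows how such labels can be recomputed; the whitespace tokenization and Snowball stemming are simplifying assumptions, not necessarily the dataset's exact preprocessing.

```python
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("english")

def prmu_label(keyphrase: str, text: str) -> str:
    """P = stemmed keyphrase occurs contiguously in the text,
    R = all stemmed words occur, but never contiguously,
    M = some but not all stemmed words occur,
    U = no stemmed word occurs."""
    kp = [stemmer.stem(w) for w in keyphrase.lower().split()]
    doc = [stemmer.stem(w) for w in text.lower().split()]
    if any(doc[i:i + len(kp)] == kp for i in range(len(doc) - len(kp) + 1)):
        return "P"
    hits = sum(w in doc for w in kp)
    if hits == len(kp):
        return "R"
    return "M" if hits > 0 else "U"

# e.g., prmu_label("mobile commerce", first_row_abstract) -> "P"
```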
-cUueSe | MOBILE SHOPPERS: TYPES, DRIVERS, AND IMPEDIMENTS | The technology adoption of mobile commerce has frequently been studied by considering the extended technology acceptance model (TAM). However, the role of the perceived drivers and impediments affecting potential mobile shoppers' acceptance has scarcely been analyzed. This article highlights: (1) the typology of potential m-shoppers described by their reasons for, and perceived impediments to, mobile shopping and (2) the possible differences in the extended TAM in the resulting categories. In order to do so, we advance a single hypothesis about moderation of the m-shopper type on the relationships presented in the extended TAM. The study was conducted in Spain, a country with significant current and forecasted use of mobile shopping. Data from 476 Spanish mobile phone users were analyzed. The use of latent class clustering allowed us to identify three types of mobile shoppers that show different profiles based on their perceptions of drivers and impediments. Differences in the extended TAM relations across the clusters were identified using the multigroup approach of structural equation modeling. The results show support for the moderation effect, providing valuable information for practitioners to understand how consumers develop mobile shopping intentions, which is necessary to implement effective marketing strategies. | [
"drivers",
"impediments",
"technology adoption",
"mobile commerce",
"technology acceptance model"
] | [
"P",
"P",
"P",
"P",
"P"
] |
AGq5gh6 | visualizing textual travelogue with location-relevant images | A travelogue can be regarded as a location-oriented or scene-based document. Visualizing a pure textual travelogue with location-based images makes it convenient for readers to understand the main content of the travelogue and thus share the author's experience. Though a large number of images exist in web albums such as Flickr, they are not directly or explicitly associated with a travelogue. In this paper, we propose a general framework and four approaches to accomplish the visualization task. The first step of the framework is to extract location names and other location-related information from a travelogue (or a set of travelogues). In the second step, we use the location names as queries to retrieve candidate images together with their tags from Flickr. In the last step, the retrieved images are carefully refined by using a proper similarity function. The similarity function measures the similarity between the travelogue and the tags of each candidate Flickr image. In addition to the framework, our main contributions lie in three topic models which are used for computing the similarity functions. The models are not only adopted to visualize a single travelogue but also employed to summarize a collection of travelogues. Experimental results on a set of Chinese travelogues demonstrate the proposed methods' ability to visualize travelogues. (A sketch of the similarity step follows this row.) | [
"flickr",
"topic models",
"travelogue visualization",
"text mining"
] | [
"P",
"P",
"R",
"U"
] |
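The paper computes travelogue-to-image similarity with topic models whose details are not given in this abstract; as a stand-in, the sketch below re-ranks candidate Flickr images with a plain tf-idf cosine between the travelogue text and each image's tag string. All names are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_candidate_images(travelogue: str, image_tags: list[str]) -> list[int]:
    """Return candidate-image indices, best match first, scored by the
    cosine similarity between the travelogue and each image's tags."""
    tfidf = TfidfVectorizer().fit_transform([travelogue] + image_tags)
    sims = cosine_similarity(tfidf[0], tfidf[1:]).ravel()
    return sims.argsort()[::-1].tolist()
```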
nvQKwRs | Asymptotic approximations to the distribution of Kendall's sample τ | This paper provides a saddlepoint approximation to the distribution of the sample version of Kendall's τ, which is a measure of association between two samples. The saddlepoint approximation is compared with the Edgeworth and the normal approximations, and with the bootstrap resampling distribution. A numerical study shows that with small sample sizes the saddlepoint approximation outperforms both the normal and the Edgeworth approximations. This paper also gives an analytical comparison between approximated and exact cumulants of the sample Kendall's τ when the two samples are independent. (A sketch of the quantities involved follows this row.) | [
"normal approximation",
"bootstrap resampling",
"cumulant approximation",
"density estimation",
"edgeworth expansion",
"general saddlepoint approximation",
"u-statistic"
] | [
"P",
"P",
"R",
"U",
"M",
"M",
"U"
] |
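For concreteness, a minimal sketch of the objects being approximated, using standard definitions rather than the paper's saddlepoint machinery: the sample τ counted from concordant and discordant pairs, and the normal approximation to its null distribution via Var(τ) = 2(2n+5)/(9n(n-1)) under independence.

```python
import math
from itertools import combinations

def kendall_tau(x, y):
    """Sample Kendall's tau: (#concordant - #discordant) / C(n, 2)."""
    n = len(x)
    s = sum(1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1
            for i, j in combinations(range(n), 2)
            if (x[i] - x[j]) * (y[i] - y[j]) != 0)
    return s / (n * (n - 1) / 2)

def tau_normal_pvalue(t, n):
    """One-sided P(tau >= t) under independence, normal approximation
    with Var(tau) = 2(2n + 5) / (9n(n - 1))."""
    z = t / math.sqrt(2 * (2 * n + 5) / (9 * n * (n - 1)))
    return 0.5 * math.erfc(z / math.sqrt(2))
```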
36kaEqJ | ABNORMAL INTERICTAL GAMMA ACTIVITY MAY MANIFEST A SEIZURE ONSET ZONE IN TEMPORAL LOBE EPILEPSY | Even though recent studies have suggested that seizures do not occur suddenly and that before a seizure there is a period with an increased probability of seizure occurrence, the neurophysiological mechanisms of interictal and pre-seizure states are unknown. The ability of mathematical methods to provide much more sensitive tools for the detection of subtle changes in the electrical activity of the brain gives promise that electrophysiological markers of enhanced seizure susceptibility can be found even during interictal periods when the EEG of epilepsy patients often looks 'normal'. Previously, we demonstrated in animals that hippocampal and neocortical gamma-band rhythms (30-100 Hz) intensify long before seizures caused by systemic infusion of kainic acid. Other studies in recent years have also drawn attention to fast activity (> 30 Hz) as a possible marker of epileptogenic tissue. The current study quantified gamma-band activity during interictal periods and seizures in intracranial EEG (iEEG) in 5 patients implanted with subdural grids/intracranial electrodes during their pre-surgical evaluation. In all our patients, we found distinctive (abnormal) bursts of gamma activity with a 3 to 100 fold increase in power at gamma frequencies with respect to clinician-selected, quiescent, artifact-free, 7-20 min "normal" background (interictal) iEEG epochs 1 to 14 hours prior to seizures. Increases in gamma activity were largest in those channels which later displayed the most intensive electrographic seizure discharges. Moreover, the location of gamma-band bursts correlated (with high specificity, 96.4%, and sensitivity, 83.8%) with the seizure onset zone (SOZ) determined by clinicians. The spatial localization of interictal gamma rhythms within the SOZ suggests that the persistent presence of abnormally intensified gamma rhythms in the EEG may be an important tool for focus localization and possibly a determinant of epileptogenesis. | [
"epilepsy",
"pre-seizure state",
"ieeg",
"seizure prediction",
"gamma oscillations",
"neural synchrony"
] | [
"P",
"P",
"P",
"M",
"M",
"U"
] |
433agcT | A UML-based quantitative framework for early prediction of resource usage and load in distributed real-time systems | This paper presents a quantitative framework for early prediction of resource usage and load in distributed real-time systems (DRTS). The prediction is based on an analysis of UML 2.0 sequence diagrams, augmented with timing information, to extract timed-control flow information. It is aimed at improving the early predictability of a DRTS by offering a systematic approach to predict, at the design phase, system behavior at each time instant during its execution. Since behavioral models such as sequence diagrams are available in early design phases of the software life cycle, the framework enables resource analysis at a stage when design decisions are still easy to change. Though we provide a general framework, we use network traffic as an example resource type to illustrate how the approach is applied. We also indicate how usage and load analysis of other types of resources (e.g., CPU and memory) can be performed in a similar fashion. A case study illustrates the feasibility of the approach. | [
"uml",
"real-time systems",
"load analysis",
"resource usage prediction",
"load forecasting",
"resource overuse detection",
"distributed systems"
] | [
"P",
"P",
"P",
"R",
"M",
"M",
"R"
] |
BWpzX9L | application-specific trace compression for low bandwidth trace logging | This poster introduces an application-specific trace log compression mechanism targeted for execution on wireless sensor network nodes. Trace logs capture sequences of significant events executed on a node to provide visibility into the system. The application-specific compression mechanism exploits static program control flow knowledge to automate insertion of trace statements that capture trace data in a concise form. Initial evaluation reveals that these compressed trace logs, when generated, consume just over a fifth of the space required by standard trace logging techniques. | [
"logging",
"wireless sensor networks",
"debugging"
] | [
"P",
"P",
"U"
] |
2APuh6b | A Data Integration Framework for Prediction of Transcription Factor Targets | We present a computational framework for predicting targets of transcription factor regulation. The framework is based on the integration of a number of sources of evidence, derived from DNA-sequence and gene-expression data, using a weighted sum approach. Sources of evidence are prioritized based on a training set, and their relative contributions are then optimized. The performance of the proposed framework is demonstrated in the context of BCL6 target prediction. We show that this framework is able to uncover BCL6 targets reliably when biological prior information is utilized effectively, particularly in the case of sequence analysis. The framework results in a considerable gain in performance over scores in which sequence information was not incorporated. This analysis shows that with assessment of the quality and biological relevance of the data, reliable predictions can be obtained with this computational framework. | [
"data integration",
"network inference",
"transcription factor binding site prediction"
] | [
"P",
"U",
"M"
] |
3pNEvtb | A new approach to the identification of a fuzzy model | This paper presents an approach which is useful for the identification of a fuzzy model. The identification of a fuzzy model using input-output data consists of two parts: structure identification and parameter identification. In this paper, algorithms to identify those parameters and structures are suggested to solve the problems of conventional methods. Given a set of input-output data, the consequent parameters are identified by the Hough transform and clustering method, which consider the linearity and continuity, respectively. For the premise part identification, the input space is partitioned by a clustering method. The gradient descent algorithm is used to fine-tune parameters of a fuzzy model. Finally, it is shown that this method is useful for the identification of a fuzzy model by simulation. | [
"identification of a fuzzy model",
"fuzzy model"
] | [
"P",
"P"
] |
-GB78Bp | An approach to the radiometric aerotriangulation of photogrammetric images | Harnessing the radiometric information provided by photogrammetric flights could be useful in increasing the thematic applications of aerial images. The aim of this paper is to improve relative and absolute homogenization in aerial images by applying atmospheric correction and treatment of bidirectional effects. We propose combining remote sensing methodologies based on radiative transfer models and photogrammetry models, taking into account the three-dimensional geometry of the images (external orientation and Digital Elevation Model). The photogrammetric flight was done with a Z/I Digital Mapping Camera (DMC) with a Ground Sample Distance (GSD) of 45 cm. Spectral field data were acquired by defining radiometric control points in order to apply atmospheric correction models, obtaining calibration parameters from the camera and surface reflectance images. Kernel-driven models were applied to correct the anisotropy caused by the bidirectional reflectance distribution function (BRDF) of surfaces viewed under large observation angles with constant illumination, using the overlapping area between images and the establishment of radiometric tie points. Two case studies were used: 8-bit images with applied Lookup Tables (LUTs) resulting from the conventional photogrammetric workflow for BRDF studies and original 12-bit images (Low Resolution Color, LRC) for the correction of atmospheric and bidirectional effects. The proposed methodology shows promising results in the different phases of the process. The geometric kernel that shows the best performance is the Lidense kernel. The homogenization factor in 8-bit images ranged from 6% to 25% relative to the range of digital numbers (0-255), and from 18% to 35% relative to levels of reflectance (0-100) in the 12-bit images, representing a relative improvement of approximately 130%, depending on the band analyzed. | [
"radiometric aerotriangulation",
"aerial images",
"atmospheric correction",
"bidirectional effects",
"calibration",
"kernel-driven models"
] | [
"P",
"P",
"P",
"P",
"P",
"P"
] |
45Tdvqd | On-line market research | This article focuses upon innovation and issues in on-line market research. On-line research, on-line consumer behavior, and e-commerce are new areas of academic study in marketing. Most of the early work in these areas was done by practitioners, as illustrated by research reports and case studies presented at professional conferences. The present article reviews technologies and methods of on-line research and points to various methodological and ethical issues. On-line research is evaluated from two perspectives: orthodox thinking about the validity of research, and out-of-the-box thinking about how on-line research can increase the impact of market research and develop the core competitive capabilities of firms. Further academic research and learning from the data of on-line research and electronic commerce are encouraged. | [
"consumer behavior",
"consumer research",
"interactive research",
"internet research",
"new product development",
"on-line focus groups",
"on-line shopping",
"web surveys"
] | [
"P",
"R",
"M",
"M",
"M",
"M",
"M",
"U"
] |
4R&WXF6 | Helmholtz stereopsis: Exploiting reciprocity for surface reconstruction | We present a method, termed Helmholtz stereopsis, for reconstructing the geometry of objects from a collection of images. Unlike existing methods for surface reconstruction (e.g., stereo vision, structure from motion, photometric stereopsis), Helmholtz stereopsis makes no assumptions about the nature of the bidirectional reflectance distribution functions (BRDFs) of objects. This new method of multinocular stereopsis exploits Helmholtz reciprocity by choosing pairs of light source and camera positions that guarantee that the ratio of the emitted radiance to the incident irradiance is the same for corresponding points in the two images. The method provides direct estimates of both depth and surface normals, and consequently weds the advantages of both conventional stereopsis and photometric stereopsis. Results from our implementation lend empirical support to our technique. | [
"stereo",
"surface reconstruction",
"reflectance",
"brdf",
"helmholtz reciprocity",
"photometric stereo"
] | [
"P",
"P",
"P",
"P",
"P",
"R"
] |
1jgRjbG | Using algebraic signatures to check data possession in cloud storage | Cloud storage enables users to access their data anywhere and at any time. It also can comply with a growing number of regulations. However, it brings about many new security challenges. When users store their data in cloud storage, they are mostly concerned about whether the data is intact. This is the goal of remote data possession checking (RDPC) schemes. This paper proposes an algebraic signature based RDPC scheme. Algebraic signatures improve efficiency: they can be computed at tens to hundreds of megabytes per second. The scheme allows verification without the need for the challenger to compare against the original data. The challenge/response protocol transmits a small, constant amount of data. The user needs to store only two secret keys and several random numbers. The algebraic property of algebraic signatures makes it possible to propose an efficient challenge updating method. Finally, experimental results reveal that the performance is bound by disk I/O and not by the algebraic signature and cryptographic computation, which makes the scheme ideally suited for use in cloud storage. (A toy sketch of the algebraic property follows this row.) | [
"algebraic signature",
"cloud storage",
"remote data possession checking",
"storage security"
] | [
"P",
"P",
"P",
"R"
] |
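A toy illustration of the algebraic property such schemes rely on. The real construction works in a Galois field GF(2^w); this sketch substitutes arithmetic modulo a prime, and all names and constants are illustrative. The point is linearity: a verifier can check an aggregate signature of combined blocks without ever seeing the original data.

```python
P = 2**61 - 1   # prime modulus (stand-in for a Galois field)
G = 5           # base; kept secret in a real scheme

def alg_sig(block, g=G, p=P):
    """Toy algebraic signature: sig(b) = sum_i b[i] * g**i (mod p)."""
    s, gi = 0, 1
    for v in block:
        s = (s + v * gi) % p
        gi = (gi * g) % p
    return s

a, b = [104, 101, 108, 108, 111], [119, 111, 114, 108, 100]
# Linearity: the signature of a combined block equals the combination of
# the signatures -- the basis of challenge/response verification.
assert (alg_sig(a) + alg_sig(b)) % P == alg_sig([x + y for x, y in zip(a, b)])
```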
2v7Ezdn | Artificial neural network training using a new efficient optimization algorithm | Because the search space in artificial neural networks (ANNs) is high-dimensional and multimodal, and is usually polluted by noise and missing data, the process of weight training is a complex continuous optimization problem. This paper deals with the application of a recently invented metaheuristic optimization algorithm, the bird mating optimizer (BMO), for training feed-forward ANNs. BMO is a population-based search method which imitates the mating strategies of bird species to design optimum searching techniques. In order to study the usefulness of the proposed algorithm, BMO is applied to weight training of ANNs for solving three real-world classification problems, namely, Iris flower, Wisconsin breast cancer, and Pima Indian diabetes. The performance of BMO is compared with those of the other classifiers. Simulation results indicate the superior capability of BMO to tackle the problem of ANN weight training. BMO is also applied to model a fuel cell system, which has been addressed as an open and demanding problem in electrical engineering. The promising results verify the potential of the BMO algorithm. (A sketch of the shared weight-training skeleton follows this row.) | [
"artificial neural network",
"weight training",
"bird mating optimizer",
"fuel cell"
] | [
"P",
"P",
"P",
"P"
] |
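The abstract does not spell out BMO's mating operators, so the sketch below shows only the shared skeleton of population-based ANN weight training that BMO instantiates: flatten the weights into a search vector, score it by the network's error, and evolve a population. Here that population step is a plain mutate-around-the-best search on XOR; the 2-2-1 topology and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)              # XOR targets

def forward(w, X):
    """2-2-1 feed-forward net; w packs 9 parameters (weights + biases)."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # sigmoid output

def loss(w):
    return float(np.mean((forward(w, X) - y) ** 2))

pop = rng.normal(0.0, 1.0, size=(30, 9))             # initial population
for _ in range(300):
    best = min(pop, key=loss)                        # select the fittest
    pop = best + rng.normal(0.0, 0.3, size=(30, 9))  # offspring around it
    pop[0] = best                                    # elitism
print(loss(best))                                    # small if training succeeded
```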
-JW5NvF | Discrete Calderon's projections on parallelepipeds and their application to computing exterior magnetic fields for FRC plasmas | Confining dense plasma in a field reversed configuration (FRC) is considered a promising approach to fusion. Numerical simulation of this process requires setting artificial boundary conditions (ABCs) for the magnetic field because whereas the plasma itself occupies a bounded region (within the FRC coils), the field extends from this region all the way to infinity. If the plasma is modeled using single fluid magnetohydrodynamics (MHD), then the exterior magnetic field can be considered quasi-static. This field has a scalar potential governed by the Laplace equation. The quasi-static ABC for the magnetic field is obtained using the method of difference potentials, in the form of a discrete Calderon boundary equation with projection on the artificial boundary shaped as a parallelepiped. The Calderon projection itself is computed by convolution with the discrete fundamental solution on the three-dimensional Cartesian grid. | [
"field reversed configuration (frc)",
"artificial boundary condition (abc)",
"single fluid magnetohydrodynamics (mhd)",
"the method of difference potentials",
"boundary equations with projections",
"quasi-static magnetic field",
"calderon's potentials and projections"
] | [
"P",
"P",
"P",
"P",
"P",
"R",
"R"
] |
BNJD35c | A micromechanical model for the elastic-plastic behavior of porous rocks | In this paper, we propose a polycrystalline model to study the elastic-plastic behavior of porous rocks. The proposed model will be applied to sandstone. For this purpose, the microstructure of porous rocks will be represented by an assembly of discrete grains and pores. The plastic deformation of each grain is related to the frictional sliding along a number of weakness planes. A specific plastic model is devised to describe the sliding phenomena to take into account the characteristics of rocks such as the pressure dependency and the volumetric dilatancy. A self-consistent homogenization approach is developed to account for the interactions between grains and pores. An efficient numerical procedure for model implementation is proposed. Finally, we present a series of numerical investigations and comparisons with experimental data to assess the capability of the proposed model to capture the main features of the mechanical behaviors of porous rocks. | [
"porous rocks",
"polycrystalline model",
"sandstone",
"plasticity",
"homogenization",
"self-consistent scheme"
] | [
"P",
"P",
"P",
"P",
"P",
"U"
] |
23XRdd4 | performance optimization of cnfet for ultra-low power reconfigurable architecture | Designers of field programmable gate arrays (FPGAs) continually strive to optimize chip performance. The fabrication cost of ASICs is rising exponentially in deep submicron technologies, so it is important to investigate ways of reducing FPGA power consumption so that FPGAs can also be deployed in place of ASICs in portable, energy-constrained applications. The minimum energy-delay point occurs in the subthreshold region, and subthreshold operation of circuits shows an order-of-magnitude power saving over superthreshold circuits. Therefore, it is also important to investigate the possibility of extending the use of FPGAs into the subthreshold region for ultra-low power applications. This paper investigates the subthreshold performance of a basic FPGA building block, a look-up table (LUT). It presents the performance optimization of a CNFET-based, fully encoded three-input LUT in the deep submicron (DSM) region for delay, power dissipation and switching energy. The proposed multi-chirality LUT implementation shows significant advantages in delay as well as switching energy. | [
"cnfet",
"fpga",
"subthreshold",
"multi-chirality ultra low power"
] | [
"P",
"P",
"P",
"R"
] |
4&ADw8R | 3D structuring of polymer parts using thermoforming processes | A new technique for the realization of complex three-dimensional (3D) structures in polymer materials is presented. The described process can be applied for the fabrication of 3D structured foils as well as for 3D structured polymer parts using a replica molding process. In the first step, the foil is structured by hot embossing. This structured foil is then blown into a structured mold by pressure at high temperature, using a thermoforming process. The thermoforming process is realized in a specially designed tool, where different mold inserts can be applied. In the thermoforming process this tool, containing the mold insert and the structured foil, is heated up and a pressure is applied which ensures that the foil covers the structured mold insert. Through parameter optimization, high pattern fidelity can be achieved. As test structures, a pore field with a pore diameter of 500 nm and lines and spaces with a width of 1.6 µm were used. In addition, a moth-eye structure with a period of 280 nm and a blazed grating with a period of 1 µm and a blaze angle of 15° were imprinted and blown into the structured mold. The results show that this process can be used to fabricate 3D structured foils with good structure fidelity. Besides the structured foils, additional poly(dimethylsiloxane) (PDMS) replications can be made from these foils. These PDMS replications are used as stamps in a replica molding process. A microfluidic system containing hydrophobic and hydrophilic channels was created using this process. | [
"3d structuring",
"micro and nanointegration",
"soft lithography",
"uv casting"
] | [
"P",
"M",
"U",
"U"
] |
27tWBWz | MULTIQUBIT ENTANGLEMENT OF A GENERAL INPUT STATE | Measurement of entanglement remains an important problem for quantum information. We present the design and simulation of an experimental method for an entanglement indicator for a general multiqubit state. The system can be in a pure or a mixed state, and it need not be "close" to any particular state. The system contains information about its own entanglement; we use dynamic learning methods to map this information onto a single experimental measurement which is our entanglement indicator. Our method does not require prior state reconstruction or lengthy optimization. An entanglement witness emerges from the learning process, beginning with two-qubit systems, and extrapolating this to three, four, and five qubit systems where the entanglement is not well understood. Our independently learned measures for three-qubit systems compare favorably with known entanglement measures. As the size of the system grows the amount of additional training necessary diminishes, raising hopes for applicability to large computational systems. | [
"entanglement",
"dynamic learning",
"quantum algorithm"
] | [
"P",
"P",
"M"
] |
29tpp3E | Supervised Hyperspectral Image Classification Based on Spectral Unmixing and Geometrical Features | The spectral features of hyperspectral images, such as the spectrum at each pixel or the abundance maps of the endmembers, describe the material attributes of the structures. However, the spectrum at each pixel, which usually has hundreds of spectral bands, is redundant for the classification task. In this paper, we first use spectral unmixing to reduce the dimensionality of the hyperspectral data in order to compute the abundance maps of the endmembers, since the number of endmembers in an image is much smaller than the number of spectral bands. In addition, using only the spectral information, it is difficult to distinguish some classes. Moreover, it is impossible to separate objects made of the same material but with different semantic meanings. Some geometrical features are needed to separate such spectrally similar classes. In this paper, we introduce a new geometrical feature, the characteristic scale of structures, for the classification of hyperspectral images. With the help of the abundance maps obtained by spectral unmixing (a baseline unmixing sketch follows this row), we propose a method based on the topographic map of images to estimate local scales of structures in hyperspectral images. The experiments show that using geometrical features actually improves the classification results, especially for the classes made of the same material but with different semantic meanings. When compared to traditional contextual features (such as morphological profiles), the local scale provides satisfactory results without significantly increasing the feature dimension. | [
"hyperspectral image",
"spectral unmixing",
"geometrical feature",
"scale",
"supervised classification"
] | [
"P",
"P",
"P",
"P",
"R"
] |
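The abstract does not fix a particular unmixing algorithm; a common baseline under the linear mixing model is per-pixel nonnegative least squares, sketched below. The sum-to-one abundance constraint is omitted, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def abundance_maps(cube, endmembers):
    """Per-pixel linear unmixing by nonnegative least squares.

    cube:       (rows, cols, bands) hyperspectral image
    endmembers: (bands, k) matrix, one column per endmember spectrum
    returns:    (rows, cols, k) abundance maps
    """
    rows, cols, _ = cube.shape
    k = endmembers.shape[1]
    out = np.empty((rows, cols, k))
    for i in range(rows):
        for j in range(cols):
            out[i, j], _ = nnls(endmembers, cube[i, j])
    return out
```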
zVzXtCE | Concept framework for audio information retrieval: ARF | The majority of research on content-based retrieval has focused on visual media. However, audio is also an important medium and information carrier from the viewpoint of human auditory perception, so retrieval over audio collections is needed. Conventional methods handle audio as an opaque stream medium, which is not suitable for retrieval by content. In fact, audio carries rich aural information in the form of speech, music, and sound effects, so it can be retrieved based on its aural content, such as acoustic features, musical melodies and associated semantics. In this paper, a concept framework (ARF) for content-based audio retrieval is proposed from a systematic perspective, which describes the audio content model, audio retrieval architecture and audio query schemes. Audio contents are represented by a hierarchical model and a set of formal descriptions from the physical to the acoustic to the semantic level, which depict acoustic features, logical structure and semantics of audio and audio objects. The architecture, consisting of audio meta-database, populating and accessing modules, presents a system structure view of audio information retrieval. The query schemes give generalized approaches and modes concerning how users deliver audio information needs to audio collections. Finally, an implemented audio retrieval example is used to explain and specify the application of the components in the proposed ARF. | [
"audio information retrieval",
"arf",
"content-based retrieval",
"multimedia information retrieval"
] | [
"P",
"P",
"P",
"M"
] |
3x&5kWd | Fuzzy group decision-making for facility location selection | The selection of a facility location among alternative locations is a multicriteria decision-making problem including both quantitative and qualitative criteria. The conventional approaches to the facility location problem tend to be less effective in dealing with the imprecise or vague nature of the linguistic assessment. Under many situations, the values of the qualitative criteria are often imprecisely defined for the decision-makers. The aim of the paper is to solve facility location problems using different solution approaches of fuzzy multi-attribute group decision-making. The paper includes four different fuzzy multi-attribute group decision-making approaches. The first one is a fuzzy model of group decision proposed by Blin. The second is the fuzzy synthetic evaluation. The third is Yager's weighted goals method and the last one is the fuzzy analytic hierarchy process. Although the four approaches have the same objective of selecting the best facility location alternative, they come from different theoretic backgrounds and relate differently to the discipline of multi-attribute group decision-making. These approaches are extended to select the best facility location alternative by taking into account quantitative and qualitative criteria. A short comparative analysis among the approaches and a numeric example for each approach are given. | [
"group decision",
"facility location",
"synthetic evaluation",
"weighted goals",
"fuzzy sets",
"ahp"
] | [
"P",
"P",
"P",
"P",
"M",
"U"
] |
-cpGU22 | Improving the schedulability of soft real-time open dynamic systems: The inheritor is actually a debtor | This paper presents the Clearing Fund Protocol, a three-layered protocol designed to schedule soft real-time sets of precedence related tasks with shared resources. These sets are processed in an open dynamic environment: open because new applications may enter the system at any time, and dynamic because the schedulability is tested on-line as tasks request admission. Top-down, the three layers are the Clearing Fund, the Bandwidth Inheritance and two versions of the Constant Bandwidth Server algorithms. Bandwidth Inheritance applies a priority inheritance mechanism to the Constant Bandwidth Server. However, a serious drawback is its unfairness. In fact, a task executing in a server can potentially steal the bandwidth of another server without paying any penalty. The main idea of the Clearing Fund Algorithm is to keep track of processor-time debts contracted by lower priority tasks that block higher priority ones and are executed in the higher priority servers by having inherited the higher priority. The proposed algorithm reduces the undesirable effects of those priority inversions because the blocked task can finish its execution in its own server or in the server of the blocking task, whichever has the nearest deadline. If demanded, debts are paid back in that way. Inheritors are therefore debtors. Moreover, at certain instants in time, all existing debts may be waived and the servers are reset, making a clean restart of the system. The Clearing Fund Protocol showed definitely better performance when evaluated by simulations against Bandwidth Inheritance, the protocol it tries to improve. | [
"scheduling",
"soft real-time",
"open systems"
] | [
"P",
"P",
"R"
] |
4p7JJZg | Critical evaluation of CFD codes for interfacial simulation of bubble-train flow in a narrow channel | Computational fluid dynamics (CFD) codes that are able to describe in detail the dynamic evolution of the deformable interface in gas-liquid or liquid-liquid flows may be a valuable tool to explore the potential of multi-fluid flow in narrow channels for process intensification. In the present paper, a computational exercise for co-current bubble-train flow in a square vertical mini-channel is performed to investigate the performance of well-known CFD codes for this type of flow. The computations are based on the volume-of-fluid method (VOF) where the transport equation for the liquid volumetric fraction is solved either by methods involving a geometrical reconstruction of the interface or by methods that use higher-order difference schemes instead. The codes contributing to the present code-to-code comparison are an in-house code and the commercial CFD packages CFX, FLUENT and STAR-CD. Results are presented for two basic cases. In the first one, the flow is driven by buoyancy only, while in the second case the flow is additionally forced by an external pressure gradient. The results of the code-to-code comparison show that only the VOF method with interface reconstruction leads to physically sound and consistent results, whereas the use of difference schemes for the volume fraction equation shows some deficiencies. | [
"volume-of-fluid method",
"code-to-code comparison",
"micro-process engineering",
"bubbletrain flow",
"taylor flow",
"square channel"
] | [
"P",
"P",
"U",
"M",
"M",
"R"
] |
1DHkTnq | A UML profile for multidimensional modeling in data warehouses | The multidimensional (MD) modeling, which is the foundation of data warehouses (DWs), MD databases, and On-Line Analytical Processing (OLAP) applications, is based on several properties different from those in traditional database modeling. In the past few years, there have been some proposals, providing their own formal and graphical notations, for representing the main MD properties at the conceptual level. However, unfortunately none of them has been accepted as a standard for conceptual MD modeling. In this paper, we present an extension of the Unified Modeling Language (UML) using a UML profile. This profile is defined by a set of stereotypes, constraints and tagged values to elegantly represent main MD properties at the conceptual level. We make use of the Object Constraint Language (OCL) to specify the constraints attached to the defined stereotypes, thereby avoiding an arbitrary use of these stereotypes. We have based our proposal on UML for two main reasons: (i) UML is a well-known standard modeling language familiar to most database designers, so that designers can avoid learning a new notation, and (ii) UML can be easily extended so that it can be tailored for a specific domain with concrete peculiarities such as the multidimensional modeling for data warehouses. Moreover, our proposal is Model Driven Architecture (MDA) compliant and we use the Query View Transformation (QVT) approach for an automatic generation of the implementation in a target platform. Throughout the paper, we will describe how to easily accomplish the MD modeling of DWs at the conceptual level. Finally, we show how to use our extension in Rational Rose for MD modeling. | [
"uml",
"uml profile",
"multidimensional modeling",
"data warehouses",
"uml extension"
] | [
"P",
"P",
"P",
"P",
"R"
] |
3YjHtL- | Compact factors of countable state Markov shifts | We study continuous shift commuting maps from transitive countable state Markov shifts into compact subshifts. The closure of the image is a coded system. On the other hand, any coded system is the surjective image of some transitive Markov shift, which may be chosen locally compact by construction. These two results yield a formal analogy to "the transitive sofic systems are the subshift factors of transitive shifts of finite type". Then we consider factor maps which have bounded coding length in some graph presentation (label maps). Now the image has to be synchronized, but not every synchronized system can be obtained in this way. We show various restrictions for a surjective label map to exist. | [
"markov shift",
"coded system",
"synchronized system"
] | [
"P",
"P",
"P"
] |
4g1Z1Eh | Canonical processes in active reading and hypervideo production | Active reading of audiovisual documents is an iterative activity, dedicated to the analysis of the audiovisual source through its enrichment with structured metadata and the definition of appropriate visualisation means for this metadata, producing new multimedia objects called hypervideos. We will describe in this article the general decomposition of active reading and how it is put into practice in the Advene framework, analysing how its activities fit into the Canonical Media Processes model. | [
"hypervideo",
"advene",
"annotation",
"document template",
"audiovisual information visualisation",
"sharing",
"time and synchronisation"
] | [
"P",
"P",
"U",
"M",
"M",
"U",
"M"
] |
hDJiUu8 | Interacting with human physiology | We propose a novel system that incorporates physiological monitoring as part of the human-computer interface. The sensing element is a thermal camera that is employed as a computer peripheral. Through bioheat modeling of facial imagery almost the full range of vital signs can be extracted, including localized blood flow, cardiac pulse, and breath rate. This physiological information can then be used to draw inferences about a variety of health symptoms and psychological states. Our research aims to realize the notion of desktop health monitoring and create truly collaborative interactions in which humans and machines are both observing and responding. | [
"blood flow",
"cardiac pulse",
"breath rate",
"humancomputer interaction",
"thermal imaging",
"facial tracking",
"stress",
"sleep apnea"
] | [
"P",
"P",
"P",
"R",
"M",
"M",
"U",
"U"
] |
d28d2w1 | Robust and Efficient Incentives for Cooperative Content Distribution | Content distribution via the Internet is becoming increasingly popular. To be cost-effective, commercial content providers are now using peer-to-peer (P2P) protocols such as BitTorrent to save bandwidth costs and to handle peak demands. When an online content provider uses a P2P protocol, it faces an incentive issue: how to motivate its clients to upload to their peers. This paper presents Dandelion, a system designed to address this issue. Unlike previous incentive-compatible systems, such as BitTorrent, our system provides non-manipulable incentives for clients to upload to their peers. A client that honestly uploads to its peers is rewarded in the following two ways. First, if its peers are unable to reciprocate its uploads, the content provider rewards the client's service with credit. This credit can be redeemed for discounts on paid content or other monetary rewards. Second, if the client's peers possess content of interest and have appropriate uplink capacity, the client is rewarded with reciprocal uploads from its peers. In designing Dandelion, we trade scalability for the ability to provide robust incentives for cooperation. The evaluation of our prototype system on PlanetLab demonstrates the viability of our approach. A Dandelion server that runs on commodity hardware with a moderate access link is capable of supporting up to a few thousand clients. The download completion time for these clients is substantially reduced due to the additional upload capacity offered by strongly incentivized uploaders. | [
"incentives",
"content distribution",
"peer-to-peer (p2p)",
"fair-exchange",
"symmetric cryptography"
] | [
"P",
"P",
"P",
"U",
"U"
] |
2LcNCuL | Ordering connected graphs having small degree distances | The concept of the degree distance of a connected graph G is a variation of the well-known Wiener index, in which the degrees of vertices are also involved. It is defined by D'(G) = ∑_{x∈V(G)} d(x) ∑_{y∈V(G)} d(x,y), where d(x) and d(x,y) are the degree of x and the distance between x and y, respectively. In this paper it is proved that the connected graphs of order n ≥ 4 having the smallest degree distances are K_{1,n-1}, BS(n-3,1) and K_{1,n-1}+e (in this order), where BS(n-3,1) denotes the bistar consisting of vertex-disjoint stars K_{1,n-3} and K_{1,1} with central vertices joined by an edge. (A computational check follows this row.) | [
"degree distance",
"bistar",
"eccentricity",
"diameter",
"tree"
] | [
"P",
"P",
"U",
"U",
"U"
] |
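The definition translates directly into code. A minimal sketch with networkx computes D'(G) and reproduces the claimed ordering of the three extremal graphs for n = 6.

```python
import networkx as nx

def degree_distance(G):
    """D'(G) = sum over x of deg(x) * (sum over y of d(x, y))."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    return sum(G.degree(x) * sum(dist[x].values()) for x in G)

# The three extremal graphs for n = 6:
star = nx.star_graph(5)                                  # K_{1,5}
star_e = star.copy(); star_e.add_edge(1, 2)              # K_{1,5} + e
bistar = nx.star_graph(3)                                # BS(3, 1): join
bistar.add_edge(0, 4); bistar.add_edge(4, 5)             # K_{1,3} and K_{1,1}

print(degree_distance(star), degree_distance(bistar), degree_distance(star_e))
# -> 70 82 84, matching the stated order
```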
38WrSFp | A Galerkin implementation of the generalized Helmholtz decomposition for vorticity formulations | Vorticity formulations for the incompressible Navier-Stokes equations have certain advantages over primitive-variable formulations, including the fact that the number of equations to be solved is reduced. However, the accurate implementation of the boundary conditions seems to continue to be an impediment to the acceptance and use of numerical methods based on vorticity formulations. Velocity boundary conditions can be implicitly satisfied by maintaining the kinematic compatibility of the velocity and vorticity fields as described by the generalized Helmholtz decomposition (GHD). This can be accomplished in one of two ways: by either solving for boundary vorticity (leading to a Dirichlet boundary condition for the vorticity equation) or solving for boundary vortex sheet strengths (leading to a Neumann condition). In the past, vortex sheet strengths have often been determined by solving an over-specified set of linear equations. The over-specification arose because integral constraints were imposed on the vortex sheet strengths. These integral constraints are not necessary and typically are included to mitigate errors in determining the vortex sheet strengths themselves. Further, the constraints overspecify the linear system, requiring least-squares solution techniques. To more accurately satisfy both components of the velocity boundary conditions, a Galerkin formulation is applied to the generalized Helmholtz decomposition. This formulation implicitly satisfies an integral constraint that is more general than many of the integral constraints that have been explicitly imposed. Two implementations of the Galerkin GHD are considered in the current work, one based on determining the boundary vorticity and one based on determining the boundary vortex sheet strengths. A finite element method (FEM) is implemented to solve the vorticity equation along with the boundary data generated from the GHD. | [
"generalized helmholtz decomposition",
"vorticity methods",
"galerkin method"
] | [
"P",
"R",
"R"
] |
1xDUBAk | An Inexact Alternating Direction Method for Structured Variational Inequalities | Recently, the alternating direction method of multipliers has attracted great attention. For a class of variational inequalities (VIs), this method is efficient when the subproblems can be solved exactly. However, the subproblems may be too difficult or impossible to solve exactly in many practical applications. In this paper, we propose an inexact method for structured VIs based on the projection and contraction method. Instead of solving the subproblems exactly, we use a simple projection to get a predictor and correct it to approximate the subproblems' exact solutions. The convergence of the proposed method is proved under mild assumptions and its efficiency is verified by numerical experiments. (A sketch of the underlying projection iteration follows this row.) | [
"alternating direction methods",
"variational inequalities",
"projection and contraction methods",
"65k05",
"90c30",
"90c33"
] | [
"P",
"P",
"P",
"U",
"U",
"U"
] |
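The paper's predictor-corrector details are not reproducible from the abstract alone; the sketch below shows only the basic projection iteration that the projection-and-contraction family builds on, for VI(F, Ω): find x* in Ω with F(x*)ᵀ(x - x*) ≥ 0 for all x in Ω. The box constraint set and all constants are illustrative, and convergence of this simple variant needs F strongly monotone and Lipschitz with a small enough step.

```python
import numpy as np

def vi_projection(F, x0, lo=0.0, hi=1.0, beta=0.1, tol=1e-9, iters=100000):
    """Basic projected iteration x <- P_Omega(x - beta * F(x))
    on the box Omega = [lo, hi]^n."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x_new = np.clip(x - beta * F(x), lo, hi)   # project the step
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# affine example: F(x) = A x + b with A positive definite (strongly monotone)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -2.0])
x_star = vi_projection(lambda x: A @ x + b, np.zeros(2))
```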
3whyUkf | Diabetes-Related Autoantibodies in Cord Blood from Children of Healthy Mothers Have Disappeared by the Time the Child Is One Year Old | Autoantibodies found in cord blood in children who later develop diabetes might be produced by the fetus. If so, continuous autoantibody production would still be expected in these children at one year of age. We decided to determine autoantibodies in cord blood and to see whether they persisted in these children at one year. Autoantibodies against GAD65 (glutamic acid decarboxylase) and IA-2 (tyrosine phosphatase) in cord blood were determined in 2,518 randomly selected children. Forty-nine (1.95%) were positive for GAD65 antibodies, 14 (0.56%) were positive for IA-2 antibodies, and 3 of them were positive for both GAD and IA-2. Four of the mothers of children with GAD65 autoantibodies in cord blood (8.2%) had type 1 diabetes, as did 5 mothers of children with IA-2 antibodies (35.7%), but only 0.4% of the mothers in the autoantibody-negative group had type 1 diabetes (P < 0.001). Information on infections during pregnancy was available in 2,169 pregnancies. In the autoantibody-positive group, 31.5% had an infection during pregnancy, which was more common than in the autoantibody-negative group of 500 children with the lowest values (20.1%; P < 0.04). At the one-year follow-up, none of those with positive cord blood had GAD65 or IA-2 autoantibodies. We conclude that most autoantibodies found in cord blood samples of children are probably passively transferred from mother to child. Antibody screening of cord blood cannot be used to predict diabetes in the general population. Infections during pregnancy may initiate an immune process related to diabetes development. | [
"autoantibodies",
"cord blood",
"children",
"fetus",
"type 1 diabetes",
"screening",
"immune process",
"prevention"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"U"
] |
2Ey-nH1 | Stability of Markovian jump neural networks with mode-dependent delays and generally incomplete transition probability | This paper deals with the robust exponential stability problem for a class of Markovian jump neural networks with mode-dependent delays and generally incomplete transition probability. The delays vary randomly depending on the mode of the networks. Each transition rate can be completely unknown, or only its estimate value is known. By using a new Lyapunov-Krasovskii functional, a delay-dependent stability criterion is presented in terms of linear matrix inequalities (LMIs). The proposed LMI results extend the earlier publications. Finally, a numerical example is given to show the effectiveness and efficiency of the results. | [
"markovian jumping neural networks",
"mode-dependent delay",
"generally incomplete transition probability",
"robust exponential stability",
"linear matrix inequality"
] | [
"P",
"P",
"P",
"P",
"P"
] |
1rsM6:T | intelligent web crawler | As the number of Internet users and the number of accessible Web pages grows, it is becoming increasingly difficult for users to find documents that are relevant to their particular needs. Users must either browse through a large hierarchy of concepts to find the information for which they are looking or submit a query to a publicly available search engine and wade through hundreds of results, most of them irrelevant. Web crawlers are one of the most crucial components in search engines and their optimization would have a great effect on improving searching efficiency. This paper introduces an intelligent web crawler that uses ontological engineering concepts to improve its crawling performance. The intelligent crawler estimates the best path for crawling. This is the first crawler that acts intelligently without any relevance feedback or training. | [
"web crawler",
"ontology",
"importance-metrics",
"domain knowledge"
] | [
"P",
"P",
"U",
"U"
] |
-WUu6mF | A comparative study of STBC transmissions at 2.4 GHz over indoor channels using a 2 x 2 MIMO testbed | In this paper we employ a 2 x 2 Multiple-Input Multiple-Output (MIMO) hardware platform to evaluate, in realistic indoor scenarios, the performance of different space-time block coded (STBC) transmissions at 2.4 GHz. In particular, we focus on the Alamouti orthogonal scheme considering two types of channel state information (CSI) estimation: a conventional pilot-aided supervised technique and a recently proposed blind method based on second-order statistics (SOS). For comparison purposes, we also evaluate the performance of a Differential (non-coherent) space-time block coding (DSTBC). DSTBC schemes have the advantage of not requiring CSI estimation but they incur a 3 dB loss in performance. The hardware MIMO platform is based on high-performance signal acquisition and generation boards, each one equipped with a 1 GB memory module that allows the transmission of extremely large data frames. Upconversion to RF is performed by two RF vector signal generators whereas downconversion is carried out with two custom circuits designed from commercial components. All the baseband signal processing is implemented off-line in MATLAB, making the MIMO testbed very flexible and easily reconfigurable. Using this platform we compare the performance of the described methods in line-of-sight (LOS) and non-line-of-sight (NLOS) indoor scenarios. | [
"mimo testbeds",
"multiple-input multiple-output (mimo)",
"space-time block codes",
"blind channel estimation",
"principal component analysis (pca)"
] | [
"P",
"P",
"P",
"R",
"M"
] |
3CLkDMY | SISG: self-immune automated signature generation for polymorphic worms | Purpose - The purpose of this paper is to propose a self-immune automated signature generation (SISG) method for polymorphic worms which is able to work well even while being attacked by any type of malicious adversary, and which, owing to its distributed architecture, produces globally suited rather than locally suited signatures. Through experimentation, the method is then evaluated. Design/methodology/approach - The ideal worm signature exists in each copy of the corresponding worm, but never in other worm categories or normal network traffic. SISG compares each worm copy and extracts the identical components, then produces the worm signature from those components, which must achieve low false-positive and low false-negative rates. SISG is immune to most attacks by filtering out the harmful noise made by malicious adversaries before signature generation. Findings - NOP sleds, worm bodies and descriptors are not good candidates for signatures because they can be confused intricately by polymorphic engines. Protocol frames may not be suitable as signatures because of anti-automated-signature-generation attacks. Exploit bytes are the essential part of an ideal worm signature and can be extracted by SISG exactly. Originality/value - The paper proposes a SISG method for polymorphic worms which is able to work well even while being attacked by any type of malicious adversary and produces globally suited rather than locally suited signatures owing to its distributed architecture. | [
"internet",
"data security",
"programming"
] | [
"U",
"U",
"U"
] |
-HEw5UY | Load-settlement behavior of rock-socketed drilled shafts using Osterberg-Cell tests | Osterberg-Cell (O-Cell) tests are widely used to predict the load-settlement behavior of large-diameter drilled shafts socketed in rock. The loading direction of O-Cell tests for shaft resistance is opposite to that of conventional downward load tests, meaning that the equivalent top load-settlement curve determined by the summation of the mobilized shaft resistance and end bearing at the same deflection neglects the pile-toe settlement caused by the load transmitted along the pile shaft. The emphasis is on quantifying the effect of coupled shaft resistance, which is closely related to the ratios of pile diameter to soil modulus (D/E(s)) and total shaft resistance to total applied load (R(s)/Q) in rock-socketed drilled shafts, using the coupled load-transfer method. The proposed analytical method, which takes into account the effect of coupled shaft resistance, was developed using a modified Mindlin's point load solution. Through comparisons with field case studies, it was found that the proposed method reasonably estimated the load-transfer behavior of piles and coupling effects due to the transfer of shaft shear loading. These results represent a significant improvement in the prediction of load-settlement behaviors of drilled shafts subjected to bidirectional loading from the O-Cell test. | [
"drilled shaft",
"osterberg-cell test",
"coupled shaft resistance",
"load-transfer analysis",
"mindlin's solution"
] | [
"P",
"P",
"P",
"M",
"R"
] |
-WdHhZt | Non-metric similarity search of tandem mass spectra including posttranslational modifications | In biological applications, tandem mass spectrometry is a widely used method for determining protein and peptide sequences from an in vitro sample. The sequences are not determined directly, but must be interpreted from the mass spectra, which are the output of the mass spectrometer. This work is focused on a similarity-search approach to mass spectra interpretation, where the parameterized Hausdorff distance (d_HP) is used as the similarity. In order to provide an efficient similarity search under d_HP, the metric access methods and the TriGen algorithm (controlling the metricity of d_HP) are employed. Moreover, the search model based on d_HP supports posttranslational modifications (PTMs) in the query mass spectra, which is typically a problem when an indexing approach is used. Our approach can be utilized as a coarse filter by any other database approach for mass spectra interpretation. (A sketch of the classic Hausdorff distance follows this row.) | [
"similarity search",
"posttranslational modifications",
"tandem mass spectrometry",
"metric access methods",
"peptide identification"
] | [
"P",
"P",
"P",
"P",
"M"
] |
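The parameterized d_HP itself is not defined in the abstract; for orientation, the sketch below gives the classic symmetric Hausdorff distance between two spectra represented as sets of peak m/z values, which d_HP parameterizes. The peak values are made up.

```python
def hausdorff(A, B):
    """Classic symmetric Hausdorff distance between two peak sets."""
    d_ab = max(min(abs(a - b) for b in B) for a in A)
    d_ba = max(min(abs(b - a) for a in A) for b in B)
    return max(d_ab, d_ba)

s1 = {114.1, 243.2, 372.3, 501.4}   # spectrum 1 peak m/z values
s2 = {114.1, 243.2, 452.3, 501.4}   # spectrum 2 peak m/z values
print(hausdorff(s1, s2))            # ~80.0, driven by the unmatched peak
```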
4UKtM2k | A noncooperative game-theoretic framework for radio resource management in 4G heterogeneous wireless access networks | Fourth generation (4G) wireless networks will provide high-bandwidth connectivity with quality-of-service (QoS) support to mobile users in a seamless manner. In such a scenario, a mobile user will be able to connect to different wireless access networks such as a wireless metropolitan area network (WMAN), a cellular network, and a wireless local area network (WLAN) simultaneously. We present a game-theoretic framework for radio resource management (that is, bandwidth allocation and admission control) in such a heterogeneous wireless access environment. First, a noncooperative game is used to obtain the bandwidth allocations to a service area from the different access networks available in that service area (on a long-term basis). The Nash equilibrium for this game gives the optimal allocation which maximizes the utilities of all the connections in the network (that is, in all of the service areas). Second, based on the obtained bandwidth allocation, to prioritize vertical and horizontal handoff connections over new connections, a bargaining game is formulated to obtain the capacity reservation thresholds so that the connection-level QoS requirements can be satisfied for the different types of connections (on a long-term basis). Third, we formulate a noncooperative game to obtain the amount of bandwidth allocated to an arriving connection (in a service area) by the different access networks (on a short-term basis). Based on the allocated bandwidth and the capacity reservation thresholds, an admission control is used to limit the number of ongoing connections so that the QoS performances are maintained at the target level for the different types of connections. | [
"noncooperative game",
"bandwidth allocation and admission control",
"heterogeneous wireless networks",
"network utility"
] | [
"P",
"P",
"R",
"R"
] |
17ZyZMf | On the performance of online and offline green path establishment techniques | To date, significant effort has gone into designing green traffic engineering (TE) techniques that consolidate traffic onto the minimal number of links/switches/routers during off-peak periods. However, few works exist that aim to green Multi-Protocol Label Switching (MPLS) capable networks. Critically, no work has studied the performance of green label switched path (LSP) establishment methods in terms of energy savings and acceptance rates. Hence, we add to the current state-of-the-art by studying green online and offline LSP establishment methods. Online methods rely only on past and current LSP requests, while offline ones act as a theoretical benchmark whereby they also have future LSP requests available to them. We introduce a novel metric that takes into account both energy savings and acceptance rates. We also identify a new, simpler heuristic that minimizes energy use by routing source-destination demands over paths that contain established links and require the fewest number of new links (sketched after this row). Our evaluation of two offline and four online LSP establishment methods over the Abilene and AT&T topologies with random LSP setup requests shows that energy savings beyond 20% are achievable with LSP acceptance rates above 90%. | [
"traffic engineering",
"mpls",
"green technologies",
"online and offline green lsp establishment methods"
] | [
"P",
"P",
"M",
"R"
] |
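A minimal sketch of the "fewest new links" idea as described: established links get near-zero cost and unpowered links unit cost, so a shortest path both reuses active links and minimizes newly powered ones. The networkx modeling and the epsilon value are illustrative choices, not the paper's implementation.

```python
import networkx as nx

def route_demand(G, active, src, dst, eps=1e-6):
    """Route src->dst preferring already-active links; links switched on
    for this path are added to `active` (a set of frozenset edges)."""
    for u, v in G.edges:
        G[u][v]["w"] = eps if frozenset((u, v)) in active else 1.0
    path = nx.shortest_path(G, src, dst, weight="w")
    active |= {frozenset(e) for e in zip(path, path[1:])}
    return path

G = nx.cycle_graph(6)                   # toy topology
active = set()
print(route_demand(G, active, 0, 3))    # powers on a 3-link path
print(route_demand(G, active, 1, 2))    # reuses active links where possible
```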
1729&2d | Some aspects of the test generation problem for an application-oriented test of SRAM-based FPGAs | This paper presents a structural approach for testing SRAM-based FPGAs taking into account the configurability of such flexible devices. When SRAM-based FPGA testing is considered, different situations have first to be identified: namely the Application-Oriented Test situation and the Manufacturing-Oriented Test situation. This paper concentrates on Test Pattern Generation and DFT for an Application-Oriented test of SRAM-based FPGAs. | [
"testing",
"fpga",
"vlsi"
] | [
"P",
"P",
"U"
] |
n-:Pg:T | Improved slack-based context-dependent DEA - A study of international tourist hotels in Taiwan | In this paper, we improve the slack-based measure (SBM) of efficiency in context-dependent data envelopment analysis (DEA) and apply it to measure the performance of 34 selected international tourist hotels in Taiwan. Empirical results indicate that (1) the market differentiates five performance levels, forming the benchmark structure for the 34 hotels; (2) the hotels with higher attractiveness can serve as learning targets for the hotels in the lagging levels, so as to establish the best path for performance improvements; (3) the hotels in the leading levels can use lower progress to analyze potential competitors in the lagging levels. The results of this study can provide hotel managers with insights into competitive advantage and help them with strategic decision making. (C) 2010 Elsevier Ltd. All rights reserved. | [
"context-dependent",
"international tourist hotel",
"slack-based measure",
"data envelopment analysis"
] | [
"P",
"P",
"P",
"P"
] |
4nC7oaF | Joint Optimal Sensor Selection and Scheduling in Dynamic Spectrum Access Networks | Spectrum sensing is key to the realization of dynamic spectrum access. To protect primary users' communications from the interference caused by secondary users, spectrum sensing must meet the strict detectability requirements set by regulatory bodies, such as the FCC. Such strict detection requirements, however, can hardly be achieved using PHY-layer sensing techniques alone with one-time sensing by only a single sensor. In this paper, we jointly exploit two MAC-layer sensing methods-cooperative sensing and sensing scheduling-to improve spectrum sensing performance, while incurring minimum sensing overhead. While these sensing methods have been studied individually, little has been done on their combinations and the resulting benefits. Specifically, we propose to construct a profile of the primary signal's RSSs and design a simple, yet near-optimal, incumbent detection rule. Based on this constructed RSS profile, we develop an algorithm to find 1) an optimal set of sensors; 2) an optimal point at which to stop scheduling additional sensing; and 3) an optimal sensing duration for one-time sensing, so as to make a tradeoff between detection performance and sensing overhead. Our evaluation results show that the proposed sensing algorithms reduce the sensing overhead by up to 65 percent, while meeting the requirements of both false-alarm and misdetection probabilities of less than 0.01. | [
"sensor selection",
"dynamic spectrum access",
"cognitive radio",
"cooperative sensing",
"sensing scheduling"
] | [
"P",
"P",
"U",
"M",
"R"
] |
2ScwPFT | Investigating the application of opposition concept to colonial competitive algorithm | Evolutionary algorithms (EAs) are well-known optimisation approaches to deal with non-linear and complex problems. However, these population-based algorithms are computationally expensive due to the slow nature of the evolutionary process. This paper presents a novel algorithm to accelerate the colonial competitive algorithm (CCA). The proposed opposition-based CCA (OCCA) employs opposition-based learning (OBL) for population initialisation and also for generation jumping. In this work, opposite countries and colonies have been utilised to improve the convergence rate of CCA. A comprehensive set of 15 complex benchmark functions covering a wide range of dimensions is employed for experimental verification. The influences of dimensionality and population size are also investigated. Experimental results confirm that the OCCA outperforms the original CCA in terms of convergence speed and robustness. | [
"colonial competitive algorithm",
"obl",
"cca",
"opposition-based learning",
"opposite countries",
"opposite colonies"
] | [
"P",
"P",
"P",
"P",
"P",
"R"
] |
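Opposition-based learning, as used in the OCCA record above, evaluates each candidate alongside its "opposite" point and keeps the fitter of the two. A minimal sketch of the generic OBL step (an illustration of the recipe, not the paper's exact OCCA update for countries and colonies):

```python
import numpy as np

def opposite(population, low, high):
    """Opposite of x in [low, high] is low + high - x (component-wise)."""
    return low + high - population

def obl_step(population, fitness, low, high):
    """Evaluate the population and its opposite, keep the best half (minimization)."""
    opposed = opposite(population, low, high)
    both = np.vstack([population, opposed])
    scores = np.apply_along_axis(fitness, 1, both)
    best = np.argsort(scores)[: len(population)]
    return both[best]

# Example: one OBL step on the sphere function over [-5, 5]^3
rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, size=(10, 3))
pop = obl_step(pop, lambda x: np.sum(x**2), -5.0, 5.0)
```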
w-&y-F- | A smoothed particle hydrodynamics model for miscible flow in three-dimensional fractures and the two-dimensional Rayleigh-Taylor instability | A numerical model based on smoothed particle hydrodynamics (SPH) has been developed and used to simulate the classical two-dimensional Rayleigh-Taylor instability and three-dimensional miscible flow in fracture apertures with complex geometries. To model miscible flow, fluid particles with variable, composition-dependent masses were used. By basing the SPH equations on the particle number density, artificial surface tension effects were avoided. The simulation results for the growth of a single perturbation driven by the Rayleigh-Taylor instability compare well with numerical results obtained by Fournier et al., and the growth of a perturbation with time can be represented quite well by a second-degree polynomial, in accord with the linear stability analysis of Duff et al. The dispersion coefficient found from SPH simulation of flow and diffusion in an ideal fracture was in excellent agreement with the value predicted by the theory of Taylor and Aris. The simulations of miscible flow in fracture apertures can be used to determine dispersion coefficients for transport in fractured media - a parameter used in large-scale simulations of contaminant transport. (c) 2005 Elsevier Inc. All rights reserved. | [
"smoothed particle hydrodynamics",
"miscible flow",
"rayleigh-taylor instability",
"flow and transport in fractures"
] | [
"P",
"P",
"P",
"R"
] |
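SPH interpolates field quantities with a compactly supported smoothing kernel; the cubic spline kernel below is the standard textbook choice (a generic ingredient shown for orientation, not necessarily the exact kernel of this paper):

```python
import numpy as np

def cubic_spline_w(r, h, dim=3):
    """Standard cubic spline smoothing kernel W(r, h) in 1, 2 or 3 dimensions."""
    sigma = {1: 2.0 / 3.0, 2: 10.0 / (7.0 * np.pi), 3: 1.0 / np.pi}[dim]
    q = np.asarray(r, dtype=float) / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma / h**dim * w

# Number-density estimate at particle i: n_i = sum_j W(|r_i - r_j|, h).
# Basing the SPH equations on n_i rather than on mass density is what the
# record above credits with avoiding artificial surface tension between
# particles of different (composition-dependent) masses.
```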
4&mSzTN | DOTS and DOTS-Plus | Multidrug-resistant tuberculosis is already a global pandemic, with focal hot spots of ongoing transmission. Although DOTS (directly observed treatment, short course) chemotherapy is the goal of global tuberculosis control, short-course chemotherapy will not cure multidrug-resistant tuberculosis. In settings of high transmission of multidrug-resistant tuberculosis, DOTS plus (a complementary DOTS-based strategy with provisions for treating multidrug-resistant tuberculosis) is warranted. DOTS-plus project implementation to date reveals important clinical, epidemiological, and economic lessons. Community-based strategies designed to enhance local capacity are cost effective and make it possible to meet new medical challenges. | [
"dots",
"dots-plus",
"multidrug-resistant tuberculosis",
"public health",
"pan-resistant tb",
"transnational case finding"
] | [
"P",
"P",
"P",
"U",
"U",
"U"
] |
-cpdgzc | Match-bounds revisited | The use of automata techniques to prove the termination of string rewrite systems and left-linear term rewrite systems is advocated by Geser et al. in a recent sequence of papers. We extend their work to non-left-linear rewrite systems. The key to this extension is the introduction of so-called raise rules and the use of tree automata that are not quite deterministic. Furthermore, to increase the applicability of the method we show how it can be incorporated into the dependency pair framework. To achieve this we introduce two new enrichments which take the special properties of dependency pair problems into account. (C) 2009 Elsevier Inc. All rights reserved. | [
"match-bounds",
"termination",
"term rewriting",
"tree automata",
"dependency pairs",
"automation"
] | [
"P",
"P",
"P",
"P",
"P",
"U"
] |
-HvinHz | A Haar wavelet approach to compressed image quality measurement | The traditional mean-squared-error and peak-signal-to-noise-ratio error measures are mainly focused on the pixel-by-pixel difference between the original and compressed images. Such metrics are improper for subjective quality assessment, since human perception is very sensitive to specific correlations between adjacent pixels. In this work, we explore the Haar wavelet to model the space-frequency localization property of human visual system (HVS) responses. It is shown that the physical contrast in different resolutions can be easily represented in terms of wavelet coefficients. By analyzing and modeling several visual mechanisms of the HVS with the Haar transform, we develop a new subjective fidelity measure which is more consistent with human observation experience. (C) 2000 Academic Press. | [
"human visual system (hvs)",
"haar transform",
"image fidelity assessment",
"compression artifact measure",
"wavelet transform"
] | [
"P",
"P",
"R",
"M",
"R"
] |
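For readers who want to see the decomposition behind the Haar-based measure above, here is a single-level 2D Haar split into approximation and detail sub-bands (a generic sketch; the paper's HVS weighting of these coefficients is not reproduced):

```python
import numpy as np

def haar2d_level(img):
    """One level of the 2D Haar transform; img must have even height and width.
    Returns (LL, LH, HL, HH) sub-bands."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4.0   # average: coarse approximation
    lh = (a - b + c - d) / 4.0   # detail along one axis
    hl = (a + b - c - d) / 4.0   # detail along the other axis
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh
```

Applying this recursively to the LL band gives the multi-resolution contrast representation the abstract refers to.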
1ZiqvPR | Statistical extraction and modeling of inductance considering spatial correlation | In this paper, we present a novel method for statistical inductance extraction and modeling for interconnects considering process variations. The new method, called statHenry, is based on the collocation-based spectral stochastic method where orthogonal polynomials are used to represent the statistical processes. The coefficients of the partial inductance orthogonal polynomial are computed via the collocation method where a fast multi-dimensional Gaussian quadrature method is applied with sparse grids. To further improve the efficiency of the proposed method, a random variable reduction scheme is used. Given the interconnect wire variation parameters, the resulting method can derive the parameterized closed form of the inductance value. We show that both partial and loop inductance variations can be significant given the width and height variations. This new approach can work with any existing inductance extraction tool to extract the variational partial and loop inductance or impedance. Experimental results show that our method is orders of magnitude faster than the Monte Carlo method for several practical interconnect structures. | [
"statistical",
"spatial correlation",
"inductance extraction",
"process variation"
] | [
"P",
"P",
"P",
"P"
] |
-sLHchk | Evolutionary Design of Both Topologies and Parameters of a Hybrid Dynamical System | This paper investigates the issue of evolutionary design of open-ended plants for hybrid dynamical systems, i.e., both their topologies and parameters. Hybrid bond graphs (HBGs) are used to represent dynamical systems involving both continuous and discrete system dynamics. Genetic programming, with some special mechanisms incorporated, is used as a search tool to explore the open-ended design space of hybrid bond graphs. Combination of these two tools, i.e., HBGs and genetic programming, leads to an approach called HBGGP that can automatically generate viable design candidates of hybrid dynamical systems that fulfill predefined design specifications. A comprehensive investigation of a case study of DC-DC converter design demonstrates the feasibility and effectiveness of the HBGGP approach. Important characteristics of the approach are also discussed, with some future research directions pointed out. | [
"evolutionary design",
"bond graphs",
"genetic programming",
"automated design",
"hybrid mechatronic systems"
] | [
"P",
"P",
"P",
"M",
"M"
] |
2ng&Xwy | Mesh-size-objective XFEM for regularized continuous-discontinuous transition | The focus is on a Regularized eXtended Finite Element Model (REXFEM) for modeling the transition from a local continuum damage model to a model with an embedded cohesive discontinuity. The discontinuity surface and the displacement jump are replaced by a volume, whose width depends on a regularization length, and by a regularized jump function, respectively. By exploiting the property that the stress-strain work spent within the regularized discontinuity, for vanishing regularization, is equivalent to the traction-separation work dissipated in a zero-width discontinuity surface, a mesh-size independent transition from the continuous displacement regime to the discontinuous displacement regime is obtained. Sub-elemental enrichment is considered, leading to a localization band restricted to a single layer of finite elements, like in the smeared-crack approach. Therefore, a comparison in the two-dimensional case with a literature model and with a commercial code which are based on the smeared-crack approach is presented. | [
"continuousdiscontinuous transition",
"regularized xfem",
"concrete-like bulk",
"mesh independence"
] | [
"P",
"R",
"U",
"M"
] |
-m4BUWo | Building Different Mouse Models for Human MS | Multiple sclerosis (MS) is a clinically and pathologically heterogeneous inflammatory/demyelinating disease of the central nervous system (CNS). Many patients first present with isolated optic neuritis. In some variants of MS, like Devic's disease or neuromyelitis optica (NMO), lesions are predominantly found in the optic nerves and spinal cord but not in the brain. The immunological bases of the different forms of MS are unknown. Here, we summarize our published findings on two mouse models. | [
"ms",
"optic neuritis",
"devic's disease",
"eae",
"mog",
"t cells",
"b cells",
"2d2",
"th"
] | [
"P",
"P",
"P",
"U",
"U",
"U",
"U",
"U",
"U"
] |
1NUKt7b | Accurate prediction of substrate parasitics in heavily doped CMOS processes using a calibrated boundary element solver | This paper presents an automated methodology for calibrating the doping profile and accurately predicting substrate parasitics with boundary element solvers. The technique requires fabrication of only a few test structures and results in an accurate three-layered approximation of a heavily doped epitaxial silicon substrate. Using this approximation, the extracted substrate resistances are accurate to within 10% of measurements. The calibrated parasitic extractor results in good agreement between simulations and measurements for substrate noise coupling in fabricated test circuits. | [
"boundary element solver",
"substrate noise coupling",
"green's function",
"integrated circuit noise",
"substrate parasiticm extraction"
] | [
"P",
"P",
"U",
"M",
"M"
] |
2SGE:-F | H-infinity Fuzzy Control for Systems With Repeated Scalar Nonlinearities and Random Packet Losses | This paper is concerned with the H-infinity fuzzy control problem for a class of systems with repeated scalar nonlinearities and random packet losses. A modified Takagi-Sugeno (T-S) fuzzy model is proposed in which the consequent parts are composed of a set of discrete-time state equations containing a repeated scalar nonlinearity. Such a model can describe some well-known nonlinear systems such as recurrent neural networks. The measurement transmission between the plant and controller is assumed to be imperfect and a stochastic variable satisfying the Bernoulli random binary distribution is utilized to represent the phenomenon of random packet losses. Attention is focused on the analysis and design of H-infinity fuzzy controllers with the same repeated scalar nonlinearities such that the closed-loop T-S fuzzy control system is stochastically stable and preserves a guaranteed H-infinity performance. Sufficient conditions are obtained for the existence of admissible controllers, and the cone complementarity linearization procedure is employed to cast the controller design problem into a sequential minimization one subject to linear matrix inequalities, which can be readily solved by using standard numerical software. Two examples are given to illustrate the effectiveness of the proposed design method. | [
"repeated scalar nonlinearity",
"random packet losses",
"diagonally dominant matrix",
"fuzzy systems",
"h-infinity control",
"linear matrix inequality (lmi)"
] | [
"P",
"P",
"M",
"R",
"R",
"M"
] |
4zV7iV4 | Relevant Information and Relevant Questions: Comment on Floridi's Understanding Epistemic Relevance | Floridi's chapter on relevant information bridges the analysis of being informed with the analysis of knowledge as relevant information that is accounted for by analysing subjective or epistemic relevance in terms of the questions that an agent might ask in certain circumstances. In this paper, I scrutinise this analysis, identify a number of problems with it, and finally propose an improvement. By way of epilogue, I offer some more general remarks on the relation between (bounded) rationality, the need to ask the right questions, and the ability to ask the right questions. | [
"questions",
"(bounded) rationality",
"subjective relevance",
"semantic information",
"erotetic logic"
] | [
"P",
"P",
"R",
"M",
"U"
] |
-bpM5H9 | Simple negotiation schemes for agents with simple preferences: sufficiency, necessity and maximality | We investigate the properties of an abstract negotiation framework where agents autonomously negotiate over allocations of indivisible resources. In this framework, reaching an allocation that is optimal may require very complex multilateral deals. Therefore, we are interested in identifying classes of valuation functions such that any negotiation conducted by means of deals involving only a single resource at a time is bound to converge to an optimal allocation whenever all agents model their preferences using these functions. In the case of negotiation with monetary side payments amongst self-interested but myopic agents, the class of modular valuation functions turns out to be such a class. That is, modularity is a sufficient condition for convergence in this framework. We also show that modularity is not a necessary condition. Indeed, there can be no condition on individual valuation functions that would be both necessary and sufficient in this sense. Evaluating conditions formulated with respect to the whole profile of valuation functions used by the agents in the system would be possible in theory, but turns out to be computationally intractable in practice. Our main result shows that the class of modular functions is maximal in the sense that no strictly larger class of valuation functions would still guarantee an optimal outcome of negotiation, even when we permit more general bilateral deals. We also establish similar results in the context of negotiation without side payments. | [
"negotiation",
"multiagent resource allocation"
] | [
"P",
"M"
] |
5-YBC4T | Generating all the acyclic orientations of an undirected graph | Let G be an undirected graph with n vertices, m edges and alpha acyclic orientations. We describe an algorithm for finding all these orientations in overall time O((n + m)alpha) and delay complexity O(n(n + m)). The space required is O(n + m). (C) 1999 Published by Elsevier Science B.V. All rights reserved. | [
"acyclic orientations",
"graphs",
"algorithms"
] | [
"P",
"P",
"P"
] |
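The enumeration task itself can be sketched directly: orient edges one at a time and prune any partial orientation that already contains a directed cycle, so each acyclic orientation is produced exactly once. A naive Python version for intuition (the paper's algorithm is more refined, achieving O((n + m)alpha) overall time with O(n(n + m)) delay):

```python
def has_cycle(n, arcs):
    """DFS-based cycle detection on the directed graph given by arcs."""
    adj = [[] for _ in range(n)]
    for u, v in arcs:
        adj[u].append(v)
    state = [0] * n  # 0 = unvisited, 1 = on stack, 2 = done
    def dfs(u):
        state[u] = 1
        for w in adj[u]:
            if state[w] == 1 or (state[w] == 0 and dfs(w)):
                return True
        state[u] = 2
        return False
    return any(state[v] == 0 and dfs(v) for v in range(n))

def acyclic_orientations(n, edges):
    """Yield every acyclic orientation of the undirected graph (n, edges)."""
    def extend(i, arcs):
        if i == len(edges):
            yield tuple(arcs)
            return
        u, v = edges[i]
        for arc in ((u, v), (v, u)):
            arcs.append(arc)
            if not has_cycle(n, arcs):   # prune partial orientations with a cycle
                yield from extend(i + 1, arcs)
            arcs.pop()
    yield from extend(0, [])
```

For the triangle (n = 3, edges (0,1), (1,2), (0,2)) this yields the expected 6 acyclic orientations.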
1ki8Q:m | Modeling material responses by arbitrary Lagrangian Eulerian formulation and adaptive mesh refinement method | In this paper we report an efficient numerical method combining a staggered arbitrary Lagrangian Eulerian (ALE) formulation with the adaptive mesh refinement (AMR) method for materials modeling, including elastic-plastic flows, material failure, and fragmentation predictions. Unlike traditional AMR applied on fixed domains, our investigation focuses on the application to moving and deforming meshes resulting from Lagrangian motion. We give details of this numerical method with a capability to simulate elastic-plastic flows and predict material failure and fragmentation; the main focus of this paper is to create an efficient method which combines the ALE and AMR methods to simulate the dynamics of material responses with deformation and failure mechanisms. The interlevel operators and boundary conditions for these problems in AMR meshes have been investigated, and error indicators to locate material deformation and failure regions are studied. The method has been applied to several test problems, and the solutions of the problems obtained with the ALE-AMR method are reported. Parallel performance and software design for the ALE-AMR method are also discussed. | [
"arbitrary lagrangian eulerian",
"adaptive mesh refinement",
"elasticplastic flow",
"fragmentation",
"johnsoncook failure model"
] | [
"P",
"P",
"P",
"P",
"M"
] |
4pnJ41z | a physically realistic framework for the generation of high-level animation controllers | Artificial life techniques can be seen as an evolution of animation techniques, shifting most of the low-level control tasks traditionally performed by an animator to control systems, which can be manually (task-driven animation) or automatically (behavioural animation) handled. In this last situation, a rich environment from which interesting strategies can be extracted by evolutive creatures is needed. We will describe here a simulation framework that we developed for this purpose, and show how it can be used as a starting point for the development of dynamic behavioural systems. | [
"animation",
"behaviour",
"simulation",
"artificial life.",
"high-level control"
] | [
"P",
"P",
"P",
"R",
"R"
] |
3:oTe9u | GC Assertions: Using the Garbage Collector to Check Heap Properties | This paper introduces GC assertions, a system interface that programmers can use to check for errors, such as data structure invariant violations, and to diagnose performance problems, such as memory leaks. GC assertions are checked by the garbage collector, which is in a unique position to gather information and answer questions about the lifetime and connectivity of objects in the heap. By piggybacking on existing garbage collector computations, our system is able to check heap properties with very low overhead-around 3% of total execution time-low enough for use in a deployed setting. We introduce several kinds of GC assertions and describe how they are implemented in the collector. We also describe our reporting mechanism, which provides a complete path through the heap to the offending objects. We report results on both the performance of our system and the experience of using our assertions to find and repair errors in real-world programs. | [
"performance",
"memory leaks",
"reliability",
"experimentation",
"managed languages",
"garbage collection"
] | [
"P",
"P",
"U",
"U",
"U",
"M"
] |
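The GC assertions above run inside the collector itself; purely as a rough analogue, CPython's gc and weakref modules can express the simplest "assert-dead" style of check (my illustration, not the system described in the record):

```python
import gc
import weakref

def assert_dead(obj_ref):
    """Assert that the object behind a weak reference is collectable.
    obj_ref: a weakref.ref created before the last strong reference died."""
    gc.collect()                 # force a full collection
    leaked = obj_ref()
    if leaked is not None:
        # Crude stand-in for the paper's heap-path reporting:
        # show the direct referrers keeping the object alive.
        raise AssertionError(f"object leaked; referrers: {gc.get_referrers(leaked)!r}")

class Node:
    pass

n = Node()
r = weakref.ref(n)
del n            # the programmer claims the object is now garbage
assert_dead(r)   # passes: the node was collected
```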
LQJRQAA | Middle-agents organized in fault tolerant and fixed scalable structure | Agents in a multi-agent system usually use middle-agents to locate service providers. Since one central middle-agent represents a single point of failure and a communication bottleneck in the system, a structure of middle-agents is used to overcome these issues. We designed and implemented a structure of middle-agents called dynamic hierarchical teams that has a user-defined level of fault tolerance and is, moreover, fixed scalable. We prove that a structure that has teams of size lambda has vertex and edge connectivity equal to lambda, i.e., the structure stays connected despite the failure of lambda - 1 middle-agents or lambda - 1 communication channels. We focus on social knowledge management, describing several methods that can be used for social knowledge propagation and search in this structure. We also test the fault tolerance of this structure in practical experiments. | [
"fault tolerance",
"scalability",
"multi-agent systems"
] | [
"P",
"P",
"P"
] |
2cuKH81 | Collaborative argumentation and justifications: A statistical discourse analysis of online discussions | As justifications (such as evidence or explanations) are central to productive argumentation, this study examines how the discourse moves of students engaged in collaborative learning are related to their justifications during computer mediated communication (CMC). Twenty-four students posted 131 messages on Knowledge Forum, an online collaborative learning environment. These messages were coded and analyzed with a multiple-outcome, multilevel logit, vector autoregression. When students disagreed or made claims, they were more likely to use evidence. After a student made an alternative claim, the next student posting a message was less likely to use evidence. When students made claims, disagreed, disagreed with others' justifications, or read more messages, they were more likely to use explanations. Boys were more likely than girls to make new claims. Together, these results suggest that discourse moves and sequences are linked to justifications on online forums. | [
"argumentation",
"justification",
"discourse analysis",
"collaborative learning",
"computer mediated communication"
] | [
"P",
"P",
"P",
"P",
"P"
] |
2aETBJC | An investigation of pixel resonance phenomenon in color imaging: the multiple interpretations of people with color vision deficiency | Multiple interpretations of behavior in human vision lead us to dissimilar comprehension. The perceivable vision of normal people and dichromats was simulated by confusion lines and co-punctal points on the CIE chromaticity diagram to interpret the concept of multiple interpretations. In addition, the new principle of pixel resonance (PR) was proposed to aid dichromats in recognizing the correct objects against a variegated background. In this study, the principle of PR, which is mainly derived from stochastic resonance (SR) theory, is briefly introduced at the opening of the paper. A Monte Carlo simulation of random walks is a common method used to realize the SR conception by simulating an experiment of the photon casting process. This process is analogous to how people prioritize and understand certain parts of a scene or an image. The concept of PR applied to intensity imaging is introduced in Section 2. Next, an extension of the PR concept is applied to color imaging in Section 3. In addition, we propose a creative method to simulate an Ishihara pseudoisochromatic test plate using three procedures: circle pattern construction, color sampling and luminance placement. The visual simulations of dichromats and normal people were realized by confusion lines and co-punctal points to obtain multiple interpretations. Finally, the PR phenomenon on the simulated Ishihara pseudoisochromatic test plates is discussed. The results of the current study show that the PR phenomenon can be meaningfully observed for the perceivable vision of normal people and tritanopes, but not for protanopes and deuteranopes. In conclusion, the application of PR presents meaningful results for tritanopes. This research can be applied in clinics to assist people with color vision deficiency in recognizing the correct number. | [
"pixel resonance",
"multiple interpretations",
"color vision deficiency",
"stochastic resonance",
"monte carlo",
"random walk",
"ishihara pseudoisochromatic test plates"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P"
] |
2Z4GMBe | An Approximate Calculation of Ratio of Normal Variables and Its Application in Analog Circuit Fault Diagnosis | The challenging tolerance problem in fault diagnosis of analog circuits remains unsolved. To diagnose soft faults under tolerance effectively, a novel diagnosis approach based on the ratio of normal variables and the slope fault model was proposed. Firstly, the approximate distribution function of the ratio of normal variables was deduced and the basic approximation conditions were given to improve the approximation accuracy. The conditionally monotonous and continuous mapping between the ratio of normal variables and the standard normal variable was proved. Based on the aforementioned proved mapping, estimation formulas for the range of the ratio of normal variables were deduced. Then, the principle of the slope fault model for linear analog circuits was presented. After a contrastive analysis of the typical methods of handling tolerance based on the slope fault model, the ratio of normal variables and the slope fault model were combined, and a test-node selection algorithm based on the basic approximation conditions of the ratio of normal variables was designed, by which the computation can be reduced greatly. Finally, experiments were done to illustrate the proposed approach and demonstrate its effectiveness. | [
"ratio of normal variables",
"analog circuit",
"soft-fault",
"slope fault model"
] | [
"P",
"P",
"P",
"P"
] |
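The "monotonous and continuous mapping" between a ratio of normals and a standard normal mentioned above corresponds, in the classical literature, to the Geary-Hinkley transformation. A quick Monte Carlo check of that approximation under assumed parameters (an illustration, not the paper's derivation; it assumes independent X and Y with Y far from zero):

```python
import numpy as np

rng = np.random.default_rng(0)
mx, my, sx, sy = 10.0, 20.0, 1.0, 1.5   # assumed parameters; Y stays far from 0
x = rng.normal(mx, sx, 100_000)
y = rng.normal(my, sy, 100_000)
r = x / y

# Geary-Hinkley transform for independent X, Y: t should be ~ N(0, 1)
t = (my * r - mx) / np.sqrt(sy**2 * r**2 + sx**2)
print(round(t.mean(), 3), round(t.std(), 3))   # expect values near 0 and 1
```

The transform follows from P(X/Y <= r) = P(X - rY <= 0) when Y is almost surely positive, after standardizing X - rY.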
jMZpC6V | two-handed interaction on a tablet display | A touchscreen can be overlaid on a tablet computer to support asymmetric two-handed interaction in which the preferred hand uses a stylus and the non-preferred hand operates the touchscreen. The result is a portable device that allows both hands to interact directly with the display, easily constructed from commonly available hardware. The method for tracking the independent motions of both hands is described. A wide variety of existing two-handed interaction techniques can be used on this platform, as well as some new ones that exploit the reconfigurability of touchscreen interfaces. Informal tests show that, when the non-preferred hand performs simple actions, users find direct manipulation on the display with both hands to be comfortable, natural, and efficient. | [
"two-handed interaction",
"tablet",
"touchscreen",
"tablet computing",
"computation",
"support",
"stylus",
"portability",
"device",
"hardware",
"method",
"tracking",
"platform",
"exploit",
"reconfigurability",
"interfaces",
"informal",
"action",
"users",
"direct manipulating",
"efficiency",
"touch-sensitive screens",
"commodity hardware",
"asymmetric bimanual interaction"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"U",
"M",
"M"
] |
-N-Bj92 | QUANTUM CORRELATIONS DYNAMICS OF QUASI-BELL CAT STATES | A model of the dynamics of quantum correlations of two-mode quasi-Bell cat states, based on Glauber coherent states, is considered. The analytic expressions for pairwise entanglement of formation, quantum discord and its geometrized variant are explicitly derived. We analyze the distribution of quantum correlations between the two modes and the environment. We show that, in contrast with squared concurrence, entanglement of formation, quantum discord and geometric quantum discord do not follow the property of monogamy, except in some particular situations that we discuss. | [
"entanglement of formation",
"quantum discord",
"monogamy relation",
"bell states"
] | [
"P",
"P",
"M",
"M"
] |
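For context, one common way to write a two-mode quasi-Bell cat state built from Glauber coherent states is the following (a plausible reading of the record above, not necessarily the paper's exact parametrization):

```latex
% Assumed convention for a two-mode quasi-Bell cat state:
\[
  |\psi_\pm\rangle = N_\pm \left( |\alpha\rangle \otimes |\beta\rangle
                     \pm |{-\alpha}\rangle \otimes |{-\beta}\rangle \right),
  \qquad
  N_\pm = \left[ 2 \pm 2\, e^{-2(|\alpha|^2 + |\beta|^2)} \right]^{-1/2},
\]
% using the coherent-state overlap \langle\alpha|{-\alpha}\rangle = e^{-2|\alpha|^2}.
```

The non-orthogonality of the coherent components is what makes these states "quasi"-Bell rather than exact Bell states.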
98Kwc4p | Preparation of one-dimensional photonic crystal with variable period by using ultra-high vacuum electron beam evaporation | A one-dimensional photonic crystal with variable period has been prepared using the method of ultra-high vacuum electron beam evaporation. The advantage of this method is that a variable period is obtained easily. The transmittance of the photonic crystal has also been studied. | [
"photonic crystal",
"variable period",
"ultra-high-vacuum electron beam evaporation"
] | [
"P",
"P",
"M"
] |
2VxfbvB | The effect of complex and negative indices in the transmission of electromagnetic waves through superlattices | We apply the transfer matrix method to study optical transmission properties of multilayered structures with complex and negative indices. We show that the true refraction angle of electromagnetic waves in complex permittivity media does not change sign at low frequencies. Based on the invariance of the slab's scattering amplitudes, we show that it is indifferent whether the negative sign is conveyed by the wave vector k or the admittance η. We show also that, in the scattering approach language, the change of sign of ε and μ leads to a change in the transmission amplitude phase from φt to -φt. We present some results for metallic slabs and superlattices containing right- and left-handed materials. | [
"wave propagation",
"complex permitivities",
"left-handed media"
] | [
"M",
"M",
"R"
] |
3fbAXhX | augmenting the learning experience with virtual simulation systems | The described half-day workshop introduces all participants, in a very interactive way, to the activities of the ImREAL FP7 project, which aims to augment the learning experience with the use of highly adaptive simulators. Real-world activities will be integrated with the simulation environment by means of intelligent semantic services and a novel approach to digital and interactive storytelling. Researchers as well as practitioners are invited to interact with the simulation under development, discuss it, and participate in the definition of potential further developments. | [
"adaptive simulator",
"semantic services",
"augmented learning experience",
"dialogue simulation",
"user model"
] | [
"P",
"P",
"R",
"M",
"U"
] |
2ukCX59 | SPT is optimally competitive for uniprocessor flow | We show that the Shortest Processing Time (SPT) algorithm is (Delta + 1)/2-competitive for nonpreemptive uniprocessor total flow time with release dates, where Delta is the ratio between the longest and shortest job lengths. This is best possible for a deterministic algorithm and improves on the (Delta + 1)-competitive ratio shown by Epstein and van Stee using different methods. (C) 2004 Elsevier B.V. All rights reserved. | [
"algorithms",
"on-line algorithms",
"scheduling"
] | [
"P",
"M",
"U"
] |
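The rule being analyzed is easy to state operationally: whenever the machine falls idle, start the shortest job released so far. A direct implementation that returns the total flow time (release-date-aware, non-preemptive; my own sketch):

```python
import heapq

def spt_total_flow_time(jobs):
    """Non-preemptive SPT with release dates. jobs = [(release, length), ...].
    Returns total flow time, i.e. sum of (completion - release)."""
    jobs = sorted(jobs)                  # by release date
    ready, i, t, flow = [], 0, 0, 0
    n = len(jobs)
    while i < n or ready:
        if not ready and t < jobs[i][0]:
            t = jobs[i][0]               # idle until the next release
        while i < n and jobs[i][0] <= t:
            heapq.heappush(ready, (jobs[i][1], jobs[i][0]))  # key = job length
            i += 1
        length, release = heapq.heappop(ready)
        t += length
        flow += t - release
    return flow
```

For jobs [(0, 3), (1, 1)] this gives flow time 6: the length-3 job finishes at time 3, and the length-1 job waits and finishes at time 4.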
32b9rtA | Testing Convexity Properties of Tree Colorings | A coloring of a graph is convex if it induces a partition of the vertices into connected subgraphs. Besides being an interesting property from a theoretical point of view, tests for convexity have applications in various areas involving large graphs. We study the important subcase of testing for convexity in trees. This problem is linked, among other possible applications, with the study of phylogenetic trees, which are central in genetic research, and are used in linguistics and other areas. We give a 1-sided, non-adaptive, epsilon-test with query complexity O(k/epsilon^2) and time complexity O(kn/epsilon). For both our convexity and quasi-convexity tests, we show that, assuming that a query takes constant time, the time complexity can be reduced to a constant independent of n if we allow a preprocessing stage of time O(n) and O(n^2), respectively. Finally, we show how to test for a variation of convexity and quasi-convexity where the maximum number of connectivity classes of each color is allowed to be a constant value other than 1. | [
"phylogenetic trees",
"property testing",
"convex coloring",
"sublinear algorithms",
"massively parameterized",
"graph algorithms"
] | [
"P",
"R",
"R",
"U",
"U",
"M"
] |
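Convexity itself has a simple operational form: every color class must induce a connected subgraph. An exact checker (plain BFS per color class, for intuition only; the point of the record above is testing this property sublinearly, which this sketch does not do):

```python
from collections import defaultdict, deque

def is_convex(adj, color):
    """adj: node -> iterable of neighbors; color: node -> color label.
    Returns True iff every color class induces a connected subgraph."""
    nodes_by_color = defaultdict(list)
    for v, c in color.items():
        nodes_by_color[c].append(v)
    for c, nodes in nodes_by_color.items():
        seen = {nodes[0]}
        q = deque([nodes[0]])
        while q:                          # BFS restricted to color c
            u = q.popleft()
            for w in adj[u]:
                if color[w] == c and w not in seen:
                    seen.add(w)
                    q.append(w)
        if len(seen) != len(nodes):
            return False
    return True
```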
1x2nT:n | Decision tool for the early diagnosis of trauma patient hypovolemia | We present a classifier for use as a decision assist tool to identify a hypovolemic state in trauma patients during helicopter transport to a hospital, when reliable acquisition of vital-sign data may be difficult. The decision tool uses basic vital-sign variables as input into linear classifiers, which are then combined into an ensemble classifier. The classifier identifies hypovolemic patients with an area under a receiver operating characteristic curve (AUC) of 0.76 (standard deviation 0.05, for 100 randomly-reselected patient subsets). The ensemble classifier is robust; classification performance degrades only slowly as variables are dropped, and the ensemble structure does not require identification of a set of variables for use as best-feature inputs into the classifier. The ensemble classifier consistently outperforms best-features-based linear classifiers (the classification AUC is greater, and the standard deviation is smaller, p<0.05). The simple computational requirements of ensemble classifiers will permit them to function in small fieldable devices for continuous monitoring of trauma patients. | [
"hypovolemia",
"decision assist",
"vital-signs",
"linear classifier",
"ensemble classifier",
"monitoring",
"hemorrhage",
"physiology"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"U",
"U"
] |
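A rough analogue of the ensemble design described above: many linear classifiers, each trained on a random subset of vital-sign variables, with their probabilities averaged (placeholder data and assumed feature counts; not the paper's exact construction or dataset):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))          # hypothetical vital-sign matrix
y = rng.integers(0, 2, 500)            # hypothetical hypovolemia labels

members = []
for _ in range(25):
    feats = rng.choice(X.shape[1], size=3, replace=False)  # random variable subset
    clf = LogisticRegression().fit(X[:, feats], y)
    members.append((feats, clf))

def ensemble_score(x_new):
    """Average member probabilities; degrades gracefully if variables are noisy."""
    return np.mean([clf.predict_proba(x_new[:, feats])[:, 1]
                    for feats, clf in members])

print(ensemble_score(X[:1]))           # score for one (placeholder) patient
```

Averaging over variable subsets is one plausible reason such an ensemble degrades only slowly as inputs are dropped, as the record reports.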
22rcmF: | Design and performance analysis on adaptive reservation-assisted collision resolution protocol for WLANs | In the conventional IEEE 802.11 medium access control protocol, the distributed coordination function is designed for the wireless stations (WSs) to perform channel contention within wireless local area networks (WLANs). Packet collision is considered one of the major issues within this type of contention-based scheme, as it can severely degrade network performance for the WLANs. Research work has been conducted to modify the random backoff mechanism in order to alleviate the packet collision problem while the WSs are contending for channel access. However, most of the existing work can only provide limited throughput enhancement under a specific number of WSs within the network. In this paper, an adaptive reservation-assisted collision resolution (ARCR) protocol is proposed to reduce the packet collisions resulting from random access schemes. With its adaptable reservation period, the contention-based channel access can be adaptively transformed into a reservation-based system if there are pending packets to be transmitted between the WSs and the access point. An analytical model is derived for the proposed ARCR scheme in order to evaluate and validate its throughput performance. It can be observed from both analytical and simulation results that the proposed protocol outperforms existing schemes with enhanced channel utilization and network throughput. | [
"medium access control",
"wireless local area network (wlan)",
"random backoff mechanism",
"ieee 802.11 standards",
"reservation-based algorithm"
] | [
"P",
"P",
"P",
"M",
"M"
] |
-kn1NKj | Jumping emerging patterns with negation in transaction databases - Classification and discovery | This paper examines jumping emerging patterns with negation (JEPNs), i.e. JEPs that can contain negated items. We analyze the basic relations between these patterns and classical JEPs in transaction databases and local reducts from the rough set theory. JEPNs provide an interesting type of knowledge and can be successfully used for classification purposes. By analogy to JEP-Classifier, we consider negJEP-Classifier and JEPN-Classifier and compare their accuracy. The results are contrasted with changes in rule set complexity. In connection with the problem of JEPN discovery, JEP-Producer and rough set methods are examined. (C) 2007 Elsevier Inc. All rights reserved. | [
"jumping emerging pattern",
"negation",
"transaction database",
"local reduct",
"rough set",
"contradictory database",
"extended database"
] | [
"P",
"P",
"P",
"P",
"P",
"M",
"M"
] |
1gf3igh | Analysis of stiffness and elastic deformation of a 2(SP+SPR+SPU) serial-parallel manipulator | A 2(SP+SPR+SPU) manipulator is a serial-parallel manipulator, which includes an upper manipulator and a lower manipulator. Its stiffness and elastic deformation are studied systematically in this paper. Firstly, a 2(SP+SPR+SPU) manipulator is constructed and its characteristics are analyzed. Secondly, the formulae for solving the elastic deformation and the compliance matrix of the active legs are derived, and the elastic deformation and the total stiffness matrix of this manipulator are solved and analyzed. Finally, a finite element model of this manipulator is constructed and its elastic deformations are solved. The analytic solutions for the elastic deformations of this manipulator coincide with those of its finite element model. | [
"elastic deformation",
"serialparallel manipulator",
"stiffness matrix",
"kinematics",
"statics"
] | [
"P",
"P",
"P",
"U",
"U"
] |
2UcNVWn | exploratory analysis of deviations from formal procedures during preoperative anaesthetic evaluation | Motivation --- The aim of this paper is to study deviations from formal procedures during preoperative anaesthetic evaluation and to investigate their possible association with the assumptions that anaesthesiologists make during the evaluation. The findings of this analysis can be applied to the identification of requirements and limitations for the standardisation of the task through supporting tools. Research approach --- Records of 100 consecutive preanaesthesia evaluations for elective surgery in a private hospital were retrospectively analysed. In addition, field observations were carried out in order to guide data collection and support the formulation of an initial framework for organizing our findings. This way, data analysis and fieldwork were interwoven, feeding each other. Findings/Design --- The review of 100 preanaesthesia evaluation records revealed that a significant number of them deviated from the normative course of action. Specifically, contrary to the stipulations of the prescribed procedure, in 26% of our cases the evaluation was performed without the preoperative laboratory test results being available. Furthermore, the form provided for the documentation of the evaluation was scarcely filled in (75% of the forms had fewer than 30 of the 83 total fields completed). At the same time, free-text fields were extensively used, spilling their content over into other fields in 15% of the cases. Our findings are consistent with prior research which indicates that routine laboratory tests are not critical for the evaluation of the patient. Furthermore, the frequently completed fields coincide with the main findings of previous research on the opinions of anaesthesiologists regarding what variables they consider important. A possible explanation for the observed deviations from formal procedures and the low utilisation of standardized forms could be that anaesthesiologists are engaged in a thinking-acting process rather than in a process of information collection directed by a protocol. Standardisation efforts through supporting tools ought to be non-obstructive to this process. Research limitations/Implications --- Our research is limited by the modest sample size of 100 cases and input from a single hospital. Nevertheless, the questions raised and the initial hypotheses formulated can be further tested with a larger sample size and different medical establishments. Originality/Value --- Anaesthesiologists have been leaders in applying lessons from Human Factors and Cognitive Ergonomics, but most effort has been directed to the development of support tools and decision aids for the operating theatre. The research presented here aims at extending those lessons to preanaesthesia-related tasks. Take away message --- Deviations from the formal procedure during preoperative anaesthetic evaluation can be used for the identification of requirements and limitations for the standardisation of the task through supporting tools. | [
"standardisation",
"preanaesthesia evaluation",
"professional practice",
"anaesthesia",
"patient record"
] | [
"P",
"P",
"U",
"U",
"R"
] |
NEoHCwE | On finding multiple Pareto-optimal solutions using classical and evolutionary generating methods | In solving multi-objective optimization problems, evolutionary algorithms have been adequately applied to demonstrate that multiple and well-spread Pareto-optimal solutions can be found in a single simulation run. In this paper, we discuss and put together various different classical generating methods which are either quite well-known or are in oblivion due to publication in less accessible journals and some of which were even suggested before the inception of evolutionary methodologies. These generating methods specialize either in finding multiple Pareto-optimal solutions in a single simulation run or specialize in maintaining a good diversity by systematically solving a number of scalarizing problems. Most classical generating methodologies are classified into four groups mainly based on their working principles and one representative method from each group is chosen in the present study for a detailed discussion and for its performance comparison with a state-of-the-art evolutionary method. On visual comparisons of the efficient frontiers obtained for a number of two and three-objective test problems, the results bring out interesting insights about the strengths and weaknesses of these approaches. The results should motivate researchers to design hybrid multi-objective optimization algorithms which may be better than each of the individual methods. | [
"pareto-optimal solutions",
"generating methods",
"multiple objective programming",
"performance analysis",
"evolutionary multi-objective optimization (emo)"
] | [
"P",
"P",
"M",
"M",
"M"
] |
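The oldest of the classical generating methods is weighted-sum scalarization: sweep a weight vector and solve one single-objective problem per weight. A minimal sketch on a toy bi-objective problem (a generic textbook instance, not necessarily one of the four representative methods compared in the record):

```python
import numpy as np
from scipy.optimize import minimize

def f1(x): return x[0] ** 2
def f2(x): return (x[0] - 2.0) ** 2

front = []
for w in np.linspace(0.0, 1.0, 11):
    # One single-objective solve per weight vector (w, 1 - w)
    res = minimize(lambda x: w * f1(x) + (1 - w) * f2(x), x0=[1.0])
    front.append((float(f1(res.x)), float(f2(res.x))))

print(front)   # sampled points along the efficient frontier
```

Weighted sums recover only supported (convex-hull) Pareto points, one of the known weaknesses that motivates comparing several generating methods side by side.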
G-v:65z | An Approximation Algorithm for Binary Searching in Trees | We consider the problem of computing efficient strategies for searching in trees. As a generalization of the classical binary search for ordered lists, suppose one wishes to find a (unknown) specific node of a tree by asking queries to its arcs, where each query indicates the endpoint closer to the desired node. Given the likelihood of each node being the one searched, the objective is to compute a search strategy that minimizes the expected number of queries. Practical applications of this problem include file system synchronization and software testing. Here we present a linear time algorithm which is the first constant factor approximation for this problem. This represents a significant improvement over previous O(log n)-approximation. | [
"approximation algorithms",
"binary search",
"trees",
"partially ordered sets"
] | [
"P",
"P",
"P",
"M"
] |
1YC:wts | An agglomerative clustering algorithm using a dynamic k-nearest-neighbor list | In this paper, a new algorithm is developed to reduce the computational complexity of Ward's method. The proposed approach uses a dynamic k-nearest-neighbor list to avoid the determination of a cluster's nearest neighbor at some steps of the cluster merge. The double linked algorithm (DLA) can significantly reduce the computing time of the fast pairwise nearest neighbor (FPNN) algorithm by obtaining an approximate solution of hierarchical agglomerative clustering. In this paper, we propose a method to resolve the problem of a non-optimal solution for DLA while keeping the corresponding advantage of low computational complexity. The computational complexity of the proposed method DKNNA+FS (dynamic k-nearest-neighbor algorithm with a fast search) in terms of the number of distance calculations is O(N^2), where N is the number of data points. Compared to FPNN with a fast search (FPNN+FS), the proposed method using the same fast search algorithm (DKNNA+FS) can reduce the computing time by a factor of 1.90-2.18 for the data set from a real image. In comparison with FPNN+FS, DKNNA+FS can reduce the computing time by a factor of 1.92-2.02 using the data set generated from three images. Compared to DLA with a fast search (DLA+FS), DKNNA+FS can decrease the average mean square error by 1.26% for the same data set. | [
"agglomerative clustering",
"nearest neighbor",
"vector quantization"
] | [
"P",
"P",
"U"
] |
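For orientation, the naive O(N^3) pairwise-nearest-neighbor loop that FPNN and the proposed DKNNA accelerate looks like this with the Ward merge cost (an illustration of the baseline, not the paper's algorithm):

```python
import numpy as np

def ward_cost(ca, na, cb, nb):
    """Ward merge cost between clusters with centroids ca, cb and sizes na, nb."""
    return na * nb / (na + nb) * np.sum((ca - cb) ** 2)

def naive_pnn(points, k):
    """Repeatedly merge the globally cheapest pair until k clusters remain."""
    clusters = [(np.asarray(p, dtype=float), 1) for p in points]  # (centroid, size)
    while len(clusters) > k:
        best = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: ward_cost(*clusters[ij[0]], *clusters[ij[1]]))
        (ca, na), (cb, nb) = clusters[best[0]], clusters[best[1]]
        merged = ((na * ca + nb * cb) / (na + nb), na + nb)
        clusters = [c for idx, c in enumerate(clusters) if idx not in best] + [merged]
    return clusters
```

The expensive part is the global min over all pairs at every merge; maintaining (approximate) nearest-neighbor lists is exactly what the compared methods optimize.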
-EfwFVL | Texture image retrieval and image segmentation using composite sub-band gradient vectors | A new texture descriptor, called the CSG vector, is proposed for image retrieval and image segmentation in this paper. The descriptor can be generated by composing the gradient vectors obtained from the sub-images through a wavelet decomposition of a texture image. Using a database containing 2400 images cropped from a set of 150 types of textures selected from the Brodatz Album, we demonstrated that 93% efficacy can be achieved in image retrieval. Moreover, using CSG vectors as the texture descriptor for image segmentation generates very successful results for both synthesized and natural scene images. | [
"image retrieval",
"image segmentation",
"texture descriptor",
"csg vector",
"wavelet decomposition"
] | [
"P",
"P",
"P",
"P",
"P"
] |
4VmedgZ | Dynamic Neighbourhood Cellular Automata | We propose a definition of cellular automaton in which each cell can change its neighbourhood during a computation. This is done locally by looking not farther than neighbours of neighbours and the number of links remains bounded by a constant throughout. We suggest that dynamic neighbourhood cellular automata can serve as a theoretical model in studying algorithmic and computational complexity issues of ubiquitous computations. We illustrate our approach by giving an optimal, logarithmic time solution of the Firing Squad Synchronization problem in this setting. | [
"cellular automata",
"distributed algorithms",
"syncronization",
"firing squad problem"
] | [
"P",
"M",
"U",
"R"
] |
-HNTqfi | Query-biased summary generation assisted by query expansion | Query-biased summaries help users to identify which items returned by a search system should be read in full. In this article, we study the generation of query-biased summaries as a sentence ranking approach, and methods to evaluate their effectiveness. Using sentence-level relevance assessments from the TREC Novelty track, we gauge the benefits of query expansion to minimize the vocabulary mismatch problem between informational requests and sentence ranking methods. Our results from an intrinsic evaluation show that query expansion significantly improves the selection of short relevant sentences (5-13 words) by between 7% and 11%. However, query expansion does not lead to improvements for sentences of medium (14-20 words) and long (21-29 words) lengths. In a separate crowdsourcing study, we analyze whether a summary composed of sentences ranked using query expansion was preferred over summaries not assisted by query expansion, rather than assessing sentences individually. We found that participants chose summaries aided by query expansion around 60% of the time over summaries using an unexpanded query. We conclude that query expansion techniques can benefit the selection of sentences for the construction of query-biased summaries at the summary level rather than at the sentence ranking level. | [
"query expansion",
"automatic extracting"
] | [
"P",
"U"
] |
4U3NX:L | Effects of Methyllycaconitine and Related Analogues on Bovine Adrenal α3β4* Nicotinic Acetylcholine Receptors | Adrenal secretion and binding studies were performed using ring E analogues of methyllycaconitine to assess structural determinants affecting activity on bovine adrenal α3β4* nicotinic receptors. The most potent analogues are as potent as many inhibitors of adrenal secretion. Our data support the potential use of methyllycaconitine analogues to generate nicotinic receptor subtype-specific compounds. | [
"methyllycaconitine",
"adrenal ?3?4 nicotinic acetylcholine receptors",
"mla"
] | [
"P",
"R",
"U"
] |
2uzBwgc | Code reservation schemes at the forward link in WCDMA | We examine resource reservation schemes for the management of orthogonal variable spreading factor (OVSF) codes at the forward link of 3G mobile communications systems employing WCDMA. As in every multi-service network, calls with different rate requirements will perceive very dissimilar system performance at the forward link in 3G systems if no measures are taken and the channelization code tree is treated as a common pool of resources. Assuming that the traffic level for each class is known in advance, we introduce complete sharing (CS), complete partitioning (CP) and hybrid partitioning (HP) policies to manage the code tree. At the resource reservation level, we develop an efficient method to partition the available codes based on the offered traffic load of each class of calls and the size of the tree. The resulting partition is optimal in the sense that the maximum blocking probability across the different rate classes is minimized. At the call level, we use a real-time scheme to assign free codes to incoming requests, and evaluate its performance in terms of blocking probability per traffic class and utilization of codes in conjunction with the partitioning method used. It turns out that code blocking, which is encountered in this type of system, further deteriorates the unfairness conditions at the forward link. Our simulation results show that fair access to codes by different rate calls is assured more by CP and less by HP schemes, at the expense of slightly lower code utilization at medium to high loads, compared to the CS scheme. Also, hybrid schemes absorb small traffic deviations more efficiently than CP, which is generally optimized for certain traffic mixes. (C) 2004 Elsevier B.V. All rights reserved. | [
"code reservation",
"wcdma",
"orthogonal variable spreading factor codes",
"third generation systems"
] | [
"P",
"P",
"R",
"M"
] |
2MiTrW5 | Prediction of biomagnification factors for some organochlorine compounds using linear free energy relationship parameters and artificial neural networks | Multiple linear regression and artificial neural networks (ANNs) as feature mapping techniques were used for the prediction of the biomagnification factor (BMF) of some organochlorine pollutants. As independent variables, or compound descriptors, the Abraham descriptors often employed in linear free energy relationships were used. Much better results were obtained from the nonlinear ANN model than from multiple linear regression. The average absolute error, average relative error and root mean square error in the calculation of log(BMF) by the ANN model were 0.055, 0.051 and 0.097 for the training set and 0.11, 0.086 and 0.175 for the internal validation set, respectively. The degree of importance of each descriptor was evaluated by carrying out a sensitivity analysis for the nonlinear model. The results obtained reveal that the order of importance is: the pollutant volume, the pollutant dipolarity/polarizability and the pollutant excess molar refraction. In order to examine the credibility of the obtained ANN model, the leave-many-out cross-validation test was applied, which gave Q^2 = 0.827 and S_PRESS = 0.15. Also, the Y-scrambling procedure was applied to the ANN model in order to examine the effect of chance correlations. The results obtained reveal that it is possible to predict the BMFs of organochlorine pollutants using a nonlinear ANN model with Abraham descriptors as inputs. | [
"biomagnification factor",
"linear free energy relationship",
"artificial neural network",
"organochlorine pollutant",
"qsar"
] | [
"P",
"P",
"P",
"P",
"U"
] |
13-Ev53 | Homogeneous distribution of excitatory and inhibitory synapses on the dendrites of the cat triceps surae α-motoneurons increases synaptic efficacy: Computer model | The effects of input distribution in dendrites on postsynaptic inhibition of the spinal monosynaptic reflex were studied in morphologically and physiologically characterized α-motoneurons. In homogeneous (HOM) and heterogeneous (HEM) models, the location of the excitatory and inhibitory synapses was randomly selected for each bin of dendritic length. In the HOM, each compartment was forced to contain only one synapse as long as other compartments did not have at least one synapse. In the HEM, no restriction was made for synaptic distribution within a bin. The EPSP amplitude in the HOM was enhanced by 28%, and inhibition of the EPSP peak (upon activation of inhibition) was increased by 66%, compared to the HEM. These results indicate that synaptic efficacy is greater in the HOM, both for excitatory and inhibitory synapses. Thus, it is suggested that homogeneously distributed postsynaptic inhibition may serve as the powerful inhibition of the monosynaptic reflex in realistic α-motoneurons. | [
"computer simulation",
"motoneuron",
"location of excitatory and inhibitory synapses"
] | [
"M",
"U",
"R"
] |
56srjRp | Comparing top-k XML lists | Systems that produce ranked lists of results are abundant. For instance, Web search engines return ranked lists of Web pages. There has been work on distance measures for list permutations, like Kendall tau and Spearman's footrule, as well as extensions to handle top-k lists, which are more common in practice. In addition to ranking whole objects (e.g., Web pages), there is an increasing number of systems that provide keyword search on XML or other semistructured data, and produce ranked lists of XML sub-trees. Unfortunately, previous distance measures are not suitable for ranked lists of sub-trees since they do not account for the possible overlap between the returned sub-trees. That is, two sub-trees differing by a single node would be considered separate objects. In this paper, we present the first distance measures for ranked lists of sub-trees, and show under what conditions these measures are metrics. Furthermore, we present algorithms to efficiently compute these distance measures. Finally, we evaluate and compare the proposed measures on real data using three popular XML keyword proximity search systems. | [
"total mapping",
"partial mapping",
"similarity distance",
"position distance"
] | [
"U",
"U",
"M",
"M"
] |
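The classical starting point the record builds on is the Kendall tau distance between two rankings, i.e., the number of discordant pairs (the paper's contribution is extending such measures to top-k lists of overlapping sub-trees, which this sketch does not handle):

```python
from itertools import combinations

def kendall_tau_distance(a, b):
    """Number of discordant pairs between two rankings a, b of the same items.
    combinations(a, 2) yields pairs in a's order, so a pair is discordant
    exactly when b reverses it."""
    pos_b = {item: i for i, item in enumerate(b)}
    return sum(1 for x, y in combinations(a, 2) if pos_b[x] > pos_b[y])

print(kendall_tau_distance(["p1", "p2", "p3"], ["p3", "p1", "p2"]))  # 2
```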
3uXcjpb | Does stock repurchase declaration affect stock price? Differences between the electrics industry and other industries | This study discusses the impact of stock repurchase declarations, and of the stated purpose of the repurchase, on the stock price, in the setting of listed companies on Taiwan's stock market. The Event Study Method is employed to examine stock price fluctuations, while GARCH (Generalized Autoregressive Conditional Heteroscedasticity) is applied to estimate the Market Model regression coefficients. The sample consists of companies declaring their first stock repurchase between August 9, 2000 and December 31, 2005, with the precondition that all companies had been listed for at least 150 days prior to the declaration. The study results reveal that companies from other industries have a considerably bigger average CAR than companies in the electrics industry before and after the declaration of a stock repurchase. Companies whose stated purpose is maintaining stockholders' equity and corporate credit have a considerably bigger average CAR than companies whose stated purpose is transferring stocks to employees. In industries other than electrics, companies with the purpose of maintaining stockholders' equity and corporate credit show a bigger accumulated abnormal return response than companies with the purpose of transferring stocks to employees. When maintaining stockholders' equity and corporate credit is the purpose of the stock repurchase, companies from industries other than electrics have a relatively higher average CAR response. The empirical results can serve as a reference for listed-company management and for related academic studies. | [
"stock repurchase",
"event study method",
"market model",
"average car",
"treasury stock",
"garch model"
] | [
"P",
"P",
"P",
"P",
"M",
"R"
] |
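A hedged illustration of the event-study machinery in record 3uXcjpb: abnormal returns are the gap between realized returns and returns predicted by the market model, cumulated over the event window. The paper estimates the market model with GARCH; plain OLS is substituted here to keep the sketch short, and all data are simulated.

```python
import numpy as np

def car(stock_ret, market_ret, event_window, estimation_window):
    """Cumulative abnormal return (CAR) around an event.
    Fits R_stock = alpha + beta * R_market + eps by OLS on the
    estimation window, then cumulates residuals over the event window."""
    beta, alpha = np.polyfit(market_ret[estimation_window],
                             stock_ret[estimation_window], 1)
    expected = alpha + beta * market_ret[event_window]
    abnormal = stock_ret[event_window] - expected
    return abnormal.cumsum()

rng = np.random.default_rng(0)
m = rng.normal(0.0, 0.01, 200)                       # simulated market returns
s = 0.0002 + 1.2 * m + rng.normal(0.0, 0.005, 200)   # simulated stock returns
print(car(s, m, slice(150, 160), slice(0, 150))[-1])  # CAR at window end
```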
1txvCPX | QSAR classification of estrogen receptor binders and pre-screening of potential pleiotropic EDCs | Endocrine disrupting chemicals (EDCs) are suspected of posing serious threats to human and wildlife health through a variety of mechanisms, these being mainly receptor-mediated modes of action. It is reported that some EDCs exhibit dual activities as estrogen receptor (ER) and androgen receptor (AR) binders. Indeed, such compounds can affect the normal endocrine system through a dual complex mechanism, so steps should be taken not only to identify them a priori from their chemical structure, but also to prioritize them for experimental tests in order to reduce and even forbid their usage. To date, very few EDCs with dual activities have been identified. The present research uses QSARs to investigate what is, so far, the largest and most heterogeneous ER binder data set (combined METI and EDKB databases). New predictive classification models were derived using different modelling methods and a consensus approach, and these were used to virtually screen a large AR binder data set after strict validation. As a result, 46 AR antagonists were predicted from their chemical structure to also have potential ER binding activities, i.e. pleiotropic EDCs. In addition, 48 not-yet-recognized ER binders were identified in silico, which increases the number of potential EDCs that are substances of very high concern (SVHC) in REACH. Thus, the proposed screening models, based only on structure information, have the main aim of prioritizing experimental tests for the highlighted compounds with potential estrogenic activities, and of helping to design safer alternatives. | [
"pleiotropic edc",
"estrogen receptor (er)",
"androgen receptor (ar)",
"virtual screening",
"external validation"
] | [
"P",
"P",
"P",
"P",
"M"
] |
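A sketch of the consensus-classification idea in record 1txvCPX, assuming scikit-learn is available. The paper works from curated molecular descriptors with its own suite of modelling methods; the synthetic features and the three estimators below are stand-ins for illustration only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical stand-in for a descriptor matrix: rows are compounds,
# columns are computed descriptors, y is binder (1) vs non-binder (0).
X, y = make_classification(n_samples=500, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

consensus = VotingClassifier(
    estimators=[("knn", KNeighborsClassifier()),
                ("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0))],
    voting="hard",  # majority vote across the individual QSAR models
)
consensus.fit(X_tr, y_tr)
print("consensus accuracy:", consensus.score(X_te, y_te))
```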
1kZNxgv | exploring content-actor paired network data using iterative query refinement with netlens | Networks have remained a challenge for information retrieval and visualization because of the rich set of tasks that users want to accomplish. This paper demonstrates a tool, NetLens, to explore a Content-Actor paired network data model. The NetLens interface was designed to allow users to pose a series of elementary queries and iteratively refine visual overviews and sorted lists. This enables the support of complex queries that are traditionally hard to specify in node-link visualizations. NetLens is general and scalable in that it applies to any dataset that can be represented with our abstract Content-Actor data model. | [
"iterative query refinement",
"human-computer interaction",
"user interfaces",
"digital library",
"incremental data exploration",
"network visualization",
"piccolo"
] | [
"P",
"U",
"R",
"U",
"M",
"R",
"U"
] |
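A toy rendering of the Content-Actor pairing behind NetLens (record 1kZNxgv): content items and actors are linked many-to-many, and a query is refined by pivoting between the two sides. All records, fields, and function names here are invented for illustration.

```python
papers = [
    {"id": 1, "title": "trellis decoding",   "authors": {"ann", "bob"}},
    {"id": 2, "title": "xml ranking",        "authors": {"bob", "eve"}},
    {"id": 3, "title": "xml keyword search", "authors": {"eve"}},
    {"id": 4, "title": "disk scheduling",    "authors": {"dan"}},
]

def content_query(term):
    """Elementary query on the content side."""
    return [p for p in papers if term in p["title"]]

def pivot_to_actors(selection):
    """Overview of the actors linked to the selected content."""
    return set().union(*(p["authors"] for p in selection))

def pivot_to_content(actors):
    """Back to content: everything those actors produced."""
    return [p for p in papers if p["authors"] & actors]

hits = content_query("xml")              # papers 2 and 3
actors = pivot_to_actors(hits)           # {'bob', 'eve'}
refined = pivot_to_content(actors)       # papers 1, 2, 3
print(sorted(p["id"] for p in refined))  # [1, 2, 3]
```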
1h4Y2YX | A coordinate transformation approach for efficient repeated solution of Helmholtz equation pertaining to obstacle scattering by shape deformations | A computational model is developed for efficient solutions of electromagnetic scattering from obstacles having random surface deformations or irregularities (such as roughness or a randomly-positioned bump on the surface), by combining the Monte Carlo method with the principles of transformation electromagnetics in the context of the finite element method. In conventional implementation of the Monte Carlo technique in such problems, a set of random rough surfaces is defined from a given probability distribution; a mesh is generated anew for each surface realization; and the problem is solved for each surface. Hence, this repeated mesh generation process places a heavy burden on CPU time. In the proposed approach, a single mesh is created assuming a smooth surface, and a transformation medium is designed on the smooth surface of the object. Constitutive parameters of the medium are obtained by the coordinate transformation technique combined with the form-invariance property of Maxwell's equations. At each surface realization, only the material parameters are modified according to the geometry of the deformed surface, thereby avoiding the repeated mesh generation process. In this way, a simple, single and uniform mesh is employed, and CPU time is reduced to a great extent. The technique is demonstrated via various finite element simulations for the solution of two-dimensional, Helmholtz-type and transverse magnetic scattering problems. | [
"helmholtz equation",
"deformation",
"monte carlo",
"transformation electromagnetics",
"rough surface",
"anisotropic metamaterials",
"finite element method (fem)"
] | [
"P",
"P",
"P",
"P",
"P",
"U",
"M"
] |
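A sketch of the Monte Carlo loop structure described in record 1h4Y2YX: the mesh is fixed, and only the constitutive tensor is recomputed per realization from the coordinate-transformation rule. The 2-D Jacobian, the deformation parameter, and the commented-out solver call are invented placeholders; the transformation rule itself (eps' = J eps J^T / det J for the form-invariant Maxwell's equations) is the standard one from transformation optics.

```python
import numpy as np

def material_from_deformation(jacobian):
    """Transformation-medium rule for a map with Jacobian J, relative to
    a unit background permittivity: eps' = J @ J.T / det(J)."""
    J = np.asarray(jacobian)
    return J @ J.T / np.linalg.det(J)

rng = np.random.default_rng(1)
results = []
for _ in range(100):                        # Monte Carlo over rough surfaces
    bump = rng.normal(scale=0.05)           # one random deformation parameter
    J = np.array([[1.0, 0.0],
                  [bump, 1.0]])             # toy 2-D shear transformation
    eps = material_from_deformation(J)
    # solve_fem(mesh, eps) would go here -- the same mesh every iteration;
    # only the constitutive tensor eps changes between realizations.
    results.append(eps[0, 0])
print(np.mean(results))
```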
52&k4sJ | Providing QoS guarantees for disk I/O | In this paper, we address the problem of providing different levels of performance guarantees or quality of service for disk I/O. We classify disk requests into three categories based on the provided level of service. We propose an integrated scheme that provides different levels of performance guarantees in a single system. We propose and evaluate a mechanism for providing deterministic service for variable-bit-rate streams at the disk. We will show that, through proper admission control and bandwidth allocation, requests in different categories can be ensured of performance guarantees without being impacted by requests in other categories. We evaluate the impact of scheduling policy decisions on the provided service. We also quantify the improvements in stream throughput possible by using statistical guarantees instead of deterministic guarantees in the context of the proposed approach. | [
"qos",
"disk scheduling",
"vbr streams",
"multiple qos goals",
"seek optimization"
] | [
"P",
"R",
"M",
"M",
"U"
] |
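A toy version of the admission-control idea in record 52&k4sJ: a new stream is admitted only if the total reserved rate stays within disk bandwidth, with a fraction held back for best-effort requests. The rates, the reserve fraction, and the function name `admit` are all invented for illustration, not the paper's actual policy.

```python
def admit(streams, new_stream, disk_bandwidth, reserve=0.2):
    """Admit a new deterministic-class stream only if total reserved
    rate fits in (1 - reserve) of the disk bandwidth; the remainder is
    left for lower service classes."""
    reserved = sum(s["rate"] for s in streams) + new_stream["rate"]
    return reserved <= (1.0 - reserve) * disk_bandwidth

active = [{"rate": 20.0}, {"rate": 15.0}]   # MB/s, already-admitted streams
print(admit(active, {"rate": 10.0}, disk_bandwidth=60.0))  # True: 45 <= 48
```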
-NNmJax | Business to business interoperability: A current review of XML data integration standards | Despite the dawn of the XML era, semantic interoperability issues still remain unsolved. As various initiatives trying to address how the underlying business information should be modelled, named and structured are being realised throughout the world, the importance of moving towards a holistic approach in eBusiness grows. In this paper, an attempt is made to clarify the standards prevailing in the area, and the XML Data Standards that provide generic XML Schemas are presented. Based on this XML Data Standards Map, a multi-faceted classification mechanism is proposed, leading to an extensible taxonomy of standards. A set of facets is analyzed for each standard, allowing for their classification based on their scope, completeness, compatibility with other standards, openness, ability to modify the schemas and maturity, to name a few. Through populating and querying this multi-faceted classification, a common understanding of Data Integration Standards can be ensured and the choice of a standard according to the requirements of each business can be systematically addressed. | [
"data integration",
"standards evaluation taxonomy",
"ubl",
"oagis",
"ccts"
] | [
"P",
"M",
"U",
"U",
"U"
] |
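A minimal sketch of the faceted classification described in record -NNmJax: each standard carries facet values, and choosing a standard is facet-wise filtering. The facet names follow the paper's list; the values and the "ACME" entry are invented for illustration.

```python
# Hypothetical facet values -- only the facet names come from the paper.
standards = {
    "UBL":   {"scope": "documents", "openness": "open",   "maturity": "high"},
    "OAGIS": {"scope": "processes", "openness": "open",   "maturity": "high"},
    "ACME":  {"scope": "documents", "openness": "closed", "maturity": "low"},
}

def select(**facets):
    """Return the standards matching every requested facet value."""
    return [name for name, f in standards.items()
            if all(f.get(k) == v for k, v in facets.items())]

print(select(scope="documents", openness="open"))  # ['UBL']
```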
7BeJWBn | The bases associated with trellises of a lattice | It is well known that the trellises of lattices can be employed for efficient decoding. It was proved in [1] and [2] that if a lattice L has a finite trellis under the coordinate system {W_i}_{i=1}^{n}, then there must exist a basis (b_1, b_2, ..., b_n) of L such that W_i = span(b_i) for 1 <= i <= n. In this letter, we prove this important result by a completely different method, and give an efficient way to compute all bases of this type. | [
"trellises",
"lattices",
"dual lattices",
"viterbi algorithm"
] | [
"P",
"P",
"M",
"U"
] |
4cUuzQi | Word statistics in Blogs and RSS feeds: Towards empirical universal evidence | We focus on the statistics of word occurrences and of the waiting times between such occurrences in Blogs. Due to the heterogeneity of word frequencies, the empirical analysis is performed by studying classes of frequency-equivalent words, i.e. by grouping words depending on their frequencies. Two limiting cases are considered: the dilute limit, i.e. for those words that are used less than once a day, and the dense limit for frequent words. In both cases, extreme events occur more frequently than expected from the Poisson hypothesis. These deviations from Poisson statistics reveal non-trivial time correlations between events that are associated with bursts of activities. The distribution of waiting times is shown to behave like a stretched exponential and to have the same shape for different sets of words sharing a common frequency, thereby revealing universal features. | [
"time statistics",
"information networks",
"zipf law",
"activity pattern"
] | [
"R",
"U",
"U",
"M"
] |
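A small numerical illustration of the point made in record 4cUuzQi: a stretched-exponential waiting-time law has far heavier tails than the exponential law implied by the Poisson hypothesis. The time scale, shape parameter, and sample size are invented; a Weibull with shape < 1 is exactly a stretched-exponential waiting-time distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
tau, beta = 10.0, 0.5
# Hypothetical inter-occurrence times (hours) for one frequency class,
# drawn bursty on purpose: survival function P(T > t) = exp(-(t/tau)^beta).
waits = tau * rng.weibull(beta, size=5000)

t = 200.0
empirical = np.mean(waits > t)                # observed tail
stretched = np.exp(-(t / tau) ** beta)        # stretched-exponential tail
poisson = np.exp(-t / waits.mean())           # tail if events were Poisson
print(f"P(T>{t:g}): empirical={empirical:.4f} "
      f"stretched={stretched:.4f} poisson={poisson:.2e}")
# extreme waits occur orders of magnitude more often than Poisson predicts
```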
1AgFNiw | Presynaptic facilitation: Quantal analysis and simulations | The time-course of quantal neurosecretion indicated asynchrony in releases of presynaptic vesicles (quanta) in response to a stimulus. This was interpreted as reflecting the sequential release of vesicles from a single release site. We performed Monte-Carlo simulation of facilitated neurosecretion based upon the hypothesis that release sites do not limit transmitter release. The output of this simulation succeeded in reproducing the experimentally obtained distribution of quantal releases. These results support a model for neurosecretion in which each action potential evokes mobilization of synaptic vesicles to a single presynaptic release site followed by probabilistic secretion of releasable vesicles one after another. | [
"neurosecretion",
"modeling",
"synaptic plasticity",
"electrophysiology"
] | [
"P",
"P",
"M",
"U"
] |
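A minimal Monte-Carlo sketch of the release scheme hypothesized in record 1AgFNiw: each stimulus mobilizes some number of vesicles to a single release site, and each releasable vesicle is then secreted probabilistically. The Poisson mobilization assumption and all parameter values are illustrative guesses, not the paper's fitted model.

```python
import numpy as np

def quantal_counts(n_trials, mean_mobilized, p_release, rng):
    """Quantal content per stimulus: a Poisson number of vesicles is
    mobilized to the single release site, and each is released
    independently (one after another) with probability p_release."""
    mobilized = rng.poisson(mean_mobilized, size=n_trials)
    return rng.binomial(mobilized, p_release)

rng = np.random.default_rng(2)
counts = quantal_counts(10_000, mean_mobilized=4.0, p_release=0.3, rng=rng)
hist = np.bincount(counts)
print(hist / hist.sum())  # distribution of quantal contents per stimulus
```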
1YgWsvt | The DECIDE Science Gateway | The motivation of this work fits within the general vision of enabling e-health for European citizens, irrespective of their social and financial status and their place of residence. Services to be provided include access to a high-quality early diagnostic and prognostic service for Alzheimer's Disease and other forms of dementia, based both on the European Research and Education Networks and the European Grid Infrastructure. The present paper reports on the architecture and services of a Science Gateway developed in the context of the DECIDE project, which aims to support the medical community in its daily duties of patients' examination and diagnosis. The implementation of the Science Gateway is described with particular focus on the standard technologies adopted to ease access for non-IT-expert users. The work leverages an authentication and authorization infrastructure based on Identity Federations and robot certificates, and the adoption of the SAGA standard for middleware-independent Grid interaction. The architecture and the functionalities of the digital repository for medical image storage and analysis are also presented. | [
"science gateway",
"e-health service",
"grid computing",
"standard-based development and middleware-independent deploy"
] | [
"P",
"R",
"M",
"M"
] |
1fMo4FT | Modelling high-dimensional data by mixtures of factor analyzers | We focus on mixtures of factor analyzers from the perspective of a method for model-based density estimation from high-dimensional data, and hence for the clustering of such data. This approach enables a normal mixture model to be fitted to a sample of n data points of dimension p, where p is large relative to n. The number of free parameters is controlled through the dimension of the latent factor space. By working in this reduced space, it allows a model for each component-covariance matrix with complexity lying between that of the isotropic and full covariance structure models. We shall illustrate the use of mixtures of factor analyzers in a practical example that considers the clustering of cell lines on the basis of gene expressions from microarray experiments. | [
"factor analyzers",
"mixture modelling",
"em algorithm"
] | [
"P",
"P",
"U"
] |
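For record 1fMo4FT above, the parameter economy the abstract describes follows directly from the model's density, sketched below in one common variant where the diagonal noise matrix is shared across components (other variants use a component-specific Psi_i): each component covariance is constrained to low-rank-plus-diagonal form, so the free-parameter count grows with the number of factors q rather than with p^2.

```latex
% Mixture of factor analyzers: g components, q << p latent factors each.
\[
  p(\mathbf{x}) \;=\; \sum_{i=1}^{g} \pi_i \,
  \mathcal{N}\!\big(\mathbf{x};\, \boldsymbol{\mu}_i,\,
  \mathbf{\Lambda}_i \mathbf{\Lambda}_i^{\top} + \mathbf{\Psi}\big),
  \qquad \mathbf{\Lambda}_i \in \mathbb{R}^{p \times q},\;
  \mathbf{\Psi} \ \text{diagonal},\; q \ll p .
\]
```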
-wiMUfV | Community detection for proximity alignment | Given a network and a group of target nodes, the task of proximity alignment is to find a sequence of nodes that are the most relevant to the targets in terms of the linkage structure of the network. Proximity alignment will find important applications in many areas such as online recommendation in e-commerce and infectious disease control in public healthcare. Although great efforts have been made to design various metrics of similarities and centralities in terms of network structure, to the best of our knowledge no studies in the literature address the issue of proximity alignment by explicitly and adequately exploring the intrinsic connections between macroscopic community structure and microscopic node proximities. However, community structure is indispensable to proximity alignment, not only because communities are ubiquitous in real-world networks but also because they characterize node proximity in a more natural way. In this work, a novel proximity alignment method called the PAA is proposed to address this problem. The PAA first decomposes the given network into communities based on its global structure and then computes node proximities based on the local structure of communities. In this way, the solution of the PAA is expected to be more reasonable, in the sense that both global and local relevance among nodes are sufficiently considered during the process of proximity alignment. To handle large-scale networks, the PAA is implemented by a proposed online-offline schema, in which expensive computations such as community detection are done offline so that online queries can be answered quickly by calculating node proximities in an efficient way based on indexed communities. The efficacy and the applications of the PAA have been validated and demonstrated. Our work shows that the PAA outperforms existing methods and enables us to explore real-world networks from a novel perspective. | [
"community detection",
"proximity alignment",
"similarity",
"ranking",
"social network analysis",
"recommender system"
] | [
"P",
"P",
"P",
"U",
"M",
"M"
] |
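A sketch of the online-offline schema described in record -wiMUfV, assuming networkx: community detection runs once offline and is indexed; online queries rank candidates within the targets' communities by a cheap local proximity. The shared-neighbor score below is a placeholder, not the paper's actual proximity measure, and modularity-based detection is just one possible offline step.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Offline phase: detect and index communities once (the expensive step).
G = nx.karate_club_graph()
community_of = {}
for cid, nodes in enumerate(greedy_modularity_communities(G)):
    for n in nodes:
        community_of[n] = cid

# Online phase: given target nodes, rank other members of the targets'
# communities by a cheap local proximity (shared neighbors, as a stand-in).
def align(targets, k=5):
    cids = {community_of[t] for t in targets}
    candidates = [n for n in G
                  if community_of[n] in cids and n not in targets]
    def score(n):
        return sum(len(set(G[n]) & set(G[t])) for t in targets)
    return sorted(candidates, key=score, reverse=True)[:k]

print(align({0, 33}))  # top-5 nodes aligned to the two hub targets
```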
1FBkyAE | Query processing in DOQL: A deductive database language for the ODMG model | This paper describes the architecture, algebraic query processing framework and query execution approach that comprise the implementation of the deductive object query language (DOQL) query processing system. To the best of our knowledge, it is the first deductive object query language to be designed and implemented as a complementary and non-intrusive query component within an ODMG OODBMS architecture. The query processing framework enables the combined use of logical rewriting and algebraic optimization, and features an object algebra, local and global query optimization, physical execution algorithms implemented as iterators, and a query execution engine implemented using the dataflow technique. Several representative DOQL queries are also provided, illustrating the flexibility and expressiveness of querying object databases with DOQL. | [
"query processing",
"doql",
"deductive databases",
"odmg model"
] | [
"P",
"P",
"P",
"P"
] |
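Record 1FBkyAE describes physical execution algorithms implemented as iterators over a dataflow engine. The sketch below shows the general iterator (Volcano-style) execution pattern using Python generators; the operators, schema, and data are invented for illustration and are not DOQL's actual algebra.

```python
# Each physical operator is a generator; composing them forms a pipeline
# that pulls one row at a time from its child (dataflow execution).
def scan(table):
    yield from table

def select(pred, child):
    for row in child:
        if pred(row):
            yield row

def project(attrs, child):
    for row in child:
        yield {a: row[a] for a in attrs}

employees = [{"name": "ann", "dept": "cs", "salary": 90},
             {"name": "bob", "dept": "ee", "salary": 80}]

plan = project(["name"],
               select(lambda r: r["dept"] == "cs",
                      scan(employees)))
print(list(plan))  # [{'name': 'ann'}]
```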