id: string (length 7)
title: string (length 3 to 578)
abstract: string (length 0 to 16.7k)
keyphrases: sequence of strings
prmu: sequence of labels
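Each record below consists of an id, a title, an abstract, a list of keyphrases, and a parallel prmu label list (the P labels appear to mark keyphrases that occur verbatim in the source text). A minimal Python sketch of iterating over such records and checking verbatim keyphrase presence follows; the in-memory list-of-dicts layout and the truncated example strings are assumptions for illustration, not a prescribed loading API.

    # Minimal sketch: iterate records shaped like the entries below and count, per
    # record, how many keyphrases occur verbatim in the title or abstract.
    # The `records` list and its truncated strings are illustrative assumptions.
    records = [
        {
            "id": "2g1bEt4",
            "title": "Analysis and numerical simulation of strong discontinuities in finite strain poroplasticity",
            "abstract": "This paper presents an analysis of strong discontinuities in coupled poroplastic media ...",
            "keyphrases": ["strong discontinuity", "finite deformations", "porous media"],
            "prmu": ["P", "P", "M"],
        },
    ]

    for rec in records:
        text = (rec["title"] + " " + rec["abstract"]).lower()
        present = [kp for kp in rec["keyphrases"] if kp.lower() in text]
        print(rec["id"], f"{len(present)}/{len(rec['keyphrases'])} keyphrases found verbatim")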
2g1bEt4
Analysis and numerical simulation of strong discontinuities in finite strain poroplasticity
This paper presents an analysis of strong discontinuities in coupled poroplastic media in the finite deformation range. A multi-scale framework is developed for the characterization of these solutions involving a discontinuous deformation (or displacement) field in this coupled setting. The strong discontinuities are used as a tool for the modeling of the localized dissipative effects characteristic of the localized failures of typical poroplastic systems. This is accomplished through the inclusion of a cohesive-frictional law relating the resolved stresses on the discontinuity and the accumulated fluid content on it with the displacement and fluid flow jumps across the discontinuity surface. The formulation considers the limit of vanishing small scales, hence recovering a problem in the large scale involving the usual regular displacement and pore pressure variables, while capturing correctly these localized dissipative mechanisms. All the couplings between the mechanical and fluid problems, from the modeling of the solid's response through effective stresses and tractions to the geometric coupling consequence of the assumed finite deformation setting, are taken into account in these considerations. The multi-scale structure of the theoretical formulation is fully employed in the development of new enhanced strain finite elements to capture these discontinuous solutions with no regularization of the singular fields appearing in the formulation. Several numerical simulations are presented showing the properties and performance of the proposed localized models and the enhanced finite elements used in their numerical implementation.
[ "strong discontinuity", "finite deformations", "porous media", "coupled poro-elastoplasticity", "strain localization", "enhanced finite element methods" ]
[ "P", "P", "M", "M", "R", "M" ]
-hWWzhr
TOWARD REAL NOON-STATE SOURCES
Path-entangled N-photon systems described by NOON states are the main ingredient of many quantum information and quantum imaging protocols. Our analysis aims to lead the way toward the implementation of both NOON-state sources and their applications. To this end, we study the functionality of "real" NOON-state sources by quantifying the effect real experimental apparatuses have on the actual generation of the desired NOON state. In particular, since the conditional generation of NOON states strongly relies on photon counters, we evaluate the dependence of both the reliability and the signal-to-noise ratio of "real" NOON-state sources on detection losses. We find a surprising result: NOON-state sources relying on nondetection are much more reliable than NOON-state sources relying on single-photon detection. The comparison of the resources required to implement these two protocols also comes out in favor of NOON-state sources based on nondetection. A scheme to improve the performance of "real" NOON-state sources based on single-photon detection is also proposed and analyzed.
[ "path-entanglement", "noon-state preparation", "efficiency", "quantum optics" ]
[ "P", "M", "U", "M" ]
4gJpSi1
Using TPACK as a framework to understand teacher candidates' technology integration decisions
This research uses the technological pedagogical and content knowledge (TPACK) framework as a lens for understanding how teacher candidates make decisions about the use of information and communication technology in their teaching. Pre- and post-treatment assessments required elementary teacher candidates at Brigham Young University to articulate how and why they would integrate technology in three content teaching design tasks. Researchers identified themes from student rationales that mapped to the TPACK constructs. Rationales simultaneously supported subcategories of knowledge that could be helpful to other researchers trying to understand and measure TPACK. The research showed significant student growth in the use of rationales grounded in content-specific knowledge and general pedagogical knowledge, while rationales related to general technological knowledge remained constant.
[ "technology integration", "information and communication technology", "pedagogical content knowledge", "pre-service teacher education", "technological pedagogical content knowledge" ]
[ "P", "P", "R", "M", "R" ]
3&99a3w
exploiting temporal coherence in global illumination
Producing high quality animations featuring rich object appearance and compelling lighting effects is very time consuming using traditional frame-by-frame rendering systems. In this paper we present a number of global illumination and rendering solutions that exploit temporal coherence in lighting distribution for subsequent frames to improve the computation performance and overall animation quality. Our strategy relies on extending into temporal domain well-known global illumination techniques such as density estimation photon tracing, photon mapping, and bi-directional path tracing, which were originally designed to handle static scenes only.
[ "temporal coherence", "global illumination", "density estimation", "bi-directional path tracing", "irradiance cache" ]
[ "P", "P", "P", "P", "U" ]
-bcotLA
Effectiveness of cognitive-load based adaptive instruction in genetics education
Research addressing the issue of instructional control in computer-assisted instruction has revealed mixed results. Prior knowledge level seems to play a mediating role in students' ability to effectively use given instructional control. This study examined the effects of three types of instructional control (non-adaptive program control, learner control, adaptive program control) and prior knowledge (high school, 1st year and 2nd year college students) on effectiveness and efficiency of learning in a genetics training program. The results revealed that adaptive program control led to the highest training performance but not to superior post-test or far-transfer performance. Furthermore, adaptive program control proved to be more efficient in terms of learning outcomes of the test phase than the other two instructional control types. College students outperformed the high school students on all aspects of the study, thereby underscoring the importance of prior knowledge in learning effectiveness and efficiency. Lastly, the interaction effects showed that for each prior knowledge level, different levels of support were beneficial to learning.
[ "adaptive instruction", "non-adaptive program control", "learner control", "cognitive load", "learning efficiency", "problem selection algorithm" ]
[ "P", "P", "P", "U", "R", "U" ]
3iKY8U:
Sub-pixel mapping based on artificial immune systems for remote sensing imagery
We propose an artificial immune sub-pixel mapping framework for remote sensing imagery. The sub-pixel mapping problem is transformed to an optimization problem. The proposed algorithm can obtain better sub-pixel mapping results by immune operators. Experimental results demonstrate that the proposed approach outperforms the previous methods.
[ "sub-pixel mapping", "artificial immune systems", "remote sensing", "clonal selection", "classification" ]
[ "P", "P", "P", "U", "U" ]
25ULa4L
Modeling of the quenching of blast products from energetic materials by expansion into vacuum
Condensed phase energetic materials include propellants and explosives. Their detonation or burning products generate dense, high pressure states that are often adjacent to regions that are at vacuum or near-vacuum conditions. An important chemical diagnostic experiment is the time of flight mass spectroscopy experiment that initiates an energetic material sample via an impact from a flyer plate, whose products expand into a vacuum. The rapid expansion quenches the reaction in the products so that the products can be differentiated by molecular weight detection as they stream past a detector. Analysis of this experiment requires a gas dynamic simulation of the products of a reacting multi-component gas that flows into a vacuum region. Extreme computational difficulties can arise if flow near the vacuum interface is not carefully and accurately computed. We modify an algorithm proposed by Munz [1], which computed the fluxes appropriate to a gas-vacuum interface for an inert ideal gas, and extend it to a multi-component mixture of reacting chemical species with general, non-ideal equations of state. We illustrate how to incorporate that extension in the context of a complete set of algorithms for a general, cell-based flow solver. A key step is to use the local exact solution for an isentropic expansion fan for the mixture that connects the computed flow states to the vacuum. Regularity conditions (i.e. the Liu-Smoller conditions) are necessary conditions that must be imposed on the equation of state of the multicomponent fluid in the limit of a vacuum state. We show that the Jones-Wilkins-Lee (JWL) equation of state meets these requirements.
[ "time of flight mass spectroscopy", "jwl", "vacuum riemann problem", "vacuum tracking", "multi-component reacting flow", "petn", "miegruneisen equation of state" ]
[ "P", "P", "M", "M", "R", "U", "M" ]
H3y7H&Y
modeling multiple-event situations across news articles
Readers interested in the context of an event covered in the news, such as the dismissal of a lawsuit, can benefit from easily finding out about the overall news situation (here, the legal trial) of which the event is a part. Guided by abstract models of news situation types such as legal trials, corporate acquisitions, and kidnappings, Brussell is a system that presents situation instances it creates by reading multiple articles about the specific events that comprise them. We discuss how these situation models are structured and how they drive the creation of particular instances.
[ "news situations" ]
[ "P" ]
56m2n6b
Model selection for least squares support vector regressions based on small-world strategy
Model selection plays a key role in the application of support vector machines (SVM). In this paper, a method of model selection based on the small-world strategy is proposed for least squares support vector regression (LS-SVR). In this method, the model selection is treated as a single-objective global optimization problem in which a generalization performance measure serves as the fitness function. To get better optimization performance, the main idea of depending more heavily on dense local connections in the small-world phenomenon is considered, and a new small-world optimization algorithm based on tabu search, called the tabu-based small-world optimization (TSWO), is proposed by employing tabu search to construct the local search operator. Therefore, the hyper-parameters with the best generalization performance can be chosen as the global optimum based on the powerful search ability of TSWO. Experiments on six complex multimodal functions are conducted, demonstrating that TSWO performs better in avoiding premature convergence of the population in comparison with the genetic algorithm (GA) and particle swarm optimization (PSO). Moreover, the effectiveness of the leave-one-out bound of LS-SVM on regression problems is tested on a noisy sinc function and benchmark data sets, and the numerical results show that model selection using TSWO almost always obtains smaller generalization errors than GA and PSO under the three generalization performance measures adopted.
[ "model selection", "small-world", "tabu search", "least squares support vector machines" ]
[ "P", "P", "P", "R" ]
SyToYjb
A model of seepage field in the tailings dam considering the chemical clogging process
The radial collector well, an important water drainage construction, has been widely applied to the tailings dam. Chemical clogging frequently occurs around the vertical shaft in a radial collector well due to abundant dissolved oxygen and heavy metals in the groundwater flow of the tailings dam. Considering the contribution of water discharge from both the vertical shaft and the horizontal screen laterals, and the chemical clogging occurring around the vertical shaft well, a new model was developed on the basis of the Multi-Node Well (MNW2) package of MODFLOW. Moreover, two cases were calculated by the newly developed model. The results indicate that the model considering chemical clogging occurring around the vertical shaft well is reasonable. Owing to the decrease in hydraulic conductivity caused by chemical clogging, the groundwater level in the dam body increases constantly and the water discharge of the radial collector well declines by 10-15%. For an ordinary vertical well, it decreases by 30%. Therefore, chemical clogging occurring around a radial collector well can raise the groundwater level and influence dam-body safety.
[ "tailing dam", "chemical clogging", "radial collector well", "groundwater flow", "modflow", "mathematical model" ]
[ "P", "P", "P", "P", "P", "M" ]
2jgnTY&
A symmetrisation method for non-associated unified hardening model
This paper presents a simple method for symmetrising the asymmetric elastoplastic matrix arising from non-associated flow rules. The symmetrisation is based on a mathematical transformation and does not alter the incremental stress-strain relationship. The resulting stress increment is identical to that obtained using the original asymmetric elastoplastic matrix. The symmetrisation method is applied to integrate the Unified Hardening (UH) model where the elastoplastic matrix is asymmetric due to stress transformation. The performance of the method is verified through finite element analysis (FEA) of boundary value problems such as triaxial extension tests and bearing capacity of foundations. It is found that the symmetrisation method can improve the convergence of the FEA and reduce computational time significantly for non-associated elastoplastic models.
[ "symmetrisation", "elastoplastic matrix", "non-associated flow rule", "three-dimensional", "finite element analyses" ]
[ "P", "P", "P", "U", "M" ]
3WSPyWv
ROFL: Routing on flat labels
It is accepted wisdom that the current Internet architecture conflates network locations and host identities, but there is no agreement on how a future architecture should distinguish the two. One could sidestep this quandary by routing directly on host identities themselves, and eliminating the need for network-layer protocols to include any mention of network location. The key to achieving this is the ability to route on flat labels. In this paper we take an initial stab at this challenge, proposing and analyzing our ROFL routing algorithm. While its scaling and efficiency properties are far from ideal, our results suggest that the idea of routing on flat labels cannot be immediately dismissed.
[ "routing", "internet architecture", "algorithms", "design", "experimentation", "naming" ]
[ "P", "P", "P", "U", "U", "U" ]
29ygQaj
A branch-and-cut approach for a generic multiple-product, assembly-system design problem
This paper presents two new models to deal with different tooling requirements in the generic multiple-product assembly-system design (MPASD) problem and proposes a new branch-and-cut solution approach, which adds cuts at each node in the search tree. It employs the facet generation procedure (FGP) to generate facets of underlying knapsack polytopes. In addition, it uses the FGP in a new way to generate additional cuts and incorporates two new methods that exploit special structures of the MPASD problem to generate cuts. One new method is based on a principle that can be applied to solve generic 0-1 problems by exploiting embedded integral polytopes. The approach includes new heuristic and pre-processing methods, which are applied at the root node to manage the size of each instance. This paper establishes benchmarks for MPASD through an experiment in which the approach outperformed IBM's Optimization Subroutine Library (OSL), a commercially available solver.
[ "programming : integer", "cutting planes", "production scheduling", "flexible manufacturing line balancing" ]
[ "U", "M", "U", "U" ]
1JvsWFm
MANAGING COGNITIVE AND MIXED-MOTIVE CONFLICTS IN CONCURRENT ENGINEERING
In collaborative activities such as concurrent engineering (CE), conflicts arise due to differences in goals, information available, and the understanding of the task. Such conflicts can be categorized into two types: mixed-motive and cognitive. Mixed-motive conflicts are essentially due to interest differentials among stakeholders. Cognitive conflicts can occur even when the stakeholders do not differ in their respective utilities, but simply because they offer multiple cognitive perspectives on the problem. Because conflicts in CE occur under a wider context of cooperative problem solving, the imperative for solving conflicts in such situations is strong. This paper argues that mechanisms for managing conflicts in CE should bear a strong conceptual mapping with the nature of the underlying conflict. Moreover, since CE activities are performed in collaborative settings, such mechanisms should accommodate information processing at multiple referent levels. We discuss the nature of both types of conflicts and the requirements of mechanisms for managing them. The functionalities of an implementation that addresses these requirements are illustrated through an example of a CE task.
[ "mixed-motive conflict", "cognitive conflict", "cognitive feedback", "design rationale" ]
[ "P", "P", "M", "U" ]
1xZJrw1
Designing robust emergency medical service via stochastic programming
This paper addresses the problem of designing robust emergency medical services. In this respect, the main issue to consider is the inherent uncertainty which characterizes real life situations. Several approaches can be used to design robust mathematical models that are able to hedge against uncertain conditions. We are using here the stochastic programming framework and, in particular, the probabilistic paradigm. More specifically, we develop a stochastic programming model with probabilistic constraints aimed at solving both the location and the dimensioning problems, i.e. where service sites must be located and how many emergency vehicles must be assigned to each site, in order to achieve a reliable level of service and minimize the overall costs. In doing so, we consider the randomness of the system as far as the demand for emergency service is concerned. The numerical results, which have been collected on a large set of test problems, demonstrate the validity of the proposed model, particularly in dealing with the trade-off between quality of service and cost management.
[ "stochastic programming", "emergency services", "facility location", "health services" ]
[ "P", "P", "M", "M" ]
3T2h6BJ
Recommendation of optimized information seeking process based on the similarity of user access behavior patterns
Differing from many studies of recommendation that provide final results directly, our study focuses on providing an optimized process of information seeking to users. Based on process mining, we propose an integrated adaptive framework to support and facilitate individualized recommendation based on the gradual adaptation model that gradually adapts to a target user's transition of needs and behaviors of information access, including various search-related activities, over different time spans. In detail, successful information seeking processes are extracted from the information seeking histories of users. Furthermore, these successful information seeking processes are optimized as a series of action units to support the target users whose information access behavior patterns are similar to those of the reference users. Based on these, the optimized information seeking processes are recommended to the target users according to their transitions of interest focus. In addition to describing some definitions and measures introduced, we go further to present an optimized process recommendation model and show the system architecture. Finally, we discuss the simulation and scenario for the proposed system.
[ "information seeking process", "behavior patterns", "personalized recommendation" ]
[ "P", "P", "M" ]
4DR3YtK
Analytical mechanics solution for mechanism motion and elastic deformation hybrid problem of beam system
Based on the dynamics of flexible multi-body systems and the finite element method, a beam system dynamics model is built for solving the mixed motion-deformation problem and tracing the whole process of mechanism motion. The kinetic control equations and constraint equations are derived, in which mechanism motion and elastic deformation are described using hybrid coordinates, and the spatial position matrix of the element is described using Euler quaternions. Numerical examples show that the method can trace and solve the trajectory and internal forces of the system.
[ "dynamics of flexible multi-body systems", "euler quaternion", "hybrid coordinates description", "beam element" ]
[ "P", "P", "M", "R" ]
1RT64Ad
Availability analysis of shared backup path protection under multiple-link failure scenario in WDM networks
Dedicated protection and shared protection are the main protection schemes in optical wavelength division multiplexing (WDM) networks. Shared protection techniques surpass the dedicated protection techniques by providing the same level of availability as dedicated protection with reduced spare capacity. Satisfying the service availability levels defined by the user's service-level agreement (SLA) in a cost-effective and resource-efficient way is a major challenge for network operators. Hence, evaluating the availability of the shared protection scheme is of great interest. We recently developed an analytical model to estimate network availability of a WDM network with shared-link connections under multiple link-failures. However, this model requires information on all possible combinations of the unshared protection paths, which is somewhat cumbersome. In this paper, we propose a more practical analytical model for evaluating the availability of a WDM network with shared-link connections under multiple link-failures. The proposed model requires only an estimate of the set of shared paths of each protection path. The estimated availability of the proposed model accurately matches that of the previous model. Finally, we compare the previous model with the proposed model to demonstrate the merits and demerits of both models, illustrating the threshold at which each model performs better based on the computational complexity. The proposed model significantly contributes to the related areas by providing network operators with a practical tool to evaluate quantitatively the system-availability and, thus, the expected survivability degree of WDM optical networks with shared connections under multiple-link failures.
[ "availability analysis", "wdm networks", "shared-link connections", "multiple link-failures" ]
[ "P", "P", "P", "M" ]
3fSdLmn
Evolving RBF neural networks for time-series forecasting with EvRBF
This paper is focused on determining the parameters of radial basis function neural networks (number of neurons, and their respective centers and radii) automatically. While this task is often done by hand, or based on hill-climbing methods that are highly dependent on initial values, in this work evolutionary algorithms are used to automatically build a radial basis function neural network (RBF NN) that solves a specified problem, in this case related to currency exchange rate forecasting. The evolutionary algorithm EvRBF has been implemented using the evolutionary computation framework Evolving Objects (EO), which allows direct evolution of problem solutions. Thus no internal representation is needed, and specific solution domain knowledge can be used to construct specific evolutionary operators, as well as cost or fitness functions. Results obtained are compared with the existing literature, showing an improvement over previously published methods.
[ "rbf", "time-series forecasting", "evolutionary algorithms", "currency exchange", "eo", "functional estimation" ]
[ "P", "P", "P", "P", "U", "M" ]
-UqayJM
Modified centralized ROCOF based load shedding scheme in an islanded distribution network
Two new centralized adaptive under frequency load shedding methods are proposed. DG unit operation and the loads' willingness to pay (WTP) are considered. The objective is to minimize the resulting penalties of the load shedding.
[ "under frequency load shedding", "distributed generation", "rate of change of frequency of load", "islanded operation" ]
[ "P", "M", "M", "R" ]
d1t5LPo
horn-ok-please
Road congestion is a common problem worldwide. Existing Intelligent Transport Systems (ITS) are mostly inapplicable in developing regions due to high cost and assumptions of orderly traffic. In this work, we develop a low-cost technique to estimate vehicular speed, based on vehicular honks. Honks are a characteristic feature of the chaotic road conditions common in many developing regions like India and South-East Asia. We envision a system where dynamic road-traffic information is learnt using inexpensive, wireless-enabled on-road sensors. Subsequent analyzed information can then be sent to mobile road users; this would fit well with the burgeoning mobile market in developing regions. The core of our technique comprises a pair of road side acoustic sensors, separated by a distance. If a moving vehicle honks between the two sensors, its speed can be estimated from the Doppler shift of the honk frequency. In this context, we have developed algorithms for honk detection, honk matching across sensors, and speed estimation. Based on the speed estimates, we subsequently detect road congestion. We have done extensive experiments in semi-controlled settings as well as real road scenarios under different traffic conditions. Using over 18 hours of road-side recordings, we show that our speed estimation technique is effective in real conditions. Further, we use our data to characterize traffic state as free-flowing versus congested using a variety of metrics: the vehicle speed distribution, the number and duration of honks. Our results show clear statistical divergence of congested versus free flowing traffic states, and a threshold-based classification accuracy of 70-100% in most situations.
[ "its", "sensor network", "audio signal processing" ]
[ "P", "M", "U" ]
1LoE2eo
Triangular mesh offset for generalized cutter
In 3-axis NC (Numerical Control) machining, various cutters are used and the offset compensation for these cutters is important for gouge-free tool path generation. This paper introduces a triangular mesh offset method for a generalized cutter defined by the APT (Automatically Programmed Tools) definition or a parametric curve. An offset vector is computed according to the geometry of a cutter and the normal vector of a part surface. A triangular mesh is offset to the CL (Cutter Location) surface by multiple normal vectors of a vertex and the offset vector computation method. A tool path for a generalized cutter is generated on the CL surface, and the machining test shows that the proposed offset method is useful for NC machining.
[ "triangular mesh", "offset", "tool path", "cl surface", "nc machining", "apt cutter", "parabolic cutter" ]
[ "P", "P", "P", "P", "P", "R", "M" ]
52sfrEL
The new FIFA rules are hard: complexity aspects of sports competitions
Consider a soccer competition among various teams playing against each other in pairs (matches) according to a previously determined schedule. At some stage of the competition one may ask whether a particular team still has a (theoretical) chance to win the competition. The complexity of this question depends on the way scores are allocated according to the outcome of a match. For example, the problem is polynomially solvable for the ancient FIFA rules (two points for a win, one point for a draw) but becomes NP-hard if the new rules (three points for a win, one point for a draw) are applied. We determine the complexity of the above problem for all possible score allocation rules.
[ "03d15", "90c27" ]
[ "U", "U" ]
VRjoD68
A new segmentation method for phase change thermography sequence
A new segmentation method for image sequences is proposed in order to get the isotherm from a phase change thermography sequence (PCTS). Firstly, the PCTS is transformed into a series of synthesized images by compression and conversion, so the isotherm extraction can be transformed into the segmentation of a series of synthesized images. Secondly, a virtual illumination model is constructed to eliminate the glare from the aircraft model. In order to get the parameters of the virtual illumination model, a coordination-optimization method is employed and all parameters are obtained according to the similarity constraint. Finally, the resulting isotherms are obtained after the threshold coefficients are compensated. The final results demonstrate the efficiency of the proposed segmentation method.
[ "phase change thermography sequence", "illumination model", "threshold coefficient", "image segmentation" ]
[ "P", "P", "P", "R" ]
2KzJyjh
Multi-color continuous-variable entangled optical beams generated by NOPOs
We propose an alternative scalable way to generate multi-color entangled optical beams efficiently utilizing the tripartite entanglement existing between the three fields (signal, idler, and pump) from a nondegenerate optical parametric oscillator (NOPO) operating above the threshold. The special case of two cascaded NOPOs is studied, and it is shown that five beams with very different frequencies are generated by NOPO A (one of the retained signal and idler beams, and the reflected pump beam) and NOPO B (the output signal and idler beams, and the reflected pump beam). These beams are theoretically demonstrated to be continuous variable (CV) entangled with each other by applying the positivity of the partial transposition criterion for the inseparability of multipartite CV entanglement. The symplectic eigenvalues of the partial transposition covariance matrix of the obtained optical entangled state are numerically calculated in terms of experimentally reachable system parameters. The optimal operation conditions to achieve high five-color entanglement are presented. As the cavity parameters and the nonlinear crystals of the two NOPOs can be chosen freely, the frequencies of the submodes in the entangled state are thus adjustable to match the transition frequencies of atoms or the low-loss fiber-optic communication window. The calculated results provide direct references for future experiments to generate multi-color entangled optical beams efficiently by means of NOPOs operating above the threshold.
[ "non-degenarate optical parametric oscillator", "multi-color entangled state", "continue-variable quantum entanglement" ]
[ "M", "R", "M" ]
4R1K:TL
BCHED - Energy Balanced Sub-Round Local Topology Management for Wireless Sensor Network
Topology controlling based on cluster structure is an important method to improve the energy efficiency of wireless sensor network (WSN) systems. The frequent clustering process of classical controlling methods, such as LEACH, is apt to cause serious energy consumption. Some improved methods reduce the re-clustering frequency, but these methods sometimes lead to energy imbalance in the stable communication period. In this paper, a hierarchical topology controlling method, BCHED, is proposed. With a double-round clustering mechanism, BCHED activates a local re-clustering process between two rounds of data transmission, and with an optional cluster-head exchanging mechanism, BCHED reorganizes the node clusters according to their residual energy distribution. Experimental results show that, with BCHED, the energy balance performance of the WSN system is significantly improved, and the system lifetime can be effectively extended.
[ "wireless sensor network", "topology controlling", "network clustering" ]
[ "P", "P", "R" ]
2LALsPk
Hierarchical reconstruction for discontinuous Galerkin methods on unstructured grids with a WENO-type linear reconstruction and partial neighboring cells
The hierarchical reconstruction (HR) [Y.-J. Liu, C.-W. Shu, E. Tadmor, M.-P. Zhang, Central discontinuous Galerkin methods on overlapping cells with a non-oscillatory hierarchical reconstruction, SIAM J. Numer. Anal. 45 (2007) 2442-2467] is applied to the piecewise quadratic discontinuous Galerkin method on two-dimensional unstructured triangular grids. A variety of limiter functions have been explored in the construction of piecewise linear polynomials in every hierarchical reconstruction stage. We show that on triangular grids, the use of center biased limiter functions is essential in order to recover the desired order of accuracy. Several new techniques have been developed in the paper: (a) we develop a WENO-type linear reconstruction in each hierarchical level, which solves the accuracy degeneracy problem of previous limiter functions and is essentially independent of the local mesh structure; (b) we find that HR using partial neighboring cells significantly reduces over/under-shoots, and further improves the resolution of the numerical solutions. The method is compact and therefore easy to implement. Numerical computations for scalar and systems of nonlinear hyperbolic equations are performed. We demonstrate that the procedure can generate essentially non-oscillatory solutions while keeping the resolution and desired order of accuracy for smooth solutions.
[ "hierarchical reconstruction", "discontinuous galerkin methods", "unstructured grids", "hyperbolic conservation laws" ]
[ "P", "P", "P", "M" ]
3nqHrFc
Spatial-temporal model for demand and allocation of waste landfills in growing urban regions
Shortage of land for waste disposal is a serious and growing potential problem in most large urban regions. However, no practical studies have been reported in the literature that incorporate the process of consumption and depletion of landfill space in urban regions over time and analyse its implications for the management of waste. An evaluation of existing models of waste management indicates that they can provide significant insights into the design of solid waste management activities. However, these models do not integrate spatial and temporal aspects of waste disposal that are essential to understand and measure the problem of shortage of land. The lack of adequate models is caused in part due to limitations of the methodologies the existing models are based upon, such as limitations of geographic information systems (GIS) in handling dynamic processes, and the limitations of systems analysis in incorporating spatial physical properties. This indicates that new methods need to be introduced in waste management modelling. Moreover, existing models generally do not link waste management to the process of urban growth. This paper presents a model to spatially and dynamically model the demand for and allocation of facilities for urban solid waste disposal in growing urban regions. The model developed here consists of a loose-coupled system that integrates GIS (geographic information systems) and cellular automata (CA) in order to give it spatial and dynamic capabilities. The model combines three sub-systems: (1) a CA-based model to simulate spatial urban growth over the future; (2) a spread-sheet calculation for designing waste disposal options and hence evaluating demand for landfill space over time; and (3) a model developed within a GIS to evaluate the availability and suitability of land for landfill over time and then simulate allocation of landfills in the available land. The proposed model has been tested and set up with data from a real source (Porto Alegre City, Brazil), and has successfully assessed the demand for landfills and their allocation over time under a range of scenarios of decision-making regarding waste disposal systems, urban growth patterns and land evaluation criteria.
[ "landfill", "waste management", "geographical information systems", "dynamic modelling", "urban solid waste" ]
[ "P", "P", "P", "P", "P" ]
YsC4j6i
Dynamic delamination modelling using interface elements
Existing techniques in explicit dynamic Finite Element (FE) codes for the analysis of delamination in composite structures and components can be simplistic, using simple stress-based failure function to initiate and propagate delaminations. This paper presents an interface modelling technique for explicit FE codes. The formulation is based on damage mechanics and uses only two constants for each delamination mode; firstly, a stress threshold for damage to commence, and secondly, the critical energy release rate for the particular delamination mode. The model has been implemented into the LLNL DYNA3D Finite Element (FE) code and the LS-DYNA3D commercial FE code. The interface element modelling technique is applied to a series of common fracture toughness based delamination problems, namely the DCB, ENF and MMB tests. The tests are modelled using a simple dynamic relaxation technique, and serves to validate the methodology before application to more complex problems. Explicit Finite Elements codes, such as DYNA3D, are commonly used to solve impact type problems. A modified BOEING impact test at two energy levels is used to illustrate the application of the interface element technique, and its coupling to existing in-plane failure models. Simulations are also performed without interface elements to demonstrate the need to include the interface when modelling impact on composite components.
[ "delamination modelling", "finite elements", "impact", "composite failure" ]
[ "P", "P", "P", "R" ]
-Ucrsum
A new Steiner patch based file format for Additive Manufacturing processes
A new Steiner patch based Additive Manufacturing file format has been developed. The Steiner format uses a triangular rational Bezier representation of Steiner patches. The Steiner format has high geometric fidelity and low approximation error. The Steiner patches can be easily sliced, and closed-form solutions can be obtained. AM parts manufactured using the Steiner format have very low profile and form errors.
[ "steiner patches", "additive manufacturing (am)", "standard tessellation language (stl) file", "additive manufacturing file (amf) format", "chordal errors", "geometric dimensioning and tolerancing (gd&t) errors" ]
[ "P", "M", "M", "M", "M", "M" ]
3XXvCGg
Empirical challenges and solutions in constructing a high-performance metasearch engine
Purpose - This paper seeks to disclose the important role of missing documents, broken links and duplicate items in the results merging process of a metasearch engine in detail. It aims to investigate some related practical challenges and proposes some solutions. The study also aims to employ these solutions to improve an existing model for results aggregation. Design/methodology/approach - This research measures the amount of an increase in retrieval effectiveness of an existing results merging model that is obtained as a result of the proposed improvements. The 50 queries of the 2002 TREC web track were employed as a standard test collection based on a snapshot of the worldwide web to explore and evaluate the retrieval effectiveness of the suggested method. Three popular web search engines (Ask, Bing and Google) as the underlying resources of metasearch engines were selected. Each of the 50 queries was passed to all three search engines. For each query the top ten non-sponsored results of each search engine were retrieved. The returned result lists of the search engines were aggregated using a proposed algorithm that takes the practical issues of the process into consideration. The effectiveness of the result lists generated was measured using a well-known performance indicator called "TSAP" (TREC-style average precision). Findings - Experimental results demonstrate that the proposed model increases the performance of an existing results merging system by 14.39 percent on average. Practical implications - The findings of this research would be helpful for metasearch engine designers as well as providing motivation to the vendors of web search engines to improve their technology. Originality/value - This study provides some valuable concepts, practical challenges, solutions and experimental results in the field of web metasearching that have not been previously investigated.
[ "metasearch", "searching", "missing documents", "broken links", "duplicate documents", "data fusion", "rank aggregation", "owa operator", "information searches", "information retrieval" ]
[ "P", "P", "P", "P", "R", "U", "M", "U", "M", "M" ]
1QvVytq
Maintaining awareness using policies; Enabling agents to identify relevance of information
The field of computer supported cooperative work aims at providing information technology models, methods, and tools that assist individuals in cooperating. The presented paper is based on three main observations from the literature. First, one of the problems in utilizing information technology for cooperation is to identify the relevance of information, called awareness. Second, research in computer supported cooperative work proposes the use of agent technologies to aid individuals in maintaining their awareness. Third, the literature lacks formalized methods for how software agents can identify awareness. This paper addresses the problem of awareness identification. The main contribution of this paper is to propose and evaluate a formalized structure, called Policy-based Awareness Management (PAM). PAM extends the logic of general awareness in order to identify the relevance of information. PAM formalizes existing policies into the Directory Enabled Networks-next generation structure and uses them as a source for awareness identification. The formalism is demonstrated by applying PAM to the space shuttle Columbia disaster that occurred in 2003. The paper also argues that the efficacy and cost-efficiency of the logic of general awareness will be increased by PAM. This is evaluated by simulation of hypothetical scenarios as well as a case study. (C) 2011 Elsevier Inc. All rights reserved.
[ "awareness", "policy", "computer supported cooperative work", "intelligent agents" ]
[ "P", "P", "P", "M" ]
-D-U2qy
Extension headers for IPv6 anycast
Anycast is a new communication paradigm defined in IPv6. Different from unicast and multicast routing, routers on the internetwork deliver an anycast datagram to the nearest available node. By shifting the task of resolving destinations from the source node to the internetwork, anycasting is highly flexible and cost-effective in the routing process and inherently load-balanced and robust in server selection. To achieve these objectives, not only "distance" but also other metrics, such as load balance, reliability, QoS, can and should be taken into account in anycast routing. The IPv6 basic header is designed in a simple and fixed-length format for the purpose of efficient forwarding. Extra data and options needed for packet processing are encoded into extension headers. Such a design makes it possible to add extension headers for special purposes. In this paper, we define routing extension headers for IPv6 anycasting to enable various types of anycast routing mechanisms. Scenarios are also provided to demonstrate how to apply them. (c) 2006 Elsevier B.V. All rights reserved.
[ "extension header", "ipv6", "anycasting", "routing header" ]
[ "P", "P", "P", "R" ]
--WvaTH
The calculus of constructions as a framework for proof search with set variable instantiation
We show how a procedure developed by Bledsoe for automatically finding substitution instances for set variables in higher-order logic can be adapted to provide increased automation in proof search in the Calculus of Constructions (CC). Bledsoe's procedure operates on an extension of first-order logic that allows existential quantification over set variables. This class of variables can also be identified in CC. The existence of a correspondence between higher-order logic and higher-order type theories such as CC is well-known. CC can be viewed as an extension of higher-order logic where the basic terms of the language, the simply-typed lambda-terms, are replaced with terms containing dependent types. We show how Bledsoe's techniques can be incorporated into a reformulation of a search procedure for CC given by Dowek and extended to handle terms with dependent types. We introduce a notion of search context for CC which allows us to separate the operations of assumption introduction and backchaining. Search contexts allow a smooth integration of the step which finds solutions to set variables. We discuss how the procedure can be restricted to obtain procedures for set variable instantiation in sublanguages of CC such as the Logical Framework (LF) and higher-order hereditary Harrop formulas (hohh). The latter serves as the logical foundation of the lambda Prolog logic programming language. (C) 2000 Elsevier Science B.V. All rights reserved.
[ "calculus of constructions", "proof search", "type theory", "higher order logic", "set theory" ]
[ "P", "P", "P", "M", "R" ]
2Gmiznz
Learning temporal nodes Bayesian networks
Temporal nodes Bayesian networks (TNBNs) are an alternative to dynamic Bayesian networks for temporal reasoning with much simpler and efficient models in some domains. TNBNs are composed of temporal nodes, temporal intervals, and probabilistic dependencies. However, methods for learning this type of models from data have not yet been developed. In this paper, we propose a learning algorithm to obtain the structure and temporal intervals for TNBNs from data. The method consists of three phases: (i) obtain an initial approximation of the intervals, (ii) obtain a structure using a standard algorithm and (iii) refine the intervals for each temporal node based on a clustering algorithm. We evaluated the method with synthetic data from three different TNBNs of different sizes. Our method obtains the best score using a combined measure of interval quality and prediction accuracy, and a competitive structural quality with lower running times, compared to other related algorithms. We also present a real world application of the algorithm with data obtained from a combined cycle power plant in order to diagnose temporal faults. (C) 2013 Elsevier Inc. All rights reserved.
[ "learning", "bayesian networks", "temporal reasoning" ]
[ "P", "P", "P" ]
:7eYJmo
Solving bilevel programs with the KKT-approach
Bilevel programs (BL) form a special class of optimization problems. They appear in many models in economics, game theory and mathematical physics. BL programs show a more complicated structure than standard finite problems. We study the so-called KKT-approach for solving bilevel problems, where the lower level minimality condition is replaced by the KKT- or the FJ-condition. This leads to a special structured mathematical program with complementarity constraints. We analyze the KKT-approach from a generic viewpoint and reveal the advantages and possible drawbacks of this approach for solving BL problems numerically.
[ "bilevel problems", "mathematical programs with complementarity constraints", "genericity", "kkt-condition", "fj-condition", "critical points" ]
[ "P", "P", "P", "U", "U", "U" ]
Yu9x4JT
Modeling and evaluating of typical advanced peer-to-peer botnet
In this paper, we present a general model for an advanced peer-to-peer (P2P) botnet, in which the performance of the botnet can be systematically studied. From the model, we can derive five performance metrics to describe the robustness, security and efficiency of the botnet. Additionally, we analyze the relationship between the performance metrics and the model feature metrics of the botnet, and it is helpful to study the botnet under different model feature metrics. Furthermore, the proposed model can be easily applied to other types of botnets. Finally, taking the robustness and security into consideration, an optimization scheme for designing an optimal P2P botnet is proposed.
[ "modeling", "peer-to-peer", "botnet", "optimization scheme" ]
[ "P", "P", "P", "P" ]
BXnLubw
Learning a coverage set of maximally general fuzzy rules by rough sets
Expert systems have been widely used in domains where mathematical models cannot be easily built, human experts are not available or the cost of querying an expert is high. Machine learning or data mining can extract desirable knowledge or interesting patterns from existing databases and ease the development bottleneck in building expert systems. In the past we proposed a method [Hong, T.P., Wang, T.T., Wang, S.L. (2000). Knowledge acquisition from quantitative data using the rough-set theory. Intelligent Data Analysis (in press).], which combined the rough set theory and the fuzzy set theory to produce all possible fuzzy rules from quantitative data. In this paper, we propose a new algorithm to deal with the problem of producing a set of maximally general fuzzy rules for coverage of training examples from quantitative data. A rule is maximally general if no other rule exists that is both more general and with larger confidence than it. The proposed method first transforms each quantitative value into a fuzzy set of linguistic terms using membership functions and then calculates the fuzzy lower approximations and the fuzzy upper approximations. The maximally general fuzzy rules are then generated based on these fuzzy approximations by an iterative induction process. The rules derived can then be used to build a prototype knowledge base in a fuzzy expert system.
[ "rough set", "expert system", "machine learning", "data mining", "fuzzy set" ]
[ "P", "P", "P", "P", "P" ]
1xg-zAE
An ontological conceptualization approach for awareness in domain-independent collaborative modeling systems: Application to a model-driven development method
One of the most important aspects of collaborative systems is the concept of awareness, which refers to the perception and knowledge of the group and its activities. Support for the design and automatic development of awareness mechanisms within collaborative systems is hard to find. Furthermore, awareness conceptualizations are usually partial and differ greatly between the proposals of different authors. In response to these problems, we propose an awareness ontology that conceptualizes some of the most important aspects of awareness in a specific kind of system: collaborative systems for carrying out modeling activities. The awareness ontology brings together and extends a series of ontologies we have developed in the past. The ontology is prepared to better meet the specific implementation needs of a model-driven development approach. In order to validate the usefulness of this ontology, we relate its concepts to the awareness dimensions set out in Gutwin and Greenberg's framework, and we apply the ontology to two systems presently in use.
[ "ontologies", "awareness", "collaborative modeling", "cscw", "collaborative systems development" ]
[ "P", "P", "P", "U", "R" ]
1LxyBUM
Approximating k-node connected subgraphs via critical graphs
We present two new approximation algorithms for the problem of finding a k-node connected spanning subgraph (directed or undirected) of minimum cost. The best known approximation guarantees for this problem were O(min{k, n/√(n−k)}) for both directed and undirected graphs, and O(ln k) for undirected graphs with n ≥ 6k², where n is the number of nodes in the input graph. Our first algorithm has approximation ratio O((n/(n−k))·ln² k), which is O(ln² k) except for very large values of k, namely, k = n − o(n). This algorithm is based on a new result on l-connected p-critical graphs, which is of independent interest in the context of graph theory. Our second algorithm uses the primal-dual method and has approximation ratio O(√n·ln k) for all values of n, k. Combining these two gives an algorithm with approximation ratio O(ln k · min{√k, (n/(n−k))·ln k}), which asymptotically improves the best known approximation guarantee for directed graphs for all values of n, k, and for undirected graphs for k > √(n/6). Moreover, this is the first algorithm that has an approximation guarantee better than Θ(k) for all values of n, k. Our approximation ratio also provides an upper bound on the integrality gap of the standard LP-relaxation.
[ "approximation", "connectivity", "graphs", "network design" ]
[ "P", "P", "P", "U" ]
-JMKXQt
Development of an AutoWEP distributed hydrological model and its application to the upstream catchment of the Miyun Reservoir
Based on the physically characterized distributed hydrological modeling scheme WEP-L, a more generalized and expandable method, AutoWEP, has been developed that is equipped with updated modules for pre-processing and automatic parameter identification. Sub-basin scale classifications of land use and soil are undertaken by incorporating remote sensing data and geographic information system techniques. In the process of developing the AutoWEP modeling scheme, a new concept of parameter partitioning is proposed and an automatic delineation of parameter partitions is achieved through programming. The sensitivity analysis algorithm, LH-OAT, and the parameter optimization algorithm, SCE-UA, are embedded in the model. Its application to the upstream watershed of the Miyun Reservoir shows that AutoWEP features time savings, improved efficiency and suitable generalization, resulting in a long series of acceptable simulations.
[ "parameter identification", "autowep modeling", "parameter partition", "sensitivity analysis", "parameter optimization" ]
[ "P", "P", "P", "P", "P" ]
zrZAUcg
towards insider threat detection using web server logs
Malicious insiders represent one of the most difficult categories of threats an organization must consider when mitigating operational risk. Insiders by definition possess elevated privileges; have knowledge about control measures; and may be able to bypass security measures designed to prevent, detect, or react to unauthorized access. In this paper, we discuss our initial research efforts focused on the detection of malicious insiders who exploit internal organizational web servers. The objective of the research is to apply lessons learned in network monitoring domains and enterprise log management to investigate various approaches for detecting insider threat activities using standardized tools and a common event expression framework.
[ "insider threat", "insider threat detection", "web server logs", "log management", "common event expression" ]
[ "P", "P", "P", "P", "P" ]
2YH1igW
weight similarity measurement model based, object oriented approach for bug databases mining to detect similar and duplicate bugs
In this paper data mining is applied on a bug database to discover similar and duplicate bugs. Whenever a new bug is entered in the bug database through the bug tracking system, it is matched against the existing bugs, and duplicate and similar bugs are mined from the bug database. Similar kinds of bugs are resolved in almost the same manner, so if a new bug is found to be similar to an existing bug that has already been resolved, its resolution will take less time, since part of the bug analysis is similar to that of the existing one; hence it saves time. Traditionally, developers have had to manually identify duplicate bug reports, but this identification process is time-consuming and exacerbates the already high cost of software maintenance. So if similar and duplicate bugs can be found using some approach, it will be a cost- and time-saving activity. Based on this concept, a weight similarity measurement model based, object-oriented approach is described in this paper to discover similar and duplicate bugs in the bug database.
[ "similarity measurement", "duplicate bug", "information retrieval", "bug object" ]
[ "P", "P", "U", "R" ]
-cY72Z2
recursive modeling for completed code generation
Model-Driven Development is promising for software development because it can reduce the complexity and cost of developing large software systems. The basic idea is the use of different kinds of models during the software development process, transformations between them, and automatic code generation at the end of the development. But unlike the structural parts, fully-automated code generation from the behavioral parts is still hard and, if it works at all, is restricted to specific application areas using a domain-specific language (DSL). This paper proposes an approach to model the behavioral parts of a system and to embed them into the structural models. The underlying idea is the recursive refinement of activity elements in an activity diagram. With this, the detail of the generated code depends on the depth at which the refinements are done, i.e. if the lowest level of activities is mapped into activity executors, the completed code can be obtained.
[ "recursive modeling", "code generation", "activity executor ", "mdd" ]
[ "P", "P", "M", "U" ]
28AoKd2
On the theoretical comparison of low-bias steady-state estimators
The time-average estimator is typically biased in the context of steady-state simulation, and its bias is of order 1/t, where t represents simulated time. Several "low-bias" estimators have been developed that have a lower-order bias and, to first order, the same variance as the time-average. We argue that this kind of first-order comparison is insufficient, and that a second-order asymptotic expansion of the mean square error (MSE) of the estimators is needed. We provide such an expansion for the time-average estimator in both the Markov and regenerative settings. Additionally, we provide a full bias expansion and a second-order MSE expansion for the Meketon-Heidelberger low-bias estimator, and show that its MSE can be asymptotically higher or lower than that of the time-average depending on the problem. The situation is different in the context of parallel steady-state simulation, where a reduction in bias that leaves the first-order variance unaffected is arguably an improvement in performance.
[ "steady-state simulation", "low-bias estimators", "mean-square error expansion" ]
[ "P", "P", "M" ]
4QiSWvY
The multisymplectic numerical method for the Gross-Pitaevskii equation
For a Bose-Einstein condensate placed in a rotating trap and confined along the z-axis, a multisymplectic difference scheme was constructed to investigate the evolution of vortices in this paper. First, we look for a steady-state solution of the imaginary-time G-P equation. Then, we numerically study the development of the vortices in real time, starting from the imaginary-time solution as the initial value.
[ "boseeinstein condensate", "vortices", "multisymplectic methods", "two-dimensional g-p equation" ]
[ "P", "P", "R", "M" ]
FTnn7Nh
SYSTEM-DESIGN, DATA-COLLECTION AND EVALUATION OF A SPEECH DIALOG SYSTEM
This paper describes design issues of a speech dialogue system, the evaluation of the system, and the collection of spontaneous speech data in a transportation guidance domain. As it is difficult to collect spontaneous speech and to use a real system for the collection and evaluation, the phenomena related to dialogues have not been quantitatively clarified yet. The authors constructed a speech dialogue system which operates in almost real time, with acceptable recognition accuracy and flexible dialogue control. The system was used for spontaneous speech collection in a transportation guidance domain. The system performance evaluated in the domain is an understanding rate of 84.2% for utterances within the predefined grammar and lexicon. Some statistics of the collected spontaneous speech are also given.
[ "speech dialog system", "spontaneous speech", "continuous speech recognition", "speech understanding" ]
[ "P", "P", "M", "R" ]
-1qyuCc
abstract convex evolutionary search
Geometric crossover is a formal class of crossovers which includes many well-known recombination operators across representations. In this paper, we present a general result showing that all evolutionary algorithms using geometric crossover with no mutation perform the same form of convex search regardless of the underlying representation, the specific selection mechanism, the specific offspring distribution, the specific search space, and the problem at hand. We then start investigating a few representation/space-independent geometric conditions on the fitness landscape - various forms of generalized concavity - that when matched with the convex evolutionary search guarantee, to different extents, improvement of offspring over parents for any choice of parents. This is a first step towards showing that the convexity relation between search and landscape may play an important role towards explaining the performance of evolutionary algorithms in a general setting across representations.
[ "representations", "evolutionary algorithms", "convex search" ]
[ "P", "P", "P" ]
3cuHS6K
Area measurement of large closed regions with a mobile robot
How can a mobile robot measure the area of a closed region that is beyond its immediate sensing range? This problem, which we call blind area measurement, is inspired by scout worker ants who assess potential nest cavities. We first review the insect studies that have shown that these scouts, who work in the dark, seem to assess arbitrary closed spaces and reliably reject nest sites that are too small for the colony. We briefly describe the hypothesis that these scouts use "Buffon's needle method" to measure the area of the nest. Then we evaluate and analyze this method for mobile robots to measure large closed regions. We use a simulated mobile robot system to evaluate the performance of the method through systematic experiments. The results showed that the method can reliably measure the area of large and rather open, closed regions regardless of their shape and compactness. Moreover, the method's performance seems to be undisturbed by the existence of objects and by partial barriers placed inside these regions. Finally, at a smaller scale, we partially verified some of these results on a real mobile robot platform.
[ "area measurement", "mobile robot", "ants", "buffon's needle", "stigmergy", "area coverage" ]
[ "P", "P", "P", "P", "U", "M" ]
2CuTKAN
Minimum pilot power for service coverage in WCDMA networks
Pilot power management is an important issue for efficient resource utilization in WCDMA networks. In this paper, we consider the problem of minimizing pilot power subject to a coverage constraint. The constraint can be used to model various levels of coverage requirement, among which full coverage is a special case. The pilot power minimization problem is NP-hard, as it generalizes the set covering problem. Our solution approach for this problem consists of mathematical programming models and methods. We present a linear-integer mathematical formulation for the problem. To solve the problem for large-scale networks, we propose a column generation method embedded into an iterative rounding procedure. We apply the proposed method to a range of test networks originating from realistic network planning scenarios, and compare the results to those obtained by two ad hoc approaches. The numerical experiments show that our algorithm is able to find near-optimal solutions with a reasonable amount of computing effort for large networks. Moreover, optimized pilot power considerably outperforms the ad hoc approaches, demonstrating that efficient pilot power management is an important component of radio resource optimization. As another part of our numerical study, we examine the trade-off between service coverage and pilot power consumption.
[ "pilot power", "coverage", "wcdma", "optimization" ]
[ "P", "P", "P", "P" ]
2&NExza
ASYMPTOTICALLY STABLE MULTI-VALUED MANY-TO-MANY ASSOCIATIVE MEMORY NEURAL NETWORK AND ITS APPLICATION IN IMAGE RETRIEVAL
As an important artificial neural network, the associative memory model can be employed to mimic human thinking and machine intelligence. In this paper, first, a multi-valued many-to-many Gaussian associative memory model (M(3)GAM) is proposed by introducing the Gaussian unidirectional associative memory model (GUAM) and the Gaussian bidirectional associative memory model (GBAM) into Hattori et al.'s multi-module associative memory model ((MMA)(2)). Second, the M(3)GAM's asymptotical stability is proved theoretically in both synchronous and asynchronous update modes, which ensures that the stored patterns become the M(3)GAM's stable points. Third, by substituting the general similarity metric for the negative squared Euclidean distance in M(3)GAM, the generalized multi-valued many-to-many Gaussian associative memory model (GM(3)GAM) is presented, which makes the M(3)GAM become its special case. Finally, we investigate the M(3)GAM's application in association-based image retrieval, and the computer simulation results verify the M(3)GAM's robust performance.
[ "artificial neural network", "associative memory model", "asymptotical stability", "similarity metric", "association-based image retrieval" ]
[ "P", "P", "P", "P", "P" ]
1JnzCoW
Identity-Based Threshold Proxy Signature from Bilinear Pairings
Delegation of rights is a common practice in the real world. We present two identity-based threshold proxy signature schemes, which allow an original signer to delegate her signing capability to a group of n proxy signers, and require a consensus of t or more proxy signers in order to generate a valid signature. In addition to being identity-based, privacy protection for proxy signers and security assurance are two distinct features of this work. Our first scheme provides partial privacy protection to proxy signers such that all signers' identities are revealed, whereas none of those t participating signers is specified. On the other hand, all proxy signers remain anonymous in the second scheme. This provides full privacy protection to all proxy signers; however, each valid signature contains a tag that allows one to trace all the participating proxy signers. Both of our proposed schemes are secure against forgery under chosen-message attack, and satisfy many other necessary conditions for proxy signature.
[ "threshold", "proxy signature", "privacy protection", "identity-based signature", "authentication" ]
[ "P", "P", "P", "R", "U" ]
3kzynr:
Semi-autonomous navigation of a robotic wheelchair
The present work considers the development of a wheelchair for people with special needs, which is capable of navigating semi-autonomously within its workspace. This system is expected to prove useful to people with impaired mobility and limited fine motor control of the upper extremities. Among the implemented behaviors of this robotic system are the avoidance of obstacles, the motion in the middle of the free space and the following of a moving target specified by the user (e.g., a person walking in front of the wheelchair). The wheelchair is equipped with sonars, which are used for distance measurement in preselected critical directions, and with a panoramic camera with a 360 degree field of view, which is used for following a moving target. After suitably processing the color sequence of the panoramic images using the color histogram of the desired target, the orientation of the target with respect to the wheelchair is determined, while its distance is determined by the sonars. The motion control laws developed for the system use the sensory data and take into account the non-holonomic kinematic constraints of the wheelchair, in order to guarantee certain desired features of the closed-loop system, such as stability. Moreover, they are as simplified as possible to minimize implementation requirements. An experimental prototype has been developed at ICS-FORTH, based on a commercially-available wheelchair. The sensors, the computing power and the electronics needed for the implementation of the navigation behaviors and of the user interfaces (touch screen, voice commands) were developed as add-on modules and integrated with the wheelchair.
[ "wheelchairs", "panoramic cameras", "robot navigation", "non-holonomic mobile robots", "person following", "sensor-based control" ]
[ "P", "P", "R", "R", "R", "M" ]
-c1ydxR
Learning protein secondary structure from sequential and relational data
We propose a method for sequential supervised learning that exploits explicit knowledge of short- and long-range dependencies. The architecture consists of a recursive and bi-directional neural network that takes as input a sequence along with an associated interaction graph. The interaction graph models (partial) knowledge about long-range dependency relations. We tested the method on the prediction of protein secondary structure, a task in which relations due to beta-strand pairings and other spatial proximities are known to have a significant effect on the prediction accuracy. In this particular task, interactions can be derived from knowledge of protein contact maps at the residue level. Our results show that prediction accuracy can be significantly boosted by the integration of interaction graphs.
[ "protein contact maps", "recursive neural networks", "relational learning", "protein secondary structure prediction" ]
[ "P", "R", "R", "R" ]
kUvA2K-
Document replication strategies for geographically distributed web search engines
Large-scale web search engines are composed of multiple data centers that are geographically distant to each other. Typically, a user query is processed in a data center that is geographically close to the origin of the query, over a replica of the entire web index. Compared to a centralized, single-center search engine, this architecture offers lower query response times as the network latencies between the users and data centers are reduced. However, it does not scale well with increasing index sizes and query traffic volumes because queries are evaluated on the entire web index, which has to be replicated and maintained in all data centers. As a remedy to this scalability problem, we propose a document replication framework in which documents are selectively replicated on data centers based on regional user interests. Within this framework, we propose three different document replication strategies, each optimizing a different objective: reducing the potential search quality loss, the average query response time, or the total query workload of the search system. For all three strategies, we consider two alternative types of capacity constraints on index sizes of data centers. Moreover, we investigate the performance impact of query forwarding and result caching. We evaluate our strategies via detailed simulations, using a large query log and a document collection obtained from the Yahoo! web search engine.
[ "document replication", "web search", "query forwarding", "result caching", "distributed information retrieval", "query processing" ]
[ "P", "P", "P", "P", "M", "R" ]
4&FmBa4
Collision correction using a cross-layer design architecture for dedicated short range communications vehicle safety messaging
This paper presents a new physical (PHY) and medium access control (MAC) cross-layer design frame collision correction (CC) architecture for correction of Dedicated Short Range Communications (DSRC) safety messages. Conditions suitable for the use of this design are presented, which can be used for optimization. At its basic level, the CC at the PHY uses a new decision making block that uses information from the MAC layer for the channel estimator and equalizer. This requires a cache of previously received frames, and pre-announcing frame repetitions from the MAC. We present the theoretical equations behind the CC mechanism, and describe the components required to implement the cross-layer CC using deployment and sequence diagrams. Simulation results show that, especially under high user load, reception reliability of the DSRC safety messages increases and the packet error rate (PER) decreases.
[ "cross-layer design", "vehicle safety", "dsrc", "collision mitigation", "physical layer", "ofdm" ]
[ "P", "P", "P", "M", "R", "U" ]
-Z4ZK2Z
A general model of unit testing efficacy
Much of software engineering is targeted towards identifying and removing existing defects while preventing the injection of new ones. Defect management is therefore one important software development process whose principal aim is to ensure that the software produced reaches the required quality standard before it is shipped into the marketplace. In this paper, we report on the results of research conducted to develop a predictive model of the efficacy of one important defect management technique, that of unit testing. We have taken an empirical approach. We commence with a number of assumptions that led to a theoretical model which describes the relationship between effort expended and the number of defects remaining in a software code module tested (the latter measure being termed correctness). This model is general enough to capture the possibility that debugging of a software defect is not perfect and could lead to new defects being injected. The model is examined empirically against actual data and validated as a good predictive model under specific conditions. The work has been done in such a way that models are derived not only for the case of overall correctness but also for specific types of correctness, such as correctness arising from the removal of defects contributing to shortcomings in reliability (R-type), functionality (F-type), usability (U-type) and maintainability (M-type) aspects of the program subject to defect management.
[ "defect management", "reliability", "functionality", "usability", "maintainability", "software process", "software quality", "process efficacy", "unit testing efficacy model" ]
[ "P", "P", "P", "P", "P", "R", "R", "R", "R" ]
2Zfa8iK
Constitutive modeling of materials and contacts using the disturbed state concept: Part 1 Background and analysis
Computer methods have opened a new era for accurate and economic analysis and design of engineering problems. They account for many significant factors such as arbitrary geometries, nonhomogeneities in material composition, complex boundary conditions, nonlinear material behavior (constitutive modeling) and complex loading conditions, which were difficult to include in conventional and closed form solution procedures. Constitutive modeling characterizes the mechanical behavior of solids and contacts (e.g. interfaces and joints), and plays perhaps the most important role for realistic solutions from procedures in computational mechanics. A great number of constitutive models, from simple to the advanced, have been proposed. Most of them account for specific characteristics of the material. However, a deforming material may experience, simultaneously, many characteristics such as elastic, plastic and creep strains, different loading (stress) paths, volume change under shear stress, microcracking leading to fracture and failure, strain softening or degradation, and healing or strengthening. Hence, there is a need for developing unified models that account for these characteristics. The main objective of these two papers is to present a brief review of the available constitutive models, and identify their capabilities and limitations; then a novel and unified approach, called the disturbed state concept (DSC) with hierarchical single surface (HISS) plasticity, is presented including its theoretical background, constitutive parameters and their determination, and validation at the test specimen and boundary value problem levels. The general capabilities of the DSC/HISS approach are emphasized by its application for a wide range of materials and contacts (interfaces and joints). Because of its generality, the DSC contains many previous models as special cases. The presentation is divided in two papers. This paper (Part 1) contains the review of various models, and then description of the DSC/HISS model and its analysis for issues such as mesh dependence and localization. Part 1 also contains the capability of the DSC/HISS model to define the behavior of both solids and contacts. Validations of the DSC/HISS model at the specimen and boundary value problem levels for a wide range of materials and contacts are included in the compendium paper, Part 2. The idea of the DSC is considered to be relatively simple, and it can be easily implemented in computer procedures. It is believed that the DSC can provide a realistic and unified approach for constitutive modeling for a wide range of materials and contacts.
[ "constitutive modeling", "computer methods", "solids", "interfaces", "applications", "unified dsc model" ]
[ "P", "P", "P", "P", "P", "R" ]
ngFxHcc
An Efficient Neumann Series-Based Algorithm for Thermoacoustic and Photoacoustic Tomography with Variable Sound Speed
We present an efficient algorithm for reconstructing an unknown source in thermoacoustic and photoacoustic tomography based on the recent advances in understanding the theoretical nature of the problem. We work with variable sound speeds that also might be discontinuous across some surface. The latter problem arises in brain imaging. The algorithmic development is based on an explicit formula in the form of a Neumann series. We present numerical examples with nontrapping, trapping, and piecewise smooth speeds, as well as examples with data on a part of the boundary. These numerical examples demonstrate the robust performance of the Neumann series-based algorithm.
[ "neumann series", "photoacoustic tomography", "variable sound speed", "thermoacoustic tomography", "inverse problems" ]
[ "P", "P", "P", "R", "M" ]
1kK-jLS
Documentary genre and digital recordkeeping: red herring or a way forward?
The purpose of this paper is to provide a preliminary assessment of the utility of the genre concept for digital recordkeeping. The exponential growth in the volume of records created since the 1940s has been a key motivator for the development of strategies that do not involve the review or processing of individual documents or files. Automation now allows processes at a level of granularity that is rarely, if at all, possible in the case of manual processes, without loss of cognisance of context. For this reason, it is timely to revisit concepts that may have been disregarded because of a perceived limited effectiveness in contributing anything to theory or practice. In this paper, the genre concept and its employability in the management of current and archival digital records are considered, as a form of social contextualisation of a document and as an attractive entry point of granularity at which to implement automation of appraisal processes. Particular attention is paid to the structurational view of genre and its connections with recordkeeping theory.
[ "genre", "structurational theory", "recordkeeping continuum" ]
[ "P", "R", "M" ]
1vgp:tZ
Existence and multiplicity of positive periodic solutions for a class of higher-dimension functional differential equations with impulses
This paper deals with the existence of multiple periodic solutions for n-dimensional functional differential equations with impulses. By employing the Krasnoselskii fixed point theorem, we obtain some easily verifiable sufficient criteria which extend previous results.
[ "positive periodic solution", "functional differential equations", "impulse", "the krasnoselskii fixed point theorem" ]
[ "P", "P", "P", "P" ]
3b13sgc
Modelling and querying geographical data warehouses
A number of proposals for integrating geographical (Geographical Information Systems-GIS) and multidimensional (data warehouse-DW and online analytical processing-OLAP) processing are found in the database literature. However, most of the current approaches do not take into account the use of a GDW (geographical data warehouse) metamodel or query language to make available the simultaneous specification of multidimensional and spatial operators. To address this, this paper discusses the UML class diagram of a GDW metamodel and proposes its formal specifications. We then present a formal metamodel for a geographical data cube and propose the Geographical Multidimensional Query Language (GeoMDQL) as well. GeoMDQL is based on well-known standards such as the MultiDimensional eXpressions (MDX) language and the OGC simple features specification for SQL, and has been specifically defined for spatial OLAP environments based on a GDW. We also present the GeoMDQL syntax and a discussion regarding the taxonomy of GeoMDQL query types. Additionally, aspects related to the GeoMDQL architecture implementation are described, along with a case study involving the Brazilian public healthcare system in order to illustrate the proposed query language.
[ "geographical data warehouse", "solar", "geographical and multidimensional query language (geomdql)" ]
[ "P", "U", "R" ]
58Fktfb
DDAS: Distance and direction awareness system for intelligent vehicles
Wireless technology has been widely used for applications of wireless Internet access. With matured wireless transmission technology, the new demand for wireless applications is toward deploying wireless devices on transportation systems such as buses, trains and vehicles. Statistics of car accident cases show that accidents are often caused by drivers not noticing other approaching cars while driving. Without the assistance of an automotive personal computer system (also called an Auto PC), during high-speed driving the driver must rely on himself/herself to watch for all vehicles around via limited vision and acoustic recognition. If the Auto PC is able to provide useful surrounding information, such as the directions of and distances to nearby vehicles, to drivers, many unnecessary collisions could clearly be avoided, especially when changing lanes, crossing intersections and making turns. In this paper, we introduce the concept of an automatic distance and direction awareness system (DDAS) and describe the designed embedded DDAS integrated with three-wheel and four-wheel robot cars.
[ "vehicle", "wireless", "embedded", "smart antenna", "zigbee" ]
[ "P", "P", "P", "U", "U" ]
14Tu3yV
Investigating models for preservice teachers' use of technology to support student-centered learning
The study addressed two limitations of previous research on factors related to teachers' integration of technology in their teaching. It attempted to test a structural equation model (SEM) of the relationships among a set of variables influencing preservice teachers' use of technology specifically to support student-centered learning. A review of the literature led to a path model that provided the design and analysis for the study, which involved 206 preservice teachers in the United States. The results show that the proposed model had a moderate fit to the observed data, and a more parsimonious model was found to have a better fit. In addition, preservice teachers' self-efficacy of teaching with technology had the strongest influence on technology use, which was mediated by their perceived value of teaching and learning with technology. The school's contextual factors had a moderate influence on technology use. Moreover, the effect of preservice teachers' training on student-centered technology use was mediated by both perceived value and self-efficacy of technology. The implications for teacher preparation include close collaboration between the teacher education program and field experience, focusing on specific technology uses.
[ "elementary education", "improving classroom teaching", "pedagogical issues", "secondary education" ]
[ "M", "M", "U", "M" ]
8Rd4yDS
a risc approach to process groups
ISIS [1], developed at Cornell University, is a system for building applications consisting of cooperating, distributed processes. Group management and group communication are two basic building blocks provided by ISIS. ISIS has been very successful, and there is currently a demand for a version that will run on many different environments and transport protocols, and will scale to many process groups. Furthermore, performance is an important issue. For this purpose, ISIS is being redesigned and rebuilt from scratch [2]. Of particular importance to us is getting the new ISIS system to run well on modern microkernel technology, notably MACH [3] and Chorus [4]. The basic reasoning behind these plans is that microkernels appear to offer satisfactory support for memory management and communication between processes on the same machine, but that support for applications that run on multiple machines is weak. The current IPC mechanisms are adequate only for the simpler distributed applications, as they do not address any of the internal management issues of distribution. The new ISIS system has several well-defined layers. The lowest layers, which implement multicast transport and failure detection, are near completion and currently run on SUN OS using SUN LWP threads, on MACH using C Threads, and on the x-kernel [5]. This system can use several different network protocols at the same time, such as IP, UDP (with or without multicast support), and raw Ethernet. This enables processes on SUN OS, MACH, and Chorus to multicast among each other, even though the environments are very dissimilar. The system makes use of available hardware multicast if possible. It also queues messages if a backlog appears, so that multiple messages may be packed together in a single packet. Using this strategy, the number of messages per second can become very large, and in the current (simple) implementation about 10,000 per second can be sent between distributed SUN OS user processes, a figure that approaches the speed of local light-weight remote procedure call mechanisms. (The current round-trip time on SUN OS over Ethernet is about 3 milliseconds.)
[ "process", "group", "systems", "applications", "distributed", "management", "group communication", "communication", "building block", "version", "environments", "transport protocol", "transport", "scale", "performance", "scratch", "technologies", "reasoning", "support", "memorialized", "distributed application", "addressing", "intern", "layer", "implementation", "multicast", "failure", "detection", "completeness", "use", "thread", "network protocol", "timing", "ethernet", "hardware", "queue", "message", "strategies", "user", "locality", "lighting" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U" ]
YzeCk2N
ON AFFINE SCALING ALGORITHMS FOR NONCONVEX QUADRATIC-PROGRAMMING
We investigate the use of interior algorithms, especially the affine-scaling algorithm, to solve nonconvex - indefinite or negative definite - quadratic programming (QP) problems. Although the nonconvex QP with a polytope constraint is a "hard" problem, we show that the problem with an ellipsoidal constraint is "easy". When the "hard" QP is solved by successively solving the "easy" QP, the sequence of points monotonically converges to a feasible point satisfying both the first and the second order optimality conditions.
[ "interior algorithms", "affine-scaling algorithm", "nonconvex quadratic programming", "np-hard problems" ]
[ "P", "P", "R", "M" ]
ZAT345o
Multi-agent simulation of group behavior in E-Government policy decision
To study complex group behavior in E-Government policy decisions, this paper proposes a multi-agent qualitative simulation approach using EGGBM (E-Government Group Behavior Model). Causal reasoning is employed to analyze it from a systems perspective. Then, a multi-agent simulation decision system based on Java-Repast is developed. Moreover, three validation experiments are designed to show that EGGBM can accurately represent the actual situation. Finally, an example application is given to show that this method can help policy-makers choose appropriate policies to improve the level of accepting information technology (LAIT) of groups. This approach could thus be a new avenue for research on group behavior in governmental organizations.
[ "multi-agent", "group behavior", "e-government", "causal reasoning", "repast" ]
[ "P", "P", "P", "P", "U" ]
3q4&BPY
Independent component analysis for unaveraged single-trial MEG data decomposition and single-dipole source localization
This paper presents a novel method for decomposing and localizing unaveraged single-trial magnetoencephalographic data based on the independent component analysis (ICA) approach associated with pre- and post-processing techniques. In the pre-processing stage, recorded single-trial raw data are first decomposed into uncorrelated signals with the reduction of high-power additive noise. In the stage of source separation, the decorrelated source signals are further decomposed into independent source components. In the post-processing stage, we perform a source localization procedure to seek a single-dipole map of decomposed individual source components, e.g., evoked responses. The first results of applying the proposed robust ICA approach to single-trial data with phantom and auditory evoked field tasks indicate the following. (1) A source signal is successfully extracted from unaveraged single-trial phantom data. The accuracy of dipole estimation for the decomposed source is even better than that of taking the average of total trials. (2) Not only the behavior and location of individual neuronal sources can be obtained but also the activity strength (amplitude) of evoked responses corresponding to a stimulation trial can be obtained and visualized. Moreover, the dynamics of individual neuronal sources, such as the trial-by-trial variations of the amplitude and location, can be observed.
[ "single-dipole source localization", "independent component analysis (ica)", "magnetoencephalography (meg)", "single-trial data analysis", "phantom experiment", "auditory evoked fields (aef)", "robust pre-whitening technique" ]
[ "P", "P", "M", "R", "M", "M", "M" ]
3noVnBN
The effects of learning style and hypermedia prior experience on behavioral disorders knowledge and time on task: a case-based hypermedia environment
This study involved 17 graduate students enrolled in a Behavioral Disorders course. As a part of the course, they engaged in an extensive case-based hypermedia program designed to enhance their ability to solve student emotional and behavioral problems. Results include: (1) students increased their knowledge about behavioral disorders; (2) those students with more hypermedia experience spent more time using the hypermedia program; (3) those students who acquired greater knowledge also wrote better student reports; and (4) students, regardless of learning style (as measured by Kolb's Learning Style Inventory), benefited equally from using the hypermedia program.
[ "learning style", "hypermedia", "behavioral disorders" ]
[ "P", "P", "P" ]
1Ahe3pY
Rough Sets, Coverings and Incomplete Information
Rough sets are often induced by descriptions of objects based on the precise observations of an insufficient number of attributes. In this paper, we study generalizations of rough sets to incomplete information systems, involving imprecise observations of attributes. The precise role of covering-based approximations of sets that extend the standard rough sets in the presence of incomplete information about attribute values is described. In this setting, a covering encodes a set of possible partitions of the set of objects. A natural semantics of two possible generalisations of rough sets to the case of a covering (or a non transitive tolerance relation) is laid bare. It is shown that uncertainty due to granularity of the description of sets by attributes and uncertainty due to incomplete information are superposed, whereby upper and lower approximations themselves (in Pawlak's sense) become ill-known, each being bracketed by two nested sets. The notion of measure of accuracy is extended to the incomplete information setting, and the generalization of this construct to fuzzy attribute mappings is outlined.
[ "rough sets", "covering", "possibility theory", "fuzzy sets" ]
[ "P", "P", "M", "R" ]
-27KFfE
exploiting power budgeting in thermal-aware dynamic placement for reconfigurable systems
In this paper, a novel thermal-aware dynamic placement planner for reconfigurable systems is presented, which targets transient temperature reduction. Rather than solving time-consuming differential equations to obtain the hotspots, we propose a fast and accurate heuristic model based on power budgeting to plan the dynamic placements of the design statically, while considering the boundary conditions. Based on our heuristic model, we have developed a fast optimization technique to plan the dynamic placements at design time. Our results indicate that our technique is two orders of magnitude faster while the quality of the placements generated in terms of temperature and interconnection overhead is the same, if not better, compared to the thermal-aware placement techniques which perform thermal simulations inside the search engine.
[ "placement", "reconfigurable systems", "temperature", "dynamic reconfiguration", "computer aided design" ]
[ "P", "P", "P", "R", "M" ]
-GQhT2A
Optimized independent components for parameter regression
In this paper, a modified ICR algorithm is proposed for quality prediction. The disadvantage of the original Independent Component Regression (ICR) is that the extracted Independent Components (ICs) are not informative for quality prediction and interpretation. In the proposed method, to enhance the causal relationship between the extracted ICs and the quality variables, a dual-objective optimization which combines the cost function w^T X^T Y v in Partial Least Squares (PLS) and the approximations of negentropy in Independent Component Analysis (ICA) is constructed in the first step for feature extraction. It simultaneously considers both quality-correlation and independence, and then the ICR-MLR (Multiple Linear Regression) method is used to obtain the regression coefficients. The proposed method is applied to quality prediction in a continuous annealing process and the Tennessee Eastman process. The applications indicate that the proposed approach effectively captures the relations in the process variables, and use of the proposed method instead of the original PLS and ICR improves the regression matching and prediction ability.
[ "pls", "negentropy", "ica", "feature extraction" ]
[ "P", "P", "P", "P" ]
4VVXQYH
Discriminant Bag of Words based representation for human action recognition
Human action recognition based on Bag of Words representation. Discriminant codebook learning for better action class discrimination. Unified framework for the determination of both the optimized codebook and linear data projections.
[ "bag of words", "codebook learning", "discriminant learning" ]
[ "P", "P", "R" ]
Aop:Qu:
Unsupervised connectionist algorithms for clustering an environmental data set: A comparison
Various unsupervised algorithms for vector quantization can be found in the literature. Being based on different assumptions, they do not all yield exactly the same results on the same problem. To better understand these differences, this article presents an evaluation of some unsupervised neural networks, considered among the most useful for quantization, in the context of a real-world problem: radioelectric wave propagation. Radio wave propagation is highly dependent upon environmental characteristics (e.g. those of the city, country, mountains, etc.). Within the framework of a cell net planning its radiocommunication strategy, we are interested in determining a set of environmental classes, sufficiently homogeneous, to which a specific prediction model of radio electrical field can be applied. Of particular interest are techniques that allow improved analysis of results. Firstly, Mahalanobis distance, taking data correlation into account, is used to make assignments. Secondly, studies of class dispersion and homogeneity, using both a data structure mapping representation and statistical analysis, emphasize the importance of the global properties of each algorithm. In conclusion, we discuss the advantages and disadvantages of each method on real problems.
[ "vector quantization", "neural networks", "radiocommunication", "unsupervised learning" ]
[ "P", "P", "P", "M" ]
K-TBzgd
Preference-based multi-objective evolutionary algorithms for power-aware application mapping on NoC platforms
Network-on-chip (NoC) are considered the next generation of communication infrastructure in embedded systems. In the platform-based design methodology, an application is implemented by a set of collaborative intellectual property (IP) blocks. The selection of the most suited set of IPs as well as their physical mapping onto the NoC infrastructure to implement efficiently the application at hand are two hard combinatorial problems that occur during the synthesis process of Noc-based embedded system implementation. In this paper, we propose an innovative preference-based multi-objective evolutionary methodology to perform the assignment and mapping stages. We use one of the well-known and efficient multi-objective evolutionary algorithms NSGA-II and microGA as a kernel. The optimization processes of assignment and mapping are both driven by the minimization of the required silicon area and imposed execution time of the application, considering that the decision makers preference is a pre-specified value of the overall power consumption of the implementation.
[ "network-on-chip", "ip assignment", "ip mapping", "multi-objective design" ]
[ "P", "R", "R", "R" ]
2NYfuXZ
2D dry granular free-surface transient flow over complex topography with obstacles. Part II: Numerical predictions of fluid structures and benchmarking
Dense granular flows are present in geophysics and in several industrial processes, which has lead to an increasing interest for the knowledge and understanding of the physics which govern their propagation. For this reason, a wide range of laboratory experiments on gravity-driven flows have been carried out during the last two decades. The present work is focused on geomorphological processes and, following previous work, a series of laboratory studies which constitute a further step in mimicking natural phenomena are described and simulated. Three situations are considered with some common properties: a two-dimensional configuration, variable slope of the topography and the presence of obstacles. The setup and measurement technique employed during the development of these experiments are deeply explained in the companion work. The first experiment is based on a single obstacle, the second one is performed against multiple obstacles and the third one studies the influence of a dike on which overtopping occurs. Due to the impact of the flow against the obstacles, fast moving shocks appear, and a variety of secondary waves emerge. In order to delve into the physics of these types of phenomena, a shock-capturing numerical scheme is used to simulate the cases. The suitability of the mathematical models employed in this work has been previously validated. Comparisons between computed and experimental data are presented for the three cases. The computed results show that the numerical tool is able to predict faithfully the overall behavior of this type of complex dense granular flow.
[ "obstacles", "granular flow", "landslides", "numerical modeling" ]
[ "P", "P", "U", "R" ]
yfzvEJo
Context sharing in a real world ubicomp deployment
While the application of ubicomp systems to explore context sharing has received a large amount of interest, only a very small number of studies have been carried out which involve real world use outside of the lab. This article presents an in-depth analysis of context sharing behaviours that built up around use of the Hermes interactive office door display system received during deployment. The Hermes system provided a groupware application supporting asynchronous messaging facilities, analogous to a digital form of Post-it notes, in order to explore the use of situated display systems to support awareness and coordination in an office environment. From this analysis we distil a set of issues relating to context sharing ranging from privacy concerns to ease of use; each supported through qualitative data from user interviews and questionnaires.
[ "context sharing", "situated displays", "ubiquitous computing", "longitudinal deployment" ]
[ "P", "P", "U", "M" ]
2T52--z
Holding-time-aware dynamic traffic grooming algorithms based on multipath routing for WDM optical networks
This paper investigates approaches for the traffic grooming problem that consider connection holding-times and bandwidth availability. Moreover, solutions can indicate the splitting of connections into two or more sub-streams by multipath routing and fine-tuned by traffic grooming to utilize network resources better. Algorithms are proposed and the results of simulations using a variety of realistic scenarios indicate that the proposed algorithms significantly reduce the blocking of connection requests yet promote a fair distribution of the network resources in relation to the state-of-the-art solutions.
[ "traffic grooming", "multipath routing", "wdm", "holding time awareness", "load balancing" ]
[ "P", "P", "P", "U", "U" ]
-CpsTLQ
Three Classes of Maximal Hyperclones
In this paper, we present three classes of maximal hyperclones. They are determined by three classes of Rosenberg's relations: nontrivial equivalence relations, central relations and h-regular relations.
[ "maximal hyperclone", "hyperclone", "clone", "maximal clone" ]
[ "P", "P", "U", "M" ]
3zZADNd
The knowledge acquisition workshops: A remarkable convergence of ideas
Intense interest in knowledge-acquisition research began 25 years ago, stimulated by the excitement about knowledge-based systems that emerged in the 1970s followed by the realities of the AI Winter that arrived in the 1980s. The knowledge-acquisition workshops that responded to this interest led to the formation of a vibrant research community that has achieved remarkable consensus on a number of issues. These viewpoints include (1) the rejection of the notion of knowledge as a commodity to be transferred from one locus to another, (2) an acceptance of the situated nature of human expertise, (3) emphasis on knowledge acquisition as the modeling of problem solving, and (4) the pursuit of reusable patterns in problem solving and in domain descriptions that can facilitate both modeling and system implementation. The Semantic Web community will benefit greatly by incorporating these perspectives in its work.
[ "knowledge acquisition", "knowledge-based systems", "semantic web", "workshops and conferences" ]
[ "P", "P", "P", "M" ]
2cT7jY8
Topological Persistence for Medium Access Control
The primary function of the medium access control (MAC) protocol is managing access to the shared communication channel. From the viewpoint of the transmitters, the MAC protocol determines each transmitter's channel occupancy, the fraction of time that it spends transmitting over the channel. In this paper, we define a set of topological persistences that conform to both network topology and traffic load. We employ these persistences as target occupancies for the MAC layer protocol. A centralized algorithm is developed for calculating topological persistences and its correctness is established. A distributed algorithm and implementation are developed that can operate within scheduled and contention-based MAC protocols. In the distributed algorithm, network resources are allocated through auctions at each receiver in which transmitters participate as bidders to converge on the topological allocation. Very low overhead is achieved by piggybacking auction and bidder communication on existing data packets. The practicality of the distributed algorithm is demonstrated in a wireless network via simulation using the ns-2 network simulator. Simulation results show fast convergence to the topological solution and, once operating with topological persistences, improved performance compared to IEEE 802.11 in delay, throughput, and drop rate.
[ "medium access control", "wireless networks" ]
[ "P", "P" ]
49kFsSP
Plate on layered foundation analyzed by a semi-analytical and semi-numerical method
A semi-analytical and semi-numerical method is developed for the analysis of plate-layered soil systems. Applying a Hankel transform, an expression relating the surface settlement and the reaction of the layered soil is derived. Such a reaction can be treated as a load acting on the plate in addition to the applied external load. Having the plate modeled by eight-noded isoparametric elements, the governing equations of the plate can be formed and solved. Numerical examples, including square, trapezoidal and circular plates resting on elastic layered soil, are given to demonstrate the advantages, accuracy and versatility of this method.
[ "layered foundation", "raft on foundation", "fundamental solution", "transfer matrix method", "finite element method" ]
[ "P", "M", "U", "M", "M" ]
4vTWV3u
Semi-divisible triangular norms
Semi-divisibility of left-continuous triangular norms is a weakening of the divisibility (i.e., continuity) axiom for t-norms. In this contribution we focus on the class of semi-divisible t-norms and show the following properties: Each semi-divisible t-norm with Ran(n_T) = [0, 1] is nilpotent. Semi-divisibility of an ordinal sum t-norm is determined by the corresponding property of its first component (which can be a proper t-subnorm, too). Finally, negations with finite range derived from semi-divisible t-norms are studied.
[ "triangular norm", "ordinal sum", "residual implication" ]
[ "P", "P", "U" ]
1bgTVW&
Expert system for remnant life prediction of defected components under fatigue and creep-fatigue loadings
Life prediction and management of cracked high-temperature structures is a matter of great importance for both economic and safety reasons. Such a task involves many fields, such as materials science, structural engineering and mechanics, and generally requires expertise. Based on the methodology of advanced time-dependent fracture mechanics, this paper develops an expert system that realizes an appropriate combination of a material database, a condition database and a knowledge database. Many assessment criteria, including multi-defect interaction and combination, the invalidation criterion and creep-fatigue interaction, are employed in the inference engine of the expert system. The over-conservativeness of life prediction from the traditional method is reasonably reduced and therefore the accuracy of the predicted life is improved. Consequently, intelligent and expert life management of cracked high temperature structures is realized, which provides a powerful tool in practice.
[ "expert system", "high temperature structure", "creep-fatigue interaction", "life management", "multiple cracks" ]
[ "P", "P", "P", "P", "M" ]
2bQtK5X
Packet-mode scheduling in input-queued cell-based switches
We consider input-queued switch architectures dealing at their interfaces with variable-size packets, but internally operating on fixed-size cells. Packets are segmented into cells at input ports, transferred through the switching fabric, and reassembled at output ports. Cell transfers are controlled by a scheduling algorithm, which operates in packet-mode: all cells belonging to the same packet are transferred from inputs to outputs without interruption. We prove that input-queued switches using packet-mode scheduling can achieve 100% throughput, and we show by simulation that, depending on the packet size distribution, packet-mode scheduling may provide advantages over cell-mode scheduling.
[ "scheduling algorithms", "input queued switched", "packet switching", "variable size packets" ]
[ "P", "M", "R", "M" ]
-2tm1Yh
Walknet - a biologically inspired network to control six-legged walking
To investigate walking we perform experimental studies on animals in parallel with software and hardware simulations of the control structures and the body to be controlled. Therefore, the primary goal of our simulation studies is not so much to develop a technical device, but to develop a system which can be used as a scientific tool to study insect walking. To this end, the animat should copy essential properties of the animals. In this review, we will first describe the basic behavioral properties of hexapod walking, as they are known from stick insects. Then we describe a simple neural network called Walknet which exemplifies these properties and also shows some interesting emergent properties. The latter arise mainly from the use of the physical properties to simplify explicit calculations. The model is simple too, because it uses only static neuronal units. Finally, we present some new behavioral results.
[ "walking", "stick insect", "leg coordination", "positive feedback", "six-legged robot", "situatedness", "decentralized control" ]
[ "P", "P", "U", "U", "M", "U", "M" ]
1xfewcG
Modelling the scatter of EN curves using a serial hybrid neural network
If structural reliability is estimated by following a strain-based approach, a material's strength should be represented by the scatter of the εN (EN) curves that link the strain amplitude with the corresponding statistical distribution of the number of cycles-to-failure. The basic shape of the εN curve is usually modelled by the Coffin-Manson relationship. If a loading mean level also needs to be considered, the original Coffin-Manson relationship is modified to account for the non-zero mean level of the loading, which can be achieved by using a Smith-Watson-Topper modification of the original Coffin-Manson relationship. In this paper, a methodology for estimating the dependence of the statistical distribution of the number of cycles-to-failure on the Smith-Watson-Topper modification is presented. The statistical distribution of the number of cycles-to-failure was modelled with a two-parametric Weibull probability density function. The core of the presented methodology is represented by a multilayer perceptron neural network combined with the Weibull probability density function using a size parameter that follows the Smith-Watson-Topper analytical model. The article presents the theoretical background of the methodology and its application in the case of experimental fatigue data. The results show that it is possible to model εN curves and their scatter for different influential parameters, such as the specimen's diameter and the testing temperature.
[ "en curves", "serial hybrid neural network", "weibull pdf", "fatigue life scatter", "smithwatsontopper parameter" ]
[ "P", "P", "M", "M", "R" ]
TG3msmx
rate-distortion problem for physics based distributed sensing
We consider the rate-distortion problem for sensing the continuous space-time physical temperature in a circular ring on which a heat source is applied over space and time, and which is also allowed to cool by radiation or convection to its surrounding medium. The heat source is modelled as a continuous space-time stochastic process which is bandlimited over space and time. The temperature field is the result of a circular convolution over space and a continuous-time causal filtering over time of the heat source with the Green's function corresponding to the heat equation, which is space and time invariant. The temperature field is sampled at uniform spatial locations by a set of sensors and it has to be reconstructed at a base station. The goal is to minimize the mean-square-error per second, for a given number of nats per second, assuming ideal communication channels between sensors and base station. We find a) the centralized R_c(D) function of the temperature field, where all the space-time samples can be observed and encoded jointly. Then, we obtain b) the R_{s-i}(D) function, where each sensor, independently, encodes its samples optimally over time and c) the R_{st-i}(D) function, where each sensor is constrained to encode also independently over time. We also study two distributed prediction-based approaches: a) with perfect feedback from the base station, where temporal prediction is performed at the base station and each sensor performs differential encoding, and b) without feedback, where each sensor locally performs temporal prediction.
[ "rate-distortion", "temperature field", "green's function", "heat equation", "prediction", "feedback", "sensor networks", "distributed sampling", "local coding", "centralized coding", "distributed coding", "spatio-temporal correlation" ]
[ "P", "P", "P", "P", "P", "P", "M", "R", "M", "M", "M", "U" ]
3:ohEMm
developing a media space for remote synchronous parent-child interaction
While supporting family communication has traditionally been a domain of interest for interaction designers, few research initiatives have explicitly investigated remote synchronous communication between children and parents. We discuss the design of the ShareTable, a media space that supports synchronous interaction with children by augmenting videoconferencing with a camera-projector system to allow for shared viewing of physical artifacts. We present an exploratory evaluation of this system, highlighting how such a media space may be used by families for learning and play activities. The ShareTable was positively received by our participants and preferred over standard videoconferencing. Informed by the results of our exploratory evaluation, we discuss the next design iteration of the ShareTable and directions for future investigations in this area.
[ "media space", "distributed families", "computer-mediated communication", "parents and children" ]
[ "P", "M", "M", "R" ]
-L5zigJ
Cross-Noise-Coupled Architecture of Complex Bandpass Delta Sigma AD Modulator
Complex bandpass Delta Sigma AD modulators can provide superior performance to a pair of real bandpass Delta Sigma AD modulators of the same order. They process only the input I and Q signals, not image signals, and the AD conversion can be realized with low power dissipation, so they are well suited to low-IF receiver applications. This paper proposes a new architecture for complex bandpass Delta Sigma AD modulators with a cross-noise-coupled topology, which effectively raises the order of the complex modulator and achieves a higher SQNDR (Signal to Quantization Noise and Distortion Ratio) with low power dissipation. By injecting cross-coupled quantization noise into the internal I and Q paths, noise coupling between the two quantizers can be realized in complex form, which enhances the order of noise shaping in the complex domain and provides a higher-order NTF using a lower-order loop filter in the complex Delta Sigma AD modulator. The proposed higher-order modulator can be realized simply by adding some passive capacitors and switches; an additional integrator circuit composed of an operational amplifier is not necessary, so the performance of the complex modulator can be raised without extra power dissipation. We have performed simulations with MATLAB to verify the effectiveness of the proposed architecture. The simulation results show that the proposed architecture achieves the higher-order enhancement and an improved SQNDR of the complex bandpass Delta Sigma AD modulator.
[ "complex bandpass delta sigma ad modulator", "noise coupling", "feedforward", "multibit" ]
[ "P", "P", "U", "U" ]
1oXT8nQ
Facial motion cloning
We propose a method for automatically copying facial motion from one 3D face model to another, while preserving the compliance of the motion to the MPEG-4 Face and Body Animation (FBA) standard. Despite the enormous progress in the field of Facial Animation, producing a new animatable face from scratch is still a tremendous task for an artist. Although many methods exist to animate a face automatically based on procedural methods, these methods still need to be initialized by defining facial regions or similar, and they lack flexibility because the artist can only obtain the facial motion that a particular algorithm offers. Therefore a very common approach is interpolation between key facial expressions, usually called morph targets, containing either speech elements (visemes) or emotional expressions. Following the same approach, the MPEG-4 Facial Animation specification offers a method for interpolation of facial motion from key positions, called Facial Animation Tables, which are essentially morph targets corresponding to all possible motions specified in MPEG-4. The problem with this approach is that the artist needs to create a new set of morph targets for each new face model. In the case of MPEG-4 there are 86 morph targets, which is a lot of work to create manually. Our method solves this problem by cloning the morph targets, i.e., by automatically copying the motion of vertices, as well as geometry transforms, from the source face to the target face while maintaining the regional correspondences and the correct scale of motion. It requires the user only to identify a subset of the MPEG-4 Feature Points in the source and target faces. The scale of the movement is normalized with respect to MPEG-4 normalization units (FAPUs), meaning that the MPEG-4 FBA compliance of the copied motion is preserved. Our method is therefore suitable not only for cloning of free facial expressions, but also of MPEG-4 compatible facial motion, in particular the Facial Animation Tables. We believe that Facial Motion Cloning offers dramatic time savings to artists producing morph targets for facial animation or MPEG-4 Facial Animation Tables.
[ "mpeg-4", "fba", "facial animation", "morph targets", "vrml", "text-to-speech", "virtual characters", "virtual humans" ]
[ "P", "P", "P", "P", "U", "U", "U", "U" ]
1VWTCp4
An analysis of the Intel 80x86 security architecture and implementations
An in-depth analysis of the 80x86 processor families identifies architectural properties that may have unexpected, and undesirable, results in secure computer systems. In addition, reported implementation errors in some processor versions render them undesirable for secure systems because of potential security and reliability problems. In this paper, we discuss the imbalance in scrutiny for hardware protection mechanisms relative to software, and why this imbalance is increasingly difficult to justify as hardware complexity increases. We illustrate this difficulty with examples of architectural subtleties and reported implementation errors.
[ "hardware security architecture", "hardware implementation error", "microprocessor", "computer security", "penetration testing", "covert channels" ]
[ "R", "R", "U", "R", "U", "U" ]
9CTvf87
Realtime concatenation technique for skeletal motion in humanoid animation
In this paper, we propose a realtime concatenation technique between basic skeletal motions, obtained by motion capture and other techniques, to generate lifelike behavior for a humanoid character (avatar). We execute several experiments to show the advantages and properties of our technique and report the results. Finally, we describe our applied system, called WonderSpace, which leads participants to exciting and attractive virtual worlds with humanoid characters in cyberspace. Our concatenation technique has the following features: (1) it is based on a blending method between a preceding motion and a succeeding motion by a transition function; (2) it realizes "smooth transition," "monotone transition," and "equivalent transition" by means of the transition function, called the paste function; (3) it generates a connecting interval by making backward and forward predictions for the preceding and succeeding motions; (4) it executes the prediction under the hypothesis of "the smooth stopping state" or "the state of connecting motion"; (5) it controls the prediction intervals by a parameter indicating the importance of the motion; and (6) it realizes realtime calculation.
[ "3d computer graphics", "web3d", "interactive", "3d virtual world", "3d character", "blending function" ]
[ "U", "U", "U", "M", "M", "R" ]
2ZC9F:&
Call-by-value is dual to call-by-name
The rules of classical logic may be formulated in pairs corresponding to De Morgan duals: rules about & are dual to rules about ∨. A line of work, including that of Filinski (1989), Griffin (1990), Parigot (1992), Danos, Joinet, and Schellinx (1995), Selinger (1998, 2001), and Curien and Herbelin (2000), has led to the startling conclusion that call-by-value is the de Morgan dual of call-by-name. This paper presents a dual calculus that corresponds to the classical sequent calculus of Gentzen (1935) in the same way that the lambda calculus of Church (1932, 1940) corresponds to the intuitionistic natural deduction of Gentzen (1935). The paper includes crisp formulations of call-by-value and call-by-name that are obviously dual; no similar formulations appear in the literature. The paper gives a CPS translation and its inverse, and shows that the translation is both sound and complete, strengthening a result in Curien and Herbelin (2000). Note: this paper uses color to clarify the relation of types and terms, and of source and target calculi. If the URL below is not in blue, please download the color version, which can be found in the ACM Digital Library archive for ICFP 2003, at http://portal.acm.org/proceedings/icfp/archive, or by googling 'wadler dual'.
[ "logic", "de morgan dual", "sequent calculus", "lambda calculus", "natural deduction", "curry-howard correspondence", "lambda mu calculus" ]
[ "P", "P", "P", "P", "P", "M", "M" ]
-sAnbRe
Universal automata and NFA learning
The aim of this paper is to develop a new algorithm that, with a complete sample as input, identifies the family of regular languages by means of nondeterministic finite automata. It is a state-merging algorithm. One of its main features is that the convergence (which is proved) is achieved independently from the order in which the states are merged, that is, the merging of states may be done "randomly".
[ "finite automata", "grammatical inference", "universal automaton" ]
[ "P", "U", "M" ]
2eRn3zW
Effect of load models on assessment of energy losses in distributed generation planning
Distributed Generation (DG) is gaining in significance due to the keen public awareness of the environmental impacts of electric power generation and significant advances in several generation technologies which are much more environmentally friendly (wind power generation, micro-turbines, fuel cells, and photovoltaics) than conventional coal, oil and gas-fired plants. Accurate assessment of energy losses when DG is connected is gaining in significance due to developments in the electricity marketplace, such as increasing competition, real-time pricing and spot pricing. However, inappropriate modelling can give rise to misleading results. This paper presents an investigation into the effect of load models on the predicted energy losses in DG planning. Following a brief introduction, the paper proposes a detailed voltage-dependent load model, for DG planning use, which considers three categories of loads: residential, industrial and commercial. The paper proposes a methodology to study the effect of load models on the assessment of energy losses, based on time-series simulations that take into account both the variation of renewable generation and the load demand. A comparative study of energy losses between the use of a traditional constant load model and the voltage-dependent load model, at various load levels, is carried out using a 38-node example power system. Simulations presented in the paper indicate that the load model adopted can significantly affect the results of DG planning.
[ "load model", "energy losses", "distributed generation", "voltage profile", "load forecasting" ]
[ "P", "P", "P", "M", "M" ]
3Hzp-WV
Generational stack collection and profile-driven pretenuring
This paper presents two techniques for improving garbage collection performance: generational stack collection and profile-driven pretenuring. The first is applicable to stack-based implementations of functional languages while the second is useful for any generational collector. We have implemented both techniques in a generational collector used by the TIL compiler (Tarditi, Morrisett, Cheng, Stone, Harper, and Lee 1996), and have observed decreases in garbage collection times of as much as 70% and 30%, respectively. Functional languages encourage the use of recursion which can lead to a long chain of activation records. When a collection occurs, these activation records must be scanned for roots. We show that scanning many activation records can take so long as to become the dominant cost of garbage collection. However, most deep stacks unwind very infrequently, so most of the root information obtained from the stack remains unchanged across successive garbage collections. Generational stack collection greatly reduces the stack scan cost by reusing information from previous scans. Generational techniques have been successful in reducing the cost of garbage collection (Ungar 1984). Various complex heap arrangements and tenuring policies have been proposed to increase the effectiveness of generational techniques by reducing the cost and frequency of scanning and copying. In contrast, we show that by using profile information to make lifetime predictions, pretenuring can avoid copying data altogether. In essence, this technique uses a refinement of the generational hypothesis (most data die young) with a locality principle concerning the age of data: most allocation sites produce data that immediately dies, while a few allocation sites consistently produce data that survives many collections.
[ "generation", "stack", "collect", "profiles", "paper", "performance", "implementation", "functional languages", "use", "compilation", "recursion", "activation", "records", "scan", "cost", "informal", "complexity", "arrangement", "policy", "effect", "lifetime", "predict", "data", "refine", "locality", "age", "allocation" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
25-BBG1
Regulated Secretion in Chromaffin Cells
ARFs constitute a family of structurally related proteins that forms a subset of the ras GTPases. In chromaffin cells, secretagogue-evoked stimulation triggers the rapid translocation of ARF6 from secretory granules to the plasma membrane and the concomitant activation of PLD in the plasma membrane. Both PLD activation and catecholamine secretion are strongly inhibited by a synthetic peptide corresponding to the N-terminal domain of ARF6. ARNO, a potential guanine nucleotide exchange factor for ARF6, is expressed and localized in the plasma membrane of chromaffin cells. Using permeabilized cells, we found that the introduction of anti-ARNO antibodies into the cytosol inhibits both PLD activation and catecholamine secretion. Chromaffin cells express PLD1 at the plasma membrane. We found that microinjection of the catalytically inactive PLD1(K898R) dramatically reduces catecholamine secretion monitored by amperometry, most likely by interfering with a late postdocking step of calcium-regulated exocytosis. We propose that ARNO-ARF6 participate in the exocytotic reaction by controlling the plasma membrane-bound PLD1. By generating fusogenic lipids at the exocytotic sites, PLD1 may represent an essential component of the fusion machinery in neuroendocrine cells.
[ "chromaffin", "arf", "secretory granule", "arno", "exocytosis", "phospholipase d" ]
[ "P", "P", "P", "P", "P", "U" ]
27qtCSA
Analysis of elastic wave propagation in a functionally graded thick hollow cylinder using a hybrid mesh-free method
In this paper, a hybrid mesh-free method based on generalized finite difference (GFD) and Newmark finite difference (NFD) methods is presented to calculate the velocity of elastic wave propagation in functionally graded materials (FGMs). The physical domain to be considered is a thick hollow cylinder made of functionally graded material in which mechanical properties are graded in the radial direction only. A power-law variation of the volume fractions of the two constituents is assumed for mechanical property variation. The cylinder is excited by shock loading to obtain the time history of the radial displacement. The velocity of elastic wave propagation in functionally graded cylinder is calculated from periodic behavior of the radial displacement in time domain. The effects of various grading patterns and various constitutive mechanical properties on the velocity of elastic wave propagation in functionally graded cylinders are studied in detail. Numerical results demonstrate the efficiency of the proposed method in simulating the wave propagation in FGMs.
[ "wave propagation", "thick hollow cylinder", "mesh-free methods", "functionally graded materials", "thermal shock" ]
[ "P", "P", "P", "P", "M" ]
fVFrakL
A nonparametric methodology for evaluating convergence in a multi-input multi-output setting
The paper presents a novel nonparametric methodology to evaluate convergence. We develop two new indexes to evaluate β-convergence and σ-convergence. The indexes developed allow evaluations using multiple inputs and outputs. The methodology complements productivity assessments based on the Malmquist index. The methodology is applied to Portuguese construction companies operating in 2008-2010.
[ "convergence", "productivity", "malmquist index", "data envelopment analysis", "construction industry" ]
[ "P", "P", "P", "U", "M" ]