Dataset columns: id (string, length 7), title (string, 3 to 578 chars), abstract (string, 0 to 16.7k chars), keyphrases (sequence), prmu (sequence)
vXFe8Vy
virtually enhancing the perception of user actions
This paper proposes using virtual reality to enhance the perception of actions performed by distant users on a shared application. Here, distance may refer either to space (e.g. in remote synchronous collaboration) or to time (e.g. during playback of recorded actions). Our approach consists of immersing the application in a virtual inhabited 3D space and mimicking user actions by animating avatars. We illustrate this approach with two applications, one for remote collaboration on a shared application and the other for playing back recorded sequences of user actions. We suggest this could be a low-cost enhancement for telepresence.
[ "animation", "avatars", "telepresence", "application sharing", "collaborative virtual environments" ]
[ "P", "P", "P", "R", "M" ]
1EgZj7J
Dynamic range improvement of multistage multibit Sigma Delta modulator for low oversampling ratios
This paper presents an improved architecture for multistage multibit sigma-delta modulators (ΣΔMs) for wide-band applications. Our approach is based on two resonator topologies, high-Q cascade-of-resonator-with-feedforward (HQCRFF) and low-Q cascade-of-integrator-with-feedforward (LQCIFF). Because of in-band zeros introduced by the internal loop filters, the proposed architecture enhances the suppression of in-band quantization noise at a low OSR. The HQCRFF-based modulator with a single-bit quantizer has two modes of operation, modulation and oscillation. When the HQCRFF-based modulator operates in oscillation mode, the feedback path from the quantizer output to the input summing node is disabled, and hence the modulator output is free of quantization noise terms. Although operating in oscillation mode is not allowed for a single-stage ΣΔM, the oscillation of the HQCRFF-based modulator can improve the dynamic range (DR) of a multistage (MASH) ΣΔM. The key to improving DR is to use an HQCRFF-based modulator in the first stage and make that stage oscillate. When the first stage oscillates, the coarse quantization noise vanishes, and hence circuit nonidealities, such as finite op-amp gain and capacitor mismatch, do not cause a quantization noise leakage problem. According to theoretical and numerical analysis, the proposed MASH architecture can inherently achieve wide DR without additional calibration techniques.
[ "dynamic range improvement", "sigma delta modulators", "multistage (mash)", "analog-to-digital converters (adcs)", "multibit quantizer" ]
[ "P", "P", "P", "M", "R" ]
1M:sC-T
An ontology modelling perspective on business reporting
In this paper, we discuss the motivation for and the fundamentals of an ontology representation of business reporting data and metadata structures as defined in the eXtensible Business Reporting Language (XBRL) standard. The core motivation for an ontology representation is the enhanced potential for integrated analytic applications that build on quantitative reporting data combined with structured and unstructured data from additional sources. Applications of this kind will enable significant enhancements in regulatory compliance management, as they combine business analytics with inference engines for statistical as well as logical inference. In order to define a suitable ontology representation of business reporting language structures, an analysis of the logical principles of the reporting metadata taxonomies and further classification systems is presented. Based on this analysis, a representation of the generally accepted accounting principles taxonomies in XBRL by an ontology provided in the Web Ontology Language (OWL) is proposed. An additional advantage of this representation is its compliance with the recent Ontology Definition Metamodel (ODM) standard issued by the OMG.
[ "enterprise information integration and interoperability", "languages for conceptual modelling", "ontological approaches to content and knowledge management", "ontology-based software engineering for enterprise solutions", "domain engineering" ]
[ "M", "M", "M", "M", "M" ]
r4HSvDR
The self-organizing map
An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented in this article.
[ "self-organizing map", "learning vector quantization" ]
[ "P", "U" ]
4BpKsNs
The Amygdala and Development of the Social Brain
The amygdala comprises part of an extended network of neural circuits that are critically involved in the processing of socially salient stimuli. Such stimuli may be explicitly social, such as facial expressions, or they may be only tangentially social, such as abstract shapes moving with apparent intention relative to one another. The coordinated interplay between neural activity in the amygdala and other brain regions, especially the medial prefrontal cortex, the occipitofrontal cortex, the fusiform gyrus, and the superior temporal sulcus, allows us to develop social responses and to engage in social behaviors appropriate to our species. The harmonious functioning of this integrated social cognitive network may be disrupted by congenital or acquired lesions, by genetic anomalies, and by exceptional early experiences. Each form of disruption is associated with a slightly different outcome, dependent on the timing of the experience, the location of the lesion, or the nature of the genetic anomaly. Studies in both humans and primates concur; the dysregulation of basic emotions, especially the processing of fear and anger, is an almost invariable consequence of such disruption. These, in turn, have direct or indirect consequences for social behavior.
[ "amygdala", "social brain", "facial expression", "behavior" ]
[ "P", "P", "P", "P" ]
17UWX&n
Modeling and Cost Analysis of an Improved Movement-Based Location Update Scheme in Wireless Communication Networks
A movement-based location update (MBLU) scheme is an LU scheme under which a user equipment (UE) performs an LU when the number of cells crossed by the UE reaches a movement threshold. The MBLU scheme suffers from the ping-pong LU effect, which arises when a UE that moves repetitively between two adjacent cells performs LUs in the same way as in the case of straight movement. To tackle the ping-pong LU effect encountered by the original MBLU (OMBLU) scheme, an improved MBLU (IMBLU) scheme was proposed in the literature. Under the IMBLU scheme, the UE performs an LU when the number of different cells, rather than the number of cells, visited by the UE reaches the movement threshold. In this paper we develop an embedded Markov chain model to calculate the signaling cost of the IMBLU scheme. We derive analytical formulas for the signaling cost, whose accuracy is tested through simulation. It is observed from a numerical study based on these formulas that 1) the signaling cost is a downward convex function of the movement threshold, implying the existence of an optimal movement threshold that minimizes the signaling cost, and that 2) the reduction in the signaling cost achieved by the IMBLU scheme relative to the OMBLU scheme is more prominent in the case of stronger repetitiveness in the UE movement. The model developed and the formulas derived in this paper can guide the implementation of the IMBLU scheme in wireless communication networks.
[ "modeling", "movement-based location update", "ping-pong lu effect", "embedded markov chain" ]
[ "P", "P", "P", "P" ]
2bKq8sa
A modified offset roll printing for thin film transistor applications
In order to realize a high-resolution, high-throughput printing method for thin film transistor applications, a modified offset roll printing was studied. This roll printing chiefly consists of a blanket with low surface energy and a printing plate (cliché) with high surface energy. In this study, a finite element analysis was done to predict the blanket deformation and to find the optimal angle of the cliché's sidewall. Various etching methods were investigated to obtain a high-resolution cliché, and the surface energy of the blanket and cliché was analyzed for ink transfer. A high-resolution cliché with a sidewall angle of 90° and an intaglio depth of 13 µm was fabricated by the deep reactive ion etching method. Based on the surface energy analysis, we extracted the most favorable condition for transferring inks from a blanket to a cliché, and thus thin films were deposited on a Si-cliché to increase the surface energy. Through controlling roll speed and pressure, two inks, etch-resist and silver paste, were printed on a rigid substrate, and fine patterns of 10 µm width and 6 µm line spacing were achieved. By using this printing process, top-gate amorphous indium-gallium-zinc-oxide TFTs with a channel width/length of 12/6 µm were successfully fabricated by printing etch-resists.
[ "offset roll printing", "surface energy", "printing plate (clich)", "tft" ]
[ "P", "P", "P", "P" ]
-g2&N7p
On the complement graph and defensive k-alliances
In this paper, we obtain several tight bounds on the defensive k-alliance number of the complement graph in terms of other parameters of the graph. In particular, we investigate the relationship between the alliance numbers of the complement graph and the minimum and maximum degree, the domination number, and the isoperimetric number of the graph. Moreover, we prove the NP-completeness of the decision problem underlying the defensive k-alliance number.
[ "05c69", "15c05" ]
[ "U", "U" ]
3kca74Q
Hyperspectral image segmentation through evolved cellular automata
Efficient segmentation of hyperspectral images through the use of cellular automata (CA). The rule set for the CA is automatically obtained using an evolutionary algorithm. Synthetic images of much lower dimensionality are used to evolve the automata. The CA works with spectral information but does not project it onto a lower dimension. The CA-based classification outperforms reference techniques.
[ "hyperspectral imaging", "segmentation", "cellular automata", "evolution" ]
[ "P", "P", "P", "U" ]
1:LfRSi
Analytical and empirical evaluation of the impact of Gaussian noise on the modulations employed by Bluetooth Enhanced Data Rates
Bluetooth (BT) is a leading technology for the deployment of wireless Personal Area Networks and Body Area Networks. Versions 2.0 and 2.1 of the standard, which are widely implemented in commercial devices, improve the throughput of the BT technology by enabling the so-called Enhanced Data Rates (EDR). EDRs are achieved by utilizing new modulation techniques (π/4-DQPSK and 8-DPSK), in addition to the typical Gaussian Frequency Shift Keying modulation supported by previous versions of BT. This manuscript presents and validates a model to characterize the impact of white noise on the performance of these modulations. The validation is systematically accomplished in a testbed with actual BT interfaces and a calibrated white noise generator.
[ "modulation", "bluetooth", "white noise", "bit error rate" ]
[ "P", "P", "P", "M" ]
-eRwhws
Spectral analysis of irregularly-sampled data: Paralleling the regularly-sampled data approaches
The spectral analysis of regularly-sampled (RS) data is a well-established topic, and many useful methods are available for performing it under different sets of conditions. The same cannot be said about the spectral analysis of irregularly-sampled (IS) data: despite a plethora of published works on this topic, the choice of a spectral analysis method for IS data is essentially limited, on either technical or computational grounds, to the periodogram and its variations. In our opinion this situation is far from satisfactory, given the importance of the spectral analysis of IS data for a considerable number of applications in such diverse fields as engineering, biomedicine, economics, astronomy, seismology, and physics, to name a few. In this paper we introduce a number of IS data approaches that parallel the methods most commonly used for spectral analysis of RS data: the periodogram (PER), the Capon method (CAP), the multiple-signal characterization method (MUSIC), and the estimation of signal parameters via rotational invariance technique (ESPRIT). The proposed IS methods are as simple as their RS counterparts, both conceptually and computationally. In particular, the fast algorithms derived for the implementation of the RS data methods can be used mutatis mutandis to implement the proposed parallel IS methods. Moreover, the expected performance-based ranking of the IS methods is the same as that of the parallel RS methods: all of them perform similarly on data consisting of well-separated sinusoids in noise, MUSIC and ESPRIT outperform the other methods in the case of closely-spaced sinusoids in white noise, and CAP outperforms PER for data whose spectrum has a small-to-medium dynamic range (MUSIC and ESPRIT should not be used in the latter case).
[ "spectral analysis", "sinusoids in noise", "irregular sampling", "nonuniform sampling", "carma signals" ]
[ "P", "P", "U", "U", "M" ]
-c71Fa1
Time-Series Data Mining
In almost every scientific field, measurements are performed over time. These observations lead to a collection of organized data called a time series. The purpose of time-series data mining is to try to extract all meaningful knowledge from the shape of the data. Even if humans have a natural capacity to perform these tasks, it remains a complex problem for computers. In this article we intend to provide a survey of the techniques applied for time-series data mining. The first part is devoted to an overview of the tasks that have captured most of the interest of researchers. Considering that in most cases a time-series mining task relies on the same components for implementation, we divide the literature according to these common aspects, namely representation techniques, distance measures, and indexing methods. The relevant literature has been categorized for each individual aspect. Four types of robustness can then be formalized, and any kind of distance can be classified accordingly. Finally, the study presents various research trends and avenues that can be explored in the near future. We hope that this article can provide a broad and deep understanding of the time-series data mining research field.
[ "data mining", "performance", "time series", "distance measures", "algorithms", "data indexing", "query by content", "sequence matching", "similarity measures", "stream analysis", "temporal analysis" ]
[ "P", "P", "P", "P", "U", "R", "U", "U", "M", "U", "U" ]
4qrgnfT
A small hybrid JIT for embedded systems
Just-in-time (JIT) compilation is a technology used to improve the speed of virtual machines that support dynamic loading of applications. It is better known as a technique that accelerates Java programs. Current JIT compilers are either huge in size or compile complete methods of the application, requiring large amounts of memory for their functioning. This has left Java virtual machines for embedded systems devoid of JIT compilers. This paper explains a simple technique of combining interpretation with compilation to get a hybrid interpretation strategy. It also describes a new code generation technique that works using its self-code. The combination gives a JIT compiler that is very small (10K) and suitable for Java virtual machines for embedded systems.
[ "jit", "embedded system", "java", "code generation", "dynamic compilation" ]
[ "P", "P", "P", "P", "R" ]
3ERTiGA
Rationality of induced ordered weighted operators based on the reliability of the source of information in group decision-making
The aggregation of preference relations in group decision-making (GDM) problems can be carried out based on either the reliability of the preference values to be aggregated, as is the case with ordered weighted averaging operators, or the reliability of the source of information that provided the preferences, as is the case with weighted mean operators. In this paper, we address the problem of aggregation based on the reliability of the source of information, with a double aim: (a) to provide a general framework for induced ordered weighted operators based upon the source of information, and (b) to provide a study of their rationality. We study the conditions which need to be verified by an aggregation operator in order to maintain the rationality assumptions on the individual preferences in the aggregation phase of the selection process of alternatives. In particular, we show that any aggregation operator based on the reliability of the source of information verifies these conditions.
[ "rationality", "group decision-making", "preference relations", "aggregation operators", "induced aggregation", "consistency" ]
[ "P", "P", "P", "P", "R", "U" ]
-X-93LH
Digital preservation of knowledge in the public sector: a pre-ingest tool
This paper describes the need for coordinating pre-ingest activities in the digital preservation of archival records. As a result of the wide use of electronic records management systems (ERMS) in agencies, the focus is on several issues relating to the interaction of an agency's ERMS and public repositories. This paper indicates the importance of using digital recordkeeping metadata to meet the criteria set by memory institutions more precisely and, at the same time, semi-automatically. The paper provides an overview of one prospective solution and describes the Estonian National Archives' universal archiving module (UAM). A case study reports the use of the UAM in preserving the digital records of the Estonian Minister for Population and Ethnic Affairs. In this project, the preparation and transfer of archival records was divided into ten phases, starting from the description of the archival creator and ending with controlled transfer. The case study raises questions about how much recordkeeping metadata can be used in archival description and how the interaction of the agency's ERMS and ingest by the archives could be further automated. The main issues (e.g. classification, metadata element variations, mapping, and computer file conversions) encountered during the project are discussed. Findings show that the Open Archival Information System functional model's ingest part should be reconceptualised to take preparatory work into account. Adding detailed metadata about structure, context, and relationships in the right place at the right time could bring digital codified knowledge archiving one step closer by creating synergies with various other digital repositories.
[ "digital preservation", "pre-ingest", "ingest", "universal archiving module", "estonia" ]
[ "P", "P", "P", "P", "U" ]
4YX:XfK
The Regulation of SERCA-Type Pumps by Phospholamban and Sarcolipin
Both sarcolipin (SLN) and phospholamban (PLN) lower the apparent affinity of either SERCA1a or SERCA2a for Ca2+. Since SLN and PLN are coexpressed in the heart, interactions among these three proteins were investigated. When SERCA1a or SERCA2a were coexpressed in HEK-293 cells with both SLN and PLN, superinhibition resulted. The ability of SLN to elevate the content of PLN monomers accounts, at least in part, for the superinhibitory effects of SLN in the presence of PLN. To evaluate the role of SLN in skeletal muscle, SLN cDNA was injected directly into rat soleus muscle and force characteristics were analyzed. Overexpression of SLN resulted in significant reductions in both twitch and tetanic peak force amplitude and maximal rates of contraction and relaxation and increased fatigability with repeated electrical stimulation. Ca2+ uptake in muscle homogenates was impaired, suggesting that overexpression of SLN may reduce the sarcoplasmic reticulum Ca2+ store. SLN and PLN appear to bind to the same regulatory site in SERCA. However, in a ternary complex, PLN occupies the regulatory site and SLN binds to the exposed side of PLN and to SERCA.
[ "phospholamban", "sarcolipin", "ca2+-atpase", "cardiomyopathy", "regulatory molecules" ]
[ "P", "P", "M", "U", "M" ]
1YsokPf
Bootstrap confidence intervals for principal response curves
The principal response curve (PRC) model is of use to analyse multivariate data resulting from experiments involving repeated sampling in time. The time-dependent treatment effects are represented by PRCs, which are functional in nature. The sample PRCs can be estimated using a raw approach, or the newly proposed smooth approach. The generalisability of the sample PRCs can be judged using confidence bands. The quality of various bootstrap strategies to estimate such confidence bands for PRCs is evaluated. The best coverage was obtained with BCa intervals using a non-parametric bootstrap. The coverage appeared to be generally good, except for the case of exactly zero population PRCs for all conditions. Then, the behaviour is irregular, which is caused by the sign indeterminacy of the PRCs. The insights obtained into the optimal bootstrap strategy are useful for applying the PRC model, and more generally for estimating confidence intervals in singular value decomposition based methods.
[ "singular value decomposition", "resampling" ]
[ "P", "U" ]
55n3HSb
A concurrent specification of Brzozowski's DFA construction algorithm
In this paper two concurrent versions of Brzozowski's deterministic finite automaton (DFA) construction algorithm are developed from first principles, the one being a slight refinement of the other. We rely on Hoare's CSP as our notation. The specifications that are proposed of the Brzozowski algorithm are in terms of the concurrent composition of a number of top-level processes, each participating process itself composed of several other concurrent processes. After considering a number of alternatives, this particular overall architectural structure seemed like a natural and elegant mapping from the sequential algorithm's structure. While we have carefully argued the reasons for constructing the concurrent versions as proposed in the paper, there are of course, a large range of alternative design choices that could be made. There might also be scope for a more fine-grained approach to updating sets or checking for similarity of regular expressions. At this stage, we have chosen to abstract away from these considerations, and leave their exploration for a subsequent step in our research.
[ "concurrency", "csp", "regular expressions", "automaton construction" ]
[ "P", "P", "P", "R" ]
2JgDh:j
collaborative multimedia learning environments
I use the term "collaborative" to identify a way that enables conversation to occur in, about, and around the digital medium, therefore making the "digital artifacts" contributed by all individuals a key element of a conversation, as opposed to the consecutive, linear presentations used by most faculty at the Design School. Installations of collaborative multimedia in classrooms at the Harvard University Graduate School of Design show an enhancement of the learning process via shared access to media resources and enhanced spatial conditions within which these resources are engaged. Through observation and controlled experiments I am investigating how the use of shared, collaborative interfaces for interaction with multiple displays in a co-local environment enhances the learning process. The multiple spatial configurations and formats of learning mandate that, with more effective interfaces and spaces for sharing digital media with fellow participants, the classroom can be used much more effectively, and thus learning and interaction with multimedia can be improved.
[ "digital artifacts", "multiple displays", "shared interfaces", "rich control interfaces", "beneficial interruption" ]
[ "P", "P", "R", "M", "U" ]
1hPfydu
clustering multi-way data via adaptive subspace iteration
Clustering multi-way data is an important research topic due to the intrinsically rich structures in real-world datasets. In this paper, we propose a subspace clustering algorithm for multi-way data, called ASI-T (Adaptive Subspace Iteration on Tensor). ASI-T is a special version of higher-order SVD (HOSVD): it simultaneously performs subspace identification using 2DSVD and data clustering using K-means. Experimental results on synthetic and real-world data demonstrate the effectiveness of ASI-T.
[ "clustering", "multi-way data", "subspace", "tensor" ]
[ "P", "P", "P", "P" ]
2fvZA31
SAT-based model-checking for security protocols analysis
We present a model checking technique for security protocols based on a reduction to propositional logic. At the core of our approach is a procedure that, given a description of the protocol in a multi-set rewriting formalism and a positive integer k, builds a propositional formula whose models (if any) correspond to attacks on the protocol. Thus, finding attacks on protocols boils down to checking a propositional formula for satisfiability, a problem that is usually solved very efficiently by modern SAT solvers. Experimental results indicate that the approach scales up to industrial-strength security protocols, with performance comparable with (and in some cases superior to) that of other state-of-the-art protocol analysers.
[ "security protocols", "multi-set rewriting", "bounded model checking", "sat-based model checking" ]
[ "P", "P", "M", "R" ]
--DMVV3
Existence of positive solutions for 2nth-order singular superlinear boundary value problems
This paper investigates the existence of positive solutions for 2nth-order (n > 1) singular superlinear boundary value problems. A necessary and sufficient condition for the existence of C^(2n-2)[0,1] as well as C^(2n-1)[0,1] positive solutions is given by constructing a special cone and using the e-norm.
[ "positive solution", "cone", "singular boundary value problem", "e e -norm e e e e e", "fixed-point theorem" ]
[ "P", "P", "R", "R", "U" ]
-csFUCw
Embedding the Internet in the lives of college students - Online and offline Behavior
The Internet is increasingly becoming embedded in the lives of most American citizens. College students constitute a group who have made particularly heavy use of the technology for everything from downloading music to distance education to instant messaging. Researchers know a lot about the uses made of the Internet by this group of people but less about the relationship between their offline activities and online behavior. This study reports the results of a web survey of a group of university undergraduates exploring the nature of both online and offline behavior in five areas: the use of news and information, the discussion of politics, the seeking of health information, the use of blogs, and the downloading of media and software.
[ "internet", "college students", "downloading", "online behavior", "news", "blogs", "information technology" ]
[ "P", "P", "P", "P", "P", "P", "R" ]
tftCi5c
A capacitive tactile sensor array for surface texture discrimination
This paper presents a silicon-MEMS-based capacitive sensing array which resolves forces in the sub-mN range, provides a directional response to applied loading, and can differentiate between surface textures. Texture recognition is achieved by scanning surfaces over the sensing array and assessing the frequency spectrum of the sensor outputs.
[ "tactile sensor", "mems", "biomimetic", "capacitive sensor" ]
[ "P", "P", "U", "R" ]
-kKNXdh
Segmenting, modeling, and matching video clips containing multiple moving objects
This paper presents a novel representation for dynamic scenes composed of multiple rigid objects that may undergo different motions and are observed by a moving camera. Multiview constraints associated with groups of affine-covariant scene patches and a normalized description of their appearance are used to segment a scene into its rigid components, construct three-dimensional models of these components, and match instances of models recovered from different image sequences. The proposed approach has been applied to the detection and matching of moving objects in video sequences and to shot matching, i.e., the identification of shots that depict the same scene in a video clip.
[ "shot matching", "affine-covariant patches", "structure from motion", "motion segmentation", "video retrieval" ]
[ "P", "R", "M", "R", "M" ]
2wi&z9o
Weighted fuzzy interpolative reasoning systems based on interval type-2 fuzzy sets
In this paper, we present a weighted fuzzy interpolative reasoning method for sparse fuzzy rule-based systems based on interval type-2 fuzzy sets. We also apply the proposed weighted fuzzy interpolative reasoning method to deal with the truck backer-upper control problem. The proposed method satisfies the seven evaluation indices for fuzzy interpolative reasoning. The experimental results show that the proposed method outperforms the existing methods. It provides us with a useful way for dealing with fuzzy interpolative reasoning in sparse fuzzy rule-based systems.
[ "weighted fuzzy interpolative reasoning", "fuzzy interpolative reasoning", "interval type-2 fuzzy sets", "sparse fuzzy rule-based systems" ]
[ "P", "P", "P", "P" ]
-7PrH6j
Vestibular PREHAB
A sudden unilateral loss or impairment of vestibular function causes vertigo, dizziness, and impaired postural function. On most occasions, everyday activities, supported or not by vestibular rehabilitation programs, will promote compensation and the symptoms subside. As the compensatory process requires sensory input matching the performed motor activity, both motor learning of exercises and matching to sensory input are required. If there is a simultaneous cerebellar lesion caused by the tumor or by surgery of the posterior cranial fossa, there may be a risk of a combined vestibulocerebellar lesion, with reduced compensatory abilities and with prolonged or sometimes permanent disability. On the other hand, a slow gradual loss of unilateral function occurring as the subject continues well-learned everyday activities may pass without any prominent symptoms. A pretreatment plan was therefore implemented before planned vestibular lesions, that is, PREHAB. This was first done in subjects undergoing gentamicin treatment for Morbus Ménière. Subjects would perform vestibular exercises for 14 days before the first gentamicin instillation, and then continue doing so until free of symptoms. Most subjects would experience only slight dizziness while losing vestibular function. The approach, which is reported here, was then expanded to patients with pontine-angle tumors requiring surgery, but with remaining vestibular function, to ease postoperative symptoms and reduce the risk of combined cerebellovestibular lesions. Twelve patients were treated with PREHAB and had gentamicin instilled transtympanically. In all cases there was a caloric loss, loss of the VOR in head impulse tests, and impaired subjective vertical and horizontal. Spontaneous and positional nystagmus, subjective symptoms, and postural function were normalized before surgery, and postoperative recovery was swift.
Pretreatment training with vestibular exercises continued during the successive loss of vestibular function during gentamicin treatment, and pre-op gentamicin ablation of vestibular function offers a possibility to reduce malaise and speed up recovery.
[ "vestibular", "prehab", "rehabilitation", "compensation", "recovery", "schwannoma" ]
[ "P", "P", "P", "P", "P", "U" ]
1j8-GMy
SW1PerS: Sliding windows and 1-persistence scoring; discovering periodicity in gene expression time series data
Identifying periodically expressed genes across different processes (e.g. the cell and metabolic cycles, circadian rhythms, etc.) is a central problem in computational biology. Biological time series may contain (multiple) unknown signal shapes of systemic relevance, imperfections like noise, damping, and trending, or limited sampling density. While there exist methods for detecting periodicity, their design biases (e.g. toward a specific signal shape) can limit their applicability in one or more of these situations.
[ "sliding windows", "periodicity", "gene expression", "time series", "persistent homology" ]
[ "P", "P", "P", "P", "U" ]
fu8q9zE
Parallel generation of unstructured surface grids
In this paper, a new grid generation system is presented for the parallel generation of unstructured triangular surface grids. The object-oriented design and implementation of the system, the internal components and the parallel meshing process itself are described. Initially in a rasterisation stage, the geometry to be meshed is analysed and a smooth distribution of local element sizes in 3-D space is set up automatically and stored in a Cartesian mesh. This background mesh is used by the advancing front surface mesher as spacing definition for the triangle generation. Both the rasterisation and the meshing are MPI-parallelised. The underlying principles and strategies will be outlined together with the advantages and limitations of the approach. The paper will be concluded with examples demonstrating the capabilities of the presented approach.
[ "automatic", "unstructured surface mesh generation", "geometry rasterisation", "mpi-parallel", "object-orientation" ]
[ "P", "R", "R", "U", "U" ]
-q5tRLK
H∞ structured model reduction algorithms for linear discrete systems via LMI-based optimisation
In this article, H∞ structured model reduction is addressed for linear discrete systems. Two important classes of systems are considered for structured model reduction, i.e. Markov jump systems and uncertain systems. The problem we deal with is the development of algorithms with the flexibility to allow any structure in the reduced-order system design, such as the structure of an original system, decentralisation of a networked system, pole assignment of the reduced system, etc. The algorithms are derived such that an associated model reduction error guarantees to satisfy a prescribed H∞ norm-bound constraint. A new condition for the existence of desired reduced-order models preserving a certain structure is presented in a set of linear matrix inequalities (LMI) and non-convex equality constraints. Effective computational algorithms involving LMI are suggested to solve the matrix inequalities characterising a solution of the structured model reduction problem. Numerical examples demonstrate the advantages of the proposed model reduction method.
[ "structured model reduction", "uncertain systems", "linear matrix inequality", "h design", "markovian jump linear systems", "bilinear matrix inequality" ]
[ "P", "P", "P", "R", "M", "M" ]
3tp98Jq
A power-aware code-compression design for RISC/VLIW architecture
We studied the architecture of embedded computing systems from the viewpoint of power consumption in memory systems and used a selective-code-compression (SCC) approach to realize our design. Based on the LZW (Lempel-Ziv-Welch) compression algorithm, we propose a novel cost effective compression and decompression method. The goal of our study was to develop a new SCC approach with an extended decision policy based on the prediction of power consumption. Our decompression method had to be easily implemented in hardware and to collaborate with the embedded processor. The hardware implementation of our decompression engine uses the TSMC 0.18 μm 2P6M process and its cell-based libraries. To calculate power consumption more accurately, we used a static analysis method to estimate the power overhead of the decompression engine. We also used variable sized branch blocks and considered several features of very long instruction word (VLIW) processors for our compression, including the instruction level parallelism (ILP) technique and the scheduling of instructions. Our code-compression methods are not limited to VLIW machines, and can be applied to other kinds of reduced instruction set computer (RISC) architecture.
[ "cell-based libraries", "instruction level parallelism (ilp)", "lzw compression", "vliw processors" ]
[ "P", "P", "R", "R" ]
1-FDE5s
Global/local negotiations for implementing configurable packages: The power of initial organizational decisions
The purpose of this paper is to draw attention to the critical influence that initial organizational decisions regarding power and knowledge balance between internal members and external consultants have on the global/local negotiation that characterizes configurable packages implementation. To do this, we conducted an intensive research study of a configurable information technology (IT) implementation project in a Canadian firm.
[ "intensive research", "configurable technology", "erp implementation", "critical discourse analysis", "temporal bracketing analysis", "qualitative research methods", "global/local negotiation", "power/knowledge balance" ]
[ "P", "R", "M", "M", "U", "M", "M", "M" ]
3UcQ317
Communication in Random Geometric Radio Networks with Positively Correlated Random Faults
We study the feasibility and time of communication in random geometric radio networks, where nodes fail randomly with positive correlation. We consider a set of radio stations with the same communication range, distributed in a random uniform way on a unit square region. In order to capture fault dependencies, we introduce the ranged spot model in which damaging events, called spots, occur randomly and independently on the region, causing faults in all nodes located within distance s from them. Node faults within distance 2s become dependent in this model and are positively correlated. We investigate the impact of the spot arrival rate on the feasibility and the time of communication in the fault-free part of the network. We provide an algorithm which broadcasts correctly with probability 1 - epsilon in faulty random geometric radio networks of diameter D in time O(D + log(1/epsilon)).
[ "random", "geometric radio network", "broadcast", "fault-tolerance", "dependent faults", "crash faults" ]
[ "P", "P", "P", "U", "R", "M" ]
2ahopA6
Consumer complaint behaviour in telecommunications: The case of mobile phone users in Spain
Consumer complaint behaviour theory is used to analyze Spanish telecommunications data. The main determinants are the type of problem, socio-demographic factors and the user's type of contract. Proper complaint handling leads to satisfied, loyal and profitable consumers.
[ "consumer complaint behaviour", "mobile phones", "consumer retention", "consumer satisfaction", "consumer loyalty", "voice", "exit", "service failure", "complainers" ]
[ "P", "P", "M", "M", "M", "U", "U", "U", "U" ]
3NJBEzx
Combining OWL ontologies using epsilon-Connections
The standardization of the Web Ontology Language (OWL) leaves (at least) two crucial issues for Web-based ontologies unsatisfactorily resolved, namely how to represent and reason with multiple distinct, but linked ontologies, and how to enable effective knowledge reuse and sharing on the Semantic Web. In this paper, we present a solution for these fundamental problems based on ε-Connections. We aim to use ε-Connections to provide modelers with suitable means for developing Web ontologies in a modular way and to provide an alternative to the owl:imports construct. With such motivation, we present in this paper a syntactic and semantic extension of the Web Ontology Language that covers ε-Connections of OWL-DL ontologies. We show how to use such an extension as an alternative to the owl:imports construct in many modeling situations. We investigate different combinations of the logics SHIN(D), SHON(D) and SHIO(D) for which it is possible to design and implement reasoning algorithms, well-suited for optimization. Finally, we provide support for ε-Connections in both an ontology editor, SWOOP, and an OWL reasoner, Pellet. (c) 2005 Elsevier B.V. All rights reserved.
[ "web ontology language", "integration and combination of ontologies", "combination of knowledge representation formalisms", "description logics reasoning" ]
[ "P", "M", "M", "M" ]
-8mVp3e
Model with artificial neural network to predict the relationship between the soil resistivity and dry density of compacted soil
This paper presents a technique to obtain the outcomes of soil dry density and optimum moisture content with an artificial neural network (ANN) for compacted soil monitoring through soil resistivity measurement in geotechnical engineering. Compacted soil monitoring through soil electrical resistivity plays an important role in the construction of highway embankments, earth dams and many other engineering structures. Generally, soil compaction is estimated through the determination of maximum dry density at optimum moisture content in a laboratory test. Estimating soil compaction with the conventional soil monitoring technique is time consuming and costly, as laboratory testing requires a large number of compacted soil samples. In this work, an ANN model is developed for predicting the relationship between the dry density of compacted soil and soil electrical resistivity based on experimental data from the soil profile. The regression analysis between the output and target values shows that the R² values are 0.99 and 0.93 for the training and testing sets, respectively, for the implementation of the ANN on the soil profile. The significance of our research is to obtain an intelligent model for getting faster, cost-effective and consistent outcomes in soil compaction monitoring through electrical resistivity for a wide range of applications in geotechnical investigation.
[ "dry density", "electrical resistivity", "soil compaction", "ann modeling" ]
[ "P", "P", "P", "P" ]
-S-A4&G
A fuzzy bi-criteria transportation problem
In this paper, a fuzzy bi-criteria transportation problem is studied. Here, the model concentrates on two criteria: total delivery time and total profit of transportation. The delivery times on links are fuzzy intervals with increasing linear membership functions, whereas the total delivery time on the network is a fuzzy interval with a decreasing linear membership function. On the other hand, the transporting profits on links are fuzzy intervals with decreasing linear membership functions and the total profit of transportation is a fuzzy number with an increasing linear membership function. Supplies and demands are deterministic numbers. A nonlinear programming model considers the problem using the max-min criterion suggested by Bellman and Zadeh. We show that the problem can be simplified into two bi-level programming problems, which are solved very conveniently. A proposed efficient algorithm based on parametric linear programming solves the bi-level problems. To explain the algorithm two illustrative examples are provided, systematically. (C) 2011 Elsevier Ltd. All rights reserved.
[ "bi-criteria transportation", "fuzzy interval", "membership function", "bi-level programming", "fuzzy transportation", "parametric programming" ]
[ "P", "P", "P", "P", "R", "R" ]
-NY:hBf
Capacity Gain of Mixed Multicast/Unicast Transport Schemes in a TV Distribution Network
This paper presents three approaches to estimate the required resources in an infrastructure where digital TV channels can be delivered in unicast or multicast (broadcast) mode. Such situations arise for example in Cable TV, IPTV distribution networks or in (future) hybrid mobile TV networks. The three approaches presented are an exact calculation, a Gaussian approximation and a simulation tool. We investigate two scenarios that allow saving bandwidth resources. In a static scenario, the most popular channels are multicast and the less popular channels rely on unicast. In a dynamic scenario, the list of multicast channels is dynamic and governed by the users' behavior. We prove that the dynamic scenario always outperforms the static scenario. We demonstrate the robustness, versatility and the limits of our three approaches. The exact calculation application is limited because it is computationally expensive for cases with large numbers of users and channels, while the Gaussian approximation is good exactly for such systems. The simulation tool takes long to yield results for small blocking probabilities. We explore the capacity gain regions under varying model parameters. Finally, we illustrate our methods by discussing some realistic network scenarios using channel popularities based on measurement data as much as possible.
[ "multicast", "unicast", "iptv", "mobile tv", "capacity planning", "digital tv/video", "streaming", "switched broadcast" ]
[ "P", "P", "P", "P", "M", "M", "U", "M" ]
-67TMGD
quantitatively evaluating the influence of online social interactions in the community-assisted digital library
Online social interactions are useful in information seeking from digital libraries, but how to measure their influence on the user's information access actions has not yet been revealed. Studies on this problem give us interesting insights into the workings of human dynamics in the context of information access from digital libraries. On that basis, we wish to improve the technological supports to provide more intelligent services in the ongoing China-America Million Books Digital Library so that it can reach its potential in serving human needs. Our research aims at developing a common framework to model the online social interaction process in community-assisted digital libraries. The underlying philosophy of our work is that the online social interaction can be viewed as a dynamic process, and the next state of each participant in this process (e.g., personal information access competency) depends on the value of the previous states of all participants involved in interactions in the period. Hence, considering the dynamics of the interaction process, we model each participant with a Hidden Markov Model (HMM) chain and then employ the Influence Model, which was developed by C. Asavathiratham as a Dynamic Bayes Net (DBN) representing the influences a number of Markov chains have on each other, to analyze the effects of participants influencing each other. Therefore, one can think of the entire interaction process as a DBN framework having two levels of structure: the local level and the network level. Each participant i has a local HMM chain Γ(A) which characterizes the transition of his internal states in the interaction process with state-transition probability Σ_j d_ij P(S_i^t | S_j^(t-1)) (here states are his personal information access competence in different periods, while observations are his information access actions).
Meanwhile, the network level, which is described by a network graph Γ(D^T) where D = {d_ij} is the influence factor matrix, represents the interacting relations between participants. The strength of each connection, d_ij, describes the influence factor of the participant j at its beginning on the participant i at its end. Hence, this model describes the dynamic inter-influence process of the internal states of all participants involved in online interactions. To automatically build the model, we first need to extract observed features from the data of online social interactions and information access actions. Obviously, the effects of interactions are stronger if messages are exchanged more frequently, or the participants access more information in the online digital libraries during the period of time. Based on this consideration, we select the interaction measure IM_{i,j}^t and the amount of information IA_i^t as the estimation features of x_i^t. The interaction measure parameterizes the features calculated automatically from the data of online social interactions between the participants i and j, and the amount of information parameterizes the features calculated from the data of information access actions, respectively. Secondly, we need to develop a mechanism for learning the parameters d_ij and P(S_i^t | S_j^(t-1)). Given sequences of observations {x_i^t} for each chain i, we may easily utilize the Expectation-Maximization algorithm or the gradient-based learning algorithm to get their estimation equations. We ran our experiments in the online digital library of the W3C Consortium (www.w3c.org), which contains a mass of news, electronic papers and other materials related to web technologies. Users may access and download any information and materials in this digital library, and may also freely discuss any related technological problems by means of its mailing lists.
Six users were selected in our experiments to collaboratively perform paper-gathering tasks related to four given topics. Any user might call for help from the others through the mailing lists when he had difficulties in this process. All participants were required to record subjective evaluations of the effects that the others had in influencing his tasks. Each experiment was scheduled in ten phases. In each phase, we sampled IM_{i,j}^t and IA_i^t for each participant and then fed them into the learning algorithms to automatically build the influence model. By comparing with the subjective influence graphs, the experimental results show that the influence model can approximately estimate the influences of online social interactions.
[ "online social interactions", "community-assisted digital library", "the influence model", "statistical feature extraction" ]
[ "P", "P", "P", "M" ]
3-::K:v
THE CEO/CIO RELATIONSHIP REVISITED - AN EMPIRICAL-ASSESSMENT OF SATISFACTION WITH IS
The necessity of integrating information systems (IS) into corporate strategy has received widespread attention in recent years. Strategic planning has moved IS from serving primarily as a support function to a point where it may influence corporate strategy. The strength of this influence, however, usually is determined by the nature of the relationship between the chief information officer (CIO) and the CEO. Generally the more satisfied CEOs are with CIOs, the greater the influence IS has on top-level decisions. Results of a nationwide survey of motor carrier CEOs and CIOs indicate that CEOs are generally satisfied with their CIOs' activities, and that CIOs perceive CEOs as placing a high priority on strategic IS plans. However, IS does not appear to be truly a part of corporate strategy formulation.
[ "strategic planning", "relationship between ceo and cio", "satisfaction with information systems", "role of information technology in strategic planning", "strategic information systems planning", "information technology in the motor carrier industry", "ceo satisfaction with is", "cio perceptions of corporate use of is" ]
[ "P", "R", "R", "M", "R", "M", "R", "M" ]
4vssBMm
Simulations of photosynthesis by a K-subset transforming system with membrane
By considering the inner regions of living cells' membranes, P systems with inner regions are introduced. Then, a new type of membrane computing system is considered, called K-subset transforming systems with membranes, which can treat nonintegral multiplicities of objects. As an application, a K-subset transforming system is proposed in order to model the light reactions of photosynthesis. The behaviour of such systems is simulated on a computer.
[ "photosynthesis", "k-subset", "p system", "nonintegral multiplicity" ]
[ "P", "P", "P", "P" ]
22ZV:nL
Technologische Innovation und die Auswirkung auf Geschäftsmodell, Organisation und Unternehmenskultur – Die Transformation der IBM zum global integrierten, dienstleistungsorientierten Unternehmen
Im vorliegenden Beitrag wird der Einfluss von Innovationen der Informations- und Kommunikationstechnologie (IKT) auf die Transformation von Unternehmen untersucht. Zunächst werden die allgemeinen IKT-getriebenen Entwicklungslinien der Globalisierung und der Dienstleistungsorientierung beschrieben. Die nachfolgende Analyse der Transformation der IBM Corporation über die letzten 50 Jahre zu einem global integrierten, dienstleistungsorientierten Unternehmen macht deutlich, dass IKT-Innovationen mit gleichzeitigen Anpassungen des Geschäftsmodells, der Organisation und der Unternehmenskultur begegnet werden muss. Die Fähigkeit zu derartiger Adaption gewinnt eine zunehmend zentrale Bedeutung für Unternehmen.
[ "innovation", "innovation", "geschftsmodell", "organisation", "unternehmenskultur", "transformation", "transformation", "ibm", "ibm", "informations- und kommunikationstechnologie", "vernderungsmanagement", "information and communication technology", "business model", "organization", "corporate culture", "change management" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "U", "U", "U", "M", "U" ]
-BuT&os
Modular robotics as a tool for education and entertainment
We developed I-BLOCKS, a modular electronic building block system and here we show how this system has proven useful, especially as an educational tool that allows hands-on learning in an easy manner. Through user studies we find limitations of the first I-BLOCKS system, and we show how the system can be improved by introducing a graphical user interface for authoring the contents of the individual I-BLOCK. This is done by developing a new cubic block shape with new physical and electrical connectors, and by including new embedded electronics. We developed and evaluated the I-BLOCKS as a manipulative technology through studies in both schools and hospitals, and in diverse cultures such as in Denmark, Finland, Italy and Tanzania.
[ "constructionism", "developing countries", "educational robots", "educational technology", "entertainment robots" ]
[ "U", "M", "R", "R", "R" ]
w1YPhu3
The selective use of redundancy for video streaming over Vehicular Ad Hoc Networks
Video streaming over Vehicular Ad Hoc Networks (VANETs) offers the opportunity to deploy many interesting services. These services, however, are strongly prone to packet loss due to the highly dynamic topology and shared wireless medium inherent in the VANETs. A possible strategy to enhance the delivery rate is to use redundancy for handling packet loss. This is a suitable technique for VANETs as it does not require any interaction between the source and receivers. In this work, we discuss novel approaches for the use of redundancy based on the particularities of video streaming over VANETs. A thorough study on the use of redundancy through Erasure Coding and Network Coding in both video unicast and video broadcast in VANETs is provided. We investigate each strategy, design novel solutions and compare their performance. We evaluated the proposed solutions from the perspective not only of cost as bandwidth utilization, but also the offered receiving rate of unique video content at the application layer. This perspective is fundamental to understanding how redundancy can be used without limiting the video quality that can be displayed to end users. Furthermore, we propose the selective use of redundancy solely on data that is more relevant to the video quality. This approach offers increases in overall video quality without leading to an excessive overhead nor to a substantial decrease in the receiving rate of unique video content.
[ "redundancy", "video streaming", "vanets", "erasure coding", "network coding" ]
[ "P", "P", "P", "P", "P" ]
2YwTB2F
Syntactic recognition of ECG signals by attributed finite automata
A syntactic pattern recognition method of electrocardiograms (ECG) is described in which attributed automata are used to execute the analysis of ECG signals. An ECG signal is first encoded into a string of primitives and then attributed automata are used to analyse the string. We have found that we can perform fast and reliable analysis of ECG signals by attributed automata.
[ "syntactic pattern recognition", "electrocardiograms", "attributed automata", "filtering", "medical computation", "nondeterministic top-down parsing", "primitive extraction", "signal analysis" ]
[ "P", "P", "P", "U", "U", "U", "M", "R" ]
-xVKCMG
Lane-mark extraction for automobiles under complex conditions
We proposed a vision-based lane-mark extraction method. We used multi-adaptive thresholds for different blocks. Based on the results, our method was robust under complex conditions. The proposed system could operate in real-time.
[ "line fitting", "local edge-orientation", "kalman filter" ]
[ "U", "U", "U" ]
292M57T
Cultural differences explaining the differences in results in GSS: implications for the next decade
For the next decade, the support that comes from Group Support Systems (GSS) will be increasingly directed towards culturally diversified groups. While there have been many GSS studies concerning culture and cultural differences, no dedicated review of GSS research exists for the identification of current gaps and opportunities of doing cross-cultural GSS research. For this purpose, this paper provides a comprehensive review utilizing a taxonomy of six categories: research type, GSS technology used, independent variables, dependent variables, use of culture, and findings. Additionally, this study also aims to illustrate how differences in experimental results arising from comparable studies, but from a different cultural setting, can be explained consistently using Hofstede's dimensions. To do so, we presented a comparative study on the use of GSS in Australia and Singapore and explain the differences in results using Hofstede's [G. Hofstede, Culture's Consequences: International Differences in Work-related Values, Sage, Beverly Hills, CA (1980).] cultural dimensions. Last, but not least, we present the implications of the impact of culture on GSS research for the next decade from the viewpoint of the three GSS stakeholders: the facilitators, GSS software designers, and the GSS researchers. With the above, this paper seeks (i) to prepare a comprehensive map of GSS research involving culture, and (ii) to prepare a picture of what all these mean and where we should be heading in the next decade.
[ "cultural differences", "gss", "implications" ]
[ "P", "P", "P" ]
16Smzik
Building an IP-based community wireless mesh network: Assessment of PACMAN as an IP address autoconfiguration protocol
Wireless mesh networks are experiencing rapid progress and inspiring numerous applications in different scenarios, due to features such as autoconfiguration, self-healing, connectivity coverage extension and support for dynamic topologies. These particular characteristics make wireless mesh networks an appropriate architectural basis for the design of easy-to-deploy community or neighbourhood networks. One of the main challenges in building a community network using mesh networks is the minimisation of user intervention in the IP address configuration of the network nodes. In this paper we first consider the process of building an IP-based mesh network using typical residential routers, exploring the options for the configuration of their wireless interfaces. Then we focus on IP address autoconfiguration, identifying the specific requirements for community mesh networks and analysing the applicability of existing solutions. As a result of that analysis, we select PACMAN, an efficient distributed address autoconfiguration mechanism originally designed for ad-hoc networks, and we perform an experimental study - using off-the-shelf routers and assuming worst-case scenarios - analysing its behaviour as an IP address autoconfiguration mechanism for community wireless mesh networks. The results of the conducted assessment show that PACMAN meets all the identified requirements of the community scenario. (C) 2009 Elsevier B.V. All rights reserved.
[ "wireless mesh networks", "pacman", "community networks", "experimental evaluation" ]
[ "P", "P", "P", "M" ]
QzD8dRo
Approximating partition functions of the two-state spin system
The two-state spin system is a classical topic in statistical physics. We consider the problem of computing the partition function of the system on a bounded degree graph. Based on the self-avoiding tree, we prove that the system exhibits strong correlation decay under the condition that the absolute value of inverse temperature is small. Due to the strong correlation decay property, an FPTAS for the partition function is presented and uniqueness of the Gibbs measure of the two-state spin system on a bounded degree infinite graph is proved, under the same condition. This condition is sharp for the Ising model. (C) 2011 Elsevier B.V. All rights reserved.
[ "two-state spin system", "strong correlation decay", "fptas", "uniqueness of gibbs measure", "gibbs measure", "ising model", "approximation algorithms" ]
[ "P", "P", "P", "P", "P", "P", "M" ]
4xAgdCt
Efficient memory utilization for high-speed FPGA-based hardware emulators with SDRAMs
FPGA-based hardware emulators are often used for the verification of LSI functions. They generally have dedicated external memories, such as SDRAMs, to compensate for the lack of memory capacity in FPGAs. In such a case, access between the FPGAs and the dedicated external memory may represent a major bottleneck with respect to emulation speed since the dedicated external memory may have to emulate a large number of memory blocks. In this paper, we propose three methods, "Dynamic Clock Control (DCC)," "Memory Mapping Optimization (MMO)," and "Efficient Access Scheduling (EAS)," to avoid this bottleneck. DCC controls an emulation clock dynamically in accord with the number of memory accesses within one emulation clock cycle. EAS optimizes the ordering of memory access to the dedicated external memory, and MMO optimizes the arrangement of the dedicated external memory addresses to which respective memories will be emulated. With them, emulation speed can be made 29.0 times faster, as evaluated in actual LSI emulations.
[ "fpga-based hardware emulators", "sdram", "memory controller clock generator" ]
[ "P", "P", "R" ]
34-YVUT
Minimum stress optimal design with the level set method
This paper is devoted to minimum stress design in structural optimization. We propose a simple and efficient numerical algorithm for shape and topology optimization based on the level set method coupled with the topological derivative. We compute a shape derivative, as well as a topological derivative, for a stress-based objective function. Using an adjoint equation we implement a gradient algorithm for the minimization of the objective function. Several numerical examples in 2-d and 3-d are discussed.
[ "level set method", "topological derivative", "shape derivative" ]
[ "P", "P", "P" ]
5&iaCcF
Determination of wire recovery length in steel cables and its practical applications
In the presence of relatively significant states of radial pressures between the helical wires of a steel cable (spiral strand and/or wire rope), and significant levels of interwire friction, the individual broken wires tend to take up their appropriate share of the axial load within a certain length from the fractured end, which is called the recovery (or development) length. The paper presents full details of the formulations for determining the magnitude of recovery length in any layer of an axially loaded multi-layered spiral strand with any construction details. The formulations are developed for cases of fully bedded-in (old) spiral strands within which the pattern of interlayer contact forces and associated significant values of line-contact normal forces between adjacent wires in any layer, are fully stabilised, and also for cases when (in the presence of gaps between adjacent wires) hoop line-contact forces do not exist and only radial forces are present. Based on a previously reported extensive series of theoretical parametric studies using a wide range of spiral strand constructions with widely different wire (and cable) diameters and lay angles, a very simple method (aimed at practising engineers) for determining the magnitude of recovery length in any layer of an axially loaded spiral strand with any type of construction details is presented. Using the final outcome of theoretical parametric studies, the minimum length of test specimens for axial fatigue tests whose test data may safely be used for estimating the axial fatigue lives of the much longer cables under service conditions may now be determined in a straightforward fashion. Moreover, the control length over which one should count the number of broken wires for cable discard purposes is suggested to be equal to one recovery length whose upper bound value for both spiral strands and/or wire ropes with any construction details is theoretically shown to be equal to 2.5 lay lengths.
[ "wire recovery length", "steel cables", "multi-layered spiral strand", "radial forces" ]
[ "P", "P", "P", "P" ]
B2wiN8s
Generalized PCM Coding of Images
Pulse-code modulation (PCM) with embedded quantization allows the rate of the PCM bitstream to be reduced by simply removing a fixed number of least significant bits from each codeword. Although this source coding technique is extremely simple, it has poor coding efficiency. In this paper, we present a generalized PCM (GPCM) algorithm for images that simply removes bits from each codeword. In contrast to PCM, however, the number and the specific bits that a GPCM encoder removes in each codeword depends on its position in the bitstream and the statistics of the image. Since GPCM allows the encoding to be performed with different degrees of computational complexity, it can adapt to the computational resources that are available in each application. Experimental results show that GPCM outperforms PCM with a gain that depends on the rate, the computational complexity of the encoding, and the degree of inter-pixel correlation of the image.
[ "pulse-code modulation", "quantization", "binning", "interpolative coding" ]
[ "P", "P", "U", "M" ]
36qruZu
Influence of motor and converter non-linearities on dynamic properties of DC drive with field weakening range
Improvement of the dynamic properties of a DC drive in the field weakening range was the aim of the investigation. A non-linear model of the drive system was applied. In the paper, results of a comparative analysis of two emf control structures are presented. The classic emf control structure with a subordinated excitation current control loop was compared with one consisting of a non-linear compensation block. For both control structures, different kinds of parameter designs for the emf and excitation controllers are considered. Verification of the theoretical assumptions and synthesis methods of the investigated control structures is made by simulation tests using the PSpice language.
[ "dynamic properties", "dc drive", "field weakening", "field control", "non-linear control" ]
[ "P", "P", "P", "R", "R" ]
3FMGiHV
Rental software valuation in IT investment decisions
The growth of application service providers (ASPs) is very rapid, leading to a number of options to organizations interested in developing new information technology services. The advantages of an ASP include spreading out payments over a contract period and flexibility in terms of responding to changes in technology. Likewise, newer risks are associated with ASPs, including pricing variability. Some of the more common capital budgeting models may not be appropriate in this volatile marketplace. However, option models allow for many of the quirks to be considered. Modification of the option pricing model and an analytical solution method incorporated into a spreadsheet for decision support are described and illustrated. The analytical tool allows for better decisions compared to traditional value analysis methods which do not fully account for the entry and exit options of the market.
[ "application service providers", "options", "capital budgeting", "information technology investment", "stochastic processes", "risk analysis" ]
[ "P", "P", "P", "R", "U", "R" ]
44rbDiy
An averaging scheme for macroscopic numerical simulation of nonconvex minimization problems
Averaging or gradient recovery techniques, which are a popular tool for improved convergence or superconvergence of finite element methods in elliptic partial differential equations, have not been recommended for nonconvex minimization problems as the energy minimization process enforces finer and finer oscillations and hence at the first glance, a smoothing step appears even counterproductive. For macroscopic quantities such as the stress field, however, this counterargument is no longer true. In fact, this paper advertises an averaging technique for a surprisingly improved convergence behavior for nonconvex minimization problems. Similar to a finite volume scheme, numerical experiments on a double-well benchmark example provide empirical evidence of superconvergence phenomena in macroscopic numerical simulations of oscillating microstructures.
[ "averaging scheme", "macroscopic numerical simulation", "nonconvex minimization", "convexification", "adaptive mesh refinement" ]
[ "P", "P", "P", "U", "U" ]
1aNxK6H
Which App? A recommender system of applications in markets: Implementation of the service for monitoring users' interaction
Users face the information overload problem when downloading applications in markets. This is mainly due to (i) the increasingly unmanageable number of applications and (ii) the lack of an accurate and fine-grained categorization of the applications in the markets. To address this issue, we present an integrated solution which recommends applications to users by considering a large amount of information: that is, according to their previously consumed applications, use patterns, tags used to annotate resources and history of ratings. We focus this paper on the service for monitoring users' interaction.
[ "recommender system", "context-awareness", "mobile applications", "filtering" ]
[ "P", "U", "M", "U" ]
-U9Tmbj
New statistical features for the design of fiber optic statistical mode sensors
Novel statistical features are proposed for the design of statistical mode sensors. The proposed statistical features are first- and second-order moments. The features are compared in terms of precision error, non-linearity, and hysteresis.
[ "statistical features", "statistical mode sensor", "fiber optic sensor", "image processing" ]
[ "P", "P", "R", "U" ]
WoBZ7Mr
An efficient indexing method for content-based image retrieval
In this paper, we propose an efficient indexing method for content-based image retrieval. The proposed method introduces the ordered quantization to increase the distinction among the quantized feature descriptors. Thus, the feature point correspondences can be determined by the quantized feature descriptors, and they are used to measure the similarity between query image and database image. To implement the above scheme efficiently, a multi-dimensional inverted index is proposed to compute the number of feature point correspondences, and then approximate RANSAC is investigated to estimate the spatial correspondences of feature points between query image and candidate images returned from the multi-dimensional inverted index. The experimental results demonstrate that our indexing method improves the retrieval efficiency while ensuring the retrieval accuracy in the content-based image retrieval.
[ "content-based image retrieval", "ordered quantization", "feature point correspondences", "multi-dimensional inverted index", "approximate ransac" ]
[ "P", "P", "P", "P", "P" ]
3y:yLCQ
Prediction intervals in linear regression taking into account errors on both axes
This study reports the expressions for the variances in the prediction of the response and predictor variables calculated with the bivariate least squares (BLS) regression technique. This technique takes into account the errors on both axes. Our results are compared with those of a simulation process based on six different real data sets. The mean error in the results from the new expressions is between 4% and 5%. With weighted least squares, ordinary least squares, the constant variance ratio approach and orthogonal regression, on the other hand, mean errors can be as high as 85%, 277%, 637% and 1697% respectively. An important property of the prediction intervals calculated with BLS is that the results are not affected when the axes are switched.
[ "prediction", "linear regression", "errors on both axes", "confidence intervals", "predictor intervals" ]
[ "P", "P", "P", "M", "R" ]
3bPz3y7
The PIAM approach to modular integrated assessment modelling
The next generation of integrated assessment modelling is envisaged as being organised as a modular process, in which modules encapsulating knowledge from different scientific disciplines are independently developed at distributed institutions and coupled afterwards in accordance with the question raised by the decision maker. Such a modular approach needs to respect several stages of the model development process, approaching modularisation and integration on a conceptual, numerical, and technical level. The paper discusses the challenges at each level and presents partial solutions developed by the PIAM (Potsdam Integrated Assessment Modules) project at the Potsdam Institute for Climate Impact Research (PIK). The challenges at each level differ greatly in character and in the work done addressing them. At the conceptual level, the notion of conceptual consistency of modular integrated models is discussed. At the numerical level, it is shown how an adequate modularisation of a problem from climate-economy leads to a modular configuration into which independently developed climate and economic modules can be plugged. At the technical level, a software tool is presented which provides a simple consistent interface for data transfer between modules running on distributed and heterogeneous computer platforms.
[ "modularity", "integrated assessment modelling", "integrated modelling", "modular modelling", "model integration", "climate change" ]
[ "P", "P", "P", "R", "R", "M" ]
-KAPy7A
Finding multivariate outliers in fMRI time-series data
Multivariate outlier detection methods are applicable to fMRI time-series data. Removing outliers increases spatial specificity without hurting classification. Simulation shows PCOut is more sensitivity to small outliers than HD BACON.
[ "fmri", "outlier detection", "high dimensional data" ]
[ "P", "P", "M" ]
ZjHCVhv
Cardinal Consistency of Reciprocal Preference Relations: A Characterization of Multiplicative Transitivity
Consistency of preferences is related to rationality, which is associated with the transitivity property. Many properties suggested to model transitivity of preferences are inappropriate for reciprocal preference relations. In this paper, a functional equation is put forward to model the "cardinal consistency in the strength of preferences" of reciprocal preference relations. We show that under the assumptions of continuity and monotonicity properties, the set of representable uninorm operators is characterized as the solution to this functional equation. Cardinal consistency with the conjunctive representable cross ratio uninorm is equivalent to Tanino's multiplicative transitivity property. Because any two representable uninorms are order isomorphic, we conclude that multiplicative transitivity is the most appropriate property for modeling cardinal consistency of reciprocal preference relations. Results toward the characterization of this uninorm consistency property based on a restricted set of (n - 1) preference values, which can be used in practical cases to construct perfect consistent preference relations, are also presented.
[ "consistency", "reciprocity", "transitivity", "rationality", "uninorm", "fuzzy preference relation" ]
[ "P", "P", "P", "P", "P", "M" ]
4wae1zP
Infomarker - A new Internet information service system
As the web grows, the massive increase in information is placing severe burdens on information retrieval and sharing. Automated search engines and directories with small editorial staffs are unable to keep up with the increasing submission of web sites. To address the problem, this paper presents Infomarker - an Internet information service system based on Open Directory and Zero-Keyword Inquiry. The Open Directory sets up a net-community in which the growing number of net-citizens can each organize a small portion of the web and present it to the others. By means of Zero-Keyword Inquiry, the user can get the information he is interested in without inputting any keyword, as is often required by search engines. In Infomarker, the user can record the web addresses he likes and can put forward an information request based on his web records. The information matching engine checks the information in the Open Directory to find what fits the user's needs and adds it to the user's web address records. The key to the matching process is layered keyword mapping. Infomarker provides people with a whole new approach to getting information and shows a wide prospect.
[ "open directory", "zero-keyword inquiry", "information matching engine", "layered keyword mapping" ]
[ "P", "P", "P", "P" ]
3CeMYV8
Grain flow measurements with X-ray techniques
The use of low-energy X-ray densitometry, at up to 30 keV, is demonstrated for grain flow rate measurements through laboratory experiments. Mass flow rates for corn were related to measured X-ray intensity in gray scale units with a 0.99 correlation coefficient for flow rates ranging from 2 to 6 kg/s. Larger flow rate values can be measured by using higher energy or a higher tube current. Measurements were done in real time at a 30 Hz sampling rate. Flow rate measurements are relatively independent of grain moisture due to a negligible change in the X-ray attenuation coefficients at typical moisture content values from 15 to 25%. Grain flow profile changes did not affect measurement accuracy. X-rays easily capture variations in the corn thickness profile. Due to the low energy of the X-ray photons, biological shielding can be accomplished with 2-mm-thick lead foil or 5 mm of steel.
[ "x-ray", "precision farming", "yield monitoring", "yield sensor" ]
[ "P", "U", "U", "U" ]
38&5siU
Dynamic performance enhancement of microgrids by advanced sliding mode controller
Dynamics are among the most important problems in microgrid operation. In the islanded microgrid, the mismatch of parallel operations of inverters during dynamics can result in instability. This paper considers severe dynamics which can occur in the microgrid. A microgrid can have different configurations with different load and generation dynamics, all facing voltage disturbances. As a result, the microgrid has many uncertainties and is placed in the distribution network, which is full of voltage disturbances. Moreover, characteristics of the distribution network and distributed energy resources in the islanded mode make the microgrid vulnerable and can easily lead to instability. The main aim of this paper is to discuss suitable mathematical modeling based on microgrid characteristics and to properly design inner controllers to enhance the dynamics of a microgrid with uncertain and changing parameters. This paper provides a method for inner controllers of inverter-based distributed energy resources to have a suitable response for different dynamics. Parallel inverters in distribution networks were considered to be controlled by nonlinear robust voltage and current controllers. Theoretical proofs, beyond the simulation results, clearly reveal the effectiveness of the proposed controller.
[ "microgrid", "sliding mode control", "disturbances", "current controlled-voltage source inverter", "dynamic stability", "transients" ]
[ "P", "P", "P", "M", "M", "U" ]
2iMz2x3
NEW IDENTIFICATION PROCEDURE FOR CONTINUOUS-TIME RADIO FREQUENCY POWER AMPLIFIER MODEL
In this paper, we present a new method for the characterization of a radio frequency Power Amplifier (PA) in the presence of nonlinear distortions which affect the modulated signal in a radio communication transmission system. The proposed procedure uses a gray-box model where PA dynamics are modeled with a MIMO continuous filter and the nonlinear characteristics are described as general polynomial functions, approximated by means of Taylor series. Using the baseband input and output data, model parameters are obtained by an iterative identification algorithm based on the Output Error method. Initialization and excitation problems are resolved by associating a new technique using initial value extraction with a multi-level binary sequence input exciting all PA dynamics. Finally, the proposed estimation method is tested and validated on experimental data.
[ "modeling", "nonlinear distortions", "rf power amplifier", "parameter estimation", "continuous time domain" ]
[ "P", "P", "M", "R", "M" ]
28HrXfr
Manufacturing lead-time rules: Customer retention versus tardiness costs
Inaccurate production backlog information is a major cause of late deliveries, which can result in penalty fees and loss of reputation. We identify conditions when it is particularly worthwhile to improve an information system to provide good lead-time information. We first analyze a sequential decision process model of lead-time decisions at a firm which manufactures standard products to order, and has complete backlog information. There are Poisson arrivals, stochastic processing times, customers may balk in response to quoted delivery dates, and revenues are offset by tardiness penalties. We characterize an optimal policy and show how to accelerate computations. The second part of the paper is a computational comparison of this optimum (with full backlog information) with a lead-time quotation rule that is optimal with statistical shop-status information. This reveals when the partial-information method does well and when it is worth implementing measures to improve information transfer between operations and sales.
[ "manufacturing", "dynamic programming", "markov decision processes", "due-date assignment", "marketing" ]
[ "P", "U", "M", "U", "U" ]
5&Hj-gB
On the Wiberg algorithm for matrix factorization in the presence of missing components
This paper considers the problem of factorizing a matrix with missing components into a product of two smaller matrices, also known as principal component analysis with missing data (PCAMD). The Wiberg algorithm is a numerical algorithm developed for the problem in the community of applied mathematics. We argue that the algorithm has not been correctly understood in the computer vision community. Although there are many studies in our community, almost every one of which refers to the Wiberg study, as far as we know, there is no literature in which the performance of the Wiberg algorithm is investigated or the detail of the algorithm is presented. In this paper, we present derivation of the algorithm along with a problem in its implementation that needs to be carefully considered, and then examine its performance. The experimental results demonstrate that the Wiberg algorithm shows a considerably good performance, which should contradict the conventional view in our community, namely that minimization-based algorithms tend to fail to converge to a global minimum relatively frequently. The performance of the Wiberg algorithm is such that even starting with random initial values, it converges in most cases to a correct solution, even when the matrix has many missing components and the data are contaminated with very strong noise. Our conclusion is that the Wiberg algorithm can also be used as a standard algorithm for the problems of computer vision.
[ "matrix factorization", "principal component analysis with missing data (pcamd)", "numerical algorithm", "singular value decomposition", "structure from motion" ]
[ "P", "P", "P", "M", "U" ]
478PuMj
Semi-supervised local Fisher discriminant analysis for dimensionality reduction
When only a small number of labeled samples are available, supervised dimensionality reduction methods tend to perform poorly because of overfitting. In such cases, unlabeled samples could be useful in improving the performance. In this paper, we propose a semi-supervised dimensionality reduction method which preserves the global structure of unlabeled samples in addition to separating labeled samples in different classes from each other. The proposed method, which we call SEmi-supervised Local Fisher discriminant analysis (SELF), has an analytic form of the globally optimal solution and it can be computed based on eigen-decomposition. We show the usefulness of SELF through experiments with benchmark and real-world document classification datasets.
[ "local fisher discriminant analysis", "dimensionality reduction", "semi-supervised learning", "cluster assumption", "principal component analysis" ]
[ "P", "P", "M", "U", "M" ]
3msWhtJ
A stable fluid-structure-interaction solver for low-density rigid bodies using the immersed boundary projection method
Dispersion of low-density rigid particles with complex geometries is ubiquitous in both natural and industrial environments. We show that while explicit methods for coupling the incompressible Navier-Stokes equations and Newton's equations of motion are often sufficient to solve for the motion of cylindrical particles with low density ratios, for more complex particles such as a body with a protrusion they become unstable. We present an implicit formulation of the coupling between rigid body dynamics and fluid dynamics within the framework of the immersed boundary projection method. Similarly to previous work on this method, the resulting matrix equation in the present approach is solved using a block-LU decomposition. Each step of the block-LU decomposition is modified to incorporate the rigid body dynamics. We show that our method achieves second-order accuracy in space and first-order in time (third-order for practical settings), only with a small additional computational cost to the original method. Our implicit coupling yields stable solutions for density ratios as low as 10^-4. We also consider the influence of fictitious fluid located inside the rigid bodies on the accuracy and stability of our method.
[ "newton's equations of motion", "low density ratios", "complex particles", "implicit coupling", "fictitious fluid", "immersed boundary method" ]
[ "P", "P", "P", "P", "P", "R" ]
2TEX4Cq
Latent word context model for information retrieval
The application of word sense disambiguation (WSD) techniques to information retrieval (IR) has yet to provide convincing retrieval results. Major obstacles to effective WSD in IR include coverage and granularity problems of word sense inventories, sparsity of document context, and limited information provided by short queries. In this paper, to alleviate these issues, we propose the construction of latent context models for terms using latent Dirichlet allocation. We propose building one latent context per word, using a well-principled representation of local context based on word features. In particular, context words are weighted using a decaying function according to their distance to the target word, which is learnt from data in an unsupervised manner. The resulting latent features are used to discriminate word contexts, so as to constrict the query's semantic scope. Consistent and substantial improvements, including on difficult queries, are observed on TREC test collections, and the technique combines well with blind relevance feedback. Compared to traditional topic modeling, WSD and positional indexing techniques, the proposed retrieval model is more effective and scales well on large-scale collections.
[ "word context", "word sense disambiguation (wsd)", "topic models", "retrieval models", "word context discrimination (wcd)" ]
[ "P", "P", "P", "P", "M" ]
vdqAYw8
Clusterization, frustration and collectivity in random networks
We consider the random Erdos-Renyi network with enhanced clusterization and Ising spins s = +/- 1 at the network nodes. Mutually linked spins interact with energy J. Magnetic properties of the system that are dependent on the clustering coefficient C are investigated with the Monte Carlo heat bath algorithm. For J > 0 the Curie temperature T(c) increases from 3.9 to 5.5 when C increases from almost zero to 0.18. These results deviate only slightly from the mean field theory. For J < 0 the spin-glass phase appears below T(SG); this temperature decreases with C, contrary to the mean field calculations. The results are interpreted in terms of social systems.
[ "random networks", "phase transitions" ]
[ "P", "M" ]
34rK6Ep
GPS/INS integration utilizing dynamic neural networks for vehicular navigation
Recently, methods based on Artificial Intelligence (AI) have been suggested to provide reliable positioning information for different land vehicle navigation applications integrating the Global Positioning System (GPS) with the Inertial Navigation System (INS). All existing AI-based methods are based on relating the INS error to the corresponding INS output at certain time instants and do not consider the dependence of the error on the past values of the INS. This study, therefore, suggests the use of Input-Delayed Neural Networks (IDNN) to model both the INS position and velocity errors based on current and some past samples of INS position and velocity, respectively. This results in a more reliable positioning solution during long GPS outages. The proposed method is evaluated using road test data of different trajectories while both navigational and tactical grade INS are mounted inside land vehicles and integrated with GPS receivers. The performance of the IDNN-based model is also compared to both conventional (based mainly on Kalman filtering) and recently published AI-based techniques. The results showed significant improvement in positioning accuracy, especially for cases of tactical grade INS and long GPS outages.
[ "gps", "dynamic neural network", "inertial navigation system (ins)", "data fusion", "ins/gps road tests" ]
[ "P", "P", "P", "M", "R" ]
1xKTXK2
A unified probabilistic framework for automatic 3D facial expression analysis based on a Bayesian belief inference and statistical feature models
Textured 3D face models capture precise facial surfaces along with the associated textures, making it possible to describe facial activities accurately. In this paper, we present a unified probabilistic framework based on a novel Bayesian Belief Network (BBN) for 3D facial expression and Action Unit (AU) recognition. The proposed BBN performs Bayesian inference based on Statistical Feature Models (SFM) and the Gibbs-Boltzmann distribution, and features a hybrid approach fusing both geometric and appearance features along with morphological ones. When combined with our previously developed morphable partial face model (SFAM), the proposed BBN has the capacity of conducting fully automatic facial expression analysis. We conducted extensive experiments on two public databases, namely the BU-3DFE dataset and the Bosphorus dataset. When using manually labeled landmarks, the proposed framework achieved an average recognition rate of 94.2% and 85.6% for the 7 and 16 AUs on face data from the Bosphorus dataset respectively, and 89.2% for the six universal expressions on the BU-3DFE dataset. Using the landmarks automatically located by SFAM, the proposed BBN still achieved an average recognition rate of 84.9% for the six prototypical facial expressions. These experimental results demonstrate the effectiveness of the proposed approach and its robustness to landmark localization errors.
[ "statistical feature model", "3d face", "bayesian belief network", "facial expression recognition", "action units recognition", "automatic landmarking" ]
[ "P", "P", "P", "R", "R", "R" ]
2i87EYq
MPML3D: Scripting Agents for the 3D Internet
The aim of this paper is two-fold. First, it describes a scripting language for specifying communicative behavior and interaction of computer-controlled agents ("bots") in the popular three-dimensional (3D) multiuser online world of "Second Life" and the emerging "OpenSimulator" project. While tools for designing avatars and in-world objects in Second Life exist, technology for nonprogrammer content creators of scenarios involving scripted agents is currently missing. Therefore, we have implemented new client software that controls bots based on the Multimodal Presentation Markup Language 3D (MPML3D), a highly expressive XML-based scripting language for controlling the verbal and nonverbal behavior of interacting animated agents. Second, the paper compares Second Life and OpenSimulator platforms and discusses the merits and limitations of each from the perspective of agent control. Here, we also conducted a small study that compares the network performance of both platforms.
[ "scripting languages", "markup languages", "artificial, augmented, and virtual realities", "graphical user interfaces", "synchronous interaction", "visualization" ]
[ "P", "P", "M", "U", "M", "U" ]
pazL-FG
tlb and snoop energy-reduction using virtual caches in low-power chip-multiprocessors
In our quest to bring down the power consumption in low-power chip-multiprocessors, we have found that TLB and snoop accesses account for about 40% of the energy wasted by all L1 data-cache accesses. We have investigated the prospects of using virtual caches to bring down the number of TLB accesses. A key observation is that while the energy wasted in the TLBs is cut, the energy associated with snoop accesses becomes higher. We then contribute two techniques to reduce the number of snoop accesses and their energy cost. Virtual caches together with the proposed techniques are shown to reduce the energy wasted in the L1 caches and the TLBs by about 30%.
[ "snoop", "virtual caches", "virtualization", "caches", "low-power", "power consumption", "account", "energy", "association", "cost", "chip multiprocessors", "data cache", "reduction", "cmp" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "M", "U", "U" ]
2&b&SyN
extracting tennis statistics from wireless sensing environments
Creating statistics from sporting events is now widespread with most efforts to automate this process using various sensor devices. The problem with many of these statistical applications is that they require proprietary applications to process the sensed data and there is rarely an option to express a wide range of query types. Instead, applications tend to contain built-in queries with predefined outputs. In the research presented in this paper, data from a wireless network is converted to a structured and highly interoperable format to facilitate user queries by expressing high level queries in a standard database language and automatically generating the results required by coaches.
[ "sensor", "query", "ubisense", "xml" ]
[ "P", "P", "U", "U" ]
3LEqKoA
Critical success factors of inter-organizational information systems- A case study of Cisco and Xiao Tong in China
This paper reports a case study of an inter-organizational information system (IOS) of Cisco and Xiao Tong in China. We interviewed their senior managers, heads of departments and employees who have been directly affected in their work. Other sources of information are company documents and publicly available background information. The study examines the benefits of the IOS for both corporations. The research also reveals seven critical success factors for the IOS, namely intensive stimulation, shared vision, cross-organizational implementation team, high integration with internal information systems, inter-organizational business process re-engineering, advanced legacy information system and infrastructure and shared industry standard.
[ "critical success factors", "inter-organizational information systems", "china" ]
[ "P", "P", "P" ]
4aLRKk6
Maize grain shape approaches for DEM modelling
The shape of a grain of maize was approached using the multi-sphere method. Models with single-spherical particles and with rolling friction were also used. Results from two DEM software codes were compared. Recommendations on the shape approach for DEM modelling were provided.
[ "maize", "dem", "multi-spheres", "rolling friction", "particle shape", "flow" ]
[ "P", "P", "P", "P", "R", "U" ]
YKthEqM
multi-sector antenna performance in dense wireless networks
Sectorized antennas provide an attractive solution to increase wireless network capacity through higher spatial reuse. Despite their increasing popularity, the real-world performance characteristics of such antennas in dense wireless mesh networks are not well understood. In this demo, we demonstrate our multi-sector antenna prototypes and their performance through video streaming over an indoor wireless network in the presence of interfering nodes. We use our graphical tool to vary the sender, receiver, and interferer antenna configurations and the resulting performance is directly visible in the video quality displayed at the receiver.
[ "sectorized antenna", "dense wireless mesh networks", "sector selection", "directional hidden terminal problem" ]
[ "P", "P", "M", "U" ]
-CUmrj&
Improvement of 3P and 6R mechanical robots reliability and quality applying FMEA and QFD approaches
In the past few years, extending usage of robotic systems has increased the importance of robot reliability and quality. To improve the robot reliability and quality by applying standard approaches such as Failure Mode and Effect Analysis (FMEA) and Quality Function Deployment (QFD) during the design of robot is necessary. FMEA is a qualitative method which determines the critical failure modes in robot design. In this method Risk Priority Number is used to sort failures with respect to critical situation. Two examples of mechanical robots are analyzed by using this method and critical failure modes are determined for each robot. Corrective actions are proposed for critical items to modify robots reliability and reduce their risks. Finally by using QFD, quality of these robots is improved according to the customers requirements. In this method by making four matrixes, optimum values for all technical parameters are determined and the final product has the desired quality.
[ "robot", "reliability", "quality", "fmea", "qfd", "performance" ]
[ "P", "P", "P", "P", "P", "U" ]
3X-M4oH
Informatics methodologies for evaluation research in the practice setting
A continuing challenge in health informatics and health evaluation is to enable access to the practice of health care so that the determinants of successful care and good health outcomes can be measured, evaluated and analysed. Furthermore the results of the analysis should be available to the health care practitioner or to the patient as might be appropriate, so that he or she can use this information for continual improvement of practice and optimisation of outcomes. In this paper we review two experiences, one in primary care, the FAMUS project, and the other in hospital care, the Autocontrol project. Each project demonstrates an informatics approach for evaluation research in the clinical setting and indicates ways in which useful information can be obtained which with appropriate feed-back and education can be used towards the achievement of better health. Emphasis is given to data collection methods compatible with practice and to high quality information feedback, particularly in the team context, to enable the formulation of strategies for practice improvement.
[ "evaluation research", "health informatics", "data collection", "clinical strategies" ]
[ "P", "P", "P", "R" ]
4c6a&tj
An improved SOM algorithm and its application to color feature extraction
Reducing the redundancy of dominant color features in an image and meanwhile preserving the diversity and quality of extracted colors is of importance in many applications such as image analysis and compression. This paper presents an improved self-organization map (SOM) algorithm namely MFD-SOM and its application to color feature extraction from images. Different from the winner-take-all competitive principle held by conventional SOM algorithms, MFD-SOM prevents, to a certain degree, features of non-principal components in the training data from being weakened or lost in the learning process, which is conductive to preserving the diversity of extracted features. Besides, MFD-SOM adopts a new way to update weight vectors of neurons, which helps to reduce the redundancy in features extracted from the principal components. In addition, we apply a linear neighborhood function in the proposed algorithm aiming to improve its performance on color feature extraction. Experimental results of feature extraction on artificial datasets and benchmark image datasets demonstrate the characteristics of the MFD-SOM algorithm.
[ "color feature extraction", "self-organizing map", "non-principal component", "competitive mechanism" ]
[ "P", "P", "P", "M" ]
4RYT8hY
A Motion Planning System for Mobile Robots
In this paper, a motion planning system for a mobile robot is proposed. Path planning tries to find a feasible path for mobile robots to move from a starting node to a target node in an environment with obstacles. A genetic algorithm is used to generate an optimal path by taking the advantage of its strong optimization ability. Mobile robot, obstacle and target localizations are realized by means of camera and image processing. A graphical user interface (GUI) is designed for the motion planning system that allows the user to interact with the robot system and to observe the robot environment. All the software components of the system are written in MATLAB that provides to use non-predefined accessories rather than the robot firmware has, to avoid confusing in C++ libraries of robot's proprietary software, to control the robot in detail and not to re-compile the programs frequently in real-time dynamic operations.
[ "motion planning", "mobile robot", "genetic algorithm" ]
[ "P", "P", "P" ]
3xpupzE
A unified strategy for search and result representation for an online bibliographical catalogue
Purpose - One of the biggest concerns of modem information retrieval systems is reducing the user effort required for manual traversal and filtering of long matching document lists. Thus, the first goal of this research is to propose an improved scheme for representation of search results. Further, it aims to explore the impact of various user information needs on the searching process with the aim of finding a unified searching approach well suited for different query types and retrieval tasks. Design/methodology/approach - The BoW online bibliographical catalogue is based on a hierarchical concept index to which entries are linked. The key idea is that searching in the hierarchical catalogue should take advantage of the catalogue structure and return matching topics from the hierarchy, rather than just a long list of entries. Likewise, when new entries are inserted, a search for relevant topics to which they should be linked is required. Therefore, a similar hierarchical scheme for query-topic matching can be applied for both tasks. Findings - The experiments show that different query types used for the above tasks are best treated by different topic ranking functions. To further examine this phenomenon a user study was conducted, where various statistical weighting factors were incorporated and their impact on the performance for different query types was measured. Finally, it is found that the mixed strategy that applies the most suitable ranking function to each query type yielded a significant increase in precision relative to the baseline and to employing any examined strategy in isolation on the entire set of user queries. 
Originality/value - The main contributions of this paper are: the alternative approach for compact and concise representation of search results, which were implemented in the BoW online bibliographical catalogue; and the unified or mixed strategy for search and result representation applying the most suitable ranking function to each query type, which produced superior results compared to different single-strategy-based approaches.
[ "information retrieval", "online catalogues", "technology led strategy" ]
[ "P", "R", "M" ]
-YhkW8A
robust multiple-phase switched-capacitor dc-dc converter with digital interleaving regulation scheme
An integrated switched-capacitor (SC) DC-DC converter with a digital interleaving regulation scheme is presented. By interleaving the newly-structured charge pump (CP) cells in multiple phases, the input current ripple and output voltage ripple are reduced significantly. The converter exhibits excellent robustness, even when one of the CP cells fails to operate. A fully digital controller is employed with a hysteretic control algorithm. It features dead-beat system stability and fast transient response. Hspice post-layout simulation shows that, with a 1.5 V input power supply, the SC converter accurately provides an adjustable regulated power output in a range of 1.6 to 2.7 V. The maximum output ripple is 40 mV when a full load of 0.54 W is supplied. Transient response of 1.8 ms is observed when the load current switches from half- to full-load (from 100 to 200 mA).
[ "switched-capacitor dc-dc converter", "interleaving regulation" ]
[ "P", "P" ]
d&:Yf6j
Teeth recognition based on multiple attempts in mobile device
Most traditional biometric approaches generally utilize a single image for personal identification. However, these approaches sometimes failed to recognize users in practical environment due to false-detected or undetected subject. Therefore, this paper proposes a novel recognition approach based on multiple frame images that are implemented in mobile devices. The aim of this paper is to improve the recognition accuracy and to reduce computational complexity through multiple attempts. Here, multiple attempts denote that multiple frame images are used in time of recognition procedure. Among sequential frame images, an adequate subject, i.e., teeth image, is chosen by subject selection module which is operated based on differential image entropy. The selected subject is then utilized as a biometric trait of traditional recognition algorithms including PCA, LDA, and EHMM. The performance evaluation of proposed method is performed using two teeth databases constructed by a mobile device. Through experimental results, we confirm that the proposed method exhibits improved recognition accuracy of about 3.64.8%, and offers the advantage of lower computational complexity than traditional biometric approaches.
[ "teeth recognition", "multiple attempts", "mobile device", "subject selection" ]
[ "P", "P", "P", "P" ]
-B1Wx-y
A conceptual approach for the die structure design
A large number of decisions are made during the conceptual design stage which is characterized by a lack of complete geometric information. While existing CAD systems supporting the geometric aspects of design have had little impact at the conceptual design stage. To support the conceptual die design and the top-down design process, a new concept called conceptual assembly modeling framework (CAMF) is presented in this paper. Firstly, the framework employs the zigzag function-symbol mapping to implement the function design of the die. From the easily understood analytical results of the function-symbol mapping matrix, the designer can evaluate the quality of a proposed die concept. Secondly, a new method-logic assembly modeling is proposed using logic components in this framework to satisfy the characteristic of the conceptual die design. Representing shapes and spatial relations in logic can provide a natural, intuitive method of developing complete computer systems for reasoning about die construction design at the conceptual stage. The logic assembly which consists of logic components is an innovative representation that provides a natural link between the function design of the die and the detailed geometric design.
[ "die structure design", "conceptual design", "cad", "logic component", "logic assembly", "zigzag mapping" ]
[ "P", "P", "P", "P", "P", "R" ]
1GT9txV
Approximation algorithm for coloring of dotted interval graphs
Dotted interval graphs were introduced by Aumann et al. [Y. Aumann, M. Lewenstein, O. Melamud, R. Pinter, Z. Yakhini, Dotted interval graphs and high throughput genotyping, in: ACM-SIAM Symposium on Discrete Algorithms. SODA 2005, pp. 339-348] as a generalization of interval graphs. The problem of coloring these graphs found application in high-throughput genotyping. Jiang [M. Jiang, Approximating minimum coloring and maximum independent set in dotted interval graphs, Information Processing Letters 98 (2006) 29-33] improves the approximation ratio of Aumann et al. [Y. Aumann, M. Lewenstein, O. Melamud, R. Pinter, Z. Yakhini, Dotted interval graphs and high throughput genotyping, in: ACM-SIAM Symposium on Discrete Algorithms, SODA 2005, pp. 339-348]. In this work we improve the approximation ratio of Jiang [M. Jiang, Approximating minimum coloring and maximum independent set in dotted interval graphs, Information Processing Letters 98 (2006) 29-33] and Aumarm et al. [Y. Aumann, M. Lewenstein, O. Melamud, R. Pinter, Z. Yakhini, Dotted interval graphs and high throughput genotyping, in: ACM-SIAM Symposium on Discrete Algorithms, SODA 2005, pp. 339-348]. In the exposition we develop a generalization of the problem of finding the maximum number of non-attacking queens on a triangle. (c) 2008 Elsevier B.V. All rights reserved.
[ "approximation algorithms", "dotted interval graph", "minimum coloring", "intersection graph", "microsatellite genotyping" ]
[ "P", "P", "P", "M", "M" ]
4yCgxVU
Scalable visibility color map construction in spatial databases
Recent advances in 3D modeling provide us with real 3D datasets to answer queries, such as What is the best position for a new billboard? and Which hotel room has the best view? in the presence of obstacles. These applications require measuring and differentiating the visibility of an object (target) from different viewpoints in a dataspace, e.g., a billboard may be seen from many points but is readable only from a few points closer to it. In this paper, we formulate the above problem of quantifying the visibility of (from) a target object from (of) the surrounding area with a visibility color map (VCM). A VCM is essentially defined as a surface color map of the space, where each viewpoint of the space is assigned a color value that denotes the visibility measure of the target from that viewpoint. Measuring the visibility of a target even from a single viewpoint is an expensive operation, as we need to consider factors such as distance, angle, and obstacles between the viewpoint and the target. Hence, a straightforward approach to construct the VCM that requires visibility computation for every viewpoint of the surrounding space of the target is prohibitively expensive in terms of both I/Os and computation, especially for a real dataset comprising thousands of obstacles. We propose an efficient approach to compute the VCM based on a key property of the human vision that eliminates the necessity for computing the visibility for a large number of viewpoints of the space. To further reduce the computational overhead, we propose two approximations; namely, minimum bounding rectangle and tangential approaches with guaranteed error bounds. Our extensive experiments demonstrate the effectiveness and efficiency of our solutions to construct the VCM for real 2D and 3D datasets.
[ "visibility color map", "spatial databases", "query processing", "three-dimensional (3d) objects" ]
[ "P", "P", "M", "M" ]
1q9J-9Y
Toward a Neurogenetic Theory of Neuroticism
Recent advances in neuroscience and molecular biology have begun to identify neural and genetic correlates of complex traits. Future theories of personality need to integrate these data across the behavioral, neural, and genetic level of analysis and further explain the underlying epigenetic processes by which genes and environmental variables interact to shape the structure and function of neural circuitry. In this chapter, I will review some of the work that has been conducted at the cognitive, neural, and molecular genetic level with respect to one specific personality traitneuroticism. I will focus particularly on individual differences with respect to memory, self-reference, perception, and attention during processing of emotional stimuli and the significance of gene-by-environment interactions. This chapter is intended to serve as a tutorial bridge for psychologists who may be intrigued by molecular genetics and for molecular biologists who may be curious about how to apply their research to the study of personality.
[ "neuroticism", "complex traits", "personality" ]
[ "P", "P", "P" ]
1noyXgy
Technological means of communication and collaboration in archives and records management
This study explores the international collaboration efforts of archivists and records managers starting with the hypothesis that Internet technologies have had a significant impact on both national and international communication for this previously conservative group. The use and importance of mailing lists for this purpose is studied in detail. A quantitative analysis looks globally at the numbers of lists in these fields and the numbers of subscribers. A qualitative analysis of list content is also described. The study finds that archivists and records managers have now created more than 140 mailing lists related to their profession and have been contributing to these lists actively. It also 'estimates' that about half of the profession follows a list relating to their work and that archivists seem to like lists more than records managers do. The study concludes that mailing lists can be seen as a virtual college binding these groups together to develop the field.
[ "records management", "internet", "mailing lists", "archives administration", "forums" ]
[ "P", "P", "P", "M", "U" ]
4eTScBe
Privacy Preserving Decision Tree Learning Using Unrealized Data Sets
Privacy preservation is important for machine learning and data mining, but measures designed to protect private information often result in a trade-off: reduced utility of the training samples. This paper introduces a privacy preserving approach that can be applied to decision tree learning, without concomitant loss of accuracy. It describes an approach to the preservation of the privacy of collected data samples in cases where information from the sample database has been partially lost. This approach converts the original sample data sets into a group of unreal data sets, from which the original samples cannot be reconstructed without the entire group of unreal data sets. Meanwhile, an accurate decision tree can be built directly from those unreal data sets. This novel approach can be applied directly to the data storage as soon as the first sample is collected. The approach is compatible with other privacy preserving approaches, such as cryptography, for extra protection.
[ "machine learning", "data mining", "classification", "security and privacy protection" ]
[ "P", "P", "U", "M" ]
-sFwoCE
Selected topics on assignment problems ?
We survey recent developments in the fields of bipartite matchings, linear sum assignment and bottleneck assignment problems and applications, multidimensional assignment problems, quadratic assignment problems, in particular lower bounds, special cases and asymptotic results, biquadratic and communication assignment problems.
[ "90c27", "90b80", "68q25", "90c05" ]
[ "U", "U", "U", "U" ]
-8W3HDg
Fast parameter-free region growing segmentation with application to surgical planning
In this paper, we propose a self-assessed adaptive region growing segmentation algorithm. In the context of an experimental virtual-reality surgical planning software platform, our method successfully delineates main tissues relevant for reconstructive surgery, such as fat, muscle, and bone. We rely on a self-tuning approach to deal with a great variety of imaging conditions requiring limited user intervention (one seed). The detection of the optimal parameters is managed internally using a measure of the varying contrast of the growing region, and the stopping criterion is adapted to the noise level in the dataset thanks to the sampling strategy used for the assessment function. Sampling is referred to the statistics of a neighborhood around the seed(s), so that the sampling period becomes greater when images are noisier, resulting in the acquisition of a lower frequency version of the contrast function. Validation is provided for synthetic images, as well as real CT datasets. For the CT test images, validation is referred to manual delineations for 10 cases and to subjective assessment for another 35. High values of sensitivity and specificity, as well as Dice's coefficient and Jaccard's index on one hand, and satisfactory subjective evaluation on the other hand, prove the robustness of our contrast-based measure, even suggesting suitability for calibration of other region-based segmentation algorithms.
[ "region growing", "segmentation", "surgical planning", "ct", "virtual reality" ]
[ "P", "P", "P", "P", "U" ]
1yvp9qr
Accuracy and efficiency in computing electrostatic potential for an ion channel model in layered dielectric/electrolyte media
This paper will investigate the numerical accuracy and efficiency in computing the electrostatic potential for a finite-height cylinder, used in an explicit/implicit hybrid solvation model for ion channel and embedded in a layered dielectric/electrolyte medium representing a biological membrane and ionic solvents. A charge locating inside the cylinder cavity, where ion channel proteins and ions are given explicit atomistic representations, will be influenced by the polarization field of the surrounding implicit dielectric/electrolyte medium. Two numerical techniques, a specially designed boundary integral equation method and an image charge method, will be investigated and compared in terms of accuracy and efficiency for computing the electrostatic potential. The boundary integral equation method based on the three-dimensional layered Green?s functions provides a highly accurate solution suitable for producing a benchmark reference solution, while the image charge method is found to give reasonable accuracy and highly efficient and viable to use the fast multipole method for interactions of a large number of charges in the atomistic region of the hybrid solvation model.
[ "ion channels", "image charge method", "poissonboltzmann equation", "layered electrolytes and dielectrics", "the explicit/implicit hybrid solvation model" ]
[ "P", "P", "M", "R", "R" ]
-qiayxt
The social sharing of emotion (SSE) in online social networks: A case study in Live Journal
Using content analysis, we gauge the occurrence of social sharing of emotion (SSE) in Live Journal. We present a theoretical model of a three-cycle process for online SSE. A large part of emotional blog posts showed full initiation of social sharing. Affective feedback provided empathy, emotional support and admiration. This study is the first one to empirically assess the occurrence and structure of online SSE.
[ "social sharing of emotion", "emotion", "blog", "online communication", "social networking sites", "social interaction" ]
[ "P", "P", "P", "M", "M", "M" ]
6FL25vk
Non-testing approaches under REACH - help or hindrance? Perspectives from a practitioner within industry
Legislation such as REACH strongly advocates the use of alternative approaches including invitro, (Q)SARs, and chemical categories as a means to satisfy the information requirements for risk assessment. One of the most promising alternative approaches is that of chemical categories, where the underlying hypothesis is that the compounds within the category are similar and therefore should have similar biological activities. The challenge lies in characterizing the chemicals, understanding the mode/mechanism of action for the activity of interest and deriving a way of relating these together to form inferences about the likely activity outcomes. (Q)SARs are underpinned by the same hypothesis but are packaged in a more formalized manner. Since the publication of the White Paper for REACH, there have been a number of efforts aimed at developing tools, approaches and techniques for (Q)SARs and read-across for regulatory purposes. While technical guidance is available, there still remains little practical guidance about how these approaches can or should be applied in either the evaluation of existing (Q)SARs or in the formation of robust categories. Here we provide a perspective of how some of these approaches have been utilized to address our in-house REACH requirements.
[ "reach", "(q)sar", "chemical category", "qmrf", "qprf" ]
[ "P", "P", "P", "U", "U" ]
23&gD-7
Realtime performance analysis of different combinations of fuzzyPID and bias controllers for a two degree of freedom electrohydraulic parallel manipulator
Development of a 2 DOF electrohydraulic motion simulator as a parallel manipulator. Control of heave, pitch and combined heave and pitch motion of the parallel manipulator. Design of PID, fuzzyPID, self-tuning fuzzyPID and self-tuning fuzzyPID with bias controllers. Use of different combinations of fuzzyPID and bias controllers for study of real time control performance. Best control response found for the self-tuning fuzzyPID with bias controller.
[ "parallel manipulator", "electrohydraulic systems", "real-time control", "fuzzy control" ]
[ "P", "M", "M", "M" ]

KP20k Benchmark Dataset for Keyphrase Generation

About

KP20k is a dataset for benchmarking keyphrase extraction and generation models. The data is composed of 570 809 abstracts and their associated titles from scientific articles.

Details about the dataset can be found in the original paper:

  • Meng et al., 2017. Deep Keyphrase Generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 582–592.

Reference (author-assigned) keyphrases are also categorized under the PRMU (Present-Reordered-Mixed-Unseen) scheme as proposed in the following paper:

  • Boudin and Gallina, 2021. Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2021).

Text pre-processing (tokenization) is carried out using spacy (en_core_web_sm model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Porter's stemmer implementation provided in nltk) is applied before reference keyphrases are matched against the source text.
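The stem-then-match step can be sketched in plain Python. This is an illustrative sketch, not the dataset's actual tooling: a whitespace tokenizer stands in for spacy (which conveniently also keeps hyphenated words whole), and `stem_tokens` / `is_present` are hypothetical helper names:

```python
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()

def stem_tokens(text):
    # Whitespace tokenization keeps hyphenated words (e.g. "graph-based") whole,
    # mirroring the special rule applied to the spacy tokenizer.
    return [stemmer.stem(tok) for tok in text.lower().split()]

def is_present(keyphrase, text):
    # A keyphrase counts as present when its stemmed tokens occur as a
    # contiguous subsequence of the stemmed source text.
    kp, toks = stem_tokens(keyphrase), stem_tokens(text)
    n = len(kp)
    return any(toks[i:i + n] == kp for i in range(len(toks) - n + 1))
```

Stemming both sides means morphological variants still match, e.g. "extraction" in a keyphrase matches "extracting" in the abstract.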

Content

The dataset is divided into the following three splits:

Split # documents # keyphrases by document (average) % Present % Reordered % Mixed % Unseen
Train 530 809 5.29 58.19 10.93 17.36 13.52
Test 20 000 5.28 58.40 10.84 17.20 13.56
Validation 20 000 5.27 58.20 10.94 17.26 13.61

The following data fields are available:

  • id: unique identifier of the document. NB: ids were not part of the original dataset; they were generated using the python module shortuuid (https://pypi.org/project/shortuuid/).
  • title: title of the document.
  • abstract: abstract of the document.
  • keyphrases: list of the author-assigned keyphrases.
  • prmu: list of Present-Reordered-Mixed-Unseen categories for reference keyphrases.

NB: the present keyphrases (represented by the "P" label in the PRMU column) are sorted by their order of appearance in the text (title + abstract).
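Under one plausible reading of the four PRMU labels, category assignment can be sketched as follows. This is a simplified illustration: tokenization is plain whitespace splitting, stemming is omitted (the real pipeline stems both sides with Porter's stemmer first), and `prmu_category` is a hypothetical name:

```python
def prmu_category(keyphrase, text):
    # Simplified: lowercase whitespace tokens; the actual pipeline applies
    # Porter stemming to both sides before matching.
    kp, toks = keyphrase.lower().split(), text.lower().split()
    n = len(kp)
    # Present: the keyphrase occurs as a contiguous token sequence.
    if any(toks[i:i + n] == kp for i in range(len(toks) - n + 1)):
        return "P"
    hits = sum(w in toks for w in kp)
    if hits == n:
        return "R"  # Reordered: every word appears, but not contiguously in order
    return "M" if hits else "U"  # Mixed: some words appear; Unseen: none do
```

For example, against the text "deep keyphrase generation for scientific articles", the keyphrase "keyphrase generation" is Present, "generation keyphrase" is Reordered, and "keyphrase extraction" is Mixed.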
