abstract | authors | title | __index_level_0__
---|---|---|---|
The integrated production and transportation scheduling problem (PTSP) with capacity constraints is common in many industries. An optimal solution to PTSP requires one to simultaneously solve the production scheduling and the transportation routing problems, which requires excessive computational time, even for relatively small problems. In this study, we consider a variation of PTSP that involves a short shelf life product; hence, there is no inventory of the product in process. Once a lot of the product is produced, it must be transported with nonnegligible transportation time directly to various customer sites within its limited lifespan. The objective is to determine the minimum time required to complete producing and delivering the product to meet the demand of a given set of customers over a wide geographic region. This problem is NP-hard in the strong sense. We analyze the properties of this problem, develop lower bounds on the optimal solution, and propose a two-phase heuristic based on the analysis. The first phase uses either a genetic or a memetic algorithm to select a locally optimal permutation of the given set of customers; the second phase partitions the customer sequence and then uses the Gilmore-Gomory algorithm to order the subsequences of customers to form the integrated schedule. Empirical observations on the performance of this heuristic are reported. | ['H. Neil Geismar', 'Gilbert Laporte', 'Lei Lei', 'Chelliah Sriskandarajah'] | The Integrated Production and Transportation Scheduling Problem for a Product with a Short Lifespan | 145,888 |
The paper deals with the problem of reconstructing the tree-like topological structure of a network of linear dynamical systems. A distance function is defined in order to evaluate the “closeness” of two processes and some useful mathematical properties are derived. Theoretical results to guarantee the correctness of the identification procedure for networked linear systems characterized by a tree topology are provided as well. The paper also suggests the approximation of a complex connected network with a tree in order to detect the most meaningful interconnections. The application of the techniques to the analysis of an actual complex network, i.e., to high frequency time series of the stock market, is extensively illustrated. | ['Donatello Materassi', 'Giacomo Innocenti'] | Topological identification in networks of dynamical systems | 540,735 |
Abstract The utility of a Kolmogorov complexity method in combinatorial theory is demonstrated by several examples. | ['Ming Li', 'Paul M. B. Vitányi'] | Kolmogorov complexity arguments in combinatorics | 248,698 |
We formally study two privacy-type properties for e-auction protocols: bidding-price-secrecy and receipt-freeness. These properties are formalised as observational equivalences in the applied pi calculus. We analyse two receipt-free auction protocols: one proposed by Abe and Suzuki in 2002 (AS02) and the other by Howlader et al. in 2014 (HRM14). Bidding-price-secrecy of the AS02 protocol is verified using the automatic verifier ProVerif, whereas receipt-freeness of the two protocols, as well as bidding-price-secrecy of the HRM14 protocol, are proved manually. | ['Naipeng Dong', 'Hugo Jonker', 'Jun Pang'] | Formal modelling and analysis of receipt-free auction protocols in applied pi | 905,347 |
The performance features of a massively multi-agent system (MMAS) when applying the contract net protocol (CNP) are examined. The recent growth in the volume of e-commerce on the Internet is increasing the opportunities for coordinated transactions by agents, concurrently occurring everywhere. Because of limited CPU and network resources, running many interactive tasks among agents can lower the quality or efficiency of MMASs. Although CNP is a widely used negotiation protocol that can allocate tasks and resources to appropriate agents, it is unclear how effectively CNP works in an MMAS where thousands of agents work together and interfere with each other. The performance of CNP in such an MMAS, especially the overall efficiency and the reliability of promised completion times, is investigated by using an MAS simulation environment. The results show that only manager-side control of CNP can improve performance in an MMAS. | ['Toshiharu Sugawara', 'Toshio Hirotsu', 'Satoshi Kurihara', 'Kensuke Fukuda'] | Performance variation due to interference among a large number of self-interested agents | 87,698 |
A composer’s practice | ['Pauline Oliveros', 'Ted Krueger'] | A composer’s practice | 969,456 |
This paper describes controlling a robot's gaze, which relates to the smoothness of turn-taking in communication. We considered the role of gaze in dialogues between human beings and examined it through simulation and our humanoid. We also analyzed the features of gaze movement in dialogues among multiple persons and confirmed that controlling gaze is effective for confirming the communication channel by implementing it on the humanoid. | ['Hideaki Kikuchi', 'Masao Yokoyama', 'Keiichiro Hoashi', 'Yasuaki Hidaki', 'Tetsunori Kobayashi', 'Katsuhiko Shirai'] | Controlling gaze of humanoid in communication with human | 244,527 |
We investigate the impact of dynamic topology reconfiguration on the complexity of verification problems for models of protocols with broadcast communication. We first consider reachability of a configuration with a given set of control states and show that parameterized verification is decidable with polynomial time complexity. We then move to richer queries and show how the complexity changes when considering properties with negation or cardinality constraints. | ['Giorgio Delzanno', 'Arnaud Sangnier', 'Riccardo Traverso', 'Gianluigi Zavattaro'] | On the Complexity of Parameterized Reachability in Reconfigurable Broadcast Networks | 670,665 |
We propose a novel routing algorithm for reverse proxy servers, called adaptive load balancing content address hashing (AH), and evaluate the performance of the proposed routing algorithm compared with that of the content address hashing (SH) and the hash and slide (HS) routing algorithms. The proposed AH routing algorithm calculates the popularity of pages in the load balancer using an LFU caching technique and periodically makes a popularity list. Using this popularity list, the proposed routing algorithm selects a reverse proxy server as follows. When the requested page appears in the popularity list, the request is routed according to the round robin method; otherwise, it is routed according to the content address hashing method. We evaluate and compare the AH, SH and HS routing algorithms by simulation experiments from the viewpoints of load balancing, consumed cache space and cache hit rate. Simulation experiments show that the proposed AH routing algorithm achieves almost the same degree of load balancing as the HS algorithm and the same cache hit rate as the SH algorithm, for reverse proxy servers in various web site environments. | ['Toyofumi Takenaka', 'Satosi Kato', 'Hidetosi Okamoto'] | Adaptive load balancing content address hashing routing for reverse proxy servers | 390,406 |
This paper presents an LLVM+QEMU (LnQ) framework for building high performance and retargetable binary translators with existing compiler modules. Dynamic binary translation is a just-in-time (JIT) compilation from binary code of a guest ISA to binary code of a host ISA. The quality of translated code is critical to the performance of a dynamic binary translator, which translates code between different ISAs, so the translated code is often carefully hand-optimized. As a result, it takes tremendous implementation effort for software engineers to port an existing dynamic binary translator to a new host ISA. The goal of the LnQ framework is to enable the process of building high performance and retargetable dynamic binary translators with existing optimizers and code generation backends. The LnQ framework consists of a translation module and an emulation engine. We design the translation module based on the LLVM compiler infrastructure, and use QEMU as our emulation engine. We implement an x86-to-x86_64 dynamic binary translator with our LnQ framework to show that the framework is retargetable, and conduct experiments on SPEC CPU2006 benchmarks to show that the resulting binary translator has good performance. The experiment results indicate that the x86-to-x86_64 LnQ translator achieves an average speedup of 1.62X in integer benchmarks, and 3.02X in floating point benchmarks, over QEMU. | ['Chun Chen Hsu', 'Pangfeng Liu', 'Chien Min Wang', 'Jan Jan Wu', 'Ding Yong Hong', 'Pen Chung Yew', 'Wei Chung Hsu'] | LnQ: Building High Performance Dynamic Binary Translators with Existing Compiler Backends | 490,644 |
Mobile social networks allow users to access, publish, and share information with friends, family, or groups of friends by using mobile devices. Location is one kind of information frequently shared. By using location-sharing on a social network, users allow service providers to register this information and use it to offer products and services based on the geographic area. Many users consider offers a personal gain, but for others, it causes concerns with security and privacy. These concerns can eliminate the use of mobile social networks. This paper presents a model of a mobile social network with a privacy guarantee. The model enables the user to set rules determining when, where, and with whom (friends or a group of friends) location information will be shared. Moreover, the model provides levels of privacy with anonymity techniques which hide the user's high-accuracy current location before it is shared. To validate the model, a mobile social network prototype, MSNPrivacy (Mobile Social Network with Privacy), was developed for Android. Tests were carried out aiming to measure MSNPrivacy's performance. The results verify that the rules and privacy levels in place provide an acceptable delay, and the model can be applied in real applications. | ['Tiago Antonio', 'Sergio Donizetti Zorzo'] | Location-sharing Model in Mobile Social Networks with Privacy Guarantee | 680,288 |
Powered transfemoral prostheses are robotic systems that aim to restore the mobility of transfemoral amputees by mimicking the functionalities of healthy human legs. The advantage of using a powered prosthetic device is the enhanced performance on various terrains. One of the most frequent terrains found during daily locomotion (other than flat ground) is a surface with a slope. In this work, we introduce a framework to generate upslope walking gaits automatically utilizing an online algorithmic formulation. This approach is inspired by analyzing human gait characteristics during upslope walking. In particular, it is found that the ankle and knee trajectories of upslope walking share a similar pattern with flat ground walking during the middle section (from 20% to 80%) of one step. This observation motivates us to propose an approach of blending the first portion of nominal flat ground gaits with a set of cubic splines to achieve upslope gaits. Importantly, parameters of these cubic splines are solved using an online optimization, which gives the users the ability to traverse different terrains without using any intention detection algorithm. For the last portion of a step, an impedance controller with low gains is considered upon the contact of the prosthetic leg with the ground, which allows the users to step onto unknown terrains. The proposed framework is validated on a custom transfemoral prosthesis AMPRO II, demonstrating automatic motion switches between flat ground and upslope walking. | ['Victor Paredes', 'Woolim Hong', 'Shawanee Patrick', 'Pilwon Hur'] | Upslope walking with transfemoral prosthesis using optimization based spline generation | 849,091 |
Impact of Humanoid Social Robots on Treatment of a Pair of Iranian Autistic Twins | ['Alireza Taheri', 'Minoo Alemi', 'Ali Meghdari', 'Hamidreza Pouretemad', 'Nasim Mahboub Basiri', 'Pegah Poorgoldooz'] | Impact of Humanoid Social Robots on Treatment of a Pair of Iranian Autistic Twins | 628,607 |
We consider Kalman filtering in a network with packet losses, and use a two state Markov chain to describe the normal operating condition of packet delivery and transmission failure. Based on the sojourn time of each visit to the failure or successful packet reception state, we analyze the behavior of the estimation error covariance matrix and introduce the notion of peak covariance, as an estimate of filtering deterioration caused by packet losses, which describes the upper envelope of the sequence of error covariance matrices {P_t, t ≥ 1} for the case of an unstable scalar model. We give sufficient conditions for the stability of the peak covariance process in the general vector case, and obtain a sufficient and necessary condition for the scalar case. Finally, the relationship between two different types of stability notions is discussed. | ['Minyi Huang', 'Subhrakanti Dey'] | Stability of Kalman filtering with Markovian packet losses | 350,996 |
Patient Adapted Augmented Reality System for Real-Time Echocardiographic Applications | ['Gabriel Kiss', 'Cameron Lowell Palmer', 'Hans Torp'] | Patient Adapted Augmented Reality System for Real-Time Echocardiographic Applications | 670,038 |
Fractional-order filter design examples, realized through the substitution of fractional-order capacitors and inductors by appropriate active emulators, are presented in this paper. The implementation of emulators has been achieved using Current Feedback Operational Amplifiers (CFOAs) as active element. Also, the realization of the required fractional-order differentiation/integration, for emulating the fractional-order elements, is performed through the employment of an integer-order multi-feedback topology. An important benefit, from the design flexibility point of view, is that the same topology could be used for emulating both fractional-order capacitor and inductor, and this is achieved through an appropriate selection of the time-constants and gain factors. The behavior of the realized filters is evaluated using the commercially available AD844 discrete CFOA. | ['Ilias Dimeas', 'Georgia Tsirimokou', 'Costas Psychalinos', 'Ahmed S. Elwakil'] | Experimental verification of filters using fractional-order capacitor and inductor emulators | 951,727 |
Report on the 1st International Workshop on the Interrelations between Requirements Engineering and Business Process Management (REBPM). | ['Robert Heinrich', 'Kathrin Kirchner', 'Rüdiger Weißbach'] | Report on the 1st International Workshop on the Interrelations between Requirements Engineering and Business Process Management (REBPM). | 746,223 |
Administration of access control policies is a difficult task, especially in large organizations. We consider the problem of detecting whether administrative actions can result in policies where some security goals are compromised. In particular, we are interested in problems generated by modifications --- such as adding/deleting elements to/from the set of possible users or permissions --- of policies specified as term-rewrite systems. We propose to use rewriting techniques to compare the behaviors of the modified version and the original version of the policy. More precisely, we use narrowing to compute counter-examples to the equivalence of rewrite-based policies. We prove that our technique provides a sound and complete way to recursively enumerate the set of counter-examples, even when this set is not finite, or when a mistake of the administrator makes one or both systems non-terminating. | ['Clara Bertolissi', 'Jean-Marc Talbot', 'Didier Villevalois'] | Analysis of access control policy updates through narrowing | 875,158 |
Here, we present LNCipedia (http://www.lncipedia.org), a novel database for human long non-coding RNA (lncRNA) transcripts and genes. LncRNAs constitute a large and diverse class of non-coding RNA genes. Although several lncRNAs have been functionally annotated, the majority remains to be characterized. Different high-throughput methods to identify new lncRNAs (including RNA sequencing and annotation of chromatin-state maps) have been applied in various studies resulting in multiple unrelated lncRNA data sets. LNCipedia offers 21 488 annotated human lncRNA transcripts obtained from different sources. In addition to basic transcript information and gene structure, several statistics are determined for each entry in the database, such as secondary structure information, protein coding potential and microRNA binding sites. Our analyses suggest that, much like microRNAs, many lncRNAs have a significant secondary structure, in line with their presumed association with proteins or protein complexes. Available literature on specific lncRNAs is linked, and users or authors can submit articles through a web interface. Protein coding potential is assessed by two different prediction algorithms: Coding Potential Calculator and HMMER. In addition, a novel strategy has been integrated for detecting potentially coding lncRNAs by automatically re-analysing the large body of publicly available mass spectrometry data in the PRIDE database. LNCipedia is publicly available and allows users to query and download lncRNA sequences and structures based on different search criteria. The database may serve as a resource to initiate small- and large-scale lncRNA studies. As an example, the LNCipedia content was used to develop a custom microarray for expression profiling of all available lncRNAs. | ['Pieter-Jan Volders', 'Kenny Helsens', 'Xiaowei Wang', 'Björn Menten', 'Lennart Martens', 'Kris Gevaert', 'Jo Vandesompele', 'Pieter Mestdagh'] | LNCipedia: a database for annotated human lncRNA transcript sequences and structures | 476,509 |
When solving large-scale integer programming (IP) models, there are the conflicting goals of solution quality and solution time. Solving realistic-size instances of many problems to optimality is still beyond the capability of state-of-the-art solvers. However, by reducing the size of the solution space, such as by fixing variables, good primal solutions frequently can be found quickly. Methods for choosing which variables to fix include those used within linear programming-based branch-and-bound algorithms such as local branching (Fischetti and Lodi (2003)) and relaxation induced neighborhood search (Danna et al. (2005)). These techniques use information from the solution to the active linear program and the best known feasible solution to define a small integer program, which is then optimized. As these techniques do not rely on problem structure to choose which variables to fix, they can be applied to any integer program and are available in commercial solvers such as CPLEX and Gurobi. However, models solved in real-world settings often have a specific structure that can be exploited in a local search scheme to define small integer programs whose solutions are likely to produce high-quality solutions to the original problem. Hewitt et al. (2010) develop such a scheme for the multi-commodity fixed-charge network flow problem. Combining exact and heuristic search techniques by embedding the solution of small integer programs in a heuristic search has been studied recently, see for example, De Franceschi et al. (2006), Archetti et al. (2008), and Savelsbergh and Song (2008). Among these, only the approach of Hewitt et al. (2010) produces a dual bound and performance guarantee for the solutions produced. None of the approaches are guaranteed to converge to an optimal solution. Here we introduce a new approach that uses an extended formulation of the problem whose solution automatically and dynamically yields a small, restricted integer program to be solved next. The extended formulation is solved with a branch-and-price algorithm, which, when run to completion, produces a provably optimal solution. (Talk presented at ISCO 2012, 2nd International Symposium on Combinatorial Optimization, Athens, Greece. The full paper, entitled "Branch-and-Price Guided Search for Integer Programs with an Application to the Multicommodity Fixed Charge Network Flow Problem", is to be published in the INFORMS Journal on Computing, 2012.) | ['Mike Hewitt', 'George L. Nemhauser', 'Martin W. P. Savelsbergh'] | Branch-and-Price Guided Search - (Extended Abstract). | 738,315 |
Efficient key distribution is an important problem for secure group communications. The communication and storage complexity of multicast key distribution problem has been studied extensively. In this paper, we propose a new multicast key distribution scheme whose computation complexity is significantly reduced. Instead of using conventional encryption algorithms, the scheme employs MDS codes, a class of error control codes, to distribute multicast key dynamically. This scheme drastically reduces the computation load of each group member compared to existing schemes employing traditional encryption algorithms. Such a scheme is desirable for many wireless applications where portable devices or sensors need to reduce their computation as much as possible due to battery power limitations. Easily combined with any key-tree-based schemes, this scheme provides much lower computation complexity while maintaining low and balanced communication complexity and storage complexity for secure dynamic multicast key distribution. | ['Lihao Xu', 'Cheng Huang'] | Computation-Efficient Multicast Key Distribution | 462,911 |
Open Education: A Growing, High Impact Area for Linked Open Data. | ["Mathieu d'Aquin", 'Stefan Dietze'] | Open Education: A Growing, High Impact Area for Linked Open Data. | 755,022 |
This paper proposes a low-power, small chip area design technique for flash analog-to-digital converters (A/D converters). The proposed technique reduces the power consumption of flash A/D converters by reducing the number of comparators by 50%. Because the output signals of flash A/D converters are thermometer code, only the few comparators whose reference voltages are around the input signal are significant, and the other comparators can be removed. A novel track and hold circuit (T/H circuit) which can exchange its two balanced output signals is introduced. Thanks to the T/H circuit, the required input range of the comparator is limited to half of that of a conventional one. The proposed A/D converter using the proposed T/H circuit can achieve the same accuracy as the conventional one. The proposed technique is applied to the realization of a 6-bit 528 Msamples/s A/D converter. Its power consumption is evaluated by HSPICE simulations. It is confirmed that the proposed technique can save 34% of power consumption compared with the conventional one. | ['Takahide Sato', 'Shigetaka Takagi', 'Nobuo Fujii'] | Low-power design technique for flash A/D converters based on reduction of the number of comparators | 336,876 |
We propose a new method for the large-scale collection and analysis of drawings by using a mobile game specifically designed to collect such data. Analyzing this crowdsourced drawing database, we build a spatially varying model of artistic consensus at the stroke level. We then present a surprisingly simple stroke-correction method which uses our artistic consensus model to improve strokes in real-time. Importantly, our auto-corrections run interactively and appear nearly invisible to the user while seamlessly preserving artistic intent. Closing the loop, the game itself serves as a platform for large-scale evaluation of the effectiveness of our stroke correction algorithm. | ['Alex Limpaecher', 'Nicolas Feltman', 'Adrien Treuille', 'Michael F. Cohen'] | Real-time drawing assistance through crowdsourcing | 508,664 |
Abstract This article is concerned with the problem of improving software products and investigates how to base that process on solid empirical foundations. Our key contribution is a contextual method that provides a means of identifying new features to support discovered and currently unsupported ways of working and a means of evaluating the usefulness of proposed features. Standard methods of discovery and evaluation, such as interviews and usability testing, gather some of the necessary data but fall short of covering important aspects. The shortcomings of these approaches are overcome by applying an integrated and iterative method for collecting and interpreting data about product usage in context. This article demonstrates its effectiveness when applied to the discovery and evaluation of new features for standard Web clients. | ['Rachel Jones', 'Natasa Milic-Frayling', 'Kerry Rodden', 'Alan F. Blackwell'] | Contextual Method for the Redesign of Existing Software Products | 454,945 |
This paper introduces a new concept of information that can exist in a mobile environment with no fixed infrastructure or centralized servers, which we call Hovering Information. This information is capable of staying attached to a specific geographical point, hovering from one device to another in order to survive, or even moving from one place to another as defined by its creator or other factors. It has many potential applications. In this paper we describe two main scenarios: disaster areas and the tagged world, discussing related issues such as persistency and reliability, distribution, consistency and security. | ['Alfredo Villalba', 'Dimitri Konstantas'] | Towards hovering information | 193,264 |
In this paper, we consider a supply chain planning problem for a single manufacturer with a corporate social responsibility (CSR) investment decision under uncertain demand. The CSR activity is modeled as an investment in customers. In the mathematical model, we assume that the average demand increases as the CSR investment increases. The objective function is the total profit including the piecewise linear investment costs. The supply chain planning problem is formulated as a mixed integer nonlinear programming problem. An efficient solution procedure based on Lagrangian relaxation is developed. The effectiveness of the proposed method is confirmed through computational experiments. | ['Takuya Aoyama', 'Tatsushi Nishi'] | A solution procedure based on Lagrangian relaxation for supply chain planning problem with CSR investment | 611,970 |
Two main aspects in hardware/software co-design are hardware/software partitioning and co-synthesis. Most co-design approaches work only on one of these problems. In this paper, an approach coupling hardware/software partitioning and co-synthesis is presented, working fully automatically. The techniques have been integrated in the co-design tool COOL1 supporting the complete design flow from system specification to board-level implementation for multi-processor and multi-ASIC target architectures for data-flow dominated applications. | ['Ralf Niemann', 'Peter Marwedel'] | Synthesis of communicating controllers for concurrent hardware/software systems | 268,386 |
We consider energy-efficient time synchronization in a wireless sensor network where a head node is equipped with a powerful processor and supplied power from outlet, and sensor nodes are limited in processing and battery-powered. It is this asymmetry that our study focuses on; unlike most existing schemes to save the power of all network nodes, we concentrate on battery-powered sensor nodes in minimizing energy consumption for time synchronization. We present a time synchronization scheme based on asynchronous source clock frequency recovery and reverse two-way message exchanges combined with measurement data report messages, where we minimize the number of message transmissions from sensor nodes while achieving sub-microsecond time synchronization accuracy through propagation delay compensation. We carry out the performance analysis of the estimation of both measurement time and clock frequency with lower bounds for the latter. Simulation results verify that the proposed scheme outperforms the schemes based on conventional two-way message exchanges with and without clock frequency recovery in terms of the accuracy of measurement time estimation and the number of message transmissions and receptions at sensor nodes as an indirect measure of energy efficiency. | ['Kyeong Soo Kim', 'Sanghyuk Lee', 'Eng Gee Lim'] | Energy-Efficient Time Synchronization Based on Asynchronous Source Clock Frequency Recovery and Reverse Two-Way Message Exchanges in Wireless Sensor Networks | 648,488 |
A fractional diffusion equation with advection term is rigorously derived from a kinetic transport model with a linear turning operator, featuring a fat-tailed equilibrium distribution and a small directional bias due to a given vector field. The analysis is based on bounds derived by relative entropy inequalities and on two recently developed approaches for the macroscopic limit: a Fourier--Laplace transform method for spatially homogeneous data and the so called moment method, based on a modified test function. | ['Pedro Aceves-Sanchez', 'Christian Schmeiser'] | Fractional-diffusion-advection limit of a kinetic model | 635,942 |
A low-cost concatenation-based speech synthesis system for German is described which combines the advantage of minimal memory requirements with good intelligibility and high segmental and prosodic acceptability. This is achieved by the multiple use of "microsegments", stretches of speech signal varying in length from demiphone to phone size. All prosodic structuring is carried out in the time domain. | ['Ralf Benzmüller', 'William J. Barry'] | Microsegment synthesis-economic principles in a low-cost solution | 373,701 |
comp-i: A System for Visual Exploration and Editing of MIDI Datasets | ['Reiko Miyazaki', 'Issei Fujishiro', 'Rumi Hiraga'] | comp-i: A System for Visual Exploration and Editing of MIDI Datasets | 790,603 |
Markov Decision Processes (MDPs) are a formulation for optimization problems in sequential decision making. Solving MDPs often requires implementing a simulator for optimization algorithms to invoke when updating decision making rules known as policies. The combination of simulator and optimizer are subject to failures of specification, implementation, integration, and optimization that may produce invalid policies. We present these failures as queries for a visual analytic system (MDPVIS). MDPVIS addresses three visualization research gaps. First, the data acquisition gap is addressed through a general simulator-visualization interface. Second, the data analysis gap is addressed through a generalized MDP information visualization. Finally, the cognition gap is addressed by exposing model components to the user. MDPVIS generalizes a visualization for wildfire management. We use that problem to illustrate MDPVIS and show the visualization's generality by connecting it to two reinforcement learning frameworks that implement many different MDPs of interest in the research community. | ['Sean McGregor', 'Hailey Buckingham', 'Thomas G. Dietterich', 'Rachel Houtman', 'Claire A. Montgomery', 'Ronald A. Metoyer'] | Interactive visualization for testing Markov Decision Processes: MDPVIS | 920,009 |
This paper describes a semantic reasoner for natural language understanding that implements an algorithm which reasons over the inferential content of concepts and sentence patterns – the Inferentialist Semantic Analyzer (SIA). SIA implements material and holistic reasoning over the network of potential inferences in which the concepts of a language can participate, considering how the concepts are related in the sentence according to patterns of syntactic structures. SIA's inferential relatedness measure and reasoning process are described. SIA is used as the semantic reasoner in a system for extracting information about crimes – WikiCrimesIE. The results obtained and a comparative analysis are presented and discussed, serving to identify advantages and opportunities for improving SIA. | ['Vládia Pinheiro', 'Tarcisio H. C. Pequeno', 'Vasco Furtado'] | Um Analisador Semântico Inferencialista de Sentenças em Linguagem Natural | 662,777 |
Localization of UHF RFID tags in industrial environments is difficult due to signal reflections and multipath caused by steel and metal objects. Existing solutions have shown decent accuracy at small distances but fail to maintain that accuracy as the distance between the antenna and the tag increases. In this paper, we describe a novel UHF RFID localization approach based on location fingerprinting. The approach uses machine learning to transform localization into a classification problem. Location fingerprints are generated using the outputs of the Bartlett beamformer and MUSIC algorithms, which estimate the incoming angle of a signal. We evaluated our approach in an industrial environment, and the results show that we achieve high classification accuracy and maintain it as the distance between the tag and the antenna increases. | ['Stefan Nosovic', 'Alois Ascher', 'Johannes Lechner', 'Bernd Bruegge'] | 2-D localization of passive UHF RFID tags using location fingerprinting | 954,598
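The classification step in a fingerprinting scheme like this can be sketched with a plain k-nearest-neighbour vote. The database layout, labels, and distance metric below are illustrative assumptions, not the paper's implementation; in the paper the fingerprint vectors would come from angle-of-arrival spectra.

```python
from collections import Counter

def nearest_fingerprint(db, query, k=3):
    """Classify a query fingerprint by k-nearest-neighbour vote against a
    database {location_label: [fingerprint_vector, ...]}.  Euclidean
    distance is an assumed stand-in for the paper's metric."""
    scored = []
    for label, vectors in db.items():
        for v in vectors:
            dist = sum((a - b) ** 2 for a, b in zip(v, query)) ** 0.5
            scored.append((dist, label))
    scored.sort(key=lambda t: t[0])
    votes = Counter(label for _, label in scored[:k])
    return votes.most_common(1)[0][0]
```

With a toy database `{"zone_a": [[0.9, 0.1], [0.8, 0.2]], "zone_b": [[0.1, 0.9]]}`, a query near `[0.85, 0.15]` is assigned to `zone_a`.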
In this paper, by exploiting the special features of temporal correlations of dynamic sparse channels that path delays change slowly over time but path gains evolve faster, we propose the structured matching pursuit (SMP) algorithm to realize the reconstruction of dynamic sparse channels. Specifically, the SMP algorithm divides the path delays of dynamic sparse channels into two different parts to be considered separately, i.e., the common channel taps and the dynamic channel taps. Based on this separation, the proposed SMP algorithm simultaneously detects the common channel taps of dynamic sparse channels in all time slots at first, and then tracks the dynamic channel taps in each single time slot individually. Theoretical analysis of the proposed SMP algorithm provides a guarantee that the common channel taps can be successfully detected with a high probability, and the reconstruction distortion of dynamic sparse channels is linearly upper bounded by the noise power. Simulation results demonstrate that the proposed SMP algorithm has excellent reconstruction performance with competitive computational complexity compared with conventional reconstruction algorithms. | ['Xudong Zhu', 'Linglong Dai', 'Guan Gui', 'Wei Dai', 'Zhaocheng Wang', 'Fumiyuki Adachi'] | Structured Matching Pursuit for Reconstruction of Dynamic Sparse Channels | 584,490 |
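As a toy illustration of the detection phase (not the authors' actual SMP algorithm), the common taps shared by all time slots can be estimated by averaging matched-filter correlations across slots and keeping the strongest indices; matrix shapes and the threshold `k` are assumptions.

```python
def common_taps(A, ys, k):
    """Estimate the support common to all time slots: average the
    correlations |A^T y_t| over slots and return the k strongest tap
    indices, sorted.  A is an m x n measurement matrix (list of rows);
    ys is a list of length-m measurement vectors, one per slot."""
    m, n = len(A), len(A[0])
    avg = [0.0] * n
    for y in ys:
        for i in range(n):
            c = sum(A[j][i] * y[j] for j in range(m))
            avg[i] += abs(c) / len(ys)
    strongest = sorted(range(n), key=lambda i: avg[i], reverse=True)[:k]
    return sorted(strongest)
```

The dynamic taps would then be tracked per slot on the residual, which this sketch omits.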
Because of the difficulty of increasing single-threaded processor performance, multi-core systems are becoming increasingly popular. These systems bring new challenges to the design of a reconfigurable computing system, with reconfigurable hardware potentially shared between multiple simultaneously-executing applications. In this paper, we examine how to best use reconfigurable hardware in a multiprocessor system. One of the key aspects of this work is improving overall system throughput by sharing configured circuits between multiple processes concurrently executing on the system. In this work, we show that using our extensions for sharing configured circuits between processes improves overall system throughput, and outperforms a static schedule of the kernels between the multiple processes. | ['Philip Garcia', 'Katherine Compton'] | Kernel sharing on reconfigurable multiprocessor systems | 310,493 |
A single-chip system is designed, implemented, tested, and analyzed for the measurement of velocity from incremental optical encoders with quadrature outputs. The system uses a field programmable gate array (FPGA) chip to take advantage of high flexibility and a low-cost design cycle. The device uses two counting methods: period counting for low velocities and frequency counting for high velocities to obtain high resolution measurements for a wide range of velocities with a fixed 16-b word length. Verification testing of the device was consistent with predicted error and showed that quantization errors can be made arbitrarily small by adjusting the tradeoff between velocity range and minimum resolution. This tradeoff can be adjusted by the designer by simple modifications to the basic design. | ['Pamela T. Bhatti', 'Blake Hannaford'] | Single-chip velocity measurement system for incremental optical encoders | 135,767 |
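The dual counting scheme the abstract describes (frequency counting at high speed, period counting at low speed) can be sketched in a few lines. The switch point `min_edges` and all parameter names are illustrative guesses, not values from the paper.

```python
def encoder_velocity(edges_in_gate, clocks_per_edge, gate_s, clock_hz, ppr,
                     min_edges=8):
    """Velocity in revolutions/s from a quadrature incremental encoder,
    combining two counting methods: frequency counting (edges per fixed
    gate time) at high speed, period counting (fast-clock ticks per
    edge interval) at low speed."""
    if edges_in_gate >= min_edges:
        # Frequency counting: many edges per gate gives good resolution.
        return edges_in_gate / (ppr * gate_s)
    if clocks_per_edge == 0:
        return 0.0  # no edges observed: treat as stationary
    # Period counting: time a single edge interval with a fast clock.
    return clock_hz / (ppr * clocks_per_edge)
```

For a 4000 edges/rev encoder, 4000 edges in a 0.1 s gate gives 10 rev/s via frequency counting, while 250000 ticks of a 1 MHz clock per edge gives 0.001 rev/s via period counting.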
We consider a centralised (client-server) digital TV network with heterogeneous receiver devices of different resolutions, requiring a multi-rate transport system. There exist two main ways to store and transport (streamed) TV channels in such a system: either by providing different single-layer versions of a channel (simulcast transport mode) or by keeping one multi-layered version (encoded e.g. in SVC) with extractable substreams. We propose one approximate analytical and two simulation methods to estimate the capacity demand in such a network with variable bit rate channels, and we consider two behaviour models. In some TV distribution networks, the video is delivered at constant bit rate. However, this implies that the video quality varies. In order to provide better quality of service (QoS), a network operator must deliver the channels at non-constant bit rate, aiming in this way at constant video quality. Our models also take into account the correlations between the different resolutions of a channel. Starting from real experimental data, we obtain the necessary input to our models and explore two realistic TV network scenarios - with bouquets of 50 and 300 channels, respectively. The results of the three approaches correspond well (relative error of 0.5% at most). In the case of 50 channels, SVC outperforms simulcast in terms of required bandwidth, while in the case of 300 channels, SVC is outperformed by simulcast. Therefore, we conclude that it depends on the system parameters which of the two transport strategies will be more beneficial to save network resources. | ['Zlatka Avramova', 'Sabine Wittevrongel', 'Herwig Bruneel', 'Danny De Vleeschauwer'] | Dimensioning of a Multi-Rate Network Transporting Variable Bit Rate TV Channels | 255,914
Purpose - Organizations rely on social outreach campaigns to raise financial support, recruit volunteers, and increase public awareness. In order to maximize response rates, organizations face the challenging problem of designing appropriately tailored interactions for each user. An interaction consists of a specific combination of message, media channel, sender, tone, and possibly many other attributes. The purpose of this paper is to address the problem of how to design tailored interactions for each user to maximize the probability of a desired response. Design/methodology/approach - A nearest-neighbor (NN) algorithm is developed for interaction design. Simulation-based experiments are then conducted to compare positive response rates obtained by two forms of this algorithm against that of several control interaction design strategies. A factorial experimental design is employed which varies three user population factors in a combinatorial manner, allowing the methods to be compared across eight distinct scenarios. Findings - The NN algorithms significantly outperformed all three controls in seven out of the eight scenarios. Increases in response rates ranging from approximately 20 to 400 percent were observed. Practical implications - This work proposes a data-oriented method for designing tailored interactions for individual users in social outreach campaigns which can enable significant increases in positive response rates. Additionally, the proposed algorithm is relatively easy to implement. Originality/value - The problem of optimal interaction design in social outreach campaigns is scarcely addressed in the literature. This work proposes an effective and easy to implement solution approach for this problem. | ['Christopher Garcia'] | A nearest-neighbor algorithm for targeted interaction design in social outreach campaigns | 915,920
The use of context-free grammars in automatic speech recognition is discussed. A dynamic programming algorithm for recognizing and parsing spoken word strings of a context-free grammar is presented. The time alignment is incorporated into the parsing algorithm. The algorithm performs all functions simultaneously, namely, time alignment, word boundary detection, recognition, and parsing. As a result, no postprocessing is required. From the probabilistic point of view, the algorithm finds the most likely explanation or derivation for the observed input string, which amounts to Viterbi scoring rather than Baum-Welch scoring in the case of regular or finite-state languages. The algorithm provides a closed-form solution. The computational complexity of the algorithm is studied. Details of the implementation and experimental tests are described. | ['Hermann Ney'] | Dynamic programming parsing for context-free grammars in continuous speech recognition | 182,494
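The Viterbi-style dynamic programming the abstract refers to can be illustrated, minus the time-alignment component, with a probabilistic CYK recognizer for a grammar in Chomsky normal form. The grammar encoding below is an assumption for the sketch, not the paper's data structure.

```python
def viterbi_cyk(words, lexicon, rules, start="S"):
    """Max probability of deriving `words` from `start` under a CNF PCFG,
    i.e. Viterbi scoring of the best derivation (0.0 if unparsable).
    lexicon: {(A, word): p} for A -> word; rules: {(A, B, C): p} for
    A -> B C."""
    n = len(words)
    best = {}  # (i, j, A) -> max prob of A spanning words[i:j]
    for i, w in enumerate(words):
        for (A, word), p in lexicon.items():
            if word == w:
                key = (i, i + 1, A)
                best[key] = max(best.get(key, 0.0), p)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):           # split point
                for (A, B, C), p in rules.items():
                    left = best.get((i, k, B), 0.0)
                    right = best.get((k, j, C), 0.0)
                    if left and right:
                        key = (i, j, A)
                        cand = p * left * right
                        if cand > best.get(key, 0.0):
                            best[key] = cand
    return best.get((0, n, start), 0.0)
```

For the toy grammar `S -> N V` with `N -> time` and `V -> flies`, the string "time flies" receives the product of the rule and lexical probabilities, while the reversed string scores zero.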
Learning of inverse dynamics modeling errors is key for compliant or force control when analytical models are only rough approximations. Thus, designing real time capable function approximation algorithms has been a necessary focus towards the goal of online model learning. However, because these approaches learn a mapping from actual state and acceleration to torque, good tracking is required to observe data points on the desired path. Recently it has been shown how online gradient descent on a simple modeling-error offset term, minimizing tracking error at the acceleration level, can address this issue. However, to adapt to larger errors a high learning rate of the online learner is required, resulting in reduced compliance. Thus, here we propose to combine both approaches: The online adapted offset term ensures good tracking such that a nonlinear function approximator is able to learn an error model on the desired trajectory. This, in turn, reduces the load on the adaptive feedback, enabling it to use a lower learning rate. Combined, this creates a controller with variable feedback and low gains, and a feedforward model that can account for larger modeling errors. We demonstrate the effectiveness of this framework in simulation and on a real system. | ['Franziska Meier', 'Daniel Kappler', 'Nathan D. Ratliff', 'Stefan Schaal'] | Towards robust online inverse dynamics learning | 956,229
Rabbit Field is an infestation of inflatable rabbit-like forms, filling their display space and inviting tactile interaction. They cover much of the floor, and any other available surfaces, growing in number each night. Each rabbit is self-inflating using a simple computer fan, and can sense its internal pressure state by monitoring its fan speed. If a rabbit is squeezed, and partially deflated, the rabbits around it respond, as if out of empathy, deflating themselves. In this way, a wave of deflation ripples out from the squeezed center. By connecting an entire field of forms into a network of sensors and output media, interactions between viewer and inflatable are further displayed and amplified as deflation data is passed from one rabbit to the next. The organic feel of the forms and the rhythm of their inflation and deflation in reaction to human touch are easily anthropomorphized by the audience as simple expressions of emotion. This initiates and encourages play and exploration. This piece seeks to encourage and reward a 'tangible dialogue' between viewer and inflatables, as well as hoping to establish social connection between viewers who co-interact with the system. Rabbit forms were chosen to engage and invite inquiry. These animals also have strong cultural connotations of fertility and innocence, and are prevalent images in modern eastern and western aesthetic. Use of the unique properties of inflatable structures in architecture, art and design has a long and creative history, flirting between chic design and tacky novelty. http://www.media.mit.edu/~bcd/rabbits. | ['Ben Dalton'] | Rabbit field | 661,571 |
Intermodulation distortion is one of the key design requirements of Radio Frequency circuits. The standard approach for analyzing distortion using circuit simulators is to mimic measurement environments and compute the response due to a two-tone input. This considerably increases the CPU cost of the simulation because of the large number of variables resulting from the harmonics of these two tones and their intermodulation products. In this paper, we propose an analytical method for directly obtaining the intermodulation distortion from the Harmonic Balance equations with only a single-tone input, without the need to perform a Harmonic Balance simulation. The proposed method is shown to be significantly faster than traditional simulation based approaches. | ['Dani Tannir', 'Roni Khazaka'] | Computation of IP3 using single-tone moments analysis | 542,814
Ensuring Fast Adaptation in an Ant-Based Path Management System | ['Laurent Paquereau', 'Bjarne E. Helvik'] | Ensuring Fast Adaptation in an Ant-Based Path Management System | 581,992 |
We describe a simple method of unsupervised morpheme segmentation of words in an unknown language. All that is needed is a raw text corpus (or a list of words) in the given language. The algorithm identifies word parts occurring in many words and interprets them as morpheme candidates (prefixes, stems and suffixes). New treatment of prefixes is the main innovation in comparison to [1]. After filtering out spurious hypotheses, the list of morphemes is applied to segment input words. Official Morpho Challenge 2008 evaluation is given together with some additional experiments. Processing of prefixes improved the F-score by 5 to 11 points for German, Finnish and Turkish, while it failed to improve English and Arabic. We also analyze and discuss errors with respect to the evaluation method. | ['Daniel Zeman'] | Using unsupervised paradigm acquisition for prefixes | 181,879 |
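The core idea, counting word parts that recur across many words and treating the frequent ones as morpheme candidates, can be shown with a deliberately minimal suffix-only sketch. The thresholds and the tie-breaking rule are assumptions; the paper's system also handles prefixes and stems and filters spurious hypotheses.

```python
from collections import Counter

def segment_by_suffix(words, max_suffix_len=4, min_count=2):
    """Toy unsupervised segmentation: count word-final substrings across
    the corpus, then split each word at its most frequent attested
    suffix (ties broken by length).  Returns {word: (stem, suffix)}."""
    suffix_counts = Counter()
    for w in words:
        for k in range(1, min(max_suffix_len, len(w) - 1) + 1):
            suffix_counts[w[-k:]] += 1
    segmented = {}
    for w in words:
        cands = [w[-k:] for k in range(1, min(max_suffix_len, len(w) - 1) + 1)]
        cands = [s for s in cands if suffix_counts[s] >= min_count]
        if cands:
            suf = max(cands, key=lambda s: (suffix_counts[s], len(s)))
            segmented[w] = (w[: len(w) - len(suf)], suf)
        else:
            segmented[w] = (w, "")
    return segmented
```

On a tiny corpus such as `["walking", "talking", "going", "walked", "talked"]`, the frequent substring "ing" wins over longer but rarer candidates, yielding splits like `("walk", "ing")`.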
To assist the vulnerability identification process, researchers proposed prediction models that highlight (for inspection) the most likely to be vulnerable parts of a system. In this paper we aim at making a reliable replication and comparison of the main vulnerability prediction models. Thus, we seek for determining their effectiveness, i.e., their ability to distinguish between vulnerable and non-vulnerable components, in the context of the Linux Kernel, under different scenarios. To achieve the above-mentioned aims, we mined vulnerabilities reported in the National Vulnerability Database and created a large dataset with all vulnerable components of Linux from 2005 to 2016. Based on this, we then built and evaluated the prediction models. We observe that an approach based on the header files included and on function calls performs best when aiming at future vulnerabilities, while text mining is the best technique when aiming at random instances. We also found that models based on code metrics perform poorly. We show that in the context of the Linux kernel, vulnerability prediction models can be superior to random selection and relatively precise. Thus, we conclude that practitioners have a valuable tool for prioritizing their security inspection efforts. | ['Matthieu Jimenez', 'Mike Papadakis', 'Yves Le Traon'] | Vulnerability Prediction Models: A Case Study on the Linux Kernel | 881,420 |
We study strategies for enhanced secrecy using cooperative jamming in secure communication systems with limited rate feedback. A Gaussian multiple-input multiple-output (MIMO) wiretap channel with a jamming helper is considered. The transmitter and helper both require channel state information (CSI), which is quantized at the receiver and fed back through two sum-rate-limited feedback channels. The quantization errors result in reduced beamforming gain from the transmitter, as well as interference leakage from the helper. First, under the assumption that the eavesdropper's CSI is completely unknown, we derive a lower bound on the average main channel rate and find the feedback bit allocation that maximizes the jamming power under a constraint on the bound. For the case where statistical CSI for the eavesdropper's channel is available, we derive a lower bound on the average secrecy rate, and we optimize the bound to find a suitable bit allocation and the transmit powers allocated to the transmitter and helper. For the case where the transmitter and helper have the same number of antennas, we obtain a closed-form solution for the optimal bit allocation. Simulations verify the theoretical analysis and demonstrate the significant performance gain that results with intelligent feedback bit allocation and power control. | ['Xinjie Yang', 'A. Lee Swindlehurst'] | Limited Rate Feedback in a MIMO Wiretap Channel With a Cooperative Jammer | 790,536 |
This paper presents a Class D amplifier output stage with low Total Harmonic Distortion (THD) and high Power Supply Rejection Ratio (PSRR). The Class D output stage reduces the non-linearities and supply noise by means of a second-order negative feedback loop embodying a single stage second-order integrator and a Schmitt trigger comparator. Unlike conventional feedback, the reference input signal of the feedback loop is a digital pulse width modulated signal. The feedback loop compensates for any external errors or non-linearities in the output PWM signal by modulating the pulse width of the output signal. Based on simulation using AMS 0.35µm CMOS process, our proposed closed-loop output stage can achieve a PSRR of −90 dB at 1 kHz and a THD well below 0.05% up to 10 kHz. This shows that negative feedback can effectively be employed to improve the PSRR and THD performance of a Class D output stage with digital PWM input. | ['Chun Kit Lam', 'Meng Tong Tan'] | A Class D amplifier output stage with low THD and high PSRR | 263,089
Cross-cultural Deception Detection | ['Verónica Pérez-Rosas', 'Rada Mihalcea'] | Cross-cultural Deception Detection | 613,050 |
JPEG2000 is a highly scalable compression standard, allowing access to image representations with a reduced resolution, a reduced quality or confined to a spatial region of interest. As such, it is well placed to play an important role in interactive imaging applications. However, the standard itself stops short of providing guidance or specific mechanisms for exploiting its scalability in such applications. We describe the JPIK (JPEG2000 interactive with Kakadu) protocol for interactive imaging with JPEG2000. Our results suggest, somewhat surprisingly, that image tiling (dividing into independently compressed sub-images), can be detrimental to effective browsing of large compressed images over low bandwidth connections. | ['David Taubman'] | Remote browsing of JPEG2000 images | 67,381 |
Augmented Reality is a technique that enables users to interact with their physical environment through the overlay of digital information. While being researched for decades, more recently, Augmented Reality moved out of the research labs and into the field. While most of the applications are used sporadically and for one particular task only, current and future scenarios will provide a continuous and multi-purpose user experience. Therefore, in this paper, we present the concept of Pervasive Augmented Reality, aiming to provide such an experience by sensing the user’s current context and adapting the AR system based on the changing requirements and constraints. We present a taxonomy for Pervasive Augmented Reality and context-aware Augmented Reality, which classifies context sources and context targets relevant for implementing such a context-aware, continuous Augmented Reality experience. We further summarize existing approaches that contribute towards Pervasive Augmented Reality. Based on our taxonomy and survey, we identify challenges for future research directions in Pervasive Augmented Reality. | ['Jens Grubert', 'Tobias Langlotz', 'Stefanie Zollmann', 'Holger Regenbrecht'] | Towards Pervasive Augmented Reality: Context-Awareness in Augmented Reality | 696,832
Lee and Siu made it possible for the first time to model and reason with set variables in weighted constraint satisfaction problems (WCSPs). In addition to an efficient set variable representation scheme, they also defined the notion of set bounds consistency, which is generalized from NC* and AC* for integer variables in WCSPs, and their associated enforcement algorithms. In this paper, we adapt ideas from FDAC and EDAC for integer variables to achieve stronger consistency notions for set variables. The generalization is non-trivial due to the common occurrence of ternary set constraints. Enforcement algorithms for the new consistencies are proposed. Empirical results confirm the feasibility and efficiency of our proposal. | ['Jimmy Ho-Man Lee', 'C. F. K. Siu'] | Stronger Consistencies in WCSPs with Set Variables | 420,218
This paper investigates how digital traces of people's movements and activities in the physical world (e.g., at college campuses and commutes) may be used to detect local, short-lived events in various urban spaces. Past work that uses occupancy-related features can only identify high-intensity events (those that cause large-scale disruption in visit patterns). In this paper, we first show how longitudinal traces of the coordinated and group-based movement episodes obtained from individual-level movement data can be used to create a socio-physical network (with edges representing tie strengths among individuals based on their physical world movement & collocation behavior). We then investigate how two additional families of socio-physical features: (i) group-level interactions observed over shorter timescales and (ii) socio-physical network tie-strengths derived over longer timescales, can be used by state-of-the-art anomaly detection methods to detect a much wider set of both high & low intensity events. We utilize two distinct datasets--one capturing coarse-grained SMU campus-wide indoor location data from hundreds of students, and the other capturing commuting behavior by millions of users on Singapore's public transport network--to demonstrate the promise of our approaches: the addition of group and socio-physical tie-strength based features increases recall (the percentage of events detected) more than two-fold (to 0.77 on the SMU campus and to 0.73 at sample MRT stations), compared to pure occupancy-based approaches. | ['Kasthuri Jayarajah', 'Archan Misra', 'Xiao Wen Ruan', 'Ee-Peng Lim'] | Event Detection: Exploiting Socio-Physical Interactions in Physical Spaces | 658,764
Massive dataset sizes can make visualization difficult or impossible. One solution to this problem is to divide a dataset into smaller pieces and then stream these pieces through memory, running algorithms on each piece. This paper presents a modular data-flow visualization system architecture for culling and prioritized data streaming. This streaming architecture improves program performance both by discarding pieces of the input dataset that are not required to complete the visualization, and by prioritizing the ones that are. The system supports a wide variety of culling and prioritization techniques, including those based on data value, spatial constraints, and occlusion tests. Prioritization ensures that pieces are processed and displayed progressively based on an estimate of their contribution to the resulting image. Using prioritized ordering, the architecture presents a progressively rendered result in a significantly shorter time than a standard visualization architecture. The design is modular, such that each module in a user-defined data-flow visualization program can cull pieces as well as contribute to the final processing order of pieces. In addition, the design is extensible, providing an interface for the addition of user-defined culling and prioritization techniques to new or existing visualization modules. | ['J. Ahrens', 'Nehal N. Desai', 'Patrick S. McCormick', 'Ken Martin', 'Jonathan Woodring'] | A modular, extensible visualization system architecture for culled, prioritized data streaming | 604,324 |
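The skeleton of such a culled, prioritized streaming loop can be sketched as below. The piece descriptors and the three callables are hypothetical stand-ins for the architecture's modules, which in the paper each contribute their own culling and prioritization.

```python
import heapq

def stream_pieces(pieces, cull, priority, process):
    """Drop pieces a culling predicate rejects, then process the rest in
    descending estimated contribution to the final image, so results
    appear progressively.  Returns processed results in that order."""
    heap = []
    for idx, piece in enumerate(pieces):
        if cull(piece):
            continue  # discarded before any expensive processing
        # Negate priority for a max-heap; idx breaks ties deterministically.
        heapq.heappush(heap, (-priority(piece), idx, piece))
    order = []
    while heap:
        _, _, piece = heapq.heappop(heap)
        order.append(process(piece))
    return order
```

For example, with an estimated-visibility score as the priority and "zero visibility" as the culling test, fully occluded pieces are skipped and the most visible piece is rendered first.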
Providing feedback, both assessing final work and giving hints to stuck students, is difficult for open-ended assignments in massive online classes which can range from thousands to millions of students. We introduce a neural network method to encode programs as a linear mapping from an embedded precondition space to an embedded postcondition space and propose an algorithm for feedback at scale using these linear maps as features. We apply our algorithm to assessments from the Code.org Hour of Code and Stanford University’s CS1 course, where we propagate human comments on student assignments to orders of magnitude more submissions. | ['Chris Piech', 'Jonathan Huang', 'Andy Nguyen', 'Mike Phulsuksombati', 'Mehran Sahami', 'Leonidas J. Guibas'] | Learning Program Embeddings to Propagate Feedback on Student Code | 118,139 |
Document fraud detection by ink analysis using texture features and histogram matching | ['Apurba Gorai', 'Rajarshi Pal', 'Phalguni Gupta'] | Document fraud detection by ink analysis using texture features and histogram matching | 946,281 |
The representation, design, and implementation of computer interfaces, the outward window of computer software, have a large impact on how software is learned and used, especially for virtual instruments. Several types of virtual instrument development software are available, and some of them are highly influential in the instrument science field. However, only a limited number of inexperienced or previously untrained people are able to use them well. Part of the limitation stems from the difficulty of learning how to use them, and part from the demand for a software development background or hardware design skills. Therefore, user-friendly virtual instrument software is needed by the many people who lack such a background or hardware design skills. Several features of the software described in this article serve to meet that need; for example, a requirement-driven approach to human-computer interface implementation helps users with limited experience overcome obstacles. An experimental interface design has been developed using an advanced object-oriented development environment, allowing a great deal of flexibility in implementing changes and adding new features in order to provide a friendly operation interface to actual users. | ['Yongkai Fan', 'Tianze Sun', 'Jun Lin', 'Xiaolong Fu', 'Yangyi Sui'] | Computer interface for learning and using virtual instrument | 309,627
In this study the hot spot phenomenon was explained successfully in the BRDF analysis by adopting the shadowing model as a volume scattering kernel. In addition, the proposed BRDF models with the shadowing kernel showed excellent agreement between the measured and model-predicted BRDF values. | ['Yoshiyuki Kawata'] | Introduction of the Shadowing Kernel for BRDF Analysis | 222,306
Computer graphics can not only generate synthetic images and ground truth but it also offers the possibility of constructing virtual worlds in which: (i) an agent can perceive, navigate, and take actions guided by AI algorithms, (ii) properties of the worlds can be modified (e.g., material and reflectance), (iii) physical simulations can be performed, and (iv) algorithms can be learnt and evaluated. But creating realistic virtual worlds is not easy. The game industry, however, has spent a lot of effort creating 3D worlds, which a player can interact with. So researchers can build on these resources to create virtual worlds, provided we can access and modify the internal data structures of the games. To enable this we created an open-source plugin UnrealCV (this http URL) for a popular game engine Unreal Engine 4 (UE4). We show two applications: (i) a proof of concept image dataset, and (ii) linking Caffe with the virtual world to test deep network algorithms. | ['Weichao Qiu', 'Alan L. Yuille'] | UnrealCV: Connecting Computer Vision to Unreal Engine | 881,159 |
Three-dimensional pharmacophore models were generated for A2A and A2B adenosine receptors (ARs) based on highly selective A2A and A2B antagonists using the Catalyst program. The best pharmacophore model for selective A2A antagonists (Hypo-A2A) was obtained through a careful validation process. Four features contained in Hypo-A2A (one ring aromatic feature (R), one positively ionizable feature (P), one hydrogen bond acceptor lipid feature (L), and one hydrophobic feature (H)) seem to be essential for antagonists in terms of binding activity and A2A AR selectivity. The best pharmacophore model for selective A2B antagonists (Hypo-A2B) was elaborated by modifying the Catalyst common features (HipHop) hypotheses generated from the selective A2B antagonists training set. Hypo-A2B also consists of four features: one ring aromatic feature (R), one hydrophobic aliphatic feature (Z), and two hydrogen bond acceptor lipid features (L). All features play an important role in A2B AR binding affinity and are essential ... | ['Jing Wei', 'Songqing Wang', 'Shaofen Gao', 'Xuedong Dai', 'Qingzhi Gao'] | 3D-Pharmacophore Models for Selective A2A and A2B Adenosine Receptor Antagonists | 483,877 |
Content-based retrieval of spatio-temporal patterns from human motion databases is inherently nontrivial since finding effective distance measures for such data is difficult. These data are typically modelled as time series of high dimensional vectors which incur expensive storage and retrieval cost as a result of the high dimensionality. In this paper, we abstract such complex spatio-temporal data as a set of frames which are then represented as high dimensional categorical feature vectors. New distance measures and queries for high dimensional categorical time series are then proposed and efficient query processing techniques for answering these queries are developed. We conducted experiments using our proposed distance measures and queries on human motion capture databases. The results indicate that significant improvement on the efficiency of query processing of categorical time series (more than 10,000 times faster than that of the original motion sequences) can be achieved while guaranteeing the effectiveness of the search. | ['Yueguo Chen', 'Shouxu Jiang', 'Beng Chin Ooi', 'Anthony K. H. Tung'] | Querying Complex Spatio-Temporal Sequences in Human Motion Databases | 138,920 |
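A generic stand-in for a distance between such categorical time series is dynamic time warping over a per-frame Hamming distance; the paper's measures are tailored to motion-capture frames, so treat this purely as an illustrative sketch.

```python
def frame_dist(f, g):
    """Normalised Hamming distance between two equal-width categorical
    frames (tuples of category labels)."""
    return sum(a != b for a, b in zip(f, g)) / len(f)

def dtw_categorical(seq_a, seq_b):
    """Dynamic-time-warping distance between two categorical time series
    (lists of frames), accumulating the Hamming frame distance along
    the cheapest monotone alignment."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = frame_dist(seq_a[i - 1], seq_b[j - 1])
            D[i][j] = c + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because the warping path may repeat frames, a sequence that merely holds a pose longer (an extra identical frame) is still at distance zero from the original, which is the kind of invariance useful for motion retrieval.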
In this paper we consider the problem of joint carrier offset and code timing estimation for CDMA (code division multiple access) systems. In contrast to most existing schemes which require multi-dimensional search over the parameter space, we propose a blind estimator that solves the joint estimation problem algebraically. By exploiting the noise subspace of the covariance matrix of the received data, the multiuser estimation is decoupled into parallel estimations of individual users, which makes computations efficient. The proposed estimator is non-iterative and near-far resistant. It can deal with frequency-selective and time-varying channels. The performance of the proposed scheme is illustrated by some computer simulations. | ['Khaled Amleh', 'Hongbin Li'] | Blind code timing and carrier offset estimation for DS-CDMA systems | 55,400
Heterogeneous online learning environments such as Go-Lab offer different representations of knowledge objects to support learners' cognitive and metacognitive skills. Various "Inquiry Learning Apps" support independent exploration and acquisition of domain knowledge. Suitable forms of representation, such as concept maps or wiki texts, can support the learning process. This paper presents the ConceptCloud app, which displays an aggregated view of learning objects of different types. Using semantic analyses, relevant concepts are extracted from the various learning objects and visualized in the style of a tag cloud. Through opportunities to compare with other students, as well as reflection questions, the app helps stimulate learners' critical thinking. Teachers are supported in supervising student activities through the aggregation of concepts across an entire course. | ['Kristina Angenendt', 'Jeanny Bormann', 'Tim Donkers', 'Tabitha Goebel', 'Anna Kizina', 'Timm Kleemann', 'Lisa Michael', 'Hifsa Raja', 'Franziska Sachs', 'Christina Schneegass', 'Lisa-Maria Sinzig', 'Juliane Steffen', 'Sven Manske', 'Tobias Hecking', 'Heinz Ulrich Hoppe'] | ConceptCloud - Entwicklung einer Applikation zur Unterstützung von Reflexionsprozessen im Online-Lernportal Go-Lab. | 675,635
Capacity formulas and random-coding exponents are derived for a generalized family of Gel'fand-Pinsker coding problems. These exponents yield asymptotic upper bounds on the achievable log probability of error. In our model, information is to be reliably transmitted through a noisy channel with finite input and output alphabets and random state sequence, and the channel is selected by a hypothetical adversary. Partial information about the state sequence is available to the encoder, adversary, and decoder. The design of the transmitter is subject to a cost constraint. Two families of channels are considered: 1) compound discrete memoryless channels (CDMC), and 2) channels with arbitrary memory, subject to an additive cost constraint, or more generally, to a hard constraint on the conditional type of the channel output given the input. Both problems are closely connected. The random-coding exponent is achieved using a stacked binning scheme and a maximum penalized mutual information decoder, which may be thought of as an empirical generalized maximum a posteriori decoder. For channels with arbitrary memory, the random-coding exponents are larger than their CDMC counterparts. Applications of this study include watermarking, data hiding, communication in presence of partially known interferers, and problems such as broadcast channels, all of which involve the fundamental idea of binning | ['Pierre Moulin', 'Ying Wang'] | Capacity and Random-Coding Exponents for Channel Coding With Side Information | 170,976 |
We consider the cascade and triangular rate-distortion problem where side information is known to the source encoder and to the first user but not to the second user. We characterize the rate-distortion region for these problems, as well as some of their extensions. For the quadratic Gaussian case, we show that it is sufficient to consider jointly Gaussian distributions, which leads to an explicit solution. | ['Haim H. Permuter', 'Tsachy Weissman'] | Cascade and Triangular Source Coding With Side Information at the First Two Nodes | 9,102 |
Advanced Ethernet Networks' Diagnozer. | ['Hela Lajmi', 'Adel M. Alimi', 'Joseba Rodriguez'] | Advanced Ethernet Networks' Diagnozer. | 754,377 |
In this letter, we consider resource allocation for OFDMA-based secure cooperative communication by employing a trusted decode and forward (DF) relay among the untrusted users. We formulate two optimization problems, namely, 1) sum rate maximization subject to individual power constraints on source and relay, and 2) sum power minimization subject to a fairness constraint in terms of per-user minimum support secure rate requirement. The optimization problems are solved utilizing the optimality of KKT conditions for pseudolinear functions. | ['Ravikant Saini', 'Deepak Mishra', 'Swades De'] | OFDMA-Based DF Secure Cooperative Communication With Untrusted Users | 709,119 |
Replacement and Collection in Intuitionistic Set Theory | ['Nicolas D. Goodman'] | Replacement and Collection in Intuitionistic Set Theory | 147,267 |
Geospatial data analysis techniques are widely used to find optimal routes from specified starting points to specified destinations. Optimality is defined in terms of minimizing some impedance value over the length of the route; the value to be minimized might be distance, travel time, financial cost, or any other metric. Conventional analysis procedures assume that impedance values of all possible travel routes are known a priori, and when this assumption holds, efficient solution strategies exist that allow truly optimal solutions to be found for even very large problems. When impedance values are not known with certainty a priori, exact solution strategies do not exist and heuristics must be employed. This study evaluated how the quality of the solutions generated by one such heuristic was impacted by the nature of the uncertainty in the cost database, the nature of the costs themselves, and the parameters used in the heuristic algorithm. It was found that all of these factors influenced the quality of the solutions produced by the heuristic, but encouragingly, an easily controlled parameter of the heuristic algorithm itself played the most important role in controlling solution quality. | ['Denis J. Dean'] | Finding Optimal Travel Routes with Uncertain Cost Data | 113,278
Calibrating Probability with Undersampling for Unbalanced Classification | ['Andrea Dal Pozzolo', 'Olivier Caelen', 'Reid A. Johnson', 'Gianluca Bontempi'] | Calibrating Probability with Undersampling for Unbalanced Classification | 672,376 |
Density-based rare event detection from streams of neuromorphic sensor data | ['Csaba Beleznai', 'Ahmed Nabil Belbachir', 'Peter M. Roth'] | Density-based rare event detection from streams of neuromorphic sensor data | 925,992 |
A novel modular coding paradigm is investigated using residual vector quantization (RVQ) with memory that incorporates a modular neural network vector predictor in the feedback loop. A modular neural network predictor consists of several expert networks that are optimized for predicting a particular class of data. The predictor also consists of an integrating unit that mixes the outputs of the expert networks to form the final output of the prediction system. The vector quantizer also has a modular structure. The proposed modular predictive RVQ (modular PRVQ) is designed by imposing a constraint on the output rate of the system. Experimental results show that the modular PRVQ outperforms simple PRVQ by as much as 1 dB at low bit rates. Furthermore, for the same peak signal-to-noise ratio (PSNR), the modular PRVQ reduces the bit rate by more than a half when compared to the JPEG algorithm. | ['Sayed A. Rizvi', 'Lin-Cheng Wang', 'Nasser M. Nasrabadi'] | Rate-constrained modular predictive residual vector quantization of digital images | 379,460 |
In the database context, the hypertree decomposition method is used for query optimization, whereby conjunctive queries having a low degree of cyclicity can be recognized and decomposed automatically, and efficiently evaluated. Hypertree decompositions were introduced at ACM PODS 1999. The present paper reviews, in the form of questions and answers, the main relevant concepts and algorithms and surveys selected related work including applications and test results. | ['Georg Gottlob', 'Gianluigi Greco', 'Nicola Leone', 'Francesco Scarcello'] | Hypertree Decompositions: Questions and Answers | 817,832
The problem of blind multiuser detection for a DS-CDMA system employing multiple transmit and receive antennae over a fading dispersive channel is considered. Relying upon a well known signal representation, we develop a new family of linear receivers adapted to account for MIMO channels. Linear receivers share the key property of substantial immunity to co-channel interference, without requiring any prior knowledge on the signals to be decoded, except for the spreading sequence. The performance assessment, conducted through semianalytical methods - whenever possible - and validated through Monte Carlo counting techniques, shows that the newly proposed receivers perform pretty close to their non-blind counterparts, which rely on prior knowledge of the spreading codes, symbol timings and channel impulse responses for all of the active users. | ['Stefano Buzzi', 'Marco Lops', 'Luca Venturino'] | Blind multi-antenna receivers for dispersive DS/CDMA channels with no channel-state information | 303,323 |
This paper presents a high frame rate capable Active Pixel Sensor (APS) using Carbon Nanotube Field Effect Transistors (CNTFETs) instead of Complementary Metal Oxide Semiconductor (CMOS) transistors. Conventionally, the design of a single APS circuit is based on the three-transistor (3T) model. In order to achieve a higher frame rate, one extra transistor with a column sensor circuit has been introduced in the proposed design to reduce the readout time. This study also examines the effect of transistor sizing, bias current, and the chiral vector of the CNTFET. The power consumption and power delay product (PDP) are also investigated for specific sets of reset and row selector signals. Data for these studies were collected with the help of HSPICE software and then plotted in OriginPro to analyze the optimal operating point of the APS circuit. The bias current was also recorded for the readout transistor, which is uniquely introduced in the proposed model for achieving better readout time. Hence, the main focus of this paper is to improve the frame rate by reducing the readout time. Results of the proposed CNTFET APS circuit are compared with the conventional CMOS APS circuit. The performance benchmarking shows that the CNTFET APS cell significantly reduces readout time and PDP, and thus can achieve a much higher frame rate than the conventional CMOS APS cell. | ['Subrata Biswas', 'Poly Kundu', 'Md. Hasnat Kabir', 'Sagir Ahmed', 'Md. Moidul Islam'] | Design and Analysis of High Frame Rate Capable Active Pixel Sensor by Using CNTFET Devices for Nanoelectronics | 688,716
Error Estimates for well-balanced and time-split schemes on a locally damped wave equation | ['Debora Amadori', 'Laurent Gosse'] | Error Estimates for well-balanced and time-split schemes on a locally damped wave equation | 678,755 |
In this paper we focus on a linear optimization problem with uncertainties, having expectations in the objective and in the set of constraints. We present a modular framework to obtain an approximate solution to the problem that is distributionally robust and more flexible than the standard technique of using linear rules. Our framework begins by first affinely extending the set of primitive uncertainties to generate new linear decision rules of larger dimensions and is therefore more flexible. Next, we develop new piecewise-linear decision rules that allow a more flexible reformulation of the original problem. The reformulated problem will generally contain terms with expectations on the positive parts of the recourse variables. Finally, we convert the uncertain linear program into a deterministic convex program by constructing distributionally robust bounds on these expectations. These bounds are constructed by first using different pieces of information on the distribution of the underlying uncertainties to develop separate bounds and next integrating them into a combined bound that is better than each of the individual bounds. | ['Joel Goh', 'Melvyn Sim'] | Distributionally Robust Optimization and Its Tractable Approximations | 13,358 |
Optimized versions of frequency-wavenumber (F-K) migration methods are introduced to better focus ground-penetrating radar (GPR) data in applications of shallow subsurface object localization, e.g., landmine remediation. Migration methods are based on the wave equation and operate by backpropagating the received data into the earth so as to localize buried objects. Traditional F-K migration is based on an underlying assumption that the wavefields propagate in a homogeneous medium. The presence of a rough air-ground interface in the GPR case degrades the localization ability. To overcome this problem in the context of the F-K algorithm, we introduce lateral variations in the velocity of waves in the medium. An optimization approach is employed to choose that velocity function that results in a well-focused image where an entropy-like criterion is used to quantify the notion of focus. Extension of the basic method to lossy medium is also described. The utility of these techniques is demonstrated using field data from a number of GPR systems. | ['Xiaoyin Xu', 'Eric L. Miller', 'Carey M. Rappaport'] | Minimum entropy regularization in frequency-wavenumber migration to localize subsurface objects | 297,898 |
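The F-K migration record above optimizes a velocity function by minimizing an entropy-like focus criterion. The paper's exact criterion is not reproduced in the abstract; the sketch below is an illustrative assumption that treats normalized pixel energies of a migrated image as a probability distribution, so a well-focused image (energy concentrated in few pixels) yields low entropy and the velocity can be chosen to minimize this value.

```python
import math

# Illustrative entropy-like focus metric (an assumption, not the paper's
# exact functional): normalize pixel energies to a distribution p and
# compute -sum(p * log p). Concentrated energy => low entropy => "focused".
def energy_entropy(image):
    energies = [abs(v) ** 2 for row in image for v in row]
    total = sum(energies)
    entropy = 0.0
    for e in energies:
        p = e / total
        if p > 0.0:
            entropy -= p * math.log(p)
    return entropy

# A focused image (one bright pixel) scores lower than a diffuse one.
focused = [[0.0, 0.0, 0.0], [0.0, 9.0, 0.0], [0.0, 0.0, 0.0]]
diffuse = [[1.0] * 3 for _ in range(3)]
print(energy_entropy(focused) < energy_entropy(diffuse))  # True
```

An optimizer over candidate velocity functions would migrate the data with each candidate and keep the one whose image minimizes this metric.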
Fully Homomorphic Encryption (FHE) becomes an important encryption scheme in the frame of Cloud computing. Current software implementations are however very slow and require a huge computing power. This work investigates the possibility to accelerate FHE by implementing it in off-the-shelf FPGAs. The focus is on one critical function in the FHE scheme: the polynomial multiplication. In this paper, three algorithms are considered and an optimized architecture is proposed for each of them. The major contribution of this paper is the comparison of the different multiplication algorithms on a programmable device: results show that the simplest algorithm is the most efficient for a hardware implementation, in the case of polynomials of order 511 with 32-bit coefficients. The acceleration is about one order of magnitude compared with a software reference implementation. | ['C. Jayet-Griffon', 'M.-A. Cornelie', 'Paolo Maistri', 'Philippe Elbaz-Vincent', 'Régis Leveugle'] | Polynomial multipliers for fully homomorphic encryption on FPGA | 649,572 |
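The FHE record above finds that the simplest of the three multiplication algorithms is the most efficient in hardware. That simplest candidate is schoolbook polynomial multiplication; the sketch below shows it with reduction modulo x^n + 1 (the negacyclic ring common in ring-based FHE). The parameters are illustrative, not the paper's degree-511, 32-bit-coefficient configuration.

```python
# Schoolbook (quadratic-time) polynomial multiplication in Z_q[x]/(x^n + 1).
# Coefficients are plain Python ints; a and b are coefficient lists of
# length <= n, lowest degree first. Illustrative sketch, not the FPGA design.
def polymul_mod(a, b, n, q):
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                res[k] = (res[k] + ai * bj) % q
            else:  # x^n = -1 in this ring, so wrap around with a sign flip
                res[k - n] = (res[k - n] - ai * bj) % q
    return res

# (1 + x) * (1 + x) = 1 + 2x + x^2; with x^2 = -1 this is 2x in Z_17.
print(polymul_mod([1, 1], [1, 1], 2, 17))  # [0, 2]
```

A hardware version maps the two nested loops onto a multiply-accumulate array; the wrap-around with sign flip is what distinguishes the negacyclic reduction from plain cyclic convolution.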
Interactive selection is a critical component in exploratory visualization, allowing users to isolate subsets of the displayed information for highlighting, deleting, analysis, or focused investigation. Brushing, a popular method for implementing the selection process, has traditionally been performed in either screen space or data space. In this paper, we introduce an alternate, and potentially powerful, mode of selection that we term structure-based brushing, for selection in data sets with natural or imposed structure. Our initial implementation has focused on hierarchically structured data, specifically very large multivariate data sets structured via hierarchical clustering and partitioning algorithms. The structure-based brush allows users to navigate hierarchies by specifying focal extents and level-of-detail on a visual representation of the structure. Proximity-based coloring, which maps similar colors to data that are closely related within the structure, helps convey both structural relationships and anomalies. We describe the design and implementation of our structure-based brushing tool. We also validate its usefulness using two distinct hierarchical visualization techniques, namely hierarchical parallel coordinates and tree-maps. Finally, we discuss relationships between different classes of brushes and identify methods by which structure-based brushing could be extended to alternate data structures. | ['Ying-Huey Fua', 'Matthew O. Ward', 'Elke A. Rundensteiner'] | Structure-based brushes: a mechanism for navigating hierarchically organized data and information spaces | 378,861 |
As smartphone usage becomes more and more pervasive, people have also started asking to what extent such devices can be maliciously exploited as "tracking devices". The concern is not only related to an adversary taking physical or remote control of the device, but also to what a passive adversary without these capabilities can observe from the device's communications. Work in this latter direction has aimed, for example, at inferring the apps a user has installed on his device, or identifying the presence of a specific user within a network. In this paper, we move a step forward: we investigate to what extent it is feasible to identify the specific actions that a user is performing on mobile apps by eavesdropping on their encrypted network traffic. We design a system that achieves this goal using advanced machine learning techniques. We did a complete implementation of this system and ran a thorough set of experiments, which show that it can achieve accuracy and precision higher than 95% for most of the considered actions. | ['Mauro Conti', 'Luigi V. Mancini', 'Riccardo Spolaor', 'Nino Vincenzo Verde'] | Can't You Hear Me Knocking: Identification of User Actions on Android Apps via Traffic Analysis | 112,144
Geographic Information Systems (GIS) are a foundational application for different information systems, such as navigation systems and global positioning systems. However, due to the complexity of the system and its algorithms, traditional testing methodologies are confronted with the test oracle problem. Metamorphic testing (MT) can help resolve this problem by comparing metamorphic relations (MRs) among multiple inputs and outputs, and it has been applied in many different domains. In this paper, we apply MT to GIS testing. We propose a semi-automated MT approach for GIS testing. To illustrate the effectiveness of the approach, we conduct a case study with a typical component of GIS: a superficial area calculation program. In the empirical study, we construct six kinds of MRs from different properties and characteristics of the program and its algorithm. Our method can detect the target faults effectively without generating test oracles manually. | ['Zhanwei Hui', 'Song Huang'] | Experience Report: How Do Metamorphic Relations Perform in Geographic Information Systems Testing | 879,996
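The six MRs used in the record above are not listed in the abstract. As an illustrative sketch of the technique for an area-calculation program, the example below uses one plausible MR: uniformly scaling all polygon vertices by a factor k must scale the computed area by k^2, so no explicit oracle for the "true" area is needed.

```python
# Metamorphic testing sketch for a superficial-area program.
# The MR below (scaling) is a hypothetical example, not one of the
# paper's six MRs.

def polygon_area(vertices):
    """Shoelace formula for the area of a simple polygon."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def check_scaling_mr(vertices, k, tol=1e-9):
    """Follow-up input: the polygon scaled by k. The MR asserts that the
    follow-up output equals k**2 times the source output."""
    source_out = polygon_area(vertices)
    follow_up = [(k * x, k * y) for (x, y) in vertices]
    return abs(polygon_area(follow_up) - k * k * source_out) < tol

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(polygon_area(square))         # 4.0
print(check_scaling_mr(square, 3))  # True
```

A fault that, say, dropped the final `abs` or mishandled vertex order would typically violate such relations on randomly generated polygons without any hand-written expected areas.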
Patterns for visualization evaluation | ['Niklas Elmqvist', 'Ji Soo Yi'] | Patterns for visualization evaluation | 60,028 |
The performance and development review (PADR) evaluation in a company is a complex group decision-making problem that is influenced by multiple and conflicting objectives. The complexity of the PADR evaluation problem is often due to the difficulties in determining the degree to which an alternative satisfies the criteria. In this paper, we present hesitant fuzzy multiple criteria group decision-making methods for PADR evaluation. We first develop some operations based on Einstein operations. Then, we propose some aggregation operators to aggregate hesitant fuzzy elements, and the relationships between our proposed operators and existing ones are discussed in detail. Furthermore, the procedure of multicriteria group decision making based on the proposed operators is given under a hesitant fuzzy environment. Finally, a practical example about PADR evaluation in a company is provided to illustrate the developed method. | ['Dejian Yu'] | Some Hesitant Fuzzy Information Aggregation Operators Based on Einstein Operational Laws | 214,293
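The Einstein operations the record above builds on are the standard Einstein t-conorm and t-norm on membership degrees in [0, 1]; a hesitant fuzzy element (HFE) is a set of possible membership degrees, and two HFEs are combined by applying the operation pairwise. The sketch below is illustrative (the `hfe_combine` helper is a hypothetical name, and the paper's aggregation operators add weighting on top of these laws).

```python
# Standard Einstein operational laws on membership degrees in [0, 1].
def einstein_sum(a, b):       # Einstein t-conorm
    return (a + b) / (1.0 + a * b)

def einstein_product(a, b):   # Einstein t-norm
    return (a * b) / (1.0 + (1.0 - a) * (1.0 - b))

def hfe_combine(h1, h2, op):
    """Apply a binary Einstein operation to every pair of memberships
    drawn from two hesitant fuzzy elements; deduplicate and sort."""
    return sorted({round(op(a, b), 10) for a in h1 for b in h2})

h1 = [0.2, 0.5]  # an expert hesitates between two membership degrees
h2 = [0.4]
print(hfe_combine(h1, h2, einstein_sum))  # two values, ~0.556 and 0.75
```

Weighted aggregation operators (e.g. a hesitant fuzzy Einstein weighted average) are then built by chaining these laws with scalar multiplication defined consistently with them.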
Multi-server-based distributed virtual environment (MSDVE) systems have become prevalent, supporting a large number of internet users. In MSDVEs, the load balancing among servers is an important issue to achieve system scalability. However, existing approaches must pay high migration overhead for the state transition of users or regions, thus the excessive holding time during load distribution makes it difficult for the system to keep the interactive performance acceptable. This paper aims to provide an efficient load distribution mechanism in which a group of servers takes charge of regions and shares region information among servers. The proposed mechanism dynamically classifies task types based on features of requested messages, and distributes each task fairly to neighboring servers. We have implemented the proposed mechanism extending our network framework for DVE, ATLAS, and our experiments show that the task distribution reduces both communication and processing overhead during load distribution without significant classification overhead. | ['Mingyu Lim', 'Dongman Lee'] | A task-based load distribution scheme for multi-server-based distributed virtual environment systems | 533,496 |
The expressive power of multi-parent creation in monotonic access control models | ['Paul Ammann', 'Ravi S. Sandhu', 'Richard Lipton'] | The expressive power of multi-parent creation in monotonic access control models | 210,436 |
The United States military is investigating large-scale, realistic virtual world simulations to facilitate warfighter training. As the simulation community strives towards meeting these military training objectives, methods must be developed and validated that measure scalability performance in these virtual world simulators. With such methods, the simulation community will be able to quantifiably compare scalability performance between system changes. This work contributes to the development and validation prerequisite by evaluating the effectiveness of commonly used system metrics to measure scalability in a three-dimensional virtual trainer. Specifically, the metrics of CPU utilization and simulation frames per second are evaluated for their effectiveness in vertical scalability benchmarking. | ['Sean C. Mondesire', 'Jonathan Stevens', 'Douglas B. Maxwell'] | Vertical scalability benchmarking in three-dimensional virtual world simulation | 670,058 |
Several algorithms have been proposed for image reconstruction in MREIT. These algorithms reconstruct the conductivity distribution either directly from magnetic flux density measurements or from a reconstructed current density distribution. In this study, the performance of all major algorithms is evaluated and compared on a common platform, in terms of their reconstruction error, reconstruction time, perceptual image quality, immunity against measurement noise, and required electrode size. The J-Substitution (JS) and Hybrid J-Substitution algorithms have the best reconstruction accuracy but are among the slowest. Another current density based algorithm, the Equipotential Projection (EPP) algorithm, along with the magnetic flux density based Bz Sensitivity (BzS) algorithm, has moderate reconstruction accuracy. The BzS algorithm is the fastest. | ['B. Murat Eyüboğlu', 'V. Emre Arpgnar', 'Rasim Boyacioglu', 'Evren Degirmenci', 'Gokhan Eker'] | Comparison of magnetic resonance electrical impedance tomography (MREIT) reconstruction algorithms | 543,708
The pathwidth of a graph is a measure of how path-like the graph is. Given a graph G and an integer k, the problem of finding whether there exist at most k vertices in G whose deletion results in a graph of pathwidth at most one is NP-complete. We initiate the study of the parameterized complexity of this problem, parameterized by k. We show that the problem has a quartic vertex-kernel: we show that, given an input instance (G = (V, E), k) with |V| = n, we can construct, in polynomial time, an instance (G′, k′) such that (i) (G, k) is a YES instance if and only if (G′, k′) is a YES instance, (ii) G′ has O(k^4) vertices, and (iii) k′ ≤ k. We also give a fixed parameter tractable (FPT) algorithm for the problem that runs in O(7^k k · n^2) time. | ['Geevarghese Philip', 'Venkatesh Raman', 'Yngve Villanger'] | A quartic kernel for pathwidth-one vertex deletion | 165,707
Complexity Reduction Using Two Stage Tracking | ['Ravi Narayan Panda', 'Sasmita Kumari Padhy', 'Siba Prasada Panigrahi'] | Complexity Reduction Using Two Stage Tracking | 733,062 |
A quad-band 2.5G receiver is designed to replace the front-end SAW filters with on-chip bandpass filters and to integrate the LNA matching components, as well as the RF baluns. The receiver achieves a typical sensitivity of -110 dBm or better, while saving a considerable amount of BOM. Utilizing an arrangement of four baseband capacitors and MOS switches driven by 4-phase 25% duty-cycle clocks, high-Q BPF's are realized to attenuate the 0 dBm out-of-band blocker. The 65 nm CMOS SAW-less receiver integrated as a part of a 2.5G SoC, draws 55 mA from the battery, and measures an out-of-band 1 dB-compression of greater than +2 dBm. Measured as a stand-alone, as well as the baseband running in call mode in the platform level, the receiver passes the 3GPP specifications with margin. | ['Ahmad Mirzaei', 'Hooman Darabi', 'Ahmad Yazdi', 'Zhimin Zhou', 'Ethan Chang', 'Puneet Suri'] | A 65 nm CMOS Quad-Band SAW-Less Receiver SoC for GSM/GPRS/EDGE | 459 |
Current automatic facial recognition systems are not robust against changes in illumination, pose, facial expression and occlusion. In this paper, we propose a probabilistic face recognition algorithm that addresses the problem of pose change by taking into account the pose difference between probe and gallery images. By using a large facial image database called the CMU PIE database, which contains images of the same set of people taken from many different angles, we have developed a probabilistic model of how facial features change as the pose changes. This model enables us to make our face recognition system more robust to the change of poses in the probe image. The experimental results show that this approach achieves a better recognition rate than conventional face recognition methods over a much larger range of pose. For example, when the gallery contains only images of a frontal face and the probe image varies its pose orientation, the recognition rate remains within a less than 10% difference until the probe pose begins to differ by more than 45 degrees, whereas the recognition rate of a PCA-based method begins to drop at a difference as small as 10 degrees, and that of a representative commercial system at 30 degrees. | ['Takeo Kanade', 'Akihiko Yamada'] | Multi-subregion based probabilistic approach toward pose-invariant face recognition | 534,209
Privacy preserving delegated word search in the cloud. | ['Kaoutar Elkhiyaoui', 'Melek Önen', 'Refik Molva'] | Privacy preserving delegated word search in the cloud. | 783,310 |
In the biomedical analytics pipeline data preprocessing is the first and crucial step as subsequent results and visualization depend heavily on original data quality. However, the latter often contain a large number of outliers or missing values. Moreover, they may be corrupted by noise of unknown characteristics. This is in many cases aggravated by lack of sufficient information to construct a data cleaning mechanism. Regularization techniques remove erroneous values and complete missing ones while requiring little or no information regarding either data or noise dynamics. This paper examines the theory and practice of a regularization class based on finite differences and implemented through the conjugate gradient method. Moreover, it explores the connection of finite differences to the discrete Laplace operator. The results obtained from applying the proposed regularization techniques to heart rate time series from the MIT-BIH dataset are discussed. | ['Georgios Drakopoulos', 'Vasileios Megalooikonomou'] | Regularizing large biosignals with finite differences | 960,949 |
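The record above describes a regularization class based on finite differences, implemented through the conjugate gradient method. One common instance of that class (an assumption for illustration, not necessarily the paper's exact functional) penalizes the squared second difference of the signal, i.e. minimizes ||x - y||^2 + lam * ||Dx||^2 where D is the discrete 1-D Laplacian, and solves the resulting symmetric positive-definite normal equations (I + lam * D^T D) x = y with CG.

```python
# Finite-difference regularization of a 1-D biosignal via conjugate gradient.
# Illustrative sketch; `regularize` and its parameters are hypothetical names.

def apply_A(x, lam):
    """Compute (I + lam * D^T D) x without forming matrices, where D is the
    second-order finite-difference (discrete Laplacian) operator."""
    n = len(x)
    d = [x[i] - 2.0 * x[i + 1] + x[i + 2] for i in range(n - 2)]  # D x
    y = list(x)                                                    # I x
    for i, di in enumerate(d):                                     # + lam*D^T d
        y[i] += lam * di
        y[i + 1] -= 2.0 * lam * di
        y[i + 2] += lam * di
    return y

def regularize(y, lam=10.0, iters=500, tol=1e-12):
    """Conjugate gradient for the SPD system (I + lam * D^T D) x = y."""
    x = [0.0] * len(y)
    r = list(y)            # residual y - A*0
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = apply_A(p, lam)
        alpha = rs / sum(pi * Api for pi, Api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

noisy = [0.0, 1.1, -0.2, 1.0, 2.3, 1.8, 3.4, 2.9, 4.1, 5.2]
smooth = regularize(noisy, lam=5.0)
```

Because the operator needs no model of the noise, this matches the record's point that erroneous values can be suppressed with little or no information about noise dynamics; larger `lam` trades fidelity to the measurements for smoothness.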
In today’s business environment, deception is commonplace. In hiring situations, successful deception by job candidates can lead to a poor fit between the candidate’s abilities and the requirements of the job, and this can lead to poor performance. This study seeks to inhibit successful deception by job candidates by suggesting that managers limit communication with job applicants to the media that the applicant is least comfortable using for deception. In today’s multicultural business environment, job applicants can come from a variety of cultural backgrounds. Taking this into account, the current study seeks to predict media choice for deception based on a subject’s espoused national culture. A scenario-based media choice task was given to subjects in the United States and China, and the results indicate that espoused collectivism, power distance and masculinity influence media choice. Implications for research and practitioners are discussed. | ['Christopher P. Furner', 'Joey F. George'] | Making it Hard to Lie: Cultural Determinants of Media Choice for Deception | 472,888 |
In a mobile ad hoc network (MANET), a node's quality of service (QoS) trust represents how reliable it is in terms of quality. The QoS trust of a node is computed based on its multiple quality parameters, and this is an interesting and challenging area in MANETs. In this work, QoS trust is evaluated by taking into consideration quality parameters like node residual energy, bandwidth and mobility. The proposed method, "Recommendations Based QoS Trust Aggregation and Routing in Mobile Adhoc Networks (QTAR)", is a framework where trust is established through four phases: QoS trust computation, aggregation, propagation and routing. Dempster-Shafer theory (DST) is used for the aggregation of trust recommendations. In the network, trust information is propagated through HELLO packets. Each node stores the QoS trust information of other nodes in the form of trust matrices. We apply matrix algebra operations on trust matrices for route establishment from source to destination. The time and space complexity of the proposed method is discussed theoretically. The simulation is conducted for varying node velocity and network size, where the proposed method shows considerable improvement over existing protocols. | ['NageswaraRao Sirisala', 'Shoba Bindu C'] | Recommendations Based QoS Trust Aggregation and Routing in Mobile Adhoc Networks | 970,270
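The record above aggregates trust recommendations with Dempster-Shafer theory. The sketch below shows Dempster's rule of combination on a two-element frame of discernment ({'T'} = trustworthy, {'U'} = untrustworthy); the frame and the mass values are illustrative assumptions, not the paper's configuration.

```python
# Dempster's rule of combination for two basic probability assignments
# (dicts mapping frozenset focal elements to masses). Illustrative sketch.
def dempster_combine(m1, m2):
    combined = {}
    conflict = 0.0
    for A, mA in m1.items():
        for B, mB in m2.items():
            C = A & B
            if C:                                  # non-empty intersection
                combined[C] = combined.get(C, 0.0) + mA * mB
            else:                                  # mass assigned to conflict
                conflict += mA * mB
    if conflict >= 1.0:
        raise ValueError("total conflict: masses cannot be combined")
    # normalize by 1 - K, redistributing the conflicting mass
    return {C: v / (1.0 - conflict) for C, v in combined.items()}

T, U = frozenset({'T'}), frozenset({'U'})
theta = T | U  # full frame: "uncertain"
m1 = {T: 0.6, U: 0.1, theta: 0.3}  # recommendation from neighbor 1
m2 = {T: 0.5, U: 0.2, theta: 0.3}  # recommendation from neighbor 2
m = dempster_combine(m1, m2)
```

Here the combined belief in {'T'} exceeds either neighbor's individual belief, which is the intuition behind fusing concordant recommendations before making a routing decision.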
For an effective Internet-based distributed parallel computing platform, the Java-Internet Computing Environment (JICE) is designed and implemented with the multithreading and remote method invocation mechanisms provided in Java. Specifically, JICE supports a shared memory system model for communication between any two nodes. Under JICE, communication time is a major candidate for a performance bottleneck. To reduce this communication overhead, a method of grouping is designed based on the optimal communication time. The communication performance given by grouping is evaluated through an analysis of execution time and verified via experiments. The results show that communication time can be reduced by about 80% when executing some Java benchmarks on JICE. | ['Chun-Mok Chung', 'Pil-Sup Shin', 'Shin-Dug Kim'] | A Java Internet computing environment with effective configuration method | 192,761
Research on the organizational implementation of information technology (IT) and social power has favoured explanations based on issues of resource power and process power at the expense of matters of meaning power. As a result, although the existence and importance of meaning power is acknowledged, its distinctive practices and enacted outcomes remain relatively under-theorized and under-explored by IT researchers. This paper focused on unpacking the practices and outcomes associated with the exercise of meaning power within the IT implementation process. Our aim was to analyze the practices employed to construct meaning and enact a collective 'definition of the situation'. We focused on framing and utilizing the signature matrix technique to represent and analyze the exercise of meaning power in practice. The paper developed and illustrated this conceptual framework using a case study of a conflictual IT implementation in a challenging public sector environment. We concluded by pointing out the situated nature of meaning power practices and the enacted outcomes. Our research extends the literature on IT and social power by offering an analytical framework distinctly suited to the analysis and deeper understanding of the meaning power properties. | ['Bijan Azad', 'Samer Faraj'] | Social power and information technology implementation: a contentious framing lens | 537,097
This investigation aimed to fabricate a flexible micro resistive temperature sensor to measure the junction temperature of a light emitting diode (LED). The junction temperature is typically measured using a thermal resistance measurement approach. This approach is limited in that no standard regulates the timing of data capture. This work presents a micro temperature sensor that can measure temperature stably and continuously, and has the advantages of being lightweight and able to monitor junction temperatures in real time. Micro-electro-mechanical-systems (MEMS) technologies are employed to minimize the size of the temperature sensor, which is constructed on a stainless steel foil substrate (SS-304 with 30 μm thickness). The flexible micro resistive temperature sensor can be fixed between the LED chip and the frame. The junction temperature of the LED can be measured from the linear relationship between temperature and resistance. The sensitivity of the micro temperature sensor is 0.059 ± 0.004 Ω/°C. The temperature of the commercial CREE® EZ1000 chip is 119.97 °C when it is thermally stable, as measured using the micro temperature sensor; however, it was 126.9 °C when measured by the thermal resistance approach. The micro temperature sensor can be used to replace thermal resistance measurement and performs reliably. | ['Chi-Yuan Lee', 'Ay Su', 'Yin-Chieh Liu', 'Wei-Yuan Fan', 'Wei-Jung Hsieh'] | In Situ Measurement of the Junction Temperature of Light Emitting Diodes Using a Flexible Micro Temperature Sensor | 163,394